[GitHub] [nifi] AnujJain7 commented on pull request #2724: NIFI-5133: Implemented Google Cloud PubSub Processors

2020-05-20 Thread GitBox


AnujJain7 commented on pull request #2724:
URL: https://github.com/apache/nifi/pull/2724#issuecomment-631885146


   Found that a bug for this has already been raised in NiFi, open since Sep '19:
   https://issues.apache.org/jira/browse/NIFI-6701
   
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] pcgrenier commented on pull request #4282: NIFI-7462: Update to allow FlowFile Table's schema to be more intelligent when using CHOICE types

2020-05-20 Thread GitBox


pcgrenier commented on pull request #4282:
URL: https://github.com/apache/nifi/pull/4282#issuecomment-631815567


   > Yeah, I think those should work fine. For something like "concat" it 
should work with both numbers and strings - and it does. In that case, what 
gets returned to calcite for the table's schema is an Object for that column, 
and concat will work against any type of object.
   > 
   > > I also think taking this choice out of the hands of the flow developer 
is bad.
   > 
   > I don't believe we are taking the choice out of the hands of the flow 
developer. We're simply saying that if you want to do some sort of numeric-only 
aggregate function, your schema must indicate that the field is a number. I 
think this is fair game. The user can, in this case, simply use a schema that 
indicates that field is numeric.
   
   My bad, I did test a few other string-based aggregations like the LISTAGG 
functions and they all seem to work as intended. I just assumed that if the 
number-based functions failed on a CHOICE returning an Object type, the 
string-based ones would too. So if Calcite is properly calling toString on 
objects, which it looks to be doing, then we're good. This is awesome, thanks 
for following up on this.
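
   As a concrete illustration of the schema point above, here is a minimal 
sketch (assuming NiFi's record API; the class name is illustrative) of the two 
schemas being discussed: a CHOICE(int, string) field that Calcite sees as 
Object, versus an explicitly numeric field that numeric-only aggregates like 
SUM require:

```java
import java.util.Arrays;

import org.apache.nifi.serialization.SimpleRecordSchema;
import org.apache.nifi.serialization.record.DataType;
import org.apache.nifi.serialization.record.RecordField;
import org.apache.nifi.serialization.record.RecordFieldType;

public class ChoiceSchemaSketch {
    public static void main(String[] args) {
        // CHOICE(int, string): QueryRecord's table exposes this column as Object,
        // so CONCAT/LISTAGG work, but numeric-only aggregates like SUM are rejected.
        final DataType intOrString = RecordFieldType.CHOICE.getChoiceDataType(
                RecordFieldType.INT.getDataType(),
                RecordFieldType.STRING.getDataType());
        final SimpleRecordSchema choiceSchema = new SimpleRecordSchema(Arrays.asList(
                new RecordField("name", RecordFieldType.STRING.getDataType()),
                new RecordField("other", intOrString)));

        // Declaring the field as INT instead is what makes SELECT SUM(other) legal.
        final SimpleRecordSchema numericSchema = new SimpleRecordSchema(Arrays.asList(
                new RecordField("name", RecordFieldType.STRING.getDataType()),
                new RecordField("other", RecordFieldType.INT.getDataType())));

        System.out.println(choiceSchema);
        System.out.println(numericSchema);
    }
}
```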



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] MikeThomsen commented on pull request #4231: NIFI-7037 Split off username and password fields for GetMongo processor

2020-05-20 Thread GitBox


MikeThomsen commented on pull request #4231:
URL: https://github.com/apache/nifi/pull/4231#issuecomment-631800420


   @karthik-kadajji It would be easier to apply this change to the Mongo 
controller service. If you look at all of the processors, you'll see that they 
have "client service" as an option. In the long run, I'm planning to start 
deprecating the current configuration options and have the Mongo processors use 
that because it matches our design patterns better.
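
   For context, this is roughly what the "client service" option looks like on 
the Mongo processors (a sketch only; the property name and description here are 
illustrative, not the exact NiFi code):

```java
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.mongodb.MongoDBClientService;

public class ClientServicePropertySketch {
    // The connection (and its credentials) live on a shared controller service
    // instead of being configured on every processor.
    public static final PropertyDescriptor CLIENT_SERVICE = new PropertyDescriptor.Builder()
            .name("mongo-client-service")
            .displayName("Client Service")
            .description("Controller service that provides the MongoDB connection; "
                    + "username and password are configured there once.")
            .identifiesControllerService(MongoDBClientService.class)
            .required(false)
            .build();
}
```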



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-7463) Write empty flowfile for RunMongoAggregation empty results

2020-05-20 Thread Mike Thomsen (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-7463:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Write empty flowfile for RunMongoAggregation empty results
> --
>
> Key: NIFI-7463
> URL: https://issues.apache.org/jira/browse/NIFI-7463
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Eduardo Mota Fontes
>Assignee: Eduardo Mota Fontes
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> In RunMongoAggregation processor, when the aggregation returns no value the 
> processor only returns the original flowfile in its relationship. In this 
> case we have to create a test auxiliary flow to handle each situation. This 
> improvement will save some processors.
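
A rough sketch of the requested behavior (illustrative only, not the merged 
patch; the relationship parameters stand in for the processor's actual 
relationships):

```java
import java.nio.charset.StandardCharsets;

import com.mongodb.client.MongoCursor;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.bson.Document;

class AggregationResultSketch {
    // Emit a zero-byte FlowFile when the aggregation is empty, so downstream
    // flows need no auxiliary "did we get results?" test flow.
    static void handleResults(ProcessSession session, FlowFile original,
                              MongoCursor<Document> cursor,
                              Relationship results, Relationship originalRel) {
        if (!cursor.hasNext()) {
            session.transfer(session.create(original), results); // empty child FlowFile
        }
        while (cursor.hasNext()) {
            final String json = cursor.next().toJson();
            FlowFile out = session.create(original);
            out = session.write(out, os -> os.write(json.getBytes(StandardCharsets.UTF_8)));
            session.transfer(out, results);
        }
        session.transfer(original, originalRel);
    }
}
```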



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7463) Write empty flowfile for RunMongoAggregation empty results

2020-05-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112671#comment-17112671
 ] 

ASF subversion and git services commented on NIFI-7463:
---

Commit c7edcd68e1d6d98472953897ec8c9a18d2f7d290 in nifi's branch 
refs/heads/master from eduardofontes
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=c7edcd6 ]

NIFI-7463
Create empty relationship for RunMongoAggregation

Fix default autoterminate and condition to redirect to REL_EMPTY

Change from new relationship to write an empty FlowFile to RESULT

Fix MONGO_URI

This closes #4281

Signed-off-by: Mike Thomsen 


> Write empty flowfile for RunMongoAggregation empty results
> --
>
> Key: NIFI-7463
> URL: https://issues.apache.org/jira/browse/NIFI-7463
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Eduardo Mota Fontes
>Assignee: Eduardo Mota Fontes
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> In RunMongoAggregation processor, when the aggregation returns no value the 
> processor only returns the original flowfile in its relationship. In this 
> case we have to create a test auxiliary flow to handle each situation. This 
> improvement will save some processors.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] asfgit closed pull request #4281: NIFI-7463 - Write empty flowfile for RunMongoAggregation empty results

2020-05-20 Thread GitBox


asfgit closed pull request #4281:
URL: https://github.com/apache/nifi/pull/4281


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] esecules commented on a change in pull request #4286: NIFI-7386: Azurite emulator support

2020-05-20 Thread GitBox


esecules commented on a change in pull request #4286:
URL: https://github.com/apache/nifi/pull/4286#discussion_r428312280



##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/services/azure/storage/AzureStorageEmulatorCrendentialsControllerService.java
##
@@ -0,0 +1,88 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.services.azure.storage;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.processor.util.StandardValidators;
+
+/**
+ * Implementation of AbstractControllerService interface
+ *
+ * @see AbstractControllerService
+ */
+@Tags({ "azure", "microsoft", "emulator", "storage", "blob", "queue", 
"credentials" })
+@CapabilityDescription("Defines credentials for Azure Storage processors that 
connects to Azurite emulator. ")
+public class AzureStorageEmulatorCrendentialsControllerService extends 
AbstractControllerService implements AzureStorageCredentialsService {
+
+
+public static final PropertyDescriptor DEVELOPMENT_STORAGE_PROXY_URI = new 
PropertyDescriptor.Builder()
+.name("azurite-proxy-uri")
+.displayName("Azurite Proxy URI")
+.description("Default null will connect to http://127.0.0.1. 
Otherwise, overwrite this value with your proxy url.")

Review comment:
   Could you rewrite the description?
   
   "URI to connect to Azure Storage Emulator\n\nDefault: http://127.0.0.1;

[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #776: MINIFICPP-1202 - Handle C2 requests/responses using MinifiConcurrentQueue

2020-05-20 Thread GitBox


szaszm commented on a change in pull request #776:
URL: https://github.com/apache/nifi-minifi-cpp/pull/776#discussion_r428322626



##
File path: libminifi/src/c2/C2Agent.cpp
##
@@ -75,54 +78,55 @@ C2Agent::C2Agent(const 
std::shared_ptr lock(request_mutex, std::adopt_lock);
-if (!requests.empty()) {
-  int count = 0;
-  do {
-const C2Payload payload(std::move(requests.back()));
-requests.pop_back();
-try {
-  C2Payload && response = 
protocol_.load()->consumePayload(payload);
-  enqueue_c2_server_response(std::move(response));
-}
-catch(const std::exception& e) {
-  logger_->log_error("Exception occurred while consuming payload. 
error: %s", e.what());
-}
-catch(...) {
-  logger_->log_error("Unknonwn exception occurred while consuming 
payload.");
-}
-  }while(!requests.empty() && ++count < max_c2_responses);
+if (protocol_.load() != nullptr) {
+  std::vector<C2Payload> payload_batch;
+  payload_batch.reserve(max_c2_responses);
+  auto getRequestPayload = [&payload_batch] (C2Payload&& payload) { 
payload_batch.emplace_back(std::move(payload)); };
+  const std::chrono::system_clock::time_point timeout_point = 
std::chrono::system_clock::now() + std::chrono::milliseconds(1);
+  for (std::size_t attempt_num = 0; attempt_num < max_c2_responses; 
++attempt_num) {
+if (!requests.consumeWaitUntil(getRequestPayload, timeout_point)) {
+  break;
 }
   }
-  try {
-performHeartBeat();
-  }
-  catch(const std::exception& e) {
-logger_->log_error("Exception occurred while performing heartbeat. 
error: %s", e.what());
-  }
-  catch(...) {
-logger_->log_error("Unknonwn exception occurred while performing 
heartbeat.");
-  }
+  std::for_each(
+std::make_move_iterator(payload_batch.begin()),
+std::make_move_iterator(payload_batch.end()),
+[&] (C2Payload&& payload) {
+  try {
+C2Payload && response = 
protocol_.load()->consumePayload(std::move(payload));
+enqueue_c2_server_response(std::move(response));
+  }
+  catch(const std::exception& e) {
+logger_->log_error("Exception occurred while consuming payload. 
error: %s", e.what());
+  }
+  catch(...) {
+logger_->log_error("Unknonwn exception occurred while consuming 
payload.");
+  }
+});
 
-  checkTriggers();
+try {
+  performHeartBeat();
+}
+catch (const std::exception& e) {
+  logger_->log_error("Exception occurred while performing heartbeat. 
error: %s", e.what());
+}
+catch (...) {
+  logger_->log_error("Unknonwn exception occurred while performing 
heartbeat.");
+}
+}
+
+checkTriggers();
+
+return 
utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(heart_beat_period_));
+  };
 
-  return 
utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(heart_beat_period_));
-};
   functions_.push_back(c2_producer_);
 
-  c2_consumer_ = [&]() {
-if ( queue_mutex.try_lock_for(std::chrono::seconds(1)) ) {
-  C2Payload payload(Operation::HEARTBEAT);
-  {
-std::lock_guard<std::timed_mutex> lock(queue_mutex, std::adopt_lock);
-if (responses.empty()) {
-  return 
utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(C2RESPONSE_POLL_MS));
-}
-payload = std::move(responses.back());
-responses.pop_back();
+  c2_consumer_ = [&] {
+if (responses.size()) {
+  if (!responses.consumeWaitFor([this](C2Payload&& e) { 
extractPayload(std::move(e)); }, std::chrono::seconds(1))) {

Review comment:
   1. My preference is `!empty()`, because it reads as "not empty", which 
is the intention here. On the other hand I'm ok with using `size() > 0`, as 
that's only one logical step away from the intention, but not plain `size()` 
because that implicit int -> bool conversion is surprising to the reader, like 
every implicit conversion that converts to something that's not logically "the 
same thing". I think the readers mind goes like this:
   1. if the size of responses. wtf, that doesn't make sense
   2. Ah, implicit int -> bool conversion, so "if the size of the responses 
is not zero".
   3. That's "if the responses are not empty", i.e. "if there are responses"
   
   
   edit: After a bit of research, I am now even more in favor of empty vs size.
   - Effective STL Item 4: Call `empty` instead of checking `size()` against 
zero. (Scott Meyers, 2001)
 The rationale is that `std::list` used to have linear-time `size()`. 
Doesn't apply here, but it's easier to follow a simple guideline than to 
evaluate the situation every time.
   - C++ Core Guidelines 

[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #776: MINIFICPP-1202 - Handle C2 requests/responses using MinifiConcurrentQueue

2020-05-20 Thread GitBox


szaszm commented on a change in pull request #776:
URL: https://github.com/apache/nifi-minifi-cpp/pull/776#discussion_r428322626



##
File path: libminifi/src/c2/C2Agent.cpp
##
@@ -75,54 +78,55 @@ C2Agent::C2Agent(const 
std::shared_ptr lock(request_mutex, std::adopt_lock);
-if (!requests.empty()) {
-  int count = 0;
-  do {
-const C2Payload payload(std::move(requests.back()));
-requests.pop_back();
-try {
-  C2Payload && response = 
protocol_.load()->consumePayload(payload);
-  enqueue_c2_server_response(std::move(response));
-}
-catch(const std::exception& e) {
-  logger_->log_error("Exception occurred while consuming payload. 
error: %s", e.what());
-}
-catch(...) {
-  logger_->log_error("Unknonwn exception occurred while consuming 
payload.");
-}
-  }while(!requests.empty() && ++count < max_c2_responses);
+if (protocol_.load() != nullptr) {
+  std::vector<C2Payload> payload_batch;
+  payload_batch.reserve(max_c2_responses);
+  auto getRequestPayload = [&payload_batch] (C2Payload&& payload) { 
payload_batch.emplace_back(std::move(payload)); };
+  const std::chrono::system_clock::time_point timeout_point = 
std::chrono::system_clock::now() + std::chrono::milliseconds(1);
+  for (std::size_t attempt_num = 0; attempt_num < max_c2_responses; 
++attempt_num) {
+if (!requests.consumeWaitUntil(getRequestPayload, timeout_point)) {
+  break;
 }
   }
-  try {
-performHeartBeat();
-  }
-  catch(const std::exception& e) {
-logger_->log_error("Exception occurred while performing heartbeat. 
error: %s", e.what());
-  }
-  catch(...) {
-logger_->log_error("Unknonwn exception occurred while performing 
heartbeat.");
-  }
+  std::for_each(
+std::make_move_iterator(payload_batch.begin()),
+std::make_move_iterator(payload_batch.end()),
+[&] (C2Payload&& payload) {
+  try {
+C2Payload && response = 
protocol_.load()->consumePayload(std::move(payload));
+enqueue_c2_server_response(std::move(response));
+  }
+  catch(const std::exception& e) {
+logger_->log_error("Exception occurred while consuming payload. 
error: %s", e.what());
+  }
+  catch(...) {
+logger_->log_error("Unknonwn exception occurred while consuming 
payload.");
+  }
+});
 
-  checkTriggers();
+try {
+  performHeartBeat();
+}
+catch (const std::exception& e) {
+  logger_->log_error("Exception occurred while performing heartbeat. 
error: %s", e.what());
+}
+catch (...) {
+  logger_->log_error("Unknonwn exception occurred while performing 
heartbeat.");
+}
+}
+
+checkTriggers();
+
+return 
utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(heart_beat_period_));
+  };
 
-  return 
utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(heart_beat_period_));
-};
   functions_.push_back(c2_producer_);
 
-  c2_consumer_ = [&]() {
-if ( queue_mutex.try_lock_for(std::chrono::seconds(1)) ) {
-  C2Payload payload(Operation::HEARTBEAT);
-  {
-std::lock_guard<std::timed_mutex> lock(queue_mutex, std::adopt_lock);
-if (responses.empty()) {
-  return 
utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(C2RESPONSE_POLL_MS));
-}
-payload = std::move(responses.back());
-responses.pop_back();
+  c2_consumer_ = [&] {
+if (responses.size()) {
+  if (!responses.consumeWaitFor([this](C2Payload&& e) { 
extractPayload(std::move(e)); }, std::chrono::seconds(1))) {

Review comment:
   1. My preference is `!empty()`, because it reads as "not empty", which 
is the intention here. On the other hand I'm ok with using `size() > 0`, as 
that's only one logical step away from the intention, but not plain `size()` 
because that implicit int -> bool conversion is surprising to the reader, like 
every implicit conversion that converts to something that's not logically "the 
same thing". I think the readers mind goes like this:
   1. if the size of responses. wtf, that doesn't make sense
   2. Ah, implicit int -> bool conversion, so "if the size of the responses 
is not zero".
   3. That's "if the responses are not empty", i.e. "if there are responses"
   
   Sorry for the long comment, I hope this makes sense.
   
   2. Sorry for the accusation, you're right about the second point. I must 
have been salty because of something when writing this. :(
   
   However, this raises another problem, which is that I, as the reader, 
misunderstood the code, so it's probably too complex. I suggest extracting the 
lambda and maybe even the consume call, so that the identifiers can guide the 
reader.
   
   3. \-
   
   

[GitHub] [nifi] mattyb149 commented on pull request #4282: NIFI-7462: Update to allow FlowFile Table's schema to be more intelligent when using CHOICE types

2020-05-20 Thread GitBox


mattyb149 commented on pull request #4282:
URL: https://github.com/apache/nifi/pull/4282#issuecomment-631730497


   That's a good point: if an inferred schema is being handed around but the 
flow file changes (e.g., is partitioned or filtered) such that the desired 
fields are all numeric, then even the inference stuff can pick up the more 
specific schema (without necessarily having to update it manually).



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #776: MINIFICPP-1202 - Handle C2 requests/responses using MinifiConcurrentQueue

2020-05-20 Thread GitBox


szaszm commented on a change in pull request #776:
URL: https://github.com/apache/nifi-minifi-cpp/pull/776#discussion_r428310454



##
File path: libminifi/src/c2/C2Agent.cpp
##
@@ -75,54 +78,55 @@ C2Agent::C2Agent(const 
std::shared_ptr lock(request_mutex, std::adopt_lock);
-if (!requests.empty()) {
-  int count = 0;
-  do {
-const C2Payload payload(std::move(requests.back()));
-requests.pop_back();
-try {
-  C2Payload && response = 
protocol_.load()->consumePayload(payload);
-  enqueue_c2_server_response(std::move(response));
-}
-catch(const std::exception& e) {
-  logger_->log_error("Exception occurred while consuming payload. 
error: %s", e.what());
-}
-catch(...) {
-  logger_->log_error("Unknonwn exception occurred while consuming 
payload.");
-}
-  }while(!requests.empty() && ++count < max_c2_responses);
+if (protocol_.load() != nullptr) {
+  std::vector<C2Payload> payload_batch;
+  payload_batch.reserve(max_c2_responses);
+  auto getRequestPayload = [&payload_batch] (C2Payload&& payload) { 
payload_batch.emplace_back(std::move(payload)); };
+  const std::chrono::system_clock::time_point timeout_point = 
std::chrono::system_clock::now() + std::chrono::milliseconds(1);
+  for (std::size_t attempt_num = 0; attempt_num < max_c2_responses; 
++attempt_num) {
+if (!requests.consumeWaitUntil(getRequestPayload, timeout_point)) {

Review comment:
   The mutex inside the queue is not `timed_mutex`, so we wait indefinitely 
to acquire the lock and only wait for a bounded time for content.
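
   For illustration, the same split sketched in Java (the names only mirror 
the C++ queue, this is not the MiNiFi code): acquiring the lock may block 
indefinitely, while the wait for content is bounded by the timeout.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Consumer;

class TimedConsumeQueue<T> {
    private final Queue<T> items = new ArrayDeque<>();
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();

    boolean consumeWaitFor(Consumer<T> consumer, long timeoutMillis) throws InterruptedException {
        lock.lock(); // indefinite wait for the lock itself
        try {
            long nanos = TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
            while (items.isEmpty()) {
                if (nanos <= 0L) {
                    return false; // only the wait for content is bounded
                }
                nanos = notEmpty.awaitNanos(nanos);
            }
            consumer.accept(items.poll());
            return true;
        } finally {
            lock.unlock();
        }
    }

    void offer(T item) {
        lock.lock();
        try {
            items.add(item);
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }
}
```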





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] mattyb149 commented on pull request #4268: MINIFI-488: Allow flow construction from YAML config

2020-05-20 Thread GitBox


mattyb149 commented on pull request #4268:
URL: https://github.com/apache/nifi/pull/4268#issuecomment-631729598


   Ah you were talking about the middle ground the whole time, went right over 
my head :) So, I can push a branch called `MINIFI-422` up to the Apache repo, 
then we can do PRs against that branch until it's ready as a single PR against 
master?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markap14 edited a comment on pull request #4282: NIFI-7462: Update to allow FlowFile Table's schema to be more intelligent when using CHOICE types

2020-05-20 Thread GitBox


markap14 edited a comment on pull request #4282:
URL: https://github.com/apache/nifi/pull/4282#issuecomment-631724090


   Yeah, I think those should work fine. For something like "concat" it should 
work with both numbers and strings - and it does. In that case, what gets 
returned to calcite for the table's schema is an Object for that column, and 
concat will work against any type of object.
   
   > I also think taking this choice out of the hands of the flow developer is 
bad.
   
   I don't believe we are taking the choice out of the hands of the flow 
developer. We're simply saying that if you want to do some sort of numeric-only 
aggregate function, your schema must indicate that the field is a number. I 
think this is fair game. The user can, in this case, simply use a schema that 
indicates that field is numeric.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markap14 commented on pull request #4282: NIFI-7462: Update to allow FlowFile Table's schema to be more intelligent when using CHOICE types

2020-05-20 Thread GitBox


markap14 commented on pull request #4282:
URL: https://github.com/apache/nifi/pull/4282#issuecomment-631724090


   Yeah, I think those should work fine. For something like "concat" it should 
work with both numbers and strings - and it does. In that case, what gets 
returned to calcite for the table's schema is an Object for that column, and 
concat will work against any type of object.
   
   > I also think taking this choice out of the hands of the flow developer is 
bad.
   I don't believe we are taking the choice out of the hands of the flow 
developer. We're simply saying that if you want to do some sort of numeric-only 
aggregate function, your schema must indicate that the field is a number. I 
think this is fair game. The user can, in this case, simply use a schema that 
indicates that field is numeric.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-6571) TLS toolkit server mode - check token length at startup

2020-05-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112600#comment-17112600
 ] 

ASF subversion and git services commented on NIFI-6571:
---

Commit a9e9e5d137d979d3082dd1fbccef18c4fc50 in nifi's branch 
refs/heads/master from Pierre Villard
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=a9e9e5d ]

NIFI-6571 Check token length on TLS toolkit server startup

This closes #3659.

Signed-off-by: Joey Frazee 


> TLS toolkit server mode - check token length at startup
> ---
>
> Key: NIFI-6571
> URL: https://issues.apache.org/jira/browse/NIFI-6571
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Tools and Build
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
>  Labels: tls, tls-toolkit
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> It is possible to start the TLS toolkit in server mode with a token length 
> below the required 16 bytes. But when the client is performing the request, 
> it'll be denied with the message "Token does not meet minimum size of 16 
> bytes". This task is about preventing the TLS toolkit from starting in server 
> mode when the token is below 16 bytes.
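
A minimal sketch of the startup guard described above (illustrative, not the 
actual toolkit code):

```java
import java.nio.charset.StandardCharsets;

class TokenLengthCheckSketch {
    private static final int MIN_TOKEN_BYTES = 16;

    // Fail fast in server mode instead of letting every client request
    // be denied later with the same message.
    static void requireValidToken(String token) {
        if (token == null || token.getBytes(StandardCharsets.UTF_8).length < MIN_TOKEN_BYTES) {
            throw new IllegalArgumentException(
                    "Token does not meet minimum size of " + MIN_TOKEN_BYTES + " bytes");
        }
    }
}
```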



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] jfrazee closed pull request #3659: NIFI-6571 - TLS toolkit server mode - check token length at startup

2020-05-20 Thread GitBox


jfrazee closed pull request #3659:
URL: https://github.com/apache/nifi/pull/3659


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] jfrazee commented on pull request #3659: NIFI-6571 - TLS toolkit server mode - check token length at startup

2020-05-20 Thread GitBox


jfrazee commented on pull request #3659:
URL: https://github.com/apache/nifi/pull/3659#issuecomment-631720302


   @pvillard31 Getting this in is probably more useful than just adding more 
tests. This LGTM and I've checked it a few times.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] apiri commented on pull request #4268: MINIFI-488: Allow flow construction from YAML config

2020-05-20 Thread GitBox


apiri commented on pull request #4268:
URL: https://github.com/apache/nifi/pull/4268#issuecomment-631713164


   I think it is fair to do this piecewise.  Maybe make this a branch against 
our core repo and iteratively feed in PRs along the way?  Might help in 
striking the balance of getting the refactoring/inclusion correct without 
dropping a massive PR in there?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] mattyb149 commented on pull request #4268: MINIFI-488: Allow flow construction from YAML config

2020-05-20 Thread GitBox


mattyb149 commented on pull request #4268:
URL: https://github.com/apache/nifi/pull/4268#issuecomment-631697641


   Definitely pros and cons to both. If we incrementally bring over code from 
MiNiFi, we need to keep up to date with any commits that are happening in the 
MINIFI repo, plus there's the duplication as you mentioned. Working in a branch 
until MiNiFi is fully in NiFi means we have to keep up with commits in NiFi 
master on the branch, some of which may be fairly invasive.
   
   If I had to pick, I think the branch strategy is the better of the two, as 
the code can be brought over "all at once" so to speak, and we don't have to 
worry about refactoring code that we bring in piecemeal. The final PR will be a 
beast but as you said we'll have to pay the price someday, better to only pay 
it once.
   
   If you agree, I can close this PR and continue working in my branch.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] mattyb149 commented on pull request #4282: NIFI-7462: Update to allow FlowFile Table's schema to be more intelligent when using CHOICE types

2020-05-20 Thread GitBox


mattyb149 commented on pull request #4282:
URL: https://github.com/apache/nifi/pull/4282#issuecomment-631695622


   I tried `LISTAGG` and the concat operator `||` with the current PR and they 
work on choice (int,string) fields. @pcgrenier do you have an example of a 
query that doesn't work with the current PR?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-1893) Add processor for validating JSON

2020-05-20 Thread Carlos (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-1893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112527#comment-17112527
 ] 

Carlos commented on NIFI-1893:
--

 Is this still ongoing? I need to validate JSON and I want to know if this 
processor is going to be part of an official release some day.

> Add processor for validating JSON
> -
>
> Key: NIFI-1893
> URL: https://issues.apache.org/jira/browse/NIFI-1893
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Matt Burgess
>Priority: Major
> Attachments: image-2018-12-07-18-02-52-813.png
>
>
> NiFi has a ValidateXml processor to validate incoming XML files against a 
> schema. It would be good to have one to validate JSON files as well.
> For example, an input JSON of:
> {
>   "name": "Test",
>   "timestamp": 1463499695,
>   "tags": {
>     "host": "Test_1",
>     "ip": "1.1.1.1"
>   },
>   "fields": {
>     "cpu": 10.2,
>     "load": 15.6
>   }
> }
> Could be validated successfully against the following "schema":
> {
>   "type": "object",
>   "required": ["name", "tags", "timestamp", "fields"],
>   "properties": {
> "name": {"type": "string"},
> "timestamp": {"type": "integer"},
> "tags": {"type": "object", "items": {"type": "string"}},
> "fields": { "type": "object"}
>   }
> }
> There is at least one ASF-friendly library that could be used for 
> implementation: https://github.com/everit-org/json-schema
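
For what it's worth, a minimal sketch of validating a document with the 
everit-org/json-schema library linked above (the relationship names in the 
comments are hypothetical):

```java
import org.everit.json.schema.Schema;
import org.everit.json.schema.ValidationException;
import org.everit.json.schema.loader.SchemaLoader;
import org.json.JSONObject;

public class ValidateJsonSketch {
    public static void main(String[] args) {
        final String schemaJson = "{"
                + "\"type\": \"object\","
                + "\"required\": [\"name\", \"timestamp\"],"
                + "\"properties\": {"
                + "  \"name\": {\"type\": \"string\"},"
                + "  \"timestamp\": {\"type\": \"integer\"}"
                + "}}";
        final Schema schema = SchemaLoader.load(new JSONObject(schemaJson));
        try {
            schema.validate(new JSONObject("{\"name\": \"Test\", \"timestamp\": 1463499695}"));
            System.out.println("valid");              // would route to a 'valid' relationship
        } catch (ValidationException e) {
            System.out.println("invalid: " + e.getMessage()); // 'invalid' relationship
        }
    }
}
```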



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] joewitt commented on pull request #4278: NIFI-7456: Ignite Processors - Choose between Thick and Thin Clients

2020-05-20 Thread GitBox


joewitt commented on pull request #4278:
URL: https://github.com/apache/nifi/pull/4278#issuecomment-631647205


   I didn't get to this in time and will need to come back to it in a couple 
weeks if nobody has. Just putting this here in case someone else can pick it 
up.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] sjyang18 commented on pull request #4286: NIFI-7386: Azurite emulator support

2020-05-20 Thread GitBox


sjyang18 commented on pull request #4286:
URL: https://github.com/apache/nifi/pull/4286#issuecomment-631621679


   Reference documentation used in implementing support for the Azurite emulator:
   
   
https://docs.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string#connect-to-the-emulator-account-using-a-shortcut
   
   
https://docs.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string#specify-an-http-proxy



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (NIFI-7473) Add @SupportsBatching to Azure storage processors

2020-05-20 Thread Seokwon Yang (Jira)
Seokwon Yang created NIFI-7473:
--

 Summary: Add @SupportsBatching to Azure storage processors
 Key: NIFI-7473
 URL: https://issues.apache.org/jira/browse/NIFI-7473
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Seokwon Yang
Assignee: Seokwon Yang


Some stuff to think through would be if there was one failure in a batch of 
many and it all ran again, is the result the same and does the SDK throw any 
exceptions?

For a process that's just modifying some content or adding attributes it's as 
simple as adding the annotation. 
But when it modifies some external resource you sometimes have to take care to 
make sure it'd do the right thing if there's a failure. 
I think the Fetch won't require any code changes. Not sure about Delete or Put.

{{[https://nifi.apache.org/docs/nifi-docs/html/developer-guide.html]}}

{{SupportsBatching}}: This annotation indicates that it is okay for the 
framework to batch together multiple ProcessSession commits into a single 
commit. If this annotation is present, the user will be able to choose whether 
they prefer high throughput or lower latency in the Processor’s Scheduling tab. 
This annotation should be applied to most Processors, but it comes with a 
caveat: if the Processor calls {{ProcessSession.commit}}, there is no guarantee 
that the data has been safely stored in NiFi’s Content, FlowFile, and 
Provenance Repositories. As a result, it is not appropriate for those 
Processors that receive data from an external source, commit the session, and 
then delete the remote data or confirm a transaction with a remote resource.
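
As a concrete illustration of the annotation (a sketch, not one of the Azure 
processors):

```java
import org.apache.nifi.annotation.behavior.SupportsBatching;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.exception.ProcessException;

// For a processor that merely transforms content or adds attributes, opting in
// is just the class-level annotation. For processors that mutate external state
// (Delete/Put), the failure-and-rerun questions above must be answered first.
@SupportsBatching
public class ExampleBatchableProcessor extends AbstractProcessor {
    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
        // The framework may now batch multiple session commits together.
    }
}
```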

 

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7473) Add @SupportsBatching to Azure storage processors

2020-05-20 Thread Seokwon Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seokwon Yang updated NIFI-7473:
---
Description: 
Some stuff to think through would be if there was one failure in a batch of 
many and it all ran again, is the result the same and does the SDK throw any 
exceptions?

For a process that's just modifying some content or adding attributes it's as 
simple as adding the annotation. 
 But when it modifies some external resource you sometimes have to take care to 
make sure it'd do the right thing if there's a failure. 
 I think the Fetch won't require any code changes. Not sure about Delete or Put.

 

{{[https://nifi.apache.org/docs/nifi-docs/html/developer-guide.html]}}

{{SupportsBatching}}: This annotation indicates that it is okay for the 
framework to batch together multiple ProcessSession commits into a single 
commit. If this annotation is present, the user will be able to choose whether 
they prefer high throughput or lower latency in the Processor’s Scheduling tab. 
This annotation should be applied to most Processors, but it comes with a 
caveat: if the Processor calls {{ProcessSession.commit}}, there is no guarantee 
that the data has been safely stored in NiFi’s Content, FlowFile, and 
Provenance Repositories. As a result, it is not appropriate for those 
Processors that receive data from an external source, commit the session, and 
then delete the remote data or confirm a transaction with a remote resource.

 

 

 

  was:
Some stuff to think through would be if there was one failure in a batch of 
many and it all ran again, is the result the same and does the SDK throw any 
exceptions?

For a process that's just modifying some content or adding attributes it's as 
simple as adding the annotation. 
But when it modifies some external resource you sometimes have to take care to 
make sure it'd do the right thing if there's a failure. 
I think the Fetch won't require any code changes. Not sure about Delete or Put.

{{[https://nifi.apache.org/docs/nifi-docs/html/developer-guide.html]}}

{{SupportsBatching}}: This annotation indicates that it is okay for the 
framework to batch together multiple ProcessSession commits into a single 
commit. If this annotation is present, the user will be able to choose whether 
they prefer high throughput or lower latency in the Processor’s Scheduling tab. 
This annotation should be applied to most Processors, but it comes with a 
caveat: if the Processor calls {{ProcessSession.commit}}, there is no guarantee 
that the data has been safely stored in NiFi’s Content, FlowFile, and 
Provenance Repositories. As a result, it is not appropriate for those 
Processors that receive data from an external source, commit the session, and 
then delete the remote data or confirm a transaction with a remote resource.

 

 

 


> Add @SupportsBatching to Azure storage processors
> -
>
> Key: NIFI-7473
> URL: https://issues.apache.org/jira/browse/NIFI-7473
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Seokwon Yang
>Assignee: Seokwon Yang
>Priority: Major
>  Labels: azureblob
>
> Some stuff to think through would be if there was one failure in a batch of 
> many and it all ran again, is the result the same and does the SDK throw any 
> exceptions?
> For a process that's just modifying some content or adding attributes it's as 
> simple as adding the annotation. 
>  But when it modifies some external resource you sometimes have to take care 
> to make sure it'd do the right thing if there's a failure. 
>  I think the Fetch won't require any code changes. Not sure about Delete or 
> Put.
>  
> {{[https://nifi.apache.org/docs/nifi-docs/html/developer-guide.html]}}
> {{SupportsBatching}}: This annotation indicates that it is okay for the 
> framework to batch together multiple ProcessSession commits into a single 
> commit. If this annotation is present, the user will be able to choose 
> whether they prefer high throughput or lower latency in the Processor’s 
> Scheduling tab. This annotation should be applied to most Processors, but it 
> comes with a caveat: if the Processor calls {{ProcessSession.commit}}, there 
> is no guarantee that the data has been safely stored in NiFi’s Content, 
> FlowFile, and Provenance Repositories. As a result, it is not appropriate for 
> those Processors that receive data from an external source, commit the 
> session, and then delete the remote data or confirm a transaction with a 
> remote resource.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7273) Add flow metrics REST endpoint for Prometheus scraping

2020-05-20 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-7273:
---
Fix Version/s: 1.12.0

> Add flow metrics REST endpoint for Prometheus scraping
> ---
>
> Key: NIFI-7273
> URL: https://issues.apache.org/jira/browse/NIFI-7273
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> NiFi has the ability to expose endpoints for Prometheus to scrape via the 
> PrometheusReportingTask (NIFI-4362) and via components that use the 
> PrometheusRecordSink controller service. However that involves adding 
> components to the overall flow, which requires their own configuration and 
> ends up generating their own metrics that contribute to rollup metrics and 
> queries.
> This Jira proposes to add an endpoint to the NiFi REST API that exposes the 
> following metrics/information in Prometheus format for scraping:
> - Root Process Group status (recursive to include all components)
> - Connection Status Analytics (backpressure predictions, e.g.)
> - JVM Metrics
> - Bulletins (for use by AlertManager, not necessarily a metric per se)
> Standard security/authorization principles apply, and it is proposed to offer 
> node-specific metrics rather than cluster-wide aggregates, as Prometheus can 
> then choose how to do the aggregates as necessary.
> It may be prudent to refactor PrometheusMetricsUtil out into its own module, 
> for use by the various components in various modules (to now include the 
> framework).
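
A hypothetical shape for such an endpoint (illustrative only; the path and 
metric name are assumptions, not the merged API):

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@Path("/flow/metrics/prometheus")
public class FlowMetricsResourceSketch {
    // A real implementation would walk the root process group status,
    // connection status analytics, JVM metrics and bulletins.
    @GET
    @Produces("text/plain; version=0.0.4")
    public String scrape() {
        return "nifi_jvm_heap_used_bytes 123456789\n";
    }
}
```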



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #770: MINIFICPP-1203 - upgrade linter version to 1.4.4 and fix relevant linter errors

2020-05-20 Thread GitBox


szaszm commented on a change in pull request #770:
URL: https://github.com/apache/nifi-minifi-cpp/pull/770#discussion_r428174378



##
File path: libminifi/test/tensorflow-tests/TensorFlowTests.cpp
##
@@ -19,14 +19,14 @@
 #include 
 #include 
 
-#include 
-#include 
-#include 
-#include 
-#include 
+#include  // NOLINT
+#include  // NOLINT
+#include  // NOLINT
+#include  // NOLINT
+#include  // NOLINT

Review comment:
   Regarding my previous suggestion: I suggest trying to convert our 
headers to use quotation marks and leave 3rd parties as is, even if it means 
that we need to use NOLINT.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #770: MINIFICPP-1203 - upgrade linter version to 1.4.4 and fix relevant linter errors

2020-05-20 Thread GitBox


szaszm commented on a change in pull request #770:
URL: https://github.com/apache/nifi-minifi-cpp/pull/770#discussion_r428173660



##
File path: libminifi/src/utils/OsUtils.cpp
##
@@ -24,16 +24,18 @@
 #ifndef WIN32_LEAN_AND_MEAN
 #define WIN32_LEAN_AND_MEAN
 #endif
-#include 
 #include 
+#include 

Review comment:
   Judging from the appveyor output, the order of these includes matters. 
Not sure of the exact reason, but in general Windows headers are a mess, so I'm 
not surprised. :(
   
   See the end of the log here: 
https://ci.appveyor.com/project/ApacheSoftwareFoundation/nifi-minifi-cpp/builds/33004452





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-7386) AzureStorageCredentialsControllerService should also connect to storage emulator

2020-05-20 Thread Seokwon Yang (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112448#comment-17112448
 ] 

Seokwon Yang commented on NIFI-7386:


Reference documentation used in implementing support for the Azurite emulator:

[https://docs.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string#connect-to-the-emulator-account-using-a-shortcut]

[https://docs.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string#specify-an-http-proxy]

 

 

 

> AzureStorageCredentialsControllerService should also connect to storage 
> emulator
> 
>
> Key: NIFI-7386
> URL: https://issues.apache.org/jira/browse/NIFI-7386
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Eric Secules
>Assignee: Seokwon Yang
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The controller service "AzureStorageCredentialsControllerService" should be 
> able to take an optional parameter to connect to another azure storage 
> provider, like the [Azurite 
> emulator|https://hub.docker.com/_/microsoft-azure-storage-azurite]. This will 
> likely mean taking additional parameters for a base URL and possibly 
> switching between http and https depending on Azurite's capabilities.
> I am currently setting up an isolated test environment for my application and 
> the only piece that I cannot effectively isolate is our azure storage 
> connection because NiFi doesn't support connecting to anything but the 
> official azure storage service.
> *Acceptance Criteria:*
>  * AzureStorageCredentialsControllerService can connect to an alternative 
> azure storage provider
>  * Individual azure storage processors can connect to an alternative azure 
> storage provider
>  * New parameters will be optional and default to existing behaviour.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #770: MINIFICPP-1203 - upgrade linter version to 1.4.4 and fix relevant linter errors

2020-05-20 Thread GitBox


szaszm commented on a change in pull request #770:
URL: https://github.com/apache/nifi-minifi-cpp/pull/770#discussion_r428167985



##
File path: libminifi/src/io/StreamFactory.cpp
##
@@ -16,12 +16,13 @@
  * limitations under the License.
  */
 #include "io/StreamFactory.h"
+
 #include 
 #include 
 #include 
 #include 
-#include 
 
+#include  // NOLINT

Review comment:
   
https://google.github.io/styleguide/cppguide.html#Names_and_Order_of_Includes
   
   I think if we replaced the angle brackets with quotation marks, the warning 
would go away. Could you try this? If it doesn't work, then I'm fine with 
leaving the NOLINT comments in place for such cases.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] pcgrenier commented on pull request #4282: NIFI-7462: Update to allow FlowFile Table's schema to be more intelligent when using CHOICE types

2020-05-20 Thread GitBox


pcgrenier commented on pull request #4282:
URL: https://github.com/apache/nifi/pull/4282#issuecomment-631588369


   I do agree, but there are other functions like concat that do make more
   sense. I also think taking this choice out of the hands of the flow
   developer is bad. I think you are correct that it is a bad idea to allow
   the sum of strings and ints, but not allowing it at all when the
   developer needs to do it is worse. Really I'm cool with it either way and
   happy to see the fix.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-7445) Add Conflict Resolution property to PutAzureDataLakeStorage processor

2020-05-20 Thread Peter Gyori (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Gyori updated NIFI-7445:
--
Status: Patch Available  (was: In Progress)

> Add Conflict Resolution property to PutAzureDataLakeStorage processor
> -
>
> Key: NIFI-7445
> URL: https://issues.apache.org/jira/browse/NIFI-7445
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Peter Turcsanyi
>Assignee: Peter Gyori
>Priority: Major
>  Labels: azure
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> PutAzureDataLakeStorage currently overwrites existing files without error 
> (azure-storage-file-datalake 12.0.1).
> Add Conflict Resolution property with values: fail (default), replace, ignore 
> (similar to PutFile).
> DataLakeDirectoryClient.createFile(String fileName, boolean overwrite) can be 
> used (available from 12.1.x)
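
A rough sketch of how the property values could map onto that SDK call (this 
is not the submitted patch; the value names fail/replace/ignore mirror PutFile, 
per the ticket, and assume azure-storage-file-datalake 12.1.x+):

```java
import com.azure.storage.file.datalake.DataLakeDirectoryClient;
import com.azure.storage.file.datalake.DataLakeFileClient;

class ConflictResolutionSketch {
    static DataLakeFileClient createFile(DataLakeDirectoryClient directory,
                                         String fileName, String conflictResolution) {
        switch (conflictResolution) {
            case "replace":
                return directory.createFile(fileName, true);  // overwrite silently
            case "ignore":
                if (directory.getFileClient(fileName).exists()) {
                    return null; // keep the existing file; route the FlowFile to success
                }
                return directory.createFile(fileName, false);
            case "fail":
            default:
                return directory.createFile(fileName, false); // throws if the file exists
        }
    }
}
```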



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] markap14 commented on a change in pull request #4282: NIFI-7462: Update to allow FlowFile Table's schema to be more intelligent when using CHOICE types

2020-05-20 Thread GitBox


markap14 commented on a change in pull request #4282:
URL: https://github.com/apache/nifi/pull/4282#discussion_r428138798



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/queryrecord/FlowFileTable.java
##
@@ -223,12 +225,69 @@ private RelDataType getRelDataType(final DataType 
fieldType, final JavaTypeFacto
 case BIGINT:
 return typeFactory.createJavaType(BigInteger.class);
 case CHOICE:
+final ChoiceDataType choiceDataType = (ChoiceDataType) 
fieldType;
+DataType widestDataType = 
choiceDataType.getPossibleSubTypes().get(0);
+for (final DataType possibleType : 
choiceDataType.getPossibleSubTypes()) {
+if (possibleType == widestDataType) {
+continue;
+}
+if 
(possibleType.getFieldType().isWiderThan(widestDataType.getFieldType())) {
+widestDataType = possibleType;
+continue;
+}
+if 
(widestDataType.getFieldType().isWiderThan(possibleType.getFieldType())) {
+continue;
+}
+
+// Neither is wider than the other.
+widestDataType = null;
+break;
+}
+
+// If one of the CHOICE data types is the widest, use it.
+if (widestDataType != null) {
+return getRelDataType(widestDataType, typeFactory);
+}
+
+// None of the data types is strictly the widest. Check if all 
data types are numeric.
+// This would happen, for instance, if the data type is a 
choice between float and integer.
+// If that is the case, we can use a String type for the table 
schema because all values will fit
+// into a String. This will still allow for casting, etc. if 
the query requires it.
+boolean allNumeric = true;
+for (final DataType possibleType : 
choiceDataType.getPossibleSubTypes()) {
+if (!isNumeric(possibleType)) {
+allNumeric = false;
+break;
+}
+}
+
+if (allNumeric) {

Review comment:
   @pcgrenier I do see your argument. But I think I disagree. Let's say 
that we have the following CSV then:
   ```
   name, other
   markap14, 48
   pcgrenier, 19
   ```
   And the data's schema indicates that "other" is a CHOICE between STRING or 
INT. Fundamentally, it comes down to one question: in this case, should we 
allow the user to run a query like `SELECT SUM(other) as total FROM FLOWFILE`.
   
   Your argument is yes, because I happen to know that for this particular 
FlowFile it will succeed.
   
   My argument, though, is that we should not allow it. Yes, it would succeed 
for this FlowFile, but what will happen is that it will fail for other 
FlowFiles. Other FlowFiles that have the same schema and other FlowFiles for 
which the data is exactly correct according to the schema. This makes things 
more confusing because some FlowFiles succeed and others fail. All the while, 
the schema clearly indicates that the data may be a STRING, and you shouldn't 
really be able to sum together STRING data. If you do sort out all the number 
values, it makes sense to use a schema that reflects that. Otherwise, if we 
allow summing together a CHOICE[int, string] we're not really honoring the 
schema, in my point of view.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] pgyori opened a new pull request #4287: NIFI-7445: Add Conflict Resolution property to PutAzureDataLakeStorage processor

2020-05-20 Thread GitBox


pgyori opened a new pull request #4287:
URL: https://github.com/apache/nifi/pull/4287


   https://issues.apache.org/jira/browse/NIFI-7445
   
    Description of PR
   
   PutAzureDataLakeStorage processor now has a Conflict Resolution property. 
With the help of this property, the user can specify how the processor should 
behave when there is a name conflict while uploading a file to Azure.
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on pull request #770: MINIFICPP-1203 - upgrade linter version to 1.4.4 and fix relevant linter errors

2020-05-20 Thread GitBox


hunyadi-dev commented on pull request #770:
URL: https://github.com/apache/nifi-minifi-cpp/pull/770#issuecomment-631574636


   > I find the Google include ordering rules strange, but that's in 
CONTRIB.md, so I guess we're better off enforcing them.
   > 
   > Thanks for the upgrade!
   
   Agreed on the rules being strange; I would much rather prefer the LLVM 
ordering too. I could patch the linter to change the order, but that would 
mean we would need to apply the same patch every time we want to upgrade the 
linter version (and that everyone developing code with linter assistance 
would need to apply the same patch to their own linter). I don't think the 
change is worth the effort.
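
   For reference, a sketch of the grouping and order the Google style (and 
cpplint 1.4.4) expects; the header names below are illustrative:
   ```cpp
   // In foo.cpp, cpplint expects roughly this grouping and order:
   #include "foo.h"           // 1. this file's related header

   #include <sys/types.h>     // 2. C system headers

   #include <string>          // 3. C++ standard library headers
   #include <vector>

   #include "otherlib/bar.h"  // 4. other libraries' headers
   #include "project/baz.h"   // 5. this project's headers
   ```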



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (NIFIREG-393) VersionedComponent field name does not match getter/setter

2020-05-20 Thread Matthew Knight (Jira)
Matthew Knight created NIFIREG-393:
--

 Summary: VersionedComponent field name does not match getter/setter
 Key: NIFIREG-393
 URL: https://issues.apache.org/jira/browse/NIFIREG-393
 Project: NiFi Registry
  Issue Type: Bug
Affects Versions: 0.6.0
Reporter: Matthew Knight


In org.apache.nifi.registry.flow.VersionedComponent the field name groupId does 
not match the getter/setter getGroupIdentifier.

This causes problems with exporting flows to json using nifi-registry (uses 
jackson, which references getter/setters) and importing those flows into 
nifi-stateless (uses gson, which looks at field names).

Changing the field name from groupId to groupIdentifier should fix this problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #776: MINIFICPP-1202 - Handle C2 requests/responses using MinifiConcurrentQueue

2020-05-20 Thread GitBox


hunyadi-dev commented on a change in pull request #776:
URL: https://github.com/apache/nifi-minifi-cpp/pull/776#discussion_r428131256



##
File path: libminifi/include/c2/C2Agent.h
##
@@ -41,8 +41,15 @@ namespace org {
 namespace apache {
 namespace nifi {
 namespace minifi {
+
+namespace utils {
+template <typename T>
+class ConditionConcurrentQueue;
+}

Review comment:
   You are completely right, at first I thought we would only need the 
declaration and left this here. Will replace it with a proper include.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (NIFIREG-390) Add .asf.yaml file to GitHub repo

2020-05-20 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFIREG-390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess resolved NIFIREG-390.
--
Fix Version/s: 1.0.0
   Resolution: Fixed

> Add .asf.yaml file to GitHub repo
> -
>
> Key: NIFIREG-390
> URL: https://issues.apache.org/jira/browse/NIFIREG-390
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Minor
> Fix For: 1.0.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The ASF has a new way of configuring integrations and such for the Apache 
> GitHub repositories. It involves putting a .asf.yaml file at the root of the 
> project with various settings, such as email addresses for notifications, 
> which buttons/facets should be available on the UI, etc.
> This case is to cover adding a .asf.yaml file to the nifi-registry repo. The 
> other repos will be covered in their respective Jira projects.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-registry] mattyb149 closed pull request #280: NIFIREG-390: Add .asf.yaml file to GitHub repo

2020-05-20 Thread GitBox


mattyb149 closed pull request #280:
URL: https://github.com/apache/nifi-registry/pull/280


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #776: MINIFICPP-1202 - Handle C2 requests/responses using MinifiConcurrentQueue

2020-05-20 Thread GitBox


hunyadi-dev commented on a change in pull request #776:
URL: https://github.com/apache/nifi-minifi-cpp/pull/776#discussion_r428117653



##
File path: libminifi/src/c2/C2Agent.cpp
##
@@ -75,54 +78,55 @@ C2Agent::C2Agent(const std::shared_ptr lock(request_mutex, std::adopt_lock);
-    if (!requests.empty()) {
-      int count = 0;
-      do {
-        const C2Payload payload(std::move(requests.back()));
-        requests.pop_back();
-        try {
-          C2Payload && response = protocol_.load()->consumePayload(payload);
-          enqueue_c2_server_response(std::move(response));
-        }
-        catch(const std::exception &e) {
-          logger_->log_error("Exception occurred while consuming payload. error: %s", e.what());
-        }
-        catch(...) {
-          logger_->log_error("Unknonwn exception occurred while consuming payload.");
-        }
-      }while(!requests.empty() && ++count < max_c2_responses);
+    if (protocol_.load() != nullptr) {
+      std::vector<C2Payload> payload_batch;
+      payload_batch.reserve(max_c2_responses);
+      auto getRequestPayload = [&payload_batch] (C2Payload&& payload) { payload_batch.emplace_back(std::move(payload)); };
+      const std::chrono::system_clock::time_point timeout_point = std::chrono::system_clock::now() + std::chrono::milliseconds(1);
+      for (std::size_t attempt_num = 0; attempt_num < max_c2_responses; ++attempt_num) {
+        if (!requests.consumeWaitUntil(getRequestPayload, timeout_point)) {
+          break;
         }
       }
-      try {
-        performHeartBeat();
-      }
-      catch(const std::exception &e) {
-        logger_->log_error("Exception occurred while performing heartbeat. error: %s", e.what());
-      }
-      catch(...) {
-        logger_->log_error("Unknonwn exception occurred while performing heartbeat.");
-      }
+      std::for_each(
+        std::make_move_iterator(payload_batch.begin()),
+        std::make_move_iterator(payload_batch.end()),
+        [&] (C2Payload&& payload) {
+          try {
+            C2Payload && response = protocol_.load()->consumePayload(std::move(payload));
+            enqueue_c2_server_response(std::move(response));
+          }
+          catch(const std::exception &e) {
+            logger_->log_error("Exception occurred while consuming payload. error: %s", e.what());
+          }
+          catch(...) {
+            logger_->log_error("Unknonwn exception occurred while consuming payload.");
+          }
+        });
 
-      checkTriggers();
+      try {
+        performHeartBeat();
+      }
+      catch (const std::exception &e) {
+        logger_->log_error("Exception occurred while performing heartbeat. error: %s", e.what());
+      }
+      catch (...) {
+        logger_->log_error("Unknonwn exception occurred while performing heartbeat.");
+      }
+    }
+
+    checkTriggers();
+
+    return utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(heart_beat_period_));
+  };
 
-      return utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(heart_beat_period_));
-    };
   functions_.push_back(c2_producer_);
 
-  c2_consumer_ = [&]() {
-    if ( queue_mutex.try_lock_for(std::chrono::seconds(1)) ) {
-      C2Payload payload(Operation::HEARTBEAT);
-      {
-        std::lock_guard lock(queue_mutex, std::adopt_lock);
-        if (responses.empty()) {
-          return utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(C2RESPONSE_POLL_MS));
-        }
-        payload = std::move(responses.back());
-        responses.pop_back();
+  c2_consumer_ = [&] {
+    if (responses.size()) {
+      if (!responses.consumeWaitFor([this](C2Payload&& e) { extractPayload(std::move(e)); }, std::chrono::seconds(1))) {

Review comment:
   I am probably wrong, but as far as I can see, this is the same behaviour:
   
   - If no elements were present in the queue, we did not call 
`extractPayload()` at all.
   - If there were elements, but by the time we wanted to grab the lock the 
queue became busy, we waited 1 second and tried fetching an element
 - If by this time we succeeded, we called `extractPayload()` with this 
dequeued data
 - Otherwise we used the default constructed payload: `C2Payload{ 
Operation::HEARTBEAT }` instead.
   
   Regarding the other point:
   To me `!empty()` feels like something that is easily mistaken for `empty()`, 
`size()` is guaranteed to be constant time for `std::deque` and `empty()` is 
probably implemented as `size() == 0` anyway, right?
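
   A minimal sketch of the two guards being compared (assuming the queue wraps 
a `std::deque`, where both calls are O(1)):
   ```cpp
   // Equivalent guards for "queue has elements"; the difference is stylistic.
   if (!responses.empty()) { /* consume */ }  // explicit emptiness check
   if (responses.size()) { /* consume */ }    // relies on size-to-bool conversion
   ```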





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #776: MINIFICPP-1202 - Handle C2 requests/responses using MinifiConcurrentQueue

2020-05-20 Thread GitBox


hunyadi-dev commented on a change in pull request #776:
URL: https://github.com/apache/nifi-minifi-cpp/pull/776#discussion_r428117653



##
File path: libminifi/src/c2/C2Agent.cpp
##
@@ -75,54 +78,55 @@ C2Agent::C2Agent(const std::shared_ptr lock(request_mutex, std::adopt_lock);
-    if (!requests.empty()) {
-      int count = 0;
-      do {
-        const C2Payload payload(std::move(requests.back()));
-        requests.pop_back();
-        try {
-          C2Payload && response = protocol_.load()->consumePayload(payload);
-          enqueue_c2_server_response(std::move(response));
-        }
-        catch(const std::exception &e) {
-          logger_->log_error("Exception occurred while consuming payload. error: %s", e.what());
-        }
-        catch(...) {
-          logger_->log_error("Unknonwn exception occurred while consuming payload.");
-        }
-      }while(!requests.empty() && ++count < max_c2_responses);
+    if (protocol_.load() != nullptr) {
+      std::vector<C2Payload> payload_batch;
+      payload_batch.reserve(max_c2_responses);
+      auto getRequestPayload = [&payload_batch] (C2Payload&& payload) { payload_batch.emplace_back(std::move(payload)); };
+      const std::chrono::system_clock::time_point timeout_point = std::chrono::system_clock::now() + std::chrono::milliseconds(1);
+      for (std::size_t attempt_num = 0; attempt_num < max_c2_responses; ++attempt_num) {
+        if (!requests.consumeWaitUntil(getRequestPayload, timeout_point)) {
+          break;
         }
       }
-      try {
-        performHeartBeat();
-      }
-      catch(const std::exception &e) {
-        logger_->log_error("Exception occurred while performing heartbeat. error: %s", e.what());
-      }
-      catch(...) {
-        logger_->log_error("Unknonwn exception occurred while performing heartbeat.");
-      }
+      std::for_each(
+        std::make_move_iterator(payload_batch.begin()),
+        std::make_move_iterator(payload_batch.end()),
+        [&] (C2Payload&& payload) {
+          try {
+            C2Payload && response = protocol_.load()->consumePayload(std::move(payload));
+            enqueue_c2_server_response(std::move(response));
+          }
+          catch(const std::exception &e) {
+            logger_->log_error("Exception occurred while consuming payload. error: %s", e.what());
+          }
+          catch(...) {
+            logger_->log_error("Unknonwn exception occurred while consuming payload.");
+          }
+        });
 
-      checkTriggers();
+      try {
+        performHeartBeat();
+      }
+      catch (const std::exception &e) {
+        logger_->log_error("Exception occurred while performing heartbeat. error: %s", e.what());
+      }
+      catch (...) {
+        logger_->log_error("Unknonwn exception occurred while performing heartbeat.");
+      }
+    }
+
+    checkTriggers();
+
+    return utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(heart_beat_period_));
+  };
 
-      return utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(heart_beat_period_));
-    };
   functions_.push_back(c2_producer_);
 
-  c2_consumer_ = [&]() {
-    if ( queue_mutex.try_lock_for(std::chrono::seconds(1)) ) {
-      C2Payload payload(Operation::HEARTBEAT);
-      {
-        std::lock_guard lock(queue_mutex, std::adopt_lock);
-        if (responses.empty()) {
-          return utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(C2RESPONSE_POLL_MS));
-        }
-        payload = std::move(responses.back());
-        responses.pop_back();
+  c2_consumer_ = [&] {
+    if (responses.size()) {
+      if (!responses.consumeWaitFor([this](C2Payload&& e) { extractPayload(std::move(e)); }, std::chrono::seconds(1))) {

Review comment:
   I am probably wrong, but as far as I can see, this is the same behaviour:
   
   - If no elements were present in the queue, we did not call 
`extractPayload()` at all.
   - If there were elements, but by the time we wanted to grab the lock the 
queue became busy, we waited 1 second and tried fetching an element
 - If by this time we succeeded, we called `extractPayload()` with this 
dequeued data
 - Otherwise we used the default constructed payload: `C2Payload{ 
Operation::HEARTBEAT }` instead.
   
   To me `!empty()` feels like something that is easily mistaken for `empty()`, 
`size()` is guaranteed to be constant time for `std::deque` and `empty()` is 
probably implemented as `size() == 0` anyway, right?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #776: MINIFICPP-1202 - Handle C2 requests/responses using MinifiConcurrentQueue

2020-05-20 Thread GitBox


hunyadi-dev commented on a change in pull request #776:
URL: https://github.com/apache/nifi-minifi-cpp/pull/776#discussion_r428117653



##
File path: libminifi/src/c2/C2Agent.cpp
##
@@ -75,54 +78,55 @@ C2Agent::C2Agent(const std::shared_ptr lock(request_mutex, std::adopt_lock);
-    if (!requests.empty()) {
-      int count = 0;
-      do {
-        const C2Payload payload(std::move(requests.back()));
-        requests.pop_back();
-        try {
-          C2Payload && response = protocol_.load()->consumePayload(payload);
-          enqueue_c2_server_response(std::move(response));
-        }
-        catch(const std::exception &e) {
-          logger_->log_error("Exception occurred while consuming payload. error: %s", e.what());
-        }
-        catch(...) {
-          logger_->log_error("Unknonwn exception occurred while consuming payload.");
-        }
-      }while(!requests.empty() && ++count < max_c2_responses);
+    if (protocol_.load() != nullptr) {
+      std::vector<C2Payload> payload_batch;
+      payload_batch.reserve(max_c2_responses);
+      auto getRequestPayload = [&payload_batch] (C2Payload&& payload) { payload_batch.emplace_back(std::move(payload)); };
+      const std::chrono::system_clock::time_point timeout_point = std::chrono::system_clock::now() + std::chrono::milliseconds(1);
+      for (std::size_t attempt_num = 0; attempt_num < max_c2_responses; ++attempt_num) {
+        if (!requests.consumeWaitUntil(getRequestPayload, timeout_point)) {
+          break;
         }
       }
-      try {
-        performHeartBeat();
-      }
-      catch(const std::exception &e) {
-        logger_->log_error("Exception occurred while performing heartbeat. error: %s", e.what());
-      }
-      catch(...) {
-        logger_->log_error("Unknonwn exception occurred while performing heartbeat.");
-      }
+      std::for_each(
+        std::make_move_iterator(payload_batch.begin()),
+        std::make_move_iterator(payload_batch.end()),
+        [&] (C2Payload&& payload) {
+          try {
+            C2Payload && response = protocol_.load()->consumePayload(std::move(payload));
+            enqueue_c2_server_response(std::move(response));
+          }
+          catch(const std::exception &e) {
+            logger_->log_error("Exception occurred while consuming payload. error: %s", e.what());
+          }
+          catch(...) {
+            logger_->log_error("Unknonwn exception occurred while consuming payload.");
+          }
+        });
 
-      checkTriggers();
+      try {
+        performHeartBeat();
+      }
+      catch (const std::exception &e) {
+        logger_->log_error("Exception occurred while performing heartbeat. error: %s", e.what());
+      }
+      catch (...) {
+        logger_->log_error("Unknonwn exception occurred while performing heartbeat.");
+      }
+    }
+
+    checkTriggers();
+
+    return utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(heart_beat_period_));
+  };
 
-      return utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(heart_beat_period_));
-    };
   functions_.push_back(c2_producer_);
 
-  c2_consumer_ = [&]() {
-    if ( queue_mutex.try_lock_for(std::chrono::seconds(1)) ) {
-      C2Payload payload(Operation::HEARTBEAT);
-      {
-        std::lock_guard lock(queue_mutex, std::adopt_lock);
-        if (responses.empty()) {
-          return utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(C2RESPONSE_POLL_MS));
-        }
-        payload = std::move(responses.back());
-        responses.pop_back();
+  c2_consumer_ = [&] {
+    if (responses.size()) {
+      if (!responses.consumeWaitFor([this](C2Payload&& e) { extractPayload(std::move(e)); }, std::chrono::seconds(1))) {

Review comment:
   As far as I can see, this is the same behaviour:
   
   - If no elements were present in the queue, we did not call 
`extractPayload()` at all.
   - If there were elements, but by the time we wanted to grab the lock the 
queue became busy, we waited 1 second and tried fetching an element
 - If by this time we succeeded, we called `extractPayload()` with this 
dequeued data
 - Otherwise we used the default constructed payload: `C2Payload{ 
Operation::HEARTBEAT }` instead.
   
   To me `!empty()` feels like something that is easily mistaken for `empty()`, 
`size()` is guaranteed to be constant time for `std::deque` and `empty()` is 
probably implemented as `size() == 0` anyway, right?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #776: MINIFICPP-1202 - Handle C2 requests/responses using MinifiConcurrentQueue

2020-05-20 Thread GitBox


hunyadi-dev commented on a change in pull request #776:
URL: https://github.com/apache/nifi-minifi-cpp/pull/776#discussion_r428109777



##
File path: libminifi/src/c2/C2Agent.cpp
##
@@ -75,54 +78,55 @@ C2Agent::C2Agent(const std::shared_ptr lock(request_mutex, std::adopt_lock);
-    if (!requests.empty()) {
-      int count = 0;
-      do {
-        const C2Payload payload(std::move(requests.back()));
-        requests.pop_back();
-        try {
-          C2Payload && response = protocol_.load()->consumePayload(payload);
-          enqueue_c2_server_response(std::move(response));
-        }
-        catch(const std::exception &e) {
-          logger_->log_error("Exception occurred while consuming payload. error: %s", e.what());
-        }
-        catch(...) {
-          logger_->log_error("Unknonwn exception occurred while consuming payload.");
-        }
-      }while(!requests.empty() && ++count < max_c2_responses);
+    if (protocol_.load() != nullptr) {
+      std::vector<C2Payload> payload_batch;
+      payload_batch.reserve(max_c2_responses);
+      auto getRequestPayload = [&payload_batch] (C2Payload&& payload) { payload_batch.emplace_back(std::move(payload)); };
+      const std::chrono::system_clock::time_point timeout_point = std::chrono::system_clock::now() + std::chrono::milliseconds(1);
+      for (std::size_t attempt_num = 0; attempt_num < max_c2_responses; ++attempt_num) {
+        if (!requests.consumeWaitUntil(getRequestPayload, timeout_point)) {
+          break;
        }
      }
-      try {
-        performHeartBeat();
-      }
-      catch(const std::exception &e) {
-        logger_->log_error("Exception occurred while performing heartbeat. error: %s", e.what());
-      }
-      catch(...) {
-        logger_->log_error("Unknonwn exception occurred while performing heartbeat.");
-      }
+      std::for_each(
+        std::make_move_iterator(payload_batch.begin()),
+        std::make_move_iterator(payload_batch.end()),
+        [&] (C2Payload&& payload) {
+          try {
+            C2Payload && response = protocol_.load()->consumePayload(std::move(payload));
+            enqueue_c2_server_response(std::move(response));
+          }
+          catch(const std::exception &e) {
+            logger_->log_error("Exception occurred while consuming payload. error: %s", e.what());
+          }
+          catch(...) {
+            logger_->log_error("Unknonwn exception occurred while consuming payload.");
+          }
+        });
 
-      checkTriggers();
+      try {
+        performHeartBeat();
+      }
+      catch (const std::exception &e) {
+        logger_->log_error("Exception occurred while performing heartbeat. error: %s", e.what());
+      }
+      catch (...) {
+        logger_->log_error("Unknonwn exception occurred while performing heartbeat.");
+      }

Review comment:
   That is correct; that is one indent too many, because the `std::for_each` 
actually takes up two indentation levels.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #776: MINIFICPP-1202 - Handle C2 requests/responses using MinifiConcurrentQueue

2020-05-20 Thread GitBox


hunyadi-dev commented on a change in pull request #776:
URL: https://github.com/apache/nifi-minifi-cpp/pull/776#discussion_r428105220



##
File path: libminifi/src/c2/C2Agent.cpp
##
@@ -75,54 +78,55 @@ C2Agent::C2Agent(const std::shared_ptr lock(request_mutex, std::adopt_lock);
-    if (!requests.empty()) {
-      int count = 0;
-      do {
-        const C2Payload payload(std::move(requests.back()));
-        requests.pop_back();
-        try {
-          C2Payload && response = protocol_.load()->consumePayload(payload);
-          enqueue_c2_server_response(std::move(response));
-        }
-        catch(const std::exception &e) {
-          logger_->log_error("Exception occurred while consuming payload. error: %s", e.what());
-        }
-        catch(...) {
-          logger_->log_error("Unknonwn exception occurred while consuming payload.");
-        }
-      }while(!requests.empty() && ++count < max_c2_responses);
+    if (protocol_.load() != nullptr) {
+      std::vector<C2Payload> payload_batch;
+      payload_batch.reserve(max_c2_responses);
+      auto getRequestPayload = [&payload_batch] (C2Payload&& payload) { payload_batch.emplace_back(std::move(payload)); };
+      const std::chrono::system_clock::time_point timeout_point = std::chrono::system_clock::now() + std::chrono::milliseconds(1);
+      for (std::size_t attempt_num = 0; attempt_num < max_c2_responses; ++attempt_num) {
+        if (!requests.consumeWaitUntil(getRequestPayload, timeout_point)) {

Review comment:
   Good catch, we want to wait 1 second, not 1 millisecond on locking.
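
   A sketch of the corrected wait, assuming the fix simply swaps the duration 
unit:
   ```cpp
   // Wait up to 1 second (not 1 millisecond) for a request payload to arrive.
   const std::chrono::system_clock::time_point timeout_point =
       std::chrono::system_clock::now() + std::chrono::seconds(1);
   ```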





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #776: MINIFICPP-1202 - Handle C2 requests/responses using MinifiConcurrentQueue

2020-05-20 Thread GitBox


hunyadi-dev commented on a change in pull request #776:
URL: https://github.com/apache/nifi-minifi-cpp/pull/776#discussion_r428105220



##
File path: libminifi/src/c2/C2Agent.cpp
##
@@ -75,54 +78,55 @@ C2Agent::C2Agent(const std::shared_ptr lock(request_mutex, std::adopt_lock);
-    if (!requests.empty()) {
-      int count = 0;
-      do {
-        const C2Payload payload(std::move(requests.back()));
-        requests.pop_back();
-        try {
-          C2Payload && response = protocol_.load()->consumePayload(payload);
-          enqueue_c2_server_response(std::move(response));
-        }
-        catch(const std::exception &e) {
-          logger_->log_error("Exception occurred while consuming payload. error: %s", e.what());
-        }
-        catch(...) {
-          logger_->log_error("Unknonwn exception occurred while consuming payload.");
-        }
-      }while(!requests.empty() && ++count < max_c2_responses);
+    if (protocol_.load() != nullptr) {
+      std::vector<C2Payload> payload_batch;
+      payload_batch.reserve(max_c2_responses);
+      auto getRequestPayload = [&payload_batch] (C2Payload&& payload) { payload_batch.emplace_back(std::move(payload)); };
+      const std::chrono::system_clock::time_point timeout_point = std::chrono::system_clock::now() + std::chrono::milliseconds(1);
+      for (std::size_t attempt_num = 0; attempt_num < max_c2_responses; ++attempt_num) {
+        if (!requests.consumeWaitUntil(getRequestPayload, timeout_point)) {

Review comment:
   Good catch, we want to wait 1 second, not 1 millisecond on locking.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #770: MINIFICPP-1203 - upgrade linter version to 1.4.4 and fix relevant linter errors

2020-05-20 Thread GitBox


hunyadi-dev commented on a change in pull request #770:
URL: https://github.com/apache/nifi-minifi-cpp/pull/770#discussion_r428081193



##
File path: libminifi/test/archive-tests/FocusArchiveTests.cpp
##
@@ -16,24 +16,23 @@
  * limitations under the License.
  */
 
+#include 
 #include 
 #include 
+#include 
 #include 
 #include 
-#include 
-#include 
 #include 
 
-#include "../TestBase.h"
+#include  // NOLINT
+#include  // NOLINT

Review comment:
   This is incorrectly recognized as a C header. Also `NOLINT` comments do 
not need to be preceded by 2 spaces.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #770: MINIFICPP-1203 - upgrade linter version to 1.4.4 and fix relevant linter errors

2020-05-20 Thread GitBox


hunyadi-dev commented on a change in pull request #770:
URL: https://github.com/apache/nifi-minifi-cpp/pull/770#discussion_r428080271



##
File path: libminifi/src/io/StreamFactory.cpp
##
@@ -16,12 +16,13 @@
  * limitations under the License.
  */
 #include "io/StreamFactory.h"
+
 #include 
 #include 
 #include 
 #include 
-#include 
 
+#include  // NOLINT

Review comment:
   This is incorrectly recognized as a C header. Also `NOLINT` comments do 
not need to be preceded by 2 spaces.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (NIFI-7471) impossible run nifi with redis cluster state manager

2020-05-20 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne resolved NIFI-7471.
--
Fix Version/s: 1.12.0
   Resolution: Fixed

> impossible run nifi with redis cluster state manager
> 
>
> Key: NIFI-7471
> URL: https://issues.apache.org/jira/browse/NIFI-7471
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Ilya Kovalev
>Priority: Critical
> Fix For: 1.12.0
>
> Attachments: docker-compose.yml, nifi.properties, state-management.xml
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> below error occurs when starting nifi with redis cluster state manager
> {code:java}
> Caused by: java.lang.IllegalStateException: Could not initialize State 
> Providers because the Cluster State Provider is not valid. The following 1 
> Validation Errors occurred: 'Connection String' is invalid 
> because Connection String is required Please check the configuration of the 
> Cluster State Provider with ID [redis-provider] in the file 
> /opt/nifi/nifi-current/./conf/state-management.xml
> {code}
> attached my state-management.xml and nifi.properties
> How to recreate:
> 1. docker-compose up containers
> 2. copy configurations into nifi node
> 3. restart node



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7471) impossible run nifi with redis cluster state manager

2020-05-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112306#comment-17112306
 ] 

ASF subversion and git services commented on NIFI-7471:
---

Commit 7034d7e44c0644fb126136b503da4c11d6c41aff in nifi's branch 
refs/heads/master from KovalevIV
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=7034d7e ]

NIFI-7471 fix bug with property validation


> impossible run nifi with redis cluster state manager
> 
>
> Key: NIFI-7471
> URL: https://issues.apache.org/jira/browse/NIFI-7471
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Ilya Kovalev
>Priority: Critical
> Attachments: docker-compose.yml, nifi.properties, state-management.xml
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> below error occurs when starting nifi with redis cluster state manager
> {code:java}
> Caused by: java.lang.IllegalStateException: Could not initialize State 
> Providers because the Cluster State Provider is not valid. The following 1 
> Validation Errors occurred: 'Connection String' is invalid 
> because Connection String is required Please check the configuration of the 
> Cluster State Provider with ID [redis-provider] in the file 
> /opt/nifi/nifi-current/./conf/state-management.xml
> {code}
> attached my state-management.xml and nifi.properties
> How to recreate:
> 1. docker-compose up containers
> 2. copy configurations into nifi node
> 3. restart node



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] markap14 commented on pull request #4285: NIFI-7471 fix bug with property validation

2020-05-20 Thread GitBox


markap14 commented on pull request #4285:
URL: https://github.com/apache/nifi/pull/4285#issuecomment-631523161


   That's a great catch @IlyaKovalev! Thanks for the fix - +1 merged it to 
master.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markap14 merged pull request #4285: NIFI-7471 fix bug with property validation

2020-05-20 Thread GitBox


markap14 merged pull request #4285:
URL: https://github.com/apache/nifi/pull/4285


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] james94 commented on pull request #781: MINIFICPP-1214: Converts H2O Processors to use ALv2 compliant H20-3 library

2020-05-20 Thread GitBox


james94 commented on pull request #781:
URL: https://github.com/apache/nifi-minifi-cpp/pull/781#issuecomment-631508317


   @phrocker and @szaszm would writing regression tests for this new Python 
processor follow a similar approach to writing regression tests for a NiFi 
processor, except that instead of using nifi-mock, we use pytest-mock with 
multiple test cases against this processor? Are integration tests the same as 
regression tests?
   
   There are two Python processors I would need to write regression tests for:
   
   - **ExecuteH2oMojoScoring.py**
   - **ConvertDsToCsv.py**



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #770: MINIFICPP-1203 - upgrade linter version to 1.4.4 and fix relevant linter errors

2020-05-20 Thread GitBox


szaszm commented on a change in pull request #770:
URL: https://github.com/apache/nifi-minifi-cpp/pull/770#discussion_r428014274



##
File path: libminifi/src/io/StreamFactory.cpp
##
@@ -16,12 +16,13 @@
  * limitations under the License.
  */
 #include "io/StreamFactory.h"
+
 #include 
 #include 
 #include 
 #include 
-#include 
 
+#include  // NOLINT

Review comment:
   why do we need NOLINT here?
   Also, end of line comments should have 2 preceding spaces.

##
File path: libminifi/test/archive-tests/FocusArchiveTests.cpp
##
@@ -16,24 +16,23 @@
  * limitations under the License.
  */
 
+#include 
 #include 
 #include 
+#include 
 #include 
 #include 
-#include 
-#include 
 #include 
 
-#include "../TestBase.h"
+#include  // NOLINT
+#include  // NOLINT

Review comment:
   why do we need NOLINT here?
   
   End of line comments should have 2 spaces before and 1 after the `//`.
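
   For instance, the expected spacing looks like this (header name 
illustrative):
   ```cpp
   #include "SomeProjectHeader.h"  // NOLINT
   ```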





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (MINIFICPP-1230) Enable MergeFileTests on Windows.

2020-05-20 Thread Adam Debreceni (Jira)
Adam Debreceni created MINIFICPP-1230:
-

 Summary: Enable MergeFileTests on Windows.
 Key: MINIFICPP-1230
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1230
 Project: Apache NiFi MiNiFi C++
  Issue Type: Bug
Reporter: Adam Debreceni
Assignee: Adam Debreceni


The only archive test that runs on Windows is the CompressContentTests; turn 
MergeFileTests on and fix all issues that arise.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] hunyadi-dev opened a new pull request #789: MINIFICPP-1203 - Enable header linting in include directories and resolve linter recommendations

2020-05-20 Thread GitBox


hunyadi-dev opened a new pull request #789:
URL: https://github.com/apache/nifi-minifi-cpp/pull/789


   ### Please pay special attention to the following commits on reviewing:
   
   - 1789ea5 MINIFICPP-1203 - Resolve include order linter recommendations
   - 4313831 MINIFICPP-1203 - Fix all leftover linter recommendations
   
   ---
   ### :a:uto generated text, please do not waste your time reading this:
   
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #776: MINIFICPP-1202 - Handle C2 requests/responses using MinifiConcurrentQueue

2020-05-20 Thread GitBox


szaszm commented on a change in pull request #776:
URL: https://github.com/apache/nifi-minifi-cpp/pull/776#discussion_r427436157



##
File path: libminifi/include/c2/C2Agent.h
##
@@ -41,8 +41,15 @@ namespace org {
 namespace apache {
 namespace nifi {
 namespace minifi {
+
+namespace utils {
+template <typename T>
+class ConditionConcurrentQueue;
+}

Review comment:
   Why don't we include the concurrent queue header? `C2Agent` needs the 
definition, not just the declaration. It compiles probably because we 
accidentally include it anyway (ThreadPool?), but we should include it 
explicitly if we use it.
   
   https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#Rs-implicit
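
   A minimal sketch of the guideline (class names illustrative): a forward 
declaration only supports pointers and references to the type, so anything 
that stores the queue by value or calls its members needs the defining header 
included explicitly.
   ```cpp
   // Forward declaration: enough for pointer/reference uses only.
   template <typename T>
   class ConditionConcurrentQueue;

   class UsesPointer {
     ConditionConcurrentQueue<int>* queue_;  // OK: incomplete type suffices
   };

   // A by-value member (or any member-function call) requires the complete
   // type, so the defining header must be included rather than relying on a
   // transitive include:
   // #include "utils/MinifiConcurrentQueue.h"  // assumed defining header
   // class UsesValue {
   //   ConditionConcurrentQueue<int> queue_;  // needs the full definition
   // };
   ```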

##
File path: libminifi/src/c2/C2Agent.cpp
##
@@ -75,54 +78,55 @@ C2Agent::C2Agent(const std::shared_ptr lock(request_mutex, std::adopt_lock);
-    if (!requests.empty()) {
-      int count = 0;
-      do {
-        const C2Payload payload(std::move(requests.back()));
-        requests.pop_back();
-        try {
-          C2Payload && response = protocol_.load()->consumePayload(payload);
-          enqueue_c2_server_response(std::move(response));
-        }
-        catch(const std::exception &e) {
-          logger_->log_error("Exception occurred while consuming payload. error: %s", e.what());
-        }
-        catch(...) {
-          logger_->log_error("Unknonwn exception occurred while consuming payload.");
-        }
-      }while(!requests.empty() && ++count < max_c2_responses);
+    if (protocol_.load() != nullptr) {
+      std::vector<C2Payload> payload_batch;
+      payload_batch.reserve(max_c2_responses);
+      auto getRequestPayload = [&payload_batch] (C2Payload&& payload) { payload_batch.emplace_back(std::move(payload)); };
+      const std::chrono::system_clock::time_point timeout_point = std::chrono::system_clock::now() + std::chrono::milliseconds(1);
+      for (std::size_t attempt_num = 0; attempt_num < max_c2_responses; ++attempt_num) {
+        if (!requests.consumeWaitUntil(getRequestPayload, timeout_point)) {

Review comment:
   Previously we took only the immediately available payloads; now we wait 
1 ms. Why the change?
   
   If we reverted to the old way, it would also no longer be a problem to 
reunify the first two loops and eliminate the need for the temporary buffer 
(`payload_batch`).
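
   A sketch of the reunified loop this would allow (assuming a non-blocking 
`tryDequeue(T&)` on the queue; the name is illustrative):
   ```cpp
   // Process only the immediately available payloads in place, with no
   // intermediate payload_batch buffer.
   for (std::size_t attempt_num = 0; attempt_num < max_c2_responses; ++attempt_num) {
     C2Payload payload(Operation::HEARTBEAT);
     if (!requests.tryDequeue(payload)) {
       break;  // nothing immediately available
     }
     try {
       C2Payload&& response = protocol_.load()->consumePayload(std::move(payload));
       enqueue_c2_server_response(std::move(response));
     } catch (const std::exception& e) {
       logger_->log_error("Exception occurred while consuming payload. error: %s", e.what());
     } catch (...) {
       logger_->log_error("Unknown exception occurred while consuming payload.");
     }
   }
   ```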

##
File path: libminifi/src/c2/C2Agent.cpp
##
@@ -75,54 +78,55 @@ C2Agent::C2Agent(const std::shared_ptr lock(request_mutex, std::adopt_lock);
-    if (!requests.empty()) {
-      int count = 0;
-      do {
-        const C2Payload payload(std::move(requests.back()));
-        requests.pop_back();
-        try {
-          C2Payload && response = protocol_.load()->consumePayload(payload);
-          enqueue_c2_server_response(std::move(response));
-        }
-        catch(const std::exception &e) {
-          logger_->log_error("Exception occurred while consuming payload. error: %s", e.what());
-        }
-        catch(...) {
-          logger_->log_error("Unknonwn exception occurred while consuming payload.");
-        }
-      }while(!requests.empty() && ++count < max_c2_responses);
+    if (protocol_.load() != nullptr) {
+      std::vector<C2Payload> payload_batch;
+      payload_batch.reserve(max_c2_responses);
+      auto getRequestPayload = [&payload_batch] (C2Payload&& payload) { payload_batch.emplace_back(std::move(payload)); };
+      const std::chrono::system_clock::time_point timeout_point = std::chrono::system_clock::now() + std::chrono::milliseconds(1);
+      for (std::size_t attempt_num = 0; attempt_num < max_c2_responses; ++attempt_num) {
+        if (!requests.consumeWaitUntil(getRequestPayload, timeout_point)) {
+          break;
        }
      }
-      try {
-        performHeartBeat();
-      }
-      catch(const std::exception &e) {
-        logger_->log_error("Exception occurred while performing heartbeat. error: %s", e.what());
-      }
-      catch(...) {
-        logger_->log_error("Unknonwn exception occurred while performing heartbeat.");
-      }
+      std::for_each(
+        std::make_move_iterator(payload_batch.begin()),
+        std::make_move_iterator(payload_batch.end()),
+        [&] (C2Payload&& payload) {
+          try {
+            C2Payload && response = protocol_.load()->consumePayload(std::move(payload));
+            enqueue_c2_server_response(std::move(response));
+          }
+          catch(const std::exception &e) {
+            logger_->log_error("Exception occurred while consuming payload. error: %s", e.what());
+          }
+          catch(...) {
+            logger_->log_error("Unknonwn exception occurred while consuming payload.");
+          }
+        });
 
-      checkTriggers();
+      try {
+        performHeartBeat();
+      }
+      catch (const std::exception &e) {
+  

[GitHub] [nifi-minifi-cpp] adamdebreceni opened a new pull request #788: MINIFICPP-1229 - Fix and enable CompressContentTests

2020-05-20 Thread GitBox


adamdebreceni opened a new pull request #788:
URL: https://github.com/apache/nifi-minifi-cpp/pull/788


   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFIREG-392) JAVA 11 issue javax/xml/bind/JAXBException

2020-05-20 Thread GERandroAPACHE (Jira)


[ 
https://issues.apache.org/jira/browse/NIFIREG-392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112033#comment-17112033
 ] 

GERandroAPACHE commented on NIFIREG-392:


OS layer is MSFT with JAVA JDK 11.0.7 and nifi-registry 0.6

> JAVA 11 issue javax/xml/bind/JAXBException
> --
>
> Key: NIFIREG-392
> URL: https://issues.apache.org/jira/browse/NIFIREG-392
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.6.0
>Reporter: GERandroAPACHE
>Priority: Major
>  Labels: nifi-registry
>
> Hi NiFi Registry team
>  
> I tried to upgrade nifi-registry from 0.5 to 0.6 with the only objective to 
> apply JAVA 11 to the stack.
> However, the nifi registry does not start up with JAVA jdk 11.0.7. If I 
> switch back to java 8 everything is working as expected.
> As per below the stack trace for JDK 11.0.7 pointing to the issues for your 
> reference.
> It might be related to that
> {code:java}
> Caused by: java.lang.NoClassDefFoundError: javax/xml/bind/JAXBException
> {code}
> More in the comment below
>  
> ANDRO



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7472) ListS3 should support for page-size, max-items, starting-token

2020-05-20 Thread Morten Buhl (Jira)
Morten Buhl created NIFI-7472:
-

 Summary: ListS3 should support for page-size, max-items, 
starting-token
 Key: NIFI-7472
 URL: https://issues.apache.org/jira/browse/NIFI-7472
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Morten Buhl


S3 supports server-side pagination:

[https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-pagination.html]

You can supply how many items you want returned in each batch, how many in 
total, and a starting token.

Reading the documentation, it doesn't seem like ListS3 supports at least 
page-size and max-items. I am a little unsure whether starting-token is 
supported by the internal state of ListS3.
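
A sketch of the pagination controls described in the linked AWS CLI 
documentation (bucket name illustrative):
{code}
aws s3api list-objects-v2 --bucket my-bucket \
    --page-size 100 \
    --max-items 1000 \
    --starting-token <token-from-previous-response>
{code}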

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFIREG-392) JAVA 11 issue javax/xml/bind/JAXBException

2020-05-20 Thread GERandroAPACHE (Jira)


[ 
https://issues.apache.org/jira/browse/NIFIREG-392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112027#comment-17112027
 ] 

GERandroAPACHE commented on NIFIREG-392:


as per below the full stack trace

 
{code:java}
2020-05-19 15:45:12,710 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=909ms
2020-05-19 15:45:12,898 INFO [main] o.e.j.s.h.C._nifi_registry_api 2 Spring WebApplicationInitializers detected on classpath
2020-05-19 15:45:13,828 INFO [background-preinit] o.h.validator.internal.util.Version HV01: Hibernate Validator 6.0.18.Final
2020-05-19 15:45:14,453 INFO [main] o.a.n.r.NiFiRegistryApiApplication Starting NiFiRegistryApiApplication on VSRVDCF0042 with PID 10984 (D:\nifi-registry-0.6.0\work\jetty\nifi-registry-web-api-0.6.0.war\webapp\WEB-INF\classes started by P01921 in D:\nifi-registry-0.6.0)
2020-05-19 15:45:14,453 INFO [main] o.a.n.r.NiFiRegistryApiApplication No active profile set, falling back to default profiles: default
2020-05-19 15:45:16,812 ERROR [main] o.springframework.boot.SpringApplication Application run failed
java.lang.IllegalStateException: Error processing condition on org.springframework.boot.autoconfigure.context.PropertyPlaceholderAutoConfiguration.propertySourcesPlaceholderConfigurer
 at 
org.springframework.boot.autoconfigure.condition.SpringBootCondition.matches(SpringBootCondition.java:60)
 at 
org.springframework.context.annotation.ConditionEvaluator.shouldSkip(ConditionEvaluator.java:108)
 at 
org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitionsForBeanMethod(ConfigurationClassBeanDefinitionReader.java:181)
 at 
org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitionsForConfigurationClass(ConfigurationClassBeanDefinitionReader.java:141)
 at 
org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitions(ConfigurationClassBeanDefinitionReader.java:117)
 at 
org.springframework.context.annotation.ConfigurationClassPostProcessor.processConfigBeanDefinitions(ConfigurationClassPostProcessor.java:327)
 at 
org.springframework.context.annotation.ConfigurationClassPostProcessor.postProcessBeanDefinitionRegistry(ConfigurationClassPostProcessor.java:232)
 at 
org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanDefinitionRegistryPostProcessors(PostProcessorRegistrationDelegate.java:275)
 at 
org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:95)
 at 
org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:705)
 at 
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:531)
 at 
org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)
 at 
org.springframework.boot.SpringApplication.refresh(SpringApplication.java:744) 
at 
org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:391)
 at org.springframework.boot.SpringApplication.run(SpringApplication.java:312) 
at 
org.springframework.boot.web.servlet.support.SpringBootServletInitializer.run(SpringBootServletInitializer.java:151)
 at 
org.springframework.boot.web.servlet.support.SpringBootServletInitializer.createRootApplicationContext(SpringBootServletInitializer.java:131)
 at 
org.springframework.boot.web.servlet.support.SpringBootServletInitializer.onStartup(SpringBootServletInitializer.java:91)
 at 
org.springframework.web.SpringServletContainerInitializer.onStartup(SpringServletContainerInitializer.java:172)
 at 
org.eclipse.jetty.plus.annotation.ContainerInitializer.callStartup(ContainerInitializer.java:140)
 at 
org.eclipse.jetty.annotations.ServletContainerInitializersStarter.doStart(ServletContainerInitializersStarter.java:64)
 at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
 at 
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:347)
 at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1497) 
at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1459) 
at 
org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:854)
 at 
org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:278)
 at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:545) at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
 at 
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:167)
 at 
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:119)
 at 

[jira] [Commented] (NIFI-7455) FetchSFTP failed to close SFTPClient

2020-05-20 Thread Kate Miller (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112025#comment-17112025
 ] 

Kate Miller commented on NIFI-7455:
---

I noticed that the problem appears in 3 situations:
 * when there are no files to fetch for several hours
 * while processing a flowfile, when NiFi tries to close the connection from 
the previous flowfile
 * when I stop the processor

> FetchSFTP failed to close SFTPClient
> 
>
> Key: NIFI-7455
> URL: https://issues.apache.org/jira/browse/NIFI-7455
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.11.4
>Reporter: Kate Miller
>Priority: Minor
>
> I use ListSftp-FetchSftp pattern and almost every time Fetch gives me warning:
> _"FetchSFTP[id=] Failed to close SFTPClient due to 
> net.schmizz.sshj.connection.ConnectionException: Timeout expired: "_
> This problem occurs after the file has been fetched but also when I stop the 
> processor. It occurs in many different flowfiles (in different servers, but 
> not all of them)
> I used to use the GetSftp before and it worked without any problems.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFIREG-392) JAVA 11 issue javax/xml/bind/JAXBException

2020-05-20 Thread GERandroAPACHE (Jira)
GERandroAPACHE created NIFIREG-392:
--

 Summary: JAVA 11 issue javax/xml/bind/JAXBException
 Key: NIFIREG-392
 URL: https://issues.apache.org/jira/browse/NIFIREG-392
 Project: NiFi Registry
  Issue Type: Bug
Affects Versions: 0.6.0
Reporter: GERandroAPACHE


Hi NiFi Registry team

 

I tried to upgrade nifi-registry from 0.5 to 0.6 with the only objective to 
apply JAVA 11 to the stack.

However, the nifi registry does not start up with JAVA jdk 11.0.7. If I switch 
back to java 8 everything is working as expected.

As per below the stack trace for JDK 11.0.7 pointing to the issues for your 
reference.

It might be related to that
{code:java}
Caused by: java.lang.NoClassDefFoundError: javax/xml/bind/JAXBException
{code}
More in the comment below

 

ANDRO



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7455) FetchSFTP failed to close SFTPClient

2020-05-20 Thread Kate Miller (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kate Miller updated NIFI-7455:
--
Priority: Minor  (was: Major)

> FetchSFTP failed to close SFTPClient
> 
>
> Key: NIFI-7455
> URL: https://issues.apache.org/jira/browse/NIFI-7455
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.11.4
>Reporter: Kate Miller
>Priority: Minor
>
> I use ListSftp-FetchSftp pattern and almost every time Fetch gives me warning:
> _"FetchSFTP[id=] Failed to close SFTPClient due to 
> net.schmizz.sshj.connection.ConnectionException: Timeout expired: "_
> This problem occurs after the file has been fetched but also when I stop the 
> processor. It occurs in many different flowfiles (in different servers, but 
> not all of them)
> I used to use the GetSftp before and it worked without any problems.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-7296) BST TimeZone parsing fails, breaking webgui and API

2020-05-20 Thread Jira


 [ 
https://issues.apache.org/jira/browse/NIFI-7296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamás Bunth reassigned NIFI-7296:
-

Assignee: Tamás Bunth

> BST TimeZone parsing fails, breaking webgui and API
> ---
>
> Key: NIFI-7296
> URL: https://issues.apache.org/jira/browse/NIFI-7296
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.11.3
> Environment: Nifi 1.11.3 running on 
> jre-11-openjdk-11.0.4.11-1.el7_7.x86_64 and RHEL 8 
> cluster of 6 servers
>Reporter: Michael Percival
>Assignee: Tamás Bunth
>Priority: Blocker
>
> Since clocks have changed in the UK and we have moved to BST, API calls and 
> browsing to the web GUI fail with an 'An unexpected error has occurred. 
> Please check the logs for additional details.' error. Reviewing the 
> nifi-user.log shows the below when attempting to access the web GUI; it 
> appears the timezone is not being parsed properly by the web server, see below:
> Caused by: java.time.format.DateTimeParseException: Text '12:23:17 BST' could 
> not be parsed: null
>  at 
> java.base/java.time.format.DateTimeFormatter.createError(DateTimeFormatter.java:2017)
>  at 
> java.base/java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1952)
>  at java.base/java.time.LocalDateTime.parse(LocalDateTime.java:492)
>  at 
> org.apache.nifi.web.api.dto.util.TimeAdapter.unmarshal(TimeAdapter.java:55)
>  at 
> org.apache.nifi.web.api.dto.util.TimeAdapter.unmarshal(TimeAdapter.java:33)
>  at 
> com.fasterxml.jackson.module.jaxb.AdapterConverter.convert(AdapterConverter.java:35)
>  at 
> com.fasterxml.jackson.databind.deser.std.StdDelegatingDeserializer.convertValue(StdDelegatingDeserializ$
>  at 
> com.fasterxml.jackson.databind.deser.std.StdDelegatingDeserializer.deserialize(StdDelegatingDeserialize$
>  at 
> com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
>  at 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288)
>  ... 122 common frames omitted
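
For context, the pattern letter 'z' in java.time's DateTimeFormatter resolves 
zone names through the JDK's locale data, which since Java 9 defaults to CLDR; 
ambiguous three-letter abbreviations such as "BST" are not guaranteed to 
resolve, so whether the parse succeeds depends on the runtime and locale. A 
minimal probe (hypothetical, not NiFi's actual TimeAdapter code) that can 
reproduce a DateTimeParseException like the one above:
{code:java}
import java.time.format.DateTimeFormatter;
import java.time.temporal.TemporalAccessor;
import java.util.Locale;

public class BstParseProbe {
    public static void main(String[] args) {
        // Same "time-of-day plus short zone name" shape as the failing text.
        DateTimeFormatter fmt =
                DateTimeFormatter.ofPattern("HH:mm:ss z", Locale.US);
        // Depending on the JDK's locale data, "BST" may not be a known
        // short zone name here, in which case this line throws
        // java.time.format.DateTimeParseException.
        TemporalAccessor parsed = fmt.parse("12:23:17 BST");
        System.out.println(parsed);
    }
}
{code}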



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-6112) Add some useful commands to NiFi Toolkit for automating NiFi cluster construction.

2020-05-20 Thread Mayki (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111849#comment-17111849
 ] 

Mayki commented on NIFI-6112:
-

Hi [~yoshiata]

I think there are some errors in the examples above concerning the command.

There are also some errors in the command help.

For example, for nifi create-service we need to use --properties and NOT 
--baseUrl.

Perhaps you could remove the baseUrl argument from the help output; it could be 
confusing for users.
{code:java}
]#  bin/cli.sh nifi pg-create-service --help

Creates the controller service for the given process group from the local 
file.

usage: pg-create-service
 -h,--help   Help
 -i,--input A local file to read as input contents, or a
 public URL to fetch
 -kp,--keyPasswdThe key password of the keystore being used
 -ks,--keystore A keystore to use for TLS/SSL connections
 -ksp,--keystorePasswd  The password of the keystore being used
 -kst,--keystoreTypeThe type of key store being used (JKS or
 PKCS12)
 -ot,--outputType   The type of output to produce (json or simple)
 -p,--propertiesA properties file to load arguments from,
 command line values will override anything in
 the properties file, must contain full path to
 file
 -pe,--proxiedEntityThe identity of an entity to proxy
 -pgid,--processGroupId The id of a process group
 -ts,--truststore   A truststore to use for TLS/SSL connections
 -tsp,--truststorePasswdThe password of the truststore being used
 -tst,--truststoreType  The type of trust store being used (JKS or
 PKCS12)
 -u,--baseUrl   The URL to execute the command against
 -verbose,--verbose  Indicates that verbose output should be
 provided

{code}
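
For anyone hitting the same confusion, the -p/--properties option points at a 
plain Java properties file whose keys mirror the long option names. A minimal 
sketch (key names assumed from the toolkit documentation; the path, host, and 
passwords are placeholders):
{code}
# /tmp/nifi-cli.properties -- hypothetical example
baseUrl=https://nifi-host:8443
keystore=/path/to/keystore.jks
keystoreType=JKS
keystorePasswd=changeme
keyPasswd=changeme
truststore=/path/to/truststore.jks
truststoreType=JKS
truststorePasswd=changeme
{code}
It would then be used like:
{code}
% ./cli.sh nifi pg-create-service -p /tmp/nifi-cli.properties -pgid <process-group-id> -i ./service.json
{code}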
 

 

> Add some useful commands to NiFi Toolkit for automating NiFi cluster 
> construction.
> --
>
> Key: NIFI-6112
> URL: https://issues.apache.org/jira/browse/NIFI-6112
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Tools and Build
>Affects Versions: 1.9.0
>Reporter: Yoshiaki Takahashi
>Assignee: Yoshiaki Takahashi
>Priority: Major
>  Labels: SDLC
> Fix For: 1.10.0
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Add toolkit commands useful for automating NiFi cluster construction.
> h2. Commands for "User"
>  * nifi create-user
>  ** Add a user
> {code}
> % ./cli.sh nifi create-user --userName user_1
> 6ba52e46-0169-1000--49c07246
> % ./cli.sh nifi create-user --userName user_2
> 6ba54b58-0169-1000--69e8abcc
> % ./cli.sh nifi create-user --userName user_3  
> 6bd400c3-0169-1000--c605c27f
> {code}
> * nifi list-users
> ** Retrieve the user list
> {code}
> % ./cli.sh nifi list-users
> #   Name   Member of   
> -      -   
> 1   user_1 
> 2   user_2 
> 3   user_3 
> {code}
> h2. Commands for "User group"
> * nifi create-user-group
> ** Add a user group
> {code}
> % ./cli.sh nifi create-user-group --userGroupName admin --userList 
> 6ba52e46-0169-1000--49c07246
> 6bd06533-0169-1000--ac5cfbcf
> % ./cli.sh nifi create-user-group --userGroupName users
> 6bd1b4b2-0169-1000--dd0ff820
> {code}
> * nifi list-user-groups
> ** Retrieve the user group list
> {code}
> % ./cli.sh nifi list-user-groups   
> #   Name Members
> -   --      
> 1   adminuser_1 
> 2   users   
> {code}
> * nifi update-user-group
> ** Update users belonging the user group
> {code}
> % ./cli.sh nifi update-user-group --userGroupId 
> 6bd1b4b2-0169-1000--dd0ff820 --userList 
> 6ba54b58-0169-1000--69e8abcc,6bd400c3-0169-1000--c605c27f
> % ./cli.sh nifi list-user-groups
> #   Name Members
> -   --      
> 1   adminuser_1 
> 2   usersuser_2, user_3 
> {code}
> h2. Commands for "Access policy"
> * nifi get-policy
> ** Retrieve the access policy
> {code}
> % 

[jira] [Commented] (NIFI-6112) Add some useful commands to NiFi Toolkit for automating NiFi cluster construction.

2020-05-20 Thread Yoshiaki Takahashi (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111816#comment-17111816
 ] 

Yoshiaki Takahashi commented on NIFI-6112:
--

Hi [~Wogno]

I can't tell the cause from that information alone.
Does any command other than 'create-service' also fail with 404?
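
If it helps narrow things down: a 404 from every command usually points at the 
base URL rather than at any one command. A quick way to check the URL 
independently of the toolkit (a sketch assuming an unsecured instance on the 
default port; adjust host and port to your setup) is:
{code}
% curl -v http://localhost:8080/nifi-api/flow/about
{code}
If the API is reachable at that address, this returns the NiFi version info as 
JSON.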

> Add some useful commands to NiFi Toolkit for automating NiFi cluster 
> construction.
> --
>
> Key: NIFI-6112
> URL: https://issues.apache.org/jira/browse/NIFI-6112
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Tools and Build
>Affects Versions: 1.9.0
>Reporter: Yoshiaki Takahashi
>Assignee: Yoshiaki Takahashi
>Priority: Major
>  Labels: SDLC
> Fix For: 1.10.0
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Add toolkit commands useful for automating NiFi cluster construction.
> h2. Commands for "User"
>  * nifi create-user
>  ** Add a user
> {code}
> % ./cli.sh nifi create-user --userName user_1
> 6ba52e46-0169-1000--49c07246
> % ./cli.sh nifi create-user --userName user_2
> 6ba54b58-0169-1000--69e8abcc
> % ./cli.sh nifi create-user --userName user_3  
> 6bd400c3-0169-1000--c605c27f
> {code}
> * nifi list-users
> ** Retrieve the user list
> {code}
> % ./cli.sh nifi list-users
> #   Name   Member of   
> -      -   
> 1   user_1 
> 2   user_2 
> 3   user_3 
> {code}
> h2. Commands for "User group"
> * nifi create-user-group
> ** Add a user group
> {code}
> % ./cli.sh nifi create-user-group --userGroupName admin --userList 
> 6ba52e46-0169-1000--49c07246
> 6bd06533-0169-1000--ac5cfbcf
> % ./cli.sh nifi create-user-group --userGroupName users
> 6bd1b4b2-0169-1000--dd0ff820
> {code}
> * nifi list-user-groups
> ** Retrieve the user group list
> {code}
> % ./cli.sh nifi list-user-groups   
> #   Name Members
> -   --      
> 1   adminuser_1 
> 2   users   
> {code}
> * nifi update-user-group
> ** Update users belonging the user group
> {code}
> % ./cli.sh nifi update-user-group --userGroupId 
> 6bd1b4b2-0169-1000--dd0ff820 --userList 
> 6ba54b58-0169-1000--69e8abcc,6bd400c3-0169-1000--c605c27f
> % ./cli.sh nifi list-user-groups
> #   Name Members
> -   --      
> 1   adminuser_1 
> 2   usersuser_2, user_3 
> {code}
> h2. Commands for "Access policy"
> * nifi get-policy
> ** Retrieve the access policy
> {code}
> % ./cli.sh nifi get-policy --accessPolicyResource /tenants 
> --accessPolicyAction write
> Resource: /tenants
> Action  : write
> Users   :
> Groups  : admin
> {code}
> * nifi update-policy
> ** Update users authorized for the resource
> {code}
> % ./cli.sh nifi update-policy --accessPolicyResource /tenants 
> --accessPolicyAction write --userList 
> 6ba52e46-0169-1000--49c07246,6ba54b58-0169-1000--69e8abcc 
> --groupList 
> 6bd06533-0169-1000--ac5cfbcf,6bd1b4b2-0169-1000--dd0ff820
> User "user_2" (id 6ba54b58-0169-1000--69e8abcc) added
> User "user_1" (id 6ba52e46-0169-1000--49c07246) added
> User group id 6bd06533-0169-1000--ac5cfbcf already included
> User group "users" (id 6bd1b4b2-0169-1000--dd0ff820) added
> Access policy was updated
> id: 15e4e0bd-cb28-34fd-8587-f8d15162cba5
> % ./cli.sh nifi get-policy --accessPolicyResource /tenants 
> --accessPolicyAction write
> Resource: /tenants
> Action  : write
> Users   : user_2, user_1
> Groups  : admin, users
> {code}
> h2. Commands for "Controller service"
> * nifi create-service
> ** Create a controller service
> {code}
> % cat ./ssl_service.json
> {
>   "component": {
> "name": "Sample SSL context service",
> "type": "org.apache.nifi.ssl.StandardRestrictedSSLContextService",
> "properties": {
>   "Keystore Filename": "/tmp/sample_keystore.jks",
>   "Keystore Password": "xx",
>   "key-password": "xx",
>   "Keystore Type": "JKS",
>   "Truststore Filename": "/tmp/sample_truststore.jks",
>   "Truststore Password": "xx",
>   "Truststore Type": "JKS",
>   "SSL Protocol": "TLS"
> }
>   }
> }
> % ./cli.sh nifi create-service --input