[jira] [Updated] (NIFI-7707) [Processor/GCP - PubSub] Avro event re-format when pushed to PubSub

2020-08-05 Thread Guitton Alexandre (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guitton Alexandre updated NIFI-7707:

Description: 
Hi, 

As discussed on Slack: 
[https://apachenifi.slack.com/archives/C0L9VCD47/p1596463216211700]

When we generate an Avro event from a record and publish it using the 
_*PublishGCPubSub*_ processor, the content type of the file changes from 
_*avro/binary*_ to *_UTF-8_* due to the following code: 
[https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/pubsub/PublishGCPubSub.java#L157]

This makes the event unreadable once consumed :/ 

The workaround we found is base64-encoding the Avro message, publishing it to 
PubSub, consuming it, and finally decoding it. 

We expect this encoding to be handled by the processors.
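As a reference for the workaround described above, here is a minimal sketch (plain Python, independent of NiFi) of why base64 survives a text-only channel while raw Avro bytes may not; the actual publish/consume steps are elided:

```python
import base64

def encode_for_text_channel(avro_bytes: bytes) -> str:
    # Base64 turns arbitrary binary data into ASCII text, so a UTF-8
    # re-encoding of the message body cannot corrupt the payload.
    return base64.b64encode(avro_bytes).decode("ascii")

def decode_from_text_channel(message_body: str) -> bytes:
    # Reverse the encoding after consuming the message.
    return base64.b64decode(message_body)

# Round trip: every possible byte value survives unchanged.
payload = bytes(range(256))
assert decode_from_text_channel(encode_for_text_channel(payload)) == payload
```

The fix proposed in the ticket would make this wrapping unnecessary by having the processor pass the FlowFile bytes through untouched.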

 

  was:
Hi, 

As discussed on Slack: 
[https://apachenifi.slack.com/archives/C0L9VCD47/p1596463216211700]

When we generate an Avro event from a record and publish it using the 
_*PublishGCPubSub*_ processor, the content type of the file changes from 
_*avro/binary*_ to *_UTF-8_* due to the following code: 
[https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/pubsub/PublishGCPubSub.java#L157]

This makes the event unreadable once consumed :/ 

The workaround we found is base64-encoding the Avro message, publishing it to 
PubSub, consuming it, and finally decoding it. 

We expect this encoding to be handled by the processors.

 


> [Processor/GCP - PubSub] Avro event re-format when pushed to PubSub
> ---
>
> Key: NIFI-7707
> URL: https://issues.apache.org/jira/browse/NIFI-7707
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.11.4
>Reporter: Guitton Alexandre
>Priority: Major
>  Labels: Avro, GCP
>
> Hi, 
> As discussed on Slack: 
> [https://apachenifi.slack.com/archives/C0L9VCD47/p1596463216211700]
> When we generate an Avro event from a record and publish it using the 
> _*PublishGCPubSub*_ processor, the content type of the file changes from 
> _*avro/binary*_ to *_UTF-8_* due to the following code: 
> [https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/pubsub/PublishGCPubSub.java#L157]
> This makes the event unreadable once consumed :/ 
> The workaround we found is base64-encoding the Avro message, publishing it to 
> PubSub, consuming it, and finally decoding it. 
> We expect this encoding to be handled by the processors.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7707) [Processor/GCP - PubSub] Avro event re-format when pushed to PubSub

2020-08-05 Thread Guitton Alexandre (Jira)
Guitton Alexandre created NIFI-7707:
---

 Summary: [Processor/GCP - PubSub] Avro event re-format when pushed 
to PubSub
 Key: NIFI-7707
 URL: https://issues.apache.org/jira/browse/NIFI-7707
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.11.4
Reporter: Guitton Alexandre


Hi, 

As discussed on Slack: 
[https://apachenifi.slack.com/archives/C0L9VCD47/p1596463216211700]

When we generate an Avro event from a record and publish it using the 
_*PublishGCPubSub*_ processor, the content type of the file changes from 
_*avro/binary*_ to *_UTF-8_* due to the following code: 
[https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/pubsub/PublishGCPubSub.java#L157]

This makes the event unreadable once consumed :/ 

The workaround we found is base64-encoding the Avro message, publishing it to 
PubSub, consuming it, and finally decoding it. 

We expect this encoding to be handled by the processors.

 





[jira] [Created] (MINIFICPP-1320) Clean up connectionMap usages

2020-08-05 Thread Adam Hunyadi (Jira)
Adam Hunyadi created MINIFICPP-1320:
---

 Summary: Clean up connectionMap usages
 Key: MINIFICPP-1320
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1320
 Project: Apache NiFi MiNiFi C++
  Issue Type: Task
Affects Versions: 0.7.0
Reporter: Adam Hunyadi
Assignee: Adam Hunyadi
 Attachments: Screenshot 2020-08-05 at 11.13.50.png

*Background:*

Currently we propagate connections (e.g. for serialization) via two methods:
 # Using the obsolete {{connectionMaps}}
 # Using the {{Repository::containers}} member field

Repository.h even has these declared next to each other:

!Screenshot 2020-08-05 at 11.13.50.png|width=447,height=54!

*Proposal:*

Since map 2 should be included in map 1, we should investigate whether we can 
combine these two maps in a sensible way.
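A sketch of what combining the two maps could look like, using plain Python dicts as stand-ins for the C++ members (the names and the conflict rule are assumptions, not the MiNiFi code):

```python
def merge_connection_maps(connection_map: dict, containers: dict) -> dict:
    # If containers is really a subset of connection_map, merging the two
    # must never change an existing entry; flag any disagreement instead
    # of silently overwriting.
    merged = dict(connection_map)
    for key, value in containers.items():
        if key in merged and merged[key] != value:
            raise ValueError(f"conflicting entries for key {key!r}")
        merged[key] = value
    return merged

connections = {"c1": "Connection(c1)", "c2": "Connection(c2)"}
containers = {"c2": "Connection(c2)"}
assert merge_connection_maps(connections, containers) == connections
```

The raise-on-conflict check is the interesting part: it would surface any place where the two maps have silently diverged.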





[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #861: MINIFICPP-1312 - Fix flaky FlowControllerTest

2020-08-05 Thread GitBox


hunyadi-dev commented on a change in pull request #861:
URL: https://github.com/apache/nifi-minifi-cpp/pull/861#discussion_r465508813



##
File path: libminifi/test/flow-tests/FlowControllerTests.cpp
##
@@ -133,12 +144,9 @@ TEST_CASE("Flow shutdown waits for a while", "[TestFlow2]") {
   testController.startFlow();
 
   // wait for the source processor to enqueue its flowFiles
-  int tryCount = 0;
-  while (tryCount++ < 10 && root->getTotalFlowFileCount() != 3) {
-    std::this_thread::sleep_for(std::chrono::milliseconds{20});
-  }
+  busy_wait(std::chrono::milliseconds{50}, [&] { return root->getTotalFlowFileCount() != 0; });

Review comment:
   I cannot see where the link points, but -1 from me for the additional 
argument, as deciding whether we need to sleep should be a compile-time operation.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (MINIFICPP-1319) Refactor streams

2020-08-05 Thread Adam Debreceni (Jira)
Adam Debreceni created MINIFICPP-1319:
-

 Summary: Refactor streams
 Key: MINIFICPP-1319
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1319
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Adam Debreceni
Assignee: Adam Debreceni


Currently our stream implementations are quite chaotic, with a lot of code 
duplication and possibly obsolete features (e.g. composable_stream_). We 
should revisit these and refactor where needed.





[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #861: MINIFICPP-1312 - Fix flaky FlowControllerTest

2020-08-05 Thread GitBox


adamdebreceni commented on a change in pull request #861:
URL: https://github.com/apache/nifi-minifi-cpp/pull/861#discussion_r465518803



##
File path: libminifi/test/flow-tests/FlowControllerTests.cpp
##
@@ -133,12 +144,9 @@ TEST_CASE("Flow shutdown waits for a while", "[TestFlow2]") {
   testController.startFlow();
 
   // wait for the source processor to enqueue its flowFiles
-  int tryCount = 0;
-  while (tryCount++ < 10 && root->getTotalFlowFileCount() != 3) {
-    std::this_thread::sleep_for(std::chrono::milliseconds{20});
-  }
+  busy_wait(std::chrono::milliseconds{50}, [&] { return root->getTotalFlowFileCount() != 0; });

Review comment:
   @hunyadi-dev I don't see why it should be compile-time decidable whether we 
need to sleep, but alright, I will not add an extra parameter. @fgerlits, could 
you expand on your proposal?









[GitHub] [nifi-minifi-cpp] adamdebreceni commented on pull request #862: MINIFICPP-1318 migrate PersistenceTests to /var/tmp

2020-08-05 Thread GitBox


adamdebreceni commented on pull request #862:
URL: https://github.com/apache/nifi-minifi-cpp/pull/862#issuecomment-669040469


   should we think about adding an agent to the CI where `/tmp` is mounted as 
tmpfs?







[jira] [Updated] (NIFI-7707) [Processor/GCP - PubSub] Avro event re-format when pushed to PubSub

2020-08-05 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-7707:
-
Component/s: (was: Core Framework)
 Extensions

> [Processor/GCP - PubSub] Avro event re-format when pushed to PubSub
> ---
>
> Key: NIFI-7707
> URL: https://issues.apache.org/jira/browse/NIFI-7707
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: Guitton Alexandre
>Priority: Major
>  Labels: Avro, GCP
>
> Hi, 
> As discussed on Slack: 
> [https://apachenifi.slack.com/archives/C0L9VCD47/p1596463216211700]
> When we generate an Avro event from a record and publish it using the 
> _*PublishGCPubSub*_ processor, the content type of the file changes from 
> _*avro/binary*_ to *_UTF-8_* due to the following code: 
> [https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/pubsub/PublishGCPubSub.java#L157]
> This makes the event unreadable once consumed :/ 
> The workaround we found is base64-encoding the Avro message, publishing it to 
> PubSub, consuming it, and finally decoding it. 
> We expect this encoding to be handled by the processors.
>  





[jira] [Resolved] (MINIFICPP-1281) Decrease the time spent sleeping in MiNiFi unit/integration tests

2020-08-05 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi resolved MINIFICPP-1281.
-
Resolution: Fixed

> Decrease the time spent sleeping in MiNiFi unit/integration tests
> -
>
> Key: MINIFICPP-1281
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1281
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Affects Versions: 0.7.0
>Reporter: Adam Hunyadi
>Assignee: Adam Hunyadi
>Priority: Minor
>  Labels: MiNiFi-CPP-Hygiene
> Fix For: 0.8.0
>
>  Time Spent: 10h 20m
>  Remaining Estimate: 0h
>
> *Background:*
> Some tests wait for many seconds in order to guarantee that all the parallel 
> processing has happened by the time the test makes its assertions. Using sleep 
> for synchronization is a bad pattern that makes tests take unnecessarily long 
> to execute.
> *Proposal:*
> An alternative to setting a fixed run time for the tests is polling for the 
> events we expect to happen during the test, allowing an earlier finish in most 
> cases. We should make an effort to replace unnecessarily long sleeps where 
> possible.





[jira] [Created] (NIFI-7712) Remove cache key from cache server

2020-08-05 Thread Chandraprakash kabra (Jira)
Chandraprakash kabra created NIFI-7712:
--

 Summary: Remove cache key from cache server
 Key: NIFI-7712
 URL: https://issues.apache.org/jira/browse/NIFI-7712
 Project: Apache NiFi
  Issue Type: New Feature
Reporter: Chandraprakash kabra


Hi Team,

I am using the distributed map cache, and my requirement is that I would like 
to remove a cache key from the cache server after fetching it with 
FetchDistributedMapCache.

Is there any built-in processor or functionality available, or do I have to 
write a script for that? I have seen some issues where a script is suggested, 
but I am in favour of a built-in processor where we can just pass the key and 
remove it.

Every 24 hours a new cache entry comes in while the old one remains, so I would 
like to remove the old one.
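The requested behaviour is essentially an atomic fetch-and-remove. A toy stand-in for a map cache server shows the semantics (this is illustrative only, not NiFi's DistributedMapCache API):

```python
class ToyMapCache:
    """Toy stand-in for a distributed map cache server."""

    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def fetch_and_remove(self, key):
        # Fetch the value and drop the entry in one step, so stale
        # entries do not accumulate across daily refreshes.
        return self._store.pop(key, None)

cache = ToyMapCache()
cache.put("2020-08-05", b"payload")
assert cache.fetch_and_remove("2020-08-05") == b"payload"
assert cache.fetch_and_remove("2020-08-05") is None
```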





[GitHub] [nifi] bbende commented on a change in pull request #4450: NIFI-7681 - Add update-bucket-policy command, add option to specify timeout and fix documentation to include previously implemented

2020-08-05 Thread GitBox


bbende commented on a change in pull request #4450:
URL: https://github.com/apache/nifi/pull/4450#discussion_r465727136



##
File path: nifi-docs/src/main/asciidoc/toolkit-guide.adoc
##
@@ -94,10 +94,42 @@ The following are available commands:
  nifi pg-get-services
  nifi pg-enable-services
  nifi pg-disable-services
+ nifi pg-create-service
+ nifi create-user
+ nifi list-users
+ nifi create-user-group
+ nifi list-user-groups
+ nifi update-user-group
+ nifi get-policy
+ nifi update-policy
+ nifi create-service
+ nifi get-services
+ nifi get-service
+ nifi disable-services
+ nifi enable-services
+ nifi get-reporting-task
+ nifi get-reporting-tasks
+ nifi create-reporting-task
+ nifi set-param
+ nifi delete-param
+ nifi list-param-contexts
+ nifi get-param-context
+ nifi create-param-context
+ nifi delete-param-context
+ nifi merge-param-context
+ nifi import-param-context
+ nifi pg-get-param-context
+ nifi pg-set-param-context
+ nifi list-templates
+ nifi download-template
+ nifi upload-template
+ nifi start-reporting-tasks
+ nifi stop-reporting-tasks
  registry current-user
  registry list-buckets
  registry create-bucket
  registry delete-bucket
+ registry update-bucket-policy

Review comment:
   Since we are updating this, there are other registry commands for 
users/groups/policies that were added but never made it into this list. Can we 
add those? 

##
File path: 
nifi-toolkit/nifi-toolkit-cli/src/main/java/org/apache/nifi/toolkit/cli/impl/command/CommandOption.java
##
@@ -24,6 +24,8 @@
 public enum CommandOption {
 
 // General
CONNECTION_TIMEOUT("cto", "connectionTimeout", "Timeout parameter for creating a connection to NiFi/Registry", true),
READ_TIMEOUT("rto", "readTimeout", "Timeout parameter for reading from NiFi/Registry", true),

Review comment:
   Can we add to the description of these to say that the value is in 
milliseconds? 

##
File path: 
nifi-toolkit/nifi-toolkit-cli/src/main/java/org/apache/nifi/toolkit/cli/impl/command/registry/bucket/UpdateBucketPolicy.java
##
@@ -0,0 +1,129 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.toolkit.cli.impl.command.registry.bucket;
+
+import org.apache.commons.cli.ParseException;
+import org.apache.nifi.registry.authorization.AccessPolicy;
+import org.apache.nifi.registry.authorization.Tenant;
+import org.apache.nifi.registry.bucket.Bucket;
+import org.apache.nifi.registry.client.NiFiRegistryClient;
+import org.apache.nifi.registry.client.NiFiRegistryException;
+import org.apache.nifi.toolkit.cli.api.Context;
+import org.apache.nifi.toolkit.cli.impl.client.ExtendedNiFiRegistryClient;
+import org.apache.nifi.toolkit.cli.impl.client.registry.PoliciesClient;
+import org.apache.nifi.toolkit.cli.impl.command.CommandOption;
import org.apache.nifi.toolkit.cli.impl.command.registry.AbstractNiFiRegistryCommand;
+import org.apache.nifi.toolkit.cli.impl.command.registry.tenant.TenantHelper;
+import org.apache.nifi.toolkit.cli.impl.result.VoidResult;
+import org.apache.nifi.util.StringUtils;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.Optional;
+import java.util.Properties;
+import java.util.Set;
+
+public class UpdateBucketPolicy extends AbstractNiFiRegistryCommand<VoidResult> {
+
+
+public UpdateBucketPolicy() {
+super("update-bucket-policy", VoidResult.class);
+}
+
+@Override
+public String getDescription() {
+return "Updates access policy of bucket";
+}
+
+@Override
+public void doInitialize(final Context context) {
+addOption(CommandOption.BUCKET_NAME.createOption());
+addOption(CommandOption.BUCKET_ID.createOption());
+addOption(CommandOption.USER_NAME_LIST.createOption());
+addOption(CommandOption.USER_ID_LIST.createOption());
+addOption(CommandOption.GROUP_NAME_LIST.createOption());
+addOption(CommandOption.GROUP_ID_LIST.createOption());
+addOption(CommandOption.POLICY_ACTION.createOption());
+}
+
+
+@Override
+public VoidResult doExecute(NiFiRegistryClient client, Properties properties) throws IOException, NiFiRegistryException, 

[jira] [Updated] (MINIFICPP-1318) Migrate PersistenceTests repositories to /var/tmp

2020-08-05 Thread Marton Szasz (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Szasz updated MINIFICPP-1318:

Status: Reopened  (was: Closed)

> Migrate PersistenceTests repositories to /var/tmp
> -
>
> Key: MINIFICPP-1318
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1318
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Marton Szasz
>Assignee: Marton Szasz
>Priority: Minor
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> See MINIFICPP-1188





[GitHub] [nifi-minifi-cpp] szaszm opened a new pull request #863: MINIFICPP-1318 move test rocksdb state to /var/tmp

2020-08-05 Thread GitBox


szaszm opened a new pull request #863:
URL: https://github.com/apache/nifi-minifi-cpp/pull/863


   Missed TailFileTests state storage in #862 
   __
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [x] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically main)?
   
   - [x] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [x] If applicable, have you updated the LICENSE file?
   - [x] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [x] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   







[jira] [Resolved] (MINIFICPP-1318) Migrate PersistenceTests repositories to /var/tmp

2020-08-05 Thread Marton Szasz (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Szasz resolved MINIFICPP-1318.
-
Resolution: Fixed

> Migrate PersistenceTests repositories to /var/tmp
> -
>
> Key: MINIFICPP-1318
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1318
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Marton Szasz
>Assignee: Marton Szasz
>Priority: Minor
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> See MINIFICPP-1188





[jira] [Closed] (MINIFICPP-1318) Migrate PersistenceTests repositories to /var/tmp

2020-08-05 Thread Marton Szasz (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Szasz closed MINIFICPP-1318.
---

> Migrate PersistenceTests repositories to /var/tmp
> -
>
> Key: MINIFICPP-1318
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1318
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Marton Szasz
>Assignee: Marton Szasz
>Priority: Minor
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> See MINIFICPP-1188





[jira] [Updated] (MINIFICPP-1318) Migrate PersistenceTests and TailFileTests repositories to /var/tmp

2020-08-05 Thread Marton Szasz (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Szasz updated MINIFICPP-1318:

Summary: Migrate PersistenceTests and TailFileTests repositories to 
/var/tmp  (was: Migrate PersistenceTests repositories to /var/tmp)

> Migrate PersistenceTests and TailFileTests repositories to /var/tmp
> ---
>
> Key: MINIFICPP-1318
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1318
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Marton Szasz
>Assignee: Marton Szasz
>Priority: Minor
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> See MINIFICPP-1188





[GitHub] [nifi] pvillard31 commented on pull request #4451: NIFI-7707 - fixed Publish GCP PubSub processor for binary data

2020-08-05 Thread GitBox


pvillard31 commented on pull request #4451:
URL: https://github.com/apache/nifi/pull/4451#issuecomment-669114481


   Confirmed the fix using this flow definition 
([Google_Pub_Sub.json.txt](https://github.com/apache/nifi/files/5027732/Google_Pub_Sub.json.txt)),
 for anyone willing to test it (even though the code change is straightforward)







[jira] [Updated] (NIFI-7708) Add Azure SDK for Java to LICENCE and NOTICE files

2020-08-05 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-7708:
-
Fix Version/s: 1.12.0

> Add Azure SDK for Java to LICENCE and NOTICE files
> --
>
> Key: NIFI-7708
> URL: https://issues.apache.org/jira/browse/NIFI-7708
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Peter Gyori
>Assignee: Peter Gyori
>Priority: Major
>  Labels: Azure, adls
> Fix For: 1.12.0
>
>
> License and notice files should be extended with information regarding the 
> usage of Azure SDK for Java and its transitive dependencies.





[jira] [Updated] (NIFI-7708) Add Azure SDK for Java to LICENCE and NOTICE files

2020-08-05 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-7708:
-
Issue Type: Task  (was: Bug)

> Add Azure SDK for Java to LICENCE and NOTICE files
> --
>
> Key: NIFI-7708
> URL: https://issues.apache.org/jira/browse/NIFI-7708
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Extensions
>Reporter: Peter Gyori
>Assignee: Peter Gyori
>Priority: Major
>  Labels: Azure, adls
> Fix For: 1.12.0
>
>
> License and notice files should be extended with information regarding the 
> usage of Azure SDK for Java and its transitive dependencies.





[jira] [Updated] (NIFI-7708) Add Azure SDK for Java to LICENCE and NOTICE files

2020-08-05 Thread Peter Gyori (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Gyori updated NIFI-7708:
--
Status: Patch Available  (was: In Progress)

Pull request created.

> Add Azure SDK for Java to LICENCE and NOTICE files
> --
>
> Key: NIFI-7708
> URL: https://issues.apache.org/jira/browse/NIFI-7708
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Extensions
>Reporter: Peter Gyori
>Assignee: Peter Gyori
>Priority: Major
>  Labels: Azure, adls
> Fix For: 1.12.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> License and notice files should be extended with information regarding the 
> usage of Azure SDK for Java and its transitive dependencies.





[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #861: MINIFICPP-1312 - Fix flaky FlowControllerTest

2020-08-05 Thread GitBox


fgerlits commented on a change in pull request #861:
URL: https://github.com/apache/nifi-minifi-cpp/pull/861#discussion_r465615443



##
File path: libminifi/test/flow-tests/FlowControllerTests.cpp
##
@@ -133,12 +144,9 @@ TEST_CASE("Flow shutdown waits for a while", "[TestFlow2]") {
   testController.startFlow();
 
   // wait for the source processor to enqueue its flowFiles
-  int tryCount = 0;
-  while (tryCount++ < 10 && root->getTotalFlowFileCount() != 3) {
-    std::this_thread::sleep_for(std::chrono::milliseconds{20});
-  }
+  busy_wait(std::chrono::milliseconds{50}, [&] { return root->getTotalFlowFileCount() != 0; });

Review comment:
   @adebreceni It wasn't really a proposal, but I would prefer two separate 
functions `verifyEventHappenedInPollTime()` and 
`verifyEventHappenedInPollTimeWithBusyWait()` (just an example, feel free to 
find a shorter name) rather than a single function where an argument value of 
-1 means "use the busy version".
   
   [TBH, I am not a fan of the `verifyEventHappenedInPollTime` name, as 
something like `REQUIRE(happensBeforeTimeout(...))` would read better, but 
renaming it is probably not in scope for this PR.]
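One way to read the preference for two separate functions — explicitly named helpers instead of a single function where a sentinel argument value of -1 selects busy-waiting — sketched in Python with hypothetical names:

```python
import time

def happened_in_poll_time(timeout_s, condition, poll_interval_s=0.02):
    # Sleeping variant: yields the CPU between checks.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval_s)
    return condition()

def happened_with_busy_wait(timeout_s, condition):
    # Busy-waiting variant: no sleep at all, for conditions expected to
    # flip within microseconds. The caller picks the behaviour by naming
    # the function, not by passing a magic argument value.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
    return condition()

assert happened_in_poll_time(0.5, lambda: True)
assert happened_with_busy_wait(0.01, lambda: True)
```

Two names make each call site self-documenting, at the cost of a little duplication between the loops.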









[jira] [Updated] (NIFI-7707) [Processor/GCP - PubSub] Avro event re-format when pushed to PubSub

2020-08-05 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-7707:
-
Fix Version/s: 1.12.0
   Status: Patch Available  (was: In Progress)

> [Processor/GCP - PubSub] Avro event re-format when pushed to PubSub
> ---
>
> Key: NIFI-7707
> URL: https://issues.apache.org/jira/browse/NIFI-7707
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: Guitton Alexandre
>Assignee: Pierre Villard
>Priority: Major
>  Labels: Avro, GCP, easyfix
> Fix For: 1.12.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hi, 
> As discussed on Slack: 
> [https://apachenifi.slack.com/archives/C0L9VCD47/p1596463216211700]
> When we generate an Avro event from a record and publish it using the 
> _*PublishGCPubSub*_ processor, the content type of the file changes from 
> _*avro/binary*_ to *_UTF-8_* due to the following code: 
> [https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/pubsub/PublishGCPubSub.java#L157]
> This makes the event unreadable once consumed :/ 
> The workaround we found is base64-encoding the Avro message, publishing it to 
> PubSub, consuming it, and finally decoding it. 
> We expect this encoding to be handled by the processors.
>  





[jira] [Updated] (NIFI-7707) [Processor/GCP - PubSub] Avro event re-format when pushed to PubSub

2020-08-05 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-7707:
-
Labels: Avro GCP easyfix  (was: Avro GCP)

> [Processor/GCP - PubSub] Avro event re-format when pushed to PubSub
> ---
>
> Key: NIFI-7707
> URL: https://issues.apache.org/jira/browse/NIFI-7707
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: Guitton Alexandre
>Assignee: Pierre Villard
>Priority: Major
>  Labels: Avro, GCP, easyfix
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hi, 
> As discussed on Slack: 
> [https://apachenifi.slack.com/archives/C0L9VCD47/p1596463216211700]
> When we generate an Avro event from a record and publish it using the 
> _*PublishGCPubSub*_ processor, the content type of the file changes from 
> _*avro/binary*_ to *_UTF-8_* due to the following code: 
> [https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/pubsub/PublishGCPubSub.java#L157]
> This makes the event unreadable once consumed :/ 
> The workaround we found is base64-encoding the Avro message, publishing it to 
> PubSub, consuming it, and finally decoding it. 
> We expect this encoding to be handled by the processors.
>  





[jira] [Created] (NIFI-7708) Add Azure SDK for Java to LICENCE and NOTICE files

2020-08-05 Thread Peter Gyori (Jira)
Peter Gyori created NIFI-7708:
-

 Summary: Add Azure SDK for Java to LICENCE and NOTICE files
 Key: NIFI-7708
 URL: https://issues.apache.org/jira/browse/NIFI-7708
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Peter Gyori
Assignee: Peter Gyori


License and notice files should be extended with information regarding the 
usage of Azure SDK for Java and its transitive dependencies.





[jira] [Created] (MINIFICPP-1321) MergeContent processor should support multiple attribute strategies

2020-08-05 Thread Arpad Boda (Jira)
Arpad Boda created MINIFICPP-1321:
-

 Summary: MergeContent processor should support multiple attribute 
strategies
 Key: MINIFICPP-1321
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1321
 Project: Apache NiFi MiNiFi C++
  Issue Type: New Feature
Affects Versions: 0.7.0
Reporter: Arpad Boda


This should be on par with the NiFi implementation. 
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.6.0/org.apache.nifi.processors.standard.MergeContent/index.html





[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #862: MINIFICPP-1318 migrate PersistenceTests to /var/tmp

2020-08-05 Thread GitBox


arpadboda closed pull request #862:
URL: https://github.com/apache/nifi-minifi-cpp/pull/862


   







[jira] [Created] (MINIFICPP-1322) PublishKafka queue size and batch size properties should be in sync

2020-08-05 Thread Arpad Boda (Jira)
Arpad Boda created MINIFICPP-1322:
-

 Summary: PublishKafka queue size and batch size properties should 
be in sync
 Key: MINIFICPP-1322
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1322
 Project: Apache NiFi MiNiFi C++
  Issue Type: New Feature
Affects Versions: 0.7.0
Reporter: Arpad Boda
Assignee: Arpad Boda


Queue size is responsible for setting the maximum message queue size in librdkafka. 
Setting it smaller than, or close to, the batch size will most probably cause 
issues: not being able to insert segments because the queue is full. 
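To illustrate the constraint, this is how the two settings might relate in a PublishKafka configuration (the property names come from librdkafka; the values are hypothetical):

```
# Hypothetical librdkafka producer settings: the message queue must stay
# well above the batch size, or inserts fail once the queue fills up
queue.buffering.max.messages=100000
batch.num.messages=10000
```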





[jira] [Updated] (MINIFICPP-1121) Upgrade spdlog

2020-08-05 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi updated MINIFICPP-1121:

Flagged: Impediment

> Upgrade spdlog
> --
>
> Key: MINIFICPP-1121
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1121
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Marton Szasz
>Assignee: Adam Hunyadi
>Priority: Minor
>  Labels: MiNiFi-CPP-Hygiene
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Our version of spdlog is more than two years old. The new spdlog version uses 
> a newer version of the formatting library formerly known as cppformat, now fmt.
> We should consider depending directly on {{fmt}}, since we already have it as 
> a transitive dependency and it would be useful for, e.g., formatting 
> exception/error messages.
>  
> *Update (hunyadi):*
> It seems we have to skip version 1.0 when upgrading. There are quite a lot 
> of undocumented breaking changes; for example, this commit:
> [https://github.com/gabime/spdlog/commit/6f4cd8d397a443f095c1dce5c025f55684c70eac#diff-9458442ae281c51018015fd2773dc688]
> breaks ::instance() on stdout/stderr sinks. Unfortunately, changes like this 
> in spdlog are not documented, and the codebase is kept up to date with commits 
> pushed directly to the central repository.





[jira] [Updated] (MINIFICPP-1261) Deprecate obsolete ScopeGuard class

2020-08-05 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi updated MINIFICPP-1261:

Flagged: Impediment

> Deprecate obsolete ScopeGuard class
> ---
>
> Key: MINIFICPP-1261
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1261
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Affects Versions: 0.7.0
>Reporter: Adam Hunyadi
>Assignee: Adam Hunyadi
>Priority: Minor
>  Labels: MiNiFi-CPP-Hygiene
> Fix For: 0.8.0
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> *Background:*
> With the introduction of the gsl support library, we ported tools that work 
> as a sufficient substitute for scope guards. 
> *Proposal:*
> Replace ScopeGuard objects with unique_ptrs and gsl::final_action objects.





[jira] [Updated] (MINIFICPP-1121) Upgrade spdlog

2020-08-05 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi updated MINIFICPP-1121:

Flagged: Impediment

> Upgrade spdlog
> --
>
> Key: MINIFICPP-1121
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1121
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Marton Szasz
>Assignee: Adam Hunyadi
>Priority: Minor
>  Labels: MiNiFi-CPP-Hygiene
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Our version of spdlog is more than two years old. The new spdlog version uses 
> a newer version of the formatting library formerly known as cppformat, now fmt.
> We should consider depending directly on {{fmt}}, since we already have it as 
> a transitive dependency and it would be useful for, e.g., formatting 
> exception/error messages.
>  
> *Update (hunyadi):*
> It seems we have to skip version 1.0 when upgrading. There are quite a lot 
> of undocumented breaking changes; for example, this commit:
> [https://github.com/gabime/spdlog/commit/6f4cd8d397a443f095c1dce5c025f55684c70eac#diff-9458442ae281c51018015fd2773dc688]
> breaks ::instance() on stdout/stderr sinks. Unfortunately, changes like this 
> in spdlog are not documented, and the codebase is kept up to date with commits 
> pushed directly to the central repository.





[GitHub] [nifi-minifi-cpp] szaszm commented on pull request #862: MINIFICPP-1318 migrate PersistenceTests to /var/tmp

2020-08-05 Thread GitBox


szaszm commented on pull request #862:
URL: https://github.com/apache/nifi-minifi-cpp/pull/862#issuecomment-669160694


   I've changed ubuntu-16.04-all to use tmpfs as /tmp. Please let me know if 
you agree with the new changes too, @arpadboda and @adamdebreceni 
   See a completed run here: 
https://github.com/szaszm/nifi-minifi-cpp/runs/949010955 (ubuntu-16.04-all 
fails PersistenceTests without the first commit of this PR)







[GitHub] [nifi] pvillard31 opened a new pull request #4451: NIFI-7707 - fixed Publish GCP PubSub processor for binary data

2020-08-05 Thread GitBox


pvillard31 opened a new pull request #4451:
URL: https://github.com/apache/nifi/pull/4451


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-** where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[GitHub] [nifi-minifi-cpp] arpadboda commented on pull request #862: MINIFICPP-1318 migrate PersistenceTests to /var/tmp

2020-08-05 Thread GitBox


arpadboda commented on pull request #862:
URL: https://github.com/apache/nifi-minifi-cpp/pull/862#issuecomment-669167296


   > I've changed ubuntu-16.04-all to use tmpfs as /tmp. Please let me know if 
you agree with the new changes too, @arpadboda and @adamdebreceni
   > See a completed run here: 
https://github.com/szaszm/nifi-minifi-cpp/runs/949010955 (ubuntu-16.04-all 
fails PersistenceTests without the first commit of this PR)
   
   I'm happy with this. 







[jira] [Updated] (MINIFICPP-1203) Set up cpplint so that it can be configured per-directory

2020-08-05 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi updated MINIFICPP-1203:

Flagged: Impediment

> Set up cpplint so that it can be configured per-directory
> -
>
> Key: MINIFICPP-1203
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1203
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Affects Versions: 0.7.0
>Reporter: Adam Hunyadi
>Assignee: Adam Hunyadi
>Priority: Minor
>  Labels: MiNiFi-CPP-Hygiene
> Fix For: 0.8.0
>
>  Time Spent: 18.5h
>  Remaining Estimate: 0h
>
> *Background:*
> Currently, cpplint checks are not checking all the files correctly and are 
> hiding some errors that are meant to be displayed. Manually running the 
> following cpplint check overreports the number of errors, but it gives a 
> decent estimate of the number of errors being ignored:
> {code:bash}
> # This command shows some errors that we otherwise suppress in the project
> cpplint --linelength=200 
> --filter=-runtime/reference,-runtime/string,-build/c++11,-build/include_order,-build/include_alpha
>  `find libminifi/ -name \*.cpp -o -name \*.h`
> (...)
> Total errors found: 1730
> {code}
> When running {{{color:#403294}{{make linter}}{color}}} these errors are 
> suppressed. It runs the following command in 
> {{{color:#403294}run_linter.sh{color}}}:
> {code:bash}
> python ${SCRIPT_DIR}/cpplint.py --linelength=200 --headers=${HEADERS} 
> ${SOURCES}
> {code}
> For some reason, it seems like the files specified in the 
> {{{color:#403294}{{--headers}}{color}}} flag are ignored altogether. For 
> example
> {code:bash}
> # Running w/ headers option set
> cpplint --filter="-runtime/reference,-runtime/string,-build/c++11" 
> --linelength=200 --headers=`find . -name "*.h" | tr '\n' ','` 
> libminifi/include/processors/ProcessorUtils.h 2>/dev/null
> Done processing libminifi/include/processors/ProcessorUtils.h
> # Running w/ unspecified headers
> cpplint --filter="-runtime/reference,-runtime/string,-build/c++11" 
> --linelength=200 libminifi/include/processors/ProcessorUtils.h 2>/dev/null
> Done processing libminifi/include/processors/ProcessorUtils.h
> Total errors found: 6
> {code}
> *Proposal:*
> We should remove the header specification from {{{color:#403294}{{make 
> linter}}{color}}} and set up linter configuration files in the project 
> directories that set all the rules to be applied on the specific directory 
> contents recursively.
> There are two approaches for doing this: we can either specify files or rules to 
> be ignored when doing the linter check. The latter is preferable, so that 
> when we want to clean them up later, we can have separate commits/pull 
> requests for each of the warnings fixed (and potentially automate fixes, e.g. 
> by writing clang-tidy rules or applying linter fixes).
> (!) The commits on this Jira are not expected to fix any warnings reported by 
> the linter, but to have all the checks disabled.
> *Update:*
> We decided to replace header guards with {{{color:#403294}{{#pragma 
> once}}{color}}}. It is not standardized, but all the compilers we support 
> have it, and we already have it scattered in our headers, so we can consider 
> this update safe. This is now extracted into its own Jira (see related).
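Per-directory rule configuration could build on cpplint's `CPPLINT.cfg` files, which cpplint reads automatically for every file under a directory. A sketch, assuming the filters below are the ones currently suppressed (they would then be trimmed directory by directory as warnings get fixed):

```
# libminifi/CPPLINT.cfg (hypothetical location and contents)
set noparent
linelength=200
filter=-runtime/reference,-runtime/string,-build/c++11
```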





[jira] [Commented] (MINIFICPP-1261) Deprecate obsolete ScopeGuard class

2020-08-05 Thread Adam Hunyadi (Jira)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-1261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17171413#comment-17171413
 ] 

Adam Hunyadi commented on MINIFICPP-1261:
-

(flag) Flag added

Waiting for Review

> Deprecate obsolete ScopeGuard class
> ---
>
> Key: MINIFICPP-1261
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1261
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Affects Versions: 0.7.0
>Reporter: Adam Hunyadi
>Assignee: Adam Hunyadi
>Priority: Minor
>  Labels: MiNiFi-CPP-Hygiene
> Fix For: 0.8.0
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> *Background:*
> With the introduction of the gsl support library, we ported tools that work 
> as a sufficient substitute for scope guards. 
> *Proposal:*
> Replace ScopeGuard objects with unique_ptrs and gsl::final_action objects.





[jira] [Updated] (MINIFICPP-1121) Upgrade spdlog

2020-08-05 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi updated MINIFICPP-1121:

Flagged:   (was: Impediment)

> Upgrade spdlog
> --
>
> Key: MINIFICPP-1121
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1121
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Marton Szasz
>Assignee: Adam Hunyadi
>Priority: Minor
>  Labels: MiNiFi-CPP-Hygiene
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Our version of spdlog is more than two years old. The new spdlog version uses 
> a newer version of the formatting library formerly known as cppformat, now fmt.
> We should consider depending directly on {{fmt}}, since we already have it as 
> a transitive dependency and it would be useful for, e.g., formatting 
> exception/error messages.
>  
> *Update (hunyadi):*
> It seems we have to skip version 1.0 when upgrading. There are quite a lot 
> of undocumented breaking changes; for example, this commit:
> [https://github.com/gabime/spdlog/commit/6f4cd8d397a443f095c1dce5c025f55684c70eac#diff-9458442ae281c51018015fd2773dc688]
> breaks ::instance() on stdout/stderr sinks. Unfortunately, changes like this 
> in spdlog are not documented, and the codebase is kept up to date with commits 
> pushed directly to the central repository.





[jira] [Commented] (MINIFICPP-1121) Upgrade spdlog

2020-08-05 Thread Adam Hunyadi (Jira)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-1121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17171414#comment-17171414
 ] 

Adam Hunyadi commented on MINIFICPP-1121:
-

(flag) Flag added

Waiting for review

> Upgrade spdlog
> --
>
> Key: MINIFICPP-1121
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1121
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Marton Szasz
>Assignee: Adam Hunyadi
>Priority: Minor
>  Labels: MiNiFi-CPP-Hygiene
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Our version of spdlog is more than two years old. The new spdlog version uses 
> a newer version of the formatting library formerly known as cppformat, now fmt.
> We should consider depending directly on {{fmt}}, since we already have it as 
> a transitive dependency and it would be useful for, e.g., formatting 
> exception/error messages.
>  
> *Update (hunyadi):*
> It seems we have to skip version 1.0 when upgrading. There are quite a lot 
> of undocumented breaking changes; for example, this commit:
> [https://github.com/gabime/spdlog/commit/6f4cd8d397a443f095c1dce5c025f55684c70eac#diff-9458442ae281c51018015fd2773dc688]
> breaks ::instance() on stdout/stderr sinks. Unfortunately, changes like this 
> in spdlog are not documented, and the codebase is kept up to date with commits 
> pushed directly to the central repository.





[jira] [Updated] (MINIFICPP-1203) Set up cpplint so that it can be configured per-directory

2020-08-05 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi updated MINIFICPP-1203:

Flagged: Impediment

> Set up cpplint so that it can be configured per-directory
> -
>
> Key: MINIFICPP-1203
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1203
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Affects Versions: 0.7.0
>Reporter: Adam Hunyadi
>Assignee: Adam Hunyadi
>Priority: Minor
>  Labels: MiNiFi-CPP-Hygiene
> Fix For: 0.8.0
>
>  Time Spent: 18.5h
>  Remaining Estimate: 0h
>
> *Background:*
> Currently, cpplint checks are not checking all the files correctly and are 
> hiding some errors that are meant to be displayed. Manually running the 
> following cpplint check overreports the number of errors, but it gives a 
> decent estimate of the number of errors being ignored:
> {code:bash}
> # This command shows some errors that we otherwise suppress in the project
> cpplint --linelength=200 
> --filter=-runtime/reference,-runtime/string,-build/c++11,-build/include_order,-build/include_alpha
>  `find libminifi/ -name \*.cpp -o -name \*.h`
> (...)
> Total errors found: 1730
> {code}
> When running {{{color:#403294}{{make linter}}{color}}} these errors are 
> suppressed. It runs the following command in 
> {{{color:#403294}run_linter.sh{color}}}:
> {code:bash}
> python ${SCRIPT_DIR}/cpplint.py --linelength=200 --headers=${HEADERS} 
> ${SOURCES}
> {code}
> For some reason, it seems like the files specified in the 
> {{{color:#403294}{{--headers}}{color}}} flag are ignored altogether. For 
> example
> {code:bash}
> # Running w/ headers option set
> cpplint --filter="-runtime/reference,-runtime/string,-build/c++11" 
> --linelength=200 --headers=`find . -name "*.h" | tr '\n' ','` 
> libminifi/include/processors/ProcessorUtils.h 2>/dev/null
> Done processing libminifi/include/processors/ProcessorUtils.h
> # Running w/ unspecified headers
> cpplint --filter="-runtime/reference,-runtime/string,-build/c++11" 
> --linelength=200 libminifi/include/processors/ProcessorUtils.h 2>/dev/null
> Done processing libminifi/include/processors/ProcessorUtils.h
> Total errors found: 6
> {code}
> *Proposal:*
> We should remove the header specification from {{{color:#403294}{{make 
> linter}}{color}}} and set up linter configuration files in the project 
> directories that set all the rules to be applied on the specific directory 
> contents recursively.
> There are two approaches for doing this: we can either specify files or rules to 
> be ignored when doing the linter check. The latter is preferable, so that 
> when we want to clean them up later, we can have separate commits/pull 
> requests for each of the warnings fixed (and potentially automate fixes, e.g. 
> by writing clang-tidy rules or applying linter fixes).
> (!) The commits on this Jira are not expected to fix any warnings reported by 
> the linter, but to have all the checks disabled.
> *Update:*
> We decided to replace header guards with {{{color:#403294}{{#pragma 
> once}}{color}}}. It is not standardized, but all the compilers we support 
> have it, and we already have it scattered in our headers, so we can consider 
> this update safe. This is now extracted into its own Jira (see related).





[jira] [Updated] (MINIFICPP-1203) Set up cpplint so that it can be configured per-directory

2020-08-05 Thread Adam Hunyadi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Hunyadi updated MINIFICPP-1203:

Flagged:   (was: Impediment)

> Set up cpplint so that it can be configured per-directory
> -
>
> Key: MINIFICPP-1203
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1203
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Affects Versions: 0.7.0
>Reporter: Adam Hunyadi
>Assignee: Adam Hunyadi
>Priority: Minor
>  Labels: MiNiFi-CPP-Hygiene
> Fix For: 0.8.0
>
>  Time Spent: 18.5h
>  Remaining Estimate: 0h
>
> *Background:*
> Currently, cpplint checks are not checking all the files correctly and are 
> hiding some errors that are meant to be displayed. Manually running the 
> following cpplint check overreports the number of errors, but it gives a 
> decent estimate of the number of errors being ignored:
> {code:bash}
> # This command shows some errors that we otherwise suppress in the project
> cpplint --linelength=200 
> --filter=-runtime/reference,-runtime/string,-build/c++11,-build/include_order,-build/include_alpha
>  `find libminifi/ -name \*.cpp -o -name \*.h`
> (...)
> Total errors found: 1730
> {code}
> When running {{{color:#403294}{{make linter}}{color}}} these errors are 
> suppressed. It runs the following command in 
> {{{color:#403294}run_linter.sh{color}}}:
> {code:bash}
> python ${SCRIPT_DIR}/cpplint.py --linelength=200 --headers=${HEADERS} 
> ${SOURCES}
> {code}
> For some reason, it seems like the files specified in the 
> {{{color:#403294}{{--headers}}{color}}} flag are ignored altogether. For 
> example
> {code:bash}
> # Running w/ headers option set
> cpplint --filter="-runtime/reference,-runtime/string,-build/c++11" 
> --linelength=200 --headers=`find . -name "*.h" | tr '\n' ','` 
> libminifi/include/processors/ProcessorUtils.h 2>/dev/null
> Done processing libminifi/include/processors/ProcessorUtils.h
> # Running w/ unspecified headers
> cpplint --filter="-runtime/reference,-runtime/string,-build/c++11" 
> --linelength=200 libminifi/include/processors/ProcessorUtils.h 2>/dev/null
> Done processing libminifi/include/processors/ProcessorUtils.h
> Total errors found: 6
> {code}
> *Proposal:*
> We should remove the header specification from {{{color:#403294}{{make 
> linter}}{color}}} and set up linter configuration files in the project 
> directories that set all the rules to be applied on the specific directory 
> contents recursively.
> There are two approaches for doing this: we can either specify files or rules to 
> be ignored when doing the linter check. The latter is preferable, so that 
> when we want to clean them up later, we can have separate commits/pull 
> requests for each of the warnings fixed (and potentially automate fixes, e.g. 
> by writing clang-tidy rules or applying linter fixes).
> (!) The commits on this Jira are not expected to fix any warnings reported by 
> the linter, but to have all the checks disabled.
> *Update:*
> We decided to replace header guards with {{{color:#403294}{{#pragma 
> once}}{color}}}. It is not standardized, but all the compilers we support 
> have it, and we already have it scattered in our headers, so we can consider 
> this update safe. This is now extracted into its own Jira (see related).





[jira] [Commented] (MINIFICPP-1203) Set up cpplint so that it can be configured per-directory

2020-08-05 Thread Adam Hunyadi (Jira)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-1203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17171415#comment-17171415
 ] 

Adam Hunyadi commented on MINIFICPP-1203:
-

(flag) Flag added

Waiting for review

> Set up cpplint so that it can be configured per-directory
> -
>
> Key: MINIFICPP-1203
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1203
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Affects Versions: 0.7.0
>Reporter: Adam Hunyadi
>Assignee: Adam Hunyadi
>Priority: Minor
>  Labels: MiNiFi-CPP-Hygiene
> Fix For: 0.8.0
>
>  Time Spent: 18.5h
>  Remaining Estimate: 0h
>
> *Background:*
> Currently, cpplint checks are not checking all the files correctly and are 
> hiding some errors that are meant to be displayed. Manually running the 
> following cpplint check overreports the number of errors, but it gives a 
> decent estimate of the number of errors being ignored:
> {code:bash}
> # This command shows some errors that we otherwise suppress in the project
> cpplint --linelength=200 
> --filter=-runtime/reference,-runtime/string,-build/c++11,-build/include_order,-build/include_alpha
>  `find libminifi/ -name \*.cpp -o -name \*.h`
> (...)
> Total errors found: 1730
> {code}
> When running {{{color:#403294}{{make linter}}{color}}} these errors are 
> suppressed. It runs the following command in 
> {{{color:#403294}run_linter.sh{color}}}:
> {code:bash}
> python ${SCRIPT_DIR}/cpplint.py --linelength=200 --headers=${HEADERS} 
> ${SOURCES}
> {code}
> For some reason, it seems like the files specified in the 
> {{{color:#403294}{{--headers}}{color}}} flag are ignored altogether. For 
> example
> {code:bash}
> # Running w/ headers option set
> cpplint --filter="-runtime/reference,-runtime/string,-build/c++11" 
> --linelength=200 --headers=`find . -name "*.h" | tr '\n' ','` 
> libminifi/include/processors/ProcessorUtils.h 2>/dev/null
> Done processing libminifi/include/processors/ProcessorUtils.h
> # Running w/ unspecified headers
> cpplint --filter="-runtime/reference,-runtime/string,-build/c++11" 
> --linelength=200 libminifi/include/processors/ProcessorUtils.h 2>/dev/null
> Done processing libminifi/include/processors/ProcessorUtils.h
> Total errors found: 6
> {code}
> *Proposal:*
> We should remove the header specification from {{{color:#403294}{{make 
> linter}}{color}}} and set up linter configuration files in the project 
> directories that set all the rules to be applied on the specific directory 
> contents recursively.
> There are two approaches for doing this: we can either specify files or rules to 
> be ignored when doing the linter check. The latter is preferable, so that 
> when we want to clean them up later, we can have separate commits/pull 
> requests for each of the warnings fixed (and potentially automate fixes, e.g. 
> by writing clang-tidy rules or applying linter fixes).
> (!) The commits on this Jira are not expected to fix any warnings reported by 
> the linter, but to have all the checks disabled.
> *Update:*
> We decided to replace header guards with {{{color:#403294}{{#pragma 
> once}}{color}}}. It is not standardized, but all the compilers we support 
> have it, and we already have it scattered in our headers, so we can consider 
> this update safe. This is now extracted into its own Jira (see related).





[GitHub] [nifi-minifi-cpp] arpadboda commented on pull request #862: MINIFICPP-1318 migrate PersistenceTests to /var/tmp

2020-08-05 Thread GitBox


arpadboda commented on pull request #862:
URL: https://github.com/apache/nifi-minifi-cpp/pull/862#issuecomment-669136732


   > should we think about adding an agent to the CI where `/tmp` is mounted as 
tmpfs?
   
   I think covering only this scenario isn't necessarily worth a dedicated CI 
run (insert Greta side-eye look here). 
   If we can find a linear combination (some jobs that run tests have /tmp 
mounted as tmpfs), I'm OK with that. 







[jira] [Created] (NIFI-7709) Enable options for sftp connections (the "-o" parameter for sftp command line)

2020-08-05 Thread Kevin kapfer (Jira)
Kevin kapfer created NIFI-7709:
--

 Summary: Enable options for sftp connections (the "-o" parameter 
for sftp command line)
 Key: NIFI-7709
 URL: https://issues.apache.org/jira/browse/NIFI-7709
 Project: Apache NiFi
  Issue Type: New Feature
Affects Versions: 1.11.4
 Environment: software platform
Reporter: Kevin kapfer


When the SFTP server requires an algorithm (for example, ssh-dss) that is not 
available on the NiFi host, invoking sftp from the command line returns:
{code:java}
Unable to negotiate with {server ip} port 22: no matching host key type found. 
Their offer: ssh-dss{code}
To connect successfully, you must provide:
{noformat}
-oHostKeyAlgorithms=+ssh-dss{noformat}
to the command.  

The requested feature is an option to specify the host key algorithm within the 
processors making SFTP connections: GetSftp, PutSftp, FetchSftp, ListSftp.

A possible workaround is to add the missing algorithm to the server's SSH 
configuration.
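That server-side workaround can be sketched as an `sshd_config` change on the SFTP server (hypothetical key path; it offers an additional, non-DSS host key so that modern clients can negotiate):

```
# /etc/ssh/sshd_config on the SFTP server (sketch)
# Offer an RSA host key alongside the existing ssh-dss key
HostKey /etc/ssh/ssh_host_rsa_key
```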





[jira] [Assigned] (NIFI-7707) [Processor/GCP - PubSub] Avro event re-format when pushed to PubSub

2020-08-05 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard reassigned NIFI-7707:


Assignee: Pierre Villard

> [Processor/GCP - PubSub] Avro event re-format when pushed to PubSub
> ---
>
> Key: NIFI-7707
> URL: https://issues.apache.org/jira/browse/NIFI-7707
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: Guitton Alexandre
>Assignee: Pierre Villard
>Priority: Major
>  Labels: Avro, GCP
>
> Hi, 
> as discussed on Slack : 
> [https://apachenifi.slack.com/archives/C0L9VCD47/p1596463216211700]
> When we generate an Avro event from a record and publish it using the 
> _*PublishPubSub*_ processor, the content type of the file changes from 
> _*avro/binary*_ to *_UTF-8_* due to the following code: 
> [https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-gcp-bundle/nifi-gcp-processors/src/main/java/org/apache/nifi/processors/gcp/pubsub/PublishGCPubSub.java#L157]
> This makes the event unreadable once consumed. 
> The workaround we found is base64-encoding the Avro message, publishing it to 
> PubSub, consuming it, and finally decoding it. 
> We would expect this encoding to be handled by the processors.
>  





[GitHub] [nifi] pgyori opened a new pull request #4452: NIFI-7708: Add Azure SDK for Java (and transitive dependencies) to LI…

2020-08-05 Thread GitBox


pgyori opened a new pull request #4452:
URL: https://github.com/apache/nifi/pull/4452


   …CENCE and NOTICE files
   
   https://issues.apache.org/jira/browse/NIFI-7708
   
    Description of PR
   
   Adds dependencies to the LICENSE and NOTICE files in the Azure bundle and in 
the Assembly
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-** where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] scottyaslan commented on a change in pull request #4449: [NIFI-7705] update frontend deps

2020-08-05 Thread GitBox


scottyaslan commented on a change in pull request #4449:
URL: https://github.com/apache/nifi/pull/4449#discussion_r465808576



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/frontend/package-lock.json
##
@@ -144,20 +144,20 @@
   "resolved": "https://registry.npmjs.org/d3-brush/-/d3-brush-1.0.4.tgz",
   "integrity": "sha1-AMLyOAGfJPbAoZSibUGhUw/+e8Q=",
   "requires": {
-"d3-dispatch": "1.0.3",
-"d3-drag": "1.2.1",
-"d3-interpolate": "1.1.6",
-"d3-selection": "1.3.0",
-"d3-transition": "1.1.1"
+"d3-dispatch": "1",

Review comment:
   This change may appear strange, but what it is saying is that "d3-brush" 
version 1.0.4 requires any minor or patch release of "d3-dispatch" version 1. 
This, however, does not change the version of "d3-dispatch" that is installed. 
That is defined here: 
https://github.com/apache/nifi/pull/4449/files#diff-9a4616626c8b30875e090d2d589ce665R173.
 









[GitHub] [nifi] pvillard31 opened a new pull request #4453: NIFI-7710 - Flow XML validation warnings for DataSize

2020-08-05 Thread GitBox


pvillard31 opened a new pull request #4453:
URL: https://github.com/apache/nifi/pull/4453


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-XXXX._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[GitHub] [nifi] scottyaslan commented on a change in pull request #4449: [NIFI-7705] update frontend deps

2020-08-05 Thread GitBox


scottyaslan commented on a change in pull request #4449:
URL: https://github.com/apache/nifi/pull/4449#discussion_r465834200



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/frontend/package-lock.json
##
@@ -144,20 +144,20 @@
   "resolved": "https://registry.npmjs.org/d3-brush/-/d3-brush-1.0.4.tgz",
   "integrity": "sha1-AMLyOAGfJPbAoZSibUGhUw/+e8Q=",
   "requires": {
-"d3-dispatch": "1.0.3",
-"d3-drag": "1.2.1",
-"d3-interpolate": "1.1.6",
-"d3-selection": "1.3.0",
-"d3-transition": "1.1.1"
+"d3-dispatch": "1",

Review comment:
   For reference please read up on npm semantic versioning: 
https://docs.npmjs.com/about-semantic-versioning#using-semantic-versioning-to-specify-update-types-your-package-can-accept.
   
   Also, here is an interesting discussion on the topic: 
https://github.com/npm/npm/issues/20434









[GitHub] [nifi] scottyaslan commented on a change in pull request #4449: [NIFI-7705] update frontend deps

2020-08-05 Thread GitBox


scottyaslan commented on a change in pull request #4449:
URL: https://github.com/apache/nifi/pull/4449#discussion_r465808576



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/frontend/package-lock.json
##
@@ -144,20 +144,20 @@
   "resolved": "https://registry.npmjs.org/d3-brush/-/d3-brush-1.0.4.tgz",
   "integrity": "sha1-AMLyOAGfJPbAoZSibUGhUw/+e8Q=",
   "requires": {
-"d3-dispatch": "1.0.3",
-"d3-drag": "1.2.1",
-"d3-interpolate": "1.1.6",
-"d3-selection": "1.3.0",
-"d3-transition": "1.1.1"
+"d3-dispatch": "1",

Review comment:
   This change may appear strange, but what it is saying is that "d3-brush" 
version 1.0.4 requires any minor or patch release of "d3-dispatch" version 1. 
This, however, does not change the version of the "d3-dispatch" package that is 
installed during the build. That is defined here: 
https://github.com/apache/nifi/pull/4449/files#diff-9a4616626c8b30875e090d2d589ce665R173.
   
   This change is due to the upgrade of npm to version 6.10.0. Although this 
update produces a confusing diff in our package-lock.json, this "new" approach 
seems correct to me. If you save the exact version of a direct dependency in 
our package.json (as we have done in this PR), the package-lock.json should not 
list that dependency's dependencies in exact versions. What if another direct 
dependency also requires the same package as a transitive dependency but 
lists a different exact minor version? That will produce warnings during the 
`npm install` process. 
   
   All that matters is that npm produces the same resulting node_modules every 
time - which it does.









[GitHub] [nifi-minifi-cpp] szaszm commented on pull request #863: MINIFICPP-1318 move test rocksdb state to /var/tmp

2020-08-05 Thread GitBox


szaszm commented on pull request #863:
URL: https://github.com/apache/nifi-minifi-cpp/pull/863#issuecomment-669270931


   > There are a lot more:
   > 
   > ```
   > ~/src/minifi$ git grep '"/tmp' | wc -l
   > 147
   > ```
   > 
   > the rest do not need to be changed?
   
   They are fine as long as we don't try to create a rocksdb repository there. 
The problem is that rocksdb is configured to use direct IO, i.e. bypass the 
page cache, but tmpfs is just a mounted page cache without backing storage, so 
it makes no sense to use direct IO there. 







[GitHub] [nifi-minifi-cpp] szaszm edited a comment on pull request #863: MINIFICPP-1318 move test rocksdb state to /var/tmp

2020-08-05 Thread GitBox


szaszm edited a comment on pull request #863:
URL: https://github.com/apache/nifi-minifi-cpp/pull/863#issuecomment-669270931


   > There are a lot more:
   > 
   > ```
   > ~/src/minifi$ git grep '"/tmp' | wc -l
   > 147
   > ```
   > 
   > the rest do not need to be changed?
   
   They are fine as long as we don't try to create a rocksdb database there. 
The problem is that rocksdb is configured to use direct IO, i.e. bypass the 
page cache, but tmpfs is just a mounted page cache without backing storage, so 
it makes no sense to use direct IO there. 







[jira] [Created] (NIFI-7710) Flow XML validation warnings for DataSize

2020-08-05 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-7710:


 Summary: Flow XML validation warnings for DataSize
 Key: NIFI-7710
 URL: https://issues.apache.org/jira/browse/NIFI-7710
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Pierre Villard
Assignee: Pierre Villard


In the UI, go to the configuration view of any relationship and provide a float 
number for the data size of the back pressure settings, like
{code:java}
1.5 GB{code}
When restarting NiFi, warnings will be displayed in the logs:
{code:java}
2020-08-05 17:19:44,092 WARN [main] o.a.nifi.fingerprint.FingerprintFactory 
Schema validation error parsing Flow Configuration at line 260040, col 58: 
cvc-pattern-valid: Value '1.5 GB' is not facet-valid with respect to pattern 
'\d+\s*(B|KB|MB|GB|TB|b|kb|mb|gb|tb)' for type 'DataSize'.
2020-08-05 17:19:44,092 WARN [main] o.a.nifi.fingerprint.FingerprintFactory 
Schema validation error parsing Flow Configuration at line 260040, col 58: 
cvc-type.3.1.3: The value '1.5 GB' of element 'maxWorkQueueDataSize' is not 
valid.{code}
The XSD pattern should be changed to match what is allowed in the UI.





[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #848: MINIFICPP-1268 Fix compiler warnings on Windows (VS2017)

2020-08-05 Thread GitBox


szaszm commented on a change in pull request #848:
URL: https://github.com/apache/nifi-minifi-cpp/pull/848#discussion_r463077312



##
File path: extensions/expression-language/Expression.cpp
##
@@ -194,9 +194,12 @@ Value expr_toLower(const std::vector<Value> &args) {
 
 Value expr_substring(const std::vector<Value> &args) {
   if (args.size() < 3) {
-    return Value(args[0].asString().substr(args[1].asUnsignedLong()));
+    size_t offset = gsl::narrow<size_t>(args[1].asUnsignedLong());
+    return Value(args[0].asString().substr(offset));
   } else {
-    return Value(args[0].asString().substr(args[1].asUnsignedLong(), args[2].asUnsignedLong()));
+    size_t offset = gsl::narrow<size_t>(args[1].asUnsignedLong());
+    size_t count = gsl::narrow<size_t>(args[2].asUnsignedLong());
+    return Value(args[0].asString().substr(offset, count));

Review comment:
   :+1: for naming the integers









[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #848: MINIFICPP-1268 Fix compiler warnings on Windows (VS2017)

2020-08-05 Thread GitBox


szaszm commented on a change in pull request #848:
URL: https://github.com/apache/nifi-minifi-cpp/pull/848#discussion_r465624329



##
File path: libminifi/src/sitetosite/SiteToSiteClient.cpp
##
@@ -499,8 +499,9 @@ int16_t SiteToSiteClient::send(std::string transactionID, 
DataPacket *packet, co
   return -1;
 }
 
-    ret = transaction->getStream().writeData(reinterpret_cast<uint8_t*>(const_cast<char*>(packet->payload_.c_str())), len);
-    if (ret != (int64_t)len) {
+    ret = transaction->getStream().writeData(reinterpret_cast<uint8_t*>(const_cast<char*>(packet->payload_.c_str())),
+                                             gsl::narrow<int>(len));
+    if (ret != static_cast<int64_t>(len)) {

Review comment:
   uint64_t to int64_t is a narrowing conversion

##
File path: extensions/http-curl/processors/InvokeHTTP.cpp
##
@@ -337,7 +337,7 @@ void InvokeHTTP::onTrigger(const std::shared_ptr<core::ProcessContext> &context,
     const std::vector<char> &response_body = client.getResponseBody();
     const std::vector<std::string> &response_headers = client.getHeaders();
 
-    int64_t http_code = client.getResponseCode();
+    int http_code = gsl::narrow<int>(client.getResponseCode());

Review comment:
   I would check the response code before assuming that it's valid. I'm not 
sure whether curl validates the number before returning, but I wouldn't want a 
malicious server or MITM to be able to crash (`std::terminate` on narrowing 
failure) the minifi c++ agent.

##
File path: nanofi/src/core/cstream.c
##
@@ -153,9 +148,9 @@ int readUTFLen(uint32_t * utflen, cstream * stream) {
   return ret;
 }
 
-int readUTF(char * buf, uint64_t buflen, cstream * stream) {
+int readUTF(char * buf, uint32_t buflen, cstream * stream) {

Review comment:
   Why uint32_t? Why not e.g. size_t or int?

##
File path: extensions/expression-language/Expression.cpp
##
@@ -194,9 +194,12 @@ Value expr_toLower(const std::vector ) {
 
 Value expr_substring(const std::vector ) {
   if (args.size() < 3) {
-return Value(args[0].asString().substr(args[1].asUnsignedLong()));
+size_t offset = gsl::narrow(args[1].asUnsignedLong());
+return Value(args[0].asString().substr(offset));
   } else {
-return Value(args[0].asString().substr(args[1].asUnsignedLong(), 
args[2].asUnsignedLong()));
+size_t offset = gsl::narrow(args[1].asUnsignedLong());
+size_t count = gsl::narrow(args[2].asUnsignedLong());
+return Value(args[0].asString().substr(offset, count));

Review comment:
   :+1: 

##
File path: libminifi/include/io/CRCStream.h
##
@@ -224,7 +222,7 @@ template<typename T>
 int CRCStream<T>::readData(uint8_t *buf, int buflen) {
   int ret = child_stream_->read(buf, buflen);
   if (ret > 0) {
-    crc_ = crc32(crc_, buf, ret);
+    crc_ = crc32(gsl::narrow<uLong>(crc_), buf, ret);

Review comment:
   I suggest changing the type of `crc_` to `uint32_t` and `static_assert` 
that `uLong` has the same width and signedness as `uint32_t`. This should make 
`gsl::narrow` calls redundant.









[GitHub] [nifi-registry] kevdoran commented on pull request #287: NIFIREG-399 Support chown docker in docker

2020-08-05 Thread GitBox


kevdoran commented on pull request #287:
URL: https://github.com/apache/nifi-registry/pull/287#issuecomment-669304916


   @nkininge were you ever able to look at this? The issue, IIRC, was that the 
change broke the build when building outside of docker.







[jira] [Updated] (NIFIREG-410) Add integration tests that cover the new Database UserGroupProvider and AccessPolicyProvider

2020-08-05 Thread Kevin Doran (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFIREG-410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFIREG-410:

Description: 
Opening this ticket as discussed on GitHub PR review: 
[https://github.com/apache/nifi-registry/pull/232#issuecomment-665903431]

 

The goal of this ticket is to add coverage in the NiFi Registry Integration 
Tests for the new DatabaseUserGroupProvider and DatabaseAccessPolicyProvider 
implemented as part of NIFIREG-292. 

The advantage of such tests is that the test-containers framework we use allows 
us to test our migration scripts and JDBC templates against all the various DB 
vendors, but this test framework is utilized _as part of the integration tests, 
not the unit tests_. 

  was:
Opening this ticket as discussed on GitHub PR review: 
[https://github.com/apache/nifi-registry/pull/232#issuecomment-665903431]

 

The goal of this ticket is to add coverage in the NiFi Registry Integration 
Tests for the new DatabaseUserGroupProvider and DatabaseAccessPolicyProvider 
implemented as part of NIFIREG-292. 

 

** The advantage of such tests is that the test-containers framework we use 
allows us to test our migration scripts and JDBC templates against all the 
various DB vendors, but this test framework is utilized _as part of the 
integration tests, not the unit tests_. 


> Add integration tests that cover the new Database UserGroupProvider and 
> AccessPolicyProvider
> 
>
> Key: NIFIREG-410
> URL: https://issues.apache.org/jira/browse/NIFIREG-410
> Project: NiFi Registry
>  Issue Type: Test
>Reporter: Kevin Doran
>Priority: Trivial
>
> Opening this ticket as discussed on GitHub PR review: 
> [https://github.com/apache/nifi-registry/pull/232#issuecomment-665903431]
>  
> The goal of this ticket is to add coverage in the NiFi Registry Integration 
> Tests for the new DatabaseUserGroupProvider and DatabaseAccessPolicyProvider 
> implemented as part of NIFIREG-292. 
> The advantage of such tests is that the test-containers framework we use 
> allows us to test our migration scripts and JDBC templates against all the 
> various DB vendors, but this test framework is utilized _as part of the 
> integration tests, not the unit tests_. 





[jira] [Created] (NIFIREG-410) Add integration tests that cover the new Database UserGroupProvider and AccessPolicyProvider

2020-08-05 Thread Kevin Doran (Jira)
Kevin Doran created NIFIREG-410:
---

 Summary: Add integration tests that cover the new Database 
UserGroupProvider and AccessPolicyProvider
 Key: NIFIREG-410
 URL: https://issues.apache.org/jira/browse/NIFIREG-410
 Project: NiFi Registry
  Issue Type: Test
Reporter: Kevin Doran


Opening this ticket as discussed on GitHub PR review: 
[https://github.com/apache/nifi-registry/pull/232#issuecomment-665903431]

 

The goal of this ticket is to add coverage in the NiFi Registry Integration 
Tests for the new DatabaseUserGroupProvider and DatabaseAccessPolicyProvider 
implemented as part of NIFIREG-292. 

 

** The advantage of such tests is that the test-containers framework we use 
allows us to test our migration scripts and JDBC templates against all the 
various DB vendors, but this test framework is utilized _as part of the 
integration tests, not the unit tests_. 





[GitHub] [nifi] sardell commented on a change in pull request #4449: [NIFI-7705] update frontend deps

2020-08-05 Thread GitBox


sardell commented on a change in pull request #4449:
URL: https://github.com/apache/nifi/pull/4449#discussion_r465873255



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/frontend/package-lock.json
##
@@ -144,20 +144,20 @@
   "resolved": "https://registry.npmjs.org/d3-brush/-/d3-brush-1.0.4.tgz",
   "integrity": "sha1-AMLyOAGfJPbAoZSibUGhUw/+e8Q=",
   "requires": {
-"d3-dispatch": "1.0.3",
-"d3-drag": "1.2.1",
-"d3-interpolate": "1.1.6",
-"d3-selection": "1.3.0",
-"d3-transition": "1.1.1"
+"d3-dispatch": "1",

Review comment:
   I can confirm this is how their new versioning approach works and that 
`requires` is just used for version mismatch warnings. The top-level 
dependencies determine what is installed.









[GitHub] [nifi] scottyaslan commented on a change in pull request #4449: [NIFI-7705] update frontend deps

2020-08-05 Thread GitBox


scottyaslan commented on a change in pull request #4449:
URL: https://github.com/apache/nifi/pull/4449#discussion_r465808576



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/frontend/package-lock.json
##
@@ -144,20 +144,20 @@
   "resolved": "https://registry.npmjs.org/d3-brush/-/d3-brush-1.0.4.tgz",
   "integrity": "sha1-AMLyOAGfJPbAoZSibUGhUw/+e8Q=",
   "requires": {
-"d3-dispatch": "1.0.3",
-"d3-drag": "1.2.1",
-"d3-interpolate": "1.1.6",
-"d3-selection": "1.3.0",
-"d3-transition": "1.1.1"
+"d3-dispatch": "1",

Review comment:
   This change may appear strange, but what it is saying is that "d3-brush" 
version 1.0.4 requires any minor or patch release of "d3-dispatch" version 1. 
This, however, does not change the version of the "d3-dispatch" package that is 
installed during the build. That is defined here: 
https://github.com/apache/nifi/pull/4449/files#diff-9a4616626c8b30875e090d2d589ce665R173.
   
   This change is due to the upgrade of npm to version 6.10.0. Although this 
update produces a confusing diff in our package-lock.json, this "new" approach 
seems correct to me. If you save the exact version of a direct dependency in 
our package.json (as we have done in this PR), the package-lock.json should not 
list that dependency's dependencies in exact versions. What if another direct 
dependency also requires the same package as a transitive dependency but 
lists a different exact minor version? That will produce warnings (or worse... 
errors!) during the `npm install` process. 
   
   All that matters is that npm produces the same resulting node_modules every 
time - which it does.









[jira] [Updated] (NIFI-7710) Flow XML validation warnings for DataSize

2020-08-05 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-7710:
-
Status: Patch Available  (was: Open)

> Flow XML validation warnings for DataSize
> -
>
> Key: NIFI-7710
> URL: https://issues.apache.org/jira/browse/NIFI-7710
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Trivial
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the UI, go to the configuration view of any relationship and provide a 
> float number for the data size of the back pressure settings, like
> {code:java}
> 1.5 GB{code}
> When restarting NiFi, warnings will be displayed in the logs:
> {code:java}
> 2020-08-05 17:19:44,092 WARN [main] o.a.nifi.fingerprint.FingerprintFactory 
> Schema validation error parsing Flow Configuration at line 260040, col 58: 
> cvc-pattern-valid: Value '1.5 GB' is not facet-valid with respect to pattern 
> '\d+\s*(B|KB|MB|GB|TB|b|kb|mb|gb|tb)' for type 'DataSize'.
> 2020-08-05 17:19:44,092 WARN [main] o.a.nifi.fingerprint.FingerprintFactory 
> Schema validation error parsing Flow Configuration at line 260040, col 58: 
> cvc-type.3.1.3: The value '1.5 GB' of element 'maxWorkQueueDataSize' is not 
> valid.{code}
> The XSD pattern should be changed to match what is allowed in the UI.





[GitHub] [nifi] scottyaslan commented on a change in pull request #4449: [NIFI-7705] update frontend deps

2020-08-05 Thread GitBox


scottyaslan commented on a change in pull request #4449:
URL: https://github.com/apache/nifi/pull/4449#discussion_r465808576



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/frontend/package-lock.json
##
@@ -144,20 +144,20 @@
   "resolved": "https://registry.npmjs.org/d3-brush/-/d3-brush-1.0.4.tgz",
   "integrity": "sha1-AMLyOAGfJPbAoZSibUGhUw/+e8Q=",
   "requires": {
-"d3-dispatch": "1.0.3",
-"d3-drag": "1.2.1",
-"d3-interpolate": "1.1.6",
-"d3-selection": "1.3.0",
-"d3-transition": "1.1.1"
+"d3-dispatch": "1",

Review comment:
   This change may appear strange, but what it is saying is that "d3-brush" 
version 1.0.4 requires any minor or patch release of "d3-dispatch" version 1. 
This, however, does not change the version of the "d3-dispatch" package that is 
installed during the build. That is defined here: 
https://github.com/apache/nifi/pull/4449/files#diff-9a4616626c8b30875e090d2d589ce665R173.
   
   This change is due to the upgrade of npm to version 6.10.0. Although this 
update produces a confusing diff in our package-lock.json, this "new" approach 
seems correct to me. If you save the exact version of a direct dependency in 
our package.json (as we have done in this PR), the package-lock.json should not 
list that dependency's dependencies in exact versions. What if another direct 
dependency also requires the same package as a transitive dependency but 
lists a different exact minor version? That will produce warnings (or worse... 
errors!) during the `npm install` process. 
   
   All that matters is that npm produces the same resulting node_modules every 
time - which it does. It is the exact version at the top level that determines 
what is installed, not the loose version referenced nested inside each 
individual dependency.









[GitHub] [nifi] sardell commented on pull request #4449: [NIFI-7705] update frontend deps

2020-08-05 Thread GitBox


sardell commented on pull request #4449:
URL: https://github.com/apache/nifi/pull/4449#issuecomment-669301985


   Reviewing now.







[GitHub] [nifi] rfellows commented on pull request #4449: [NIFI-7705] update frontend deps

2020-08-05 Thread GitBox


rfellows commented on pull request #4449:
URL: https://github.com/apache/nifi/pull/4449#issuecomment-669309642


   will review







[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #861: MINIFICPP-1312 - Fix flaky FlowControllerTest

2020-08-05 Thread GitBox


szaszm commented on a change in pull request #861:
URL: https://github.com/apache/nifi-minifi-cpp/pull/861#discussion_r465840051



##
File path: libminifi/test/flow-tests/FlowControllerTests.cpp
##
@@ -215,20 +220,19 @@ TEST_CASE("Extend the waiting period during shutdown", 
"[TestFlow4]") {
   testController.startFlow();
 
   // wait for the source processor to enqueue its flowFiles
-  int tryCount = 0;
-  while (tryCount++ < 10 && root->getTotalFlowFileCount() != 3) {
-std::this_thread::sleep_for(std::chrono::milliseconds{20});
-  }
+  busy_wait(std::chrono::milliseconds{50}, [&] {return 
root->getTotalFlowFileCount() != 0;});
 
-  REQUIRE(root->getTotalFlowFileCount() == 3);
+  REQUIRE(root->getTotalFlowFileCount() != 0);
   REQUIRE(sourceProc->trigger_count.load() == 1);
 
   std::thread shutdownThread([&]{
 execSinkPromise.set_value();
 controller->stop(true);
   });
 
-  while (controller->isRunning()) {
+  int extendCount = 0;
+  while (extendCount++ < 5 && controller->isRunning()) {

Review comment:
   Shorter sleeps and checking against a timeout instead of a run count should 
achieve the same goal without multiplying the uncertainty of the agents.









[GitHub] [nifi-registry] kevdoran closed pull request #232: NIFIREG-292 Add DB impls of UserGroupProvider and AccessPolicyProvider

2020-08-05 Thread GitBox


kevdoran closed pull request #232:
URL: https://github.com/apache/nifi-registry/pull/232


   







[jira] [Resolved] (NIFIREG-292) Add database implementations of UserGroupProvider and AccessPolicyProvider

2020-08-05 Thread Kevin Doran (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFIREG-292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran resolved NIFIREG-292.
-
Fix Version/s: 0.8.0
   Resolution: Done

> Add database implementations of UserGroupProvider and AccessPolicyProvider
> --
>
> Key: NIFIREG-292
> URL: https://issues.apache.org/jira/browse/NIFIREG-292
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 0.8.0
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> We should offer database backed implementations of UserGroupProvider and 
> AccessPolicyProvider as an alternative to the file-based impls. We have LDAP 
> and Ranger alternatives, but for people not using those, the DB impls would 
> be a good way to get the data off the local filesystem.





[GitHub] [nifi] mcgilman commented on a change in pull request #4449: [NIFI-7705] update frontend deps

2020-08-05 Thread GitBox


mcgilman commented on a change in pull request #4449:
URL: https://github.com/apache/nifi/pull/4449#discussion_r465922789



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/frontend/package-lock.json
##
@@ -144,20 +144,20 @@
   "resolved": "https://registry.npmjs.org/d3-brush/-/d3-brush-1.0.4.tgz",
   "integrity": "sha1-AMLyOAGfJPbAoZSibUGhUw/+e8Q=",
   "requires": {
-"d3-dispatch": "1.0.3",
-"d3-drag": "1.2.1",
-"d3-interpolate": "1.1.6",
-"d3-selection": "1.3.0",
-"d3-transition": "1.1.1"
+"d3-dispatch": "1",

Review comment:
   @scottyaslan @sardell I think I misunderstood the top-level dependency 
comment. It looks like all dependencies (transitive or not) will be 'top-level' 
in the package-lock.json. These will have explicit versions and will ensure 
repeatable builds assuming we use it in our builds.









[GitHub] [nifi] mcgilman commented on a change in pull request #4449: [NIFI-7705] update frontend deps

2020-08-05 Thread GitBox


mcgilman commented on a change in pull request #4449:
URL: https://github.com/apache/nifi/pull/4449#discussion_r465896879



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/frontend/package-lock.json
##
@@ -144,20 +144,20 @@
   "resolved": "https://registry.npmjs.org/d3-brush/-/d3-brush-1.0.4.tgz",
   "integrity": "sha1-AMLyOAGfJPbAoZSibUGhUw/+e8Q=",
   "requires": {
-"d3-dispatch": "1.0.3",
-"d3-drag": "1.2.1",
-"d3-interpolate": "1.1.6",
-"d3-selection": "1.3.0",
-"d3-transition": "1.1.1"
+"d3-dispatch": "1",

Review comment:
   @scottyaslan @sardell Thanks for the analysis here. If I've understood 
these comments correctly, they explain the behavior for a direct dependency of 
NiFi. They even touch on what happens when there is a version conflict between a 
direct dependency of NiFi that is also a transitive dependency. 
   
   But what is the behavior for something that is only a transitive dependency 
and there is no direct dependency to compare against? If that `requires` 
section does not contain a specific version, what ensures that when newer 
versions become available they aren't used? Looking to confirm that we can 
claim that we have repeatable builds.









[GitHub] [nifi-registry] kevdoran closed pull request #291: NIFIREG-409 Refactoring revision management so that RevisionManager i…

2020-08-05 Thread GitBox


kevdoran closed pull request #291:
URL: https://github.com/apache/nifi-registry/pull/291


   







[jira] [Assigned] (NIFI-5702) FlowFileRepo should not discard data (at least not by default)

2020-08-05 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-5702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne reassigned NIFI-5702:


Assignee: Mark Payne

> FlowFileRepo should not discard data (at least not by default)
> --
>
> Key: NIFI-5702
> URL: https://issues.apache.org/jira/browse/NIFI-5702
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.7.1, 1.9.2
>Reporter: Brandon Rhys DeVries
>Assignee: Mark Payne
>Priority: Major
>
> The WriteAheadFlowFileRepository currently discards data it cannot find a 
> queue for.  Unfortunately, we have run into issues where, when rejoining a 
> node to a cluster, the flow.xml.gz can go "missing".  This results in the 
> instance creating a new, empty, flow.xml.gz and then continuing on... and not 
> finding queues for any of its existing data, dropping it all.  Regardless of 
> the circumstances leading to an empty (or unexpectedly modified) flow.xml.gz, 
> dropping data without user input seems less than ideal. 
> Internally, my group has added a property 
> "remove.orphaned.flowfiles.on.startup", defaulting to "false".  On 
> startup, rather than silently dropping data, the repo will throw an exception 
> preventing startup.  The operator can then choose to either "fix" any 
> unexpected issues with the flow.xml.gz, or they can set the above property to 
> "true" which restores the original behavior allowing the system to be 
> restarted.  When set to "true" this property also results in a warning 
> message indicating that in this configuration the repo can drop data without 
> (advance) warning.  
>  
>  
> [1] 
> https://github.com/apache/nifi/blob/support/nifi-1.7.x/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/WriteAheadFlowFileRepository.java#L596
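A sketch of the startup guard described above; the class and method names are illustrative, and only the property name `remove.orphaned.flowfiles.on.startup` comes from the ticket (the upstream property may differ):

```java
import java.util.Properties;

// Illustrative sketch only: refuse to start when orphaned FlowFiles exist,
// unless the operator has explicitly opted back in to the old
// drop-on-startup behavior via the property.
public class OrphanedFlowFileGuard {
    static final String PROP = "remove.orphaned.flowfiles.on.startup";

    /** Returns true if startup may proceed; throws if orphans must block it. */
    static boolean checkOrphans(Properties props, int orphanedCount) {
        boolean dropAllowed = Boolean.parseBoolean(props.getProperty(PROP, "false"));
        if (orphanedCount == 0) {
            return true;                 // nothing orphaned, start normally
        }
        if (dropAllowed) {
            System.err.println("WARN: dropping " + orphanedCount
                    + " orphaned FlowFiles without further warning");
            return true;                 // old behavior, restored explicitly
        }
        throw new IllegalStateException(
                "Found " + orphanedCount + " orphaned FlowFiles; fix flow.xml.gz or set "
                        + PROP + "=true");
    }
}
```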





[GitHub] [nifi] markap14 opened a new pull request #4454: NIFI-7706, NIFI-5702: Allow NiFi to keep FlowFiles if their queue is …

2020-08-05 Thread GitBox


markap14 opened a new pull request #4454:
URL: https://github.com/apache/nifi/pull/4454


   …unknown. This way, if a Flow is inadvertently removed, updated, etc., and 
NiFi is restarted, the data will not be dropped by default. The old mechanism 
of dropping data is exposed via a property
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-YYYY._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
`.name` (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[jira] [Updated] (NIFI-5702) FlowFileRepo should not discard data (at least not by default)

2020-08-05 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-5702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-5702:
-
Fix Version/s: 1.12.0
   Status: Patch Available  (was: Open)

> FlowFileRepo should not discard data (at least not by default)
> --
>
> Key: NIFI-5702
> URL: https://issues.apache.org/jira/browse/NIFI-5702
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.9.2, 1.7.1
>Reporter: Brandon Rhys DeVries
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.12.0
>
>
> The WriteAheadFlowFileRepository currently discards data it cannot find a 
> queue for.  Unfortunately, we have run into issues where, when rejoining a 
> node to a cluster, the flow.xml.gz can go "missing".  This results in the 
> instance creating a new, empty, flow.xml.gz and then continuing on... and not 
> finding queues for any of its existing data, dropping it all.  Regardless of 
> the circumstances leading to an empty (or unexpectedly modified) flow.xml.gz, 
> dropping data without user input seems less than ideal. 
> Internally, my group has added a property 
> "remove.orphaned.flowfiles.on.startup", defaulting to "false".  On 
> startup, rather than silently dropping data, the repo will throw an exception 
> preventing startup.  The operator can then choose to either "fix" any 
> unexpected issues with the flow.xml.gz, or they can set the above property to 
> "true" which restores the original behavior allowing the system to be 
> restarted.  When set to "true" this property also results in a warning 
> message indicating that in this configuration the repo can drop data without 
> (advance) warning.  
>  
>  
> [1] 
> https://github.com/apache/nifi/blob/support/nifi-1.7.x/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/WriteAheadFlowFileRepository.java#L596





[jira] [Commented] (NIFI-5702) FlowFileRepo should not discard data (at least not by default)

2020-08-05 Thread Mark Payne (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-5702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171719#comment-17171719
 ] 

Mark Payne commented on NIFI-5702:
--

Due to some changes that were made for 1.12 that are entirely unrelated to this 
concern, we did make some changes to the FlowFile Repository. Those changes 
were then addressed by implementing exactly this: I added a property to control 
whether or not data is dropped, and defaulted the property to retaining the 
data instead of dropping it. I don't think there's a backward compatibility 
concern here - we are not changing an API. Not dropping the data is, I think, 
the more widely expected behavior, but more importantly the default should 
always err on the side of "Do not lose data" :)

> FlowFileRepo should not discard data (at least not by default)
> --
>
> Key: NIFI-5702
> URL: https://issues.apache.org/jira/browse/NIFI-5702
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.7.1, 1.9.2
>Reporter: Brandon Rhys DeVries
>Assignee: Mark Payne
>Priority: Major
>
> The WriteAheadFlowFileRepository currently discards data it cannot find a 
> queue for.  Unfortunately, we have run into issues where, when rejoining a 
> node to a cluster, the flow.xml.gz can go "missing".  This results in the 
> instance creating a new, empty, flow.xml.gz and then continuing on... and not 
> finding queues for any of its existing data, dropping it all.  Regardless of 
> the circumstances leading to an empty (or unexpectedly modified) flow.xml.gz, 
> dropping data without user input seems less than ideal. 
> Internally, my group has added a property 
> "remove.orphaned.flowfiles.on.startup", defaulting to "false".  On 
> startup, rather than silently dropping data, the repo will throw an exception 
> preventing startup.  The operator can then choose to either "fix" any 
> unexpected issues with the flow.xml.gz, or they can set the above property to 
> "true" which restores the original behavior allowing the system to be 
> restarted.  When set to "true" this property also results in a warning 
> message indicating that in this configuration the repo can drop data without 
> (advance) warning.  
>  
>  
> [1] 
> https://github.com/apache/nifi/blob/support/nifi-1.7.x/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/WriteAheadFlowFileRepository.java#L596





[GitHub] [nifi] rfellows commented on pull request #4449: [NIFI-7705] update frontend deps

2020-08-05 Thread GitBox


rfellows commented on pull request #4449:
URL: https://github.com/apache/nifi/pull/4449#issuecomment-669474604


   It looks like upgrading slick-grid may have had some adverse effects on some 
of the usages of it...
   
   In particular:
   * Parameter tab in Parameter Context dialog
   
![parameters](https://user-images.githubusercontent.com/713866/89458078-720dce00-d734-11ea-803e-7db2ad14f0aa.png)
   
   * Variables
   
![variables](https://user-images.githubusercontent.com/713866/89458175-95d11400-d734-11ea-8932-9a812c0d8f1b.png)
   
   
   I checked around for other usages of the grid and didn't see any other 
issues.
   







[jira] [Updated] (NIFI-7711) GenerateTableFetch generates fetch on empty table

2020-08-05 Thread Daniel Lorych (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Lorych updated NIFI-7711:

Description: 
h6. Description

GenerateTableFetch generates a "select all" where clause (1=1) for an empty 
table, which leads to inconsistent state (and duplicate flowfiles).
 The generated SQL statement (1=1) can return values that were created after 
the max values were collected during statement generation. On a subsequent run, 
with existing data, the statement will contain a where clause limited by maxValue.
h6. Environment:
 * Table is empty
 * 'Maximum-value Columns' property is set to PK
 * 'Partition Size' property set to 0
  

h6. Expected behaviour:

No or empty flowfile should be generated.


h6. Root Cause:

`numberOfFetches` is calculated incorrectly for `partitionSize == 0`. 
Calculation should take into account returned `rowCount`.
[GenerateTableFetch.java#L462|https://github.com/apache/nifi/blob/4d940bb151eb8d250b0319318b96d23c4a9819ae/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java#L462]


h6. Possible fix:
{code:java}
numberOfFetches = (partitionSize == 0) ? (rowCount == 0 ? 0 : 1) : (rowCount / 
partitionSize) + (rowCount % partitionSize == 0 ? 0 : 1);
{code}
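A minimal harness for the proposed expression (the class and method names are illustrative, not NiFi code): with `partitionSize == 0`, an empty table yields 0 fetches instead of 1.

```java
// Illustrative only: evaluates the proposed numberOfFetches expression for a
// given partition size and row count.
public class FetchCountSketch {
    static long numberOfFetches(long partitionSize, long rowCount) {
        return (partitionSize == 0)
                ? (rowCount == 0 ? 0 : 1)
                : (rowCount / partitionSize) + (rowCount % partitionSize == 0 ? 0 : 1);
    }
}
```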
 

  was:
h6. Description

GenerateTableFetch generates "select all" where clause (1=1) for an empty 
table, which leads to inconsistent state (and duplicate flowfiles).
 Generated SQL statement (1=1) can return values, which were created after 
collecting max values during statement generation. On a subsequent run, with 
existing data, the statement will contain a where clause with maxValue limited.
h6. Environment:
 * Table is empty
 * 'Maximum-value Columns' property is set to PK
 * 'Partition Size' property set to 0
  

h6. Expected behaviour:

Fetch statement should not be generated.


h6. Root Cause:

`numberOfFetches` is calculated incorrectly for `partitionSize == 0`. 
Calculation should take into account returned `rowCount`.
[GenerateTableFetch.java#L462|https://github.com/apache/nifi/blob/4d940bb151eb8d250b0319318b96d23c4a9819ae/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java#L462]


h6. Possible fix:
{code:java}
numberOfFetches = (partitionSize == 0) ? (rowCount == 0 ? 0 : 1) : (rowCount / 
partitionSize) + (rowCount % partitionSize == 0 ? 0 : 1);
{code}
 


> GenerateTableFetch generates fetch on empty table
> -
>
> Key: NIFI-7711
> URL: https://issues.apache.org/jira/browse/NIFI-7711
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: Daniel Lorych
>Priority: Major
>
> h6. Description
> GenerateTableFetch generates "select all" where clause (1=1) for an empty 
> table, which leads to inconsistent state (and duplicate flowfiles).
>  Generated SQL statement (1=1) can return values, which were created after 
> collecting max values during statement generation. On a subsequent run, with 
> existing data, the statement will contain a where clause with maxValue 
> limited.
> h6. Environment:
>  * Table is empty
>  * 'Maximum-value Columns' property is set to PK
>  * 'Partition Size' property set to 0
>   
> h6. Expected behaviour:
> No or empty flowfile should be generated.
> h6. Root Cause:
> `numberOfFetches` is calculated incorrectly for `partitionSize == 0`. 
> Calculation should take into account returned `rowCount`.
> [GenerateTableFetch.java#L462|https://github.com/apache/nifi/blob/4d940bb151eb8d250b0319318b96d23c4a9819ae/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java#L462]
> h6. Possible fix:
> {code:java}
> numberOfFetches = (partitionSize == 0) ? (rowCount == 0 ? 0 : 1) : (rowCount 
> / partitionSize) + (rowCount % partitionSize == 0 ? 0 : 1);
> {code}
>  





[jira] [Updated] (NIFI-7711) GenerateTableFetch generates fetch on empty table

2020-08-05 Thread Daniel Lorych (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Lorych updated NIFI-7711:

Priority: Minor  (was: Major)

> GenerateTableFetch generates fetch on empty table
> -
>
> Key: NIFI-7711
> URL: https://issues.apache.org/jira/browse/NIFI-7711
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: Daniel Lorych
>Priority: Minor
>
> h6. Description
> GenerateTableFetch generates "select all" where clause (1=1) for an empty 
> table, which leads to inconsistent state (and duplicate flowfiles).
>  Generated SQL statement (1=1) can return values, which were created after 
> collecting max values during statement generation. On a subsequent run, with 
> existing data, the statement will contain a where clause with maxValue 
> limited.
> h6. Environment:
>  * Table is empty
>  * 'Maximum-value Columns' property is set to PK
>  * 'Partition Size' property set to 0
>   
> h6. Expected behaviour:
> No or empty flowfile should be generated.
> h6. Root Cause:
> `numberOfFetches` is calculated incorrectly for `partitionSize == 0`. 
> Calculation should take into account returned `rowCount`.
> [GenerateTableFetch.java#L462|https://github.com/apache/nifi/blob/4d940bb151eb8d250b0319318b96d23c4a9819ae/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java#L462]
> h6. Possible fix:
> {code:java}
> numberOfFetches = (partitionSize == 0) ? (rowCount == 0 ? 0 : 1) : (rowCount 
> / partitionSize) + (rowCount % partitionSize == 0 ? 0 : 1);
> {code}
>  





[jira] [Resolved] (NIFIREG-409) Incorrect revision returned on an updated entity

2020-08-05 Thread Kevin Doran (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFIREG-409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran resolved NIFIREG-409.
-
Fix Version/s: 0.8.0
   Resolution: Fixed

> Incorrect revision returned on an updated entity
> 
>
> Key: NIFIREG-409
> URL: https://issues.apache.org/jira/browse/NIFIREG-409
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.7.0
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
> Fix For: 0.8.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> With revisions enabled, if you create an entity like a user or group, then 
> update it, and then use the returned entity from the update to issue a 
> delete, you will get an error about the entity being out of date with the 
> server.
> The version in the revision that is being returned is being incremented by 2 
> instead of by 1 because the JDBC revision manager is incrementing it, but 
> then it is getting incremented again in the StandardRevisableEntityService.





[GitHub] [nifi] sardell commented on pull request #4449: [NIFI-7705] update frontend deps

2020-08-05 Thread GitBox


sardell commented on pull request #4449:
URL: https://github.com/apache/nifi/pull/4449#issuecomment-669430990


   LGTM. +1 (non-binding)







[GitHub] [nifi-registry] kevdoran commented on pull request #292: NIFIREG-408 Adding clients for tenants and policies

2020-08-05 Thread GitBox


kevdoran commented on pull request #292:
URL: https://github.com/apache/nifi-registry/pull/292#issuecomment-669442655


   Thanks @bbende. I started reviewing this. Locally I rebased on `main` after 
merging #291. 
   
   I'm not sure why, but although the integration tests are passing in GH 
Actions, they are failing for me locally
   
   ```
   Results :
   
   
   Tests in error:
 SecureNiFiRegistryClientIT.testTenantsClient:290 » SSL Connection has been 
shu...
   ```
   
   Looking into it, but wanted to make you aware and ask if you had seen this?







[GitHub] [nifi-registry] bbende commented on pull request #292: NIFIREG-408 Adding clients for tenants and policies

2020-08-05 Thread GitBox


bbende commented on pull request #292:
URL: https://github.com/apache/nifi-registry/pull/292#issuecomment-669461591


   Thanks for reviewing!  Hmm, I do remember seeing this error once, but I 
think I only encountered it when running the IT test from IntelliJ; after 
fixing the other revision problem I never saw it again. Interesting that it is 
happening from a maven build here.







[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #848: MINIFICPP-1268 Fix compiler warnings on Windows (VS2017)

2020-08-05 Thread GitBox


fgerlits commented on a change in pull request #848:
URL: https://github.com/apache/nifi-minifi-cpp/pull/848#discussion_r465977037



##
File path: nanofi/src/core/cstream.c
##
@@ -153,9 +148,9 @@ int readUTFLen(uint32_t * utflen, cstream * stream) {
   return ret;
 }
 
-int readUTF(char * buf, uint64_t buflen, cstream * stream) {
+int readUTF(char * buf, uint32_t buflen, cstream * stream) {

Review comment:
   Because everywhere we call this function, we call it with an argument of 
type `uint32_t`, and the return value (output argument) of `readUTFLen()` is 
also of type `uint32_t`.









[jira] [Updated] (NIFI-7711) GenerateTableFetch generates fetch on empty table

2020-08-05 Thread Daniel Lorych (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Lorych updated NIFI-7711:

Summary: GenerateTableFetch generates fetch on empty table  (was: 
GenerateTableFetch where clause with 1=1 leads t )

> GenerateTableFetch generates fetch on empty table
> -
>
> Key: NIFI-7711
> URL: https://issues.apache.org/jira/browse/NIFI-7711
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: Daniel Lorych
>Priority: Major
>
> h6. Description
> GenerateTableFetch generates "select all" where clause (1=1) for an empty 
> table, which leads to inconsistent state (and duplicate flowfiles).
>  Generated SQL statement (1=1) can return values, which were created after 
> collecting max values during statement generation. On a subsequent run, with 
> existing data, the statement will contain a where clause with maxValue 
> limited.
> h6. Environment:
>  * Table is empty
>  * 'Maximum-value Columns' property is set to PK
>  * 'Partition Size' property set to 0
>   
> h6. Expected behaviour:
> Fetch statement should not be generated.
> h6. Root Cause:
> `numberOfFetches` is calculated incorrectly for `partitionSize == 0`. 
> Calculation should take into account returned `rowCount`.
> [GenerateTableFetch.java#L462|https://github.com/apache/nifi/blob/4d940bb151eb8d250b0319318b96d23c4a9819ae/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java#L462]
> h6. Possible fix:
> {code:java}
> numberOfFetches = (partitionSize == 0) ? (rowCount == 0 ? 0 : 1) : (rowCount 
> / partitionSize) + (rowCount % partitionSize == 0 ? 0 : 1);
> {code}
>  





[jira] [Created] (NIFI-7711) GenerateTableFetch where clause with 1=1 leads t

2020-08-05 Thread Daniel Lorych (Jira)
Daniel Lorych created NIFI-7711:
---

 Summary: GenerateTableFetch where clause with 1=1 leads t 
 Key: NIFI-7711
 URL: https://issues.apache.org/jira/browse/NIFI-7711
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.11.4
Reporter: Daniel Lorych


h6. Description

GenerateTableFetch generates "select all" where clause (1=1) for an empty 
table, which leads to inconsistent state (and duplicate flowfiles).
 Generated SQL statement (1=1) can return values, which were created after 
collecting max values during statement generation. On a subsequent run, with 
existing data, the statement will contain a where clause with maxValue limited.
h6. Environment:
 * Table is empty
 * 'Maximum-value Columns' property is set to PK
 * 'Partition Size' property set to 0
  

h6. Expected behaviour:

Fetch statement should not be generated.


h6. Root Cause:

`numberOfFetches` is calculated incorrectly for `partitionSize == 0`. 
Calculation should take into account returned `rowCount`.
[GenerateTableFetch.java#L462|https://github.com/apache/nifi/blob/4d940bb151eb8d250b0319318b96d23c4a9819ae/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java#L462]


h6. Possible fix:
{code:java}
numberOfFetches = (partitionSize == 0) ? (rowCount == 0 ? 0 : 1) : (rowCount / 
partitionSize) + (rowCount % partitionSize == 0 ? 0 : 1);
{code}
 





[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #848: MINIFICPP-1268 Fix compiler warnings on Windows (VS2017)

2020-08-05 Thread GitBox


fgerlits commented on a change in pull request #848:
URL: https://github.com/apache/nifi-minifi-cpp/pull/848#discussion_r465978905



##
File path: libminifi/include/io/CRCStream.h
##
@@ -224,7 +222,7 @@ template<typename T>
 int CRCStream<T>::readData(uint8_t *buf, int buflen) {
   int ret = child_stream_->read(buf, buflen);
   if (ret > 0) {
-crc_ = crc32(crc_, buf, ret);
+crc_ = crc32(gsl::narrow<uLong>(crc_), buf, ret);

Review comment:
   `uLong` in zlib is a typedef to `unsigned long`, so on 64-bit Linux it 
is 64 bits.  I have changed the type of `crc_` to `uLong`, so now we only have 
one `gsl::narrow` (called once per instance) instead of three (called many 
times).
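   For context, `gsl::narrow` is a checked cast that fails when the conversion would lose information; a minimal stand-in (not the GSL implementation, which throws `gsl::narrowing_error`) looks like:

```cpp
#include <cstdint>
#include <stdexcept>

// Minimal stand-in for gsl::narrow: cast, then verify the round trip.
// This is how a value held in a 64-bit type (such as zlib's uLong on
// 64-bit Linux) can be handed safely to a 32-bit API.
template <typename To, typename From>
To narrow(From value) {
  const To result = static_cast<To>(value);
  if (static_cast<From>(result) != value) {
    throw std::runtime_error("narrowing conversion lost information");
  }
  return result;
}
```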









[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #848: MINIFICPP-1268 Fix compiler warnings on Windows (VS2017)

2020-08-05 Thread GitBox


fgerlits commented on a change in pull request #848:
URL: https://github.com/apache/nifi-minifi-cpp/pull/848#discussion_r465979206



##
File path: libminifi/src/sitetosite/SiteToSiteClient.cpp
##
@@ -499,8 +499,9 @@ int16_t SiteToSiteClient::send(std::string transactionID, 
DataPacket *packet, co
   return -1;
 }
 
-ret = transaction->getStream().writeData(reinterpret_cast<uint8_t*>(const_cast<char*>(packet->payload_.c_str())), len);
-if (ret != (int64_t)len) {
+ret = transaction->getStream().writeData(reinterpret_cast<uint8_t*>(const_cast<char*>(packet->payload_.c_str())),
+ gsl::narrow<int>(len));
+if (ret != static_cast<int64_t>(len)) {

Review comment:
   fixed









[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #848: MINIFICPP-1268 Fix compiler warnings on Windows (VS2017)

2020-08-05 Thread GitBox


fgerlits commented on a change in pull request #848:
URL: https://github.com/apache/nifi-minifi-cpp/pull/848#discussion_r465979548



##
File path: extensions/http-curl/processors/InvokeHTTP.cpp
##
@@ -337,7 +337,7 @@ void InvokeHTTP::onTrigger(const 
std::shared_ptr<core::ProcessContext> &context,
 const std::vector<char> &response_body = client.getResponseBody();
 const std::vector<std::string> &response_headers = client.getHeaders();
 
-int64_t http_code = client.getResponseCode();
+int http_code = gsl::narrow(client.getResponseCode());

Review comment:
   I have removed this `gsl::narrow()` call.









[GitHub] [nifi] scottyaslan commented on pull request #4449: [NIFI-7705] update frontend deps

2020-08-05 Thread GitBox


scottyaslan commented on pull request #4449:
URL: https://github.com/apache/nifi/pull/4449#issuecomment-669551592


   Thanks @rfellows for the thorough review. I have updated this PR to address 
the table issues you found.







[jira] [Updated] (NIFI-7705) UI - update frontend deps

2020-08-05 Thread Scott Aslan (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Aslan updated NIFI-7705:
--
Status: Patch Available  (was: In Progress)

> UI - update frontend deps
> -
>
> Key: NIFI-7705
> URL: https://issues.apache.org/jira/browse/NIFI-7705
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Scott Aslan
>Assignee: Scott Aslan
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (NIFI-7572) Add a ScriptedTransformRecord processor

2020-08-05 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reopened NIFI-7572:


Reopening due to a bug where the Jython script is not recompiled when the 
script body changes

> Add a ScriptedTransformRecord processor
> ---
>
> Key: NIFI-7572
> URL: https://issues.apache.org/jira/browse/NIFI-7572
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> NiFi has started to put a heavier emphasis on Record-oriented processors, as 
> they provide many benefits including better performance and a better UX over 
> their purely byte-oriented counterparts. It is common to see users wanting to 
> transform a Record in some very specific way, but NiFi doesn't make this as 
> easy as it should. There are methods using ExecuteScript, 
> InvokeScriptedProcessor, ScriptedRecordWriter, and ScriptedRecordReader, for 
> instance.
> But each of these requires that the script writer understand a lot about NiFi 
> and how to expose properties, create Property Descriptors, etc., and for a 
> fairly simple transformation we end up with scripts where the logic takes 
> fewer lines of code than the boilerplate.
> We should expose a Processor that allows a user to write a script that takes 
> a Record and transforms that Record in some way. The processor should be 
> configured with the following:
>  * Record Reader (required)
>  * Record Writer (required)
>  * Script Language (required)
>  * Script Body or Script File (one and only one of these required)
> The script should implement a single method along the lines of:
> {code:java}
> Record transform(Record input) throws Exception; {code}
> If the script returns null, the input Record should be dropped. Otherwise, 
> whatever Record is returned should be written to the Record Writer.
> The processor should have two relationships: "success" and "failure."
> The script should not be allowed to expose any properties or define any 
> relationships. The point is to keep the script focused purely on processing 
> the record itself.
> It's not entirely clear to me how well the Record API works with some of the 
> scripting languages. The Record object does expose a method named toMap() 
> that returns a Map<String, Object> containing the underlying key/value pairs. 
> However, the values in that Map may themselves be Records. It might make 
> sense to expose a new method toNormalizedMap() or something along those lines 
> that would return a Map<String, Object> where the values have been 
> recursively normalized, in much the same way that we do for 
> JoltTransformRecord. This would perhaps allow for cleaner syntax, but I'm not 
> a scripting expert so I can't say for sure whether such a method is necessary.





[GitHub] [nifi] kmiracle86 opened a new pull request #4455: small typo fix

2020-08-05 Thread GitBox


kmiracle86 opened a new pull request #4455:
URL: https://github.com/apache/nifi/pull/4455


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   Correct a small typo in the UpdateCounter processor
   
   


