[GitHub] [nifi] YolandaMDavis commented on pull request #4274: NIFI-7437 - created separate thread for preloading predictions, refac…

2020-05-14 Thread GitBox


YolandaMDavis commented on pull request #4274:
URL: https://github.com/apache/nifi/pull/4274#issuecomment-628990831


   @mattyb149 updates are available



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] alopresto commented on pull request #4263: NIFI-7407 Refactored SSL context generation throughout framework and extensions.

2020-05-14 Thread GitBox


alopresto commented on pull request #4263:
URL: https://github.com/apache/nifi/pull/4263#issuecomment-628984876


   Force pushed as I had to fix the referenced Jira number in the recent commit 
messages. 




[GitHub] [nifi] alopresto commented on pull request #4263: NIFI-7407 Refactored SSL context generation throughout framework and extensions.

2020-05-14 Thread GitBox


alopresto commented on pull request #4263:
URL: https://github.com/apache/nifi/pull/4263#issuecomment-628981634


   Thanks @markap14 and @thenatog for the extensive testing. I pushed another 
commit which enables TLSv1.3 for the Java 11 UI/API port and should resolve the 
test error. 
   
   I can reproduce the S2S issue mentioned above on a secure 3 node cluster 
pointing back to itself when all nodes are hosted on the same machine. I don't 
think the PR changed how this worked, so I suspect this existed previously, but 
I'll try to address it here as well. I also encountered trouble retrieving S2S 
peers so I will add some unit tests there to see what I can isolate and fix. 




[GitHub] [nifi] thenatog edited a comment on pull request #4263: NIFI-7407 Refactored SSL context generation throughout framework and extensions.

2020-05-14 Thread GitBox


thenatog edited a comment on pull request #4263:
URL: https://github.com/apache/nifi/pull/4263#issuecomment-628899529


   Looks like there's currently a test error for JDK11.
   
   My testing:
   
   Java 8
   - Secure cluster
   - ListenHTTP
   - InvokeHTTP
   - Checked TLS negotiation for cluster comms data 
(cluster.node.protocol.port) with Wireshark which was TLSv1.2
   - Clustered Site to Site back to the same cluster (had errors)
   - openssl s_client protocol version tests:
   
https://docs.google.com/spreadsheets/d/1Vm17iqMdaPkqKtIYjGBUxG_TtRcdzhFRBnr_kaVTBVg/edit?usp=sharing
   
   Java 11
   - Secure cluster
   - ListenHTTP
   - InvokeHTTP
   - Checked TLS negotiation for cluster comms data 
(cluster.node.protocol.port) with Wireshark which was TLSv1.2
   - Clustered Site to Site back to the same cluster (had errors)
   - openssl s_client protocol version tests: 
https://docs.google.com/spreadsheets/d/1Vm17iqMdaPkqKtIYjGBUxG_TtRcdzhFRBnr_kaVTBVg/edit?usp=sharing
   
   Saw errors with site to site when using the HTTP protocol. I'm not certain 
if it's related to these changes or not:
   `"2020-05-14 15:16:06,799 WARN [Timer-Driven Process Thread-9] 
o.apache.nifi.remote.client.PeerSelector Could not communicate with 
node0.com:9551 to determine which nodes exist in the remote NiFi cluster, due 
to javax.net.ssl.SSLPeerUnverifiedException: Certificate for  
doesn't match any of the subject alternative names: [node1.com]"`
   It's possible these errors only happen for a cluster hosted on the same 
machine/localhost.
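   The SAN mismatch above means the hostname used to reach a node is not among
   the names its certificate was issued for. A minimal sketch of that check
   (a hypothetical simplification, not NiFi's or the JDK's actual verifier):

```java
import java.util.List;

public class SanMatchDemo {

    // Simplified illustration of hostname verification: the host used to
    // reach the peer must match one of the certificate's subject
    // alternative names, either exactly or via a single-label wildcard.
    static boolean matchesAnySan(String host, List<String> sans) {
        for (String san : sans) {
            if (san.startsWith("*.")) {
                // A wildcard SAN matches exactly one leading DNS label.
                int dot = host.indexOf('.');
                if (dot > 0 && host.substring(dot + 1).equalsIgnoreCase(san.substring(2))) {
                    return true;
                }
            } else if (san.equalsIgnoreCase(host)) {
                return true;
            }
        }
        return false;
    }
}
```

   A certificate whose only SAN is node1.com fails for a peer addressed as
   node0.com, which is the situation in the warning quoted above; issuing node
   certificates with SAN entries for every cluster hostname (or a wildcard)
   avoids it.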




[GitHub] [nifi] esecules edited a comment on pull request #4265: NIFI-7434: Endpoint suffix property in AzureStorageAccount NIFI processors

2020-05-14 Thread GitBox


esecules edited a comment on pull request #4265:
URL: https://github.com/apache/nifi/pull/4265#issuecomment-628748972


   Will this solve https://jira.apache.org/jira/browse/NIFI-7386?filter=-2 ?
   (Connection to the [Azurite storage 
emulator](https://hub.docker.com/_/microsoft-azure-storage-azurite))
   
   Doesn't look like it. To connect to the azurite emulator you'd need to use a 
connection string like this one 
[here](https://docs.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string#connect-to-the-emulator-account-using-the-well-known-account-name-and-key)
   
```
DefaultEndpointsProtocol=http;
AccountName=devstoreaccount1;
AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;
BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;
TableEndpoint=http://127.0.0.1:10002/devstoreaccount1;
QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;
```
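   For reference, such a connection string is just a ';'-separated list of
   key=value settings. The sketch below is an illustration, not the Azure
   SDK's parser; splitting on only the first '=' of each pair matters because
   Base64 account keys end in '=' padding.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConnectionStringDemo {

    // Parse an Azure-storage-style connection string into a settings map.
    static Map<String, String> parse(String connectionString) {
        Map<String, String> settings = new LinkedHashMap<>();
        for (String pair : connectionString.split(";")) {
            if (pair.isEmpty()) {
                continue; // tolerate a trailing ';'
            }
            // Split only on the FIRST '=' so Base64 '=' padding in the
            // value (e.g. the AccountKey) is preserved.
            int eq = pair.indexOf('=');
            settings.put(pair.substring(0, eq), pair.substring(eq + 1));
        }
        return settings;
    }
}
```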




[GitHub] [nifi] esecules commented on a change in pull request #4265: NIFI-7434: Endpoint suffix property in AzureStorageAccount NIFI processors

2020-05-14 Thread GitBox


esecules commented on a change in pull request #4265:
URL: https://github.com/apache/nifi/pull/4265#discussion_r425471051



##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/utils/AzureStorageUtils.java
##
@@ -85,6 +85,22 @@
 .sensitive(true)
 .build();
 
+public static final PropertyDescriptor ENDPOINT_SUFFIX = new PropertyDescriptor.Builder()
+        .name("storage-endpoint-suffix")
+        .displayName("Storage Endpoint Suffix")
+        .description(
+                "Storage accounts in public Azure always use a common FQDN suffix. " +
+                "Override this endpoint suffix with a different suffix in certain circumstances (like Azure Stack or non-public Azure regions). " +
+                "The preferred way is to configure them through a controller service specified in the Storage Credentials property. " +
+                "The controller service can provide a common/shared configuration for multiple/all Azure processors. Furthermore, the credentials " +
+                "can also be looked up dynamically with the 'Lookup' version of the service.")
+        .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)

Review comment:
   It can be valid if the endpoint suffix is defined in the controller 
service.






[GitHub] [nifi] MuazmaZ commented on a change in pull request #4273: NIFI-7446: Fail when the specified path is a directory in FetchAzureDataLakeStorage

2020-05-14 Thread GitBox


MuazmaZ commented on a change in pull request #4273:
URL: https://github.com/apache/nifi/pull/4273#discussion_r425486369



##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/FetchAzureDataLakeStorage.java
##
@@ -67,6 +67,10 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro
 final DataLakeDirectoryClient directoryClient = dataLakeFileSystemClient.getDirectoryClient(directory);
 final DataLakeFileClient fileClient = directoryClient.getFileClient(fileName);
 
+if (fileClient.getProperties().isDirectory()) {
+    throw new ProcessException(FILE.getDisplayName() + " (" + fileName + ") points to a directory. Full path: " + fileClient.getFilePath());

Review comment:
   The Exception I am getting is FetchAzureDataLakeStorage[id=x] failed 
to process session due to 
com.azure.storage.file.datalake.models.PathProperties.isDirectory()Ljava/lang/Boolean;;
 Processor Administratively Yielded for 1 sec: java.lang.NoSuchMethodError: 
com.azure.storage.file.datalake.models.PathProperties.isDirectory()Ljava/lang/Boolean;






[jira] [Assigned] (NIFI-7328) Improve OIDC Identity Provider

2020-05-14 Thread M Tien (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

M Tien reassigned NIFI-7328:


Assignee: M Tien  (was: Troy Melhase)

> Improve OIDC Identity Provider
> --
>
> Key: NIFI-7328
> URL: https://issues.apache.org/jira/browse/NIFI-7328
> Project: Apache NiFi
>  Issue Type: Epic
>  Components: Core Framework, Extensions
>Affects Versions: 1.11.4
>Reporter: Andy LoPresto
>Assignee: M Tien
>Priority: Major
>  Labels: authentication, identity, keystore, logging, oidc, 
> security, tls
>
> A number of issues with the OIDC identity provider have been discovered 
> recently. 
> * The logging is insufficient to debug issues with the IdP
> * It does not use the NiFi keystore & truststore but rather the JVM default
> * There are grammatical and syntactic errors in log and error messages
> * It may not process all claims in the IdP response correctly



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] mattyb149 commented on a change in pull request #4275: NIFI-6497: Allow FreeFormTextRecordSetWriter to access FlowFile Attributes

2020-05-14 Thread GitBox


mattyb149 commented on a change in pull request #4275:
URL: https://github.com/apache/nifi/pull/4275#discussion_r42521



##
File path: 
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/test/java/org/apache/nifi/text/TestFreeFormTextRecordSetWriter.java
##
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.text;
+
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.schema.access.SchemaAccessUtils;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Paths;
+import java.util.HashMap;
+import java.util.Map;
+
+import static org.junit.Assert.assertEquals;
+
+public class TestFreeFormTextRecordSetWriter {
+
+    private TestRunner setup(FreeFormTextRecordSetWriter writer) throws InitializationException, IOException {
+        TestRunner runner = TestRunners.newTestRunner(TestFreeFormTextRecordSetWriterProcessor.class);
+
+        final String outputSchemaText = new String(Files.readAllBytes(Paths.get("src/test/resources/text/testschema")));
+
+        runner.addControllerService("writer", writer);
+        runner.setProperty(TestFreeFormTextRecordSetWriterProcessor.WRITER, "writer");
+
+        runner.setProperty(writer, SchemaAccessUtils.SCHEMA_ACCESS_STRATEGY, SchemaAccessUtils.SCHEMA_TEXT_PROPERTY);
+        runner.setProperty(writer, SchemaAccessUtils.SCHEMA_TEXT, outputSchemaText);
+        runner.setProperty(writer, FreeFormTextRecordSetWriter.TEXT, "ID: ${ID}, Name: ${NAME}, Age: ${AGE}, Country: ${COUNTRY}, Username: ${user.name}");
+
+        return runner;
+    }
+
+    @Test
+    public void testDefault() throws IOException, InitializationException {
+        FreeFormTextRecordSetWriter writer = new FreeFormTextRecordSetWriter();
+        TestRunner runner = setup(writer);
+
+        runner.enableControllerService(writer);
+        Map<String, String> attributes = new HashMap<>();
+        attributes.put("user.name", "jdoe64");
+        runner.enqueue("", attributes);
+        runner.run();
+        runner.assertQueueEmpty();
+        runner.assertAllFlowFilesTransferred(TestFreeFormTextRecordSetWriterProcessor.SUCCESS, 1);
+

Review comment:
   Good point, I copy-pasted the code from the XML writer test and kind of 
left it alone, but you're right it could use more comments. Same goes for the 
test processor, I had to do it a bit differently than the XML one because it 
must inherit the writer schema from the record (even though it doesn't do 
anything with it). Good catch! will update






[GitHub] [nifi] ottobackwards commented on a change in pull request #4275: NIFI-6497: Allow FreeFormTextRecordSetWriter to access FlowFile Attributes

2020-05-14 Thread GitBox


ottobackwards commented on a change in pull request #4275:
URL: https://github.com/apache/nifi/pull/4275#discussion_r425440486



##
File path: 
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/test/java/org/apache/nifi/text/TestFreeFormTextRecordSetWriter.java
##
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.text;
+
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.schema.access.SchemaAccessUtils;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Paths;
+import java.util.HashMap;
+import java.util.Map;
+
+import static org.junit.Assert.assertEquals;
+
+public class TestFreeFormTextRecordSetWriter {
+
+private TestRunner setup(FreeFormTextRecordSetWriter writer) throws 
InitializationException, IOException {
+TestRunner runner = 
TestRunners.newTestRunner(TestFreeFormTextRecordSetWriterProcessor.class);
+
+final String outputSchemaText = new 
String(Files.readAllBytes(Paths.get("src/test/resources/text/testschema")));
+
+runner.addControllerService("writer", writer);
+runner.setProperty(TestFreeFormTextRecordSetWriterProcessor.WRITER, 
"writer");
+
+runner.setProperty(writer, SchemaAccessUtils.SCHEMA_ACCESS_STRATEGY, 
SchemaAccessUtils.SCHEMA_TEXT_PROPERTY);
+runner.setProperty(writer, SchemaAccessUtils.SCHEMA_TEXT, 
outputSchemaText);
+runner.setProperty(writer, FreeFormTextRecordSetWriter.TEXT, "ID: 
${ID}, Name: ${NAME}, Age: ${AGE}, Country: ${COUNTRY}, Username: 
${user.name}");
+
+return runner;
+}
+
+@Test
+public void testDefault() throws IOException, InitializationException {
+FreeFormTextRecordSetWriter writer = new FreeFormTextRecordSetWriter();
+TestRunner runner = setup(writer);
+
+runner.enableControllerService(writer);
+Map attributes = new HashMap<>();
+attributes.put("user.name", "jdoe64");
+runner.enqueue("", attributes);
+runner.run();
+runner.assertQueueEmpty();
+
runner.assertAllFlowFilesTransferred(TestFreeFormTextRecordSetWriterProcessor.SUCCESS,
 1);
+

Review comment:
   This is pretty neat, but I had to go through the code and debug to 
figure out why there were two return values here, because just looking at it, 
it looks wrong.
   
   Maybe a comment or two to help here would be good.






[GitHub] [nifi] mattyb149 commented on pull request #4274: NIFI-7437 - created separate thread for preloading predictions, refac…

2020-05-14 Thread GitBox


mattyb149 commented on pull request #4274:
URL: https://github.com/apache/nifi/pull/4274#issuecomment-628883444


   Sounds good, I think if we can get under 15 (even @ 14 sec) then we should 
be able to retain the UI responsiveness / update rate for small flows, and 
moving this to its own thread should improve the responsiveness (although maybe 
not update rate) for larger flows




[jira] [Updated] (NIFI-6497) Allow FreeFormTextRecordSetWriter to access FlowFile Attributes

2020-05-14 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-6497:
---
Affects Version/s: (was: 1.8.0)
   Status: Patch Available  (was: In Progress)

> Allow FreeFormTextRecordSetWriter to access FlowFile Attributes
> ---
>
> Key: NIFI-6497
> URL: https://issues.apache.org/jira/browse/NIFI-6497
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: DamienDEOM
>Assignee: Matt Burgess
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
>  
> I'm trying to convert json records to database insert statements using the 
> Splitrecords processor
> To do so, I use FreeFormTextRecordSetWriter controller with following text:
> {{INSERT INTO p17128.bookmark_users values ('${username}', 
> '${firstname:urlEncode()}', '${user_id}', '${accountnumber}', 
> '${lastname:urlEncode()}', '${nominal_time}'}})
> The resulting statement values are valid for all fields contained in Record 
> reader.
> Now I'd like to add a field that is a flowfile attribute ( ${nominal_time} ), 
> but I always get an empty string in the output.




[GitHub] [nifi] mattyb149 opened a new pull request #4275: NIFI-6497: Allow FreeFormTextRecordSetWriter to access FlowFile Attributes

2020-05-14 Thread GitBox


mattyb149 opened a new pull request #4275:
URL: https://github.com/apache/nifi/pull/4275


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   FreeFormTextRecordSetWriter allows the use of record field names in NiFi 
Expression Language expressions, but it does not allow the use of attributes or 
variables from the flowfile and environment, respectively. This PR makes 
attributes and variables available to the writer, but does not override any 
existing field name/value from the record.
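   The precedence described above (a record field wins over a same-named
   attribute or variable) can be sketched as a simple map merge. The names
   below are hypothetical illustrations, not the PR's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class EvaluationScopeDemo {

    // Build the name/value scope for Expression Language evaluation:
    // attributes/variables go in first, record fields are merged on top,
    // so an existing record field is never overridden.
    static Map<String, String> evaluationScope(Map<String, String> attributes,
                                               Map<String, String> recordFields) {
        Map<String, String> scope = new HashMap<>(attributes);
        scope.putAll(recordFields); // record values shadow attribute values
        return scope;
    }
}
```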
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [x] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   




[jira] [Assigned] (NIFI-6497) Allow FreeFormTextRecordSetWriter to access FlowFile Attributes

2020-05-14 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reassigned NIFI-6497:
--

Assignee: Matt Burgess

> Allow FreeFormTextRecordSetWriter to access FlowFile Attributes
> ---
>
> Key: NIFI-6497
> URL: https://issues.apache.org/jira/browse/NIFI-6497
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.8.0
>Reporter: DamienDEOM
>Assignee: Matt Burgess
>Priority: Major
>
>  
> I'm trying to convert json records to database insert statements using the 
> Splitrecords processor
> To do so, I use FreeFormTextRecordSetWriter controller with following text:
> {{INSERT INTO p17128.bookmark_users values ('${username}', 
> '${firstname:urlEncode()}', '${user_id}', '${accountnumber}', 
> '${lastname:urlEncode()}', '${nominal_time}'}})
> The resulting statement values are valid for all fields contained in Record 
> reader.
> Now I'd like to add a field that is a flowfile attribute ( ${nominal_time} ), 
> but I always get an empty string in the output.




[GitHub] [nifi] YolandaMDavis commented on pull request #4274: NIFI-7437 - created separate thread for preloading predictions, refac…

2020-05-14 Thread GitBox


YolandaMDavis commented on pull request #4274:
URL: https://github.com/apache/nifi/pull/4274#issuecomment-628832930


   > I'd ask you both to consider NOT having configurable properties if we can 
avoid it. If we can pick a good - albeit maybe too conservative default that 
might be enough for now. Then if we find tuning is desirable we add the 
property. I'm not against it - just recommending we try to avoid it
   
   OK, I'll see how far we can reduce the timing while still offering 
optimal performance for larger flow scenarios.




[GitHub] [nifi] joewitt commented on pull request #4274: NIFI-7437 - created separate thread for preloading predictions, refac…

2020-05-14 Thread GitBox


joewitt commented on pull request #4274:
URL: https://github.com/apache/nifi/pull/4274#issuecomment-628824851


   I'd ask you both to consider NOT having configurable properties if we can 
avoid it.  If we can pick a good - albeit maybe too conservative default that 
might be enough for now.  Then if we find tuning is desirable we add the 
property.  I'm not against it - just recommending we try to avoid it




[GitHub] [nifi] YolandaMDavis commented on a change in pull request #4274: NIFI-7437 - created separate thread for preloading predictions, refac…

2020-05-14 Thread GitBox


YolandaMDavis commented on a change in pull request #4274:
URL: https://github.com/apache/nifi/pull/4274#discussion_r425357901



##
File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/FlowController.java
##
@@ -683,8 +685,28 @@ private FlowController(
 
         StatusAnalyticsModelMapFactory statusAnalyticsModelMapFactory = new StatusAnalyticsModelMapFactory(extensionManager, nifiProperties);
 
-        analyticsEngine = new CachingConnectionStatusAnalyticsEngine(flowManager, componentStatusRepository, flowFileEventRepository, statusAnalyticsModelMapFactory,
+        analyticsEngine = new CachingConnectionStatusAnalyticsEngine(flowManager, componentStatusRepository, statusAnalyticsModelMapFactory,
                 predictionIntervalMillis, queryIntervalMillis, modelScoreName, modelScoreThreshold);
+
+        timerDrivenEngineRef.get().scheduleWithFixedDelay(new Runnable() {
+            @Override
+            public void run() {
+                try {
+                    Long startTs = System.currentTimeMillis();
+                    RepositoryStatusReport statusReport = flowFileEventRepository.reportTransferEvents(startTs);
+                    flowManager.findAllConnections().forEach(connection -> {
+                        ConnectionStatusAnalytics connectionStatusAnalytics = ((ConnectionStatusAnalytics) analyticsEngine.getStatusAnalytics(connection.getIdentifier()));
+                        connectionStatusAnalytics.refresh();
+                        connectionStatusAnalytics.loadPredictions(statusReport);
+                    });
+                    Long endTs = System.currentTimeMillis();
+                    LOG.debug("Time Elapsed for Prediction for loading all predictions: {}", endTs - startTs);
+                } catch (final Exception e) {
+                    LOG.error("Failed to generate predictions", e);
+                }
+            }
+        }, 0L, 30, TimeUnit.SECONDS);

Review comment:
   Yes, agreed. I think 15 seconds would be optimal as a default, but I will 
confirm with some large flow scenarios to be sure.
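
   The scheduling pattern discussed above can be sketched with a plain 
`ScheduledExecutorService`. This is a minimal stand-in, not NiFi's actual 
`FlowController` code: the refresh body is reduced to a counter, and the 
interval is a parameter rather than the fixed 30 seconds in the diff.

   ```java
   import java.util.concurrent.Executors;
   import java.util.concurrent.ScheduledExecutorService;
   import java.util.concurrent.TimeUnit;
   import java.util.concurrent.atomic.AtomicInteger;

   public class PredictionPreloader {
       // Runs a periodic "refresh" task for the given duration and returns how
       // many times it fired; a stand-in for NiFi's prediction preloading loop.
       static int runFor(long durationMillis, long delayMillis) throws InterruptedException {
           ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
           AtomicInteger refreshCount = new AtomicInteger();
           scheduler.scheduleWithFixedDelay(() -> {
               try {
                   // In NiFi this would refresh each connection's status analytics
                   // and load its predictions; here we only count invocations.
                   refreshCount.incrementAndGet();
               } catch (Exception e) {
                   // Catching here keeps one failed run from cancelling the schedule.
                   e.printStackTrace();
               }
           }, 0L, delayMillis, TimeUnit.MILLISECONDS);
           Thread.sleep(durationMillis);
           scheduler.shutdownNow();
           return refreshCount.get();
       }

       public static void main(String[] args) throws InterruptedException {
           System.out.println("refreshes=" + runFor(350, 100));
       }
   }
   ```

   Using `scheduleWithFixedDelay` (rather than `scheduleAtFixedRate`) means a 
slow refresh on a very large flow delays the next run instead of stacking runs 
on top of each other.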





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (MINIFICPP-1220) Memory leak in CWEL

2020-05-14 Thread Marton Szasz (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Szasz closed MINIFICPP-1220.
---

> Memory leak in CWEL
> ---
>
> Key: MINIFICPP-1220
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1220
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Marton Szasz
>Assignee: Marton Szasz
>Priority: Major
> Fix For: 0.8.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Memory usage of minifi-cpp on Windows increases gradually when using 
> ConsumeWindowsEventLog; more frequent scheduling triggers a faster leak.
> The main issue seems to be double event creation in 
> Bookmark::getBookmarkHandleFromXML, leaking one of them.
> The fix I'm about to submit changes the code to use unique_ptr for event 
> ownership handling, reducing the risk of similar bugs in the future.
> Part of the credit goes to [~aboda] as we found the cause independently.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7263) Add a No tracking Strategy to ListFile/ListFTP

2020-05-14 Thread Waleed Al Aibani (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Waleed Al Aibani updated NIFI-7263:
---
Fix Version/s: 1.12.0
   Resolution: Resolved
   Status: Resolved  (was: Patch Available)

> Add a No tracking Strategy to ListFile/ListFTP
> --
>
> Key: NIFI-7263
> URL: https://issues.apache.org/jira/browse/NIFI-7263
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Jens M Kofoed
>Assignee: Waleed Al Aibani
>Priority: Major
>  Labels: ListFile, listftp
> Fix For: 1.12.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The ListFile/ListFTP processors have 2 Listing Strategies: Tracking 
> Timestamps and Tracking Entities.
> It would be very nice if the List processors also had a No Tracking ("fix it 
> yourself") strategy.
> When running NiFi in a cluster, List/Fetch is the perfect solution instead 
> of using GetFile. But we have had many cases where files in the pickup 
> folder have old timestamps, so we have to use Tracking Entities.
> The issue arises in cases where you are not allowed to delete files but have 
> to change the file filter: Tracking Entities starts all over and lists all 
> files again.
> In other situations we need to resend all data and would like to clear the 
> state of Tracking Entities. But you can't.
> So I have to build a small flow for detecting duplicates, in some cases 
> ignoring duplicates and in other cases allowing them to be sent. But it is a 
> pain in the ... to use Tracking Entities.
> So a NO STRATEGY option would be very nice.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7333) OIDC provider should use NiFi keystore & truststore

2020-05-14 Thread Andy LoPresto (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107555#comment-17107555
 ] 

Andy LoPresto commented on NIFI-7333:
-

I discussed this with [~markap14] and our proposal is:

* Add a new property in {{nifi.properties}} for an OIDC-specific truststore 
path and password
* If this value is not populated, continue using the JVM {{cacerts}} which will 
work with public OIDC providers like Google & Microsoft out of the box
* If the desired OIDC provider is not trusted by the JVM, a new truststore can 
be provided via these properties. It can be a standalone truststore which only 
trusts the OIDC provider, or the OIDC provider public certificate can be added 
to the NiFi truststore, and that file can be referenced here as well

There is another discussion about possibly allowing multiple "framework-level" 
keystores/truststores, which would be named and referenced by name in 
application behavior; that would obviate this need. 

> OIDC provider should use NiFi keystore & truststore
> ---
>
> Key: NIFI-7333
> URL: https://issues.apache.org/jira/browse/NIFI-7333
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Security
>Affects Versions: 1.11.4
>Reporter: Andy LoPresto
>Assignee: Troy Melhase
>Priority: Major
>  Labels: keystore, oidc, security, tls
>
> The OIDC provider uses generic HTTPS requests to the OIDC IdP, but does not 
> configure these requests to use the NiFi keystore or truststore. Rather, it 
> uses the default JVM keystore and truststore, which leads to difficulty 
> debugging PKIX and other TLS negotiation errors. It should be switched to use 
> the NiFi keystore and truststore as other NiFi framework services do. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] alopresto edited a comment on pull request #4263: NIFI-7407 Refactored SSL context generation throughout framework and extensions.

2020-05-14 Thread GitBox


alopresto edited a comment on pull request #4263:
URL: https://github.com/apache/nifi/pull/4263#issuecomment-628794810


   I made the dropdown for `RestrictedSSLContextService` more explicit: it now 
provides `TLS, TLSv1.2` on Java 8 and `TLS, TLSv1.2, TLSv1.3` on Java 11. 
Selecting `TLS` will allow connections over `TLSv1.2` _and_ `TLSv1.3` (on Java 
11 _only_; Java 8 does not support `TLSv1.3`). 
   
   ### With `TLSv1.2` selected:
   
   ```
   
   # TLSv1.2 is successful
   
..oolkit-1.11.4   master ●  echo Q | openssl s_client -connect 
node1.nifi: -key nifi-key.key -cert nifi-cert.pem -CAfile nifi-cert.pem 
-tls1_2
   CONNECTED(0003)
   depth=1 OU = NIFI, CN = ca.nifi
   verify return:1
   depth=0 OU = NIFI, CN = node1.nifi
   verify return:1
   ---
   Certificate chain
0 s:OU = NIFI, CN = node1.nifi
  i:OU = NIFI, CN = ca.nifi
1 s:OU = NIFI, CN = ca.nifi
  i:OU = NIFI, CN = ca.nifi
   ---
   ...
   ---
   SSL handshake has read 2289 bytes and written 1464 bytes
   Verification: OK
   ---
   New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384
   Server public key is 2048 bit
   Secure Renegotiation IS supported
   Compression: NONE
   Expansion: NONE
   No ALPN negotiated
   SSL-Session:
   Protocol  : TLSv1.2
   Cipher: ECDHE-RSA-AES256-GCM-SHA384
   Session-ID: BA2FC4...0D2790
   Session-ID-ctx:
   Master-Key: C773AC...A85A19
   PSK identity: None
   PSK identity hint: None
   SRP username: None
   Start Time: 1589478477
   Timeout   : 7200 (sec)
   Verify return code: 0 (ok)
   Extended master secret: yes
   ---
   DONE
   
   # TLSv1.3 fails
   
..oolkit-1.11.4   master ●  echo Q | openssl s_client -connect 
node1.nifi: -key nifi-key.key -cert nifi-cert.pem -CAfile nifi-cert.pem 
-tls1_3
   CONNECTED(0003)
   4570201536:error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol 
version:ssl/record/rec_layer_s3.c:1544:SSL alert number 70
   ---
   no peer certificate available
   ---
   No client certificate CA names sent
   ---
   SSL handshake has read 7 bytes and written 234 bytes
   Verification: OK
   ---
   New, (NONE), Cipher is (NONE)
   Secure Renegotiation IS NOT supported
   Compression: NONE
   Expansion: NONE
   No ALPN negotiated
   Early data was not sent
   Verify return code: 0 (ok)
   ---
✘  ..oolkit-1.11.4   master ● 
   ```
   
   ### With `TLS` selected:
   
   ```
   
   # TLSv1.3 is successful
   
..oolkit-1.11.4   master ●  echo Q | openssl s_client -connect 
node1.nifi: -key nifi-key.key -cert nifi-cert.pem -CAfile nifi-cert.pem 
-tls1_3
   CONNECTED(0003)
   depth=1 OU = NIFI, CN = ca.nifi
   verify return:1
   depth=0 OU = NIFI, CN = node1.nifi
   verify return:1
   ---
   Certificate chain
0 s:OU = NIFI, CN = node1.nifi
  i:OU = NIFI, CN = ca.nifi
1 s:OU = NIFI, CN = ca.nifi
  i:OU = NIFI, CN = ca.nifi
   ---
   ...
   ---
   SSL handshake has read 2510 bytes and written 1800 bytes
   Verification: OK
   ---
   New, TLSv1.3, Cipher is TLS_AES_128_GCM_SHA256
   Server public key is 2048 bit
   Secure Renegotiation IS NOT supported
   Compression: NONE
   Expansion: NONE
   No ALPN negotiated
   Early data was not sent
   Verify return code: 0 (ok)
   ---
   DONE
   
   # TLSv1.2 is successful
   
..oolkit-1.11.4   master ●  echo Q | openssl s_client -connect 
node1.nifi: -key nifi-key.key -cert nifi-cert.pem -CAfile nifi-cert.pem 
-tls1_2
   CONNECTED(0003)
   depth=1 OU = NIFI, CN = ca.nifi
   verify return:1
   depth=0 OU = NIFI, CN = node1.nifi
   verify return:1
   ---
   Certificate chain
0 s:OU = NIFI, CN = node1.nifi
  i:OU = NIFI, CN = ca.nifi
1 s:OU = NIFI, CN = ca.nifi
  i:OU = NIFI, CN = ca.nifi
   ---
   ...
   ---
   SSL handshake has read 2293 bytes and written 1464 bytes
   Verification: OK
   ---
   New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384
   Server public key is 2048 bit
   Secure Renegotiation IS supported
   Compression: NONE
   Expansion: NONE
   No ALPN negotiated
   SSL-Session:
   Protocol  : TLSv1.2
   Cipher: ECDHE-RSA-AES256-GCM-SHA384
   Session-ID: 7E5D46...1F4E63
   Session-ID-ctx:
   Master-Key: AB80DE...4FCC9A
   PSK identity: None
   PSK identity hint: None
   SRP username: None
   Start Time: 1589478427
   Timeout   : 7200 (sec)
   Verify return code: 0 (ok)
   Extended master secret: yes
   ---
   DONE
   ```
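
   The protocol-pinning behavior shown above can be illustrated with a small 
JSSE sketch. This is an illustration of the selection semantics, not the 
`RestrictedSSLContextService` code itself: a concrete version pins the client 
engine to exactly that protocol, while `TLS` keeps every version the runtime 
enables by default (which includes `TLSv1.3` on Java 11+).

   ```java
   import javax.net.ssl.SSLContext;
   import javax.net.ssl.SSLEngine;
   import java.util.Arrays;

   public class TlsProtocolCheck {
       // Returns the protocol versions a client engine will offer for the
       // given selection: "TLSv1.2" pins the engine to that protocol only,
       // while "TLS" leaves the runtime's default set of versions enabled.
       static String[] enabledProtocols(String selected) throws Exception {
           SSLContext ctx = SSLContext.getInstance("TLS");
           ctx.init(null, null, null); // default key and trust material
           SSLEngine engine = ctx.createSSLEngine();
           if (!"TLS".equals(selected)) {
               engine.setEnabledProtocols(new String[]{selected});
           }
           return engine.getEnabledProtocols();
       }

       public static void main(String[] args) throws Exception {
           System.out.println("TLSv1.2 -> " + Arrays.toString(enabledProtocols("TLSv1.2")));
           System.out.println("TLS     -> " + Arrays.toString(enabledProtocols("TLS")));
       }
   }
   ```

   This mirrors why `-tls1_3` fails against a server pinned to `TLSv1.2` but 
succeeds when `TLS` is selected on Java 11.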



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org







[jira] [Updated] (NIFI-7453) PutKudu kerberos issue after TGT expires

2020-05-14 Thread Tamas Palfy (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Palfy updated NIFI-7453:
--
Summary: PutKudu kerberos issue after TGT expires   (was: PutKudu kerberos 
issue reoccurs a week after restart )

> PutKudu kerberos issue after TGT expires 
> -
>
> Key: NIFI-7453
> URL: https://issues.apache.org/jira/browse/NIFI-7453
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Tamas Palfy
>Priority: Major
>
> When PutKudu is used with kerberos authentication, it stops working when the 
> TGT expires with the following logs/exceptions:
> {noformat}
> ERROR org.apache.nifi.processors.kudu.PutKudu: 
> PutKudu[id=4ad63284-cb39-1c78-bd0e-c280df797039] Failed to write due to Row 
> error for primary key="feebfe81-4ee6-4a8b-91ca-311e1c4f8749", tablet=null, 
> server=null, status=Runtime error: cannot re-acquire authentication token 
> after 5 attempts (Couldn't find a valid master in (HOST:PORT). Exceptions 
> received: [org.apache.kudu.client.NonRecoverableException: server requires 
> authentication, but client does not have Kerberos credentials (tgt). 
> Authentication tokens were not used because this connection will be used to 
> acquire a new token and therefore requires primary credentials])
> 2020-05-13 09:27:05,157 INFO org.apache.kudu.client.ConnectToCluster: Unable 
> to connect to master HOST:PORT: server requires authentication, but client 
> does not have Kerberos credentials (tgt). Authentication tokens were not used 
> because this connection will be used to acquire a new token and therefore 
> requires primary credentials
> 2020-05-13 09:27:05,159 WARN org.apache.kudu.client.AsyncKuduSession: 
> unexpected tablet lookup failure for operation KuduRpc(method=Write, 
> tablet=null, attempt=0, DeadlineTracker(timeout=0, elapsed=15), No traces)
> org.apache.kudu.client.NonRecoverableException: cannot re-acquire 
> authentication token after 5 attempts (Couldn't find a valid master in 
> (HOST:PORT). Exceptions received: [org.apache.kudu.client.NonRecover
> ableException: server requires authentication, but client does not have 
> Kerberos credentials (tgt). Authentication tokens were not used because this 
> connection will be used to acquire a new token and therefore requires primary 
> credentials])
> at 
> org.apache.kudu.client.AuthnTokenReacquirer$1NewAuthnTokenErrB.call(AuthnTokenReacquirer.java:158)
> at 
> org.apache.kudu.client.AuthnTokenReacquirer$1NewAuthnTokenErrB.call(AuthnTokenReacquirer.java:141)
> at com.stumbleupon.async.Deferred.doCall(Deferred.java:1280)
> at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1259)
> at com.stumbleupon.async.Deferred.callback(Deferred.java:1002)
> at 
> org.apache.kudu.client.ConnectToCluster.incrementCountAndCheckExhausted(ConnectToCluster.java:246)
> ...
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7453) PutKudu kerberos issue reoccurs a week after restart

2020-05-14 Thread Tamas Palfy (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Palfy updated NIFI-7453:
--
Description: 
When PutKudu is used with kerberos authentication, it stops working when the 
TGT expires with the following logs/exceptions:



{noformat}
ERROR org.apache.nifi.processors.kudu.PutKudu: 
PutKudu[id=4ad63284-cb39-1c78-bd0e-c280df797039] Failed to write due to Row 
error for primary key="feebfe81-4ee6-4a8b-91ca-311e1c4f8749", tablet=null, 
server=null, status=Runtime error: cannot re-acquire authentication token after 
5 attempts (Couldn't find a valid master in (HOST:PORT). Exceptions received: 
[org.apache.kudu.client.NonRecoverableException: server requires 
authentication, but client does not have Kerberos credentials (tgt). 
Authentication tokens were not used because this connection will be used to 
acquire a new token and therefore requires primary credentials])
2020-05-13 09:27:05,157 INFO org.apache.kudu.client.ConnectToCluster: Unable to 
connect to master HOST:PORT: server requires authentication, but client does 
not have Kerberos credentials (tgt). Authentication tokens were not used 
because this connection will be used to acquire a new token and therefore 
requires primary credentials
2020-05-13 09:27:05,159 WARN org.apache.kudu.client.AsyncKuduSession: 
unexpected tablet lookup failure for operation KuduRpc(method=Write, 
tablet=null, attempt=0, DeadlineTracker(timeout=0, elapsed=15), No traces)
org.apache.kudu.client.NonRecoverableException: cannot re-acquire 
authentication token after 5 attempts (Couldn't find a valid master in 
(HOST:PORT). Exceptions received: [org.apache.kudu.client.NonRecover
ableException: server requires authentication, but client does not have 
Kerberos credentials (tgt). Authentication tokens were not used because this 
connection will be used to acquire a new token and therefore requires primary 
credentials])
at 
org.apache.kudu.client.AuthnTokenReacquirer$1NewAuthnTokenErrB.call(AuthnTokenReacquirer.java:158)
at 
org.apache.kudu.client.AuthnTokenReacquirer$1NewAuthnTokenErrB.call(AuthnTokenReacquirer.java:141)
at com.stumbleupon.async.Deferred.doCall(Deferred.java:1280)
at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1259)
at com.stumbleupon.async.Deferred.callback(Deferred.java:1002)
at 
org.apache.kudu.client.ConnectToCluster.incrementCountAndCheckExhausted(ConnectToCluster.java:246)
...

{noformat}


> PutKudu kerberos issue reoccurs a week after restart 
> -
>
> Key: NIFI-7453
> URL: https://issues.apache.org/jira/browse/NIFI-7453
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Tamas Palfy
>Priority: Major
>
> When PutKudu is used with kerberos authentication, it stops working when the 
> TGT expires with the following logs/exceptions:
> {noformat}
> ERROR org.apache.nifi.processors.kudu.PutKudu: 
> PutKudu[id=4ad63284-cb39-1c78-bd0e-c280df797039] Failed to write due to Row 
> error for primary key="feebfe81-4ee6-4a8b-91ca-311e1c4f8749", tablet=null, 
> server=null, status=Runtime error: cannot re-acquire authentication token 
> after 5 attempts (Couldn't find a valid master in (HOST:PORT). Exceptions 
> received: [org.apache.kudu.client.NonRecoverableException: server requires 
> authentication, but client does not have Kerberos credentials (tgt). 
> Authentication tokens were not used because this connection will be used to 
> acquire a new token and therefore requires primary credentials])
> 2020-05-13 09:27:05,157 INFO org.apache.kudu.client.ConnectToCluster: Unable 
> to connect to master HOST:PORT: server requires authentication, but client 
> does not have Kerberos credentials (tgt). Authentication tokens were not used 
> because this connection will be used to acquire a new token and therefore 
> requires primary credentials
> 2020-05-13 09:27:05,159 WARN org.apache.kudu.client.AsyncKuduSession: 
> unexpected tablet lookup failure for operation KuduRpc(method=Write, 
> tablet=null, attempt=0, DeadlineTracker(timeout=0, elapsed=15), No traces)
> org.apache.kudu.client.NonRecoverableException: cannot re-acquire 
> authentication token after 5 attempts (Couldn't find a valid master in 
> (HOST:PORT). Exceptions received: [org.apache.kudu.client.NonRecover
> ableException: server requires authentication, but client does not have 
> Kerberos credentials (tgt). Authentication tokens were not used because this 
> connection will be used to acquire a new token and therefore requires primary 
> credentials])
> at 
> org.apache.kudu.client.AuthnTokenReacquirer$1NewAuthnTokenErrB.call(AuthnTokenReacquirer.java:158)
> at 
> org.apache.kudu.client.AuthnTokenReacquirer$1NewAuthnTokenErrB.call(AuthnTokenReacquirer.java:141)
> at 

[GitHub] [nifi] mattyb149 commented on a change in pull request #4274: NIFI-7437 - created separate thread for preloading predictions, refac…

2020-05-14 Thread GitBox


mattyb149 commented on a change in pull request #4274:
URL: https://github.com/apache/nifi/pull/4274#discussion_r425317571



##
File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/FlowController.java
##
@@ -683,8 +685,28 @@ private FlowController(
 
         StatusAnalyticsModelMapFactory statusAnalyticsModelMapFactory = new StatusAnalyticsModelMapFactory(extensionManager, nifiProperties);
 
-        analyticsEngine = new CachingConnectionStatusAnalyticsEngine(flowManager, componentStatusRepository, flowFileEventRepository, statusAnalyticsModelMapFactory,
+        analyticsEngine = new CachingConnectionStatusAnalyticsEngine(flowManager, componentStatusRepository, statusAnalyticsModelMapFactory,
                 predictionIntervalMillis, queryIntervalMillis, modelScoreName, modelScoreThreshold);
+
+        timerDrivenEngineRef.get().scheduleWithFixedDelay(new Runnable() {
+            @Override
+            public void run() {
+                try {
+                    Long startTs = System.currentTimeMillis();
+                    RepositoryStatusReport statusReport = flowFileEventRepository.reportTransferEvents(startTs);
+                    flowManager.findAllConnections().forEach(connection -> {
+                        ConnectionStatusAnalytics connectionStatusAnalytics = ((ConnectionStatusAnalytics) analyticsEngine.getStatusAnalytics(connection.getIdentifier()));
+                        connectionStatusAnalytics.refresh();
+                        connectionStatusAnalytics.loadPredictions(statusReport);
+                    });
+                    Long endTs = System.currentTimeMillis();
+                    LOG.debug("Time Elapsed for Prediction for loading all predictions: {}", endTs - startTs);
+                } catch (final Exception e) {
+                    LOG.error("Failed to generate predictions", e);
+                }
+            }
+        }, 0L, 30, TimeUnit.SECONDS);

Review comment:
   Should we make this a configurable property in the nifi.properties 
analytics section? That way for small flows we could set it to run more often 
to get better resolution, and slower for very large flows. Is 30 seconds a good 
default, or would ~15 work better on average (I think that's the current rate 
at which the UI was asking for updates)?
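
   Whichever default is chosen, wiring in an optional property could look like 
the minimal sketch below. The key `nifi.analytics.predict.preload.interval.seconds` 
is hypothetical (no such property exists yet); the point is only the 
fall-back-to-default behavior being debated.

   ```java
   import java.util.Properties;

   public class AnalyticsIntervalConfig {
       // Hypothetical property key; the actual name would be decided on the PR.
       static final String PRELOAD_INTERVAL_PROP =
               "nifi.analytics.predict.preload.interval.seconds";

       // Returns the configured interval, or the conservative default when the
       // property is absent or blank.
       static long preloadIntervalSeconds(Properties props, long defaultSeconds) {
           String raw = props.getProperty(PRELOAD_INTERVAL_PROP);
           if (raw == null || raw.trim().isEmpty()) {
               return defaultSeconds;
           }
           return Long.parseLong(raw.trim());
       }

       public static void main(String[] args) {
           Properties props = new Properties();
           System.out.println("unset -> " + preloadIntervalSeconds(props, 30));
           props.setProperty(PRELOAD_INTERVAL_PROP, "15");
           System.out.println("set   -> " + preloadIntervalSeconds(props, 30));
       }
   }
   ```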





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-7453) PutKudu kerberos issue reoccurs a week after restart

2020-05-14 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107492#comment-17107492
 ] 

Joe Witt commented on NIFI-7453:


[~tpalfy] Looks like a fun Kerberos issue, but it would be good to add more 
detail about what this JIRA is really about - maybe a stack trace and some 
explanation. Thanks

> PutKudu kerberos issue reoccurs a week after restart 
> -
>
> Key: NIFI-7453
> URL: https://issues.apache.org/jira/browse/NIFI-7453
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Tamas Palfy
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7453) PutKudu kerberos issue reoccurs a week after restart

2020-05-14 Thread Tamas Palfy (Jira)
Tamas Palfy created NIFI-7453:
-

 Summary: PutKudu kerberos issue reoccurs a week after restart 
 Key: NIFI-7453
 URL: https://issues.apache.org/jira/browse/NIFI-7453
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Tamas Palfy








[jira] [Updated] (NIFI-7437) UI is slow when nifi.analytics.predict.enabled is true

2020-05-14 Thread Yolanda M. Davis (Jira)


 [ https://issues.apache.org/jira/browse/NIFI-7437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yolanda M. Davis updated NIFI-7437:
---
Status: Patch Available  (was: Open)

> UI is slow when nifi.analytics.predict.enabled is true
> --
>
> Key: NIFI-7437
> URL: https://issues.apache.org/jira/browse/NIFI-7437
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI, Extensions
>Affects Versions: 1.11.4, 1.10.0
> Environment: Java11, CentOS8
>Reporter: Dmitry Ibragimov
>Assignee: Yolanda M. Davis
>Priority: Critical
>  Labels: features, performance
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We hit an issue with nifi.analytics.predict.enabled set to true after the 
> cluster upgrade to 1.11.4.
> We have about 4000 processors in the development environment, but most of 
> them are disabled: 256 running, 1263 stopped, 2543 disabled.
> After upgrading from 1.9.2 to 1.11.4 we decided to test the back-pressure 
> prediction feature and enabled it in the configuration:
> {code:java}
> nifi.analytics.predict.enabled=true
> nifi.analytics.predict.interval=3 mins
> nifi.analytics.query.interval=5 mins
> nifi.analytics.connection.model.implementation=org.apache.nifi.controller.status.analytics.models.OrdinaryLeastSquares
> nifi.analytics.connection.model.score.name=rSquared
> nifi.analytics.connection.model.score.threshold=.90
> {code}
> We then hit terrible UI performance degradation: the root page opens in 20 
> seconds instead of 200-500 ms, about 100 times slower. I've tested it in 
> different environments (CentOS 7/8, Java 8/11, clustered secured, clustered 
> unsecured, standalone unsecured) - all the same.
> In debug log for ThreadPoolRequestReplicator:
> {code:java}
> 2020-05-09 08:03:34,459 DEBUG [Replicate Request Thread-2] 
> o.a.n.c.c.h.r.ThreadPoolRequestReplicator For GET 
> /nifi-api/flow/process-groups/root (Request ID 
> c144196f-d4cb-4053-8828-70e06f7c5100), minimum response time = 19548, max = 
> 20625, average = 20161.0 ms
> 2020-05-09 08:03:34,459 DEBUG [Replicate Request Thread-2] 
> o.a.n.c.c.h.r.ThreadPoolRequestReplicator Node Responses for GET 
> /nifi-api/flow/process-groups/root (Request ID 
> c144196f-d4cb-4053-8828-70e06f7c5100):
> newnifi01:8080: 19548 millis
> newnifi02:8080: 20625 millis
> newnifi03:8080: 20310 millis{code}
> More detailed debug output:
>  
> {code:java}
> 2020-05-09 10:31:13,252 DEBUG [NiFi Web Server-21] 
> org.eclipse.jetty.server.HttpChannel REQUEST for 
> //newnifi01:8080/nifi-api/flow/process-groups/root on 
> HttpChannelOverHttp@68d3e945{r=1,c=false,c=false/false,a=IDLE,uri=//newnifi01:8080/nifi-api/flow/process-groups/root,age=0}
> GET //newnifi01:8080/nifi-api/flow/process-groups/root HTTP/1.1
> Host: newnifi01:8080
> ...
> 2020-05-09 10:31:13,256 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for calculating time 
> back pressure by content size in bytes. Returning -1
> 2020-05-09 10:31:13,257 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for calculating time 
> to back pressure by object count. Returning -1
> 2020-05-09 10:31:13,257 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting 
> content size in bytes for next interval. Returning -1
> 2020-05-09 10:31:13,257 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting 
> object count for next interval. Returning -1
> 2020-05-09 10:31:13,258 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting 
> object count for next interval. Returning -1
> 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting 
> content size in bytes for next interval. Returning -1
> 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Prediction model for connection id 
> eb602b2a-016f-1000--2767192a: nextIntervalPercentageUseCount=-1
> 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Prediction model for connection id 
> eb602b2a-016f-1000--2767192a: nextIntervalBytes=-1
> 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Prediction model for connection id 
> eb602b2a-016f-1000--2767192a: timeToBytesBackpressureMillis=-1
> 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Prediction model for connection id 
> eb602b2a-016f-1000--2767192a: nextIntervalCount=-1
> 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Prediction model for connection id 
> 
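> 
> The configuration above pairs an OrdinaryLeastSquares model with an rSquared 
> threshold of .90, and the debug log shows predictions returning -1 whenever 
> the model is not valid. A minimal, illustrative sketch of that gating (not 
> NiFi's actual implementation):
> 
> ```python
> # Fit an ordinary least squares line to recent observations and only trust
> # the prediction when rSquared clears the configured score threshold.
> def ols_fit(xs, ys):
>     n = len(xs)
>     mx, my = sum(xs) / n, sum(ys) / n
>     sxx = sum((x - mx) ** 2 for x in xs)
>     sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
>     slope = sxy / sxx
>     intercept = my - slope * mx
>     ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
>     ss_tot = sum((y - my) ** 2 for y in ys)
>     r_squared = 1.0 if ss_tot == 0 else 1 - ss_res / ss_tot
>     return slope, intercept, r_squared
> 
> def predict_next_count(xs, ys, next_x, score_threshold=0.90):
>     slope, intercept, r2 = ols_fit(xs, ys)
>     # Mirror the "Model is not valid ... Returning -1" behaviour in the log.
>     if r2 < score_threshold:
>         return -1
>     return intercept + slope * next_x
> ```
> 
> A strongly linear trend yields a usable prediction; noisy queue counts fall below the threshold and the caller sees -1, exactly as in the debug log above.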

[GitHub] [nifi] YolandaMDavis opened a new pull request #4274: NIFI-7437 - created separate thread for preloading predictions, refac…

2020-05-14 Thread GitBox


YolandaMDavis opened a new pull request #4274:
URL: https://github.com/apache/nifi/pull/4274


   …tors for performance
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[jira] [Created] (NIFI-7452) Support adls_gen2_directory in Atlas reporting task

2020-05-14 Thread Peter Turcsanyi (Jira)
Peter Turcsanyi created NIFI-7452:
-

 Summary: Support adls_gen2_directory in Atlas reporting task
 Key: NIFI-7452
 URL: https://issues.apache.org/jira/browse/NIFI-7452
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Peter Turcsanyi
Assignee: Peter Turcsanyi








[GitHub] [nifi] esecules edited a comment on pull request #4265: NIFI-7434: Endpoint suffix property in AzureStorageAccount NIFI processors

2020-05-14 Thread GitBox


esecules edited a comment on pull request #4265:
URL: https://github.com/apache/nifi/pull/4265#issuecomment-628748972


   Will this solve https://jira.apache.org/jira/browse/NIFI-7386?filter=-2 ?
   (Connection to the Azurite storage emulator)







[GitHub] [nifi] esecules commented on pull request #4265: NIFI-7434: Endpoint suffix property in AzureStorageAccount NIFI processors

2020-05-14 Thread GitBox


esecules commented on pull request #4265:
URL: https://github.com/apache/nifi/pull/4265#issuecomment-628748972


   Will this solve https://jira.apache.org/jira/browse/NIFI-7386?filter=-2 ?







[GitHub] [nifi-minifi-cpp] james94 commented on a change in pull request #781: MINIFICPP-1214: Converts H2O Processors to use ALv2 compliant H20-3 library

2020-05-14 Thread GitBox


james94 commented on a change in pull request #781:
URL: https://github.com/apache/nifi-minifi-cpp/pull/781#discussion_r424811407



##
File path: extensions/pythonprocessors/h2o/h2o3/mojo/ExecuteH2oMojoScoring.py
##
@@ -0,0 +1,165 @@
+#!/usr/bin/env python
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+-- after downloading the mojo model from h2o3, the following packages
+   are needed to execute the model to do batch or real-time scoring
+
+Make all packages available on your machine:
+
+sudo apt-get -y update
+
+Install Java to include open source H2O-3 algorithms:
+
+sudo apt-get -y install openjdk-8-jdk
+
+Install Datatable and pandas:
+
+pip install datatable
+pip install pandas
+
+Option 1: Install H2O-3 with conda
+
+conda create -n h2o3-nifi-minifi python=3.6
+conda activate h2o3-nifi-minifi
+conda config --append channels conda-forge
+conda install -y -c h2oai h2o
+
+Option 2: Install H2O-3 with pip
+
+pip install requests
+pip install tabulate
+pip install "colorama>=0.3.8"
+pip install future
+pip uninstall h2o
+If on Mac OS X, must include --user:
+pip install -f http://h2o-release.s3.amazonaws.com/h2o/latest_stable_Py.html h2o --user
+else:
+pip install -f http://h2o-release.s3.amazonaws.com/h2o/latest_stable_Py.html h2o
+
+"""
+import h2o
+import codecs
+import pandas as pd
+import datatable as dt
+
+mojo_model = None
+
+def describe(processor):
+""" describe what this processor does
+"""
+processor.setDescription("Executes H2O-3's MOJO Model in Python to do batch scoring or \
+real-time scoring for one or more predicted label(s) on the tabular test data in \
+the incoming flow file content. If tabular data is one row, then MOJO does real-time \
+scoring. If tabular data is multiple rows, then MOJO does batch scoring.")
+
+def onInitialize(processor):
+""" onInitialize is where you can set properties
+processor.addProperty(name, description, defaultValue, required, el)
+"""
+processor.addProperty("MOJO Model Filepath", "Add the filepath to the MOJO Model file. For example, \
+'path/to/mojo-model/GBM_grid__1_AutoML_20200511_075150_model_180.zip'.", "", True, False)
+
+processor.addProperty("Is First Line Header", "Add True or False for whether first line is header.", \
+"True", True, False)
+
+processor.addProperty("Input Schema", "If first line is not header, then you must add Input Schema for \
+incoming data. If there is more than one column name, write a comma separated list of \
+column names. Else, you do not need to add an Input Schema.", "", False, False)
+
+processor.addProperty("Use Output Header", "Add True or False for whether you want to use an output \
+header for your predictions.", "False", False, False)
+
+processor.addProperty("Output Schema", "To set Output Schema, 'Use Output Header' must be set to 'True'. \
+If you want more descriptive column names for your predictions, then add an Output Schema. If there \
+is more than one column name, write a comma separated list of column names. Else, H2O-3 will include \
+them by default.", "", False, False)
+
+def onSchedule(context):
+""" onSchedule is where you load and read properties
+this function is called 1 time when the processor is scheduled to run
+"""
+global mojo_model

Review comment:
   I use global for mojo_model so I can access mojo_model in both the 
onSchedule() and onTrigger() functions. In onSchedule(), I specify global 
since mojo_model is reassigned in that function; my intention is to 
instantiate a mojo_model object once, right at the start when the processor is 
scheduled to run. This applies to all processor instances. Then in 
onTrigger(), we use the mojo_model to make predictions. 
   
   For example, one processor instance could instantiate a classification 
mojo_model while another processor could instantiate a regression mojo_model. 
Then we have one processor that is making classification predictions while the 
other processor is making regression 
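
   The load-once pattern described above can be sketched as a small self-contained example; `FakeMojo` is a hypothetical stand-in for the real h2o MOJO handle, and the hook names only loosely mirror the processor API:
   
   ```python
   # Module-level model handle: set once in on_schedule, read in on_trigger.
   mojo_model = None
   load_count = 0

   class FakeMojo:
       """Stand-in for the real h2o.import_mojo(...) handle."""
       def __init__(self, path):
           self.path = path
       def predict(self, frame):
           return ["prediction"] * len(frame)

   def on_schedule(mojo_path):
       """Called once when the processor is scheduled; instantiates the model."""
       global mojo_model, load_count  # reassigning, so `global` is required
       mojo_model = FakeMojo(mojo_path)
       load_count += 1

   def on_trigger(rows):
       """Called per flow file; only reads the already-loaded model, so no
       `global` statement is needed here."""
       return mojo_model.predict(rows)
   ```
   
   The point of the design choice: the (potentially expensive) model load happens once at schedule time, and every trigger reuses the same handle.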

[jira] [Updated] (NIFI-7446) Fail when the specified path is a directory in FetchAzureDataLakeStorage

2020-05-14 Thread Peter Gyori (Jira)


 [ https://issues.apache.org/jira/browse/NIFI-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Gyori updated NIFI-7446:
--
Status: Patch Available  (was: In Progress)

https://github.com/apache/nifi/pull/4273

> Fail when the specified path is a directory in FetchAzureDataLakeStorage 
> -
>
> Key: NIFI-7446
> URL: https://issues.apache.org/jira/browse/NIFI-7446
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Peter Turcsanyi
>Assignee: Peter Gyori
>Priority: Major
>  Labels: azure
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> FetchAzureDataLakeStorage currently returns an empty FlowFile without error 
> when the specified path points to a directory on ADLS (instead of a file).
> FetchAzureDataLakeStorage should fail in this case.
> PathProperties.isDirectory() can be used to check if the retrieved entity is 
> a directory or a file (available from azure-storage-file-datalake 12.1.x).
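> The fix described above can be sketched roughly as follows; `FakeClient`, 
> `fetch_file`, and `ProcessException` are hypothetical stand-ins for the 
> azure-storage-file-datalake client and NiFi's exception type, with the 
> directory check mirroring `PathProperties.isDirectory()`:
> 
> ```python
> class ProcessException(Exception):
>     pass
> 
> def fetch_file(client, path):
>     """Fail fast when the path is a directory instead of emitting an
>     empty flow file; only read when it is a regular file."""
>     props = client.get_properties(path)
>     if props.get("is_directory"):
>         raise ProcessException(f"Path '{path}' is a directory, not a file")
>     return client.read(path)
> 
> class FakeClient:
>     """Tiny in-memory stand-in: path -> (is_directory, content)."""
>     def __init__(self, entries):
>         self.entries = entries
>     def get_properties(self, path):
>         return {"is_directory": self.entries[path][0]}
>     def read(self, path):
>         return self.entries[path][1]
> ```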





[GitHub] [nifi] pgyori opened a new pull request #4273: NIFI-7446: Fail when the specified path is a directory in FetchAzureDataLakeStorage

2020-05-14 Thread GitBox


pgyori opened a new pull request #4273:
URL: https://github.com/apache/nifi/pull/4273


   https://issues.apache.org/jira/browse/NIFI-7446
   
    Description of PR
   
   FetchAzureDataLakeStorage processor now throws exception when the specified 
path points to a directory.
   A newer version (12.1.1) of azure-storage-file-datalake is imported.
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[jira] [Resolved] (MINIFICPP-965) InvokeHTTP's relationships are unfinished

2020-05-14 Thread Arpad Boda (Jira)


 [ https://issues.apache.org/jira/browse/MINIFICPP-965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arpad Boda resolved MINIFICPP-965.
--
Fix Version/s: 0.8.0
   Resolution: Fixed

> InvokeHTTP's relationships are unfinished
> -
>
> Key: MINIFICPP-965
> URL: https://issues.apache.org/jira/browse/MINIFICPP-965
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Dániel Bakai
>Assignee: Murtuza Shareef
>Priority: Major
> Fix For: 0.8.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Only the Success relationship is added as a supported one, only Success, 
> RelRetry and RelNoRetry are used, RelFailure and RelResponse are unused.
> We should decide what error relationships we need and use them properly. 
> Also, currently both the original FlowFile and response FlowFile are routed 
> to Success, which is hard to use, and inconsistent with NiFi. Response 
> FlowFiles should be routed to RelResponse.
> There might be other unfinished features/deviation from documentation in 
> InvokeHTTP as well.
>  





[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #776: MINIFICPP-1202 - Handle C2 requests/responses using MinifiConcurrentQueue

2020-05-14 Thread GitBox


hunyadi-dev commented on a change in pull request #776:
URL: https://github.com/apache/nifi-minifi-cpp/pull/776#discussion_r425209344



##
File path: libminifi/src/c2/C2Agent.cpp
##
@@ -75,54 +78,54 @@ C2Agent::C2Agent(const 
std::shared_ptr lock(request_mutex, std::adopt_lock);
-if (!requests.empty()) {
-  int count = 0;
-  do {
-const C2Payload payload(std::move(requests.back()));
-requests.pop_back();
-try {
-  C2Payload && response = protocol_.load()->consumePayload(payload);
-  enqueue_c2_server_response(std::move(response));
-}
-catch(const std::exception &e) {
-  logger_->log_error("Exception occurred while consuming payload. error: %s", e.what());
-}
-catch(...) {
-  logger_->log_error("Unknonwn exception occurred while consuming 
payload.");
-}
-  }while(!requests.empty() && ++count < max_c2_responses);
+if (protocol_.load() != nullptr) {
+  std::vector<C2Payload> payload_batch;
+  payload_batch.reserve(max_c2_responses);
+  auto getRequestPayload = [&payload_batch] (C2Payload&& payload) { payload_batch.emplace_back(std::move(payload)); };
+  for (std::size_t attempt_num = 0; attempt_num < max_c2_responses; ++attempt_num) {
+if (!requests.consumeWaitFor(getRequestPayload, std::chrono::seconds(1))) {
+  break;
 }
   }
-  try {
-performHeartBeat();
-  }
-  catch(const std::exception &e) {
-logger_->log_error("Exception occurred while performing heartbeat. error: %s", e.what());
-  }
-  catch(...) {
-logger_->log_error("Unknonwn exception occurred while performing 
heartbeat.");
-  }
+  std::for_each(
+std::make_move_iterator(payload_batch.begin()),
+std::make_move_iterator(payload_batch.end()),
+[&] (C2Payload&& payload) {
+  try {
+C2Payload && response = protocol_.load()->consumePayload(std::move(payload));
+enqueue_c2_server_response(std::move(response));
+  }
+  catch(const std::exception &e) {
+logger_->log_error("Exception occurred while consuming payload. error: %s", e.what());
+  }
+  catch(...) {
+logger_->log_error("Unknonwn exception occurred while consuming 
payload.");
+  }
+});
 
-  checkTriggers();
+try {
+  performHeartBeat();
+}
+catch (const std::exception &e) {
+  logger_->log_error("Exception occurred while performing heartbeat. error: %s", e.what());
+}
+catch (...) {
+  logger_->log_error("Unknonwn exception occurred while performing 
heartbeat.");
+}
+}
+
+checkTriggers();
+
+return utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(heart_beat_period_));
+  };
 
-  return utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(heart_beat_period_));
-};
   functions_.push_back(c2_producer_);
 
-  c2_consumer_ = [&]() {
-if ( queue_mutex.try_lock_for(std::chrono::seconds(1)) ) {
-  C2Payload payload(Operation::HEARTBEAT);
-  {
-std::lock_guard<std::timed_mutex> lock(queue_mutex, std::adopt_lock);
-if (responses.empty()) {
-  return utils::TaskRescheduleInfo::RetryIn(std::chrono::milliseconds(C2RESPONSE_POLL_MS));
-}
-payload = std::move(responses.back());
-responses.pop_back();
+  c2_consumer_ = [&] {
+if (responses.size()) {
+  if (!responses.consumeWaitFor([this](C2Payload&& e) { extractPayload(std::move(e)); }, std::chrono::seconds(1))) {

Review comment:
   This is the same as before, right?
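
   The refactored producer in the diff above drains up to max_c2_responses 
payloads with a timed wait and logs per-payload failures rather than aborting 
the batch. A rough Python analogue of that pattern, with the standard-library 
queue as a stand-in for MinifiConcurrentQueue:
   
   ```python
   import queue

   def drain_batch(q, max_items, timeout=0.01):
       """Take up to max_items elements, waiting up to `timeout` for each;
       stop early when the queue stays empty (like consumeWaitFor)."""
       batch = []
       for _ in range(max_items):
           try:
               batch.append(q.get(timeout=timeout))
           except queue.Empty:
               break
       return batch

   def process_batch(batch, consume, log_error):
       """Process every payload; a failure is logged, not fatal to the batch."""
       responses = []
       for payload in batch:
           try:
               responses.append(consume(payload))
           except Exception as e:  # mirrors the catch-and-log in C2Agent
               log_error(str(e))
       return responses
   ```
   
   Collecting the batch first and consuming afterwards keeps the time spent holding the queue short, which is the point of the refactor.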









[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #776: MINIFICPP-1202 - Handle C2 requests/responses using MinifiConcurrentQueue

2020-05-14 Thread GitBox


hunyadi-dev commented on a change in pull request #776:
URL: https://github.com/apache/nifi-minifi-cpp/pull/776#discussion_r425209703



##
File path: libminifi/src/c2/C2Agent.cpp
##
@@ -75,54 +78,54 @@ C2Agent::C2Agent(const 
std::shared_ptr lock(request_mutex, std::adopt_lock);
-if (!requests.empty()) {
-  int count = 0;
-  do {
-const C2Payload payload(std::move(requests.back()));
-requests.pop_back();
-try {
-  C2Payload && response = protocol_.load()->consumePayload(payload);
-  enqueue_c2_server_response(std::move(response));
-}
-catch(const std::exception &e) {
-  logger_->log_error("Exception occurred while consuming payload. error: %s", e.what());
-}
-catch(...) {
-  logger_->log_error("Unknonwn exception occurred while consuming 
payload.");
-}
-  }while(!requests.empty() && ++count < max_c2_responses);
+if (protocol_.load() != nullptr) {
+  std::vector<C2Payload> payload_batch;
+  payload_batch.reserve(max_c2_responses);
+  auto getRequestPayload = [&payload_batch] (C2Payload&& payload) { payload_batch.emplace_back(std::move(payload)); };
+  for (std::size_t attempt_num = 0; attempt_num < max_c2_responses; ++attempt_num) {
+if (!requests.consumeWaitFor(getRequestPayload, std::chrono::seconds(1))) {

Review comment:
   Added `wait_until`.









[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #785: MINIFICPP-1202 - Fix unstable test for concurrent queue

2020-05-14 Thread GitBox


arpadboda closed pull request #785:
URL: https://github.com/apache/nifi-minifi-cpp/pull/785


   







[GitHub] [nifi-minifi-cpp] hunyadi-dev opened a new pull request #785: MINIFICPP-1202 - Fix unstable test for concurrent queue

2020-05-14 Thread GitBox


hunyadi-dev opened a new pull request #785:
URL: https://github.com/apache/nifi-minifi-cpp/pull/785


   The test ensures that consumers that put failed elements back on the queue 
eventually get to read new elements (i.e. the queue is FIFO). The test 
interrupted the consumer by stopping the queue, but there was no guarantee 
that the consumer had finished reading the data by that time.
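
   The FIFO property this test exercises can be sketched in a few lines; 
`collections.deque` stands in for MinifiConcurrentQueue and the 
requeue-on-failure behaviour is a simplified assumption:
   
   ```python
   from collections import deque

   def consume_with_retry(q, handle):
       """Pop from the front; on failure re-enqueue at the back so later
       elements still get their turn (FIFO), and report the outcome."""
       item = q.popleft()
       if handle(item):
           return True, item
       q.append(item)
       return False, item
   ```
   
   After a failed element is requeued it sits behind the newer elements, so a consumer is never starved by one persistently failing item.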
   
   ---
   ### Auto generated text, please do not waste your time reading this:
   
   
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   







[GitHub] [nifi-minifi-cpp] hunyadi-dev opened a new pull request #784: MINIFICPP-1206 - Rework and test ExecutePythonProcessor, add in-place script support

2020-05-14 Thread GitBox


hunyadi-dev opened a new pull request #784:
URL: https://github.com/apache/nifi-minifi-cpp/pull/784


   Rework and test ExecutePythonProcessor, add in-place script support
   
   Some changes will be added in different PRs:
- https://issues.apache.org/jira/browse/MINIFICPP-1222
- https://issues.apache.org/jira/browse/MINIFICPP-1223
- https://issues.apache.org/jira/browse/MINIFICPP-1224
   
   ---
   ### Auto generated text, please do not waste your time reading this:
   
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   







[GitHub] [nifi] turcsanyip commented on a change in pull request #4249: NIFI-7409: Azure managed identity support to Azure Datalake processors

2020-05-14 Thread GitBox


turcsanyip commented on a change in pull request #4249:
URL: https://github.com/apache/nifi/pull/4249#discussion_r424967588



##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/AbstractAzureDataLakeStorageProcessor.java
##
@@ -118,9 +136,14 @@
 .build();
 
 private static final List<PropertyDescriptor> PROPERTIES = Collections.unmodifiableList(
-Arrays.asList(AbstractAzureDataLakeStorageProcessor.ACCOUNT_NAME, AbstractAzureDataLakeStorageProcessor.ACCOUNT_KEY,
-AbstractAzureDataLakeStorageProcessor.SAS_TOKEN, AbstractAzureDataLakeStorageProcessor.FILESYSTEM,
-AbstractAzureDataLakeStorageProcessor.DIRECTORY, AbstractAzureDataLakeStorageProcessor.FILE));
+Arrays.asList(AbstractAzureDataLakeStorageProcessor.ACCOUNT_NAME,
+AbstractAzureDataLakeStorageProcessor.ACCOUNT_KEY,
+AbstractAzureDataLakeStorageProcessor.SAS_TOKEN,
+AbstractAzureDataLakeStorageProcessor.ENDPOINT_SUFFIX,

Review comment:
   As I mentioned in https://github.com/apache/nifi/pull/4265 (Blob/Queue 
processors Endpoint Suffix), I think we should keep the credential properties 
together on the UI.
   Unlike the Blob/Queue processors, there is one more credential property 
here, the Use Azure Managed Identity.
   So I would move the suffix property after it (or leave it at the bottom 
where it was earlier).
   
   As a general rule, it is preferred to add only one feature per Jira ticket, 
and NIFI-7409 is about the Managed Identity support. So this Endpoint Suffix 
is an additional change here and would rather belong to NIFI-7434, I think.
   Mixing multiple features together makes the review more complicated. Please 
keep this in mind in the future.

##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/AbstractAzureDataLakeStorageProcessor.java
##
@@ -134,17 +157,35 @@
 
 public static Collection<ValidationResult> validateCredentialProperties(final ValidationContext validationContext) {
 final List<ValidationResult> results = new ArrayList<>();
+
+final boolean useManagedIdentity = 
validationContext.getProperty(USE_MANAGED_IDENTITY).asBoolean();
 final String accountName = 
validationContext.getProperty(ACCOUNT_NAME).getValue();
-final String accountKey = 
validationContext.getProperty(ACCOUNT_KEY).getValue();
-final String sasToken = 
validationContext.getProperty(SAS_TOKEN).getValue();
-
-if (StringUtils.isNotBlank(accountName)
-&& ((StringUtils.isNotBlank(accountKey) && 
StringUtils.isNotBlank(sasToken)) || (StringUtils.isBlank(accountKey) && 
StringUtils.isBlank(sasToken)))) {
-results.add(new ValidationResult.Builder().subject("Azure Storage 
Credentials").valid(false)
-.explanation("either " + ACCOUNT_NAME.getDisplayName() + " 
with " + ACCOUNT_KEY.getDisplayName() +
-" or " + ACCOUNT_NAME.getDisplayName() + " with " 
+ SAS_TOKEN.getDisplayName() +
-" must be specified, not both")
+final boolean accountKeyIsSet  = 
validationContext.getProperty(ACCOUNT_KEY).isSet();
+final boolean sasTokenIsSet = 
validationContext.getProperty(SAS_TOKEN).isSet();
+
+if(useManagedIdentity){
+if(accountKeyIsSet || sasTokenIsSet) {
+final String msg = String.format(
+"('%s') and ('%s' or '%s') fields cannot be set at the 
same time.",
+USE_MANAGED_IDENTITY.getDisplayName(),
+ACCOUNT_KEY.getDisplayName(),
+SAS_TOKEN.getDisplayName()
+);
+results.add(new 
ValidationResult.Builder().subject("Credentials 
config").valid(false).explanation(msg).build());
+}
+} else {
+final String accountKey = 
validationContext.getProperty(ACCOUNT_KEY).getValue();
+final String sasToken = 
validationContext.getProperty(SAS_TOKEN).getValue();
+if (StringUtils.isNotBlank(accountName) && 
((StringUtils.isNotBlank(accountKey) && StringUtils.isNotBlank(sasToken))
+|| (StringUtils.isBlank(accountKey) && 
StringUtils.isBlank(sasToken)))) {
+final String msg = String.format("either " + 
ACCOUNT_NAME.getDisplayName() + " with " + ACCOUNT_KEY.getDisplayName() +
+" or " + ACCOUNT_NAME.getDisplayName() + " with " + 
SAS_TOKEN.getDisplayName() +
+" must be specified, not both"
+);
+results.add(new 
ValidationResult.Builder().subject("Credentials Config").valid(false)
+.explanation(msg)
 .build());
+}

Review comment:
   When I was testing the validation, I noticed 2 things:
   
   - when none of the Account 
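The mutual-exclusion rule implemented in the diff above can be sketched in isolation. This is a minimal illustration of the validation logic (managed identity excludes account key / SAS token, and exactly one of account key or SAS token is required otherwise); the method name and boolean parameters are stand-ins for the NiFi ValidationContext API, not the actual processor code.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the credential mutual-exclusion rule discussed
// above; not the actual NiFi ValidationContext-based implementation.
public class CredentialValidationSketch {

    static List<String> validate(boolean useManagedIdentity,
                                 boolean accountKeySet,
                                 boolean sasTokenSet) {
        final List<String> errors = new ArrayList<>();
        if (useManagedIdentity) {
            if (accountKeySet || sasTokenSet) {
                errors.add("'Use Azure Managed Identity' cannot be set "
                        + "together with 'Account Key' or 'SAS Token'");
            }
        } else if (accountKeySet == sasTokenSet) {
            // both set, or neither set: exactly one credential is required
            errors.add("exactly one of 'Account Key' or 'SAS Token' must be set");
        }
        return errors;
    }

    public static void main(String[] args) {
        System.out.println(validate(true, true, false));   // conflicting config
        System.out.println(validate(false, true, false));  // valid config
    }
}
```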

[jira] [Resolved] (NIFI-7140) PutSql support database transaction rollback when <Rollback On Failure> is false

2020-05-14 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng resolved NIFI-7140.
--

Replaced by NIFI-7403


> PutSql support database transaction rollback when <Rollback On Failure> is 
> false
> 
>
> Key: NIFI-7140
> URL: https://issues.apache.org/jira/browse/NIFI-7140
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For the PutSQL processor, if 'Support Fragmented Transactions' is true, the 
> processor will process all FlowFiles with that fragment.identifier as a 
> single transaction;
> In actuality, it works.
> But when some SQL of the transaction fails and <Rollback On Failure> is 
> false, the database transaction will not roll back.
> Sometimes we need the database transaction rollback and do not want the 
> flowfile rollback; we need the failed database transaction routed to 
> REL_FAILURE.
> If <Support Fragmented Transactions> is true and <Rollback On Failure> is 
> false, I think it should still support the capability of database 
> transaction rollback. For example, it should add a property (like <Support 
> Fragmented Transactions RollBack>) which can indicate whether the 
> processor supports database transaction rollback when 'Support Fragmented 
> Transactions' is true. Of course, when <Rollback On Failure> is true, 
> <Support Fragmented Transactions RollBack> will be ignored.
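The fragment-as-one-transaction semantics described above can be modeled without a database. This is a hedged, in-memory sketch mirroring JDBC's setAutoCommit(false)/commit()/rollback() pattern, not PutSQL's actual implementation; all names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// In-memory model: if any statement fails and rollback is enabled, none
// of the fragment's work survives (mirroring JDBC rollback()); otherwise
// failed statements are skipped and the rest commits.
public class FragmentTransaction {
    final List<String> committed = new ArrayList<>();

    /** Returns true if the whole fragment committed, false if rolled back. */
    boolean apply(List<String> statements, Predicate<String> executesOk,
                  boolean rollbackOnFailure) {
        final List<String> pending = new ArrayList<>();
        for (String sql : statements) {
            if (executesOk.test(sql)) {
                pending.add(sql);
            } else if (rollbackOnFailure) {
                return false;          // rollback: discard all pending work
            }                          // else: skip the bad statement
        }
        committed.addAll(pending);     // commit
        return true;
    }

    public static void main(String[] args) {
        FragmentTransaction tx = new FragmentTransaction();
        boolean ok = tx.apply(List.of("INSERT a", "BAD", "INSERT b"),
                sql -> !sql.equals("BAD"), true);
        System.out.println(ok + " committed=" + tx.committed);
    }
}
```

The flowfile routing the reporter asks for (fragment to REL_FAILURE, no flowfile rollback) would sit on top of the `false` return path.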



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6878) ConvertJSONToSQL Improvement. Statement Type Support "Use statement.type Attribute" or Supports Expression Language

2020-05-14 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-6878:
-
Status: Patch Available  (was: Reopened)

> ConvertJSONToSQL Improvement. Statement Type Support  "Use statement.type 
> Attribute" or Supports Expression Language 
> -
>
> Key: NIFI-6878
> URL: https://issues.apache.org/jira/browse/NIFI-6878
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: Matt Burgess
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> ConvertJSONToSQL's Statement Type provides fixed options: 
> UPDATE, INSERT, DELETE.
> Usually, it can meet our needs. But in actual applications, I think it's 
> not flexible enough.
> In some cases, we need to dynamically indicate the Statement Type.
> For example, the data from CaptureChangeMySQL carries a statement type 
> attribute (cdc.event.type); we need to convert the data to SQL (DML) in 
> order. We currently have to use RouteOnAttribute to transfer the data to 
> three branches, build the SQL statements separately, and finally use 
> EnforceOrder to ensure the order of the SQL statements.
> But it would be easy if ConvertJSONToSQL supported a dynamic Statement 
> Type. It is easy to implement this feature just like PutDatabaseRecord.
> In practice, I used PutDatabaseRecord instead of ConvertJSONToSQL.
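The "Use statement.type Attribute" idea can be sketched as a small resolution step, modeled on how PutDatabaseRecord picks its statement type: when the property is set to the attribute option, the type is read from the FlowFile's statement.type attribute (e.g. populated from CaptureChangeMySQL's cdc.event.type). Names here are illustrative, not the actual processor API.

```java
import java.util.Map;
import java.util.Set;

// Illustrative sketch: resolve the SQL statement type either from the
// processor property or from a per-FlowFile attribute.
public class StatementTypeSketch {
    static final String USE_ATTRIBUTE = "Use statement.type Attribute";
    static final Set<String> VALID = Set.of("INSERT", "UPDATE", "DELETE");

    static String resolveStatementType(String configured, Map<String, String> attributes) {
        final String type = USE_ATTRIBUTE.equals(configured)
                ? attributes.get("statement.type")   // per-FlowFile value
                : configured;                        // fixed property value
        if (type == null || !VALID.contains(type.toUpperCase())) {
            throw new IllegalArgumentException("Unsupported statement type: " + type);
        }
        return type.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(resolveStatementType(USE_ATTRIBUTE,
                Map.of("statement.type", "update")));
    }
}
```

This removes the RouteOnAttribute/EnforceOrder workaround, since each FlowFile carries its own statement type and ordering is preserved.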





[jira] [Updated] (NIFI-7140) PutSql support database transaction rollback when <Rollback On Failure> is false

2020-05-14 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7140:
-
Affects Version/s: (was: 1.11.1)
   Status: Open  (was: Patch Available)

> PutSql support database transaction rollback when <Rollback On Failure> is 
> false
> 
>
> Key: NIFI-7140
> URL: https://issues.apache.org/jira/browse/NIFI-7140
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For the PutSQL processor, if 'Support Fragmented Transactions' is true, the 
> processor will process all FlowFiles with that fragment.identifier as a 
> single transaction;
> In actuality, it works.
> But when some SQL of the transaction fails and <Rollback On Failure> is 
> false, the database transaction will not roll back.
> Sometimes we need the database transaction rollback and do not want the 
> flowfile rollback; we need the failed database transaction routed to 
> REL_FAILURE.
> If <Support Fragmented Transactions> is true and <Rollback On Failure> is 
> false, I think it should still support the capability of database 
> transaction rollback. For example, it should add a property (like <Support 
> Fragmented Transactions RollBack>) which can indicate whether the 
> processor supports database transaction rollback when 'Support Fragmented 
> Transactions' is true. Of course, when <Rollback On Failure> is true, 
> <Support Fragmented Transactions RollBack> will be ignored.





[jira] [Updated] (NIFI-7140) PutSql support database transaction rollback when <Rollback On Failure> is false

2020-05-14 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7140:
-
Status: Patch Available  (was: Reopened)

> PutSql support database transaction rollback when <Rollback On Failure> is 
> false
> 
>
> Key: NIFI-7140
> URL: https://issues.apache.org/jira/browse/NIFI-7140
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.1
>Reporter: ZhangCheng
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For the PutSQL processor, if 'Support Fragmented Transactions' is true, the 
> processor will process all FlowFiles with that fragment.identifier as a 
> single transaction;
> In actuality, it works.
> But when some SQL of the transaction fails and <Rollback On Failure> is 
> false, the database transaction will not roll back.
> Sometimes we need the database transaction rollback and do not want the 
> flowfile rollback; we need the failed database transaction routed to 
> REL_FAILURE.
> If <Support Fragmented Transactions> is true and <Rollback On Failure> is 
> false, I think it should still support the capability of database 
> transaction rollback. For example, it should add a property (like <Support 
> Fragmented Transactions RollBack>) which can indicate whether the 
> processor supports database transaction rollback when 'Support Fragmented 
> Transactions' is true. Of course, when <Rollback On Failure> is true, 
> <Support Fragmented Transactions RollBack> will be ignored.





[jira] [Updated] (NIFI-7140) PutSql support database transaction rollback when <Rollback On Failure> is false

2020-05-14 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7140:
-
Status: Reopened  (was: Closed)

> PutSql support database transaction rollback when <Rollback On Failure> is 
> false
> 
>
> Key: NIFI-7140
> URL: https://issues.apache.org/jira/browse/NIFI-7140
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.1
>Reporter: ZhangCheng
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For the PutSQL processor, if 'Support Fragmented Transactions' is true, the 
> processor will process all FlowFiles with that fragment.identifier as a 
> single transaction;
> In actuality, it works.
> But when some SQL of the transaction fails and <Rollback On Failure> is 
> false, the database transaction will not roll back.
> Sometimes we need the database transaction rollback and do not want the 
> flowfile rollback; we need the failed database transaction routed to 
> REL_FAILURE.
> If <Support Fragmented Transactions> is true and <Rollback On Failure> is 
> false, I think it should still support the capability of database 
> transaction rollback. For example, it should add a property (like <Support 
> Fragmented Transactions RollBack>) which can indicate whether the 
> processor supports database transaction rollback when 'Support Fragmented 
> Transactions' is true. Of course, when <Rollback On Failure> is true, 
> <Support Fragmented Transactions RollBack> will be ignored.





[jira] [Reopened] (NIFI-6878) ConvertJSONToSQL Improvement. Statement Type Support "Use statement.type Attribute" or Supports Expression Language

2020-05-14 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng reopened NIFI-6878:
--

> ConvertJSONToSQL Improvement. Statement Type Support  "Use statement.type 
> Attribute" or Supports Expression Language 
> -
>
> Key: NIFI-6878
> URL: https://issues.apache.org/jira/browse/NIFI-6878
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: Matt Burgess
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> ConvertJSONToSQL's Statement Type provides fixed options: 
> UPDATE, INSERT, DELETE.
> Usually, it can meet our needs. But in actual applications, I think it's 
> not flexible enough.
> In some cases, we need to dynamically indicate the Statement Type.
> For example, the data from CaptureChangeMySQL carries a statement type 
> attribute (cdc.event.type); we need to convert the data to SQL (DML) in 
> order. We currently have to use RouteOnAttribute to transfer the data to 
> three branches, build the SQL statements separately, and finally use 
> EnforceOrder to ensure the order of the SQL statements.
> But it would be easy if ConvertJSONToSQL supported a dynamic Statement 
> Type. It is easy to implement this feature just like PutDatabaseRecord.
> In practice, I used PutDatabaseRecord instead of ConvertJSONToSQL.





[jira] [Commented] (NIFI-6878) ConvertJSONToSQL Improvement. Statement Type Support "Use statement.type Attribute" or Supports Expression Language

2020-05-14 Thread ZhangCheng (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107174#comment-17107174
 ] 

ZhangCheng commented on NIFI-6878:
--

[~pvillard] I am so sorry :(. I made the mistake of thinking the PR was closed. 

> ConvertJSONToSQL Improvement. Statement Type Support  "Use statement.type 
> Attribute" or Supports Expression Language 
> -
>
> Key: NIFI-6878
> URL: https://issues.apache.org/jira/browse/NIFI-6878
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: Matt Burgess
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> ConvertJSONToSQL's Statement Type provides fixed options: 
> UPDATE, INSERT, DELETE.
> Usually, it can meet our needs. But in actual applications, I think it's 
> not flexible enough.
> In some cases, we need to dynamically indicate the Statement Type.
> For example, the data from CaptureChangeMySQL carries a statement type 
> attribute (cdc.event.type); we need to convert the data to SQL (DML) in 
> order. We currently have to use RouteOnAttribute to transfer the data to 
> three branches, build the SQL statements separately, and finally use 
> EnforceOrder to ensure the order of the SQL statements.
> But it would be easy if ConvertJSONToSQL supported a dynamic Statement 
> Type. It is easy to implement this feature just like PutDatabaseRecord.
> In practice, I used PutDatabaseRecord instead of ConvertJSONToSQL.





[jira] [Commented] (NIFI-7437) UI is slow when nifi.analytics.predict.enabled is true

2020-05-14 Thread Yolanda M. Davis (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107168#comment-17107168
 ] 

Yolanda M. Davis commented on NIFI-7437:


[~diarworld] Thanks for this info.  I am also seeing this issue on a single 
node but just wanted to ensure I have similar settings since I have isolated 
the culprit and have a refactor that I am testing.  When the PR is available 
you are more than welcome to review/test it out.

> UI is slow when nifi.analytics.predict.enabled is true
> --
>
> Key: NIFI-7437
> URL: https://issues.apache.org/jira/browse/NIFI-7437
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI, Extensions
>Affects Versions: 1.10.0, 1.11.4
> Environment: Java11, CentOS8
>Reporter: Dmitry Ibragimov
>Assignee: Yolanda M. Davis
>Priority: Critical
>  Labels: features, performance
>
> We faced an issue when nifi.analytics.predict.enabled is true after a cluster 
> upgrade to 1.11.4.
> We have about 4000 processors in the development environment, but most of 
> them are disabled: 256 running, 1263 stopped, 2543 disabled.
> After upgrading from 1.9.2 to 1.11.4 we decided to test the back-pressure 
> prediction feature and enabled it in the configuration:
> {code:java}
> nifi.analytics.predict.enabled=true
> nifi.analytics.predict.interval=3 mins
> nifi.analytics.query.interval=5 mins
> nifi.analytics.connection.model.implementation=org.apache.nifi.controller.status.analytics.models.OrdinaryLeastSquares
> nifi.analytics.connection.model.score.name=rSquared
> nifi.analytics.connection.model.score.threshold=.90
> {code}
> And we faced terrible UI performance degradation. The root page opens in 20 
> seconds instead of 200-500ms, about 100 times slower. I've tested it with 
> different environments (CentOS 7/8, Java 8/11, clustered secured, clustered 
> unsecured, standalone unsecured) - all the same.
> In the debug log for ThreadPoolRequestReplicator:
> {code:java}
> 2020-05-09 08:03:34,459 DEBUG [Replicate Request Thread-2] 
> o.a.n.c.c.h.r.ThreadPoolRequestReplicator For GET 
> /nifi-api/flow/process-groups/root (Request ID 
> c144196f-d4cb-4053-8828-70e06f7c5100), minimum response time = 19548, max = 
> 20625, average = 20161.0 ms
> 2020-05-09 08:03:34,459 DEBUG [Replicate Request Thread-2] 
> o.a.n.c.c.h.r.ThreadPoolRequestReplicator Node Responses for GET 
> /nifi-api/flow/process-groups/root (Request ID 
> c144196f-d4cb-4053-8828-70e06f7c5100):
> newnifi01:8080: 19548 millis
> newnifi02:8080: 20625 millis
> newnifi03:8080: 20310 millis{code}
> Deeper debugging:
>  
> {code:java}
> 2020-05-09 10:31:13,252 DEBUG [NiFi Web Server-21] 
> org.eclipse.jetty.server.HttpChannel REQUEST for 
> //newnifi01:8080/nifi-api/flow/process-groups/root on 
> HttpChannelOverHttp@68d3e945{r=1,c=false,c=false/false,a=IDLE,uri=//newnifi01:8080/nifi-api/flow/process-groups/root,age=0}
> GET //newnifi01:8080/nifi-api/flow/process-groups/root HTTP/1.1
> Host: newnifi01:8080
> ...
> 2020-05-09 10:31:13,256 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for calculating time 
> back pressure by content size in bytes. Returning -1
> 2020-05-09 10:31:13,257 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for calculating time 
> to back pressure by object count. Returning -1
> 2020-05-09 10:31:13,257 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting 
> content size in bytes for next interval. Returning -1
> 2020-05-09 10:31:13,257 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting 
> object count for next interval. Returning -1
> 2020-05-09 10:31:13,258 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting 
> object count for next interval. Returning -1
> 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting 
> content size in bytes for next interval. Returning -1
> 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Prediction model for connection id 
> eb602b2a-016f-1000--2767192a: nextIntervalPercentageUseCount=-1
> 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Prediction model for connection id 
> eb602b2a-016f-1000--2767192a: nextIntervalBytes=-1
> 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Prediction model for connection id 
> eb602b2a-016f-1000--2767192a: timeToBytesBackpressureMillis=-1
> 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Prediction model 

[jira] [Updated] (NIFI-7451) SFTP is failing to connect to Remote host

2020-05-14 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-7451:
-
Issue Type: Bug  (was: Task)

> SFTP is failing to connect to Remote host
> -
>
> Key: NIFI-7451
> URL: https://issues.apache.org/jira/browse/NIFI-7451
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.11.3
> Environment: NiFi 1.11.3 installed on Windows 10.
>Reporter: KiMi
>Priority: Trivial
>
> GetSFTP processor is failing to validate the host key. The log file reads as 
> below:
> {code:java}
> ERROR [Timer-Driven Process Thread-4] o.a.nifi.processors.standard.GetSFTP 
> GetSFTP[id=f782e1ab-0171-1000-f3a7-04d2f80263a4] Unable to fetch listing from 
> remote server due to net.schmizz.sshj.userauth.UserAuthException: Exhausted 
> available authentication methods: 
> net.schmizz.sshj.userauth.UserAuthException: Exhausted available 
> authentication methodsERROR [Timer-Driven Process Thread-4] 
> o.a.nifi.processors.standard.GetSFTP 
> GetSFTP[id=f782e1ab-0171-1000-f3a7-04d2f80263a4] Unable to fetch listing from 
> remote server due to net.schmizz.sshj.userauth.UserAuthException: Exhausted 
> available authentication methods: 
> net.schmizz.sshj.userauth.UserAuthException: Exhausted available 
> authentication methodsnet.schmizz.sshj.userauth.UserAuthException: Exhausted 
> available authentication methods at 
> net.schmizz.sshj.SSHClient.auth(SSHClient.java:230) at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getSFTPClient(SFTPTransfer.java:602)
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:230)
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:193)
>  at 
> org.apache.nifi.processors.standard.GetFileTransfer.fetchListing(GetFileTransfer.java:284)
>  at 
> org.apache.nifi.processors.standard.GetFileTransfer.onTrigger(GetFileTransfer.java:127)
>  at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
>  at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
>  at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>  at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110) at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>  at 
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  at java.base/java.lang.Thread.run(Thread.java:830){code}





[jira] [Updated] (NIFI-7451) SFTP is failing to connect to Remote host

2020-05-14 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-7451:
-
Component/s: (was: Configuration)
 Extensions

> SFTP is failing to connect to Remote host
> -
>
> Key: NIFI-7451
> URL: https://issues.apache.org/jira/browse/NIFI-7451
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Extensions
>Affects Versions: 1.11.3
> Environment: NiFi 1.11.3 installed on Windows 10.
>Reporter: KiMi
>Priority: Trivial
>
> GetSFTP processor is failing to validate the host key. The log file reads as 
> below:
> {code:java}
> ERROR [Timer-Driven Process Thread-4] o.a.nifi.processors.standard.GetSFTP 
> GetSFTP[id=f782e1ab-0171-1000-f3a7-04d2f80263a4] Unable to fetch listing from 
> remote server due to net.schmizz.sshj.userauth.UserAuthException: Exhausted 
> available authentication methods: 
> net.schmizz.sshj.userauth.UserAuthException: Exhausted available 
> authentication methodsERROR [Timer-Driven Process Thread-4] 
> o.a.nifi.processors.standard.GetSFTP 
> GetSFTP[id=f782e1ab-0171-1000-f3a7-04d2f80263a4] Unable to fetch listing from 
> remote server due to net.schmizz.sshj.userauth.UserAuthException: Exhausted 
> available authentication methods: 
> net.schmizz.sshj.userauth.UserAuthException: Exhausted available 
> authentication methodsnet.schmizz.sshj.userauth.UserAuthException: Exhausted 
> available authentication methods at 
> net.schmizz.sshj.SSHClient.auth(SSHClient.java:230) at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getSFTPClient(SFTPTransfer.java:602)
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:230)
>  at 
> org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:193)
>  at 
> org.apache.nifi.processors.standard.GetFileTransfer.fetchListing(GetFileTransfer.java:284)
>  at 
> org.apache.nifi.processors.standard.GetFileTransfer.onTrigger(GetFileTransfer.java:127)
>  at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
>  at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
>  at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>  at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110) at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>  at 
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  at java.base/java.lang.Thread.run(Thread.java:830){code}





[jira] [Updated] (NIFI-7451) SFTP is failing to connect to Remote host

2020-05-14 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-7451:
-
Description: 
GetSFTP processor is failing to validate the host key. the log file reads below:
{code:java}
ERROR [Timer-Driven Process Thread-4] o.a.nifi.processors.standard.GetSFTP 
GetSFTP[id=f782e1ab-0171-1000-f3a7-04d2f80263a4] Unable to fetch listing from 
remote server due to net.schmizz.sshj.userauth.UserAuthException: Exhausted 
available authentication methods: net.schmizz.sshj.userauth.UserAuthException: 
Exhausted available authentication methodsERROR [Timer-Driven Process Thread-4] 
o.a.nifi.processors.standard.GetSFTP 
GetSFTP[id=f782e1ab-0171-1000-f3a7-04d2f80263a4] Unable to fetch listing from 
remote server due to net.schmizz.sshj.userauth.UserAuthException: Exhausted 
available authentication methods: net.schmizz.sshj.userauth.UserAuthException: 
Exhausted available authentication 
methodsnet.schmizz.sshj.userauth.UserAuthException: Exhausted available 
authentication methods at net.schmizz.sshj.SSHClient.auth(SSHClient.java:230) 
at 
org.apache.nifi.processors.standard.util.SFTPTransfer.getSFTPClient(SFTPTransfer.java:602)
 at 
org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:230)
 at 
org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:193)
 at 
org.apache.nifi.processors.standard.GetFileTransfer.fetchListing(GetFileTransfer.java:284)
 at 
org.apache.nifi.processors.standard.GetFileTransfer.onTrigger(GetFileTransfer.java:127)
 at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
 at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
 at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
 at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
 at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110) at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
 at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 
at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
 at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
 at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base/java.lang.Thread.run(Thread.java:830){code}

  was:
GetSFTP processor is failing to validate the host key. the log file reads below:

 

ERROR [Timer-Driven Process Thread-4] o.a.nifi.processors.standard.GetSFTP 
GetSFTP[id=f782e1ab-0171-1000-f3a7-04d2f80263a4] Unable to fetch listing from 
remote server due to net.schmizz.sshj.userauth.UserAuthException: Exhausted 
available authentication methods: net.schmizz.sshj.userauth.UserAuthException: 
Exhausted available authentication methodsERROR [Timer-Driven Process Thread-4] 
o.a.nifi.processors.standard.GetSFTP 
GetSFTP[id=f782e1ab-0171-1000-f3a7-04d2f80263a4] Unable to fetch listing from 
remote server due to net.schmizz.sshj.userauth.UserAuthException: Exhausted 
available authentication methods: net.schmizz.sshj.userauth.UserAuthException: 
Exhausted available authentication 
methodsnet.schmizz.sshj.userauth.UserAuthException: Exhausted available 
authentication methods at net.schmizz.sshj.SSHClient.auth(SSHClient.java:230) 
at 
org.apache.nifi.processors.standard.util.SFTPTransfer.getSFTPClient(SFTPTransfer.java:602)
 at 
org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:230)
 at 
org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:193)
 at 
org.apache.nifi.processors.standard.GetFileTransfer.fetchListing(GetFileTransfer.java:284)
 at 
org.apache.nifi.processors.standard.GetFileTransfer.onTrigger(GetFileTransfer.java:127)
 at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
 at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
 at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
 at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
 at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110) at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
 at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 
at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
 at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
 at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at 

[jira] [Commented] (NIFI-6878) ConvertJSONToSQL Improvement. Statement Type Support "Use statement.type Attribute" or Supports Expression Language

2020-05-14 Thread Pierre Villard (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107157#comment-17107157
 ] 

Pierre Villard commented on NIFI-6878:
--

[~Ku_Cheng] - can you let us know why you set the status to resolved although 
the associated pull request is still open?

> ConvertJSONToSQL Improvement. Statement Type Support  "Use statement.type 
> Attribute" or Supports Expression Language 
> -
>
> Key: NIFI-6878
> URL: https://issues.apache.org/jira/browse/NIFI-6878
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: Matt Burgess
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> ConvertJSONToSQL  Statement Type provides fixed options : 
> UPDATE,INSERT,DELETE. 
> Usually, it can meet our needs. But  in actual application,I think It's not 
> flexible enough.
>  In some cases, we need to dynamically indicate the Statement Type.
> For example,the data from CpatureChangeMysql owns  the attribute  of 
> statement  type(cdc.event.type, we need to convert the data to sql(DML) 
> orderly; And we now have to use RouteOnAttribute to transfer data to three 
> branches , Build SQL statement separately ,finally,we have to use 
> EnforceOrder  to ensure the order of SQL statements.
> But it will be easy if ConvertJSONToSQL  supports dynamical Statement Type . 
> It is easy to implement this feature just like PutDatabaseRecord. 
> In practice, I did use PutDatabaseRecord   instead of ConvertJSONToSQL.





[jira] [Commented] (NIFI-7380) NiFi Stateless does not validate CS correctly

2020-05-14 Thread Pierre Villard (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107153#comment-17107153
 ] 

Pierre Villard commented on NIFI-7380:
--

I closed #4220 in favor of #4264, thanks for the pull request [~mcauffiez].

[~mark.weghorst], for the two other issues (Jolt and Confluent SR), I'd suggest 
filing 2 different JIRAs. It could be the same issue, but it's better to track 
this appropriately.

> NiFi Stateless does not validate CS correctly
> -
>
> Key: NIFI-7380
> URL: https://issues.apache.org/jira/browse/NIFI-7380
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: NiFi Stateless
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Critical
>  Labels: nifi-stateless, stateless
> Attachments: nifi-7380-confluent-schema-registry-exception.txt, 
> nifi-7380-exception.txt, nifi-7380-flow-config.json, 
> nifi-7380-jolt-exception.txt, stateless.json
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When the flow executed with the NiFi Stateless running mode contains a 
> Controller Service with required properties, it'll fail as it does not take 
> into account the configuration when performing the validation of the 
> component.
> In *StatelessControllerServiceLookup*, the method
> {code:java}
> public void enableControllerServices(final VariableRegistry variableRegistry) 
> {code}
> first validates the configured controller services and calls
> {code:java}
> public Collection<ValidationResult> validate(...){code}
> This will create a *StatelessProcessContext* object and a 
> *StatelessValidationContext* object. Then the method *validate* is called on 
> the controller service, passing the validation context as an argument. It 
> goes through the properties of the controller service and retrieves the 
> configured value of each property as set in the *StatelessProcessContext* 
> object. The problem is that the *properties* map in the Stateless Process 
> Context that is supposed to contain the configured values is never set. As 
> such, any required property in a Controller Service is considered configured 
> with a null value if there is no default value. This causes the component 
> validation to fail and the flow won't be executed.
> I opened a PR with a solution that does solve this issue. However I'm not 
> sure this issue does not affect other scenarios and a better approach could 
> be necessary (more in line with what is done in NiFi core).
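The root cause described above can be illustrated with a minimal, self-contained sketch (class and method names are hypothetical, not the actual NiFi API): a required property whose configured value is never copied into the context's properties map validates as null and fails.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the bug: validation reads property values from a
// context whose properties map was never populated, so every required
// property without a default value appears to be null.
public class StatelessValidationSketch {

    static class ProcessContextSketch {
        final Map<String, String> properties = new HashMap<>();

        String getPropertyValue(String name) {
            return properties.get(name);
        }
    }

    // Returns true when every required property has a configured value.
    static boolean validate(ProcessContextSketch context, String... requiredProperties) {
        for (String name : requiredProperties) {
            if (context.getPropertyValue(name) == null) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        ProcessContextSketch context = new ProcessContextSketch();
        // Bug: configured values are never copied into the context,
        // so validation fails even though the flow configured them.
        System.out.println(validate(context, "Schema Registry URL")); // false

        // Fix: populate the properties map from the flow configuration
        // before validating the controller service.
        context.properties.put("Schema Registry URL", "http://localhost:8081");
        System.out.println(validate(context, "Schema Registry URL")); // true
    }
}
```

The property name and URL above are illustrative only; the point is the ordering: populate the context, then validate.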





[GitHub] [nifi] pvillard31 closed pull request #4220: NIFI-7380 - fix for controller service validation in NiFi Stateless

2020-05-14 Thread GitBox


pvillard31 closed pull request #4220:
URL: https://github.com/apache/nifi/pull/4220


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] pvillard31 commented on pull request #4220: NIFI-7380 - fix for controller service validation in NiFi Stateless

2020-05-14 Thread GitBox


pvillard31 commented on pull request #4220:
URL: https://github.com/apache/nifi/pull/4220#issuecomment-628523817


   Closing this one in favor of #4264.







[jira] [Updated] (NIFI-7448) PutORC quotes fully-qualified table names but should quote each part

2020-05-14 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-7448:
-
Fix Version/s: 1.12.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> PutORC quotes fully-qualified table names but should quote each part
> 
>
> Key: NIFI-7448
> URL: https://issues.apache.org/jira/browse/NIFI-7448
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> NIFI-5667 introduced quoting of the table name in the DDL generated by 
> PutORC, but it quotes the entire value of the Table Name property. In cases 
> where the table name is qualified by a database name (for example, 
> mydb.mytable), this causes a parsing error as `mydb.mytable` should be 
> `mydb`.`mytable`.
> PutORC should parse the table name, splitting on "." and quoting each section.
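The fix described above (splitting on "." and quoting each section) can be sketched as follows; the helper name is hypothetical, not the actual PutORC code:

```java
// Hypothetical helper sketching the quoting behavior described above:
// quote each dot-separated part of the table name individually instead
// of wrapping the whole value in a single pair of backticks.
public class TableNameQuoting {

    static String quote(String tableName) {
        String[] parts = tableName.split("\\.");
        StringBuilder quoted = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) {
                quoted.append('.');
            }
            quoted.append('`').append(parts[i]).append('`');
        }
        return quoted.toString();
    }

    public static void main(String[] args) {
        System.out.println(quote("mydb.mytable")); // `mydb`.`mytable`
        System.out.println(quote("mytable"));      // `mytable`
    }
}
```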





[GitHub] [nifi] asfgit closed pull request #4269: NIFI-7448: Fix quoting of DDL table name in PutORC

2020-05-14 Thread GitBox


asfgit closed pull request #4269:
URL: https://github.com/apache/nifi/pull/4269


   







[jira] [Commented] (NIFI-7448) PutORC quotes fully-qualified table names but should quote each part

2020-05-14 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107147#comment-17107147
 ] 

ASF subversion and git services commented on NIFI-7448:
---

Commit 53a161234e651c9b5259723d5ffc986cc6eab7dd in nifi's branch 
refs/heads/master from Matt Burgess
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=53a1612 ]

NIFI-7448: Fix quoting of DDL table name in PutORC

Signed-off-by: Pierre Villard 

This closes #4269.


> PutORC quotes fully-qualified table names but should quote each part
> 
>
> Key: NIFI-7448
> URL: https://issues.apache.org/jira/browse/NIFI-7448
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> NIFI-5667 introduced quoting of the table name in the DDL generated by 
> PutORC, but it quotes the entire value of the Table Name property. In cases 
> where the table name is qualified by a database name (for example, 
> mydb.mytable), this causes a parsing error as `mydb.mytable` should be 
> `mydb`.`mytable`.
> PutORC should parse the table name, splitting on "." and quoting each section.





[jira] [Commented] (NIFI-7437) UI is slow when nifi.analytics.predict.enabled is true

2020-05-14 Thread Dmitry Ibragimov (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107122#comment-17107122
 ] 

Dmitry Ibragimov commented on NIFI-7437:


[~YolandaMDavis] Our current heap settings are the following:
{code:java}
# JVM memory settings
java.arg.2=-Xms12g
java.arg.3=-Xmx12g
java.arg.13=-XX:+UseG1GC
{code}
{code:java}
openjdk version "11.0.7" 2020-04-14 LTS
OpenJDK Runtime Environment 18.9 (build 11.0.7+10-LTS)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.7+10-LTS, mixed mode, sharing){code}
We used a clustered setup with 3 nodes (16 cores, 32 GB memory) and a secured 
installation.

But I've successfully reproduced this issue with 1 unsecured node (Java 8 with 
OldGen GC as well) by copying our large flow.xml.gz (with more than 4000 
processors, most of them in a disabled state) to this node and enabling the 
property nifi.analytics.predict.enabled.

Reproducibility of this issue is heap independent - you just need to create a 
big bunch of disabled processors and connect them together.

> UI is slow when nifi.analytics.predict.enabled is true
> --
>
> Key: NIFI-7437
> URL: https://issues.apache.org/jira/browse/NIFI-7437
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI, Extensions
>Affects Versions: 1.10.0, 1.11.4
> Environment: Java11, CentOS8
>Reporter: Dmitry Ibragimov
>Assignee: Yolanda M. Davis
>Priority: Critical
>  Labels: features, performance
>
> We faced this issue when nifi.analytics.predict.enabled is true after a 
> cluster upgrade to 1.11.4.
> We have about 4000 processors in the development environment, but most of 
> them are in a disabled state: 256 running, 1263 stopped, 2543 disabled.
> After upgrading from 1.9.2 to 1.11.4 we decided to test the back-pressure 
> prediction feature and enabled it in the configuration:
> {code:java}
> nifi.analytics.predict.enabled=true
> nifi.analytics.predict.interval=3 mins
> nifi.analytics.query.interval=5 mins
> nifi.analytics.connection.model.implementation=org.apache.nifi.controller.status.analytics.models.OrdinaryLeastSquares
> nifi.analytics.connection.model.score.name=rSquared
> nifi.analytics.connection.model.score.threshold=.90
> {code}
> We then faced a terrible UI performance degradation. The root page opens in 
> 20 seconds instead of 200-500 ms, about ~100 times slower. I've tested it 
> with different environments (centos7/8, java8/11, clustered secured, 
> clustered unsecured, standalone unsecured) - all the same.
> In debug log for ThreadPoolRequestReplicator:
> {code:java}
> 2020-05-09 08:03:34,459 DEBUG [Replicate Request Thread-2] 
> o.a.n.c.c.h.r.ThreadPoolRequestReplicator For GET 
> /nifi-api/flow/process-groups/root (Request ID 
> c144196f-d4cb-4053-8828-70e06f7c5100), minimum response time = 19548, max = 
> 20625, average = 20161.0 ms
> 2020-05-09 08:03:34,459 DEBUG [Replicate Request Thread-2] 
> o.a.n.c.c.h.r.ThreadPoolRequestReplicator Node Responses for GET 
> /nifi-api/flow/process-groups/root (Request ID 
> c144196f-d4cb-4053-8828-70e06f7c5100):
> newnifi01:8080: 19548 millis
> newnifi02:8080: 20625 millis
> newnifi03:8080: 20310 millis{code}
> Deeper debug output:
>  
> {code:java}
> 2020-05-09 10:31:13,252 DEBUG [NiFi Web Server-21] 
> org.eclipse.jetty.server.HttpChannel REQUEST for 
> //newnifi01:8080/nifi-api/flow/process-groups/root on 
> HttpChannelOverHttp@68d3e945{r=1,c=false,c=false/false,a=IDLE,uri=//newnifi01:8080/nifi-api/flow/process-groups/root,age=0}
> GET //newnifi01:8080/nifi-api/flow/process-groups/root HTTP/1.1
> Host: newnifi01:8080
> ...
> 2020-05-09 10:31:13,256 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for calculating time 
> back pressure by content size in bytes. Returning -1
> 2020-05-09 10:31:13,257 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for calculating time 
> to back pressure by object count. Returning -1
> 2020-05-09 10:31:13,257 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting 
> content size in bytes for next interval. Returning -1
> 2020-05-09 10:31:13,257 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting 
> object count for next interval. Returning -1
> 2020-05-09 10:31:13,258 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting 
> object count for next interval. Returning -1
> 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Model is not valid for predicting 
> content size in bytes for next interval. Returning -1
> 2020-05-09 10:31:13,259 DEBUG [NiFi Web Server-21] 
> o.a.n.c.s.a.ConnectionStatusAnalytics Prediction model for connection id 
> 
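The score-threshold gating seen in the debug logs above can be sketched as follows; the method and parameter names are assumptions for illustration, not the actual ConnectionStatusAnalytics code:

```java
// Hedged sketch (names assumed): a prediction is only used when the
// model's rSquared score meets the configured threshold (here .90 from
// nifi.analytics.connection.model.score.threshold); otherwise the
// analytics code falls back to -1, as seen in the debug logs above.
public class ScoreThresholdSketch {

    static long predictIfValid(double rSquared, double threshold, long prediction) {
        if (Double.isNaN(rSquared) || rSquared < threshold) {
            return -1L; // "Model is not valid ... Returning -1"
        }
        return prediction;
    }

    public static void main(String[] args) {
        System.out.println(predictIfValid(0.50, 0.90, 120_000L)); // -1
        System.out.println(predictIfValid(0.95, 0.90, 120_000L)); // 120000
    }
}
```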

[jira] [Updated] (NIFI-7336) Add tests for DeleteAzureDataLakeStorage

2020-05-14 Thread Peter Gyori (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Gyori updated NIFI-7336:
--
Status: Patch Available  (was: In Progress)

> Add tests for DeleteAzureDataLakeStorage
> 
>
> Key: NIFI-7336
> URL: https://issues.apache.org/jira/browse/NIFI-7336
> Project: Apache NiFi
>  Issue Type: Test
>  Components: Extensions
>Reporter: Peter Turcsanyi
>Assignee: Peter Gyori
>Priority: Major
>  Labels: azure
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[GitHub] [nifi] pgyori opened a new pull request #4272: NIFI-7336: Add tests for DeleteAzureDataLakeStorage

2020-05-14 Thread GitBox


pgyori opened a new pull request #4272:
URL: https://github.com/apache/nifi/pull/4272


   https://issues.apache.org/jira/browse/NIFI-7336
   
    Description of PR
   
   Tests added for DeleteAzureDataLakeStorage.
   DeleteAzureDataLakeStorage now throws an exception if fileSystem or fileName 
is an empty string.
   One constant was renamed in FetchAzureDataLakeStorage.
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on both JDK 8 and 
JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
`.name` (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[jira] [Resolved] (MINIFICPP-1221) Cannot attach to running MiNiFi process.

2020-05-14 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda resolved MINIFICPP-1221.
---
Resolution: Fixed

> Cannot attach to running MiNiFi process.
> 
>
> Key: MINIFICPP-1221
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1221
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Adam Debreceni
>Assignee: Adam Debreceni
>Priority: Minor
> Fix For: 0.8.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Attaching to a running MiNiFi process with debugger interrupts the main 
> semaphore wait, which terminates the application since we do not check if the 
> return was caused by an interrupt.





[jira] [Updated] (MINIFICPP-1221) Cannot attach to running MiNiFi process.

2020-05-14 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated MINIFICPP-1221:
--
Fix Version/s: 0.8.0

> Cannot attach to running MiNiFi process.
> 
>
> Key: MINIFICPP-1221
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1221
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Adam Debreceni
>Assignee: Adam Debreceni
>Priority: Minor
> Fix For: 0.8.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Attaching to a running MiNiFi process with debugger interrupts the main 
> semaphore wait, which terminates the application since we do not check if the 
> return was caused by an interrupt.





[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #783: MINIFICPP-1221 - Cannot attach to running MiNiFi process.

2020-05-14 Thread GitBox


arpadboda closed pull request #783:
URL: https://github.com/apache/nifi-minifi-cpp/pull/783


   







[GitHub] [nifi] turcsanyip commented on a change in pull request #4265: NIFI-7434: Endpoint suffix property in AzureStorageAccount NIFI processors

2020-05-14 Thread GitBox


turcsanyip commented on a change in pull request #4265:
URL: https://github.com/apache/nifi/pull/4265#discussion_r424954784



##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/AbstractAzureBlobProcessor.java
##
@@ -59,6 +59,7 @@
 AzureStorageUtils.STORAGE_CREDENTIALS_SERVICE,
 AzureStorageUtils.ACCOUNT_NAME,
 AzureStorageUtils.ACCOUNT_KEY,
+AzureStorageUtils.STORAGE_SUFFIX,

Review comment:
   Sorry for not mentioning it earlier, but I meant the same property order on 
all processors and the controller service. Please keep them consistent and 
update the ListBlob, Get/PutQueue processors and 
AzureStorageCredentialsControllerService too.









[jira] [Updated] (MINIFICPP-1220) Memory leak in CWEL

2020-05-14 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated MINIFICPP-1220:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Memory leak in CWEL
> ---
>
> Key: MINIFICPP-1220
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1220
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Marton Szasz
>Assignee: Marton Szasz
>Priority: Major
> Fix For: 0.8.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Memory usage of minifi-cpp on Windows increases gradually when using 
> ConsumeWindowsEventLog, with more frequent scheduling triggering a faster 
> memory leak.
> The main issue seems to be double event creation in 
> Bookmark::getBookmarkHandleFromXML, leaking one of them.
> The fix I'm about to submit changes the code to use unique_ptr for event 
> ownership handling, reducing the risk of similar bugs in the future.
> Part of the credit goes to [~aboda] as we found the cause independently.





[jira] [Updated] (MINIFICPP-1219) PublishKafka should release connection when stopped

2020-05-14 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated MINIFICPP-1219:
--
Fix Version/s: 0.8.0

> PublishKafka should release connection when stopped
> ---
>
> Key: MINIFICPP-1219
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1219
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Nghia Le
>Assignee: Nghia Le
>Priority: Major
> Fix For: 0.8.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It should release everything created in onSchedule!





[jira] [Updated] (MINIFICPP-1218) C2 metrics simplification introduced an undefined and unused member function

2020-05-14 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated MINIFICPP-1218:
--
Fix Version/s: 0.8.0

> C2 metrics simplification introduced an undefined and unused member function
> 
>
> Key: MINIFICPP-1218
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1218
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Marton Szasz
>Assignee: Marton Szasz
>Priority: Trivial
> Fix For: 0.8.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{FlowController::getAgentInformation()}}





[jira] [Updated] (MINIFICPP-1220) Memory leak in CWEL

2020-05-14 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated MINIFICPP-1220:
--
Fix Version/s: 0.8.0

> Memory leak in CWEL
> ---
>
> Key: MINIFICPP-1220
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1220
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Marton Szasz
>Assignee: Marton Szasz
>Priority: Major
> Fix For: 0.8.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Memory usage of minifi-cpp on Windows increases gradually when using 
> ConsumeWindowsEventLog, with more frequent scheduling triggering a faster 
> memory leak.
> The main issue seems to be double event creation in 
> Bookmark::getBookmarkHandleFromXML, leaking one of them.
> The fix I'm about to submit changes the code to use unique_ptr for event 
> ownership handling, reducing the risk of similar bugs in the future.
> Part of the credit goes to [~aboda] as we found the cause independently.





[jira] [Updated] (MINIFICPP-1218) C2 metrics simplification introduced an undefined and unused member function

2020-05-14 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated MINIFICPP-1218:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> C2 metrics simplification introduced an undefined and unused member function
> 
>
> Key: MINIFICPP-1218
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1218
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Marton Szasz
>Assignee: Marton Szasz
>Priority: Trivial
> Fix For: 0.8.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{FlowController::getAgentInformation()}}





[jira] [Resolved] (MINIFICPP-1219) PublishKafka should release connection when stopped

2020-05-14 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda resolved MINIFICPP-1219.
---
Resolution: Fixed

> PublishKafka should release connection when stopped
> ---
>
> Key: MINIFICPP-1219
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1219
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Nghia Le
>Assignee: Nghia Le
>Priority: Major
> Fix For: 0.8.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It should release everything created in onSchedule!





[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #778: MINIFICPP-1218 remove unimplemented declaration

2020-05-14 Thread GitBox


arpadboda closed pull request #778:
URL: https://github.com/apache/nifi-minifi-cpp/pull/778


   







[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #779: MINIFICPP-1219 - PublishKafka should release connection when stopped

2020-05-14 Thread GitBox


arpadboda closed pull request #779:
URL: https://github.com/apache/nifi-minifi-cpp/pull/779


   







[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #780: MINIFICPP-1220 fix memleak and clarify ownership semantics in CWEL

2020-05-14 Thread GitBox


arpadboda closed pull request #780:
URL: https://github.com/apache/nifi-minifi-cpp/pull/780


   







[jira] [Assigned] (NIFI-7445) Add Conflict Resolution property to PutAzureDataLakeStorage processor

2020-05-14 Thread Peter Gyori (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Gyori reassigned NIFI-7445:
-

Assignee: Peter Gyori

> Add Conflict Resolution property to PutAzureDataLakeStorage processor
> -
>
> Key: NIFI-7445
> URL: https://issues.apache.org/jira/browse/NIFI-7445
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Peter Turcsanyi
>Assignee: Peter Gyori
>Priority: Major
>  Labels: azure
>
> PutAzureDataLakeStorage currently overwrites existing files without error 
> (azure-storage-file-datalake 12.0.1).
> Add Conflict Resolution property with values: fail (default), replace, ignore 
> (similar to PutFile).
> DataLakeDirectoryClient.createFile(String fileName, boolean overwrite) can be 
> used (available from 12.1.x)





[jira] [Created] (MINIFICPP-1224) Implement runtime module-directory extension for ExecutePythonScript

2020-05-14 Thread Adam Hunyadi (Jira)
Adam Hunyadi created MINIFICPP-1224:
---

 Summary: Implement runtime module-directory extension for 
ExecutePythonScript
 Key: MINIFICPP-1224
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1224
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Affects Versions: 0.7.0
Reporter: Adam Hunyadi
Assignee: Adam Hunyadi
 Attachments: Screenshot 2020-05-13 at 16.49.35.png

*Background:*

The runtime module-directory extension should be a convenience feature only; 
one could also extend the module path from the Python code itself.

It should be possible to access {{{color:#403294}{{sys.path}}{color}}} from the 
cpp wrapper like this: 
[https://pybind11.readthedocs.io/en/stable/advanced/embedding.html#importing-modules]

!Screenshot 2020-05-13 at 16.49.35.png|width=631,height=142!

Calling this code before the script engines start causes a crash, even under 
the GIL.

*Proposal:*

Extend the python script engine with an interface that handles module imports, 
and call it before any {{{color:#403294}{{eval}}{color}}} call happens.


