[jira] [Commented] (NIFI-4241) ListSFTP node in cluster is not coming up

2017-07-31 Thread Frank Thiele (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108371#comment-16108371
 ] 

Frank Thiele commented on NIFI-4241:


The WARN message is gone -- the workaround seems to work.
But the exception still remains. As I found NIFI-2343, I will check how to fix 
the configuration -> Found the probable issue in {{state-management.xml}}:
{code}
...
<cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <property name="Connect String"></property>
    <property name="Root Node">/nifi</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">Open</property>
</cluster-provider>
...
{code}
Adding the addresses will probably work. Checking...
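For reference, the fix likely amounts to populating the provider's empty Connect String. A sketch of the corrected entry, assuming the standard state-management.xml layout and that ZooKeeper listens on the default client port 2181 on each of the three nodes (the port is an assumption, not from the thread):

```xml
<cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <!-- previously empty; must list the ZooKeeper ensemble (2181 assumed) -->
    <property name="Connect String">nifi1:2181,nifi2:2181,nifi3:2181</property>
    <property name="Root Node">/nifi</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">Open</property>
</cluster-provider>
```

With an empty Connect String the ZooKeeperStateProvider has no ensemble to connect to, which matches the CONNECTIONLOSS exception below.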

> ListSFTP node in cluster is not coming up
> -
>
> Key: NIFI-4241
> URL: https://issues.apache.org/jira/browse/NIFI-4241
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
> Environment: Docker container within VM:
> docker@myvm5:~$ uname -a
> Linux myvm5 4.4.74-boot2docker #1 SMP Mon Jun 26 18:01:14 UTC 2017 x86_64 
> GNU/Linux
>Reporter: Frank Thiele
> Attachments: cluster.png, flow.xml, Overview.png
>
>
> I have setup a small cluster with 3 nodes nifi1:8080, nifi2:8081 and 
> nifi3:8082.
> When I configure a ListSFTP processor and run it, the following exception 
> comes up:
> {code}
> 2017-07-31 06:25:28,522 ERROR [StandardProcessScheduler Thread-6] 
> org.apache.nifi.engine.FlowEngine A flow controller task execution stopped 
> abnormally
> java.util.concurrent.ExecutionException: 
> java.lang.reflect.InvocationTargetException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.reflect.InvocationTargetException: null
>   at sun.reflect.GeneratedMethodAccessor185.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137)
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125)
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70)
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47)
>   at 
> org.apache.nifi.controller.StandardProcessorNode$1$1.call(StandardProcessorNode.java:1307)
>   at 
> org.apache.nifi.controller.StandardProcessorNode$1$1.call(StandardProcessorNode.java:1303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   ... 2 common frames omitted
> Caused by: java.io.IOException: Failed to obtain value from ZooKeeper for 
> component with ID 9716842f-015d-1000--7de55c99 with exception code 
> CONNECTIONLOSS
>   at 
> org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:420)
>   at 
> org.apache.nifi.controller.state.StandardStateManager.getState(StandardStateManager.java:63)
>   at 
> org.apache.nifi.processor.util.list.AbstractListProcessor.updateState(AbstractListProcessor.java:199)
>   ... 15 common frames omitted
> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for 
> /nifi/components/9716842f-015d-1000--7de55c99
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>   at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
>   at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184)
>   at 
> org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:403)
>   ... 17 common frames omitted
> 2017-07-31 06:25:28,523 ERROR [StandardProcessScheduler Thread-5] 
> o.a.nifi.processors.standard.ListSFTP 
> ListSFTP[id=9716842f-015d-1000--7de55c99] 
> ListSFTP[id=9716842f-015d-1000--7de55c99] failed to invoke 
> @OnScheduled method due to 

[jira] [Commented] (NIFI-4015) DeleteSQS Throws Exception Deleting Message

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108356#comment-16108356
 ] 

ASF GitHub Bot commented on NIFI-4015:
--

Github user jvwing commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1888#discussion_r130516693
  
--- Diff: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/test/java/org/apache/nifi/processors/aws/sqs/ITDeleteSQS.java
 ---
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.sqs;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import com.amazonaws.regions.Regions;
+import com.amazonaws.services.sqs.model.Message;
+import com.amazonaws.services.sqs.model.ReceiveMessageResult;
+import com.amazonaws.services.sqs.model.SendMessageResult;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+
+import com.amazonaws.auth.PropertiesCredentials;
+import com.amazonaws.services.sqs.AmazonSQSClient;
+
+import org.junit.Before;
+import org.junit.Ignore;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+
+
+@Ignore("For local testing only - interacts with S3 so the credentials 
file must be configured and all necessary queues created")
+public class ITDeleteSQS {
+
+private final String CREDENTIALS_FILE = 
System.getProperty("user.home") + "/aws-credentials.properties";
--- End diff --

@jzonthemtn The DeleteSQS processor can absolutely use default credentials 
through a controller service.  The integration tests across the SQS processors 
and most of the AWS bundle use a properties file, and I followed that pattern.  
I'm not sure if it was chosen as the best solution, or just the one we knew 
how to use at the time.  Maybe it prevents people from accidentally running the 
integration tests with their default credentials?

What do you think?
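The properties-file pattern described above can be sketched in plain Java; `java.util.Properties` stands in for the AWS SDK's `PropertiesCredentials`, and the `accessKey`/`secretKey` key names are assumptions based on what that class expects:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;

public class CredentialsFileSketch {

    // Same path convention as the integration tests quoted in the diff above.
    static final String CREDENTIALS_FILE =
            System.getProperty("user.home") + "/aws-credentials.properties";

    // Load and validate credentials from any Reader so the logic is
    // testable without a real file on disk.
    static Properties load(Reader source) {
        Properties props = new Properties();
        try {
            props.load(source);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        if (props.getProperty("accessKey") == null || props.getProperty("secretKey") == null) {
            throw new IllegalStateException("credentials file must define accessKey and secretKey");
        }
        return props;
    }

    public static void main(String[] args) {
        Properties p = load(new StringReader("accessKey=AKIAEXAMPLE\nsecretKey=exampleSecret"));
        System.out.println("loaded access key: " + p.getProperty("accessKey"));
    }
}
```

Requiring an explicit file (rather than the default provider chain) makes running the integration tests an opt-in step, which is the trade-off discussed above.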


> DeleteSQS Throws Exception Deleting Message
> ---
>
> Key: NIFI-4015
> URL: https://issues.apache.org/jira/browse/NIFI-4015
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: James Wing
>Assignee: James Wing
>Priority: Minor
>
> While attempting to delete a message from an SQS queue, DeleteSQS throws the 
> following exception:
> {quote}
> DeleteSQS[id=6197f269-015c-1000-9317-818c01162722] Failed to delete 1 objects 
> from SQS due to com.amazonaws.services.sqs.model.AmazonSQSException: The 
> request must contain the parameter DeleteMessageBatchRequestEntry.1.Id. 
> (Service: AmazonSQS; Status Code: 400; Error Code: MissingParameter; Request 
> ID: eea76d96-a07d-5406-9838-3c3f26575223): 
> com.amazonaws.services.sqs.model.AmazonSQSException: The request must contain 
> the parameter DeleteMessageBatchRequestEntry.1.Id. (Service: AmazonSQS; 
> Status Code: 400; Error Code: MissingParameter; Request ID: 
> eea76d96-a07d-5406-9838-3c3f26575223)
> {quote}
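The error above boils down to batch-delete entries being sent without the mandatory per-entry Id. A plain-Java sketch of the wire-level parameter shape; the AWS SDK is deliberately not used here, and the parameter names simply follow the error message:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchDeleteSketch {

    // Build the flat parameter list an SQS DeleteMessageBatch call serializes to.
    static List<String> toBatchParameters(List<String> receiptHandles) {
        List<String> params = new ArrayList<>();
        for (int i = 0; i < receiptHandles.size(); i++) {
            int n = i + 1; // SQS numbers batch entries starting at 1
            // The Id parameter below is exactly what the failing request omitted.
            params.add("DeleteMessageBatchRequestEntry." + n + ".Id=" + i);
            params.add("DeleteMessageBatchRequestEntry." + n + ".ReceiptHandle=" + receiptHandles.get(i));
        }
        return params;
    }

    public static void main(String[] args) {
        toBatchParameters(List.of("handle-a", "handle-b")).forEach(System.out::println);
    }
}
```

Each entry needs both a unique Id and the message's ReceiptHandle; omitting the Id yields the MissingParameter error quoted above.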



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #1888: NIFI-4015 NIFI-3999 Fix DeleteSQS Issues

2017-07-31 Thread jvwing
Github user jvwing commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1888#discussion_r130516693
  
--- Diff: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/test/java/org/apache/nifi/processors/aws/sqs/ITDeleteSQS.java
 ---
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.sqs;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import com.amazonaws.regions.Regions;
+import com.amazonaws.services.sqs.model.Message;
+import com.amazonaws.services.sqs.model.ReceiveMessageResult;
+import com.amazonaws.services.sqs.model.SendMessageResult;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+
+import com.amazonaws.auth.PropertiesCredentials;
+import com.amazonaws.services.sqs.AmazonSQSClient;
+
+import org.junit.Before;
+import org.junit.Ignore;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+
+
+@Ignore("For local testing only - interacts with S3 so the credentials 
file must be configured and all necessary queues created")
+public class ITDeleteSQS {
+
+private final String CREDENTIALS_FILE = 
System.getProperty("user.home") + "/aws-credentials.properties";
--- End diff --

@jzonthemtn The DeleteSQS processor can absolutely use default credentials 
through a controller service.  The integration tests across the SQS processors 
and most of the AWS bundle use a properties file, and I followed that pattern.  
I'm not sure if it was chosen as the best solution, or just the one we knew 
how to use at the time.  Maybe it prevents people from accidentally running the 
integration tests with their default credentials?

What do you think?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4139) Refactor KeyProvider interface from provenance module to framework-level service

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108246#comment-16108246
 ] 

ASF GitHub Bot commented on NIFI-4139:
--

GitHub user alopresto opened a pull request:

https://github.com/apache/nifi/pull/2044

NIFI-4139 Extract key provider to framework-level service

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alopresto/nifi NIFI-4139

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2044.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2044


commit 8e188c3463958902b0ac96583ee28e0abd558fa3
Author: Andy LoPresto 
Date:   2017-07-28T00:11:10Z

NIFI-4139 Moved key provider interface and implementations from 
nifi-data-provenance-utils module to nifi-security-utils module.

commit 7c5729b792128717835653773828623cfd354d5b
Author: Andy LoPresto 
Date:   2017-07-28T00:42:04Z

NIFI-4139 Refactored duplicate byte[] concatenation methods from utility 
classes and removed deprecation warnings from CipherUtility.

commit fc10d4c52b9de651443e69b306355b0eb72621e1
Author: Andy LoPresto 
Date:   2017-07-28T17:50:58Z

NIFI-4139 Created KeyProviderFactory to encapsulate key provider 
instantiation logic.

commit c9a644a8404c50f13442c65d2ea5e9476f379d44
Author: Andy LoPresto 
Date:   2017-07-31T22:29:47Z

NIFI-4139 Added logic to handle legacy package configuration values for key 
providers.
Added unit tests.
Added resource files for un/limited strength cryptography scenarios.

commit 70e3a181336bd016510bf59b4be80a9d249f6395
Author: Andy LoPresto 
Date:   2017-07-31T22:33:55Z

NIFI-4139 Added ASL to test resources.

commit b9f3e956608e57d05be3f47869cdd08bb496627d
Author: Andy LoPresto 
Date:   2017-08-01T00:33:10Z

NIFI-4139 Moved legacy FQCN handling logic to CryptUtils.
Added unit tests to ensure application startup logic handles legacy FQCNs.




> Refactor KeyProvider interface from provenance module to framework-level 
> service
> 
>
> Key: NIFI-4139
> URL: https://issues.apache.org/jira/browse/NIFI-4139
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>  Labels: encryption, key-management, security
>
> The {{KeyProvider}} interface introduced in NIFI-3388 to allow the encrypted 
> provenance repository should be refactored to a framework-level service which 
> is accessible to the encrypted content repository and encrypted flowfile 
> repository as well. Exposing this common functionality will reduce code & 
> logic duplication and consolidate sensitive behavior in a single 

[GitHub] nifi pull request #2044: NIFI-4139 Extract key provider to framework-level s...

2017-07-31 Thread alopresto
GitHub user alopresto opened a pull request:

https://github.com/apache/nifi/pull/2044

NIFI-4139 Extract key provider to framework-level service

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alopresto/nifi NIFI-4139

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2044.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2044


commit 8e188c3463958902b0ac96583ee28e0abd558fa3
Author: Andy LoPresto 
Date:   2017-07-28T00:11:10Z

NIFI-4139 Moved key provider interface and implementations from 
nifi-data-provenance-utils module to nifi-security-utils module.

commit 7c5729b792128717835653773828623cfd354d5b
Author: Andy LoPresto 
Date:   2017-07-28T00:42:04Z

NIFI-4139 Refactored duplicate byte[] concatenation methods from utility 
classes and removed deprecation warnings from CipherUtility.

commit fc10d4c52b9de651443e69b306355b0eb72621e1
Author: Andy LoPresto 
Date:   2017-07-28T17:50:58Z

NIFI-4139 Created KeyProviderFactory to encapsulate key provider 
instantiation logic.

commit c9a644a8404c50f13442c65d2ea5e9476f379d44
Author: Andy LoPresto 
Date:   2017-07-31T22:29:47Z

NIFI-4139 Added logic to handle legacy package configuration values for key 
providers.
Added unit tests.
Added resource files for un/limited strength cryptography scenarios.

commit 70e3a181336bd016510bf59b4be80a9d249f6395
Author: Andy LoPresto 
Date:   2017-07-31T22:33:55Z

NIFI-4139 Added ASL to test resources.

commit b9f3e956608e57d05be3f47869cdd08bb496627d
Author: Andy LoPresto 
Date:   2017-08-01T00:33:10Z

NIFI-4139 Moved legacy FQCN handling logic to CryptUtils.
Added unit tests to ensure application startup logic handles legacy FQCNs.






[GitHub] nifi issue #2020: [NiFi-3973] Add PutKudu Processor for ingesting data to Ku...

2017-07-31 Thread cammachusa
Github user cammachusa commented on the issue:

https://github.com/apache/nifi/pull/2020
  
@joewitt, would you please help? I couldn't start nifi; it keeps saying 
"ClassNotFoundException: org.apache.nifi.serialization.RecordReaderFactory". I 
can build it without any error, but when starting nifi, ... Some of my 
dependencies on RecordReader may not be correct, or something is missing? Thanks
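Not stated in the thread, but a common cause of a ClassNotFoundException for org.apache.nifi.serialization.RecordReaderFactory at startup is the custom NAR missing its dependency on the standard services API NAR. A sketch of the Maven entries, assuming the usual NiFi bundle layout (the version shown is an assumption; match it to the target NiFi version):

```xml
<!-- In the bundle's -nar module: inherit the service APIs at runtime. -->
<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-standard-services-api-nar</artifactId>
    <version>1.3.0</version>
    <type>nar</type>
</dependency>

<!-- In the -processors module: compile against the API but do not bundle it. -->
<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-record-serialization-service-api</artifactId>
    <version>1.3.0</version>
    <scope>provided</scope>
</dependency>
```

If the API jar is bundled instead of provided, or the NAR parent is missing, the class is not visible on the NAR classpath at startup.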




[GitHub] nifi issue #2035: Update GetHTTP.java

2017-07-31 Thread jzonthemtn
Github user jzonthemtn commented on the issue:

https://github.com/apache/nifi/pull/2035
  
You might want to add a unit test to show that your change fixes the 
problem and to help prevent the problem from reappearing.




[jira] [Commented] (NIFI-4015) DeleteSQS Throws Exception Deleting Message

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108174#comment-16108174
 ] 

ASF GitHub Bot commented on NIFI-4015:
--

Github user jzonthemtn commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1888#discussion_r130491355
  
--- Diff: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/test/java/org/apache/nifi/processors/aws/sqs/ITDeleteSQS.java
 ---
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.sqs;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import com.amazonaws.regions.Regions;
+import com.amazonaws.services.sqs.model.Message;
+import com.amazonaws.services.sqs.model.ReceiveMessageResult;
+import com.amazonaws.services.sqs.model.SendMessageResult;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+
+import com.amazonaws.auth.PropertiesCredentials;
+import com.amazonaws.services.sqs.AmazonSQSClient;
+
+import org.junit.Before;
+import org.junit.Ignore;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+
+
+@Ignore("For local testing only - interacts with S3 so the credentials 
file must be configured and all necessary queues created")
+public class ITDeleteSQS {
+
+private final String CREDENTIALS_FILE = 
System.getProperty("user.home") + "/aws-credentials.properties";
--- End diff --

Can it use the default credential provider chain in case there is no 
properties file?


> DeleteSQS Throws Exception Deleting Message
> ---
>
> Key: NIFI-4015
> URL: https://issues.apache.org/jira/browse/NIFI-4015
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: James Wing
>Assignee: James Wing
>Priority: Minor
>
> While attempting to delete a message from an SQS queue, DeleteSQS throws the 
> following exception:
> {quote}
> DeleteSQS[id=6197f269-015c-1000-9317-818c01162722] Failed to delete 1 objects 
> from SQS due to com.amazonaws.services.sqs.model.AmazonSQSException: The 
> request must contain the parameter DeleteMessageBatchRequestEntry.1.Id. 
> (Service: AmazonSQS; Status Code: 400; Error Code: MissingParameter; Request 
> ID: eea76d96-a07d-5406-9838-3c3f26575223): 
> com.amazonaws.services.sqs.model.AmazonSQSException: The request must contain 
> the parameter DeleteMessageBatchRequestEntry.1.Id. (Service: AmazonSQS; 
> Status Code: 400; Error Code: MissingParameter; Request ID: 
> eea76d96-a07d-5406-9838-3c3f26575223)
> {quote}





[GitHub] nifi pull request #1888: NIFI-4015 NIFI-3999 Fix DeleteSQS Issues

2017-07-31 Thread jzonthemtn
Github user jzonthemtn commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1888#discussion_r130491355
  
--- Diff: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/test/java/org/apache/nifi/processors/aws/sqs/ITDeleteSQS.java
 ---
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.sqs;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import com.amazonaws.regions.Regions;
+import com.amazonaws.services.sqs.model.Message;
+import com.amazonaws.services.sqs.model.ReceiveMessageResult;
+import com.amazonaws.services.sqs.model.SendMessageResult;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+
+import com.amazonaws.auth.PropertiesCredentials;
+import com.amazonaws.services.sqs.AmazonSQSClient;
+
+import org.junit.Before;
+import org.junit.Ignore;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+
+
+@Ignore("For local testing only - interacts with S3 so the credentials 
file must be configured and all necessary queues created")
+public class ITDeleteSQS {
+
+private final String CREDENTIALS_FILE = 
System.getProperty("user.home") + "/aws-credentials.properties";
--- End diff --

Can it use the default credential provider chain in case there is no 
properties file?




[GitHub] nifi pull request #2043: rNIFI-4248: Adding Rya processor.

2017-07-31 Thread jzonthemtn
GitHub user jzonthemtn opened a pull request:

https://github.com/apache/nifi/pull/2043

rNIFI-4248: Adding Rya processor.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [X] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [X] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [X] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [X] Is your initial contribution a single, squashed commit?

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [X] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jzonthemtn/nifi NIFI-4248

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2043.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2043


commit 22a82dd8d9ec3d5cab0bafe19ab92b928fb1ba0d
Author: jzonthemtn 
Date:   2017-07-31T23:36:36Z

NIFI-4248: Adding Rya processor.






[jira] [Commented] (NIFI-4022) Use SASL Auth Scheme For Secured Zookeeper Client Interaction

2017-07-31 Thread Yolanda M. Davis (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108164#comment-16108164
 ] 

Yolanda M. Davis commented on NIFI-4022:


Adding some thoughts on approach given the challenges:

1) For existing NiFi users, how will we migrate Zookeeper entries from the old 
security scheme to the new scheme?
2) How should zNodes be reverted to open if kerberos is disabled?

Suggest a utility be written to migrate existing entries to the new secured 
scheme, as well as to revert those entries if kerberos is disabled. Kafka takes 
a similar approach (see https://kafka.apache.org/documentation/#zk_authz).

4) Currently users can control the authentication scheme via state management 
configuration for processors, yet not for clusters. Should we still maintain 
the practice of allowing schemes to be configurable for processors (with SASL 
being the new default)? Today, if Zookeeper is secured using SASL and users 
select Creator Only for processors, then the SASL auth scheme will be used. 
However, in my opinion there is no particular driver for processor state 
management to enforce Creator Only by default when security is enabled. For 
cluster management, at the least there should be flexibility to enable SASL 
ACLs when a secured Zookeeper environment is available.
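The suggested migration utility would walk the zNode tree and rewrite each ACL from the per-creator scheme to sasl. A minimal plain-Java sketch of the ACL rewrite itself; the surrounding ZooKeeper calls (getACL/setACL/getChildren) are assumed and left as comments, and the principal name is illustrative:

```java
public class AclMigrationSketch {

    // Rewrite an old-scheme ACL string ("auth:<id>:<perms>" or
    // "world:anyone:<perms>") to the SASL scheme for a Kerberos principal.
    static String toSaslAcl(String oldAcl, String principal) {
        // keep the permission bits, swap the scheme and id
        String perms = oldAcl.substring(oldAcl.lastIndexOf(':') + 1);
        return "sasl:" + principal + ":" + perms;
    }

    public static void main(String[] args) {
        // A real utility would recurse over the tree with zk.getChildren(...)
        // and apply zk.setACL(...) with the rewritten ACL for each node.
        System.out.println(toSaslAcl("auth:nifi-node-1:cdrwa", "nifi@EXAMPLE.COM"));
    }
}
```

Mapping the creator-bound "auth" id to a shared Kerberos principal is what lets all cluster nodes access the zNode, which is the motivation in the issue below.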

> Use SASL Auth Scheme For Secured Zookeeper Client Interaction
> -
>
> Key: NIFI-4022
> URL: https://issues.apache.org/jira/browse/NIFI-4022
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Yolanda M. Davis
>Assignee: Yolanda M. Davis
>
> NiFi uses Zookeeper to assist in cluster orchestration including leader 
> elections for Primary Node and Cluster Coordinator and to store state for 
> various processors (such as MonitorActivity). In secured Zookeeper 
> environments (supported by SASL + Kerberos) NiFi should protect the zNodes it 
> creates to prevent users or hosts, outside of a NiFi cluster, from accessing 
> or modifying entries.  In its current implementation security can be enforced 
> for processors that store state information in Zookeeper, however zNodes used 
> for managing Primary Node and Cluster Coordinator data are left open and 
> susceptible to change from any user.  Also when zNodes are secured for 
> processor state, a “Creator Only” policy is used which allows the system to 
> determine the identification of the NiFi node and protect any zNodes created 
> with that node id using Zookeeper’s “auth” scheme. The challenge with this 
> scheme is that it limits the ability for other NiFi nodes in the cluster to 
> access that zNode if needed (since it specifically binds that zNode to the 
> unique id of its creator).
>  
> To best protect zNodes created in Zookeeper by NiFi while maximizing NiFi’s 
> ability to share information across the cluster I propose that we move to 
> using Zookeeper’s SASL authentication scheme, which will allow the use of 
> Kerberos principals for securing zNodes with the appropriate permissions. For 
> maximum flexibility, these principals can be mapped appropriately in 
> Zookeeper, using auth-to-local rules, to ensure that nodes across the cluster 
> can share zNodes as needed. 
>  
> Potential Concerns/Challenges for Discussion:
>  
> 1)  For existing NiFi users how will we migrate Zookeeper entries from 
> the old security scheme to the new scheme?
> 2)  How should zNodes be reverted to open if kerberos is disabled?
> 3)  What will the performance impact be on the cluster once SASL scheme 
> is enabled (since we’d be moving from open to protected)? Would require 
> investigation
> 4)  Currently users can control authentication scheme via state 
> management configuration for processors yet not for clusters.  Should we 
> still maintain the practice of allowing schemes to be configurable for 
> processors (with SASL being the new default)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4248) Create processor for Apache Rya

2017-07-31 Thread Jeff Zemerick (JIRA)
Jeff Zemerick created NIFI-4248:
---

 Summary: Create processor for Apache Rya
 Key: NIFI-4248
 URL: https://issues.apache.org/jira/browse/NIFI-4248
 Project: Apache NiFi
  Issue Type: Task
  Components: Extensions
Reporter: Jeff Zemerick
Priority: Minor


Create a processor to ingest triples into Apache Rya.





[jira] [Updated] (NIFI-4247) TLS Toolkit should parse regex in SAN fields

2017-07-31 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-4247:

Description: 
Similar to the way the TLS Toolkit can generate multiple certificates with one 
command through parsing some minimal regular expression syntax in the hostname 
field, the SAN field should be processed the same way. Currently, a command 
which generates three hosts via {{-n "server[1-3].com"}} cannot have the 
corresponding SAN entries provided inline. Once NIFI-4222 is implemented, the 
hostname will be present in the SAN list by default, but if there are 
additional desired entries, the command must be split and run individually. 

Example:

||Desired hostname||Desired SAN||
|{{server1.com}}|{{server1.com, otherserver1.com}}|
|{{server2.com}}|{{server2.com, otherserver2.com}}|
|{{server3.com}}|{{server3.com, otherserver3.com}}|

{code}
$ ./bin/tls-toolkit.sh standalone -n "server[1-3].com" 
--subjectAlternativeNames "otherserver[1-3].com"
{code}

Currently, this must be run as: 

{code}
$ ./bin/tls-toolkit.sh standalone -n "server1.com" --subjectAlternativeNames 
"otherserver1.com"
$ ./bin/tls-toolkit.sh standalone -n "server2.com" --subjectAlternativeNames 
"otherserver2.com"
$ ./bin/tls-toolkit.sh standalone -n "server3.com" --subjectAlternativeNames 
"otherserver3.com"
{code}

  was:
Similar to the way the TLS Toolkit can generate multiple certificates with one 
command through parsing some minimal regular expression syntax in the hostname 
field, the SAN field should be processed the same way. Currently, a command 
which generates three hosts via {{ -n "server[1-3].com" }} cannot have the 
corresponding SAN entries provided inline. Once NIFI-4222 is implemented, the 
hostname will be present in the SAN list by default, but if there are 
additional desired entries, the command must be split and run individually. 

Example:

||Desired hostname||Desired SAN||
|{{server1.com}}|{{server1.com, otherserver1.com}}|
|{{server2.com}}|{{server2.com, otherserver2.com}}|
|{{server3.com}}|{{server3.com, otherserver3.com}}|

{code}
$ ./bin/tls-toolkit.sh standalone -n "server[1-3].com" 
--subjectAlternativeNames "otherserver[1-3].com"
{code}

Currently, this must be run as: 

{code}
$ ./bin/tls-toolkit.sh standalone -n "server1.com" --subjectAlternativeNames 
"otherserver1.com"
$ ./bin/tls-toolkit.sh standalone -n "server2.com" --subjectAlternativeNames 
"otherserver2.com"
$ ./bin/tls-toolkit.sh standalone -n "server3.com" --subjectAlternativeNames 
"otherserver3.com"
{code}


> TLS Toolkit should parse regex in SAN fields
> 
>
> Key: NIFI-4247
> URL: https://issues.apache.org/jira/browse/NIFI-4247
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.3.0
>Reporter: Andy LoPresto
>  Labels: certificate, security, subjectAltName, tls, tls-toolkit
>
> Similar to the way the TLS Toolkit can generate multiple certificates with 
> one command through parsing some minimal regular expression syntax in the 
> hostname field, the SAN field should be processed the same way. Currently, a 
> command which generates three hosts via {{-n "server[1-3].com"}} cannot have 
> the corresponding SAN entries provided inline. Once NIFI-4222 is implemented, 
> the hostname will be present in the SAN list by default, but if there are 
> additional desired entries, the command must be split and run individually. 
> Example:
> ||Desired hostname||Desired SAN||
> |{{server1.com}}|{{server1.com, otherserver1.com}}|
> |{{server2.com}}|{{server2.com, otherserver2.com}}|
> |{{server3.com}}|{{server3.com, otherserver3.com}}|
> {code}
> $ ./bin/tls-toolkit.sh standalone -n "server[1-3].com" 
> --subjectAlternativeNames "otherserver[1-3].com"
> {code}
> Currently, this must be run as: 
> {code}
> $ ./bin/tls-toolkit.sh standalone -n "server1.com" --subjectAlternativeNames 
> "otherserver1.com"
> $ ./bin/tls-toolkit.sh standalone -n "server2.com" --subjectAlternativeNames 
> "otherserver2.com"
> $ ./bin/tls-toolkit.sh standalone -n "server3.com" --subjectAlternativeNames 
> "otherserver3.com"
> {code}
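For illustration, the kind of expansion being requested can be sketched as 
follows (hypothetical helper names; the toolkit's actual parsing lives in the 
TLS Toolkit code, not here):

```python
import re

def expand_range_pattern(pattern):
    """Expand the toolkit's minimal range syntax, e.g. 'server[1-3].com'
    -> ['server1.com', 'server2.com', 'server3.com']. Patterns without a
    [lo-hi] range are returned unchanged as a single-element list."""
    m = re.search(r"\[(\d+)-(\d+)\]", pattern)
    if not m:
        return [pattern]
    lo, hi = int(m.group(1)), int(m.group(2))
    return [pattern[:m.start()] + str(i) + pattern[m.end():]
            for i in range(lo, hi + 1)]

def pair_hostnames_with_sans(host_pattern, san_pattern):
    """Pair each expanded hostname with its corresponding expanded SAN,
    as the single-command form above would require."""
    return list(zip(expand_range_pattern(host_pattern),
                    expand_range_pattern(san_pattern)))
```

With these helpers, `pair_hostnames_with_sans("server[1-3].com", 
"otherserver[1-3].com")` yields the hostname/SAN pairs from the table above.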





[jira] [Updated] (NIFI-4247) TLS Toolkit should parse regex in SAN fields

2017-07-31 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-4247:

Description: 
Similar to the way the TLS Toolkit can generate multiple certificates with one 
command through parsing some minimal regular expression syntax in the hostname 
field, the SAN field should be processed the same way. Currently, a command 
which generates three hosts via {{ -n "server[1-3].com" }} cannot have the 
corresponding SAN entries provided inline. Once NIFI-4222 is implemented, the 
hostname will be present in the SAN list by default, but if there are 
additional desired entries, the command must be split and run individually. 

Example:

||Desired hostname||Desired SAN||
|{{server1.com}}|{{server1.com, otherserver1.com}}|
|{{server2.com}}|{{server2.com, otherserver2.com}}|
|{{server3.com}}|{{server3.com, otherserver3.com}}|

{code}
$ ./bin/tls-toolkit.sh standalone -n "server[1-3].com" 
--subjectAlternativeNames "otherserver[1-3].com"
{code}

Currently, this must be run as: 

{code}
$ ./bin/tls-toolkit.sh standalone -n "server1.com" --subjectAlternativeNames 
"otherserver1.com"
$ ./bin/tls-toolkit.sh standalone -n "server2.com" --subjectAlternativeNames 
"otherserver2.com"
$ ./bin/tls-toolkit.sh standalone -n "server3.com" --subjectAlternativeNames 
"otherserver3.com"
{code}

  was:
Similar to the way the TLS Toolkit can generate multiple certificates with one 
command through parsing some minimal regular expression syntax in the hostname 
field, the SAN field should be processed the same way. Currently, a command 
which generates three hosts via {{ -n "server[1-3].com" }} cannot have the 
corresponding SAN entries provided inline. Once NIFI-4222 is implemented, the 
hostname will be present in the SAN list by default, but if there are 
additional desired entries, the command must be split and run individually. 

Example:

||Desired hostname||Desired SAN||
|{{server1.com}}|{{server1.com, otherserver1.com}}|
|{{server2.com}}|{{server2.com, otherserver2.com}}|
|{{server3.com}}|{{server3.com, otherserver3.com}}|

{code}
$ ./bin/tls-toolkit.sh standalone -n "server[1-3].com" 
--subjectAlternativeNames "otherserver[1-3].com"
{code}

Currently, this must be run as: 

{code}
$ ./bin/tls-toolkit.sh standalone -n "server1.com" --subjectAlternativeNames 
"otherserver1.com"
$ ./bin/tls-toolkit.sh standalone -n "server1.com" --subjectAlternativeNames 
"otherserver2.com"
$ ./bin/tls-toolkit.sh standalone -n "server1.com" --subjectAlternativeNames 
"otherserver3.com"
{code}


> TLS Toolkit should parse regex in SAN fields
> 
>
> Key: NIFI-4247
> URL: https://issues.apache.org/jira/browse/NIFI-4247
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.3.0
>Reporter: Andy LoPresto
>  Labels: certificate, security, subjectAltName, tls, tls-toolkit
>
> Similar to the way the TLS Toolkit can generate multiple certificates with 
> one command through parsing some minimal regular expression syntax in the 
> hostname field, the SAN field should be processed the same way. Currently, a 
> command which generates three hosts via {{ -n "server[1-3].com" }} cannot 
> have the corresponding SAN entries provided inline. Once NIFI-4222 is 
> implemented, the hostname will be present in the SAN list by default, but if 
> there are additional desired entries, the command must be split and run 
> individually. 
> Example:
> ||Desired hostname||Desired SAN||
> |{{server1.com}}|{{server1.com, otherserver1.com}}|
> |{{server2.com}}|{{server2.com, otherserver2.com}}|
> |{{server3.com}}|{{server3.com, otherserver3.com}}|
> {code}
> $ ./bin/tls-toolkit.sh standalone -n "server[1-3].com" 
> --subjectAlternativeNames "otherserver[1-3].com"
> {code}
> Currently, this must be run as: 
> {code}
> $ ./bin/tls-toolkit.sh standalone -n "server1.com" --subjectAlternativeNames 
> "otherserver1.com"
> $ ./bin/tls-toolkit.sh standalone -n "server2.com" --subjectAlternativeNames 
> "otherserver2.com"
> $ ./bin/tls-toolkit.sh standalone -n "server3.com" --subjectAlternativeNames 
> "otherserver3.com"
> {code}





[jira] [Created] (NIFI-4247) TLS Toolkit should parse regex in SAN fields

2017-07-31 Thread Andy LoPresto (JIRA)
Andy LoPresto created NIFI-4247:
---

 Summary: TLS Toolkit should parse regex in SAN fields
 Key: NIFI-4247
 URL: https://issues.apache.org/jira/browse/NIFI-4247
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Tools and Build
Affects Versions: 1.3.0
Reporter: Andy LoPresto


Similar to the way the TLS Toolkit can generate multiple certificates with one 
command through parsing some minimal regular expression syntax in the hostname 
field, the SAN field should be processed the same way. Currently, a command 
which generates three hosts via {{ -n "server[1-3].com" }} cannot have the 
corresponding SAN entries provided inline. Once NIFI-4222 is implemented, the 
hostname will be present in the SAN list by default, but if there are 
additional desired entries, the command must be split and run individually. 

Example:

||Desired hostname||Desired SAN||
|{{server1.com}}|{{server1.com, otherserver1.com}}|
|{{server2.com}}|{{server2.com, otherserver2.com}}|
|{{server3.com}}|{{server3.com, otherserver3.com}}|

{code}
$ ./bin/tls-toolkit.sh standalone -n "server[1-3].com" 
--subjectAlternativeNames "otherserver[1-3].com"
{code}

Currently, this must be run as: 

{code}
$ ./bin/tls-toolkit.sh standalone -n "server1.com" --subjectAlternativeNames 
"otherserver1.com"
$ ./bin/tls-toolkit.sh standalone -n "server1.com" --subjectAlternativeNames 
"otherserver2.com"
$ ./bin/tls-toolkit.sh standalone -n "server1.com" --subjectAlternativeNames 
"otherserver3.com"
{code}





[GitHub] nifi issue #2020: [NiFi-3973] Add PutKudu Processor for ingesting data to Ku...

2017-07-31 Thread rickysaltzer
Github user rickysaltzer commented on the issue:

https://github.com/apache/nifi/pull/2020
  
@joewitt might be the best person to answer @cammachusa's question 
regarding Travis. I seem to recall that it can be finicky, but I'm not 100% 
sure of the current state of CI stability. 




[jira] [Commented] (NIFIREG-5) Develop data model that can be shared by nifi and nifi-registry

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-5?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107860#comment-16107860
 ] 

ASF GitHub Bot commented on NIFIREG-5:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-registry/pull/2


> Develop data model that can be shared by nifi and nifi-registry
> ---
>
> Key: NIFIREG-5
> URL: https://issues.apache.org/jira/browse/NIFIREG-5
> Project: NiFi Registry
>  Issue Type: Task
>Reporter: Mark Payne
>Assignee: Mark Payne
>






[GitHub] nifi-registry pull request #2: NIFIREG-5: Initial implementation of nifi-reg...

2017-07-31 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-registry/pull/2




[jira] [Commented] (NIFIREG-5) Develop data model that can be shared by nifi and nifi-registry

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-5?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107858#comment-16107858
 ] 

ASF GitHub Bot commented on NIFIREG-5:
--

Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/2
  
Looks good, going to merge, thanks!


> Develop data model that can be shared by nifi and nifi-registry
> ---
>
> Key: NIFIREG-5
> URL: https://issues.apache.org/jira/browse/NIFIREG-5
> Project: NiFi Registry
>  Issue Type: Task
>Reporter: Mark Payne
>Assignee: Mark Payne
>






[jira] [Commented] (NIFI-4246) OAuth 2 Authorization support - Client Credentials Grant

2017-07-31 Thread Jeremy Dyer (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107822#comment-16107822
 ] 

Jeremy Dyer commented on NIFI-4246:
---

Working on the first pass of this implementation. I have most of the code 
written and am just working on a test plan now.

> OAuth 2 Authorization support - Client Credentials Grant
> 
>
> Key: NIFI-4246
> URL: https://issues.apache.org/jira/browse/NIFI-4246
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Jeremy Dyer
>Assignee: Jeremy Dyer
>
> If you're interacting with REST endpoints on the web, chances are you are 
> going to run into an OAuth2-secured webservice. The IETF (Internet Engineering 
> Task Force) defines four methods by which OAuth2 authorization can occur. This 
> JIRA is focused solely on the Client Credentials Grant method defined at 
> https://tools.ietf.org/html/rfc6749#section-4.4
> This implementation should provide a ControllerService in which the end user 
> can configure the credentials for obtaining the authorization grant (access 
> token) from the resource owner. In turn, a new property will be added to the 
> InvokeHTTP processor (if it doesn't already exist from one of the other JIRA 
> efforts similar to this one) so that the processor can reference this 
> controller service to obtain the access token and insert the appropriate HTTP 
> header (Authorization: Bearer {access_token}). This lets the InvokeHTTP 
> processor interact with OAuth-protected resources without having to set up 
> credentials for each InvokeHTTP processor, saving time and complexity.





[jira] [Assigned] (NIFI-4245) OAuth 2 Authorization support - Resource Owner Password Credentials Grant

2017-07-31 Thread Jeremy Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Dyer reassigned NIFI-4245:
-

Assignee: Jeremy Dyer

> OAuth 2 Authorization support - Resource Owner Password Credentials Grant
> -
>
> Key: NIFI-4245
> URL: https://issues.apache.org/jira/browse/NIFI-4245
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Jeremy Dyer
>Assignee: Jeremy Dyer
>
> If you're interacting with REST endpoints on the web, chances are you are 
> going to run into an OAuth2-secured webservice. The IETF (Internet Engineering 
> Task Force) defines four methods by which OAuth2 authorization can occur. This 
> JIRA is focused solely on the Resource Owner Password Credentials Grant method 
> defined at https://tools.ietf.org/html/rfc6749#section-4.3
> This implementation should provide a ControllerService in which the end user 
> can configure the credentials for obtaining the authorization grant (access 
> token) from the resource owner. In turn, a new property will be added to the 
> InvokeHTTP processor (if it doesn't already exist from one of the other JIRA 
> efforts similar to this one) so that the processor can reference this 
> controller service to obtain the access token and insert the appropriate HTTP 
> header (Authorization: Bearer {access_token}). This lets the InvokeHTTP 
> processor interact with OAuth-protected resources without having to set up 
> credentials for each InvokeHTTP processor, saving time and complexity.





[jira] [Commented] (NIFI-4245) OAuth 2 Authorization support - Resource Owner Password Credentials Grant

2017-07-31 Thread Jeremy Dyer (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107823#comment-16107823
 ] 

Jeremy Dyer commented on NIFI-4245:
---

Working on the first pass of this now. I have most of the code written and am 
just working on a test plan now.

> OAuth 2 Authorization support - Resource Owner Password Credentials Grant
> -
>
> Key: NIFI-4245
> URL: https://issues.apache.org/jira/browse/NIFI-4245
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Jeremy Dyer
>
> If you're interacting with REST endpoints on the web, chances are you are 
> going to run into an OAuth2-secured webservice. The IETF (Internet Engineering 
> Task Force) defines four methods by which OAuth2 authorization can occur. This 
> JIRA is focused solely on the Resource Owner Password Credentials Grant method 
> defined at https://tools.ietf.org/html/rfc6749#section-4.3
> This implementation should provide a ControllerService in which the end user 
> can configure the credentials for obtaining the authorization grant (access 
> token) from the resource owner. In turn, a new property will be added to the 
> InvokeHTTP processor (if it doesn't already exist from one of the other JIRA 
> efforts similar to this one) so that the processor can reference this 
> controller service to obtain the access token and insert the appropriate HTTP 
> header (Authorization: Bearer {access_token}). This lets the InvokeHTTP 
> processor interact with OAuth-protected resources without having to set up 
> credentials for each InvokeHTTP processor, saving time and complexity.





[jira] [Created] (NIFI-4246) OAuth 2 Authorization support - Client Credentials Grant

2017-07-31 Thread Jeremy Dyer (JIRA)
Jeremy Dyer created NIFI-4246:
-

 Summary: OAuth 2 Authorization support - Client Credentials Grant
 Key: NIFI-4246
 URL: https://issues.apache.org/jira/browse/NIFI-4246
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Jeremy Dyer
Assignee: Jeremy Dyer








[jira] [Updated] (NIFI-4246) OAuth 2 Authorization support - Client Credentials Grant

2017-07-31 Thread Jeremy Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Dyer updated NIFI-4246:
--
Description: 
If you're interacting with REST endpoints on the web, chances are you are going 
to run into an OAuth2-secured webservice. The IETF (Internet Engineering Task 
Force) defines four methods by which OAuth2 authorization can occur. This JIRA 
is focused solely on the Client Credentials Grant method defined at 
https://tools.ietf.org/html/rfc6749#section-4.4

This implementation should provide a ControllerService in which the end user 
can configure the credentials for obtaining the authorization grant (access 
token) from the resource owner. In turn, a new property will be added to the 
InvokeHTTP processor (if it doesn't already exist from one of the other JIRA 
efforts similar to this one) so that the processor can reference this 
controller service to obtain the access token and insert the appropriate HTTP 
header (Authorization: Bearer {access_token}). This lets the InvokeHTTP 
processor interact with OAuth-protected resources without having to set up 
credentials for each InvokeHTTP processor, saving time and complexity.

> OAuth 2 Authorization support - Client Credentials Grant
> 
>
> Key: NIFI-4246
> URL: https://issues.apache.org/jira/browse/NIFI-4246
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Jeremy Dyer
>Assignee: Jeremy Dyer
>
> If you're interacting with REST endpoints on the web, chances are you are 
> going to run into an OAuth2-secured webservice. The IETF (Internet Engineering 
> Task Force) defines four methods by which OAuth2 authorization can occur. This 
> JIRA is focused solely on the Client Credentials Grant method defined at 
> https://tools.ietf.org/html/rfc6749#section-4.4
> This implementation should provide a ControllerService in which the end user 
> can configure the credentials for obtaining the authorization grant (access 
> token) from the resource owner. In turn, a new property will be added to the 
> InvokeHTTP processor (if it doesn't already exist from one of the other JIRA 
> efforts similar to this one) so that the processor can reference this 
> controller service to obtain the access token and insert the appropriate HTTP 
> header (Authorization: Bearer {access_token}). This lets the InvokeHTTP 
> processor interact with OAuth-protected resources without having to set up 
> credentials for each InvokeHTTP processor, saving time and complexity.





[jira] [Updated] (NIFI-4246) OAuth 2 Authorization support - Client Credentials Grant

2017-07-31 Thread Jeremy Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Dyer updated NIFI-4246:
--
Description: 
If you're interacting with REST endpoints on the web, chances are you are going 
to run into an OAuth2-secured webservice. The IETF (Internet Engineering Task 
Force) defines four methods by which OAuth2 authorization can occur. This JIRA 
is focused solely on the Client Credentials Grant method defined at 
https://tools.ietf.org/html/rfc6749#section-4.4

This implementation should provide a ControllerService in which the end user 
can configure the credentials for obtaining the authorization grant (access 
token) from the resource owner. In turn, a new property will be added to the 
InvokeHTTP processor (if it doesn't already exist from one of the other JIRA 
efforts similar to this one) so that the processor can reference this 
controller service to obtain the access token and insert the appropriate HTTP 
header (Authorization: Bearer {access_token}). This lets the InvokeHTTP 
processor interact with OAuth-protected resources without having to set up 
credentials for each InvokeHTTP processor, saving time and complexity.

  was:
If your interacting with REST endpoints on the web chances are you are going to 
run into an OAuth2 secured webservice. The IETF (Internet Engineering Task 
Force) defines 4 methods in which OAuth2 authorization can occur. This JIRA is 
focused solely on the Authorization Code Grant method defined at 
https://tools.ietf.org/html/rfc6749#section-4.4

This implementation should provide a ControllerService in which the enduser can 
configure the credentials for obtaining the authorization grant (access token) 
from the resource owner. In turn a new property will be added to the InvokeHTTP 
processor (if it doesn't already exist from one of the other JIRA efforts 
similar to this one) where the processor can reference this controller service 
to obtain the access token and insert the appropriate HTTP header 
(Authorization: Bearer
{access_token}
) so that the InvokeHTTP processor can interact with the OAuth protected 
resources without having to worry about setting up the credentials for each 
InvokeHTTP processor saving time and complexity.


> OAuth 2 Authorization support - Client Credentials Grant
> 
>
> Key: NIFI-4246
> URL: https://issues.apache.org/jira/browse/NIFI-4246
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Jeremy Dyer
>Assignee: Jeremy Dyer
>
> If you're interacting with REST endpoints on the web, chances are you are 
> going to run into an OAuth2-secured webservice. The IETF (Internet Engineering 
> Task Force) defines four methods by which OAuth2 authorization can occur. This 
> JIRA is focused solely on the Client Credentials Grant method defined at 
> https://tools.ietf.org/html/rfc6749#section-4.4
> This implementation should provide a ControllerService in which the end user 
> can configure the credentials for obtaining the authorization grant (access 
> token) from the resource owner. In turn, a new property will be added to the 
> InvokeHTTP processor (if it doesn't already exist from one of the other JIRA 
> efforts similar to this one) so that the processor can reference this 
> controller service to obtain the access token and insert the appropriate HTTP 
> header (Authorization: Bearer {access_token}). This lets the InvokeHTTP 
> processor interact with OAuth-protected resources without having to set up 
> credentials for each InvokeHTTP processor, saving time and complexity.





[jira] [Created] (NIFI-4245) OAuth 2 Authorization support - Resource Owner Password Credentials Grant

2017-07-31 Thread Jeremy Dyer (JIRA)
Jeremy Dyer created NIFI-4245:
-

 Summary: OAuth 2 Authorization support - Resource Owner Password 
Credentials Grant
 Key: NIFI-4245
 URL: https://issues.apache.org/jira/browse/NIFI-4245
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Jeremy Dyer


If you're interacting with REST endpoints on the web, chances are you are going 
to run into an OAuth2-secured webservice. The IETF (Internet Engineering Task 
Force) defines four methods by which OAuth2 authorization can occur. This JIRA 
is focused solely on the Resource Owner Password Credentials Grant method 
defined at https://tools.ietf.org/html/rfc6749#section-4.3

This implementation should provide a ControllerService in which the end user 
can configure the credentials for obtaining the authorization grant (access 
token) from the resource owner. In turn, a new property will be added to the 
InvokeHTTP processor (if it doesn't already exist from one of the other JIRA 
efforts similar to this one) so that the processor can reference this 
controller service to obtain the access token and insert the appropriate HTTP 
header (Authorization: Bearer {access_token}). This lets the InvokeHTTP 
processor interact with OAuth-protected resources without having to set up 
credentials for each InvokeHTTP processor, saving time and complexity.
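As a rough illustration of what such a ControllerService would do under RFC 
6749 section 4.3 (the token endpoint URL, client credentials, and helper names 
below are all assumptions made for the sketch, not NiFi APIs):

```python
import base64
import json
import urllib.parse
import urllib.request

def bearer_header(access_token):
    """Header the InvokeHTTP processor would attach once the controller
    service has supplied a token."""
    return {"Authorization": "Bearer " + access_token}

def fetch_token(token_url, client_id, client_secret, username, password):
    """Resource Owner Password Credentials grant (RFC 6749 section 4.3):
    exchange the resource owner's username/password for an access token.
    Illustrative only -- a real service would also cache the token and
    handle expiry/refresh."""
    body = urllib.parse.urlencode({
        "grant_type": "password",
        "username": username,
        "password": password,
    }).encode()
    # Client authenticates to the token endpoint with HTTP Basic auth.
    creds = base64.b64encode(
        ("%s:%s" % (client_id, client_secret)).encode()).decode()
    request = urllib.request.Request(
        token_url,
        data=body,
        headers={
            "Authorization": "Basic " + creds,
            "Content-Type": "application/x-www-form-urlencoded",
        })
    with urllib.request.urlopen(request) as response:
        return json.load(response)["access_token"]
```

InvokeHTTP would then merge `bearer_header(fetch_token(...))` into each 
outgoing request, so credentials are configured once on the service rather 
than on every processor.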





[jira] [Created] (NIFI-4244) OAuth 2 Authorization support - Implicit Grant

2017-07-31 Thread Jeremy Dyer (JIRA)
Jeremy Dyer created NIFI-4244:
-

 Summary: OAuth 2 Authorization support - Implicit Grant
 Key: NIFI-4244
 URL: https://issues.apache.org/jira/browse/NIFI-4244
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Jeremy Dyer


If you're interacting with REST endpoints on the web, chances are you are going 
to run into an OAuth2-secured webservice. The IETF (Internet Engineering Task 
Force) defines four methods by which OAuth2 authorization can occur. This JIRA 
is focused solely on the Implicit Grant method defined at 
https://tools.ietf.org/html/rfc6749#section-4.2

This implementation should provide a ControllerService in which the end user 
can configure the credentials for obtaining the authorization grant (access 
token) from the resource owner. In turn, a new property will be added to the 
InvokeHTTP processor (if it doesn't already exist from one of the other JIRA 
efforts similar to this one) so that the processor can reference this 
controller service to obtain the access token and insert the appropriate HTTP 
header (Authorization: Bearer {access_token}). This lets the InvokeHTTP 
processor interact with OAuth-protected resources without having to set up 
credentials for each InvokeHTTP processor, saving time and complexity.





[jira] [Updated] (NIFI-3338) Improve Add Processor UX with usage guidance, in-depth help, and solution-based examples

2017-07-31 Thread Rob Moran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rob Moran updated NIFI-3338:

Attachment: (was: processor-icon-samples.png)

> Improve Add Processor UX with usage guidance, in-depth help, and 
> solution-based examples 
> -
>
> Key: NIFI-3338
> URL: https://issues.apache.org/jira/browse/NIFI-3338
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Rob Moran
>Priority: Minor
>
> This issue is to capture discussion and concept designs for an improved Add 
> Processor experience in NiFi.
> Things that could potentially be included:
> * Documentation (link, or perhaps some embedded information)
> * Links to existing templates that use the processor
> * Solution-based videos and articles that demo processor use



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4191) Display process-specific component icons

2017-07-31 Thread Rob Moran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rob Moran updated NIFI-4191:

Attachment: processor-icon-samples.png
processor-icons.zip

> Display process-specific component icons
> 
>
> Key: NIFI-4191
> URL: https://issues.apache.org/jira/browse/NIFI-4191
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Rob Moran
>Priority: Minor
> Attachments: processor-icon-samples.png, processor-icons.zip
>
>
> It would be nice to expand the iconography used for components on the NiFi 
> canvas to more accurately describe the pattern type (e.g., split, route, 
> join, partition, etc.) or system being used (e.g., Kafka, HBase, etc.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4191) Display process-specific component icons

2017-07-31 Thread Rob Moran (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107767#comment-16107767
 ] 

Rob Moran commented on NIFI-4191:
-

We should also look at adding some legal text regarding the intent of 
displaying other Apache and third-party logos. I'm thinking this should be 
added to the Add Processor dialog, user documentation, and perhaps the About 
dialog as well. I will create a subtask to document the work required and 
recommend layout changes.

> Display process-specific component icons
> 
>
> Key: NIFI-4191
> URL: https://issues.apache.org/jira/browse/NIFI-4191
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Rob Moran
>Priority: Minor
>
> It would be nice to expand the iconography used for components on the NiFi 
> canvas to more accurately describe the pattern type (e.g., split, route, 
> join, partition, etc.) or system being used (e.g., Kafka, HBase, etc.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4191) Display process-specific component icons

2017-07-31 Thread Rob Moran (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107765#comment-16107765
 ] 

Rob Moran commented on NIFI-4191:
-

I've attached a PNG sample and individual SVG files (ZIP) for each icon/logo 
seen in the sample. The idea is to have a base set of icons representing 
patterns, operational intent, etc., as mentioned in the description. For 
processors tied to a particular system or technology, display a unique icon 
such as a logo, or an icon with an explicit label (e.g., HTTP).

To start, these will enhance the UI with a variety of icons that help tell more 
of a visual story about the data flow. Going forward, this will hopefully serve 
as a foundation on which to build a more engaging Add Processor experience by 
allowing users to explore NiFi's offerings by pattern, workflow, technology, 
etc. This is related to NIFI-3338.

> Display process-specific component icons
> 
>
> Key: NIFI-4191
> URL: https://issues.apache.org/jira/browse/NIFI-4191
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Rob Moran
>Priority: Minor
>
> It would be nice to expand the iconography used for components on the NiFi 
> canvas to more accurately describe the pattern type (e.g., split, route, 
> join, partition, etc.) or system being used (e.g., Kafka, HBase, etc.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Issue Comment Deleted] (NIFI-3338) Improve Add Processor UX with usage guidance, in-depth help, and solution-based examples

2017-07-31 Thread Rob Moran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rob Moran updated NIFI-3338:

Comment: was deleted

(was: I've attached a PNG sample and individual SVG files (ZIP) for each 
icon/logo seen in the sample. The idea is to have a base set of icons 
representing patterns, operational intent, etc. as mentioned in the 
description. For other processors of a particular system or for a specific 
technology, display a unique icon such as a logo or icon and explicit label 
(e.g., HTTP).

To start these will help enhance the UI with a variety of icons to help users 
tell more of a visual story about the data flow. Going forward this will 
hopefully serve as a foundation on which to build a more engaging Add Processor 
experience by allowing users to explore NiFi's offerings by pattern, workflow, 
technology, etc. This is related to NIFI-3338.)

> Improve Add Processor UX with usage guidance, in-depth help, and 
> solution-based examples 
> -
>
> Key: NIFI-3338
> URL: https://issues.apache.org/jira/browse/NIFI-3338
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Rob Moran
>Priority: Minor
> Attachments: processor-icon-samples.png, processor-icon-samples.png
>
>
> This issue is to capture discussion and concept designs for an improved Add 
> Processor experience in NiFi.
> Things that could potentially be included:
> * Documentation (link, or perhaps some embedded information)
> * Links to existing templates that use the processor
> * Solution-based videos and articles that demo processor use



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-3338) Improve Add Processor UX with usage guidance, in-depth help, and solution-based examples

2017-07-31 Thread Rob Moran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rob Moran updated NIFI-3338:

Attachment: processor-icon-samples.png
processor-icon-samples.png

I've attached a PNG sample and individual SVG files (ZIP) for each icon/logo 
seen in the sample. The idea is to have a base set of icons representing 
patterns, operational intent, etc., as mentioned in the description. For 
processors tied to a particular system or technology, display a unique icon 
such as a logo, or an icon with an explicit label (e.g., HTTP).

To start, these will enhance the UI with a variety of icons that help users tell 
more of a visual story about the data flow. Going forward, this will hopefully 
serve as a foundation on which to build a more engaging Add Processor experience 
by allowing users to explore NiFi's offerings by pattern, workflow, technology, 
etc. This is related to NIFI-3338.

> Improve Add Processor UX with usage guidance, in-depth help, and 
> solution-based examples 
> -
>
> Key: NIFI-3338
> URL: https://issues.apache.org/jira/browse/NIFI-3338
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Rob Moran
>Priority: Minor
> Attachments: processor-icon-samples.png, processor-icon-samples.png
>
>
> This issue is to capture discussion and concept designs for an improved Add 
> Processor experience in NiFi.
> Things that could potentially be included:
> * Documentation (link, or perhaps some embedded information)
> * Links to existing templates that use the processor
> * Solution-based videos and articles that demo processor use



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-1580) Allow double-click to display config of processor

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107712#comment-16107712
 ] 

ASF GitHub Bot commented on NIFI-1580:
--

Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/2009
  
Sorry about that @yuri1969. I thought we had reached consensus that this was 
expected, as long as the current behavior of adding a bend with a double 
click stayed intact...


> Allow double-click to display config of processor
> -
>
> Key: NIFI-1580
> URL: https://issues.apache.org/jira/browse/NIFI-1580
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Core UI
>Affects Versions: 0.4.1
> Environment: all
>Reporter: Uwe Geercken
>Priority: Minor
>  Labels: features, processor, ui
>
> A user frequently has to open the "config" dialog when designing NiFi flows. 
> Each time, the user has to right-click the processor and select "config" from 
> the menu.
> It would be quicker if it were possible to double-click a processor - 
> or maybe its title area - to display the config dialog.
> This could also be designed as a configurable UI setting that the user can 
> define (whether double-clicking opens the config dialog, does something else, 
> or simply does nothing).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2009: NIFI-1580 - Allow double-click to display config

2017-07-31 Thread scottyaslan
Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/2009
  
Sorry about that @yuri1969. I thought we had reached consensus that this was 
expected, as long as the current behavior of adding a bend with a double 
click stayed intact...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi issue #2020: [NiFi-3973] Add PutKudu Processor for ingesting data to Ku...

2017-07-31 Thread cammachusa
Github user cammachusa commented on the issue:

https://github.com/apache/nifi/pull/2020
  
Hi @joewitt and @rickysaltzer, I am following PutParquet to implement 
PutKudu, so it has the same dependencies as PutParquet, and it builds 
successfully locally. But the build on Travis always failed. I realized this 
morning that the reason is that the Record Reader is not properly referenced 
and deployed for my PutKudu (I had deployed it manually to test PutKudu first). 
I checked the log, and here is the message: "ClassNotFoundException: 
org.apache.nifi.serialization.RecordReaderFactory"
I compared every file of PutParquet with my PutKudu but couldn't find any 
missing part. Would you please advise?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-4024) Create EvaluateRecordPath processor

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107694#comment-16107694
 ] 

ASF GitHub Bot commented on NIFI-4024:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r130413186
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
@@ -0,0 +1,323 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.hbase.put.PutColumn;
+import org.apache.nifi.hbase.put.PutFlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+@EventDriven
+@SupportsBatching
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hadoop", "hbase", "put", "record"})
+@CapabilityDescription("Adds rows to HBase based on the contents of a 
flowfile using a configured record reader.")
+@ReadsAttribute(attribute = "restart.index", description = "Reads 
restart.index when it needs to replay part of a record set that did not get 
into HBase.")
+@WritesAttribute(attribute = "restart.index", description = "Writes 
restart.index when a batch fails to be insert into HBase")
+public class PutHBaseRecord extends AbstractPutHBase {
+
+protected static final PropertyDescriptor ROW_FIELD_NAME = new 
PropertyDescriptor.Builder()
+.name("Row Identifier Field Path")
+.description("Specifies the name of a record field whose value 
should be used as the row id for the given record.")
+.expressionLanguageSupported(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+protected static final String FAIL_VALUE = "Fail";
+protected static final String WARN_VALUE = "Warn";
+protected static final String IGNORE_VALUE = "Ignore";
+protected static final String TEXT_VALUE = "Text";
+
+protected static final AllowableValue COMPLEX_FIELD_FAIL = new 
AllowableValue(FAIL_VALUE, FAIL_VALUE, "Route entire FlowFile to failure if any 
elements contain complex values.");
+protected static final AllowableValue COMPLEX_FIELD_WARN = new 
AllowableValue(WARN_VALUE, WARN_VALUE, "Provide a warning and do not include 
field in row sent to HBase.");
+protected static final AllowableValue COMPLEX_FIELD_IGNORE = new 
AllowableValue(IGNORE_VALUE, IGNORE_VALUE, "Silently ignore and do not include 
in row sent to HBase.");
+protected static final AllowableValue COMPLEX_FIELD_TEXT = new 
AllowableValue(TEXT_VALUE, TEXT_VALUE, "Use the string representation of the 
complex field as the value of the 

[jira] [Commented] (NIFI-4024) Create EvaluateRecordPath processor

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107695#comment-16107695
 ] 

ASF GitHub Bot commented on NIFI-4024:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r130415687
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
@@ -0,0 +1,323 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.hbase.put.PutColumn;
+import org.apache.nifi.hbase.put.PutFlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+@EventDriven
+@SupportsBatching
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hadoop", "hbase", "put", "record"})
+@CapabilityDescription("Adds rows to HBase based on the contents of a 
flowfile using a configured record reader.")
+@ReadsAttribute(attribute = "restart.index", description = "Reads 
restart.index when it needs to replay part of a record set that did not get 
into HBase.")
+@WritesAttribute(attribute = "restart.index", description = "Writes 
restart.index when a batch fails to be insert into HBase")
+public class PutHBaseRecord extends AbstractPutHBase {
+
+protected static final PropertyDescriptor ROW_FIELD_NAME = new 
PropertyDescriptor.Builder()
+.name("Row Identifier Field Path")
+.description("Specifies the name of a record field whose value 
should be used as the row id for the given record.")
+.expressionLanguageSupported(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+protected static final String FAIL_VALUE = "Fail";
+protected static final String WARN_VALUE = "Warn";
+protected static final String IGNORE_VALUE = "Ignore";
+protected static final String TEXT_VALUE = "Text";
+
+protected static final AllowableValue COMPLEX_FIELD_FAIL = new 
AllowableValue(FAIL_VALUE, FAIL_VALUE, "Route entire FlowFile to failure if any 
elements contain complex values.");
+protected static final AllowableValue COMPLEX_FIELD_WARN = new 
AllowableValue(WARN_VALUE, WARN_VALUE, "Provide a warning and do not include 
field in row sent to HBase.");
+protected static final AllowableValue COMPLEX_FIELD_IGNORE = new 
AllowableValue(IGNORE_VALUE, IGNORE_VALUE, "Silently ignore and do not include 
in row sent to HBase.");
+protected static final AllowableValue COMPLEX_FIELD_TEXT = new 
AllowableValue(TEXT_VALUE, TEXT_VALUE, "Use the string representation of the 
complex field as the value of the 

[jira] [Commented] (NIFI-4024) Create EvaluateRecordPath processor

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107693#comment-16107693
 ] 

ASF GitHub Bot commented on NIFI-4024:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r130412355
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
@@ -0,0 +1,323 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.hbase.put.PutColumn;
+import org.apache.nifi.hbase.put.PutFlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+@EventDriven
+@SupportsBatching
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hadoop", "hbase", "put", "record"})
+@CapabilityDescription("Adds rows to HBase based on the contents of a 
flowfile using a configured record reader.")
+@ReadsAttribute(attribute = "restart.index", description = "Reads 
restart.index when it needs to replay part of a record set that did not get 
into HBase.")
+@WritesAttribute(attribute = "restart.index", description = "Writes 
restart.index when a batch fails to be insert into HBase")
+public class PutHBaseRecord extends AbstractPutHBase {
+
+protected static final PropertyDescriptor ROW_FIELD_NAME = new 
PropertyDescriptor.Builder()
+.name("Row Identifier Field Path")
--- End diff --

Minor, but can we go back to calling this "Row Identifier Field Name"? 


> Create EvaluateRecordPath processor
> ---
>
> Key: NIFI-4024
> URL: https://issues.apache.org/jira/browse/NIFI-4024
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Steve Champagne
>Priority: Minor
>
> With the new RecordPath DSL, it would be nice if there was a processor that 
> could pull fields into attributes of the flowfile based on a RecordPath. This 
> would be similar to the EvaluateJsonPath processor that currently exists, 
> except it could be used to pull fields from arbitrary record formats. My 
> current use case for it would be pulling fields out of Avro records while 
> skipping the steps of having to convert Avro to JSON, evaluate JsonPath, and 
> then converting back to Avro. 
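
The extraction the description asks for can be illustrated with plain Java, 
using a nested map in place of a NiFi Record and a slash-delimited path in 
place of a RecordPath expression. This is a conceptual sketch only; the names 
and path semantics here are simplified assumptions, not the NiFi record API.

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of path-based field extraction (as EvaluateJsonPath does
// for JSON); a real processor would use NiFi's Record and RecordPath types.
public class RecordPathSketch {

    // Walk a nested map following a slash-delimited path such as "/user/name".
    static Object evaluate(Map<String, Object> record, String path) {
        Object current = record;
        for (String segment : path.substring(1).split("/")) {
            if (!(current instanceof Map)) {
                return null; // path goes deeper than the record does
            }
            current = ((Map<?, ?>) current).get(segment);
        }
        return current;
    }

    public static void main(String[] args) {
        Map<String, Object> user = new HashMap<>();
        user.put("name", "alice");
        Map<String, Object> record = new HashMap<>();
        record.put("user", user);
        // The extracted value would become a flowfile attribute.
        System.out.println(evaluate(record, "/user/name"));
    }
}
```

Working directly against the record model is what lets the Avro use case above 
skip the Avro-to-JSON and JSON-to-Avro conversion steps entirely.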



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #1961: NIFI-4024 Added org.apache.nifi.hbase.PutHBaseRecor...

2017-07-31 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r130413186
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
@@ -0,0 +1,323 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.hbase.put.PutColumn;
+import org.apache.nifi.hbase.put.PutFlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+@EventDriven
+@SupportsBatching
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hadoop", "hbase", "put", "record"})
+@CapabilityDescription("Adds rows to HBase based on the contents of a 
flowfile using a configured record reader.")
+@ReadsAttribute(attribute = "restart.index", description = "Reads 
restart.index when it needs to replay part of a record set that did not get 
into HBase.")
+@WritesAttribute(attribute = "restart.index", description = "Writes 
restart.index when a batch fails to be insert into HBase")
+public class PutHBaseRecord extends AbstractPutHBase {
+
+protected static final PropertyDescriptor ROW_FIELD_NAME = new 
PropertyDescriptor.Builder()
+.name("Row Identifier Field Path")
+.description("Specifies the name of a record field whose value 
should be used as the row id for the given record.")
+.expressionLanguageSupported(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+protected static final String FAIL_VALUE = "Fail";
+protected static final String WARN_VALUE = "Warn";
+protected static final String IGNORE_VALUE = "Ignore";
+protected static final String TEXT_VALUE = "Text";
+
+protected static final AllowableValue COMPLEX_FIELD_FAIL = new 
AllowableValue(FAIL_VALUE, FAIL_VALUE, "Route entire FlowFile to failure if any 
elements contain complex values.");
+protected static final AllowableValue COMPLEX_FIELD_WARN = new 
AllowableValue(WARN_VALUE, WARN_VALUE, "Provide a warning and do not include 
field in row sent to HBase.");
+protected static final AllowableValue COMPLEX_FIELD_IGNORE = new 
AllowableValue(IGNORE_VALUE, IGNORE_VALUE, "Silently ignore and do not include 
in row sent to HBase.");
+protected static final AllowableValue COMPLEX_FIELD_TEXT = new 
AllowableValue(TEXT_VALUE, TEXT_VALUE, "Use the string representation of the 
complex field as the value of the given column.");
+
+static final PropertyDescriptor RECORD_READER_FACTORY = new 
PropertyDescriptor.Builder()
+.name("record-reader")
+.displayName("Record Reader")
+

[GitHub] nifi pull request #1961: NIFI-4024 Added org.apache.nifi.hbase.PutHBaseRecor...

2017-07-31 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r130412355
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
@@ -0,0 +1,323 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.hbase.put.PutColumn;
+import org.apache.nifi.hbase.put.PutFlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+@EventDriven
+@SupportsBatching
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hadoop", "hbase", "put", "record"})
+@CapabilityDescription("Adds rows to HBase based on the contents of a flowfile using a configured record reader.")
+@ReadsAttribute(attribute = "restart.index", description = "Reads restart.index when it needs to replay part of a record set that did not get into HBase.")
+@WritesAttribute(attribute = "restart.index", description = "Writes restart.index when a batch fails to be inserted into HBase.")
+public class PutHBaseRecord extends AbstractPutHBase {
+
+protected static final PropertyDescriptor ROW_FIELD_NAME = new PropertyDescriptor.Builder()
+.name("Row Identifier Field Path")
--- End diff --

Minor, but can we go back to calling this "Row Identifier Field Name"? 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #1961: NIFI-4024 Added org.apache.nifi.hbase.PutHBaseRecor...

2017-07-31 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1961#discussion_r130415687
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
@@ -0,0 +1,323 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.hbase.put.PutColumn;
+import org.apache.nifi.hbase.put.PutFlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+@EventDriven
+@SupportsBatching
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hadoop", "hbase", "put", "record"})
+@CapabilityDescription("Adds rows to HBase based on the contents of a flowfile using a configured record reader.")
+@ReadsAttribute(attribute = "restart.index", description = "Reads restart.index when it needs to replay part of a record set that did not get into HBase.")
+@WritesAttribute(attribute = "restart.index", description = "Writes restart.index when a batch fails to be inserted into HBase.")
+public class PutHBaseRecord extends AbstractPutHBase {
+
+protected static final PropertyDescriptor ROW_FIELD_NAME = new PropertyDescriptor.Builder()
+.name("Row Identifier Field Path")
+.description("Specifies the name of a record field whose value should be used as the row id for the given record.")
+.expressionLanguageSupported(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+protected static final String FAIL_VALUE = "Fail";
+protected static final String WARN_VALUE = "Warn";
+protected static final String IGNORE_VALUE = "Ignore";
+protected static final String TEXT_VALUE = "Text";
+
+protected static final AllowableValue COMPLEX_FIELD_FAIL = new AllowableValue(FAIL_VALUE, FAIL_VALUE, "Route entire FlowFile to failure if any elements contain complex values.");
+protected static final AllowableValue COMPLEX_FIELD_WARN = new AllowableValue(WARN_VALUE, WARN_VALUE, "Provide a warning and do not include field in row sent to HBase.");
+protected static final AllowableValue COMPLEX_FIELD_IGNORE = new AllowableValue(IGNORE_VALUE, IGNORE_VALUE, "Silently ignore and do not include in row sent to HBase.");
+protected static final AllowableValue COMPLEX_FIELD_TEXT = new AllowableValue(TEXT_VALUE, TEXT_VALUE, "Use the string representation of the complex field as the value of the given column.");
+
+static final PropertyDescriptor RECORD_READER_FACTORY = new PropertyDescriptor.Builder()
+.name("record-reader")
+.displayName("Record Reader")
+

[GitHub] nifi pull request #2020: [NiFi-3973] Add PutKudu Processor for ingesting dat...

2017-07-31 Thread rickysaltzer
Github user rickysaltzer commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2020#discussion_r130409278
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -0,0 +1,232 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.kudu;
+
+import java.io.BufferedInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.kudu.client.KuduClient;
+import org.apache.kudu.client.KuduException;
+import org.apache.kudu.client.KuduSession;
+import org.apache.kudu.client.KuduTable;
+import org.apache.kudu.client.Insert;
+import org.apache.kudu.client.Upsert;
+
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.FlowFileAccessException;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.hadoop.exception.RecordReaderFactoryException;
+
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.RecordSet;
+import org.apache.nifi.serialization.record.Record;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicReference;
+
+public abstract class AbstractKudu extends AbstractProcessor {
+
+protected static final PropertyDescriptor KUDU_MASTERS = new PropertyDescriptor.Builder()
+.name("KUDU Masters")
+.description("Comma-separated list of all Kudu master addresses with port (e.g. 7051)")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+protected static final PropertyDescriptor TABLE_NAME = new PropertyDescriptor.Builder()
+.name("Table Name")
+.description("The name of the Kudu table to put data into")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor RECORD_READER = new PropertyDescriptor.Builder()
+.name("record-reader")
+.displayName("Record Reader")
+.description("The service for reading records from incoming flow files.")
+.identifiesControllerService(RecordReaderFactory.class)
+.required(true)
+.build();
+
+protected static final PropertyDescriptor SKIP_HEAD_LINE = new PropertyDescriptor.Builder()
+.name("Skip head line")
+.description("Set to true if the first line is a header line, e.g. column names")
+.allowableValues("true", "false")
+.defaultValue("true")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+protected static final PropertyDescriptor INSERT_OPERATION = new PropertyDescriptor.Builder()
+.name("INSERT OPERATION")
+.description("Specifies the operation for this processor. Insert-Ignore will ignore duplicated rows")
+.allowableValues("Insert", "Insert-Ignore", "Upsert")
+.defaultValue("Insert")
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+

[jira] [Created] (NIFI-4242) CSVReader shouldn't require that an escape character be defined

2017-07-31 Thread Wesley L Lawrence (JIRA)
Wesley L Lawrence created NIFI-4242:
---

 Summary: CSVReader shouldn't require that an escape character be 
defined
 Key: NIFI-4242
 URL: https://issues.apache.org/jira/browse/NIFI-4242
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.3.0
Reporter: Wesley L Lawrence
Priority: Minor


There are situations where, when parsing a CSV file, one doesn't want to define 
an escape character. For example, when using the quote character ", the following 
is valid CSV:

{code}
a,"""b",c
{code}

The second column should be interpreted as "b. But when Apache Commons CSV is 
told that there's an escape character, the above row is invalid (interestingly, 
if it was """b""", it would be valid as "b"). 

There are predefined formats that Apache Commons CSV provides that don't define 
escape characters either.
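To illustrate the quoting rule, here is a minimal, stdlib-only splitter (an illustrative sketch, not the Apache Commons CSV implementation): with a quote character and no escape character configured, a doubled quote inside a quoted field denotes a single literal quote.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal RFC 4180-style field splitter with a quote character and no
// escape character: inside a quoted field, "" denotes one literal quote.
public class CsvQuoteDemo {

    static List<String> splitLine(String line) {
        List<String> fields = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean inQuotes = false;
        for (int i = 0; i < line.length(); i++) {
            char c = line.charAt(i);
            if (inQuotes) {
                if (c == '"') {
                    if (i + 1 < line.length() && line.charAt(i + 1) == '"') {
                        // Doubled quote inside a quoted field -> literal quote.
                        cur.append('"');
                        i++;
                    } else {
                        inQuotes = false; // closing quote
                    }
                } else {
                    cur.append(c);
                }
            } else if (c == '"') {
                inQuotes = true;      // opening quote
            } else if (c == ',') {
                fields.add(cur.toString());
                cur.setLength(0);
            } else {
                cur.append(c);
            }
        }
        fields.add(cur.toString());
        return fields;
    }

    public static void main(String[] args) {
        // The second column of a,"""b",c parses as "b
        System.out.println(splitLine("a,\"\"\"b\",c")); // prints [a, "b, c]
    }
}
```

Under these rules the row above is well-formed; it only becomes invalid once a distinct escape character is configured on top of the quote character.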



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2020: [NiFi-3973] Add PutKudu Processor for ingesting dat...

2017-07-31 Thread rickysaltzer
Github user rickysaltzer commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2020#discussion_r130400960
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -84,6 +86,14 @@
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+protected static final PropertyDescriptor INSERT_OPERATION = new PropertyDescriptor.Builder()
+.name("INSERT OPERATION")
+.description("Specifies the operation for this processor. Insert-Ignore will ignore duplicated rows")
+.allowableValues("Insert", "Insert-Ignore", "Upsert")
--- End diff --

It's generally best practice for us to use an enum instead of a list of 
strings when using `allowableValues`; it mainly lends itself to 
cleaner-looking code later down the road.
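A minimal sketch of that enum-backed pattern (the names `OperationType` and `fromString` are hypothetical, not the PR's actual code): each constant carries its display string, so the property values and any later switch logic share one source of truth.

```java
// Hypothetical sketch: backing allowableValues with an enum instead of
// bare string literals. The enum centralizes the value names so later
// dispatch code can be exhaustive and typo-free.
public class InsertOperationExample {

    enum OperationType {
        INSERT("Insert"),
        INSERT_IGNORE("Insert-Ignore"),
        UPSERT("Upsert");

        private final String value;

        OperationType(String value) {
            this.value = value;
        }

        @Override
        public String toString() {
            return value;
        }

        // Resolve a configured property value back to its enum constant.
        static OperationType fromString(String s) {
            for (OperationType op : values()) {
                if (op.value.equals(s)) {
                    return op;
                }
            }
            throw new IllegalArgumentException("Unknown operation: " + s);
        }
    }

    public static void main(String[] args) {
        System.out.println(OperationType.fromString("Insert-Ignore"));
    }
}
```

The descriptor could then be built from `OperationType.values()` rather than repeating the literals, so adding a new operation touches only the enum.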





[GitHub] nifi pull request #2020: [NiFi-3973] Add PutKudu Processor for ingesting dat...

2017-07-31 Thread rickysaltzer
Github user rickysaltzer commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2020#discussion_r130400374
  
--- Diff: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKudu.java
 ---
@@ -84,6 +86,14 @@
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+protected static final PropertyDescriptor INSERT_OPERATION = new PropertyDescriptor.Builder()
+.name("INSERT OPERATION")
--- End diff --

Let's just capitalize the words instead of each letter. 




[jira] [Commented] (NIFI-1580) Allow double-click to display config of processor

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107509#comment-16107509
 ] 

ASF GitHub Bot commented on NIFI-1580:
--

Github user yuri1969 commented on the issue:

https://github.com/apache/nifi/pull/2009
  
@scottyaslan OK, but I'm afraid the merged version still has the 
`quickSelect` activated for mid nodes/bends. It seems this should be removed.


> Allow double-click to display config of processor
> -
>
> Key: NIFI-1580
> URL: https://issues.apache.org/jira/browse/NIFI-1580
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Core UI
>Affects Versions: 0.4.1
> Environment: all
>Reporter: Uwe Geercken
>Priority: Minor
>  Labels: features, processor, ui
>
> A user frequently has to open the "config" dialog when designing nifi flows. 
> Each time the user has to right-click the processor and select "config" from 
> the menu.
> It would be quicker if it were possible to double-click a processor - 
> or maybe the title area - to display the config dialog.
> This could also be designed as a configuration of the UI that the user can 
> define (whether double-clicking opens the config dialog, does something else, or 
> simply nothing).





[GitHub] nifi issue #2009: NIFI-1580 - Allow double-click to display config

2017-07-31 Thread yuri1969
Github user yuri1969 commented on the issue:

https://github.com/apache/nifi/pull/2009
  
@scottyaslan OK, but I'm afraid the merged version still has the 
`quickSelect` activated for mid nodes/bends. It seems this should be removed.




[jira] [Commented] (NIFI-4152) Create ListenTCPRecord Processor

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107407#comment-16107407
 ] 

ASF GitHub Bot commented on NIFI-4152:
--

Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1987
  
@pvillard31 thanks for reviewing, I just rebased against master and made 
your suggested changes, let me know of anything else, thanks!


> Create ListenTCPRecord Processor
> 
>
> Key: NIFI-4152
> URL: https://issues.apache.org/jira/browse/NIFI-4152
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Attachments: ListenTCPRecordWithGrok.xml
>
>
> We should implement a ListenTCPRecord that can pass the underlying 
> InputStream from a TCP connection to a record reader.





[GitHub] nifi issue #1987: NIFI-4152 Initial commit of ListenTCPRecord

2017-07-31 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1987
  
@pvillard31 thanks for reviewing, I just rebased against master and made 
your suggested changes, let me know of anything else, thanks!




[jira] [Commented] (NIFI-1580) Allow double-click to display config of processor

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107394#comment-16107394
 ] 

ASF GitHub Bot commented on NIFI-1580:
--

Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/2009
  
Thanks @yuri1969 this has been merged to master!


> Allow double-click to display config of processor
> -
>
> Key: NIFI-1580
> URL: https://issues.apache.org/jira/browse/NIFI-1580
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Core UI
>Affects Versions: 0.4.1
> Environment: all
>Reporter: Uwe Geercken
>Priority: Minor
>  Labels: features, processor, ui
>
> A user frequently has to open the "config" dialog when designing nifi flows. 
> Each time the user has to right-click the processor and select "config" from 
> the menu.
> It would be quicker if it were possible to double-click a processor - 
> or maybe the title area - to display the config dialog.
> This could also be designed as a configuration of the UI that the user can 
> define (whether double-clicking opens the config dialog, does something else, or 
> simply nothing).





[GitHub] nifi issue #2009: NIFI-1580 - Allow double-click to display config

2017-07-31 Thread scottyaslan
Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/2009
  
Thanks @yuri1969 this has been merged to master!




[jira] [Updated] (NIFI-1580) Allow double-click to display config of processor

2017-07-31 Thread Scott Aslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Aslan updated NIFI-1580:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Allow double-click to display config of processor
> -
>
> Key: NIFI-1580
> URL: https://issues.apache.org/jira/browse/NIFI-1580
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Core UI
>Affects Versions: 0.4.1
> Environment: all
>Reporter: Uwe Geercken
>Priority: Minor
>  Labels: features, processor, ui
>
> A user frequently has to open the "config" dialog when designing nifi flows. 
> Each time the user has to right-click the processor and select "config" from 
> the menu.
> It would be quicker if it were possible to double-click a processor - 
> or maybe the title area - to display the config dialog.
> This could also be designed as a configuration of the UI that the user can 
> define (whether double-clicking opens the config dialog, does something else, or 
> simply nothing).





[jira] [Commented] (NIFI-1580) Allow double-click to display config of processor

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107392#comment-16107392
 ] 

ASF GitHub Bot commented on NIFI-1580:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2009


> Allow double-click to display config of processor
> -
>
> Key: NIFI-1580
> URL: https://issues.apache.org/jira/browse/NIFI-1580
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Core UI
>Affects Versions: 0.4.1
> Environment: all
>Reporter: Uwe Geercken
>Priority: Minor
>  Labels: features, processor, ui
>
> A user frequently has to open the "config" dialog when designing nifi flows. 
> Each time the user has to right-click the processor and select "config" from 
> the menu.
> It would be quicker if it were possible to double-click a processor - 
> or maybe the title area - to display the config dialog.
> This could also be designed as a configuration of the UI that the user can 
> define (whether double-clicking opens the config dialog, does something else, or 
> simply nothing).





[GitHub] nifi pull request #2009: NIFI-1580 - Allow double-click to display config

2017-07-31 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2009




[jira] [Commented] (NIFI-1580) Allow double-click to display config of processor

2017-07-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107391#comment-16107391
 ] 

ASF subversion and git services commented on NIFI-1580:
---

Commit ef9cb5be235e48533848a6b5f87cbdeefc8933a3 in nifi's branch 
refs/heads/master from [~yuri1969]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=ef9cb5b ]

NIFI-1580 - Allow double-click to display config

* Added double-click shortcut opening config/details dialog to processors,
connections, ports and labels.
* Created a base for further default action selection, disabling, etc.
* Omitted default action configuration UI - that might be a separate JIRA 
ticket.


> Allow double-click to display config of processor
> -
>
> Key: NIFI-1580
> URL: https://issues.apache.org/jira/browse/NIFI-1580
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Core UI
>Affects Versions: 0.4.1
> Environment: all
>Reporter: Uwe Geercken
>Priority: Minor
>  Labels: features, processor, ui
>
> A user frequently has to open the "config" dialog when designing nifi flows. 
> Each time the user has to right-click the processor and select "config" from 
> the menu.
> It would be quicker if it were possible to double-click a processor - 
> or maybe the title area - to display the config dialog.
> This could also be designed as a configuration of the UI that the user can 
> define (whether double-clicking opens the config dialog, does something else, or 
> simply nothing).





[GitHub] nifi-minifi-cpp pull request #122: MINIFI-359: Add PutFile test to test a va...

2017-07-31 Thread phrocker
GitHub user phrocker opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/122

MINIFI-359: Add PutFile test to test a variety of conditions for the …

…user provided input

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFI-359

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/122.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #122


commit 85c7fc9e3c4c2b8259ae4b792c4be3f341f35c29
Author: Marc Parisi 
Date:   2017-07-31T14:57:28Z

MINIFI-359: Add PutFile test to test a variety of conditions for the user 
provided input






[GitHub] nifi-minifi-cpp pull request #120: MINIFI-354 Reverting default config.yml t...

2017-07-31 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/120




[GitHub] nifi-minifi-cpp issue #121: MINIFI-357 fixing PutFile bug that caused all wr...

2017-07-31 Thread phrocker
Github user phrocker commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/121
  
Thanks for catching this. I or someone else will work on 
https://issues.apache.org/jira/browse/MINIFI-359 to help avoid this in future 
changes. 




[GitHub] nifi-minifi-cpp pull request #121: MINIFI-357 fixing PutFile bug that caused...

2017-07-31 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/121




[jira] [Commented] (NIFI-3472) PutHDFS Kerberos relogin not working (tgt) after ticket expires

2017-07-31 Thread Jorge Machado (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16106963#comment-16106963
 ] 

Jorge Machado commented on NIFI-3472:
-

I'm hitting the same problem but with the GetHDFS processor: 


{code:java}
2017-07-31 10:20:51,657 WARN [Timer-Driven Process Thread-1] 
org.apache.hadoop.ipc.Client Exception encountered while connecting to the 
server : javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
2017-07-31 10:20:51,658 WARN [Timer-Driven Process Thread-1] 
o.a.h.io.retry.RetryInvocationHandler Exception while invoking class 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo
 over server/10.174.22.40:8020. Not retrying because failovers (15) exceeded 
maximum allowed (15)
java.io.IOException: Failed on local exception: java.io.IOException: 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]; Host Details : local host is: "nifi/10.174.22.49"; destination host is: 
"server":8020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776)
at org.apache.hadoop.ipc.Client.call(Client.java:1479)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy126.getFileInfo(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
at sun.reflect.GeneratedMethodAccessor421.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy136.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
at 
org.apache.nifi.processors.hadoop.GetHDFS.selectFiles(GetHDFS.java:444)
at 
org.apache.nifi.processors.hadoop.GetHDFS.performListing(GetHDFS.java:420)
at org.apache.nifi.processors.hadoop.GetHDFS.onTrigger(GetHDFS.java:264)
at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Failed to find any Kerberos tgt)]
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:687)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at 
org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:650)
at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:737)
at 
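The "Failed to find any Kerberos tgt" in the trace above usually means the JVM running NiFi has no Kerberos credentials when the HDFS processor connects. A minimal sketch of the relevant configuration follows; the principal and keytab path are placeholders, not values from this thread:

```properties
# nifi.properties – point NiFi at the cluster's Kerberos configuration
nifi.kerberos.krb5.file=/etc/krb5.conf

# On the GetHDFS/PutHDFS processor itself (values are placeholders):
#   Kerberos Principal = nifi@EXAMPLE.COM
#   Kerberos Keytab    = /opt/nifi/conf/nifi.keytab
```

Alternatively, running kinit for the NiFi service user before startup provides a ticket cache the Hadoop client can find.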

[GitHub] nifi issue #1712: NIFI-3724 - Add Put/Fetch Parquet Processors

2017-07-31 Thread masterlittle
Github user masterlittle commented on the issue:

https://github.com/apache/nifi/pull/1712
  
Hi, does this enable writing the Parquet files to an S3 bucket? If not, is 
there any way I can achieve the same?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3724) Add Put/Fetch Parquet Processors

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16106919#comment-16106919
 ] 

ASF GitHub Bot commented on NIFI-3724:
--

Github user masterlittle commented on the issue:

https://github.com/apache/nifi/pull/1712
  
Hi, does this enable writing the Parquet files to an S3 bucket? If not, is 
there any way I can achieve the same?


> Add Put/Fetch Parquet Processors
> 
>
> Key: NIFI-3724
> URL: https://issues.apache.org/jira/browse/NIFI-3724
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.2.0
>
>
> Now that we have the record reader/writer services currently in master, it 
> would be nice to have readers and writers for Parquet. Since Parquet's API is 
> based on the Hadoop Path object, and not InputStreams/OutputStreams, we can't 
> really implement direct conversions to and from Parquet in the middle of a 
> flow, but we can perform the conversion by taking any record format and 
> writing to a Path as Parquet, or reading Parquet from a Path and writing 
> it out as another record format.
> We should add a PutParquet that uses a record reader and writes records to a 
> Path as Parquet, and a FetchParquet that reads Parquet from a path and writes 
> out records to a flow file using a record writer.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
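Regarding the S3 question above: PutParquet writes through the Hadoop FileSystem API, so pointing it at an s3a:// directory can work if the Hadoop client is configured for S3. A hedged sketch of the core-site.xml entries (bucket credentials are placeholders, and the hadoop-aws jar must be on the classpath; this is an assumption about the setup, not something confirmed in this thread):

```xml
<configuration>
  <!-- Placeholder credentials; prefer instance profiles or credential
       providers over embedding keys in configuration files -->
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>
```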


[jira] [Updated] (NIFI-4241) ListSFTP node in cluster is not coming up

2017-07-31 Thread Frank Thiele (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank Thiele updated NIFI-4241:
---
Attachment: Overview.png

> ListSFTP node in cluster is not coming up
> -
>
> Key: NIFI-4241
> URL: https://issues.apache.org/jira/browse/NIFI-4241
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
> Environment: Docker container within VM:
> docker@myvm5:~$ uname -a
> Linux myvm5 4.4.74-boot2docker #1 SMP Mon Jun 26 18:01:14 UTC 2017 x86_64 
> GNU/Linux
>Reporter: Frank Thiele
> Attachments: cluster.png, flow.xml, Overview.png
>
>
> I have set up a small cluster with 3 nodes: nifi1:8080, nifi2:8081 and 
> nifi3:8082.
> When I configure a ListSFTP processor and run it, the following exception 
> comes up:
> {code}
> 2017-07-31 06:25:28,522 ERROR [StandardProcessScheduler Thread-6] 
> org.apache.nifi.engine.FlowEngine A flow controller task execution stopped 
> abnormally
> java.util.concurrent.ExecutionException: 
> java.lang.reflect.InvocationTargetException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.reflect.InvocationTargetException: null
>   at sun.reflect.GeneratedMethodAccessor185.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137)
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125)
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70)
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47)
>   at 
> org.apache.nifi.controller.StandardProcessorNode$1$1.call(StandardProcessorNode.java:1307)
>   at 
> org.apache.nifi.controller.StandardProcessorNode$1$1.call(StandardProcessorNode.java:1303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   ... 2 common frames omitted
> Caused by: java.io.IOException: Failed to obtain value from ZooKeeper for 
> component with ID 9716842f-015d-1000--7de55c99 with exception code 
> CONNECTIONLOSS
>   at 
> org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:420)
>   at 
> org.apache.nifi.controller.state.StandardStateManager.getState(StandardStateManager.java:63)
>   at 
> org.apache.nifi.processor.util.list.AbstractListProcessor.updateState(AbstractListProcessor.java:199)
>   ... 15 common frames omitted
> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for 
> /nifi/components/9716842f-015d-1000--7de55c99
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>   at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
>   at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184)
>   at 
> org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:403)
>   ... 17 common frames omitted
> 2017-07-31 06:25:28,523 ERROR [StandardProcessScheduler Thread-5] 
> o.a.nifi.processors.standard.ListSFTP 
> ListSFTP[id=9716842f-015d-1000--7de55c99] 
> ListSFTP[id=9716842f-015d-1000--7de55c99] failed to invoke 
> @OnScheduled method due to java.lang.RuntimeException: Failed while executing 
> one of processor's OnScheduled task.; processor will not be scheduled to run 
> for 30 seconds: java.lang.RuntimeException: Failed while executing one of 
> processor's OnScheduled task.
> java.lang.RuntimeException: Failed while executing one of processor's 
> OnScheduled task.
>   at 
> org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1482)
>   at 
> 

[jira] [Updated] (NIFI-4241) ListSFTP node in cluster is not coming up

2017-07-31 Thread Frank Thiele (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank Thiele updated NIFI-4241:
---
Attachment: (was: overview.png)


[jira] [Updated] (NIFI-4241) ListSFTP node in cluster is not coming up

2017-07-31 Thread Frank Thiele (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank Thiele updated NIFI-4241:
---
Attachment: (was: Overview.png)


[jira] [Updated] (NIFI-4241) ListSFTP node in cluster is not coming up

2017-07-31 Thread Frank Thiele (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank Thiele updated NIFI-4241:
---
Attachment: overview.png


[jira] [Updated] (NIFI-4241) ListSFTP node in cluster is not coming up

2017-07-31 Thread Frank Thiele (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank Thiele updated NIFI-4241:
---
Attachment: cluster.png


[jira] [Commented] (NIFI-4241) ListSFTP node in cluster is not coming up

2017-07-31 Thread Frank Thiele (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16106861#comment-16106861
 ] 

Frank Thiele commented on NIFI-4241:


I am now trying – as a workaround – to configure port 8080 on all nodes.

> ListSFTP node in cluster is not coming up
> -
>
> Key: NIFI-4241
> URL: https://issues.apache.org/jira/browse/NIFI-4241
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
> Environment: Docker container within VM:
> docker@myvm5:~$ uname -a
> Linux myvm5 4.4.74-boot2docker #1 SMP Mon Jun 26 18:01:14 UTC 2017 x86_64 
> GNU/Linux
>Reporter: Frank Thiele
> Attachments: flow.xml, Overview.png
>
>
> I have setup a small cluster with 3 nodes nifi1:8080, nifi2:8081 and 
> nifi3:8082.
> When I configure a ListSFTP processor and run it, the following exception 
> comes up:
> {code}
> 2017-07-31 06:25:28,522 ERROR [StandardProcessScheduler Thread-6] 
> org.apache.nifi.engine.FlowEngine A flow controller task execution stopped 
> abnormally
> java.util.concurrent.ExecutionException: 
> java.lang.reflect.InvocationTargetException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.reflect.InvocationTargetException: null
>   at sun.reflect.GeneratedMethodAccessor185.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137)
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125)
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70)
>   at 
> org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47)
>   at org.apache.nifi.controller.StandardProcessorNode$1$1.call(StandardProcessorNode.java:1307)
>   at org.apache.nifi.controller.StandardProcessorNode$1$1.call(StandardProcessorNode.java:1303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   ... 2 common frames omitted
> Caused by: java.io.IOException: Failed to obtain value from ZooKeeper for component with ID 9716842f-015d-1000--7de55c99 with exception code CONNECTIONLOSS
>   at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:420)
>   at org.apache.nifi.controller.state.StandardStateManager.getState(StandardStateManager.java:63)
>   at org.apache.nifi.processor.util.list.AbstractListProcessor.updateState(AbstractListProcessor.java:199)
>   ... 15 common frames omitted
> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /nifi/components/9716842f-015d-1000--7de55c99
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>   at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
>   at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184)
>   at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:403)
>   ... 17 common frames omitted
> 2017-07-31 06:25:28,523 ERROR [StandardProcessScheduler Thread-5] o.a.nifi.processors.standard.ListSFTP ListSFTP[id=9716842f-015d-1000--7de55c99] ListSFTP[id=9716842f-015d-1000--7de55c99] failed to invoke @OnScheduled method due to java.lang.RuntimeException: Failed while executing one of processor's OnScheduled task.; processor will not be scheduled to run for 30 seconds: java.lang.RuntimeException: Failed while executing one of processor's OnScheduled task.
> java.lang.RuntimeException: Failed while executing one of processor's OnScheduled task.
>   at 
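The CONNECTIONLOSS above points at the ZooKeeper state provider configuration. For reference, a {{zk-provider}} entry in {{state-management.xml}} with the "Connect String" property filled in might look like the sketch below. The hostnames and port are placeholders, not values from this report; the actual addresses depend on where the ZooKeeper ensemble runs in the cluster:

{code}
<cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <!-- Comma-separated host:port pairs of the ZooKeeper ensemble;
         leaving this empty is what produces CONNECTIONLOSS at startup. -->
    <property name="Connect String">nifi1:2181,nifi2:2181,nifi3:2181</property>
    <property name="Root Node">/nifi</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">Open</property>
</cluster-provider>
{code}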

[jira] [Created] (NIFI-4241) ListSFTP node in cluster is not coming up

2017-07-31 Thread Frank Thiele (JIRA)
Frank Thiele created NIFI-4241:
--

 Summary: ListSFTP node in cluster is not coming up
 Key: NIFI-4241
 URL: https://issues.apache.org/jira/browse/NIFI-4241
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.3.0
 Environment: Docker container within VM:
docker@myvm5:~$ uname -a
Linux myvm5 4.4.74-boot2docker #1 SMP Mon Jun 26 18:01:14 UTC 2017 x86_64 
GNU/Linux

Reporter: Frank Thiele
 Attachments: flow.xml, Overview.png

I have set up a small cluster with 3 nodes: nifi1:8080, nifi2:8081 and nifi3:8082.
When I configure a ListSFTP processor and run it, the following exception comes 
up:

{code}
2017-07-31 06:25:28,522 ERROR [StandardProcessScheduler Thread-6] org.apache.nifi.engine.FlowEngine A flow controller task execution stopped abnormally
java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
	at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.reflect.InvocationTargetException: null
	at sun.reflect.GeneratedMethodAccessor185.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137)
	at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125)
	at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70)
	at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47)
	at org.apache.nifi.controller.StandardProcessorNode$1$1.call(StandardProcessorNode.java:1307)
	at org.apache.nifi.controller.StandardProcessorNode$1$1.call(StandardProcessorNode.java:1303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	... 2 common frames omitted
Caused by: java.io.IOException: Failed to obtain value from ZooKeeper for component with ID 9716842f-015d-1000--7de55c99 with exception code CONNECTIONLOSS
	at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:420)
	at org.apache.nifi.controller.state.StandardStateManager.getState(StandardStateManager.java:63)
	at org.apache.nifi.processor.util.list.AbstractListProcessor.updateState(AbstractListProcessor.java:199)
	... 15 common frames omitted
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /nifi/components/9716842f-015d-1000--7de55c99
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
	at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
	at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184)
	at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:403)
	... 17 common frames omitted
2017-07-31 06:25:28,523 ERROR [StandardProcessScheduler Thread-5] o.a.nifi.processors.standard.ListSFTP ListSFTP[id=9716842f-015d-1000--7de55c99] ListSFTP[id=9716842f-015d-1000--7de55c99] failed to invoke @OnScheduled method due to java.lang.RuntimeException: Failed while executing one of processor's OnScheduled task.; processor will not be scheduled to run for 30 seconds: java.lang.RuntimeException: Failed while executing one of processor's OnScheduled task.
java.lang.RuntimeException: Failed while executing one of processor's OnScheduled task.
	at org.apache.nifi.controller.StandardProcessorNode.invokeTaskAsCancelableFuture(StandardProcessorNode.java:1482)
	at org.apache.nifi.controller.StandardProcessorNode.access$000(StandardProcessorNode.java:102)
	at org.apache.nifi.controller.StandardProcessorNode$1.run(StandardProcessorNode.java:1303)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at