[jira] [Assigned] (NIFI-3163) Flow Fingerprint should include new RPG and RPG port configurations

2016-12-14 Thread Koji Kawamura (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura reassigned NIFI-3163:
---

Assignee: Koji Kawamura

> Flow Fingerprint should include new RPG and RPG port configurations 
> 
>
> Key: NIFI-3163
> URL: https://issues.apache.org/jira/browse/NIFI-3163
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>
> NiFi calculates the fingerprint of a flow.xml using attributes related to data 
> processing. Since 1.0.0, several RemoteProcessGroup and RemoteProcessGroupPort 
> configuration items have been added, but they are not yet taken into account 
> for the fingerprint.
> These new configurations should be revisited and added accordingly (see the 
> sketch after this list):
> - Transport protocol
> - Multiple target URIs
> - Proxy settings
> - Per port batch settings
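
A minimal sketch of what folding these fields into the fingerprint could look like, 
assuming a StringBuilder-based fingerprint builder; the class and method names below 
are illustrative stand-ins, not the actual FingerprintFactory code:

{code}
// Hedged sketch only: how the new RemoteProcessGroup and port settings could be
// appended to the flow fingerprint. Names and signatures are assumptions.
final class RpgFingerprintSketch {

    static StringBuilder addRemoteProcessGroupFingerprint(final StringBuilder builder,
            final String targetUris, final String transportProtocol,
            final String proxyHost, final Integer proxyPort, final String proxyUser) {
        builder.append(targetUris);          // multiple target URIs
        builder.append(transportProtocol);   // RAW vs. HTTP
        builder.append(proxyHost).append(proxyPort).append(proxyUser);   // proxy settings
        return builder;
    }

    static StringBuilder addRemotePortFingerprint(final StringBuilder builder,
            final Integer batchCount, final String batchSize, final String batchDuration) {
        // per-port batch settings
        return builder.append(batchCount).append(batchSize).append(batchDuration);
    }

    private RpgFingerprintSketch() { }
}
{code}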



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3196) Provide MiNiFi FlowStatus Insights processor that integrates MiNiFi Flow Status Report with Azure Application Insights

2016-12-14 Thread Andrew Psaltis (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15750168#comment-15750168
 ] 

Andrew Psaltis commented on NIFI-3196:
--

It would certainly be possible to put other processors in front of this to do 
transformation of the data. The only thing this processor knows about its input 
is that it is a MiNiFi FlowStatus report serialized as JSON; it deserializes 
the report and sends it along as metrics to AAI. 
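
As a hedged illustration of that flow (deserialize the JSON report, emit numeric 
fields as custom metrics), assuming the Jackson and Application Insights Java SDK 
dependencies; the traversal and metric naming below are assumptions, not the actual 
processor code:

{code}
// Hedged sketch: walk a MiNiFi FlowStatus JSON report and forward numeric fields
// to Azure Application Insights as custom metrics. Field handling is illustrative.
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.microsoft.applicationinsights.TelemetryClient;

public class FlowStatusToInsightsSketch {

    private final ObjectMapper mapper = new ObjectMapper();
    private final TelemetryClient telemetry = new TelemetryClient();

    public void send(final String flowStatusJson) throws Exception {
        final JsonNode report = mapper.readTree(flowStatusJson);
        report.fields().forEachRemaining(e -> sendNode(e.getKey(), e.getValue()));
        telemetry.flush();
    }

    private void sendNode(final String name, final JsonNode node) {
        if (node.isNumber()) {
            // Each numeric leaf becomes a custom metric named by its JSON path.
            telemetry.trackMetric(name, node.asDouble());
        } else if (node.isObject()) {
            node.fields().forEachRemaining(e -> sendNode(name + "." + e.getKey(), e.getValue()));
        }
    }
}
{code}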

> Provide MiNiFi FlowStatus Insights processor that integrates MiNiFi Flow 
> Status Report with Azure Application Insights
> --
>
> Key: NIFI-3196
> URL: https://issues.apache.org/jira/browse/NIFI-3196
> Project: Apache NiFi
>  Issue Type: New Feature
>Affects Versions: 1.1.0
>Reporter: Andrew Psaltis
>Assignee: Andrew Psaltis
>
> As a user of both NiFi and MiNiFi, there are many times I would like to have 
> an operational dashboard for MiNiFi. Although there is a discussion and 
> desire for [Centralized management | 
> https://cwiki.apache.org/confluence/display/MINIFI/MiNiFi+Community+Driven+Requirements]
>  there are many times where, as a user, I just want to be able to use the 
> operational tools I already have. In many cases those may be products such 
> as: Nagios, Zenoss, Zabbix, Graphite, Grafana, and in cloud environments 
> perhaps Azure Application Insights and AWS CloudWatch. 
> This JIRA and the related P/R provide a processor that ingests the MiNiFi 
> Flow Status report and sends it to Azure Application Insights as custom 
> metrics. This then allows a user to set alerts in Azure Application Insights, 
> thus providing the ability to monitor MiNiFi agents. As such, this has a 
> dependency on the following MiNiFi JIRA: [Change FlowStatusReport to be JSON 
> | https://issues.apache.org/jira/browse/MINIFI-170]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3196) Provide MiNiFi FlowStatus Insights processor that integrates MiNiFi Flow Status Report with Azure Application Insights

2016-12-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15750039#comment-15750039
 ] 

ASF GitHub Bot commented on NIFI-3196:
--

GitHub user apsaltis opened a pull request:

https://github.com/apache/nifi/pull/1331

NIFI-3196 Provide MiNiFi FlowStatus Insights processor that integrates 
MiNiFi Flow Status Report with Azure Application Insights

As a user of both NiFi and MiNiFi, there are many times I would like to have 
an operational dashboard for MiNiFi. Although there is a discussion and desire 
for Centralized management, there are many times where, as a user, I just want 
to be able to use the operational tools I already have. In many cases those may 
be products such as: Nagios, Zenoss, Zabbix, Graphite, Grafana, and in cloud 
environments perhaps Azure Application Insights and AWS CloudWatch.

This P/R provides a processor that ingests the MiNiFi Flow Status report 
and sends it to Azure Application Insights as custom metrics. This then allows 
a user to set alerts in Azure Application Insights, thus providing the ability 
to monitor MiNiFi agents. As such, this has a dependency on the following 
MiNiFi JIRA: Change FlowStatusReport to be JSON 
(https://github.com/apache/nifi-minifi/pull/65)

An example flow of how to set this up for the MiNiFi side is:


![image](https://cloud.githubusercontent.com/assets/1100275/21208171/61891462-c23a-11e6-9075-e7657653f40f.png)

And the NiFi Side looks like:

![image](https://cloud.githubusercontent.com/assets/1100275/21208215/bc114cec-c23a-11e6-8393-ee2ec50e61e7.png)

Then once this is all set in Azure Application Insights I can build a 
dashboard that may look like this:

![image](https://cloud.githubusercontent.com/assets/1100275/21208329/67804e02-c23b-11e6-8e7d-933c56f33d3b.png)

And explore all the various metrics that I may want to build a dashboard 
from:

![image](https://cloud.githubusercontent.com/assets/1100275/21208347/913e5b76-c23b-11e6-89b6-b8895210.png)

I can then also go and set alerts:

![image](https://cloud.githubusercontent.com/assets/1100275/21208458/128b653e-c23c-11e6-8737-d8665c280325.png)

In the end this type of integration can also be done with other enterprise 
tools.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apsaltis/nifi NIFI-3196

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1331.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1331


commit 9d3fa636a4f0feb9c7055488c5f686dd3f237e36
Author: Andrew Psaltis 
Date:   2016-12-14T16:40:27Z

NIFI-3196 Provide MiNiFi FlowStatus Insights processor that integrates 
MiNiFi Flow Status Report with Azure Application Insights




> Provide MiNiFi FlowStatus Insights processor that integrates MiNiFi Flow 
> Status Report with Azure Application Insights
> --
>
> Key: NIFI-3196
> URL: https://issues.apache.org/jira/browse/NIFI-3196
> Project: Apache NiFi
>  Issue Type: New Feature
>Affects Versions: 1.1.0
>Reporter: Andrew Psaltis
>Assignee: Andrew Psaltis
>
> As a user of both NiFi and MiNiFi, there are many times I would like to have 
> an operational dashboard for MiNiFi. Although there is a discussion and 
> desire for [Centralized management | 
> https://cwiki.apache.org/confluence/display/MINIFI/MiNiFi+Community+Driven+Requirements]
>  there are many times where, as a user, I just want to be able to use the 
> operational tools I already have. In many cases those may be products such 
> as: Nagios, Zenoss, Zabbix, Graphite, Grafana, and in cloud environments 
> perhaps Azure Application Insights and AWS CloudWatch. 
> This JIRA and the related P/R provide a processor that ingests the MiNiFi 
> Flow Status report and sends it to Azure Application Insights as custom 
> metrics. This then allows a user to set alerts in Azure Application Insights, 
> thus providing the ability to monitor MiNiFi agents. As such, this has a 
> dependency on the following MiNiFi JIRA: [Change FlowStatusReport to be JSON 
> | https://issues.apache.org/jira/browse/MINIFI-170]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #1331: NIFI-3196 Provide MiNiFi FlowStatus Insights proces...

2016-12-14 Thread apsaltis
GitHub user apsaltis opened a pull request:

https://github.com/apache/nifi/pull/1331

NIFI-3196 Provide MiNiFi FlowStatus Insights processor that integrates 
MiNiFi Flow Status Report with Azure Application Insights

As a user of both NiFi and MiNiFi, there are many times I would like to have 
an operational dashboard for MiNiFi. Although there is a discussion and desire 
for Centralized management, there are many times where, as a user, I just want 
to be able to use the operational tools I already have. In many cases those may 
be products such as: Nagios, Zenoss, Zabbix, Graphite, Grafana, and in cloud 
environments perhaps Azure Application Insights and AWS CloudWatch.

This P/R provides a processor that ingests the MiNiFi Flow Status report 
and sends it to Azure Application Insights as custom metrics. This then allows 
a user to set alerts in Azure Application Insights, thus providing the ability 
to monitor MiNiFi agents. As such, this has a dependency on the following 
MiNiFi JIRA: Change FlowStatusReport to be JSON 
(https://github.com/apache/nifi-minifi/pull/65)

An example flow of how to set this up for the MiNiFi side is:


![image](https://cloud.githubusercontent.com/assets/1100275/21208171/61891462-c23a-11e6-9075-e7657653f40f.png)

And the NiFi Side looks like:

![image](https://cloud.githubusercontent.com/assets/1100275/21208215/bc114cec-c23a-11e6-8393-ee2ec50e61e7.png)

Then once this is all set in Azure Application Insights I can build a 
dashboard that may look like this:

![image](https://cloud.githubusercontent.com/assets/1100275/21208329/67804e02-c23b-11e6-8e7d-933c56f33d3b.png)

And explore all the various metrics that I may want to build a dashboard 
from:

![image](https://cloud.githubusercontent.com/assets/1100275/21208347/913e5b76-c23b-11e6-89b6-b8895210.png)

I can then also go and set alerts:

![image](https://cloud.githubusercontent.com/assets/1100275/21208458/128b653e-c23c-11e6-8737-d8665c280325.png)

In the end this type of integration can also be done with other enterprise 
tools.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apsaltis/nifi NIFI-3196

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1331.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1331


commit 9d3fa636a4f0feb9c7055488c5f686dd3f237e36
Author: Andrew Psaltis 
Date:   2016-12-14T16:40:27Z

NIFI-3196 Provide MiNiFi FlowStatus Insights processor that integrates 
MiNiFi Flow Status Report with Azure Application Insights




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-1567) EvaluateJsonPath processor should work against attributes

2016-12-14 Thread Joey Frazee (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15749785#comment-15749785
 ] 

Joey Frazee commented on NIFI-1567:
---

[~markap14] [~acesir] NIFI-1660 satisfied this, right? Wanted to draw your 
attention to it in case either of you wanted to close it.

> EvaluateJsonPath processor should work against attributes
> -
>
> Key: NIFI-1567
> URL: https://issues.apache.org/jira/browse/NIFI-1567
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.4.1
>Reporter: Adis Cesir
>Priority: Minor
>
> Consider that certain flowfile contents are not always clean JSON but rather 
> embedded JSON in other sets of data. Currently there is not a good way to 
> extract JSON out of this set of data other than parsing with regex and 
> keeping the existing flow file content.
> Having the ability to parse JSON on already extracted attributes should make 
> processing complex and embedded sets of JSON data much easier.
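
A minimal sketch of the attribute-oriented evaluation being asked for, assuming a 
Jayway JsonPath dependency; the attribute contents and path are made up for 
illustration:

{code}
// Hedged sketch: evaluate a JsonPath expression against an attribute value
// (e.g. one that ExtractText pulled out of mixed content) instead of the
// FlowFile content. The JSON and path below are illustrative.
import com.jayway.jsonpath.JsonPath;

public class AttributeJsonPathSketch {
    public static void main(String[] args) {
        String embeddedJson = "{\"event\":{\"type\":\"login\",\"user\":\"alice\"}}";
        String eventType = JsonPath.read(embeddedJson, "$.event.type");
        System.out.println("event.type = " + eventType);   // prints: event.type = login
    }
}
{code}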



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (NIFI-3197) Unable to use Snappy Compression Codec on PutHDFS

2016-12-14 Thread Josh Meyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Meyer resolved NIFI-3197.
--
Resolution: Invalid

> Unable to use Snappy Compression Codec on PutHDFS
> -
>
> Key: NIFI-3197
> URL: https://issues.apache.org/jira/browse/NIFI-3197
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.1.0
> Environment: centos 7.2
> NiFi 1.1.0 or NiFi 1.0.0
> HDP 2.4.3.0
>Reporter: Josh Meyer
>  Labels: HDFS, Processor, PutHDFS, Snappy
> Attachments: native-directory.png, nifi-puthdfs-snappy-issue.xml, 
> processor-config.png
>
>
> When setting the 'Compression codec' to SNAPPY on PutHDFS, NiFi is unable to 
> compress and push data to HDFS. Attached is the sample workflow that is 
> having trouble. Below are the errors for NiFi 1.0.0 and for NiFi 1.1.0; the 
> configuration is the same, but the error is a bit different.
> I have tried setting LD_LIBRARY_PATH as an environment variable, and I have 
> also tried setting java.library.path in bootstrap.conf (a bootstrap.conf 
> sketch follows the log excerpt below).
> NiFi 1.1.0 nifi-app.log error message:
> {code}
> 2016-12-14 15:13:56,563 ERROR [Timer-Driven Process Thread-6] 
> o.apache.nifi.processors.hadoop.PutHDFS 
> PutHDFS[id=fdd52005-0158-1000-ac0f-2a87ed12a1e7] Failed to write to HDFS due 
> to java.lang.RuntimeException: native snappy library not available: this 
> version of libhadoop was built without snappy support.: 
> java.lang.RuntimeException: native snappy library not available: this version 
> of libhadoop was built without snappy support.
> 2016-12-14 15:13:56,644 ERROR [Timer-Driven Process Thread-6] 
> o.apache.nifi.processors.hadoop.PutHDFS
> java.lang.RuntimeException: native snappy library not available: this version 
> of libhadoop was built without snappy support.
> at 
> org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:65)
>  ~[hadoop-common-2.7.3.jar:na]
> at 
> org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:134)
>  ~[hadoop-common-2.7.3.jar:na]
> at 
> org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150) 
> ~[hadoop-common-2.7.3.jar:na]
> at 
> org.apache.hadoop.io.compress.CompressionCodec$Util.createOutputStreamWithCodecPool(CompressionCodec.java:131)
>  ~[hadoop-common-2.7.3.jar:na]
> at 
> org.apache.hadoop.io.compress.SnappyCodec.createOutputStream(SnappyCodec.java:100)
>  ~[hadoop-common-2.7.3.jar:na]
> at 
> org.apache.nifi.processors.hadoop.PutHDFS$1$1.process(PutHDFS.java:313) 
> ~[nifi-hdfs-processors-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2082)
>  ~[na:na]
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2053)
>  ~[na:na]
> at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:300) 
> ~[nifi-hdfs-processors-1.1.0.jar:1.1.0]
> at java.security.AccessController.doPrivileged(Native Method) 
> [na:1.8.0_111-debug]
> at javax.security.auth.Subject.doAs(Subject.java:360) 
> [na:1.8.0_111-debug]
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
>  [hadoop-common-2.7.3.jar:na]
> at 
> org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:230) 
> [nifi-hdfs-processors-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  [nifi-api-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099)
>  [nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>  [nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>  [nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
>  [nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_111-debug]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_111-debug]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_111-debug]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_111-debug]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  
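
For reference, pointing the NiFi JVM at the native Hadoop libraries is typically 
done with an extra java.arg entry in conf/bootstrap.conf; the argument number and 
library path below are assumptions for a typical HDP install, and the directory 
must contain a libhadoop built with Snappy support:

{code}
# Hedged sketch of a bootstrap.conf entry (argument number and path are assumptions).
# The directory must provide libsnappy.so and a libhadoop.so built with Snappy support.
java.arg.15=-Djava.library.path=/usr/hdp/current/hadoop-client/lib/native
{code}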

[jira] [Commented] (NIFI-3198) [Backport to 1.0.x] Address issues of PublishKafka blocking when having trouble communicating with Kafka broker and improve performance

2016-12-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15749553#comment-15749553
 ] 

ASF GitHub Bot commented on NIFI-3198:
--

Github user markap14 closed the pull request at:

https://github.com/apache/nifi/pull/1330


> [Backport to 1.0.x] Address issues of PublishKafka blocking when having 
> trouble communicating with Kafka broker and improve performance
> ---
>
> Key: NIFI-3198
> URL: https://issues.apache.org/jira/browse/NIFI-3198
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.0.1
>
>
> This was addressed for 1.1.0 in NIFI-2865 but it should be backported to 
> 1.0.x as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #1330: NIFI-3198: Refactored how PublishKafka and PublishK...

2016-12-14 Thread markap14
Github user markap14 closed the pull request at:

https://github.com/apache/nifi/pull/1330


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-2615) Add support for GetTCP processor

2016-12-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15749543#comment-15749543
 ] 

ASF GitHub Bot commented on NIFI-2615:
--

Github user apsaltis closed the pull request at:

https://github.com/apache/nifi/pull/1177


> Add support for GetTCP processor
> 
>
> Key: NIFI-2615
> URL: https://issues.apache.org/jira/browse/NIFI-2615
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.0.0, 0.7.0, 0.6.1
>Reporter: Andrew Psaltis
>Assignee: Andrew Psaltis
>
> This processor will allow NiFi to connect to a host via TCP, thus acting as 
> the client, and consume data. It should provide the following properties (a 
> property-descriptor sketch follows this description):
> * Endpoint - this should accept a list of addresses in the format of 
> host:port. If a user wants to be able to track the ingestion rate per 
> address then you would want to have one address in this list. However, there 
> are times when multiple endpoints represent a logical entity and the 
> aggregate ingestion rate is representative of it. 
> * Failover Endpoint - An endpoint to fail over to if the list of Endpoints is 
> exhausted and a connection cannot be made to any of them, or the connection 
> is dropped and cannot be re-established.
> * Receive Buffer Size - The size of the TCP receive buffer to use. This does 
> not relate to the size of content in the resulting flow file.
> * Keep Alive - This enables TCP keep-alive.
> * Connection Timeout - How long to wait when trying to establish a connection.
> * Batch Size - The number of messages to put into a Flow File. This will be 
> the maximum number of messages, as there may be cases where the number of 
> messages received over the wire waiting to be emitted as FlowFile content is 
> less than the desired batch.
> This processor should also support the following:
> 1. If a connection to an endpoint is broken, it should be logged and 
> reconnection attempts should be made, potentially with an exponential backoff 
> strategy. The strategy, if there is more than one, should be documented and 
> potentially exposed as an attribute.
> 2. When there are multiple instances of this processor in a flow and NiFi is 
> set up in a cluster, this processor needs to ensure that received messages are 
> not processed twice. For example, if this processor is configured to point to 
> the endpoint (172.31.32.212:1) and the data flow is running on more than 
> one node, then only one node should be processing data. In essence the nodes 
> should form a group and have semantics similar to a Kafka consumer group.
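
A property-descriptor sketch for the proposed settings, using the standard NiFi 
PropertyDescriptor builder; names, defaults, and validators are illustrative, not 
the actual NIFI-2615 implementation:

{code}
// Hedged sketch of how the proposed GetTCP properties could be declared.
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

public final class GetTcpPropertiesSketch {

    public static final PropertyDescriptor ENDPOINTS = new PropertyDescriptor.Builder()
            .name("Endpoints")
            .description("Comma-separated list of addresses in host:port format to consume from.")
            .required(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();

    public static final PropertyDescriptor FAILOVER_ENDPOINT = new PropertyDescriptor.Builder()
            .name("Failover Endpoint")
            .description("Endpoint to fail over to when no primary endpoint can be reached.")
            .required(false)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();

    public static final PropertyDescriptor RECEIVE_BUFFER_SIZE = new PropertyDescriptor.Builder()
            .name("Receive Buffer Size")
            .description("Size of the TCP receive buffer; unrelated to the resulting FlowFile size.")
            .required(true)
            .defaultValue("16 KB")
            .addValidator(StandardValidators.DATA_SIZE_VALIDATOR)
            .build();

    public static final PropertyDescriptor BATCH_SIZE = new PropertyDescriptor.Builder()
            .name("Batch Size")
            .description("Maximum number of messages to place into a single FlowFile.")
            .required(true)
            .defaultValue("1")
            .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
            .build();

    private GetTcpPropertiesSketch() { }
}
{code}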



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #1177: NIFI-2615 Adding a GetTCP processor

2016-12-14 Thread apsaltis
Github user apsaltis closed the pull request at:

https://github.com/apache/nifi/pull/1177


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Reopened] (NIFI-3173) When creating a template Controller Services can end up being not properly scoped

2016-12-14 Thread Joseph Percivall (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Percivall reopened NIFI-3173:


> When creating a template Controller Services can end up being not properly 
> scoped
> -
>
> Key: NIFI-3173
> URL: https://issues.apache.org/jira/browse/NIFI-3173
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Joseph Percivall
>Assignee: Mark Payne
> Fix For: 1.1.1, 1.0.1
>
>
> Steps to reproduce:
> 1: From the root group create a Controller Service (ie. HttpContextMap)
> 2: Create two child process groups
> 3: In each child group create a processor that references the CS (ie. 
> HandleHttpRequest and HandleHttpResponse)
> 4: Create a template of the two child PGs
> 5: Export the template
> 6: Import template into a clean instance
> 7: Observe that the Controller Service is not scoped to the root level but 
> instead scoped to one of the child PGs
> I believe the template creation is grabbing the first instance of the CS when 
> it sees it and ignoring the other references, instead of taking it at the 
> highest level. I also believe this is related to NIFI-3129
> https://issues.apache.org/jira/browse/NIFI-3129



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3173) When creating a template Controller Services can end up being not properly scoped

2016-12-14 Thread Joseph Percivall (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Percivall updated NIFI-3173:
---
Fix Version/s: 1.0.1

> When creating a template Controller Services can end up being not properly 
> scoped
> -
>
> Key: NIFI-3173
> URL: https://issues.apache.org/jira/browse/NIFI-3173
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Joseph Percivall
>Assignee: Mark Payne
> Fix For: 1.1.1, 1.0.1
>
>
> Steps to reproduce:
> 1: From the root group create a Controller Service (ie. HttpContextMap)
> 2: Create two child process groups
> 3: In each child group create a processor that references the CS (ie. 
> HandleHttpRequest and HandleHttpResponse)
> 4: Create a template of the two child PGs
> 5: Export the template
> 6: Import template into a clean instance
> 7: Observe that the Controller Service is not scoped to the root level but 
> instead scoped to one of the child PGs
> I believe the template creation is grabbing the first instance of the CS when 
> it sees it and ignoring the other references, instead of taking it at the 
> highest level. I also believe this is related to NIFI-3129
> https://issues.apache.org/jira/browse/NIFI-3129



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (NIFI-3129) When adding template to flow, I receive failure message "The specified observer identifier already exists."

2016-12-14 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne resolved NIFI-3129.
--
Resolution: Fixed

> When adding template to flow, I receive failure message "The specified 
> observer identifier already exists."
> ---
>
> Key: NIFI-3129
> URL: https://issues.apache.org/jira/browse/NIFI-3129
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.2.0, 1.1.1, 1.0.1
>
> Attachments: 
> 0001-NIFI-3129-When-adding-controller-services-to-a-snipp.patch
>
>
> I have a template that I generated. Whenever I try to drop the template onto 
> the graph, and the UI gives me an error: "The specified observer identifier 
> already exists."
> Part of the template is added to the graph, but it then stops without adding 
> the rest of the components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3129) When adding template to flow, I receive failure message "The specified observer identifier already exists."

2016-12-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15749460#comment-15749460
 ] 

ASF subversion and git services commented on NIFI-3129:
---

Commit aa54da1cf16725d32c6b43531e8195e862999d78 in nifi's branch 
refs/heads/support/nifi-1.1.x from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=aa54da1 ]

NIFI-3129: When adding controller services to a snippet, ensure that we don't 
add the service multiple times, even when it's referenced by child process 
groups. This closes #1284
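
A hedged sketch of the dedup idea in that commit message: collect the controller 
services referenced anywhere under a process group keyed by identifier, so each 
service lands in the snippet only once. The types below are simplified stand-ins, 
not the actual framework code:

{code}
// Hedged sketch, not the actual NiFi framework change: gather referenced controller
// services across a group and its children, keyed by ID so duplicates collapse.
import java.util.LinkedHashMap;
import java.util.Map;

final class ControllerServiceCollectorSketch {

    interface ControllerService { String getIdentifier(); }
    interface ProcessGroup {
        Iterable<ControllerService> getReferencedServices();
        Iterable<ProcessGroup> getChildGroups();
    }

    static Map<String, ControllerService> collect(final ProcessGroup group) {
        final Map<String, ControllerService> byId = new LinkedHashMap<>();
        addRecursively(group, byId);
        return byId;
    }

    private static void addRecursively(final ProcessGroup group,
                                       final Map<String, ControllerService> byId) {
        for (final ControllerService service : group.getReferencedServices()) {
            // Keep the first (highest-level) occurrence; drop duplicates from child groups.
            byId.putIfAbsent(service.getIdentifier(), service);
        }
        for (final ProcessGroup child : group.getChildGroups()) {
            addRecursively(child, byId);
        }
    }

    private ControllerServiceCollectorSketch() { }
}
{code}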


> When adding template to flow, I receive failure message "The specified 
> observer identifier already exists."
> ---
>
> Key: NIFI-3129
> URL: https://issues.apache.org/jira/browse/NIFI-3129
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.2.0, 1.1.1, 1.0.1
>
> Attachments: 
> 0001-NIFI-3129-When-adding-controller-services-to-a-snipp.patch
>
>
> I have a template that I generated. Whenever I try to drop the template onto 
> the graph, and the UI gives me an error: "The specified observer identifier 
> already exists."
> Part of the template is added to the graph, but it then stops without adding 
> the rest of the components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3129) When adding template to flow, I receive failure message "The specified observer identifier already exists."

2016-12-14 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15749461#comment-15749461
 ] 

Mark Payne commented on NIFI-3129:
--

Merged to both support branches; will re-resolve ticket.

> When adding template to flow, I receive failure message "The specified 
> observer identifier already exists."
> ---
>
> Key: NIFI-3129
> URL: https://issues.apache.org/jira/browse/NIFI-3129
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.2.0, 1.1.1, 1.0.1
>
> Attachments: 
> 0001-NIFI-3129-When-adding-controller-services-to-a-snipp.patch
>
>
> I have a template that I generated. Whenever I try to drop the template onto 
> the graph, and the UI gives me an error: "The specified observer identifier 
> already exists."
> Part of the template is added to the graph, but it then stops without adding 
> the rest of the components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3129) When adding template to flow, I receive failure message "The specified observer identifier already exists."

2016-12-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15749433#comment-15749433
 ] 

ASF subversion and git services commented on NIFI-3129:
---

Commit fc2941553ecea7a29d6e74ac0fa93c8533506121 in nifi's branch 
refs/heads/support/nifi-1.0.x from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=fc29415 ]

NIFI-3129: When adding controller services to a snippet, ensure that we don't 
add the service multiple times, even when it's referenced by child process 
groups. This closes #1284


> When adding template to flow, I receive failure message "The specified 
> observer identifier already exists."
> ---
>
> Key: NIFI-3129
> URL: https://issues.apache.org/jira/browse/NIFI-3129
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.2.0, 1.1.1, 1.0.1
>
> Attachments: 
> 0001-NIFI-3129-When-adding-controller-services-to-a-snipp.patch
>
>
> I have a template that I generated. Whenever I try to drop the template onto 
> the graph, and the UI gives me an error: "The specified observer identifier 
> already exists."
> Part of the template is added to the graph, but it then stops without adding 
> the rest of the components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (NIFI-3129) When adding template to flow, I receive failure message "The specified observer identifier already exists."

2016-12-14 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15749353#comment-15749353
 ] 

Mark Payne edited comment on NIFI-3129 at 12/14/16 8:40 PM:


The commit id to cherry-pick is fff0148a0e057277b34e647ef31c441ed5b4bdc3


was (Author: markap14):
The commit id to cherry-pick is ba513447d75dc5e95ddcdfcac1a1fefe2eb175ce

> When adding template to flow, I receive failure message "The specified 
> observer identifier already exists."
> ---
>
> Key: NIFI-3129
> URL: https://issues.apache.org/jira/browse/NIFI-3129
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.2.0, 1.1.1, 1.0.1
>
> Attachments: 
> 0001-NIFI-3129-When-adding-controller-services-to-a-snipp.patch
>
>
> I have a template that I generated. Whenever I try to drop the template onto 
> the graph, and the UI gives me an error: "The specified observer identifier 
> already exists."
> Part of the template is added to the graph, but it then stops without adding 
> the rest of the components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3115) Enhance user policy management functionality

2016-12-14 Thread Andrew Lim (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15749400#comment-15749400
 ] 

Andrew Lim commented on NIFI-3115:
--

Lots of good ideas here [~alopresto], especially to streamline the user 
creation process (with and without policies).  I definitely looked at NIFI-2926 
as the first stage for the User Policies window.  As you suggested, there are 
ways to make that window more interactive/powerful, so that it doesn't just 
show a user's policies, but allows management of those policies as well.

> Enhance user policy management functionality
> 
>
> Key: NIFI-3115
> URL: https://issues.apache.org/jira/browse/NIFI-3115
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework, Core UI
>Affects Versions: 1.1.0
>Reporter: Andy LoPresto
>Priority: Critical
>  Labels: dac, permissions, policy, rbac, user
> Attachments: Screen Shot 2016-11-28 at 6.57.43 PM.png
>
>
> With the multi-tenant authorization model introduced in version 1.1.0, NiFi 
> has moved from role-based access control (RBAC) to a more granular 
> combination of discretionary access control (DAC) (permissions based on 
> individual user credentials combined with membership in groups) and 
> permission-based access control (granting explicit behavioral access on 
> individual resources to specific users and groups). See [Overview of Access 
> Control Models|https://www.owasp.org/index.php/Access_Control_Cheat_Sheet] 
> for more details. 
> Because of this change, centralized management of user permissions ("policies" 
> in NiFi terminology) can be complex. For example, adding a new user with the 
> same policies as the "Initial Administrator Identity" requires approximately 
> 55 clicks, and adding a user with all policies would take approximately 80. 
> Currently, the mental model appears to me to be policy-focused as opposed to 
> user-focused. This makes sense as the development of these features was 
> highly-focused on policy definition, and in default deployments, the number 
> of policies outnumbers the number of users. Much like [NIFI-2926] streamlined 
> viewing the policies assigned to a user across the entire application, I 
> propose a couple of features to make user/policy management much easier. 
> I believe these should be broken out into subtasks of this ticket, but I am 
> including all of my thoughts in the ticket description to facilitate 
> discussion in a single location. Once the community has weighed in, they can 
> be subdivided. 
> * Clone user feature
> ** This feature would allow an administrator/user with necessary user 
> management permissions to clone an existing user and copy their permissions. 
> This is useful for adding new members of a team with the expectation that 
> they would gain access to the same resources and global policies granted to a 
> colleague at a similar level of job responsibility. This feature should be 
> implemented in a way that the policies are cloned but not related -- i.e. if 
> Andy has permission X and Matt is a clone, Matt should have permission X 
> permanently, even if Andy loses permission X tomorrow. 
> * New user policy definition dialog
> ** Similar to the attached screenshot for viewing policies assigned to a 
> user, I suggest a feature where a specific user or group can be selected and 
> all available global and per-resource policies on the system are exposed as a 
> list with checkboxes or a ternary selector if applicable (NONE, READ, 
> READ+WRITE). The existing policies for the user/group would be 
> pre-populated/selected. This would allow the rapid creation of a new user 
> with appropriate policy assignment without cloning an existing user, and the 
> rapid application of new policies to an existing user/group. 
> * Batch user import
> ** Whether the users are providing client certificates, LDAP credentials, or 
> Kerberos tickets to authenticate, the canonical source of identity is still 
> managed by NiFi. I propose a mechanism to quickly define multiple users in 
> the system (without affording any policy assignments). Here I am looking for 
> substantial community input on the most common/desired use cases, but my 
> initial thoughts are:
> *** One user per line in a text file/pastable text area in a UI dialog
> **** Each line is parsed and a user is defined with the provided username
> *** LDAP-specific
> **** A manager DN and password (similar to what is needed for LDAP authentication) 
> are used to authenticate the admin/user manager, and then an LDAP query string 
> (i.e. {{ou=users,dc=nifi,dc=apache,dc=org}}) is provided; the dialog 
> displays/API returns a list of users/groups matching the query. The admin can 
> then select which to import to NiFi and confirm (a JNDI sketch follows below). 
> *** Kerberos-specific
> 
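
As a hedged illustration of the LDAP-specific import path above, a plain JNDI 
search with a manager DN against a base such as ou=users,dc=nifi,dc=apache,dc=org 
could look like the following; the host, credentials, filter, and attribute names 
are assumptions:

{code}
// Hedged sketch only: list candidate users for batch import via a JNDI LDAP search.
import java.util.Hashtable;
import javax.naming.NamingEnumeration;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class LdapUserImportSketch {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(javax.naming.Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(javax.naming.Context.PROVIDER_URL, "ldap://ldap.example.com:389");      // assumed host
        env.put(javax.naming.Context.SECURITY_PRINCIPAL, "cn=manager,dc=nifi,dc=apache,dc=org");
        env.put(javax.naming.Context.SECURITY_CREDENTIALS, "password");                 // assumed credentials

        InitialDirContext ctx = new InitialDirContext(env);
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);

        // Each result could be offered to the admin for confirmation before a NiFi user is created.
        NamingEnumeration<SearchResult> results =
                ctx.search("ou=users,dc=nifi,dc=apache,dc=org", "(objectClass=person)", controls);
        while (results.hasMore()) {
            SearchResult result = results.next();
            System.out.println(result.getAttributes().get("cn"));
        }
        ctx.close();
    }
}
{code}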

[jira] [Reopened] (NIFI-3129) When adding template to flow, I receive failure message "The specified observer identifier already exists."

2016-12-14 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne reopened NIFI-3129:
--

> When adding template to flow, I receive failure message "The specified 
> observer identifier already exists."
> ---
>
> Key: NIFI-3129
> URL: https://issues.apache.org/jira/browse/NIFI-3129
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.2.0, 1.1.1, 1.0.1
>
>
> I have a template that I generated. Whenever I try to drop the template onto 
> the graph, and the UI gives me an error: "The specified observer identifier 
> already exists."
> Part of the template is added to the graph, but it then stops without adding 
> the rest of the components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3129) When adding template to flow, I receive failure message "The specified observer identifier already exists."

2016-12-14 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15749353#comment-15749353
 ] 

Mark Payne commented on NIFI-3129:
--

The commit id to cherry-pick is ba513447d75dc5e95ddcdfcac1a1fefe2eb175ce

> When adding template to flow, I receive failure message "The specified 
> observer identifier already exists."
> ---
>
> Key: NIFI-3129
> URL: https://issues.apache.org/jira/browse/NIFI-3129
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.2.0, 1.1.1, 1.0.1
>
> Attachments: 
> 0001-NIFI-3129-When-adding-controller-services-to-a-snipp.patch
>
>
> I have a template that I generated. Whenever I try to drop the template onto 
> the graph, and the UI gives me an error: "The specified observer identifier 
> already exists."
> Part of the template is added to the graph, but it then stops without adding 
> the rest of the components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3129) When adding template to flow, I receive failure message "The specified observer identifier already exists."

2016-12-14 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-3129:
-
Attachment: 0001-NIFI-3129-When-adding-controller-services-to-a-snipp.patch

> When adding template to flow, I receive failure message "The specified 
> observer identifier already exists."
> ---
>
> Key: NIFI-3129
> URL: https://issues.apache.org/jira/browse/NIFI-3129
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.2.0, 1.1.1, 1.0.1
>
> Attachments: 
> 0001-NIFI-3129-When-adding-controller-services-to-a-snipp.patch
>
>
> I have a template that I generated. Whenever I try to drop the template onto 
> the graph, and the UI gives me an error: "The specified observer identifier 
> already exists."
> Part of the template is added to the graph, but it then stops without adding 
> the rest of the components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3129) When adding template to flow, I receive failure message "The specified observer identifier already exists."

2016-12-14 Thread Mark Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15749348#comment-15749348
 ] 

Mark Payne commented on NIFI-3129:
--

Re-opening this because it needs to go into the 1.0.1 and 1.1.1 branches as 
well. Will attach a patch to this JIRA to make that coordination a little easier.

> When adding template to flow, I receive failure message "The specified 
> observer identifier already exists."
> ---
>
> Key: NIFI-3129
> URL: https://issues.apache.org/jira/browse/NIFI-3129
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.2.0, 1.1.1, 1.0.1
>
>
> I have a template that I generated. Whenever I try to drop the template onto 
> the graph, and the UI gives me an error: "The specified observer identifier 
> already exists."
> Part of the template is added to the graph, but it then stops without adding 
> the rest of the components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3129) When adding template to flow, I receive failure message "The specified observer identifier already exists."

2016-12-14 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-3129:
-
Fix Version/s: 1.0.1
   1.1.1

> When adding template to flow, I receive failure message "The specified 
> observer identifier already exists."
> ---
>
> Key: NIFI-3129
> URL: https://issues.apache.org/jira/browse/NIFI-3129
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.2.0, 1.1.1, 1.0.1
>
>
> I have a template that I generated. Whenever I try to drop the template onto 
> the graph, and the UI gives me an error: "The specified observer identifier 
> already exists."
> Part of the template is added to the graph, but it then stops without adding 
> the rest of the components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3204) delete hdfs processor throws an error stating transfer relationship not specified even when all relationships are present

2016-12-14 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated NIFI-3204:
--
Affects Version/s: 1.0.0
   1.1.0

> delete hdfs processor throws an error stating transfer relationship not 
> specified even when all relationships are present
> -
>
> Key: NIFI-3204
> URL: https://issues.apache.org/jira/browse/NIFI-3204
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Arpit Gupta
>
> The following flow was set up:
> get file -> extract text -> delete hdfs
> A bunch of files were written, each having one line which was the path to 
> delete. Some of these paths were files, some were directories, and some were 
> patterns. Extract text would extract the line and assign it to an attribute 
> which delete hdfs would use to populate the path to delete.
> However, the processor would run into an error whenever it tried to process 
> a path which was a pattern matching multiple paths.
> {code}
> 2016-12-14 11:32:43,335 ERROR [Timer-Driven Process Thread-7] 
> o.a.nifi.processors.hadoop.DeleteHDFS 
> DeleteHDFS[id=fed0acf6-0158-1000-b7ab-8cc724e4142d] 
> DeleteHDFS[id=fed0acf6-0158-1000-b7ab-8cc724e4142d] failed to process session 
> due to org.apache.nifi.processor.exception
> .FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=af8be94a-e527-4203-bb87-a0115f84e582,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1481656798897-1, container=default, 
> section=1], offset=6518, length=75],offset=0,name=noyg3p7km8.txt,size=75] tr
> ansfer relationship not specified: 
> org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=af8be94a-e527-4203-bb87-a0115f84e582,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1481656798897-1, container=default, 
> sectio
> n=1], offset=6518, length=75],offset=0,name=noyg3p7km8.txt,size=75] transfer 
> relationship not specified
> 2016-12-14 11:32:43,335 ERROR [Timer-Driven Process Thread-7] 
> o.a.nifi.processors.hadoop.DeleteHDFS
> org.apache.nifi.processor.exception.FlowFileHandlingException: 
> StandardFlowFileRecord[uuid=af8be94a-e527-4203-bb87-a0115f84e582,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1481656798897-1, container=default, 
> section=1], offset=6518, length=75],offse
> t=0,name=noyg3p7km8.txt,size=75] transfer relationship not specified
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.checkpoint(StandardProcessSession.java:234)
>  ~[nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:304)
>  ~[nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28)
>  ~[nifi-api-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099)
>  ~[nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>  [nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>  [nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
>  [nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_92]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_92]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_92]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_92]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_92]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_92]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_92]
> 2016-12-14 11:32:43,335 WARN [Timer-Driven Process Thread-7] 
> o.a.nifi.processors.hadoop.DeleteHDFS 
> DeleteHDFS[id=fed0acf6-0158-1000-b7ab-8cc724e4142d] Processor 
> Administratively Yielded for 1 sec due to processing failure
> {code}
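
A hedged sketch of the pattern that avoids the error above: once a FlowFile has 
been pulled from the session, every code path (including the branch that expands a 
glob into multiple HDFS paths) must end in a transfer or a remove. This is an 
onTrigger-style excerpt with the property, relationship, and FileSystem fields 
assumed from the processor; it is not the actual DeleteHDFS code.

{code}
// Hedged excerpt, not the actual DeleteHDFS implementation: always transfer the
// incoming FlowFile, even when the supplied path is a pattern matching many entries.
FlowFile flowFile = session.get();
if (flowFile == null) {
    return;
}
final Path globPath = new Path(context.getProperty(FILE_OR_DIRECTORY)
        .evaluateAttributeExpressions(flowFile).getValue());
try {
    final FileStatus[] matches = fileSystem.globStatus(globPath);   // pattern may match several paths
    if (matches != null) {
        for (final FileStatus match : matches) {
            fileSystem.delete(match.getPath(), true);
        }
    }
    session.transfer(flowFile, REL_SUCCESS);   // must also happen on the glob branch
} catch (final IOException e) {
    session.transfer(flowFile, REL_FAILURE);
}
{code}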



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-3204) delete hdfs processor throws an error stating transfer relationship not specified even when all relationships are present

2016-12-14 Thread Arpit Gupta (JIRA)
Arpit Gupta created NIFI-3204:
-

 Summary: delete hdfs processor throws an error stating transfer 
relationship not specified even when all relationships are present
 Key: NIFI-3204
 URL: https://issues.apache.org/jira/browse/NIFI-3204
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Arpit Gupta


The following flow was set up:

get file -> extract text -> delete hdfs

A bunch of files were written, each having one line which was the path to 
delete. Some of these paths were files, some were directories, and some were 
patterns. Extract text would extract the line and assign it to an attribute which 
delete hdfs would use to populate the path to delete.

However, the processor would run into an error whenever it tried to process a 
path which was a pattern matching multiple paths.

{code}
2016-12-14 11:32:43,335 ERROR [Timer-Driven Process Thread-7] 
o.a.nifi.processors.hadoop.DeleteHDFS 
DeleteHDFS[id=fed0acf6-0158-1000-b7ab-8cc724e4142d] 
DeleteHDFS[id=fed0acf6-0158-1000-b7ab-8cc724e4142d] failed to process session 
due to org.apache.nifi.processor.exception
.FlowFileHandlingException: 
StandardFlowFileRecord[uuid=af8be94a-e527-4203-bb87-a0115f84e582,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1481656798897-1, container=default, 
section=1], offset=6518, length=75],offset=0,name=noyg3p7km8.txt,size=75] tr
ansfer relationship not specified: 
org.apache.nifi.processor.exception.FlowFileHandlingException: 
StandardFlowFileRecord[uuid=af8be94a-e527-4203-bb87-a0115f84e582,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1481656798897-1, container=default, 
sectio
n=1], offset=6518, length=75],offset=0,name=noyg3p7km8.txt,size=75] transfer 
relationship not specified
2016-12-14 11:32:43,335 ERROR [Timer-Driven Process Thread-7] 
o.a.nifi.processors.hadoop.DeleteHDFS
org.apache.nifi.processor.exception.FlowFileHandlingException: 
StandardFlowFileRecord[uuid=af8be94a-e527-4203-bb87-a0115f84e582,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1481656798897-1, container=default, 
section=1], offset=6518, length=75],offse
t=0,name=noyg3p7km8.txt,size=75] transfer relationship not specified
at 
org.apache.nifi.controller.repository.StandardProcessSession.checkpoint(StandardProcessSession.java:234)
 ~[nifi-framework-core-1.1.0.jar:1.1.0]
at 
org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:304)
 ~[nifi-framework-core-1.1.0.jar:1.1.0]
at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28)
 ~[nifi-api-1.1.0.jar:1.1.0]
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099)
 ~[nifi-framework-core-1.1.0.jar:1.1.0]
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
 [nifi-framework-core-1.1.0.jar:1.1.0]
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
 [nifi-framework-core-1.1.0.jar:1.1.0]
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
 [nifi-framework-core-1.1.0.jar:1.1.0]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_92]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
[na:1.8.0_92]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 [na:1.8.0_92]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 [na:1.8.0_92]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_92]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_92]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_92]
2016-12-14 11:32:43,335 WARN [Timer-Driven Process Thread-7] 
o.a.nifi.processors.hadoop.DeleteHDFS 
DeleteHDFS[id=fed0acf6-0158-1000-b7ab-8cc724e4142d] Processor Administratively 
Yielded for 1 sec due to processing failure
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3117) Snippet usage: Authorize all referenced services

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-3117:
--
Fix Version/s: (was: 1.0.1)

> Snippet usage: Authorize all referenced services
> 
>
> Key: NIFI-3117
> URL: https://issues.apache.org/jira/browse/NIFI-3117
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
> Fix For: 1.2.0, 1.1.1
>
>
> When creating a template or using copy/paste, all selected components are used 
> to build a snippet. The snippet is then authorized for the current user. 
> However, services referenced by those components are not considered part of 
> the snippet and should also be checked.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3133) Unable to empty queue connected to a RPG

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-3133:
--
Fix Version/s: (was: 1.0.1)

> Unable to empty queue connected to a RPG
> 
>
> Key: NIFI-3133
> URL: https://issues.apache.org/jira/browse/NIFI-3133
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
> Fix For: 1.2.0, 1.1.1
>
>
> Unable to empty a queue in a connection where the source or destination is a 
> Remote Process Group. It appears to be using the policy of the port instead of 
> the policy of the RPG, which is what is used for other configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-1) Remote Process Groups' output Ports removed if connected to another component and update of Flow fails

2016-12-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15749301#comment-15749301
 ] 

ASF subversion and git services commented on NIFI-1:


Commit f477590e7b0e9a8e5520fc2690f27bfe63dac75a in nifi's branch 
refs/heads/support/nifi-1.0.x from [~joewitt]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=f477590 ]

NIFI-3202 updated versions to NIFI-1.0.1-SNAPSHOT


> Remote Process Groups' output Ports removed if connected to another component 
> and update of Flow fails
> --
>
> Key: NIFI-1
> URL: https://issues.apache.org/jira/browse/NIFI-1
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
> Fix For: 0.0.1
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3202) Perform rm duties for nifi 1.0.1

2016-12-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15749300#comment-15749300
 ] 

ASF subversion and git services commented on NIFI-3202:
---

Commit f477590e7b0e9a8e5520fc2690f27bfe63dac75a in nifi's branch 
refs/heads/support/nifi-1.0.x from [~joewitt]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=f477590 ]

NIFI-3202 updated versions to NIFI-1.0.1-SNAPSHOT


> Perform rm duties for nifi 1.0.1
> 
>
> Key: NIFI-3202
> URL: https://issues.apache.org/jira/browse/NIFI-3202
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Tools and Build
>Reporter: Joseph Witt
> Fix For: 1.0.1
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-3203) Perform nifi 1.1.1 release duties

2016-12-14 Thread Joseph Witt (JIRA)
Joseph Witt created NIFI-3203:
-

 Summary: Perform nifi 1.1.1 release duties
 Key: NIFI-3203
 URL: https://issues.apache.org/jira/browse/NIFI-3203
 Project: Apache NiFi
  Issue Type: Task
  Components: Tools and Build
Reporter: Joseph Witt
Assignee: Joseph Witt
 Fix For: 1.1.1






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3202) Perform rm duties for nifi 1.0.1

2016-12-14 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-3202:
--
Assignee: (was: Joseph Witt)

> Perform rm duties for nifi 1.0.1
> 
>
> Key: NIFI-3202
> URL: https://issues.apache.org/jira/browse/NIFI-3202
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Tools and Build
>Reporter: Joseph Witt
> Fix For: 1.0.1
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3200) Cleanup authorization and REST api logic

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-3200:
--
Description: 
Creating a new JIRA to backport bugs necessary for 1.0.1.

https://issues.apache.org/jira/browse/NIFI-2768
https://issues.apache.org/jira/browse/NIFI-2694
https://issues.apache.org/jira/browse/NIFI-2687
https://issues.apache.org/jira/browse/NIFI-2704
https://issues.apache.org/jira/browse/NIFI-2777
https://issues.apache.org/jira/browse/NIFI-2824
https://issues.apache.org/jira/browse/NIFI-2797

  was:
Creating a new JIRA to backport bugs necessary for 1.0.1.

https://issues.apache.org/jira/browse/NIFI-2768
https://issues.apache.org/jira/browse/NIFI-2694
https://issues.apache.org/jira/browse/NIFI-2687
https://issues.apache.org/jira/browse/NIFI-2704
https://issues.apache.org/jira/browse/NIFI-2777
https://issues.apache.org/jira/browse/NIFI-2824


> Cleanup authorization and REST api logic
> 
>
> Key: NIFI-3200
> URL: https://issues.apache.org/jira/browse/NIFI-3200
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
> Fix For: 1.0.1
>
>
> Creating a new JIRA to backport bugs necessary for 1.0.1.
> https://issues.apache.org/jira/browse/NIFI-2768
> https://issues.apache.org/jira/browse/NIFI-2694
> https://issues.apache.org/jira/browse/NIFI-2687
> https://issues.apache.org/jira/browse/NIFI-2704
> https://issues.apache.org/jira/browse/NIFI-2777
> https://issues.apache.org/jira/browse/NIFI-2824
> https://issues.apache.org/jira/browse/NIFI-2797



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3200) Cleanup authorization and REST api logic

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-3200:
--
Description: 
Creating a new JIRA to backport bugs necessary for 1.0.1.

https://issues.apache.org/jira/browse/NIFI-2768
https://issues.apache.org/jira/browse/NIFI-2694
https://issues.apache.org/jira/browse/NIFI-2687
https://issues.apache.org/jira/browse/NIFI-2704
https://issues.apache.org/jira/browse/NIFI-2777
https://issues.apache.org/jira/browse/NIFI-2824

  was:
Creating a new JIRA to backport bugs necessary for 1.0.1.

https://issues.apache.org/jira/browse/NIFI-2768
https://issues.apache.org/jira/browse/NIFI-2694
https://issues.apache.org/jira/browse/NIFI-2687
https://issues.apache.org/jira/browse/NIFI-2704
https://issues.apache.org/jira/browse/NIFI-2777


> Cleanup authorization and REST api logic
> 
>
> Key: NIFI-3200
> URL: https://issues.apache.org/jira/browse/NIFI-3200
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
> Fix For: 1.0.1
>
>
> Creating a new JIRA to backport bugs necessary for 1.0.1.
> https://issues.apache.org/jira/browse/NIFI-2768
> https://issues.apache.org/jira/browse/NIFI-2694
> https://issues.apache.org/jira/browse/NIFI-2687
> https://issues.apache.org/jira/browse/NIFI-2704
> https://issues.apache.org/jira/browse/NIFI-2777
> https://issues.apache.org/jira/browse/NIFI-2824



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3200) Cleanup authorization and REST api logic

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-3200:
--
Description: 
Creating a new JIRA to backport bugs necessary for 1.0.1.

https://issues.apache.org/jira/browse/NIFI-2768
https://issues.apache.org/jira/browse/NIFI-2694
https://issues.apache.org/jira/browse/NIFI-2687
https://issues.apache.org/jira/browse/NIFI-2704
https://issues.apache.org/jira/browse/NIFI-2777

  was:
Creating a new JIRA to backport bugs necessary for 1.0.1.

https://issues.apache.org/jira/browse/NIFI-2768
https://issues.apache.org/jira/browse/NIFI-2694
https://issues.apache.org/jira/browse/NIFI-2687
https://issues.apache.org/jira/browse/NIFI-2704


> Cleanup authorization and REST api logic
> 
>
> Key: NIFI-3200
> URL: https://issues.apache.org/jira/browse/NIFI-3200
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
> Fix For: 1.0.1
>
>
> Creating a new JIRA to backport bugs necessary for 1.0.1.
> https://issues.apache.org/jira/browse/NIFI-2768
> https://issues.apache.org/jira/browse/NIFI-2694
> https://issues.apache.org/jira/browse/NIFI-2687
> https://issues.apache.org/jira/browse/NIFI-2704
> https://issues.apache.org/jira/browse/NIFI-2777



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-3202) Perform rm duties for nifi 1.0.1

2016-12-14 Thread Joseph Witt (JIRA)
Joseph Witt created NIFI-3202:
-

 Summary: Perform rm duties for nifi 1.0.1
 Key: NIFI-3202
 URL: https://issues.apache.org/jira/browse/NIFI-3202
 Project: Apache NiFi
  Issue Type: Task
  Components: Tools and Build
Reporter: Joseph Witt
Assignee: Joseph Witt
 Fix For: 1.0.1






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3198) [Backport to 1.0.x] Address issues of PublishKafka blocking when having trouble communicating with Kafka broker and improve performance

2016-12-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15749221#comment-15749221
 ] 

ASF GitHub Bot commented on NIFI-3198:
--

GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/1330

NIFI-3198: Refactored how PublishKafka and PublishKafka_0_10 work to …

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.

…improve throughput and resilience. Fixed bug in StreamDemarcator. Slight 
refactoring of consume processors to simplify code.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-3198

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1330.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1330


commit eedc65fb46e46af6ad7e0bfa1abc9cb117ac965e
Author: Mark Payne 
Date:   2016-12-14T19:23:21Z

NIFI-3198: Refactored how PublishKafka and PublishKafka_0_10 work to 
improve throughput and resilience. Fixed bug in StreamDemarcator. Slight 
refactoring of consume processors to simplify code.
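
For context, here is a minimal sketch of the non-blocking publish pattern the commit 
message above describes: collect the futures returned by send(), flush, and bound the 
wait instead of blocking per record. It is illustrative only, written against the plain 
kafka-clients producer API rather than the actual NiFi processor code, and the broker 
address and topic name are made up.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class BoundedPublishSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        // Bound how long send() may block when metadata or buffer space is unavailable,
        // so a broker outage surfaces as an exception instead of hanging the caller.
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "5000");

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            List<Future<RecordMetadata>> futures = new ArrayList<>();
            for (int i = 0; i < 10; i++) {
                byte[] value = ("message-" + i).getBytes("UTF-8");
                // send() is asynchronous; keep the Future instead of calling get() per record.
                futures.add(producer.send(new ProducerRecord<>("demo-topic", value)));
            }
            producer.flush(); // push everything that has been queued
            for (Future<RecordMetadata> future : futures) {
                future.get(5, TimeUnit.SECONDS); // fail the batch if a record cannot be confirmed in time
            }
        }
    }
}
{code}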




> [Backport to 1.0.x] Address issues of PublishKafka blocking when having 
> trouble communicating with Kafka broker and improve performance
> ---
>
> Key: NIFI-3198
> URL: https://issues.apache.org/jira/browse/NIFI-3198
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.0.1
>
>
> This was addressed for 1.1.0 in NIFI-2865 but it should be backported to 
> 1.0.x as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #1330: NIFI-3198: Refactored how PublishKafka and PublishK...

2016-12-14 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/1330

NIFI-3198: Refactored how PublishKafka and PublishKafka_0_10 work to …

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.

…improve throughput and resilience. Fixed bug in StreamDemarcator. Slight 
refactoring of consume processors to simplify code.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-3198

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1330.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1330


commit eedc65fb46e46af6ad7e0bfa1abc9cb117ac965e
Author: Mark Payne 
Date:   2016-12-14T19:23:21Z

NIFI-3198: Refactored how PublishKafka and PublishKafka_0_10 work to 
improve throughput and resilience. Fixed bug in StreamDemarcator. Slight 
refactoring of consume processors to simplify code.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (NIFI-3173) When creating a template Controller Services can end up being not properly scoped

2016-12-14 Thread Joseph Percivall (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Percivall updated NIFI-3173:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> When creating a template Controller Services can end up being not properly 
> scoped
> -
>
> Key: NIFI-3173
> URL: https://issues.apache.org/jira/browse/NIFI-3173
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Joseph Percivall
>Assignee: Mark Payne
> Fix For: 1.1.1
>
>
> Steps to reproduce:
> 1: From the root group create a Controller Service (ie. HttpContextMap)
> 2: Create two child process groups
> 3: In each child group create a processor that references the CS (ie. 
> HandleHttpRequest and HandleHttpResponse)
> 4: Create a template of the two child PGs
> 5: Export the template
> 6: Import template into a clean instance
> 7: See the Controller service not scoped to the root level but instead scoped 
> to one of the child PGs
> I believe the template creation is grabbing the first instance of the CS when 
> it sees it and ignores the other references, instead of taking it at the 
> highest level. I also believe this is related to NIFI-3129
> https://issues.apache.org/jira/browse/NIFI-3129



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3173) When creating a template Controller Services can end up being not properly scoped

2016-12-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15749200#comment-15749200
 ] 

ASF subversion and git services commented on NIFI-3173:
---

Commit 5776c4b1f99df4662dd878c8183a8c03e6861cd2 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=5776c4b ]

NIFI-3173: When a template is created with multiple components in different 
groups all referencing the same controller service, ensure that controller 
service is added to the template at a high enough level that all components 
needing the service can access it.

- Ensure that controller services are added to child process groups when 
creating snippet

- Addressed issue related to modifying higher-level process groups' controller 
services in snippet after having already visited the process group

This closes #1318

Signed-off-by: jpercivall 
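
To illustrate the scoping rule in the commit message, here is a small sketch (a hypothetical 
Group type, not the framework's ProcessGroup API) that finds the lowest ancestor group visible 
to every component referencing a shared controller service; that is the level at which the 
template has to carry the service.

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Hypothetical, simplified model: just enough to show the ancestor walk.
class Group {
    final String name;
    final Group parent;
    Group(String name, Group parent) { this.name = name; this.parent = parent; }
}

public class ServiceScopeSketch {

    /** Chain of groups from the given group up to the root, nearest first. */
    static List<Group> ancestry(Group group) {
        List<Group> chain = new ArrayList<>();
        for (Group cur = group; cur != null; cur = cur.parent) {
            chain.add(cur);
        }
        return chain;
    }

    /** Lowest group that every referencing component's group can see. */
    static Group highestRequiredScope(List<Group> referencingGroups) {
        Set<Group> candidates = new LinkedHashSet<>(ancestry(referencingGroups.get(0)));
        for (Group group : referencingGroups.subList(1, referencingGroups.size())) {
            candidates.retainAll(ancestry(group));
        }
        return candidates.iterator().next(); // nearest ancestor shared by all of them
    }

    public static void main(String[] args) {
        Group root = new Group("root", null);
        Group childA = new Group("child-a", root);
        Group childB = new Group("child-b", root);
        // Components in child-a and child-b reference the same controller service,
        // so a template of the two children must carry the service at "root".
        System.out.println(highestRequiredScope(Arrays.asList(childA, childB)).name); // prints "root"
    }
}
{code}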


> When creating a template Controller Services can end up being not properly 
> scoped
> -
>
> Key: NIFI-3173
> URL: https://issues.apache.org/jira/browse/NIFI-3173
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Joseph Percivall
>Assignee: Mark Payne
> Fix For: 1.1.1
>
>
> Steps to reproduce:
> 1: From the root group create a Controller Service (ie. HttpContextMap)
> 2: Create two child process groups
> 3: In each child group create a processor that references the CS (ie. 
> HandleHttpRequest and HandleHttpResponse)
> 4: Create a template of the two child PGs
> 5: Export the template
> 6: Import template into a clean instance
> 7: See the Controller service not scoped to the root level but instead scoped 
> to one of the child PGs
> I believe the template creation is grabbing the first instance of the CS when 
> it sees it and ignores the other references, instead of taking it at the 
> highest level. I also believe this is related to NIFI-3129
> https://issues.apache.org/jira/browse/NIFI-3129



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3173) When creating a template Controller Services can end up being not properly scoped

2016-12-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15749202#comment-15749202
 ] 

ASF GitHub Bot commented on NIFI-3173:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1318


> When creating a template Controller Services can end up being not properly 
> scoped
> -
>
> Key: NIFI-3173
> URL: https://issues.apache.org/jira/browse/NIFI-3173
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Joseph Percivall
>Assignee: Mark Payne
> Fix For: 1.1.1
>
>
> Steps to reproduce:
> 1: From the root group create a Controller Service (ie. HttpContextMap)
> 2: Create two child process groups
> 3: In each child group create a processor that references the CS (ie. 
> HandleHttpRequest and HandleHttpResponse)
> 4: Create a template of the two child PGs
> 5: Export the template
> 6: Import template into a clean instance
> 7: See the Controller service not scoped to the root level but instead scoped 
> to one of the child PGs
> I believe the template creation is grabbing the first instance of the CS when 
> it sees it and ignores the other references, instead of taking it at the 
> highest level. I also believe this is related to NIFI-3129
> https://issues.apache.org/jira/browse/NIFI-3129



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #1318: NIFI-3173: When a template is created with multiple...

2016-12-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1318


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-3173) When creating a template Controller Services can end up being not properly scoped

2016-12-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15749165#comment-15749165
 ] 

ASF GitHub Bot commented on NIFI-3173:
--

Github user JPercivall commented on the issue:

https://github.com/apache/nifi/pull/1318
  
+1

Visually verified code and did a contrib-check build. In standalone and 
clustered, tried creating templates ranging in complexity from a processor 
template to a template containing multiple different PGs, CSs and processors at 
every level. All worked as expected. Thanks @markap14, I will squash and merge.


> When creating a template Controller Services can end up being not properly 
> scoped
> -
>
> Key: NIFI-3173
> URL: https://issues.apache.org/jira/browse/NIFI-3173
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Joseph Percivall
>Assignee: Mark Payne
> Fix For: 1.1.1
>
>
> Steps to reproduce:
> 1: From the root group create a Controller Service (ie. HttpContextMap)
> 2: Create two child process groups
> 3: In each child group create a processor that references the CS (ie. 
> HandleHttpRequest and HandleHttpResponse)
> 4: Create a template of the two child PGs
> 5: Export the template
> 6: Import template into a clean instance
> 7: See the Controller service not scoped to the root level but instead scoped 
> to one of the child PGs
> I believe the template creation is grabbing the first instance of the CS when 
> it sees it and ignores the other references, instead of taking it at the 
> highest level. I also believe this is related to NIFI-3129
> https://issues.apache.org/jira/browse/NIFI-3129



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3198) [Backport to 1.0.x] Address issues of PublishKafka blocking when having trouble communicating with Kafka broker and improve performance

2016-12-14 Thread Mark Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-3198:
-
Fix Version/s: 1.0.1

> [Backport to 1.0.x] Address issues of PublishKafka blocking when having 
> trouble communicating with Kafka broker and improve performance
> ---
>
> Key: NIFI-3198
> URL: https://issues.apache.org/jira/browse/NIFI-3198
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.0.1
>
>
> This was addressed for 1.1.0 in NIFI-2865 but it should be backported to 
> 1.0.x as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3200) Cleanup authorization and REST api logic

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-3200:
--
Summary: Cleanup authorization and REST api logic  (was: Backport for 1.0.1)

> Cleanup authorization and REST api logic
> 
>
> Key: NIFI-3200
> URL: https://issues.apache.org/jira/browse/NIFI-3200
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
> Fix For: 1.0.1
>
>
> Creating a new JIRA to backport bugs necessary for 1.0.1.
> https://issues.apache.org/jira/browse/NIFI-2768
> https://issues.apache.org/jira/browse/NIFI-2694
> https://issues.apache.org/jira/browse/NIFI-2687
> https://issues.apache.org/jira/browse/NIFI-2704



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3201) Cleanup authorization and REST api logic

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-3201:
--
Summary: Cleanup authorization and REST api logic  (was: Backport for 1.0.1)

> Cleanup authorization and REST api logic
> 
>
> Key: NIFI-3201
> URL: https://issues.apache.org/jira/browse/NIFI-3201
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
> Fix For: 1.0.1
>
>
> Creating a new JIRA to backport bugs necessary for 1.0.1.
> https://issues.apache.org/jira/browse/NIFI-2768



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3186) NullPointerException in Provenance Repository on startup

2016-12-14 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15749151#comment-15749151
 ] 

Joseph Witt commented on NIFI-3186:
---

+1

> NullPointerException in Provenance Repository on startup 
> -
>
> Key: NIFI-3186
> URL: https://issues.apache.org/jira/browse/NIFI-3186
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Joseph Gresock
>Assignee: Joseph Gresock
> Fix For: 1.2.0, 1.1.1
>
>
> Often when NiFi restarts itself, we will get this error on startup, which 
> prevents the node from starting until we delete the provenance_repository 
> from disk.
> {code}
> Caused by: java.lang.RuntimeException: Unable to create Provenance Repository
> at 
> org.apache.nifi.controller.FlowController.<init>(FlowController.java:459) 
> ~[nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.FlowController.createClusteredInstance(FlowController.java:403)
>  ~[nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.spring.FlowControllerFactoryBean.getObject(FlowControllerFactoryBean.java:61)
>  ~[nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:168)
>  ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
> ... 41 common frames omitted
> Caused by: java.lang.NullPointerException: null
> at 
> org.apache.nifi.provenance.schema.EventRecord.getEvent(EventRecord.java:122) 
> ~[na:na]
> at 
> org.apache.nifi.provenance.ByteArraySchemaRecordReader.nextRecord(ByteArraySchemaRecordReader.java:77)
>  ~[na:na]
> at 
> org.apache.nifi.provenance.serialization.CompressableRecordReader.nextRecord(CompressableRecordReader.java:272)
>  ~[na:na]
> at 
> org.apache.nifi.provenance.PersistentProvenanceRepository.mergeJournals(PersistentProvenanceRepository.java:1826)
>  ~[na:na]
> at 
> org.apache.nifi.provenance.PersistentProvenanceRepository.recoverJournalFiles(PersistentProvenanceRepository.java:1505)
>  ~[na:na]
> at 
> org.apache.nifi.provenance.PersistentProvenanceRepository.recover(PersistentProvenanceRepository.java:665)
>  ~[na:na]
> at 
> org.apache.nifi.provenance.PersistentProvenanceRepository.initialize(PersistentProvenanceRepository.java:268)
>  ~[na:na]
> at 
> org.apache.nifi.controller.FlowController.<init>(FlowController.java:457) 
> ~[nifi-framework-core-1.1.0.jar:1.1.0]
> {code}
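
For illustration, a minimal sketch of the kind of defensive read loop that keeps one damaged 
journal record from aborting recovery, and therefore NiFi startup. The types and the parse 
step are hypothetical stand-ins, not the actual provenance repository classes.

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class JournalRecoverySketch {

    /** Read as many records as possible; skip entries that cannot be deserialized. */
    static List<String> recover(Iterator<String> journal) {
        List<String> recovered = new ArrayList<>();
        while (journal.hasNext()) {
            try {
                recovered.add(parse(journal.next()));
            } catch (RuntimeException e) {
                // A corrupt or truncated record: note it and keep going rather than
                // failing the whole merge.
                System.err.println("Skipping unreadable journal record: " + e.getMessage());
            }
        }
        return recovered;
    }

    /** Hypothetical deserializer that rejects damaged entries. */
    static String parse(String raw) {
        if (raw == null || raw.isEmpty()) {
            throw new IllegalStateException("record is truncated");
        }
        return raw;
    }

    public static void main(String[] args) {
        List<String> journal = Arrays.asList("event-1", "", "event-3"); // "" stands in for a damaged record
        System.out.println(recover(journal.iterator())); // prints [event-1, event-3]
    }
}
{code}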



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3200) Backport for 1.0.1

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-3200:
--
Description: 
Creating a new JIRA to backport bugs necessary for 1.0.1.

https://issues.apache.org/jira/browse/NIFI-2768
https://issues.apache.org/jira/browse/NIFI-2694
https://issues.apache.org/jira/browse/NIFI-2687
https://issues.apache.org/jira/browse/NIFI-2704

  was:
Creating a new JIRA to backport bugs necessary for 1.0.1.

https://issues.apache.org/jira/browse/NIFI-2768
https://issues.apache.org/jira/browse/NIFI-2694
https://issues.apache.org/jira/browse/NIFI-2694


> Backport for 1.0.1
> --
>
> Key: NIFI-3200
> URL: https://issues.apache.org/jira/browse/NIFI-3200
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
> Fix For: 1.0.1
>
>
> Creating a new JIRA to backport bugs necessary for 1.0.1.
> https://issues.apache.org/jira/browse/NIFI-2768
> https://issues.apache.org/jira/browse/NIFI-2694
> https://issues.apache.org/jira/browse/NIFI-2687
> https://issues.apache.org/jira/browse/NIFI-2704



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3200) Backport for 1.0.1

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-3200:
--
Description: 
Creating a new JIRA to backport bugs necessary for 1.0.1.

https://issues.apache.org/jira/browse/NIFI-2768
https://issues.apache.org/jira/browse/NIFI-2694
https://issues.apache.org/jira/browse/NIFI-2694

  was:
Creating a new JIRA to backport bugs necessary for 1.0.1.

https://issues.apache.org/jira/browse/NIFI-2768
https://issues.apache.org/jira/browse/NIFI-2694


> Backport for 1.0.1
> --
>
> Key: NIFI-3200
> URL: https://issues.apache.org/jira/browse/NIFI-3200
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
> Fix For: 1.0.1
>
>
> Creating a new JIRA to backport bugs necessary for 1.0.1.
> https://issues.apache.org/jira/browse/NIFI-2768
> https://issues.apache.org/jira/browse/NIFI-2694
> https://issues.apache.org/jira/browse/NIFI-2694



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3186) NullPointerException in Provenance Repository on startup

2016-12-14 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-3186:
--
Fix Version/s: 1.2.0

> NullPointerException in Provenance Repository on startup 
> -
>
> Key: NIFI-3186
> URL: https://issues.apache.org/jira/browse/NIFI-3186
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Joseph Gresock
>Assignee: Joseph Gresock
> Fix For: 1.2.0, 1.1.1
>
>
> Often when NiFi restarts itself, we will get this error on startup, which 
> prevents the node from starting until we delete the provenance_repository 
> from disk.
> {code}
> Caused by: java.lang.RuntimeException: Unable to create Provenance Repository
> at 
> org.apache.nifi.controller.FlowController.<init>(FlowController.java:459) 
> ~[nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.FlowController.createClusteredInstance(FlowController.java:403)
>  ~[nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.spring.FlowControllerFactoryBean.getObject(FlowControllerFactoryBean.java:61)
>  ~[nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:168)
>  ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
> ... 41 common frames omitted
> Caused by: java.lang.NullPointerException: null
> at 
> org.apache.nifi.provenance.schema.EventRecord.getEvent(EventRecord.java:122) 
> ~[na:na]
> at 
> org.apache.nifi.provenance.ByteArraySchemaRecordReader.nextRecord(ByteArraySchemaRecordReader.java:77)
>  ~[na:na]
> at 
> org.apache.nifi.provenance.serialization.CompressableRecordReader.nextRecord(CompressableRecordReader.java:272)
>  ~[na:na]
> at 
> org.apache.nifi.provenance.PersistentProvenanceRepository.mergeJournals(PersistentProvenanceRepository.java:1826)
>  ~[na:na]
> at 
> org.apache.nifi.provenance.PersistentProvenanceRepository.recoverJournalFiles(PersistentProvenanceRepository.java:1505)
>  ~[na:na]
> at 
> org.apache.nifi.provenance.PersistentProvenanceRepository.recover(PersistentProvenanceRepository.java:665)
>  ~[na:na]
> at 
> org.apache.nifi.provenance.PersistentProvenanceRepository.initialize(PersistentProvenanceRepository.java:268)
>  ~[na:na]
> at 
> org.apache.nifi.controller.FlowController.<init>(FlowController.java:457) 
> ~[nifi-framework-core-1.1.0.jar:1.1.0]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3200) Backport for 1.0.1

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-3200:
--
Description: 
Creating a new JIRA to backport bugs necessary for 1.0.1.

https://issues.apache.org/jira/browse/NIFI-2768
https://issues.apache.org/jira/browse/NIFI-2694

  was:
Creating a new JIRA to backport bugs necessary for 1.0.1.

https://issues.apache.org/jira/browse/NIFI-2768


> Backport for 1.0.1
> --
>
> Key: NIFI-3200
> URL: https://issues.apache.org/jira/browse/NIFI-3200
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
> Fix For: 1.0.1
>
>
> Creating a new JIRA to backport bugs necessary for 1.0.1.
> https://issues.apache.org/jira/browse/NIFI-2768
> https://issues.apache.org/jira/browse/NIFI-2694



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (NIFI-3201) Backport for 1.0.1

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman resolved NIFI-3201.
---
Resolution: Duplicate

> Backport for 1.0.1
> --
>
> Key: NIFI-3201
> URL: https://issues.apache.org/jira/browse/NIFI-3201
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
> Fix For: 1.0.1
>
>
> Creating a new JIRA to backport bugs necessary for 1.0.1.
> https://issues.apache.org/jira/browse/NIFI-2768



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-3201) Backport for 1.0.1

2016-12-14 Thread Matt Gilman (JIRA)
Matt Gilman created NIFI-3201:
-

 Summary: Backport for 1.0.1
 Key: NIFI-3201
 URL: https://issues.apache.org/jira/browse/NIFI-3201
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Matt Gilman
 Fix For: 1.0.1


Creating a new JIRA to backport bugs necessary for 1.0.1.

https://issues.apache.org/jira/browse/NIFI-2768



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-3200) Backport for 1.0.1

2016-12-14 Thread Matt Gilman (JIRA)
Matt Gilman created NIFI-3200:
-

 Summary: Backport for 1.0.1
 Key: NIFI-3200
 URL: https://issues.apache.org/jira/browse/NIFI-3200
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Matt Gilman
 Fix For: 1.0.1


Creating a new JIRA to backport bugs necessary for 1.0.1.

https://issues.apache.org/jira/browse/NIFI-2768



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3199) Able to delete a Process Group containing enabled Controller Services

2016-12-14 Thread Joseph Percivall (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Percivall updated NIFI-3199:
---
Affects Version/s: 1.2.0

> Able to delete a Process Group containing enabled Controller Services
> -
>
> Key: NIFI-3199
> URL: https://issues.apache.org/jira/browse/NIFI-3199
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Joseph Percivall
>
> To recreate:
> 1: Create a Process Group (PG)
> 2: Create and enable a Controller Service (CS) to run within that PG
> 3: Delete the PG
> It will successfully delete the PG without having to first disable the 
> controller service. This should not be possible. This happens in both 
> standalone and clustered, and with or without referencing components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-3199) Able to delete a Process Group containing enabled Controller Services

2016-12-14 Thread Joseph Percivall (JIRA)
Joseph Percivall created NIFI-3199:
--

 Summary: Able to delete a Process Group containing enabled 
Controller Services
 Key: NIFI-3199
 URL: https://issues.apache.org/jira/browse/NIFI-3199
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Joseph Percivall


To recreate:

1: Create a Process Group (PG)
2: Create and enable a Controller Service (CS) to run within that PG
3: Delete the PG

It will successfully delete the PG without having to first disable the 
controller service. This should not be possible. This happens in both 
standalone and clustered, and with or without referencing components.
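
A sketch of the guard this report implies, using hypothetical types rather than the framework's 
actual API: deletion of a group is rejected while any controller service in it, or in a 
descendant group, is still enabled.

{code}
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical, simplified model of a process group and its controller services.
class ServiceState {
    final String name;
    final boolean enabled;
    ServiceState(String name, boolean enabled) { this.name = name; this.enabled = enabled; }
}

class GroupNode {
    final String name;
    final List<ServiceState> services;
    final List<GroupNode> children;
    GroupNode(String name, List<ServiceState> services, List<GroupNode> children) {
        this.name = name; this.services = services; this.children = children;
    }
}

public class DeleteGroupGuardSketch {

    /** Reject deletion while the group, or any descendant, still has an enabled service. */
    static void verifyCanDelete(GroupNode group) {
        for (ServiceState service : group.services) {
            if (service.enabled) {
                throw new IllegalStateException(group.name + " cannot be deleted: "
                        + service.name + " is still enabled");
            }
        }
        for (GroupNode child : group.children) {
            verifyCanDelete(child);
        }
    }

    public static void main(String[] args) {
        GroupNode child = new GroupNode("child",
                Arrays.asList(new ServiceState("HttpContextMap", true)), Collections.<GroupNode>emptyList());
        GroupNode pg = new GroupNode("pg", Collections.<ServiceState>emptyList(), Arrays.asList(child));
        verifyCanDelete(pg); // throws: the enabled service must be disabled first
    }
}
{code}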



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (NIFI-3118) UI - Inconsistent order of garbage collection statistics

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman reassigned NIFI-3118:
-

Assignee: Matt Gilman

> UI - Inconsistent order of garbage collection statistics
> 
>
> Key: NIFI-3118
> URL: https://issues.apache.org/jira/browse/NIFI-3118
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Minor
> Fix For: 1.2.0
>
>
> In the system diagnostics, the ordering of the garbage collection statistics 
> is inconsistent. The ordering appears to shuffle with each refresh.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3154) Connection Config and Connection Details dialogs .label elements text overflow

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-3154:
--
Fix Version/s: 1.0.1
   1.1.1

> Connection Config and Connection Details dialogs .label elements text overflow
> --
>
> Key: NIFI-3154
> URL: https://issues.apache.org/jira/browse/NIFI-3154
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.1.1
>Reporter: Scott Aslan
>Assignee: Scott Aslan
>Priority: Minor
> Fix For: 1.2.0, 1.1.1, 1.0.1
>
> Attachments: connection dialog text label missing ellipsis.png
>
>
> The Processor name label text does not display ellipsis when sufficiently 
> long.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (NIFI-3154) Connection Config and Connection Details dialogs .label elements text overflow

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman reopened NIFI-3154:
---

> Connection Config and Connection Details dialogs .label elements text overflow
> --
>
> Key: NIFI-3154
> URL: https://issues.apache.org/jira/browse/NIFI-3154
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.1.1
>Reporter: Scott Aslan
>Assignee: Scott Aslan
>Priority: Minor
> Fix For: 1.2.0, 1.1.1, 1.0.1
>
> Attachments: connection dialog text label missing ellipsis.png
>
>
> The Processor name label text does not display ellipsis when sufficiently 
> long.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3130) Restricted and referenced service check on component creation

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-3130:
--
Fix Version/s: 1.1.1

> Restricted and referenced service check on component creation
> -
>
> Key: NIFI-3130
> URL: https://issues.apache.org/jira/browse/NIFI-3130
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
> Fix For: 1.2.0, 1.1.1
>
>
> When creating a Processor, Controller Service, or Reporting Task a temporary 
> instance is created to check if the component is restricted or if the 
> proposed configuration is referencing any services. For Controller Services 
> and Reporting Tasks these were creating the wrong type of temporary instance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (NIFI-3135) Check restricted components on template instantiation

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman reopened NIFI-3135:
---

> Check restricted components on template instantiation
> -
>
> Key: NIFI-3135
> URL: https://issues.apache.org/jira/browse/NIFI-3135
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Fix For: 1.2.0, 1.1.1
>
>
> When creating components we create a temporary instance to check if the 
> component is restricted. We should also be performing this check when 
> creating the components through template instantiation.
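
As an illustration of the check described above, a small sketch that inspects a component class 
for a restricted marker before allowing creation. The annotation here is a stand-in defined 
locally, not NiFi's real annotation, and the point is that the same check would also have to run 
on the template instantiation path.

{code}
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Stand-in for the framework's restricted-component marker annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface RestrictedMarker {
}

@RestrictedMarker
class FileReadingProcessorStub {
    // A component that can touch arbitrary files would be marked restricted.
}

public class RestrictedCheckSketch {

    /** The same check should run for direct creation and for template instantiation. */
    static void verifyAuthorizedToCreate(Class<?> componentClass, boolean userHasRestrictedAccess) {
        if (componentClass.isAnnotationPresent(RestrictedMarker.class) && !userHasRestrictedAccess) {
            throw new SecurityException(componentClass.getSimpleName()
                    + " is restricted and the current user lacks restricted-component access");
        }
    }

    public static void main(String[] args) {
        verifyAuthorizedToCreate(FileReadingProcessorStub.class, true);   // allowed
        verifyAuthorizedToCreate(FileReadingProcessorStub.class, false);  // throws SecurityException
    }
}
{code}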



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3135) Check restricted components on template instantiation

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-3135:
--
Fix Version/s: 1.1.1

> Check restricted components on template instantiation
> -
>
> Key: NIFI-3135
> URL: https://issues.apache.org/jira/browse/NIFI-3135
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Matt Gilman
> Fix For: 1.2.0, 1.1.1
>
>
> When creating components we create a temporary instance to check if the 
> component is restricted. We should also be performing this check when 
> creating the components through template instantiation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3117) Snippet usage: Authorize all referenced services

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-3117:
--
Fix Version/s: 1.0.1
   1.1.1

> Snippet usage: Authorize all referenced services
> 
>
> Key: NIFI-3117
> URL: https://issues.apache.org/jira/browse/NIFI-3117
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
> Fix For: 1.2.0, 1.1.1, 1.0.1
>
>
> When creating a template or using copy/paste all selected components are used 
> to build a snippet. The snippet is then authorized for the current user. 
> However, services referenced by those components are not considered part of 
> the snippet and should also be checked.
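
A sketch of the broader authorization set the description calls for, using a hypothetical 
component model: the services referenced by the selected components are collected alongside the 
components themselves so they get checked too.

{code}
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Hypothetical, simplified model: a selected component may reference named controller services.
class SelectedComponent {
    final String name;
    final List<String> referencedServices;
    SelectedComponent(String name, List<String> referencedServices) {
        this.name = name;
        this.referencedServices = referencedServices;
    }
}

public class SnippetAuthorizationSketch {

    /** Everything to authorize: the selected components plus every service they reference. */
    static Set<String> authorizables(List<SelectedComponent> snippet) {
        Set<String> toCheck = new LinkedHashSet<>();
        for (SelectedComponent component : snippet) {
            toCheck.add("component:" + component.name);
            for (String service : component.referencedServices) {
                // Previously the referenced services were not considered part of the snippet.
                toCheck.add("service:" + service);
            }
        }
        return toCheck;
    }

    public static void main(String[] args) {
        List<SelectedComponent> snippet = Arrays.asList(
                new SelectedComponent("HandleHttpRequest", Arrays.asList("HttpContextMap")),
                new SelectedComponent("HandleHttpResponse", Arrays.asList("HttpContextMap")));
        System.out.println(authorizables(snippet));
    }
}
{code}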



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (NIFI-3130) Restricted and referenced service check on component creation

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman reopened NIFI-3130:
---

> Restricted and referenced service check on component creation
> -
>
> Key: NIFI-3130
> URL: https://issues.apache.org/jira/browse/NIFI-3130
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
> Fix For: 1.2.0
>
>
> When creating a Processor, Controller Service, or Reporting Task a temporary 
> instance is created to check if the component is restricted or if the 
> proposed configuration is referencing any services. For Controller Services 
> and Reporting Tasks these were creating the wrong type of temporary instance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (NIFI-3117) Snippet usage: Authorize all referenced services

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman reopened NIFI-3117:
---

> Snippet usage: Authorize all referenced services
> 
>
> Key: NIFI-3117
> URL: https://issues.apache.org/jira/browse/NIFI-3117
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
> Fix For: 1.2.0, 1.1.1, 1.0.1
>
>
> When creating a template or using copy/paste all selected components are used 
> to build a snippet. The snippet is then authorized for the current user. 
> However, services referenced by those components are not considered part of 
> the snippet and should also be checked.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3133) Unable to empty queue connected to a RPG

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-3133:
--
Fix Version/s: 1.0.1
   1.1.1

> Unable to empty queue connected to a RPG
> 
>
> Key: NIFI-3133
> URL: https://issues.apache.org/jira/browse/NIFI-3133
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
> Fix For: 1.2.0, 1.1.1, 1.0.1
>
>
> Unable to empty a queue in a connection where the source or destination is a 
> Remote Process Group. Appears to be using the policy of the port instead of 
> the RPG which is used for other configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (NIFI-3133) Unable to empty queue connected to a RPG

2016-12-14 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman reopened NIFI-3133:
---

> Unable to empty queue connected to a RPG
> 
>
> Key: NIFI-3133
> URL: https://issues.apache.org/jira/browse/NIFI-3133
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Critical
> Fix For: 1.2.0, 1.1.1, 1.0.1
>
>
> Unable to empty a queue in a connection where the source or destination is a 
> Remote Process Group. Appears to be using the policy of the port instead of 
> the RPG which is used for other configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (NIFI-3198) [Backport to 1.0.x] Address issues of PublishKafka blocking when having trouble communicating with Kafka broker and improve performance

2016-12-14 Thread Mark Payne (JIRA)
Mark Payne created NIFI-3198:


 Summary: [Backport to 1.0.x] Address issues of PublishKafka 
blocking when having trouble communicating with Kafka broker and improve 
performance
 Key: NIFI-3198
 URL: https://issues.apache.org/jira/browse/NIFI-3198
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Mark Payne
Assignee: Mark Payne


This was addressed for 1.1.0 in NIFI-2865 but it should be backported to 1.0.x 
as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3197) Unable to use Snappy Compression Codec on PutHDFS

2016-12-14 Thread Josh Meyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Meyer updated NIFI-3197:
-
Attachment: nifi-puthdfs-snappy-issue.xml

> Unable to use Snappy Compression Codec on PutHDFS
> -
>
> Key: NIFI-3197
> URL: https://issues.apache.org/jira/browse/NIFI-3197
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.1.0
> Environment: centos 7.2
> NiFi 1.1.0 or NiFi 1.0.0
> HDP 2.4.3.0
>Reporter: Josh Meyer
>  Labels: HDFS, Processor, PutHDFS, Snappy
> Attachments: native-directory.png, nifi-puthdfs-snappy-issue.xml, 
> processor-config.png
>
>
> When setting the 'Compression codec' to SNAPPY on PutHDFS, NiFi is unable to 
> compress and push data to HDFS. Attached is the sample workflow that is 
> having trouble. Below are the errors for NiFi 1.0.0 and the errors for NiFi 
> 1.1.0. Same configuration, but the error is a bit different.
> I have tried setting both LD_LIBRARY_PATH as an environment variable, and 
> then I tried setting java.library.path in the bootstrap.conf.
> NiFi 1.1.0 nifi-app.log error message:
> {code}
> 2016-12-14 15:13:56,563 ERROR [Timer-Driven Process Thread-6] 
> o.apache.nifi.processors.hadoop.PutHDFS 
> PutHDFS[id=fdd52005-0158-1000-ac0f-2a87ed12a1e7] Failed to write to HDFS due 
> to java.lang.RuntimeException: native snappy library not available: this 
> version of libhadoop was built without snappy support.: 
> java.lang.RuntimeException: native snappy library not available: this version 
> of libhadoop was built without snappy support.
> 2016-12-14 15:13:56,644 ERROR [Timer-Driven Process Thread-6] 
> o.apache.nifi.processors.hadoop.PutHDFS
> java.lang.RuntimeException: native snappy library not available: this version 
> of libhadoop was built without snappy support.
> at 
> org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:65)
>  ~[hadoop-common-2.7.3.jar:na]
> at 
> org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:134)
>  ~[hadoop-common-2.7.3.jar:na]
> at 
> org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150) 
> ~[hadoop-common-2.7.3.jar:na]
> at 
> org.apache.hadoop.io.compress.CompressionCodec$Util.createOutputStreamWithCodecPool(CompressionCodec.java:131)
>  ~[hadoop-common-2.7.3.jar:na]
> at 
> org.apache.hadoop.io.compress.SnappyCodec.createOutputStream(SnappyCodec.java:100)
>  ~[hadoop-common-2.7.3.jar:na]
> at 
> org.apache.nifi.processors.hadoop.PutHDFS$1$1.process(PutHDFS.java:313) 
> ~[nifi-hdfs-processors-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2082)
>  ~[na:na]
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2053)
>  ~[na:na]
> at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:300) 
> ~[nifi-hdfs-processors-1.1.0.jar:1.1.0]
> at java.security.AccessController.doPrivileged(Native Method) 
> [na:1.8.0_111-debug]
> at javax.security.auth.Subject.doAs(Subject.java:360) 
> [na:1.8.0_111-debug]
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
>  [hadoop-common-2.7.3.jar:na]
> at 
> org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:230) 
> [nifi-hdfs-processors-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  [nifi-api-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099)
>  [nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>  [nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>  [nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
>  [nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_111-debug]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_111-debug]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_111-debug]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_111-debug]
> at 
> 

[jira] [Commented] (NIFI-3196) Provide MiNiFi FlowStatus Insights processor that integrates MiNiFi Flow Status Report with Azure Application Insights

2016-12-14 Thread Aldrin Piri (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15748789#comment-15748789
 ] 

Aldrin Piri commented on NIFI-3196:
---

Sounds like a good concept. I'm curious whether it makes sense to deconstruct this a 
bit more, though I don't have great familiarity with Azure App Insights (AAI). 
I could see a series of processors that receive the data, then transform it 
(e.g. a JSON transform), and then a PutAAI or similar. That last piece of functionality 
seems like it could get some nice reuse across a lot of other use cases.
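
For reference, a minimal sketch of the "PutAAI or similar" step: read a small JSON status 
payload and forward its numeric fields to Application Insights as custom metrics. It assumes 
Jackson and the Application Insights Java SDK (TelemetryClient) on the classpath; the field 
names are invented for illustration and are not the actual MiNiFi FlowStatus schema.

{code}
import java.util.Iterator;
import java.util.Map;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.microsoft.applicationinsights.TelemetryClient;

public class FlowStatusToInsightsSketch {

    public static void main(String[] args) throws Exception {
        // Illustrative payload only; not the real FlowStatus report.
        String json = "{\"queuedCount\": 12, \"queuedBytes\": 4096, \"activeThreads\": 3}";

        JsonNode status = new ObjectMapper().readTree(json);
        TelemetryClient client = new TelemetryClient(); // instrumentation key comes from ApplicationInsights.xml

        Iterator<Map.Entry<String, JsonNode>> fields = status.fields();
        while (fields.hasNext()) {
            Map.Entry<String, JsonNode> field = fields.next();
            if (field.getValue().isNumber()) {
                // Each numeric field becomes a custom metric that can drive an alert.
                client.trackMetric("minifi." + field.getKey(), field.getValue().asDouble());
            }
        }
        client.flush(); // push metrics before the JVM exits
    }
}
{code}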


> Provide MiNiFi FlowStatus Insights processor that integrates MiNiFi Flow 
> Status Report with Azure Application Insights
> --
>
> Key: NIFI-3196
> URL: https://issues.apache.org/jira/browse/NIFI-3196
> Project: Apache NiFi
>  Issue Type: New Feature
>Affects Versions: 1.1.0
>Reporter: Andrew Psaltis
>Assignee: Andrew Psaltis
>
> As a user of both NiFi and MiNiFi there are many times I would like to have 
> an operational dashboard for MiNiFi. Although there is a discussion and 
> desire for [Centralized management | 
> https://cwiki.apache.org/confluence/display/MINIFI/MiNiFi+Community+Driven+Requirements]
>  there are many times where as a user I just want to be able to use the 
> operational tools I already have. In many cases those may be products such 
> as: Nagios, Zenoss, Zabbix, Graphite, Grafana, and in cloud environments 
> perhaps Azure Application Insights and AWS Cloudwatch. 
> This JIRA and the related P/R provides a processor that ingests the MiNiFi 
> Flow Status report and sends it to Azure Application Insights as custom 
> metrics. This then allows a user to set alerts in Azure Application Insights, 
> thus providing the ability to monitor MiNiFi agents. As such, this has a 
> dependency on the following MiNiFi JIRA: [Change FlowStatusReport to be JSON 
> | https://issues.apache.org/jira/browse/MINIFI-170]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3197) Unable to use Snappy Compression Codec on PutHDFS

2016-12-14 Thread Josh Meyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Meyer updated NIFI-3197:
-
Attachment: processor-config.png
native-directory.png

> Unable to use Snappy Compression Codec on PutHDFS
> -
>
> Key: NIFI-3197
> URL: https://issues.apache.org/jira/browse/NIFI-3197
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.1.0
> Environment: centos 7.2
> NiFi 1.1.0 or NiFi 1.0.0
> HDP 2.4.3.0
>Reporter: Josh Meyer
>  Labels: HDFS, Processor, PutHDFS, Snappy
> Attachments: native-directory.png, processor-config.png
>
>
> When setting the 'Compression codec' to SNAPPY on PutHDFS, NiFi is unable to 
> compress and push data to HDFS. Attached is the sample workflow that is 
> having trouble. Below are the errors for NiFi 1.0.0 and the errors for NiFi 
> 1.1.0. Same configuration, but the error is a bit different.
> I have tried setting both LD_LIBRARY_PATH as an environment variable, and 
> then I tried setting java.library.path in the bootstrap.conf.
> NiFi 1.1.0 nifi-app.log error message:
> {code}
> 2016-12-14 15:13:56,563 ERROR [Timer-Driven Process Thread-6] 
> o.apache.nifi.processors.hadoop.PutHDFS 
> PutHDFS[id=fdd52005-0158-1000-ac0f-2a87ed12a1e7] Failed to write to HDFS due 
> to java.lang.RuntimeException: native snappy library not available: this 
> version of libhadoop was built without snappy support.: 
> java.lang.RuntimeException: native snappy library not available: this version 
> of libhadoop was built without snappy support.
> 2016-12-14 15:13:56,644 ERROR [Timer-Driven Process Thread-6] 
> o.apache.nifi.processors.hadoop.PutHDFS
> java.lang.RuntimeException: native snappy library not available: this version 
> of libhadoop was built without snappy support.
> at 
> org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:65)
>  ~[hadoop-common-2.7.3.jar:na]
> at 
> org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:134)
>  ~[hadoop-common-2.7.3.jar:na]
> at 
> org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150) 
> ~[hadoop-common-2.7.3.jar:na]
> at 
> org.apache.hadoop.io.compress.CompressionCodec$Util.createOutputStreamWithCodecPool(CompressionCodec.java:131)
>  ~[hadoop-common-2.7.3.jar:na]
> at 
> org.apache.hadoop.io.compress.SnappyCodec.createOutputStream(SnappyCodec.java:100)
>  ~[hadoop-common-2.7.3.jar:na]
> at 
> org.apache.nifi.processors.hadoop.PutHDFS$1$1.process(PutHDFS.java:313) 
> ~[nifi-hdfs-processors-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2082)
>  ~[na:na]
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2053)
>  ~[na:na]
> at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:300) 
> ~[nifi-hdfs-processors-1.1.0.jar:1.1.0]
> at java.security.AccessController.doPrivileged(Native Method) 
> [na:1.8.0_111-debug]
> at javax.security.auth.Subject.doAs(Subject.java:360) 
> [na:1.8.0_111-debug]
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
>  [hadoop-common-2.7.3.jar:na]
> at 
> org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:230) 
> [nifi-hdfs-processors-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  [nifi-api-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099)
>  [nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>  [nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>  [nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
>  [nifi-framework-core-1.1.0.jar:1.1.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_111-debug]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_111-debug]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_111-debug]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_111-debug]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  

[jira] [Created] (NIFI-3197) Unable to use Snappy Compression Codec on PutHDFS

2016-12-14 Thread Josh Meyer (JIRA)
Josh Meyer created NIFI-3197:


 Summary: Unable to use Snappy Compression Codec on PutHDFS
 Key: NIFI-3197
 URL: https://issues.apache.org/jira/browse/NIFI-3197
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.1.0, 1.0.0
 Environment: centos 7.2
NiFi 1.1.0 or NiFi 1.0.0
HDP 2.4.3.0
Reporter: Josh Meyer


When setting the 'Compression codec' to SNAPPY on PutHDFS, NiFi is unable to 
compress and push data to HDFS. Attached is the sample workflow that is having 
trouble. Below are the errors for NiFi 1.0.0 and the errors for NiFi 1.1.0. 
Same configuration, but the error is a bit different.

I have tried setting both LD_LIBRARY_PATH as an environment variable, and then 
I tried setting java.library.path in the bootstrap.conf.

NiFi 1.1.0 nifi-app.log error message:
{code}
2016-12-14 15:13:56,563 ERROR [Timer-Driven Process Thread-6] 
o.apache.nifi.processors.hadoop.PutHDFS 
PutHDFS[id=fdd52005-0158-1000-ac0f-2a87ed12a1e7] Failed to write to HDFS due to 
java.lang.RuntimeException: native snappy library not available: this version 
of libhadoop was built without snappy support.: java.lang.RuntimeException: 
native snappy library not available: this version of libhadoop was built 
without snappy support.
2016-12-14 15:13:56,644 ERROR [Timer-Driven Process Thread-6] 
o.apache.nifi.processors.hadoop.PutHDFS
java.lang.RuntimeException: native snappy library not available: this version 
of libhadoop was built without snappy support.
at 
org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:65)
 ~[hadoop-common-2.7.3.jar:na]
at 
org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:134)
 ~[hadoop-common-2.7.3.jar:na]
at 
org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150) 
~[hadoop-common-2.7.3.jar:na]
at 
org.apache.hadoop.io.compress.CompressionCodec$Util.createOutputStreamWithCodecPool(CompressionCodec.java:131)
 ~[hadoop-common-2.7.3.jar:na]
at 
org.apache.hadoop.io.compress.SnappyCodec.createOutputStream(SnappyCodec.java:100)
 ~[hadoop-common-2.7.3.jar:na]
at 
org.apache.nifi.processors.hadoop.PutHDFS$1$1.process(PutHDFS.java:313) 
~[nifi-hdfs-processors-1.1.0.jar:1.1.0]
at 
org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2082)
 ~[na:na]
at 
org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2053)
 ~[na:na]
at org.apache.nifi.processors.hadoop.PutHDFS$1.run(PutHDFS.java:300) 
~[nifi-hdfs-processors-1.1.0.jar:1.1.0]
at java.security.AccessController.doPrivileged(Native Method) 
[na:1.8.0_111-debug]
at javax.security.auth.Subject.doAs(Subject.java:360) 
[na:1.8.0_111-debug]
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
 [hadoop-common-2.7.3.jar:na]
at 
org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:230) 
[nifi-hdfs-processors-1.1.0.jar:1.1.0]
at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
 [nifi-api-1.1.0.jar:1.1.0]
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099)
 [nifi-framework-core-1.1.0.jar:1.1.0]
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
 [nifi-framework-core-1.1.0.jar:1.1.0]
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
 [nifi-framework-core-1.1.0.jar:1.1.0]
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
 [nifi-framework-core-1.1.0.jar:1.1.0]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_111-debug]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
[na:1.8.0_111-debug]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 [na:1.8.0_111-debug]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 [na:1.8.0_111-debug]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_111-debug]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_111-debug]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111-debug]
{code}


NiFi 1.0.0 nifi-bootstrap.log
{code}
2016-12-14 15:32:50,373 INFO [NiFi Bootstrap Command Listener] 
org.apache.nifi.bootstrap.RunNiFi Apache NiFi now running and listening for 
Bootstrap requests on port 40280
2016-12-14 15:43:47,752 INFO [NiFi logging handler] org.apache.nifi.StdOut 
FATAL ERROR in native method: 

[jira] [Created] (NIFI-3196) Provide MiNiFi FlowStatus Insights processor that integrates MiNiFi Flow Status Report with Azure Application Insights

2016-12-14 Thread Andrew Psaltis (JIRA)
Andrew Psaltis created NIFI-3196:


 Summary: Provide MiNiFi FlowStatus Insights processor that 
integrates MiNiFi Flow Status Report with Azure Application Insights
 Key: NIFI-3196
 URL: https://issues.apache.org/jira/browse/NIFI-3196
 Project: Apache NiFi
  Issue Type: New Feature
Affects Versions: 1.1.0
Reporter: Andrew Psaltis
Assignee: Andrew Psaltis


As a user of both NiFi and MiNiFi, there are many times I would like to have an 
operational dashboard for MiNiFi. Although there is a discussion and desire for 
[Centralized management | 
https://cwiki.apache.org/confluence/display/MINIFI/MiNiFi+Community+Driven+Requirements], 
there are many times where, as a user, I just want to be able to use the 
operational tools I already have. In many cases those may be products such as 
Nagios, Zenoss, Zabbix, Graphite, and Grafana, and in cloud environments perhaps 
Azure Application Insights and AWS CloudWatch. 

This JIRA and the related PR provide a processor that ingests the MiNiFi Flow 
Status report and sends it to Azure Application Insights as custom metrics. 
This then allows a user to set alerts in Azure Application Insights, thus 
providing the ability to monitor MiNiFi agents. As such, this has a dependency 
on the following MiNiFi JIRA: [Change FlowStatusReport to be JSON | 
https://issues.apache.org/jira/browse/MINIFI-170]
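As a rough illustration of the kind of call such a processor would make (not 
the actual implementation; the JSON field names, metric names, and 
instrumentation-key handling below are assumptions), the Application Insights 
Java SDK exposes a TelemetryClient for publishing custom metrics:
{code}
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.microsoft.applicationinsights.TelemetryClient;
import com.microsoft.applicationinsights.TelemetryConfiguration;

public class FlowStatusToInsightsSketch {
    public static void main(String[] args) throws Exception {
        // In a real processor the instrumentation key would come from a property
        TelemetryConfiguration.getActive()
                .setInstrumentationKey("00000000-0000-0000-0000-000000000000");
        TelemetryClient client = new TelemetryClient();

        // A fragment of a MiNiFi FlowStatus report serialized as JSON (field names assumed)
        String json = "{\"connectionStatusList\":[{\"name\":\"conn1\",\"queuedCount\":42}]}";
        JsonNode report = new ObjectMapper().readTree(json);

        // Publish one custom metric per connection's queued count
        for (JsonNode conn : report.path("connectionStatusList")) {
            client.trackMetric("minifi.connection." + conn.path("name").asText() + ".queuedCount",
                    conn.path("queuedCount").asDouble());
        }
        client.flush();  // push pending telemetry before exiting
    }
}
{code}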



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-949) Allow configuration of cleanup resources

2016-12-14 Thread Brandon DeVries (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15748701#comment-15748701
 ] 

Brandon DeVries commented on NIFI-949:
--

That seems at least possible.  We have a 0.7.1 cluster running right now that 
had ~40 GB in the flow across the cluster, but had consumed 100% of the 2 TB 
content repo.  Restarting one of the boxes and tailing the logs shows all of 
the expected content repo cleanup, and (although it is still running) the 
content repo usage is dropping significantly.  

NIFI-2920 is marked as affecting 0.7.0, 0.6.1, and 0.7.1.  Do you know if this 
issue was really limited to those versions?  Regardless, it sounds like we're 
going to need a 0.8.0 release to pick up that fix...

> Allow configuration of cleanup resources
> 
>
> Key: NIFI-949
> URL: https://issues.apache.org/jira/browse/NIFI-949
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Brandon DeVries
>Priority: Minor
>
> Allow the allocation of resource to the cleanup of the content and provenance 
> repositories to be configurable.   There are times at which the resources 
> spent performing cleanup activities can degrade the operation of the flow 
> sufficiently to cause problems.  Sometimes this requires removing the 
> offending node from the cluster to allow it to clean up without taking on new 
> data (and thus backing up).  Making the cleanup resources configurable could 
> prevent such a situation by allowing them to be reduced under times of heavy 
> load to keep the node useable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-3029) QueryDatabaseTable supports max fragments property

2016-12-14 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-3029:
---
Resolution: Fixed
  Assignee: Matt Burgess
Status: Resolved  (was: Patch Available)

> QueryDatabaseTable supports max fragments property
> --
>
> Key: NIFI-3029
> URL: https://issues.apache.org/jira/browse/NIFI-3029
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.1.0
>Reporter: Byunghwa Yun
>Assignee: Matt Burgess
>Priority: Minor
> Fix For: 1.2.0
>
>
> When QueryDatabaseTable ingests a huge table with ten billion rows for the 
> first time, NiFi throws an OutOfMemoryError, because QueryDatabaseTable 
> creates too many fragments in memory even when the MaxRowsPerFlowFile 
> property is set.
> So I suggest that QueryDatabaseTable support a max fragments property.
> Thank you.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3029) QueryDatabaseTable supports max fragments property

2016-12-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15748681#comment-15748681
 ] 

ASF GitHub Bot commented on NIFI-3029:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/1213
  
+1 LGTM, ran unit tests and tried a NiFi flow with various settings for Max 
Rows Per Flow File and Max Number of Fragments. Thank you for the contribution! 
Merging to master
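A property such as "Maximum Number of Fragments" would typically be declared 
through a PropertyDescriptor; a minimal sketch follows, in which the 
programmatic name, default value, and validator are assumptions rather than 
necessarily what the merged patch uses:
{code}
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

public class MaxFragmentsPropertySketch {
    // Optional cap on the number of FlowFiles produced per trigger; 0 meaning "no limit" is assumed
    public static final PropertyDescriptor MAX_FRAGMENTS = new PropertyDescriptor.Builder()
            .name("qdbt-max-frags")                       // stable programmatic name
            .displayName("Maximum Number of Fragments")   // name shown in the UI
            .description("Limits the number of fragments (FlowFiles) generated in a single execution.")
            .required(false)
            .defaultValue("0")
            .addValidator(StandardValidators.NON_NEGATIVE_INTEGER_VALIDATOR)
            .build();
}
{code}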


> QueryDatabaseTable supports max fragments property
> --
>
> Key: NIFI-3029
> URL: https://issues.apache.org/jira/browse/NIFI-3029
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.1.0
>Reporter: Byunghwa Yun
>Priority: Minor
> Fix For: 1.2.0
>
>
> When QueryDatabaseTable ingests a huge table with ten billion rows for the 
> first time, NiFi throws an OutOfMemoryError, because QueryDatabaseTable 
> creates too many fragments in memory even when the MaxRowsPerFlowFile 
> property is set.
> So I suggest that QueryDatabaseTable support a max fragments property.
> Thank you.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3029) QueryDatabaseTable supports max fragments property

2016-12-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15748685#comment-15748685
 ] 

ASF GitHub Bot commented on NIFI-3029:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1213


> QueryDatabaseTable supports max fragments property
> --
>
> Key: NIFI-3029
> URL: https://issues.apache.org/jira/browse/NIFI-3029
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.1.0
>Reporter: Byunghwa Yun
>Priority: Minor
> Fix For: 1.2.0
>
>
> When QueryDatabaseTable ingests a huge table with ten billion rows for the 
> first time, NiFi throws an OutOfMemoryError, because QueryDatabaseTable 
> creates too many fragments in memory even when the MaxRowsPerFlowFile 
> property is set.
> So I suggest that QueryDatabaseTable support a max fragments property.
> Thank you.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #1213: NIFI-3029: QueryDatabaseTable supports max fragment...

2016-12-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1213


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi issue #1213: NIFI-3029: QueryDatabaseTable supports max fragments prope...

2016-12-14 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/1213
  
+1 LGTM, ran unit tests and tried a NiFi flow with various settings for Max 
Rows Per Flow File and Max Number of Fragments. Thank you for the contribution! 
Merging to master


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (NIFI-190) Wait/Notify processors

2016-12-14 Thread Joseph Gresock (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Gresock updated NIFI-190:

Description: 
Our team has developed a set of processors for the following use case:
* Format A needs to be sent to Endpoint A
* Format B needs to be sent to Endpoint B, but should not proceed until A has 
reached Endpoint A.  We most commonly have this restriction when Endpoint B 
requires some output of Endpoint A.

The proposed Wait/Notify processors enable this functionality:
* Wait: routes files to the 'wait' relationship until a matching Release Signal 
Identifier is found in the distributed map cache.  Then routes them to 
'success' (unless they have expired)
* Notify: stores a Release Signal Identifier in the distributed map cache, 
optionally with attributes to copy to the outgoing matching Wait flow files.

An example:
Wait is configured with Release Signal Attribute = "$\{myId}". Its 'wait' 
relationship routes back onto itself.

flowFile 1 \{ myId : "123" }

comes into Wait processor
Wait checks the distributed cache map for "123", doesn't find it, and is 
routed to the 'wait' relationship

Notify is configured with Release Signal Attribute = "$\{myId}"

flowFile 2 \{ myId : "123" }

comes in to Notify processor
Notify puts an entry in the map for "123" with any other attributes from 
flowFile2

Next time flowFile 1 is processed by Wait...

Finds an entry for "123"
Removes that entry from the map
Copies attributes to flowFile 1
Sends flowFile 1 out the success relationship


Notify will optionally cache attributes in the distributed map, as determined 
by a regex property.  This is what allows the output of Endpoint A to pass to 
Endpoint B, above.  Wait also allows conflicting attributes from the cache to 
either be replaced or kept, depending on property configuration.
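A minimal sketch of the cache interaction described above, using the standard 
DistributedMapCacheClient interface (the serializer choices and the value 
written by Notify are assumptions for illustration, not the actual patch):
{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.nifi.distributed.cache.client.Deserializer;
import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
import org.apache.nifi.distributed.cache.client.Serializer;

public class WaitNotifySketch {

    private static final Serializer<String> STRING_SERIALIZER =
            (value, out) -> out.write(value.getBytes(StandardCharsets.UTF_8));
    private static final Deserializer<String> STRING_DESERIALIZER =
            bytes -> bytes == null ? null : new String(bytes, StandardCharsets.UTF_8);

    /** Notify side: store a marker value under the release signal identifier. */
    static void notifySignal(DistributedMapCacheClient cache, String signalId) throws IOException {
        cache.put(signalId, "released", STRING_SERIALIZER, STRING_SERIALIZER);
    }

    /** Wait side: release only if the signal identifier is present, then clean it up. */
    static boolean tryRelease(DistributedMapCacheClient cache, String signalId) throws IOException {
        final String signal = cache.get(signalId, STRING_SERIALIZER, STRING_DESERIALIZER);
        if (signal == null) {
            return false;          // no Notify yet -> route to 'wait'
        }
        cache.remove(signalId, STRING_SERIALIZER);
        return true;               // Notify seen -> route to 'success'
    }
}
{code}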


  was:
Our team has developed a set of processors for the following use case:
* Format A needs to be sent to Endpoint A
* Format B needs to be sent to Endpoint B, but should not proceed until A has 
reached Endpoint A.  We most commonly have this restriction when Endpoint B 
requires some output of Endpoint A.

The proposed Wait/Notify processors enable this functionality:
* Wait: routes files to the 'wait' relationship until a matching Release Signal 
Identifier is found in the distributed map cache.  Then routes them to 
'success' (unless they have expired)
* Notify: stores a Release Signal Identifier in the distributed map cache, 
optionally with attributes to copy to the outgoing matching Wait flow files.

An example:
Wait is configured with Release Signal Attribute = "${myId}". Its 'wait' 
relationship routes back onto itself.

flowFile 1 { myId : "123" }

comes into Wait processor
Wait checks the distributed cache map for "123", doesn't find it, and is 
routed to the 'wait' relationship

Notify is configured with Release Signal Attribute = "${myId}"

flowFile 2 { myId : "123" }

comes in to Notify processor
Notify puts an entry in the map for "123" with any other attributes from 
flowFile2

Next time flowFile 1 is processed by Wait...

Finds an entry for "123"
Removes that entry from the map
Copies attributes to flowFile 1
Sends flowFile 1 out the success relationship


Notify will optionally cache attributes in the distributed map, as determined 
by a regex property.  This is what allows the output of Endpoint A to pass to 
Endpoint B, above.  Wait also allows conflicting attributes from the cache to 
either be replaced or kept, depending on property configuration.



> Wait/Notify processors
> --
>
> Key: NIFI-190
> URL: https://issues.apache.org/jira/browse/NIFI-190
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Joseph Gresock
>Assignee: Joseph Gresock
>Priority: Minor
> Fix For: 1.2.0
>
> Attachments: Wait_Notify_template.xml
>
>
> Our team has developed a set of processors for the following use case:
> * Format A needs to be sent to Endpoint A
> * Format B needs to be sent to Endpoint B, but should not proceed until A has 
> reached Endpoint A.  We most commonly have this restriction when Endpoint B 
> requires some output of Endpoint A.
> The proposed Wait/Notify processors enable this functionality:
> * Wait: routes files to the 'wait' relationship until a matching Release 
> Signal Identifier is found in the distributed map cache.  Then routes them to 
> 'success' (unless they have expired)
> * Notify: stores a Release Signal Identifier in the distributed map cache, 
> optionally with attributes to copy to the outgoing matching Wait flow files.
> An example:
> Wait is configured with Release Signal Attribute = "$\{myId}". Its 'wait' 
> relationship routes back onto 

[jira] [Updated] (NIFI-190) Wait/Notify processors

2016-12-14 Thread Joseph Gresock (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Gresock updated NIFI-190:

Description: 
Our team has developed a set of processors for the following use case:
* Format A needs to be sent to Endpoint A
* Format B needs to be sent to Endpoint B, but should not proceed until A has 
reached Endpoint A.  We most commonly have this restriction when Endpoint B 
requires some output of Endpoint A.

The proposed Wait/Notify processors enable this functionality:
* Wait: routes files to the 'wait' relationship until a matching Release Signal 
Identifier is found in the distributed map cache.  Then routes them to 
'success' (unless they have expired)
* Notify: stores a Release Signal Identifier in the distributed map cache, 
optionally with attributes to copy to the outgoing matching Wait flow files.

An example:
Wait is configured with Release Signal Attribute = "${myId}". Its 'wait' 
relationship routes back onto itself.

flowFile 1 { myId : "123" }

comes into Wait processor
Wait checks the distributed cache map for "123", doesn't find it, and is 
routed to the 'wait' relationship

Notify is configured with Release Signal Attribute = "${myId}"

flowFile 2 { myId : "123" }

comes in to Notify processor
Notify puts an entry in the map for "123" with any other attributes from 
flowFile2

Next time flowFile 1 is processed by Wait...

Finds an entry for "123"
Removes that entry from the map
Copies attributes to flowFile 1
Sends flowFile 1 out the success relationship


Notify will optionally cache attributes in the distributed map, as determined 
by a regex property.  This is what allows the output of Endpoint A to pass to 
Endpoint B, above.  Wait also allows conflicting attributes from the cache to 
either be replaced or kept, depending on property configuration.


  was:
Our team has developed a processor for the following use case:
* Format A needs to be sent to Endpoint A
* Format B needs to be sent to Endpoint B, but should not proceed until A has 
reached Endpoint A.  We most commonly have this restriction when Endpoint B 
requires some output of Endpoint A.

The proposed Wait/Notify processors enable this functionality:
* Wait: routes files to the 'wait' relationship until a matching Release Signal 
Identifier is found in the distributed map cache.  Then routes them to 
'success' (unless they have expired)
* Notify: stores a Release Signal Identifier in the distributed map cache, 
optionally with attributes to copy to the outgoing matching Wait flow files.

An example:
Wait is configured with Release Signal Attribute = "${myId}". Its 'wait' 
relationship routes back onto itself.

flowFile 1 { myId : "123" }

comes into Wait processor
Wait checks the distributed cache map for "123", doesn't find it, and is 
routed to the 'wait' relationship

Notify is configured with Release Signal Attribute = "${myId}"

flowFile 2 { myId : "123" }

comes in to Notify processor
Notify puts an entry in the map for "123" with any other attributes from 
flowFile2

Next time flowFile 1 is processed by Wait...

Finds an entry for "123"
Removes that entry from the map
Copies attributes to flowFile 1
Sends flowFile 1 out the success relationship


Signal flow files will also copy their attributes to matching held files, as 
optionally configured by an attribute name regex property.  This is what allows 
the output of Endpoint A to pass to Endpoint B, above.  Wait also allows 
conflicting attributes to either be replaced or kept, depending on property 
configuration.



> Wait/Notify processors
> --
>
> Key: NIFI-190
> URL: https://issues.apache.org/jira/browse/NIFI-190
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Joseph Gresock
>Assignee: Joseph Gresock
>Priority: Minor
> Fix For: 1.2.0
>
> Attachments: Wait_Notify_template.xml
>
>
> Our team has developed a set of processors for the following use case:
> * Format A needs to be sent to Endpoint A
> * Format B needs to be sent to Endpoint B, but should not proceed until A has 
> reached Endpoint A.  We most commonly have this restriction when Endpoint B 
> requires some output of Endpoint A.
> The proposed Wait/Notify processors enable this functionality:
> * Wait: routes files to the 'wait' relationship until a matching Release 
> Signal Identifier is found in the distributed map cache.  Then routes them to 
> 'success' (unless they have expired)
> * Notify: stores a Release Signal Identifier in the distributed map cache, 
> optionally with attributes to copy to the outgoing matching Wait flow files.
> An example:
> Wait is configured with Release Signal Attribute = "${myId}". Its 'wait' 
> relationship routes 

[jira] [Updated] (NIFI-190) Wait/Notify processors

2016-12-14 Thread Joseph Gresock (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Gresock updated NIFI-190:

Description: 
Our team has developed a processor for the following use case:
* Format A needs to be sent to Endpoint A
* Format B needs to be sent to Endpoint B, but should not proceed until A has 
reached Endpoint A.  We most commonly have this restriction when Endpoint B 
requires some output of Endpoint A.

The proposed Wait/Notify processors enable this functionality:
* Wait: routes files to the 'wait' relationship until a matching Release Signal 
Identifier is found in the distributed map cache.  Then routes them to 
'success' (unless they have expired)
* Notify: stores a Release Signal Identifier in the distributed map cache, 
optionally with attributes to copy to the outgoing matching Wait flow files.

An example:
Wait is configured with Release Signal Attribute = "${myId}". Its 'wait' 
relationship routes back onto itself.

flowFile 1 { myId : "123" }

comes into Wait processor
Wait checks the distributed cache map for "123", doesn't find it, and is 
routed to the 'wait' relationship

Notify is configured with Release Signal Attribute = "${myId}"

flowFile 2 { myId : "123" }

comes in to Notify processor
Notify puts an entry in the map for "123" with any other attributes from 
flowFile2

Next time flowFile 1 is processed by Wait...

Finds an entry for "123"
Removes that entry from the map
Copies attributes to flowFile 1
Sends flowFile 1 out the success relationship


Signal flow files will also copy their attributes to matching held files, as 
optionally configured by an attribute name regex property.  This is what allows 
the output of Endpoint A to pass to Endpoint B, above.  Wait also allows 
conflicting attributes to either be replaced or kept, depending on property 
configuration.


  was:
Our team has developed a processor for the following use case:
* Format A needs to be sent to Endpoint A
* Format B needs to be sent to Endpoint B, but should not proceed until A has 
reached Endpoint A.  We most commonly have this restriction when Endpoint B 
requires some output of Endpoint A.

The proposed HoldFile processor takes 2 types of flow files as input:
* Files to be held
* Signal files that can release corresponding held files, based on the value of 
a configurable "release" attribute

Signal files are distinguished from held files by the presence of the 
"flow.file.release.value" attribute.  The processor is configured with a 
"Release Signal Attribute".  Held files with this attribute whose value matches 
a received signal value will be released.

An example:
HoldFile is configured with Release Signal Attribute = "myId".  Its 'Hold' 
relationship routes back onto itself.
1. flowFile 1 { myId : "123" } enters HoldFile.  It is routed to the 'Hold' 
relationship
2. flowFile 2 { flow.file.release.value : "123" } enters HoldFile.  flowfile 1 
is then routed to 'Release', and flow file 2 is removed from the session.

Signal flow files will also copy their attributes to matching held files, 
unless otherwise indicated.  This is what allows the output of Endpoint A to 
pass to Endpoint B, above.



> Wait/Notify processors
> --
>
> Key: NIFI-190
> URL: https://issues.apache.org/jira/browse/NIFI-190
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Joseph Gresock
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.2.0
>
> Attachments: Wait_Notify_template.xml
>
>
> Our team has developed a processor for the following use case:
> * Format A needs to be sent to Endpoint A
> * Format B needs to be sent to Endpoint B, but should not proceed until A has 
> reached Endpoint A.  We most commonly have this restriction when Endpoint B 
> requires some output of Endpoint A.
> The proposed Wait/Notify processors enable this functionality:
> * Wait: routes files to the 'wait' relationship until a matching Release 
> Signal Identifier is found in the distributed map cache.  Then routes them to 
> 'success' (unless they have expired)
> * Notify: stores a Release Signal Identifier in the distributed map cache, 
> optionally with attributes to copy to the outgoing matching Wait flow files.
> An example:
> Wait is configured with Release Signal Attribute = "${myId}". Its 'wait' 
> relationship routes back onto itself.
> flowFile 1 { myId : "123" }
> comes into Wait processor
> Wait checks the distributed cache map for "123", doesn't find it, and is 
> routed to the 'wait' relationship
> Notify is configured with Release Signal Attribute = "${myId}"
> flowFile 2 { myId : "123" }
> comes in to Notify processor
> Notify puts an entry in the map for "123" with any other attributes from 
> 

[jira] [Updated] (NIFI-3029) QueryDatabaseTable supports max fragments property

2016-12-14 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-3029:
---
Status: Patch Available  (was: Open)

> QueryDatabaseTable supports max fragments property
> --
>
> Key: NIFI-3029
> URL: https://issues.apache.org/jira/browse/NIFI-3029
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.1.0
>Reporter: Byunghwa Yun
>Priority: Minor
>
> When QueryDatabaseTable ingests a huge table with ten billion rows for the 
> first time, NiFi throws an OutOfMemoryError, because QueryDatabaseTable 
> creates too many fragments in memory even when the MaxRowsPerFlowFile 
> property is set.
> So I suggest that QueryDatabaseTable support a max fragments property.
> Thank you.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (NIFI-190) Wait/Notify processors

2016-12-14 Thread Joseph Gresock (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Gresock reassigned NIFI-190:
---

Assignee: Joseph Gresock  (was: Bryan Bende)

> Wait/Notify processors
> --
>
> Key: NIFI-190
> URL: https://issues.apache.org/jira/browse/NIFI-190
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Joseph Gresock
>Assignee: Joseph Gresock
>Priority: Minor
> Fix For: 1.2.0
>
> Attachments: Wait_Notify_template.xml
>
>
> Our team has developed a processor for the following use case:
> * Format A needs to be sent to Endpoint A
> * Format B needs to be sent to Endpoint B, but should not proceed until A has 
> reached Endpoint A.  We most commonly have this restriction when Endpoint B 
> requires some output of Endpoint A.
> The proposed Wait/Notify processors enable this functionality:
> * Wait: routes files to the 'wait' relationship until a matching Release 
> Signal Identifier is found in the distributed map cache.  Then routes them to 
> 'success' (unless they have expired)
> * Notify: stores a Release Signal Identifier in the distributed map cache, 
> optionally with attributes to copy to the outgoing matching Wait flow files.
> An example:
> Wait is configured with Release Signal Attribute = "${myId}". Its 'wait' 
> relationship routes back onto itself.
> flowFile 1 { myId : "123" }
> comes into Wait processor
> Wait checks the distributed cache map for "123", doesn't find it, and is 
> routed to the 'wait' relationship
> Notify is configured with Release Signal Attribute = "${myId}"
> flowFile 2 { myId : "123" }
> comes in to Notify processor
> Notify puts an entry in the map for "123" with any other attributes from 
> flowFile2
> Next time flowFile 1 is processed by Wait...
> Finds an entry for "123"
> Removes that entry from the map
> Copies attributes to flowFile 1
> Sends flowFile 1 out the success relationship
> Signal flow files will also copy their attributes to matching held files, as 
> optionally configured by an attribute name regex property.  This is what 
> allows the output of Endpoint A to pass to Endpoint B, above.  Wait also 
> allows conflicting attributes to either be replaced or kept, depending on 
> property configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-190) Wait/Notify processors

2016-12-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15748648#comment-15748648
 ] 

ASF GitHub Bot commented on NIFI-190:
-

GitHub user gresockj opened a pull request:

https://github.com/apache/nifi/pull/1329

NIFI-190: Initial commit of Wait and Notify processors

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [x] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gresockj/nifi NIFI-190

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1329.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1329


commit 12d686952e220f74862fc031b3495b6acc8eb0ff
Author: Joe Gresock 
Date:   2016-12-14T15:32:16Z

NIFI-190: Initial commit of Wait and Notify processors




> Wait/Notify processors
> --
>
> Key: NIFI-190
> URL: https://issues.apache.org/jira/browse/NIFI-190
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Joseph Gresock
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.2.0
>
> Attachments: Wait_Notify_template.xml
>
>
> Our team has developed a processor for the following use case:
> * Format A needs to be sent to Endpoint A
> * Format B needs to be sent to Endpoint B, but should not proceed until A has 
> reached Endpoint A.  We most commonly have this restriction when Endpoint B 
> requires some output of Endpoint A.
> The proposed Wait/Notify processors enable this functionality:
> * Wait: routes files to the 'wait' relationship until a matching Release 
> Signal Identifier is found in the distributed map cache.  Then routes them to 
> 'success' (unless they have expired)
> * Notify: stores a Release Signal Identifier in the distributed map cache, 
> optionally with attributes to copy to the outgoing matching Wait flow files.
> An example:
> Wait is configured with Release Signal Attribute = "${myId}". Its 'wait' 
> relationship routes back onto itself.
> flowFile 1 { myId : "123" }
> comes into Wait processor
> Wait checks the distributed cache map for "123", doesn't find it, and is 
> routed to the 'wait' relationship
> Notify is configured with Release Signal Attribute = "${myId}"
> flowFile 2 { myId : "123" }
> comes in to Notify processor
> Notify puts an entry in the map for "123" with any other attributes from 
> flowFile2
> Next time flowFile 1 is processed by Wait...
> Finds an entry for "123"
> Removes that entry from the map
> Copies attributes to flowFile 1
> Sends flowFile 1 out the success relationship
> Signal flow files will also copy their attributes to matching held files, as 
> optionally configured by an attribute name regex property.  This is what 
> allows the output of Endpoint A to pass to Endpoint B, above.  Wait also 
> allows conflicting attributes to either be replaced or kept, depending on 
> 

[GitHub] nifi pull request #1329: NIFI-190: Initial commit of Wait and Notify process...

2016-12-14 Thread gresockj
GitHub user gresockj opened a pull request:

https://github.com/apache/nifi/pull/1329

NIFI-190: Initial commit of Wait and Notify processors

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [x] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gresockj/nifi NIFI-190

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1329.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1329


commit 12d686952e220f74862fc031b3495b6acc8eb0ff
Author: Joe Gresock 
Date:   2016-12-14T15:32:16Z

NIFI-190: Initial commit of Wait and Notify processors




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (NIFI-190) Wait/Notify processors

2016-12-14 Thread Joseph Gresock (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Gresock updated NIFI-190:

Summary: Wait/Notify processors  (was: HoldFile processor)

> Wait/Notify processors
> --
>
> Key: NIFI-190
> URL: https://issues.apache.org/jira/browse/NIFI-190
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Joseph Gresock
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.2.0
>
> Attachments: Wait_Notify_template.xml
>
>
> Our team has developed a processor for the following use case:
> * Format A needs to be sent to Endpoint A
> * Format B needs to be sent to Endpoint B, but should not proceed until A has 
> reached Endpoint A.  We most commonly have this restriction when Endpoint B 
> requires some output of Endpoint A.
> The proposed HoldFile processor takes 2 types of flow files as input:
> * Files to be held
> * Signal files that can release corresponding held files, based on the value 
> of a configurable "release" attribute
> Signal files are distinguished from held files by the presence of the 
> "flow.file.release.value" attribute.  The processor is configured with a 
> "Release Signal Attribute".  Held files with this attribute whose value 
> matches a received signal value will be released.
> An example:
> HoldFile is configured with Release Signal Attribute = "myId".  Its 'Hold' 
> relationship routes back onto itself.
> 1. flowFile 1 { myId : "123" } enters HoldFile.  It is routed to the 'Hold' 
> relationship
> 2. flowFile 2 { flow.file.release.value : "123" } enters HoldFile.  flowfile 
> 1 is then routed to 'Release', and flow file 2 is removed from the session.
> Signal flow files will also copy their attributes to matching held files, 
> unless otherwise indicated.  This is what allows the output of Endpoint A to 
> pass to Endpoint B, above.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-190) HoldFile processor

2016-12-14 Thread Joseph Gresock (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Gresock updated NIFI-190:

Attachment: (was: HoldFile_example.xml)

> HoldFile processor
> --
>
> Key: NIFI-190
> URL: https://issues.apache.org/jira/browse/NIFI-190
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Joseph Gresock
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.2.0
>
> Attachments: Wait_Notify_template.xml
>
>
> Our team has developed a processor for the following use case:
> * Format A needs to be sent to Endpoint A
> * Format B needs to be sent to Endpoint B, but should not proceed until A has 
> reached Endpoint A.  We most commonly have this restriction when Endpoint B 
> requires some output of Endpoint A.
> The proposed HoldFile processor takes 2 types of flow files as input:
> * Files to be held
> * Signal files that can release corresponding held files, based on the value 
> of a configurable "release" attribute
> Signal files are distinguished from held files by the presence of the 
> "flow.file.release.value" attribute.  The processor is configured with a 
> "Release Signal Attribute".  Held files with this attribute whose value 
> matches a received signal value will be released.
> An example:
> HoldFile is configured with Release Signal Attribute = "myId".  Its 'Hold' 
> relationship routes back onto itself.
> 1. flowFile 1 { myId : "123" } enters HoldFile.  It is routed to the 'Hold' 
> relationship
> 2. flowFile 2 { flow.file.release.value : "123" } enters HoldFile.  flowfile 
> 1 is then routed to 'Release', and flow file 2 is removed from the session.
> Signal flow files will also copy their attributes to matching held files, 
> unless otherwise indicated.  This is what allows the output of Endpoint A to 
> pass to Endpoint B, above.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-190) HoldFile processor

2016-12-14 Thread Joseph Gresock (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Gresock updated NIFI-190:

Attachment: Wait_Notify_template.xml

Demonstrates usage of Wait and Notify processors.

> HoldFile processor
> --
>
> Key: NIFI-190
> URL: https://issues.apache.org/jira/browse/NIFI-190
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Joseph Gresock
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.2.0
>
> Attachments: Wait_Notify_template.xml
>
>
> Our team has developed a processor for the following use case:
> * Format A needs to be sent to Endpoint A
> * Format B needs to be sent to Endpoint B, but should not proceed until A has 
> reached Endpoint A.  We most commonly have this restriction when Endpoint B 
> requires some output of Endpoint A.
> The proposed HoldFile processor takes 2 types of flow files as input:
> * Files to be held
> * Signal files that can release corresponding held files, based on the value 
> of a configurable "release" attribute
> Signal files are distinguished from held files by the presence of the 
> "flow.file.release.value" attribute.  The processor is configured with a 
> "Release Signal Attribute".  Held files with this attribute whose value 
> matches a received signal value will be released.
> An example:
> HoldFile is configured with Release Signal Attribute = "myId".  Its 'Hold' 
> relationship routes back onto itself.
> 1. flowFile 1 { myId : "123" } enters HoldFile.  It is routed to the 'Hold' 
> relationship
> 2. flowFile 2 { flow.file.release.value : "123" } enters HoldFile.  flowfile 
> 1 is then routed to 'Release', and flow file 2 is removed from the session.
> Signal flow files will also copy their attributes to matching held files, 
> unless otherwise indicated.  This is what allows the output of Endpoint A to 
> pass to Endpoint B, above.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (NIFI-190) HoldFile processor

2016-12-14 Thread Joseph Gresock (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Gresock updated NIFI-190:

Fix Version/s: 1.2.0

> HoldFile processor
> --
>
> Key: NIFI-190
> URL: https://issues.apache.org/jira/browse/NIFI-190
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Joseph Gresock
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.2.0
>
> Attachments: HoldFile_example.xml
>
>
> Our team has developed a processor for the following use case:
> * Format A needs to be sent to Endpoint A
> * Format B needs to be sent to Endpoint B, but should not proceed until A has 
> reached Endpoint A.  We most commonly have this restriction when Endpoint B 
> requires some output of Endpoint A.
> The proposed HoldFile processor takes 2 types of flow files as input:
> * Files to be held
> * Signal files that can release corresponding held files, based on the value 
> of a configurable "release" attribute
> Signal files are distinguished from held files by the presence of the 
> "flow.file.release.value" attribute.  The processor is configured with a 
> "Release Signal Attribute".  Held files with this attribute whose value 
> matches a received signal value will be released.
> An example:
> HoldFile is configured with Release Signal Attribute = "myId".  Its 'Hold' 
> relationship routes back onto itself.
> 1. flowFile 1 { myId : "123" } enters HoldFile.  It is routed to the 'Hold' 
> relationship
> 2. flowFile 2 { flow.file.release.value : "123" } enters HoldFile.  flowfile 
> 1 is then routed to 'Release', and flow file 2 is removed from the session.
> Signal flow files will also copy their attributes to matching held files, 
> unless otherwise indicated.  This is what allows the output of Endpoint A to 
> pass to Endpoint B, above.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (NIFI-3147) Build processor to parse CCDA into attributes and JSON

2016-12-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15748364#comment-15748364
 ] 

ASF GitHub Bot commented on NIFI-3147:
--

Github user kedarchitale commented on the issue:

https://github.com/apache/nifi/pull/1312
  
What is the next step on this? Do I have to do anything, or will this be 
reviewed by members of the dev community and then merged into master?


> Build processor to parse CCDA into attributes and JSON
> --
>
> Key: NIFI-3147
> URL: https://issues.apache.org/jira/browse/NIFI-3147
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Kedar Chitale
>  Labels: attributes, ccda, healthcare, json, parser
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Accept a CCDA document, Parse the document to create JSON text and individual 
> attributes for example code.codeSystemName=LOINC



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi issue #1312: NIFI-3147 CCDA Processor

2016-12-14 Thread kedarchitale
Github user kedarchitale commented on the issue:

https://github.com/apache/nifi/pull/1312
  
What is the next step on this? Do I have to do anything, or will this be 
reviewed by members of the dev community and then merged into master?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (NIFI-3193) Update ConsumeAMQP and PublishAMQP to retrieve username from certificate common name

2016-12-14 Thread Oleg Zhurakousky (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Zhurakousky reassigned NIFI-3193:
--

Assignee: Oleg Zhurakousky

> Update ConsumeAMQP and PublishAMQP to retrieve username from certificate 
> common name
> 
>
> Key: NIFI-3193
> URL: https://issues.apache.org/jira/browse/NIFI-3193
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.0.0, 1.1.0, 0.7.1
>Reporter: Brian
>Assignee: Oleg Zhurakousky
>
> At the moment the NiFi AMQP processors can establish an SSL connection to 
> RabbitMQ but still use a user-defined username and password to authenticate. 
> When using certificates, RabbitMQ allows you to use the COMMON_NAME from the 
> certificate to authenticate instead of providing a username and password. 
> Unfortunately the NiFi processors do not support this, so I would like to 
> request an update to the processors to enable this functionality.
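For context, the RabbitMQ Java client already supports this pattern via the 
EXTERNAL SASL mechanism (the broker side requires the rabbitmq_auth_mechanism_ssl 
plugin). A minimal sketch, assuming an SSLContext has already been built from 
the NiFi keystore/truststore:
{code}
import javax.net.ssl.SSLContext;

import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultSaslConfig;

public class AmqpExternalAuthSketch {
    /** Connects using the client certificate's identity instead of a username/password. */
    static Connection connect(SSLContext sslContext, String host) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost(host);
        factory.setPort(5671);                              // default AMQPS port
        factory.useSslProtocol(sslContext);                 // client cert comes from this context
        factory.setSaslConfig(DefaultSaslConfig.EXTERNAL);  // broker derives the user from the cert
        // No setUsername()/setPassword(); RabbitMQ maps the certificate DN (or CN,
        // depending on broker configuration) to a user
        return factory.newConnection();
    }
}
{code}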



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi issue #70: NIFI-751 Add Processor To Convert Avro Formats

2016-12-14 Thread avishsaha
Github user avishsaha commented on the issue:

https://github.com/apache/nifi/pull/70
  
Hey @joewitt somehow I don't see AvroRecordConverter in NiFi's available 
Processors. I need to be able to convert an incoming source file (CSV/XML/JSON) 
to a generic schema. However, the problem is I am not sure how to - 1. Convert 
an Avro 'record' type to another and 2. When using ConvertAvroSchema, how do we 
specify the dynamic properties as described in the NiFi documentation here - 
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.kite.ConvertAvroSchema/index.html

Please advise. Thank you.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-751) Add Processor To Convert Avro Formats

2016-12-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15747634#comment-15747634
 ] 

ASF GitHub Bot commented on NIFI-751:
-

Github user avishsaha commented on the issue:

https://github.com/apache/nifi/pull/70
  
Hey @joewitt somehow I don't see AvroRecordConverter in NiFi's available 
Processors. I need to be able to convert an incoming source file (CSV/XML/JSON) 
to a generic schema. However, the problem is I am not sure how to - 1. Convert 
an Avro 'record' type to another and 2. When using ConvertAvroSchema, how do we 
specify the dynamic properties as described in the NiFi documentation here - 
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.kite.ConvertAvroSchema/index.html

Please advise. Thank you.


> Add Processor To Convert Avro Formats
> -
>
> Key: NIFI-751
> URL: https://issues.apache.org/jira/browse/NIFI-751
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.1.0
>Reporter: Alan Jackoway
>Assignee: Joseph Witt
> Fix For: 0.3.0
>
>
> When working with data from external sources, such as complex WSDL, I 
> frequently wind up with complex nested data that is difficult to work with 
> even when converted to Avro format. Specifically, I often have two needs:
> * Converting types of data, usually from string to long, double, etc. when 
> APIs give only string data back.
> * Flattening data by taking fields out of nested records and putting them on 
> the top level of the Avro file.
> Unfortunately the Kite JSONToAvro processor only supports exact conversions 
> from JSON to a matching Avro schema and will not do data transformations of 
> this type. Proposed processor to come.
> Discussed this with [~rdblue], so tagging him here as I don't have permission 
> to set a CC for some reason.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)