Re: enforce run only in primary node & multiple primary node

2016-09-21 Thread markap14
Nijel,

I'd like to hear more about your use case, as from the description given, I'm 
not sure that this all would need to run on a primary node. Generally, you want 
only "source processors" to run on the primary node.

One thing that I've been thinking about, though, is changing the concept of 
"Run on Primary Node" to a "Run on Only One Node." The concern there is that we 
will have cases where a few processors have to run on the same node. So we 
would need a mechanism for supporting that. Perhaps some sort of named grouping 
construct. 

Thoughts?

Sent from my iPhone

> On Sep 21, 2016, at 5:07 AM, Nijel s f  wrote:
> 
> Hi all
> 
> Supporting Tijo's thought, we have one scenario.
> 
> We are trying to use NiFi for a data pipeline solution. The scenario is to 
> coordinate between various services and provide a solution for big data 
> analysis.
> In our scenario, many of the activities are "run on primary" mode 
> processors. These are implemented on top of various components like YARN, 
> HBase, Spark, databases, etc.
> 
> One issue we are seeing is that all these processors must run on the 
> primary node [like Spark execution, YARN/MR job execution, etc.], and there 
> is only one such node.
> We are thinking of having multiple primary nodes and assigning the 
> activities using some distribution algorithm.
> The idea is to handle the coordination and failover mechanism using 
> ZooKeeper.
> 
> Any thoughts on this?
> 
> Regards
> Nijel
> 
> From: Jeff [mailto:jtsw...@gmail.com]
> Sent: Monday, September 19, 2016 11:17 PM
> To: Tijo Thomas; us...@nifi.apache.org
> Subject: Re: enforce run only in primary node & multiple primary node
> 
> Tijo,
> 
> To give you some information on your second question, you can design your 
> flow to redistribute the flowfiles coming out of your processors to other 
> nodes in the cluster for processing.  There are several examples of how to 
> do this on various blogs/email lists/etc., and I just grabbed one for 
> reference, written by Apache NiFi's own Bryan Bende: 
> http://apache-nifi.1125220.n5.nabble.com/How-to-configure-site-to-site-communication-between-nodes-in-one-cluster-td8528.html
> 
> Please review that thread and let us know if you have further questions!
> 
> On Mon, Sep 19, 2016 at 1:19 PM Tijo Thomas wrote:
> 
> Hi ,
> 
1. While writing a processor, is it possible to enforce that it runs only on 
the primary node? I saw a JIRA for this, but it appears to be unresolved.
> 
[NIFI-543] Provide extensions a way to indicate that they can run only on 
primary node, if clustered - ASF JIRA

2. Currently my primary node is heavily loaded, as I have many processors 
which will run only on the primary node. Is it possible to define multiple 
primary nodes? Or is it possible to configure processors not to run on the 
primary node?
> 
> Tijo
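The multiple-primary / distribution-algorithm idea Nijel describes above could be sketched roughly as follows. This is an illustrative stand-alone sketch, not NiFi code: `ActivityAssigner` and its hash-ring scheme are hypothetical names, and a real deployment would still need ZooKeeper (or similar) for cluster membership and failover.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: deterministically assign "primary only" activities
// (e.g. Spark/YARN job launchers) to cluster nodes via a hash ring, as one
// possible "distribution algorithm" for the multiple-primary idea above.
public class ActivityAssigner {
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    public ActivityAssigner(List<String> nodes) {
        // Place each node at several points on the ring to smooth the distribution.
        for (String node : nodes) {
            for (int replica = 0; replica < 16; replica++) {
                ring.put((node + "#" + replica).hashCode(), node);
            }
        }
    }

    // Map an activity (e.g. a processor id) to the first node at or after its hash.
    public String nodeFor(String activityId) {
        Map.Entry<Integer, String> entry = ring.ceilingEntry(activityId.hashCode());
        return (entry != null ? entry : ring.firstEntry()).getValue();
    }

    public static void main(String[] args) {
        ActivityAssigner assigner =
                new ActivityAssigner(Arrays.asList("node1", "node2", "node3"));
        System.out.println("spark-job -> " + assigner.nodeFor("spark-job"));
    }
}
```

The appeal of a hash ring here is that the same activity always lands on the same node while the load spreads across nodes; failover would mean rebuilding the ring from the live-node set that ZooKeeper reports.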


[GitHub] nifi pull request #612: NIFI-2159: Fixed bug that caused relationship names ...

2016-07-06 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/612

NIFI-2159: Fixed bug that caused relationship names not to get added to 
fingerprint



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-2159

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/612.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #612


commit 6063c7df62d4e9f5216a67d6e981a2fe710c2f19
Author: Mark Payne <marka...@hotmail.com>
Date:   2016-07-06T18:35:51Z

NIFI-2159: Fixed bug that caused relationship names not to get added to 
fingerprint




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi issue #603: NIFI-1781: Updating UI to respect access controls outside o...

2016-07-01 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/603
  
@mcgilman excellent - all looks good, as far as I can tell. This PR 
addresses several issues that existed in the application. I've pushed to master.




[GitHub] nifi issue #593: NIFI-2150: Cleanse more values from templates that are not ...

2016-07-01 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/593
  
@mcgilman good call. Looks good to me!




[GitHub] nifi pull request #601: NIFI-2039: Provide a new ProcessSession.read() metho...

2016-07-01 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/601

NIFI-2039: Provide a new ProcessSession.read() method that provides an 
InputStream instead of using a callback



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-2039

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/601.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #601


commit e84817b56792b2bbcfae477f53694b3c4b96abd8
Author: Mark Payne <marka...@hotmail.com>
Date:   2016-07-01T17:24:12Z

NIFI-2039: Provide a new ProcessSession.read() method that provides an 
InputStream instead of using a callback
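The API change described in NIFI-2039 can be illustrated with simplified stand-in types. These are not the actual NiFi interfaces (the real `ProcessSession` and `InputStreamCallback` are far richer); the sketch only contrasts the callback style with the new stream-returning style.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Simplified stand-in (not the real NiFi API) illustrating NIFI-2039:
// offering an InputStream-returning read() alongside the callback-based one.
public class SessionSketch {
    interface InputStreamCallback {
        void process(InputStream in) throws IOException;
    }

    private final byte[] content;

    public SessionSketch(byte[] content) {
        this.content = content;
    }

    // Pre-existing style: the session manages the stream's lifecycle around a callback.
    public void read(InputStreamCallback callback) throws IOException {
        try (InputStream in = new ByteArrayInputStream(content)) {
            callback.process(in);
        }
    }

    // New style: hand the caller an InputStream directly. The caller must close it,
    // but it composes more easily with APIs that expect a stream argument.
    public InputStream read() {
        return new ByteArrayInputStream(content);
    }

    public static void main(String[] args) throws IOException {
        SessionSketch session = new SessionSketch("abc".getBytes());
        session.read(in -> System.out.println("first byte: " + in.read())); // prints "first byte: 97"
        try (InputStream in = session.read()) {
            System.out.println("first byte: " + in.read()); // prints "first byte: 97"
        }
    }
}
```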






[GitHub] nifi issue #572: NIFI-2059: Ensure that we properly pass along proxied entit...

2016-06-23 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/572
  
@mcgilman the PR has been updated to incorporate the feedback you provided. 
Thanks!




[GitHub] nifi pull request #572: NIFI-2059: Ensure that we properly pass along proxie...

2016-06-23 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/572#discussion_r68300367
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/replication/ThreadPoolRequestReplicator.java
 ---
@@ -210,6 +212,15 @@ public AsyncClusterResponse replicate(Set nodeIds, String method
 final Map<String, String> updatedHeaders = new HashMap<>(headers);
 updatedHeaders.put(RequestReplicator.CLUSTER_ID_GENERATION_SEED_HEADER, UUID.randomUUID().toString());
 updatedHeaders.put(RequestReplicator.REPLICATION_INDICATOR_HEADER, "true");
+
+// If the user is authenticated, add them as a proxied entity so that when the receiving NiFi receives the request,
+// it knows that we are acting as a proxy on behalf of the current user.
+final NiFiUser user = NiFiUserUtils.getNiFiUser();
+if (user != null && !user.equals(NiFiUser.ANONYMOUS)) {
+    final String proxiedEntitiesChain = ProxiedEntitiesUtils.buildProxiedEntitiesChainString(NiFiUserUtils.getNiFiUser());
--- End diff --

And another good call.




[GitHub] nifi pull request #573: NIFI-2089: Ensure streams are closed before attempti...

2016-06-23 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/573

NIFI-2089: Ensure streams are closed before attempting to remove files

Also fixed buggy unit tests that did not properly utilize the Content 
Repository API but instead attempted to write to files directly

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-2089

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/573.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #573


commit a4281ae4197428dd3f4f14450467f78068e2345b
Author: Mark Payne <marka...@hotmail.com>
Date:   2016-06-23T19:21:44Z

NIFI-2089: Ensure streams are closed before attempting to remove files






[GitHub] nifi issue #556: NIFI-615 - Create a processor to extract WAV file character...

2016-06-22 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/556
  
@JPercivall @joewitt I believe what you said to be accurate. The JIRA that 
I was referencing is https://issues.apache.org/jira/browse/NIFI-104 - I do 
believe it is very advantageous to be able to emit ROUTE events for failure 
scenarios but right now it could cause some unwieldy behavior by generating an 
unbounded number of provenance events. I also believe that the solution laid 
out in NIFI-104 is the appropriate one, and that the description aptly captures 
the issue at hand. If either of you disagrees, feel free to update the ticket 
to capture your additional thoughts, or create a new one if need be.




[GitHub] nifi issue #556: NIFI-615 - Create a processor to extract WAV file character...

2016-06-22 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/556
  
@JPercivall the reason is not to keep data in the flow. The reason is 
because users often configure the dataflow in that way, and NiFi should handle 
that case well. We do have some JIRAs to make handling this better, by dropping 
ROUTE events if the FlowFile was routed back to the same connection that it 
came from. This has not been done to date, though.




[GitHub] nifi issue #556: NIFI-615 - Create a processor to extract WAV file character...

2016-06-22 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/556
  
@JPercivall we don't typically recommend emitting ROUTE events when routing 
to failure. Oftentimes, failure is routed back to self, if for no other reason 
than to keep the data in the flow. We don't want to destroy provenance by 
continually emitting ROUTE events. This event type is intended for processors 
whose sole job it is to route data, such as RouteOnAttribute, RouteContent, 
RouteText, etc.




[GitHub] nifi issue #477: NIFI-1663: Add ConvertAvroToORC processor

2016-06-22 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/477
  
@mattyb149 Looks good for the most part. I left a few inline comments, just 
some tweaks that I think can probably help to clean up the code. Also, I 
noticed OrcFlowFileWriter.java is a pretty hefty file. I think this is a 
slight modification to one of the writers in the ORC codebase, correct? Does 
it make sense to instead create a PR for Apache ORC so that they can 
incorporate the slight change that is needed, instead of having to maintain 
that huge file that could change over time?




[GitHub] nifi pull request #477: NIFI-1663: Add ConvertAvroToORC processor

2016-06-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/477#discussion_r68074816
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/util/orc/OrcUtils.java
 ---
@@ -0,0 +1,443 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.util.orc;
+
+import org.apache.avro.Schema;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.commons.lang3.mutable.MutableInt;
+import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;
+import org.apache.hadoop.hive.ql.exec.vector.ColumnVector;
+import org.apache.hadoop.hive.ql.exec.vector.DoubleColumnVector;
+import org.apache.hadoop.hive.ql.exec.vector.ListColumnVector;
+import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;
+import org.apache.hadoop.hive.ql.exec.vector.MapColumnVector;
+import org.apache.hadoop.hive.ql.exec.vector.UnionColumnVector;
+import org.apache.orc.TypeDescription;
+
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Utility methods for ORC support (conversion from Avro, conversion to Hive types, e.g.
+ */
+public class OrcUtils {
+
+    public static void putToRowBatch(ColumnVector col, MutableInt vectorOffset, int rowNumber, Schema fieldSchema, Object o) {
+        Schema.Type fieldType = fieldSchema.getType();
+
+        if (fieldType == null) {
+            throw new IllegalArgumentException("Field type is null");
+        }
+
+        if (Schema.Type.INT.equals(fieldType)) {
+            if (o == null) {
+                col.isNull[rowNumber] = true;
+            } else {
+                ((LongColumnVector) col).vector[rowNumber] = (int) o;
+            }
+        } else if (Schema.Type.LONG.equals(fieldType)) {
+            if (o == null) {
+                col.isNull[rowNumber] = true;
+            } else {
+                ((LongColumnVector) col).vector[rowNumber] = (long) o;
+            }
+        } else if (Schema.Type.BOOLEAN.equals(fieldType)) {
+            if (o == null) {
+                col.isNull[rowNumber] = true;
+            } else {
+                ((LongColumnVector) col).vector[rowNumber] = ((boolean) o) ? 1 : 0;
+            }
+        } else if (Schema.Type.BYTES.equals(fieldType)) {
+            if (o == null) {
+                col.isNull[rowNumber] = true;
+            } else {
+                ByteBuffer byteBuffer = ((ByteBuffer) o);
+                int size = byteBuffer.remaining();
+                byte[] buf = new byte[size];
+                byteBuffer.get(buf, 0, size);
+                ((BytesColumnVector) col).setVal(rowNumber, buf);
+            }
+        } else if (Schema.Type.DOUBLE.equals(fieldType)) {
+            if (o == null) {
+                col.isNull[rowNumber] = true;
+            } else {
+                ((DoubleColumnVector) col).vector[rowNumber] = (double) o;
+            }
+        } else if (Schema.Type.FLOAT.equals(fieldType)) {
+            if (o == null) {
+                col.isNull[rowNumber] = true;
+            } else {
+                ((DoubleColumnVector) col).vector[rowNumber] = (float) o;
+            }
+        } else if (Schema.Type.STRING.equals(fieldType) || Schema.Type.ENUM.equals(fieldType)) {
+            if (o == null) {
+                col.isNull[rowNumber] = true;
+            } else {
+                ((BytesColumnVector) col).setVal(rowNumber, o.toString().getBytes());
+            }
+        } else if (Schema.Type.UNION.equals(fieldType)) {
+            // If the union only has one non-null type in it, it was flattened in the ORC schema
+            if (col instanceof UnionColumnVector) {
+                UnionColumnVector union = ((UnionColumnVector) col);

[GitHub] nifi pull request #477: NIFI-1663: Add ConvertAvroToORC processor

2016-06-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/477#discussion_r68074551
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/util/orc/OrcUtils.java
 ---

[GitHub] nifi pull request #477: NIFI-1663: Add ConvertAvroToORC processor

2016-06-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/477#discussion_r68074457
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/util/orc/OrcUtils.java
 ---

[GitHub] nifi pull request #477: NIFI-1663: Add ConvertAvroToORC processor

2016-06-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/477#discussion_r68073833
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/util/orc/OrcUtils.java
 ---
@@ -0,0 +1,443 @@
+        if (Schema.Type.INT.equals(fieldType)) {
--- End diff --

We are comparing fieldType to the different enum values. Can we switch this 
to be a case/switch statement? I think that would be cleaner.
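The suggested switch could take roughly this shape. The sketch below uses a local stand-in enum instead of Avro's `Schema.Type`, and collapses each branch to a label, so it illustrates the dispatch structure rather than the PR's actual code.

```java
// Sketch of the suggested case/switch form. FieldType is a local stand-in
// enum (not Avro's Schema.Type) so the example is self-contained; in the
// real code each case would fill the matching ColumnVector subtype.
public class TypeDispatch {
    enum FieldType { INT, LONG, BOOLEAN, BYTES, DOUBLE, FLOAT, STRING, ENUM }

    // Returns which vector family each type would be written to.
    public static String vectorKindFor(FieldType fieldType) {
        if (fieldType == null) {
            throw new IllegalArgumentException("Field type is null");
        }
        switch (fieldType) {
            case INT:
            case LONG:
            case BOOLEAN:
                return "long";      // LongColumnVector in the real code
            case BYTES:
            case STRING:
            case ENUM:
                return "bytes";     // BytesColumnVector
            case DOUBLE:
            case FLOAT:
                return "double";    // DoubleColumnVector
            default:
                throw new IllegalArgumentException("Unsupported type: " + fieldType);
        }
    }

    public static void main(String[] args) {
        System.out.println("INT -> " + vectorKindFor(FieldType.INT)); // prints "INT -> long"
    }
}
```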




[GitHub] nifi pull request #477: NIFI-1663: Add ConvertAvroToORC processor

2016-06-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/477#discussion_r68073525
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/util/orc/OrcUtils.java
 ---
@@ -0,0 +1,443 @@
+        if (Schema.Type.INT.equals(fieldType)) {
+            if (o == null) {
--- End diff --

The first several field types here check if o == null, and if so set 
isNull[rowNumber] = true. The others don't check if o == null and just perform 
(o instanceof) checks, which would throw a NullPointer. Should we move the `o 
== null` check to the beginning so that we always check it and so that the code 
is cleaner?
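Hoisting the null check as suggested might look roughly like this. The sketch uses simplified parameters (plain arrays instead of `ColumnVector`), so it shows only the shape of the refactoring, not the PR's code.

```java
// Illustrative shape of the suggested refactoring: check o == null once,
// up front, instead of repeating the check in every type branch (and
// instead of letting instanceof-only branches hit a NullPointerException).
public class NullCheckSketch {
    // Returns true if a value was written, false if the row was marked null.
    public static boolean putToRowBatch(boolean[] isNull, long[] vector, int rowNumber, Object o) {
        // Hoisted check: previously duplicated in each branch.
        if (o == null) {
            isNull[rowNumber] = true;
            return false;
        }
        // All branches below can now assume o is non-null; one branch shown.
        vector[rowNumber] = ((Number) o).longValue();
        return true;
    }

    public static void main(String[] args) {
        boolean[] isNull = new boolean[2];
        long[] vector = new long[2];
        putToRowBatch(isNull, vector, 0, null); // marks row 0 as null
        putToRowBatch(isNull, vector, 1, 42);   // writes 42 into row 1
        System.out.println(isNull[0] + " " + vector[1]); // prints "true 42"
    }
}
```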




[GitHub] nifi pull request #477: NIFI-1663: Add ConvertAvroToORC processor

2016-06-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/477#discussion_r68072854
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/ConvertAvroToORC.java
 ---
@@ -0,0 +1,303 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.hive;
+
+import org.apache.avro.Schema;
+import org.apache.avro.file.DataFileStream;
+import org.apache.avro.generic.GenericDatumReader;
+import org.apache.avro.generic.GenericRecord;
+import org.apache.commons.lang3.mutable.MutableInt;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.StreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.hive.HiveJdbcCommon;
+import org.apache.nifi.util.orc.OrcFlowFileWriter;
+import org.apache.nifi.util.orc.OrcUtils;
+import org.apache.orc.CompressionKind;
+import org.apache.orc.OrcFile;
+import org.apache.orc.TypeDescription;
+
+import java.io.BufferedInputStream;
+import java.io.BufferedOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicReference;
+
+/**
+ * The ConvertAvroToORC processor takes an Avro-formatted flow file as input and converts it into ORC format.
+ */
+@SideEffectFree
+@SupportsBatching
+@Tags({"avro", "orc", "hive", "convert"})
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Converts an Avro record into ORC file format. This processor provides a direct mapping of an Avro record to an ORC record, such "
+        + "that the resulting ORC file will have the same hierarchical structure as the Avro document. If an incoming FlowFile contains a stream of "
+        + "multiple Avro records, the resultant FlowFile will contain an ORC file containing all of the Avro records. If an incoming FlowFile does "
+        + "not contain any records, an empty ORC file is the output.")
+@WritesAttributes({
+        @WritesAttribute(attribute = "mime.type", description = "Sets the mime type to application/octet-stream"),
+        @WritesAttribute(attribute = "filename", description = "Sets the filename to the existing filename with the extension replaced by / added to by .orc"),
+        @WritesAttribute(attribute = "record.count", description = "Sets the number of records in the ORC file."),
+        @WritesAttribute(attribute = "hive.ddl", description =

[GitHub] nifi pull request #477: NIFI-1663: Add ConvertAvroToORC processor

2016-06-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/477#discussion_r68072328
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/ConvertAvroToORC.java
 ---

[GitHub] nifi pull request #477: NIFI-1663: Add ConvertAvroToORC processor

2016-06-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/477#discussion_r68071005
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/ConvertAvroToORC.java
 ---

[GitHub] nifi pull request #477: NIFI-1663: Add ConvertAvroToORC processor

2016-06-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/477#discussion_r68070933
  
--- Diff: 
nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/processors/hive/ConvertAvroToORC.java
 ---

[GitHub] nifi pull request #550: NIFI-1900: Verify that connection's destination is n...

2016-06-20 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/550

NIFI-1900: Verify that connection's destination is not running when t…

…rying to change destination

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-1900

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/550.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #550


commit 72679a9ca482f1647e819e40193bed477f59cb97
Author: Mark Payne <marka...@hotmail.com>
Date:   2016-06-20T18:24:33Z

NIFI-1900: Verify that connection's destination is not running when trying 
to change destination






[GitHub] nifi issue #459: NIFI-1909 Adding ability to process schemaless Avro records...

2016-06-20 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/459
  
@ryanpersaud thanks for getting this in! I have verified the checkstyle and 
pushed to both the 0.x and 1.0.0 baselines. Thanks again for knocking this out!




[GitHub] nifi pull request #546: NIFI-1877, NIFI-1306: Add fields to FlowFile for FIF...

2016-06-20 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/546

NIFI-1877, NIFI-1306: Add fields to FlowFile for FIFO Prioritizer, Ol…

…dest/Newest FlowFile first prioritizers to work properly

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-1877

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/546.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #546


commit fce80761f886a53ff3ad5450c3524bf8510c9ea7
Author: Mark Payne <marka...@hotmail.com>
Date:   2016-06-20T17:06:10Z

NIFI-1877, NIFI-1306: Add fields to FlowFile for FIFO Prioritizer,
Oldest/Newest FlowFile first prioritizers to work properly
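The idea behind the oldest/newest-first prioritizers above is that each FlowFile must carry a queue-entry timestamp for a comparator to order by. The sketch below is illustrative only: `Item` is a hypothetical stand-in, not NiFi's FlowFile or FlowFilePrioritizer types.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of oldest-first prioritization by queue-entry time.
// NiFi prioritizers are essentially Comparators over FlowFiles; the PR adds
// the timestamp fields such comparators need.
public class OldestFirstSketch {

    static class Item {
        final String id;
        final long queueDateMillis; // when the item entered the queue

        Item(String id, long queueDateMillis) {
            this.id = id;
            this.queueDateMillis = queueDateMillis;
        }
    }

    // Oldest-first means ascending queue-entry time.
    static final Comparator<Item> OLDEST_FIRST =
            Comparator.comparingLong((Item i) -> i.queueDateMillis);

    // Returns the ids in the order an oldest-first queue would release them.
    static List<String> drainOrder(List<Item> queue) {
        List<Item> sorted = new ArrayList<>(queue);
        sorted.sort(OLDEST_FIRST);
        List<String> ids = new ArrayList<>();
        for (Item i : sorted) {
            ids.add(i.id);
        }
        return ids;
    }
}
```

A newest-first prioritizer would simply use `OLDEST_FIRST.reversed()`.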






[GitHub] nifi issue #501: NIFI-1974 - Support Custom Properties in Expression Languag...

2016-06-18 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/501
  
Yolanda - I have merged to 0.x. This was very well done. I like the class 
hierarchy you put together - the breakout of interfaces, abstract classes, and 
concrete classes. I think it will yield itself nicely to the future updates so 
that this can be baked into the UI. Nicely done! Thanks for jumping in and 
knocking this out!




[GitHub] nifi issue #501: NIFI-1974 - Support Custom Properties in Expression Languag...

2016-06-18 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/501
  
@YolandaMDavis All looks good with the exception of needing to trim() the 
filenames and one other thing that I noticed, which is that if multiple 
properties files are specified and one cannot be loaded, none are loaded. I 
will create an additional commit just to log this accurately and trim the 
filename and push this to 0.x. I created a JIRA 
https://issues.apache.org/jira/browse/NIFI-2057 to address the multiple files 
better, but I don't want that to hold up the 0.7.0 release or prevent this from 
making it into 0.7.0. I will merge this as-is with that minor change noted to 
0.x.




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-18 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r67608495
  
--- Diff: 
nifi-api/src/test/resources/TestVariableRegistry/foobar.properties ---
@@ -0,0 +1 @@
+fake.property.3=test me out 3, test me out 4
--- End diff --

It looks like you did actually already address this in the updated version




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-18 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r67608449
  
--- Diff: 
nifi-api/src/test/resources/TestVariableRegistry/foobar.properties ---
@@ -0,0 +1 @@
+fake.property.3=test me out 3, test me out 4
--- End diff --

@YolandaMDavis I would prefer adding in the ASF license. RAT check should 
be sort of a last resort, if we cannot use the license - for example, a binary 
file or a file that doesn't allow comments, etc.




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-18 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r67608435
  
--- Diff: 
nifi-commons/nifi-properties/src/main/java/org/apache/nifi/util/NiFiProperties.java
 ---
@@ -1073,4 +1078,28 @@ public File getEmbeddedZooKeeperPropertiesFile() {
 public boolean isStartEmbeddedZooKeeper() {
 return 
Boolean.parseBoolean(getProperty(STATE_MANAGEMENT_START_EMBEDDED_ZOOKEEPER));
 }
+
+public String getVariableRegistryProperties(){
+return getProperty(VARIABLE_REGISTRY_PROPERTIES);
+}
+
+public Path[] getVariableRegistryPropertiesPaths() {
+final List<Path> vrPropertiesPaths = new ArrayList<>();
+
+final String vrPropertiesFiles = getVariableRegistryProperties();
+if(!StringUtils.isEmpty(vrPropertiesFiles)) {
+
+final List<String> vrPropertiesFileList = Arrays.asList(vrPropertiesFiles.split(","));
+
+for(String propertiesFile : vrPropertiesFileList){
+vrPropertiesPaths.add(Paths.get(propertiesFile));
--- End diff --

I think we need to call propertiesFile.trim() here
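The trim() fix suggested above amounts to stripping whitespace from each entry when splitting the comma-separated list of properties files, so a value like "a.properties, b.properties" does not produce a path with a leading space. The class and method names below are illustrative, not NiFi's actual API.

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of splitting a comma-separated list of properties
// file names into Paths, trimming each entry as suggested in the review.
public class PropertiesPathSketch {

    static List<Path> toPaths(String commaSeparated) {
        List<Path> paths = new ArrayList<>();
        if (commaSeparated == null || commaSeparated.trim().isEmpty()) {
            return paths; // nothing configured
        }
        for (String entry : commaSeparated.split(",")) {
            String trimmed = entry.trim(); // the suggested trim()
            if (!trimmed.isEmpty()) {
                paths.add(Paths.get(trimmed));
            }
        }
        return paths;
    }
}
```

Without the trim(), "b.properties" in the example above would be resolved as " b.properties" and fail to load.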




[GitHub] nifi issue #517: NIFI-1994: Fixed issues with controller services and templa...

2016-06-17 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/517
  
@mcgilman I have updated the PR to address this. Please try it out again.




[GitHub] nifi issue #531: NIFI-2007: Restoring bulletin functionality

2016-06-17 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/531
  
@mcgilman The only thing that I found wrong was that the bulletins do not 
render the bulletin's 'category'. I created a separate ticket for this -- 
NIFI-2049. Otherwise, all looks good. Pushed to master.




[GitHub] nifi pull request #540: NIFI-2033: Allow Controller Services to be scoped at...

2016-06-17 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/540

NIFI-2033: Allow Controller Services to be scoped at Controller level…

… instead of just group level

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-2033

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/540.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #540


commit 94ab0a85ca616f4df3f1af3d34eb2e441978f7a9
Author: Mark Payne <marka...@hotmail.com>
Date:   2016-06-17T13:08:44Z

NIFI-2033: Allow Controller Services to be scoped at Controller level 
instead of just group level






[GitHub] nifi issue #522: NIFI-2000: Ensure that if we override setters in Applicatio...

2016-06-15 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/522
  
@olegz I updated the PR to provide a getter in the ApplicationResource in 
order to avoid the duplicate instance variable as you suggested.




[GitHub] nifi issue #520: NIFI-1997: Use the 'autoResumeState' property defined in ni...

2016-06-15 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/520
  
@olegz I updated the PR to remove the dead default constructor and get rid 
of the other commit


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #520: NIFI-1997: Use the 'autoResumeState' property define...

2016-06-15 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/520#discussion_r67190819
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster-protocol/src/main/java/org/apache/nifi/cluster/protocol/jaxb/message/AdaptedDataFlow.java
 ---
@@ -23,8 +23,6 @@
 private byte[] flow;
 private byte[] snippets;
 
-private boolean autoStartProcessors;
-
 public AdaptedDataFlow() {
 }
--- End diff --

Ohhh... I am sorry - I thought you were commenting on the removal of the 
'autoStartProcessors' field, and asking why it was there to begin with. I can 
remove it if you prefer.




[GitHub] nifi issue #520: NIFI-1997: Use the 'autoResumeState' property defined in ni...

2016-06-15 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/520
  
@olegz I think GitHub was having some problems the other day. I had 3 PRs 
that I created that did this, where it combined my branch from the previously 
submitted PR. #519 should not need to be here. But you have already merged #519 
so I think we should be okay.




[GitHub] nifi pull request #520: NIFI-1997: Use the 'autoResumeState' property define...

2016-06-15 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/520#discussion_r67190057
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster-protocol/src/main/java/org/apache/nifi/cluster/protocol/jaxb/message/AdaptedDataFlow.java
 ---
@@ -23,8 +23,6 @@
 private byte[] flow;
 private byte[] snippets;
 
-private boolean autoStartProcessors;
-
 public AdaptedDataFlow() {
 }
--- End diff --

This was something that remained from the 0.x codebase, but it doesn't really 
make sense anymore in the Zero-Master Clustering paradigm.




[GitHub] nifi pull request #530: NIFI-1992: Updated site-to-site client and server to...

2016-06-15 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/530

NIFI-1992: Updated site-to-site client and server to support clustered nifi 
instances

NIFI-1992: Updated site-to-site client and server to support clustered nifi 
instances.

Tested:
- 0.x client pushing to 1.x cluster
- 0.x client pulling from 1.x cluster
- 1.x client pushing to 0.x cluster
- 1.x client pulling from 0.x cluster
- 1.x client pushing to 1.x cluster
- 1.x client pulling from 1.x cluster

All appears to work well.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-1992

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/530.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #530


commit 4e6c01c0f5b260d8f651a161f978ef48c1361b05
Author: Mark Payne <marka...@hotmail.com>
Date:   2016-06-14T18:47:24Z

NIFI-1992: Updated site-to-site client and server to support clustered nifi 
instances






[GitHub] nifi pull request #522: NIFI-2000: Ensure that if we override setters in App...

2016-06-10 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/522

NIFI-2000: Ensure that if we override setters in ApplicationResource …

…that we call the super class's setter as well
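
As a minimal illustration of the bug class this PR guards against (the class 
and field names below are invented for the sketch, not the actual 
ApplicationResource API): an overridden setter that does not delegate to 
`super` leaves the parent class's copy of the state unset.

```java
// Hypothetical sketch: the parent stores the value, so a child override
// that forgets super.setClusterNodeId(...) silently drops it.
class BaseResource {
    private String clusterNodeId;

    public void setClusterNodeId(String id) {
        this.clusterNodeId = id;
    }

    public String getClusterNodeId() {
        return clusterNodeId;
    }
}

class ChildResource extends BaseResource {
    private boolean idSeen;

    @Override
    public void setClusterNodeId(String id) {
        this.idSeen = true;
        // Without this call, getClusterNodeId() would return null.
        super.setClusterNodeId(id);
    }
}
```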

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-2000

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/522.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #522


commit 54b965dd90d90efd9eae826e73b5c60b761bd2a8
Author: Mark Payne <marka...@hotmail.com>
Date:   2016-06-10T21:08:30Z

NIFI-2000: Ensure that if we override setters in ApplicationResource that 
we call the super class's setter as well






[GitHub] nifi pull request #520: NIFI-1997: Use the 'autoResumeState' property define...

2016-06-10 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/520

NIFI-1997: Use the 'autoResumeState' property defined in nifi.properties on 
each node instead of inheriting the property from the Cluster Coordinator

Use the 'autoResumeState' property defined in nifi.properties on each node 
instead of inheriting the property from the Cluster Coordinator
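
For reference, the property in question lives in each node's nifi.properties; 
assuming the standard key name from the stock NiFi configuration, the 
node-local setting looks like:

```
# nifi.properties (per node). Key name assumed from the standard NiFi
# configuration; controls whether components resume their prior running
# state when this node restarts.
nifi.flowcontroller.autoResumeState=true
```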

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-1997

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/520.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #520


commit 8cebe78d8e93694bce847c3e3c38802aa4f701eb
Author: Mark Payne <marka...@hotmail.com>
Date:   2016-06-10T18:35:47Z

NIFI-1996: Fixed bug in the generation of UUID's for components when 
dealing with Snippets

commit 4f997585c3ff4c3997f76f0183ee234032d23251
Author: Mark Payne <marka...@hotmail.com>
Date:   2016-06-10T19:16:18Z

NIFI-1997: Use the 'autoResumeState' property defined in nifi.properties on 
each node instead of inheriting the property from the Cluster Coordinator






[GitHub] nifi pull request #519: NIFI-1996: Fixed bug in the generation of UUID's for...

2016-06-10 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/519

NIFI-1996: Fixed bug in the generation of UUID's for components when 
dealing with Snippets

Fixed bug in the generation of UUID's for components when dealing with 
Snippets

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-1996

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/519.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #519


commit 8cebe78d8e93694bce847c3e3c38802aa4f701eb
Author: Mark Payne <marka...@hotmail.com>
Date:   2016-06-10T18:35:47Z

NIFI-1996: Fixed bug in the generation of UUID's for components when 
dealing with Snippets






[GitHub] nifi pull request #517: NIFI-1994: Fixed issues with controller services and...

2016-06-09 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/517

NIFI-1994: Fixed issues with controller services and templates

Fixed issue with Controller Service Fully Qualified Class Names and ensure 
that services are added to the process groups as appropriate when instantiating 
templates

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-1994

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/517.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #517


commit a5fecda5a2ffb35e21d950aa19a07127e19a419e
Author: Bryan Rosander <brosan...@hortonworks.com>
Date:   2016-05-27T14:56:02Z

NIFI-1975 - Processor for parsing evtx files

Signed-off-by: Matt Burgess <mattyb...@apache.org>

This closes #492

commit c120c4982d4fc811b06b672e3983b8ca5fb8ae64
Author: Koji Kawamura <ijokaruma...@gmail.com>
Date:   2016-06-06T13:19:26Z

NIFI-1857: HTTPS Site-to-Site

- Enable HTTP(S) for Site-to-Site communication
- Support HTTP Proxy in the middle of local and remote NiFi
- Support BASIC and DIGEST auth with Proxy Server
- Provide 2-phase style commit same as existing socket version
- [WIP] Testing with the latest cluster env (without NCM) hasn't been done yet

- Fixed buffer handling issues in async http client POST
- Fixed JS error when applying Remote Process Group Port setting from UI
- Use compression setting from UI
- Removed already finished TODO comments

- Added additional buffer draining code after receiving EOF
- Added inspection and assert code to make sure Site-to-Site client has
  written data fully to output
stream
- Changed default nifi.remote.input.secure from true to false

This closes #497.

commit bfebe76d17b2024c8ae90fd3837df71ba77d
Author: Mark Payne <marka...@hotmail.com>
Date:   2016-06-10T00:39:29Z

NIFI-1994: Fixed issue with Controller Service Fully Qualified Class Names 
and ensure that services are added to the process groups as appropriate when 
instantiating templates






[GitHub] nifi issue #497: NIFI-1857: HTTPS Site-to-Site

2016-06-09 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/497
  
@ijokarumawak I added the "This closes #497" message to the commit that I 
pushed, but it doesn't seem to have worked... can you close the PR?




[GitHub] nifi issue #497: NIFI-1857: HTTPS Site-to-Site

2016-06-09 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/497
  
@ijokarumawak this is looking good now! I have pulled down the latest PR, 
rebased against master, and have been able to test this running directly 
against my NiFi instance and while using nginx as a proxy. Nicely done! I have 
pushed this to master.




[GitHub] nifi issue #439: NIFI-1866 ProcessException handling in StandardProcessSessi...

2016-06-08 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/439
  
@pvillard31 I got this merged into both master and 0.x baselines. Thanks 
for knocking this out!!




[GitHub] nifi issue #497: NIFI-1857: HTTPS Site-to-Site

2016-06-08 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/497
  
@ijokarumawak I checked out the new PR and tried the test again. I updated 
nifi.properties only to set secure = false for site-to-site (I think this needs 
to be the default because, as-is, NiFi doesn't start up out of the box).
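
Concretely, the setting referenced here (the key is named in the commit notes 
above) is a one-line change in nifi.properties:

```
# nifi.properties -- with secure site-to-site disabled, HTTP(S)
# site-to-site works against a plain-HTTP NiFi instance out of the box
nifi.remote.input.secure=false
```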

I did get further this time, and saw the receiving side trying to receive 
data but still got Exceptions and no data coming through. The logs show:

```
2016-06-08 10:39:04,660 ERROR [Timer-Driven Process Thread-10] 
o.a.nifi.remote.StandardRemoteGroupPort 
RemoteGroupPort[name=Log,target=http://localhost:8080/nifi] failed to 
communicate with remote NiFi instance due to java.io.IOException: Failed to 
confirm transaction with Peer[url=http://127.0.0.1:8080/nifi-api] due to 
java.io.IOException: Unexpected response code: 500 errCode:Abort 
errMessage:Server encountered an exception.
2016-06-08 10:39:05,668 ERROR [NiFi Web Server-24] 
o.apache.nifi.web.api.SiteToSiteResource Unexpected exception occurred. 
clientId=a252a4c6-5a5f-42f3-8270-322923b8c118, 
portId=30638675-e655-4719-b684-905ad0d49eac
2016-06-08 10:39:05,671 ERROR [NiFi Web Server-24] 
o.apache.nifi.web.api.SiteToSiteResource Exception detail:
org.apache.nifi.processor.exception.ProcessException: 
org.apache.nifi.processor.exception.FlowFileAccessException: Failed to import 
data from org.apache.nifi.stream.io.MinimumLengthInputStream@75b15a7e for 
StandardFlowFileRecord[uuid=d8046125-80be-4475-bb43-b15cf6dec4d8,claim=,offset=0,name=531446421738053,size=0]
 due to org.apache.nifi.processor.exception.FlowFileAccessException: Unable to 
create ContentClaim due to java.io.EOFException
at 
org.apache.nifi.remote.StandardRootGroupPort.receiveFlowFiles(StandardRootGroupPort.java:503)
 ~[nifi-site-to-site-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
at 
org.apache.nifi.web.api.SiteToSiteResource.receiveFlowFiles(SiteToSiteResource.java:418)
 ~[classes/:na]
at sun.reflect.GeneratedMethodAccessor348.invoke(Unknown Source) 
~[na:na]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[na:1.8.0_60]
at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_60]
at 
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
 [jersey-server-1.19.jar:1.19]
at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
 [jersey-server-1.19.jar:1.19]
at 
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
 [jersey-server-1.19.jar:1.19]
at 
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
 [jersey-server-1.19.jar:1.19]
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 [jersey-server-1.19.jar:1.19]
at 
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 [jersey-server-1.19.jar:1.19]
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 [jersey-server-1.19.jar:1.19]
at 
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
 [jersey-server-1.19.jar:1.19]
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
 [jersey-server-1.19.jar:1.19]
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
 [jersey-server-1.19.jar:1.19]
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
 [jersey-server-1.19.jar:1.19]
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
 [jersey-server-1.19.jar:1.19]
at 
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
 [jersey-servlet-1.19.jar:1.19]
at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
 [jersey-servlet-1.19.jar:1.19]
at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
 [jersey-servlet-1.19.jar:1.19]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) 
[javax.servlet-api-3.1.0.jar:3.1.0]
at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:845) 
[jetty-servlet-9.3.9.v20160517.jar:9.3.9.v20160517]
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1689)
 [jetty-servlet-9.3.9.v20160517.jar:9.3.9.v20160517]
at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:51) 
[jetty-servlets-9.3.9.v20160517.jar:9.3.9.v20160517
```

[GitHub] nifi pull request #510: NIFI-1984: Ensure that locks are always cleaned up b...

2016-06-08 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/510

NIFI-1984: Ensure that locks are always cleaned up by NaiveRevisionManager

Ensure that if an Exception is thrown by the 'Deletion Task' when calling 
NaiveRevisionManager.deleteRevision() that the locking is appropriately cleaned 
up.

Looking through the code, there do not appear to be any other places where 
we invoke callbacks without handling them appropriately with a try/finally or 
try/catch block.
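
A sketch of the pattern the fix applies (the names below are illustrative, 
not the actual NaiveRevisionManager API): the lock must be released in a 
`finally` block so that it is cleaned up even when the supplied task throws.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.locks.ReentrantLock;

class RevisionLockSketch {
    private final ReentrantLock lock = new ReentrantLock();

    // Run the task while holding the lock; the finally block guarantees
    // the lock is released even if task.call() throws.
    public <T> T withWriteLock(Callable<T> task) throws Exception {
        lock.lock();
        try {
            return task.call();
        } finally {
            lock.unlock();
        }
    }

    public boolean isLocked() {
        return lock.isLocked();
    }
}
```

If the `finally` were missing, a throwing "deletion task" would leave the 
lock held forever and block every later caller.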

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-1984

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/510.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #510


commit 1d46b5431bf2eca0298d2f2fdc9854ef3f9fedfa
Author: Mark Payne <marka...@hotmail.com>
Date:   2016-06-08T12:57:37Z

NIFI-1984: Ensure that if an Exception is thrown by the 'Deletion Task' 
when calling NaiveRevisionManager.deleteRevision() that the locking is 
appropriately cleaned up






[GitHub] nifi issue #501: NIFI-1974 - Support Custom Properties in Expression Languag...

2016-06-07 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/501
  
@YolandaMDavis I left several comments inline. Only other comment that I 
have is that it feels a little weird to me to have `ControllerServiceLookup` 
extend `VariableRegistryProvider`. These are really unrelated concepts... any 
insight as to why you went that route?

Otherwise, all is looking great! Certainly not a trivial addition to the 
codebase. Thanks for sticking with it to get all of this knocked out!




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-07 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66148247
  
--- Diff: nifi-api/src/test/resources/TestVariableRegistry/test.properties 
---
@@ -0,0 +1,2 @@
+fake.property.1=test me out 1
--- End diff --

Same comment as above




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-07 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66148213
  
--- Diff: 
nifi-api/src/test/resources/TestVariableRegistry/foobar.properties ---
@@ -0,0 +1 @@
+fake.property.3=test me out 3, test me out 4
--- End diff --

I believe this file should have the ASF license information in a header. 
This would also alleviate the need to add the rat-check exclusions in the 
pom.xml




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-07 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66148034
  
--- Diff: 
nifi-api/src/test/java/org/apache/nifi/registry/TestVariableRegistry.java ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.registry;
+
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Properties;
+
+import org.junit.Test;
+
+import static org.junit.Assert.assertTrue;
+
+public class TestVariableRegistry {
+
+@Test
+public void testReadMap(){
+Map<String,String> variables1 = new HashMap<>();
+variables1.put("fake.property.1","fake test value");
+
+Map<String,String> variables2 = new HashMap<>();
+variables1.put("fake.property.2","fake test value");
+
+VariableRegistry registry = 
VariableRegistryFactory.getInstance(variables1,variables2);
+
+Map<String,String> variables = registry.getVariables();
+assertTrue(variables.size() == 2);
+assertTrue(variables.get("fake.property.1").equals("fake test 
value"));
+
assertTrue(registry.getVariableValue("fake.property.2").equals("fake test 
value"));
+}
+
+@Test
+public void testReadProperties(){
+Properties properties = new Properties();
+properties.setProperty("fake.property.1","fake test value");
+VariableRegistry registry = 
VariableRegistryFactory.getInstance(properties);
+Map<String,String> variables = registry.getVariables();
+assertTrue(variables.get("fake.property.1").equals("fake test 
value"));
+}
+
+@Test
+public void testReadFiles(){
+final Path fooPath = 
Paths.get("src/test/resources/TestVariableRegistry/foobar.properties");
+final Path testPath = 
Paths.get("src/test/resources/TestVariableRegistry/test.properties");
+VariableRegistry registry = 
VariableRegistryFactory.getInstance(fooPath.toFile(),testPath.toFile());
+Map<String,String> variables = registry.getVariables();
+assertTrue(variables.size() == 3);
+assertTrue(variables.get("fake.property.1").equals("test me out 
1"));
+assertTrue(variables.get("fake.property.3").equals("test me out 3, 
test me out 4"));
+}
+
+@Test
+public void testReadPaths(){
+final Path fooPath = 
Paths.get("src/test/resources/TestVariableRegistry/foobar.properties");
+final Path testPath = 
Paths.get("src/test/resources/TestVariableRegistry/test.properties");
+VariableRegistry registry = 
VariableRegistryFactory.getInstance(fooPath,testPath);
+Map<String,String> variables = registry.getVariables();
+assertTrue(variables.size() == 3);
+assertTrue(variables.get("fake.property.1").equals("test me out 
1"));
+assertTrue(variables.get("fake.property.3").equals("test me out 3, 
test me out 4"));
+}
+
+@Test
+public void testAddRegistry(){
+
+final Map<String,String> variables1 = new HashMap<>();
+variables1.put("fake.property.1","fake test value");
+
+
+final Path fooPath = 
Paths.get("src/test/resources/TestVariableRegistry/foobar.properties");
+VariableRegistry pathRegistry = 
VariableRegistryFactory.getInstance(fooPath);
+
+final Path testPath = 
Paths.get("src/test/resources/TestVariableRegistry/test.properties");
+VariableRegistry fileRegistry = 
VariableRegistryFact

[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-07 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66147759
  
--- Diff: 
nifi-api/src/main/java/org/apache/nifi/registry/VariableRegistryUtils.java ---
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.registry;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.nifi.flowfile.FlowFile;
+
+public class VariableRegistryUtils {
+
+public static VariableRegistry createVariableRegistry(){
+VariableRegistry variableRegistry = 
VariableRegistryFactory.getInstance();
+VariableRegistry envRegistry = 
VariableRegistryFactory.getInstance(System.getenv());
+VariableRegistry propRegistry = 
VariableRegistryFactory.getInstance(System.getProperties());
+variableRegistry.addRegistry(envRegistry);
+variableRegistry.addRegistry(propRegistry);
+return variableRegistry;
+}
+
+public static VariableRegistry populateRegistry(VariableRegistry 
variableRegistry, final FlowFile flowFile, final Map<String, String> 
additionalAttributes){
+final Map<String, String> flowFileAttributes = flowFile == null ? 
Collections.<String, String> emptyMap() : flowFile.getAttributes();
+final Map<String, String> additionalOrEmpty = additionalAttributes 
== null ? Collections.<String, String> emptyMap() : additionalAttributes;
+
+final Map<String, String> flowFileProps = new HashMap<>();
+if (flowFile != null) {
+flowFileProps.put("flowFileId", 
String.valueOf(flowFile.getId()));
+flowFileProps.put("fileSize", 
String.valueOf(flowFile.getSize()));
+flowFileProps.put("entryDate", 
String.valueOf(flowFile.getEntryDate()));
+flowFileProps.put("lineageStartDate", 
String.valueOf(flowFile.getLineageStartDate()));
+}
+VariableRegistry newRegistry = 
VariableRegistryFactory.getInstance();
+newRegistry.addRegistry(variableRegistry);
+
newRegistry.addRegistry(VariableRegistryFactory.getInstance(flowFileAttributes));
+
newRegistry.addRegistry(VariableRegistryFactory.getInstance(additionalOrEmpty));
--- End diff --

I would avoid adding this at all if `additionalAttributes == null`, rather 
than adding an empty map. Since the EL is used in performance-critical parts of 
the code, we need to avoid any overhead that is not necessary. I would suggest 
the same for `flowFileProps`.
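
A sketch of the guard being suggested (the class and method names are 
hypothetical, not the VariableRegistry API under review): skip registering a 
layer entirely when there is nothing in it, rather than paying a lookup miss 
on every resolution for an empty map.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class LayeredRegistrySketch {
    private final List<Map<String, String>> layers = new ArrayList<>();

    void addLayer(Map<String, String> vars) {
        // Guard: a null or empty layer adds per-lookup overhead for
        // nothing, so don't register it at all.
        if (vars != null && !vars.isEmpty()) {
            layers.add(vars);
        }
    }

    int layerCount() {
        return layers.size();
    }
}
```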




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-07 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66147340
  
--- Diff: 
nifi-api/src/main/java/org/apache/nifi/registry/VariableRegistryUtils.java ---
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.registry;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.nifi.flowfile.FlowFile;
+
+public class VariableRegistryUtils {
+
+public static VariableRegistry createVariableRegistry(){
+VariableRegistry variableRegistry = 
VariableRegistryFactory.getInstance();
+VariableRegistry envRegistry = 
VariableRegistryFactory.getInstance(System.getenv());
+VariableRegistry propRegistry = 
VariableRegistryFactory.getInstance(System.getProperties());
+variableRegistry.addRegistry(envRegistry);
+variableRegistry.addRegistry(propRegistry);
--- End diff --

I believe from reading the code that this means that properties defined in 
'propRegistry' will take precedence over those defined in 'envRegistry' - is 
that correct? It probably makes sense to call out in the JavaDocs the order of 
precedence that will be used if there are conflicting variable names.
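
A minimal sketch of the precedence question (not the actual VariableRegistry 
API, and the PR's real semantics may differ - that is exactly what the 
question asks): if later-added registries overwrite earlier ones on 
conflicting keys, registration order becomes the de-facto contract worth 
spelling out in the JavaDocs.

```java
import java.util.LinkedHashMap;
import java.util.Map;

class PrecedenceSketch {
    private final Map<String, String> merged = new LinkedHashMap<>();

    // Layers added later overwrite keys from earlier layers, so the
    // last-registered source wins on conflicts.
    void addRegistry(Map<String, String> layer) {
        merged.putAll(layer);
    }

    String getVariableValue(String name) {
        return merged.get(name);
    }
}
```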




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-07 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66146877
  
--- Diff: 
nifi-api/src/main/java/org/apache/nifi/registry/VariableRegistryFactory.java ---
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.registry;
+
+import java.io.File;
+import java.nio.file.Path;
+import java.util.Map;
+import java.util.Properties;
+
+public class VariableRegistryFactory {
+
+public static VariableRegistry getInstance(final 
Properties...properties){
+return new PropertiesVariableRegistry(properties);
+}
+
+public static VariableRegistry getInstance(final Path... paths){
--- End diff --

Does it make sense to perhaps use a different name than 'getInstance' here? 
I suggest this only because the File Variable Registry is abstract, so there 
could potentially be multiple implementations, so PropertiesVariableRegistry 
may not always be the correct implementation to use. Perhaps call it 
`getPropertiesInstance` or something like that? Same for `File...` argument 
below.




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-07 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66146086
  
--- Diff: 
nifi-api/src/main/java/org/apache/nifi/registry/VariableRegistry.java ---
@@ -0,0 +1,27 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.registry;
+
+
+import java.util.Map;
+
+public interface VariableRegistry {
+
+Map<String, String> getVariables();
--- End diff --

It seems useful to me to include a `Set<String> getVariableNames()` method, 
perhaps?
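A minimal sketch of what the suggested addition could look like. The interface name is hypothetical and the JavaDoc wording is only illustrative of the documentation requested elsewhere in this review; a `default` method would keep any existing implementations source-compatible.

```java
import java.util.Map;
import java.util.Set;

/**
 * Sketch (not the final NiFi API) of a variable registry: a read-only
 * view of named variables available to Expression Language evaluation.
 */
public interface VariableRegistrySketch {

    /** Returns all known variables, keyed by variable name. */
    Map<String, String> getVariables();

    /** Convenience accessor for just the variable names. */
    default Set<String> getVariableNames() {
        return getVariables().keySet();
    }
}
```

Because `getVariableNames()` has a default implementation, implementors only need to supply `getVariables()`.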




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-07 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66145978
  
--- Diff: 
nifi-api/src/main/java/org/apache/nifi/registry/VariableRegistry.java ---
@@ -0,0 +1,27 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.registry;
+
+
+import java.util.Map;
+
+public interface VariableRegistry {
--- End diff --

Should add JavaDoc explaining what the Variable Registry is and what it's 
used for




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-07 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66145563
  
--- Diff: nifi-api/src/main/java/org/apache/nifi/registry/MultiMap.java ---
@@ -0,0 +1,154 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.registry;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+public class MultiMap<K,V> implements Map<K,V> {
+
+private final List<Map<K,V>> maps;
+
+MultiMap() {
+this.maps = new ArrayList<>();
+}
+
+@Override
+public int size() {
+int size = 0;
+for (final Map<K,V> map : maps) {
+size += map.size();
+}
+return size;
--- End diff --

Or, more simply, return `keySet().size()`.
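The distinction matters because summing the sub-map sizes counts a key twice when more than one layered map defines it, while `keySet().size()` matches the `Map` contract of counting distinct keys. A small sketch (hypothetical class, not the PR code) illustrating the difference:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/** Demonstrates why summing sub-map sizes overcounts duplicate keys. */
public class MultiMapSizeSketch {

    /** The approach in the quoted diff: adds each sub-map's size. */
    public static int summedSize(List<Map<String, String>> maps) {
        int size = 0;
        for (Map<String, String> map : maps) {
            size += map.size(); // "host" below is counted twice
        }
        return size;
    }

    /** Counting distinct keys, equivalent to keySet().size(). */
    public static int distinctSize(List<Map<String, String>> maps) {
        Set<String> keys = new HashSet<>();
        for (Map<String, String> map : maps) {
            keys.addAll(map.keySet());
        }
        return keys.size();
    }

    public static void main(String[] args) {
        Map<String, String> a = new HashMap<>();
        a.put("host", "localhost");

        Map<String, String> b = new HashMap<>();
        b.put("host", "nifi.example.com");
        b.put("port", "8080");

        List<Map<String, String>> maps = List.of(a, b);
        System.out.println(summedSize(maps));   // 3 (overcounts the shared key)
        System.out.println(distinctSize(maps)); // 2 (matches Map semantics)
    }
}
```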




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-07 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66145081
  
--- Diff: nifi-api/src/main/java/org/apache/nifi/registry/MultiMap.java ---
@@ -0,0 +1,154 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.registry;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+public class MultiMap<K,V> implements Map<K,V> {
--- End diff --

From below, it appears that we are requiring that the key be of type 
String, so I would recommend we change this to:
`public class MultiMap<V> implements Map<String, V>`




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-07 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66143984
  
--- Diff: 
nifi-api/src/main/java/org/apache/nifi/registry/FileVariableRegistry.java ---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.registry;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.Path;
+import java.util.Map;
+
+
+public abstract class FileVariableRegistry extends 
MultiMapVariableRegistry {
+
+public FileVariableRegistry() {
+super();
+}
+
+public FileVariableRegistry(File... files){
+super();
+addVariables(files);
+}
+
+public FileVariableRegistry(Path... paths){
+super();
+addVariables(paths);
+}
+
+@SuppressWarnings({"unchecked", "rawtypes"})
+public void addVariables(File ...files){
+if(files != null) {
+for (final File file : files) {
+try {
+registry.addMap(convertFile(file));
+} catch (IOException iex) {
+throw new IllegalArgumentException("A file provided 
was invalid.", iex);
+}
+}
+}
+}
+
+@SuppressWarnings({"unchecked", "rawtypes"})
+public void addVariables(Path ...paths){
+if(paths != null) {
+for (final Path path : paths) {
+try {
+registry.addMap(convertFile(path.toFile()));
+} catch (IOException iex) {
+throw new IllegalArgumentException("A path provided 
was invalid.", iex);
+}
+}
+}
+}
+
+protected abstract Map convertFile(File file) throws IOException;
--- End diff --

Should this be parameterized, as `Map<String, String>`?




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-07 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66143013
  
--- Diff: 
nifi-api/src/main/java/org/apache/nifi/registry/FileVariableRegistry.java ---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.registry;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.Path;
+import java.util.Map;
+
+
+public abstract class FileVariableRegistry extends 
MultiMapVariableRegistry {
+
+public FileVariableRegistry() {
+super();
+}
+
+public FileVariableRegistry(File... files){
+super();
+addVariables(files);
+}
+
+public FileVariableRegistry(Path... paths){
+super();
+addVariables(paths);
+}
+
+@SuppressWarnings({"unchecked", "rawtypes"})
+public void addVariables(File ...files){
+if(files != null) {
+for (final File file : files) {
+try {
+registry.addMap(convertFile(file));
+} catch (IOException iex) {
+throw new IllegalArgumentException("A file provided 
was invalid.", iex);
--- End diff --

Is there a specific reason that we want to wrap the checked IOException 
into an Unchecked Exception here?




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-07 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66142811
  
--- Diff: 
nifi-api/src/main/java/org/apache/nifi/registry/FileVariableRegistry.java ---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.registry;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.Path;
+import java.util.Map;
+
+
+public abstract class FileVariableRegistry extends 
MultiMapVariableRegistry {
+
+public FileVariableRegistry() {
+super();
+}
+
+public FileVariableRegistry(File... files){
+super();
+addVariables(files);
+}
+
+public FileVariableRegistry(Path... paths){
+super();
+addVariables(paths);
+}
+
+@SuppressWarnings({"unchecked", "rawtypes"})
+public void addVariables(File ...files){
+if(files != null) {
+for (final File file : files) {
+try {
+registry.addMap(convertFile(file));
+} catch (IOException iex) {
+throw new IllegalArgumentException("A file provided 
was invalid.", iex);
+}
+}
+}
+}
+
+@SuppressWarnings({"unchecked", "rawtypes"})
+public void addVariables(Path ...paths){
+if(paths != null) {
+for (final Path path : paths) {
+try {
+registry.addMap(convertFile(path.toFile()));
+} catch (IOException iex) {
+throw new IllegalArgumentException("A path provided 
was invalid.", iex);
--- End diff --

Same comment as above - should include the filename for debugging purposes.




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-07 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66142774
  
--- Diff: 
nifi-api/src/main/java/org/apache/nifi/registry/FileVariableRegistry.java ---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.registry;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.Path;
+import java.util.Map;
+
+
+public abstract class FileVariableRegistry extends 
MultiMapVariableRegistry {
+
+public FileVariableRegistry() {
+super();
+}
+
+public FileVariableRegistry(File... files){
+super();
+addVariables(files);
+}
+
+public FileVariableRegistry(Path... paths){
+super();
+addVariables(paths);
+}
+
+@SuppressWarnings({"unchecked", "rawtypes"})
+public void addVariables(File ...files){
+if(files != null) {
+for (final File file : files) {
+try {
+registry.addMap(convertFile(file));
+} catch (IOException iex) {
+throw new IllegalArgumentException("A file provided 
was invalid.", iex);
--- End diff --

This Exception should indicate the filename of the file that was invalid
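A sketch of the error handling the comment asks for, with hypothetical stand-ins for the PR's `convertFile`/`addVariables`: the unchecked wrapper names the file that failed, so the resulting stack trace is actually debuggable.

```java
import java.io.File;
import java.io.IOException;

/**
 * Hypothetical sketch (not the PR code) of wrapping a checked IOException
 * in an unchecked exception whose message identifies the offending file.
 */
public class FileErrorSketch {

    public static void addVariables(File... files) {
        if (files == null) {
            return;
        }
        for (final File file : files) {
            try {
                convertFile(file);
            } catch (IOException iex) {
                // Include the absolute path so the failure is traceable
                throw new IllegalArgumentException(
                    "Unable to load variables from file " + file.getAbsolutePath(), iex);
            }
        }
    }

    /** Stand-in for the abstract conversion method in the PR. */
    private static void convertFile(File file) throws IOException {
        if (!file.canRead()) {
            throw new IOException("Cannot read " + file);
        }
    }

    public static void main(String[] args) {
        try {
            addVariables(new File("/definitely/missing.properties"));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // message names the file path
        }
    }
}
```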




[GitHub] nifi issue #506: NIFI-1660 - Enhance the expression language with jsonPath f...

2016-06-07 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/506
  
Hey @ckmcd I just realized that you had a different PR for the 0.x branch. 
I ended up just creating a patch from PR 303 and applying that to the 0.x 
baseline. So this should all be taken care of. Do you mind manually closing out 
this PR?




[GitHub] nifi issue #303: NIFI-1660 - Enhance the expression language with jsonPath f...

2016-06-07 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/303
  
@ckmcd looks good! I was able to test it locally. Unit tests are good. I 
did find that a Reader was opened in a unit test and not closed, which I 
addressed, and a couple of NOTICE files needed to be updated. Otherwise, all is 
good. Pushed to master and 0.x branches. Thanks for contributing this back!




[GitHub] nifi issue #497: NIFI-1857: HTTPS Site-to-Site

2016-06-07 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/497
  
@ijokarumawak I can't seem to get this to work properly. I configured nifi 
to allow http site-to-site only, not secure. I then connected a 
GenerateFlowFile processor to a Remote Process Group that points back to 
http://localhost:8080/nifi with the only port that I have. I ensured that the 
port was running and transferring data to an UpdateAttribute processor that 
ends the flow. When I start sending data to the RPG, though, no data is sent, 
and the logs show the following every 60 seconds:

```
2016-06-07 12:14:44,902 WARN [I/O dispatcher 89] 
o.a.n.r.util.SiteToSiteRestApiClient Didn't received additional packet to send, 
nor confirm transaction call for 30001 millis which exceeds idle connection 
expiration millis(3). This transaction will timeout.
2016-06-07 12:14:44,902 ERROR [I/O dispatcher 89] 
o.a.n.r.util.SiteToSiteRestApiClient Sending data to 
http://127.0.0.1:8080/nifi-api/site-to-site/input-ports/84cd8e54-68ba-41ff-a1a3-ffc445836a8f/transactions/06ca-3912-4a8f-9e81-fff68d8c0e96/flow-files
 has failed
java.io.IOException: Didn't received additional packet to send, nor confirm 
transaction call for 30001 millis
at 
org.apache.nifi.remote.util.SiteToSiteRestApiClient$3.produceContent(SiteToSiteRestApiClient.java:504)
 ~[nifi-site-to-site-client-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
at 
org.apache.http.impl.nio.client.MainClientExec.produceContent(MainClientExec.java:262)
 ~[httpasyncclient-4.1.1.jar:4.1.1]
at 
org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.produceContent(DefaultClientExchangeHandlerImpl.java:136)
 ~[httpasyncclient-4.1.1.jar:4.1.1]
at 
org.apache.http.nio.protocol.HttpAsyncRequestExecutor.outputReady(HttpAsyncRequestExecutor.java:240)
 ~[httpcore-nio-4.4.4.jar:4.4.4]
at 
org.apache.http.impl.nio.DefaultNHttpClientConnection.produceOutput(DefaultNHttpClientConnection.java:292)
 ~[httpcore-nio-4.4.4.jar:4.4.4]
at 
org.apache.http.impl.nio.client.InternalIODispatch.onOutputReady(InternalIODispatch.java:86)
 [httpasyncclient-4.1.1.jar:4.1.1]
at 
org.apache.http.impl.nio.client.InternalIODispatch.onOutputReady(InternalIODispatch.java:39)
 [httpasyncclient-4.1.1.jar:4.1.1]
at 
org.apache.http.impl.nio.reactor.AbstractIODispatch.outputReady(AbstractIODispatch.java:147)
 [httpcore-nio-4.4.4.jar:4.4.4]
at 
org.apache.http.impl.nio.reactor.BaseIOReactor.writable(BaseIOReactor.java:190) 
[httpcore-nio-4.4.4.jar:4.4.4]
at 
org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:343)
 [httpcore-nio-4.4.4.jar:4.4.4]
at 
org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:317)
 [httpcore-nio-4.4.4.jar:4.4.4]
at 
org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:278)
 [httpcore-nio-4.4.4.jar:4.4.4]
at 
org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:106) 
[httpcore-nio-4.4.4.jar:4.4.4]
at 
org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:590)
 [httpcore-nio-4.4.4.jar:4.4.4]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
```




[GitHub] nifi pull request #499: NIFI-1052: Added "Ghost" Processors, Reporting Tasks...

2016-06-06 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/499

NIFI-1052: Added "Ghost" Processors, Reporting Tasks, Controller Services

If we try to create a component for which the NAR is missing, we previously 
would throw an Exception that would result in NiFi not starting up. We changed 
this so that we instead create a "Ghost" implementation that will be invalid and 
will explain that the component could not be created. This allows NiFi at least to 
start, so that users can continue to use the NiFi instance.

This ticket also includes fixes to the ReportingTaskResource, as those were 
necessary to test this.

Note that if a component is missing, all properties are marked as 
'sensitive' simply because we don't know whether or not the property truly is 
sensitive, and it is better to show the value as sensitive than to assume that 
it is not. Unfortunately, this means that the actual property value can't be 
seen unless the correct component is restored to the lib/ directory.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-1052

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/499.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #499


commit d86fd959f3ba20facdfc207b80da5fb48feaf379
Author: Mark Payne <marka...@hotmail.com>
Date:   2016-06-03T23:41:43Z

NIFI-1052: Added Ghost Processors, Ghost Reporting Tasks, Ghost Controller 
Services






[GitHub] nifi issue #491: NIFI-1960: Update admin guide regarding documentation for c...

2016-06-03 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/491
  
Thanks, @alopresto. I have merged to master. I am sure that we will end up 
updating this a few more times before 1.0.0 is released, but at least I wanted 
to get this into master so that those trying to setup clustering could get 
started.




[GitHub] nifi pull request #491: NIFI-1960: Update admin guide regarding documentatio...

2016-06-03 Thread markap14
Github user markap14 closed the pull request at:

https://github.com/apache/nifi/pull/491




[GitHub] nifi issue #458: NIFI-1829 - Create new DebugFlow processor.

2016-06-03 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/458
  
I'm having trouble understanding the naming convention here. "DebugFlow" 
would indicate that it is debugging the flow that you are putting together, but 
the comment states "Create DebugFlow processor for use in testing and 
troubleshooting the processor framework." So it appears that the desire is to 
debug the framework?

I also wonder if this is something that should be in the nifi codebase, as 
it really seems like a niche processor that is intended to test very specific 
things and not something that we would expect typical users to use. I may be 
wrong, but would GitHub be a more appropriate place for this?




[GitHub] nifi pull request #491: NIFI-1960: Update admin guide regarding documentatio...

2016-06-03 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/491#discussion_r65757147
  
--- Diff: nifi-docs/src/main/asciidoc/administration-guide.adoc ---
@@ -1351,66 +1389,81 @@ in the file specified in 
`nifi.login.identity.provider.configuration.file`. Sett
 |nifi.security.ocsp.responder.certificate|This is the location of the OCSP 
responder certificate if one is being used. It is blank by default.
 |
 
-*Cluster Common Properties* +
+ Cluster Common Properties
 
-When setting up a NiFi cluster, these properties should be configured the 
same way on both the cluster manager and the nodes.
+When setting up a NiFi cluster, these properties should be configured the 
same way on all nodes.
 
 |
 |*Property*|*Description*
-|nifi.cluster.protocol.heartbeat.interval|The interval at which nodes 
should emit heartbeats to the cluster manager. The default value is 5 sec.
+|nifi.cluster.protocol.heartbeat.interval|The interval at which nodes 
should emit heartbeats to the Cluster Coordinator. The default value is 5 sec.
 |nifi.cluster.protocol.is.secure|This indicates whether cluster 
communications are secure. The default value is _false_.
-|nifi.cluster.protocol.socket.timeout|The amount of time to wait for a 
cluster protocol socket to be established before trying again. The default 
value is 30 sec.
-|nifi.cluster.protocol.connection.handshake.timeout|The amount of time to 
wait for a node to connect to the cluster. The default value is 45 sec.
+|nifi.cluster.node.event.history.size|When the state of a node in the 
cluster is changed, an event is generated
--- End diff --

Yes - I updated this in the new commit.




[GitHub] nifi pull request #491: NIFI-1960: Update admin guide regarding documentatio...

2016-06-03 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/491#discussion_r65755845
  
--- Diff: nifi-docs/src/main/asciidoc/administration-guide.adoc ---
@@ -485,98 +485,137 @@ It is preferable to request upstream/downstream 
systems to switch to https://cwi
 Clustering Configuration
 
 
-This section provides a quick overview of NiFi Clustering and instructions 
on how to set up a basic cluster. In the future, we hope to provide 
supplemental documentation that covers the NiFi Cluster Architecture in depth.
-
-The design of NiFi clustering is a simple master/slave model where there 
is a master and one or more slaves.
-While the model is that of master and slave, if the master dies, the 
slaves are all instructed to continue operating
-as they were to ensure the dataflow remains live. The absence of the 
master simply means new slaves cannot join the
-cluster and cluster flow changes cannot occur until the master is 
restored. In NiFi clustering, we call the master
-the NiFi Cluster Manager (NCM), and the slaves are called Nodes. See a 
full description of each in the Terminology section below.
+This section provides a quick overview of NiFi Clustering and instructions 
on how to set up a basic cluster.
+In the future, we hope to provide supplemental documentation that covers 
the NiFi Cluster Architecture in depth.
+
+NiFi employs a Zero-Master Clustering paradigm. Each of the nodes in the 
cluster performs the same tasks on
+the data but each operates on a different set of data. One of the nodes is 
automatically elected (via Apache
+ZooKeeper) as the Cluster Coordinator. All nodes in the cluster will then 
send heartbeat/status information
+to this node, and this node is responsible for disconnecting nodes that do 
not report any heartbeat status
+for some amount of time. Additionally, when a new node elects to join the 
cluster, the new node must first
+connect to the currently-elected Cluster Coordinator in order to obtain 
the most up-to-date flow. If the Cluster
+Coordinator determines that the node is allowed to join (based on its 
configured Firewall file), the current
+flow is provided to that node, and that node is able to join the cluster, 
assuming that the node's copy of the
+flow matches the copy provided by the Cluster Coordinator. If the node's 
version of the flow configuration differs
+from that of the Cluster Coordinator's, the node will not join the cluster.
 
 *Why Cluster?* +
 
-NiFi Administrators or Dataflow Managers (DFMs) may find that using one 
instance of NiFi on a single server is not enough to process the amount of data 
they have. So, one solution is to run the same dataflow on multiple NiFi 
servers. However, this creates a management problem, because each time DFMs 
want to change or update the dataflow, they must make those changes on each 
server and then monitor each server individually. By clustering the NiFi 
servers, it's possible to have that increased processing capability along with 
a single interface through which to make dataflow changes and monitor the 
dataflow. Clustering allows the DFM to make each change only once, and that 
change is then replicated to all the nodes of the cluster. Through the single 
interface, the DFM may also monitor the health and status of all the nodes.
+NiFi Administrators or Dataflow Managers (DFMs) may find that using one 
instance of NiFi on a single server is not
+enough to process the amount of data they have. So, one solution is to run 
the same dataflow on multiple NiFi servers.
+However, this creates a management problem, because each time DFMs want to 
change or update the dataflow, they must make
+those changes on each server and then monitor each server individually. By 
clustering the NiFi servers, it's possible to
+have that increased processing capability along with a single interface 
through which to make dataflow changes and monitor
+the dataflow. Clustering allows the DFM to make each change only once, and 
that change is then replicated to all the nodes
+of the cluster. Through the single interface, the DFM may also monitor the 
health and status of all the nodes.
 
 NiFi Clustering is unique and has its own terminology. It's important to 
understand the following terms before setting up a cluster.
 
 [template="glossary", id="terminology"]
 *Terminology* +
 
-*NiFi Cluster Manager*: A NiFi Cluster Manager (NCM) is an instance of 
NiFi that provides the sole management point for the cluster. It communicates 
dataflow changes to the nodes and receives health and status information from 
the nodes. It also ensures that a uniform dataflow is maintained across the 
cluster.  When DFMs manage a dataflow in a cluster, they do so through the User 
Interface o

[GitHub] nifi pull request #491: NIFI-1960: Update admin guide regarding documentatio...

2016-06-03 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/491#discussion_r65755030
  
--- Diff: nifi-docs/src/main/asciidoc/administration-guide.adoc ---
@@ -485,98 +485,137 @@ It is preferable to request upstream/downstream 
systems to switch to https://cwi
 Clustering Configuration
 
 
-This section provides a quick overview of NiFi Clustering and instructions 
on how to set up a basic cluster. In the future, we hope to provide 
supplemental documentation that covers the NiFi Cluster Architecture in depth.
-
-The design of NiFi clustering is a simple master/slave model where there 
is a master and one or more slaves.
-While the model is that of master and slave, if the master dies, the 
slaves are all instructed to continue operating
-as they were to ensure the dataflow remains live. The absence of the 
master simply means new slaves cannot join the
-cluster and cluster flow changes cannot occur until the master is 
restored. In NiFi clustering, we call the master
-the NiFi Cluster Manager (NCM), and the slaves are called Nodes. See a 
full description of each in the Terminology section below.
+This section provides a quick overview of NiFi Clustering and instructions 
on how to set up a basic cluster.
+In the future, we hope to provide supplemental documentation that covers 
the NiFi Cluster Architecture in depth.
+
+NiFi employs a Zero-Master Clustering paradigm. Each of the nodes in the 
cluster performs the same tasks on
+the data but each operates on a different set of data. One of the nodes is 
automatically elected (via Apache
+ZooKeeper) as the Cluster Coordinator. All nodes in the cluster will then 
send heartbeat/status information
+to this node, and this node is responsible for disconnecting nodes that do 
not report any heartbeat status
+for some amount of time. Additionally, when a new node elects to join the 
cluster, the new node must first
+connect to the currently-elected Cluster Coordinator in order to obtain 
the most up-to-date flow. If the Cluster
+Coordinator determines that the node is allowed to join (based on its 
configured Firewall file), the current
--- End diff --

This is a NiFi thing - not new, though. It has been around since clustering 
was initially introduced. Simply a text file that contains the IP addresses of 
nodes that are allowed to join the cluster.
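
For readers unfamiliar with that file: it is referenced from nifi.properties and contains one permitted address per line. The property name and sample values below are illustrative assumptions; verify the exact property against the Administration Guide for your NiFi release.

```shell
# In nifi.properties (assumed property name; verify against your release):
#   nifi.cluster.firewall.file=./conf/cluster-firewall.txt

# conf/cluster-firewall.txt -- one IP address or hostname per line.
# Only hosts listed here are permitted to join the cluster.
10.0.0.11
10.0.0.12
nifi-node3.example.com
```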


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi issue #303: NIFI-1660 - Enhance the expression language with jsonPath f...

2016-06-03 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/303
  
@ckmcd an example would be something like: 
${ hello:jsonPath( '$.${elementOfInterest}.items[0]') }

Here, we are referencing an attribute within the JSON Path, so it is not a 
String Literal.
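
As a rough sketch of why this matters: the embedded attribute reference is resolved first, and the resulting string is what jsonPath() receives. The class and helper below are hypothetical illustrations in plain Java of just that substitution phase, not NiFi's actual Expression Language engine.

```java
import java.util.Map;

// Illustration only: NiFi's Expression Language engine does the real work.
// We mimic just the attribute-substitution phase to show why the argument
// to jsonPath() is not a String literal.
public class NestedElSketch {

    // Replaces each ${name} reference with the corresponding attribute value.
    static String resolveAttributes(String expr, Map<String, String> attributes) {
        String result = expr;
        for (Map.Entry<String, String> e : attributes.entrySet()) {
            result = result.replace("${" + e.getKey() + "}", e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> attrs = Map.of("elementOfInterest", "books");
        // The inner reference resolves first, yielding the JSON Path that
        // hello:jsonPath(...) would actually evaluate.
        String jsonPathArg = resolveAttributes("$.${elementOfInterest}.items[0]", attrs);
        System.out.println(jsonPathArg); // $.books.items[0]
    }
}
```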





[GitHub] nifi pull request #491: NIFI-1960: Update admin guide regarding documentatio...

2016-06-03 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/491

NIFI-1960: Update admin guide regarding documentation for clustering



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-1960

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/491.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #491


commit 85c1b3c21dce1d585c7147af6fa6989bacd41acf
Author: Mark Payne <marka...@hotmail.com>
Date:   2016-06-03T18:02:35Z

NIFI-1960: Update admin guide regarding documentation for clustering






[GitHub] nifi issue #488: NIFI-1897: Refactoring to allow requests to be replicated f...

2016-06-03 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/488
  
The modifications you proposed look good to me. +1




[GitHub] nifi issue #472: NIFI-1265: Upgrading to Jetty 9.3

2016-06-02 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/472
  
@mcgilman I merged this to master but forgot to update the commit message. 
Can you close the PR?




[GitHub] nifi pull request #488: NIFI-1897: Refactoring to allow requests to be repli...

2016-06-02 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/488

NIFI-1897: Refactoring to allow requests to be replicated from a node to 
other nodes

Nodes are now capable of replicating requests across the cluster, and a 
Cluster Coordinator is auto-elected to monitor heartbeats and provide the 
up-to-date flow to newly joining nodes. The WebClusterManager and associated 
components have been removed!

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-1897

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/488.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #488


commit 6633904b6db6b75880ad6b5e185081423b644d16
Author: Mark Payne <marka...@hotmail.com>
Date:   2016-05-19T14:42:39Z

NIFI-1897: Refactoring to allow requests to be replicated from a node to 
other nodes






[GitHub] nifi pull request: NIFI-1554: Updating Cluster endpoints

2016-05-26 Thread markap14
Github user markap14 commented on the pull request:

https://github.com/apache/nifi/pull/470#issuecomment-221954497
  
+1 looks good!




[GitHub] nifi pull request: NIFI-1800: Providing access to Controller Servi...

2016-05-26 Thread markap14
Github user markap14 commented on the pull request:

https://github.com/apache/nifi/pull/469#issuecomment-221951022
  
+1 looks good to me.




[GitHub] nifi pull request: NIFI-1668 modified TestProcessorLifecycle to en...

2016-05-26 Thread markap14
Github user markap14 commented on the pull request:

https://github.com/apache/nifi/pull/324#issuecomment-221935156
  
Agreed - I think moving the FlowController lifecycle to @Before and @After 
should address the issues. I do think that the 
validateIdempotencyOfProcessorStartOperation() test is a bit odd, though. Lots 
of tasks are kicked off in the background, which will test (to some degree) 
thread safety. However, if the goal is to test idempotency, then we should just 
call the method in the foreground multiple successive times, not run it in the 
background.

If the goal is to test running it in the background and the associated 
thread-safety, then I'd recommend that rather than using the countdown latch we 
just do something like:

```
while (testProcessor.operationNames.isEmpty() || testProcNode.getScheduledState() != ScheduledState.RUNNING) {
    Thread.sleep(10L);
}
```

And then set a 10-second timeout on the test: @Test(timeout = 10000)
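
The suggested pattern, isolated from the NiFi test classes, looks roughly like this (PollUntilSketch and awaitTrue are illustrative names, not NiFi APIs); the surrounding @Test timeout is what bounds the wait if the condition never becomes true.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the polling pattern suggested above, decoupled from NiFi's test
// classes: spin until the observed state flips, relying on an outer test
// timeout (e.g. JUnit's @Test(timeout = 10000)) to bound the wait.
public class PollUntilSketch {

    // Polls the flag until it becomes true, sleeping between checks.
    static void awaitTrue(AtomicBoolean flag, long pollMillis) {
        while (!flag.get()) {
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        AtomicBoolean started = new AtomicBoolean(false);
        // Stands in for the background processor-start tasks in the real test.
        Thread background = new Thread(() -> started.set(true));
        background.start();
        awaitTrue(started, 10L);
        background.join();
        System.out.println("condition reached");
    }
}
```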




[GitHub] nifi pull request: NIFI-1781: UI authorization updates

2016-05-23 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/461#discussion_r64259242
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/SnippetResource.java
 ---
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.web.api;
+
+import com.wordnik.swagger.annotations.Api;
+import com.wordnik.swagger.annotations.ApiOperation;
+import com.wordnik.swagger.annotations.ApiParam;
+import com.wordnik.swagger.annotations.ApiResponse;
+import com.wordnik.swagger.annotations.ApiResponses;
+import com.wordnik.swagger.annotations.Authorization;
+import org.apache.nifi.authorization.Authorizer;
+import org.apache.nifi.authorization.RequestAction;
+import org.apache.nifi.authorization.resource.Authorizable;
+import org.apache.nifi.authorization.user.NiFiUser;
+import org.apache.nifi.authorization.user.NiFiUserUtils;
+import org.apache.nifi.cluster.manager.impl.WebClusterManager;
+import org.apache.nifi.controller.Snippet;
+import org.apache.nifi.util.NiFiProperties;
+import org.apache.nifi.web.NiFiServiceFacade;
+import org.apache.nifi.web.Revision;
+import org.apache.nifi.web.api.dto.SnippetDTO;
+import org.apache.nifi.web.api.entity.SnippetEntity;
+
+import javax.servlet.http.HttpServletRequest;
+import javax.ws.rs.Consumes;
+import javax.ws.rs.DELETE;
+import javax.ws.rs.HttpMethod;
+import javax.ws.rs.POST;
+import javax.ws.rs.PUT;
+import javax.ws.rs.Path;
+import javax.ws.rs.PathParam;
+import javax.ws.rs.Produces;
+import javax.ws.rs.core.Context;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.Response;
+import java.net.URI;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+/**
+ * RESTful endpoint for querying dataflow snippets.
+ */
+@Path("/snippets")
+@Api(
+value = "/snippets",
+description = "Endpoint for accessing dataflow snippets."
+)
+public class SnippetResource extends ApplicationResource {
+
+private NiFiServiceFacade serviceFacade;
+private WebClusterManager clusterManager;
+private NiFiProperties properties;
+private Authorizer authorizer;
+
+/**
+ * Populate the uri's for the specified snippet.
+ *
+ * @param entity processors
+ * @return dtos
+ */
+private SnippetEntity 
populateRemainingSnippetEntityContent(SnippetEntity entity) {
+if (entity.getSnippet() != null) {
+populateRemainingSnippetContent(entity.getSnippet());
+}
+return entity;
+}
+
+/**
+ * Populates the uri for the specified snippet.
+ */
+private SnippetDTO populateRemainingSnippetContent(SnippetDTO snippet) 
{
+String snippetGroupId = snippet.getParentGroupId();
+
+// populate the snippet href
+snippet.setUri(generateResourceUri("process-groups", 
snippetGroupId, "snippets", snippet.getId()));
+
+return snippet;
+}
+
+// 
+// snippets
+// 
+
+/**
+ * Creates a snippet based off the specified configuration.
+ *
+ * @param httpServletRequest request
+ * @param snippetEntity A snippetEntity
+ * @return A snippetEntity
+ */
+@POST
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+// TODO - @PreAuthorize("hasRole('ROLE_DFM')")
+@ApiOperation(
+value = "Creates a snippet",
+response = SnippetEntity.class,
+authorizations = {
+@Authorization(value = "Read Only", type = "ROLE_MONITOR"),
+@Authorization(value = "

[GitHub] nifi pull request: NIFI-1781: UI authorization updates

2016-05-23 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/461#discussion_r64259092
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/SnippetResource.java
 ---
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.web.api;
+
+import com.wordnik.swagger.annotations.Api;
+import com.wordnik.swagger.annotations.ApiOperation;
+import com.wordnik.swagger.annotations.ApiParam;
+import com.wordnik.swagger.annotations.ApiResponse;
+import com.wordnik.swagger.annotations.ApiResponses;
+import com.wordnik.swagger.annotations.Authorization;
+import org.apache.nifi.authorization.Authorizer;
+import org.apache.nifi.authorization.RequestAction;
+import org.apache.nifi.authorization.resource.Authorizable;
+import org.apache.nifi.authorization.user.NiFiUser;
+import org.apache.nifi.authorization.user.NiFiUserUtils;
+import org.apache.nifi.cluster.manager.impl.WebClusterManager;
+import org.apache.nifi.controller.Snippet;
+import org.apache.nifi.util.NiFiProperties;
+import org.apache.nifi.web.NiFiServiceFacade;
+import org.apache.nifi.web.Revision;
+import org.apache.nifi.web.api.dto.SnippetDTO;
+import org.apache.nifi.web.api.entity.SnippetEntity;
+
+import javax.servlet.http.HttpServletRequest;
+import javax.ws.rs.Consumes;
+import javax.ws.rs.DELETE;
+import javax.ws.rs.HttpMethod;
+import javax.ws.rs.POST;
+import javax.ws.rs.PUT;
+import javax.ws.rs.Path;
+import javax.ws.rs.PathParam;
+import javax.ws.rs.Produces;
+import javax.ws.rs.core.Context;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.Response;
+import java.net.URI;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+/**
+ * RESTful endpoint for querying dataflow snippets.
+ */
+@Path("/snippets")
+@Api(
+value = "/snippets",
+description = "Endpoint for accessing dataflow snippets."
+)
+public class SnippetResource extends ApplicationResource {
+
+private NiFiServiceFacade serviceFacade;
+private WebClusterManager clusterManager;
+private NiFiProperties properties;
+private Authorizer authorizer;
+
+/**
+ * Populate the uri's for the specified snippet.
+ *
+ * @param entity processors
+ * @return dtos
+ */
+private SnippetEntity 
populateRemainingSnippetEntityContent(SnippetEntity entity) {
+if (entity.getSnippet() != null) {
+populateRemainingSnippetContent(entity.getSnippet());
+}
+return entity;
+}
+
+/**
+ * Populates the uri for the specified snippet.
+ */
+private SnippetDTO populateRemainingSnippetContent(SnippetDTO snippet) 
{
+String snippetGroupId = snippet.getParentGroupId();
+
+// populate the snippet href
+snippet.setUri(generateResourceUri("process-groups", 
snippetGroupId, "snippets", snippet.getId()));
+
+return snippet;
+}
+
+// 
+// snippets
+// 
+
+/**
+ * Creates a snippet based off the specified configuration.
+ *
+ * @param httpServletRequest request
+ * @param snippetEntity A snippetEntity
+ * @return A snippetEntity
+ */
+@POST
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+// TODO - @PreAuthorize("hasRole('ROLE_DFM')")
+@ApiOperation(
+value = "Creates a snippet",
+response = SnippetEntity.class,
+authorizations = {
+@Authorization(value = "Read Only", type = "ROLE_MONITOR"),
+@Authorization(value = "

[GitHub] nifi pull request: NIFI-1781: UI authorization updates

2016-05-23 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/461#discussion_r64256472
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/FlowResource.java
 ---
@@ -321,6 +320,157 @@ public Response getControllerServices(
 return clusterContext(generateOkResponse(entity)).build();
 }
 
+/**
+ * Updates the specified process group.
+ *
+ * @param httpServletRequest request
+ * @param id The id of the process group.
+ * @param scheduleComponentsEntity A scheduleComponentsEntity.
+ * @return A processGroupEntity.
+ */
+@PUT
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+@Path("process-groups/{id}")
+// TODO - @PreAuthorize("hasRole('ROLE_DFM')")
+@ApiOperation(
+value = "Updates a process group",
+response = ScheduleComponentsEntity.class,
+authorizations = {
+@Authorization(value = "Data Flow Manager", type = "ROLE_DFM")
+}
+)
+@ApiResponses(
+value = {
+@ApiResponse(code = 400, message = "NiFi was unable to 
complete the request because it was invalid. The request should not be retried 
without modification."),
+@ApiResponse(code = 401, message = "Client could not be 
authenticated."),
+@ApiResponse(code = 403, message = "Client is not authorized 
to make this request."),
+@ApiResponse(code = 404, message = "The specified resource 
could not be found."),
+@ApiResponse(code = 409, message = "The request was valid but 
NiFi was not in the appropriate state to process it. Retrying the same request 
later may be successful.")
+}
+)
+public Response scheduleComponents(
+@Context HttpServletRequest httpServletRequest,
+@ApiParam(
+value = "The process group id.",
+required = true
+)
+@PathParam("id") String id,
+ScheduleComponentsEntity scheduleComponentsEntity) {
+
+authorizeFlow();
+
+// ensure the same id is being used
+if (!id.equals(scheduleComponentsEntity.getId())) {
+throw new IllegalArgumentException(String.format("The process 
group id (%s) in the request body does "
++ "not equal the process group id of the requested 
resource (%s).", scheduleComponentsEntity.getId(), id));
+}
+
+final ScheduledState state;
+if (scheduleComponentsEntity.getState() == null) {
+throw new IllegalArgumentException("The scheduled state must 
be specified.");
+} else {
+try {
+state = 
ScheduledState.valueOf(scheduleComponentsEntity.getState());
+} catch (final IllegalArgumentException iae) {
+throw new IllegalArgumentException(String.format("The scheduled state must be one of [%s].", StringUtils.join(EnumSet.of(ScheduledState.RUNNING, ScheduledState.STOPPED), ", ")));
+}
+}
+
+// ensure its a supported scheduled state
+if (ScheduledState.DISABLED.equals(state) || 
ScheduledState.STARTING.equals(state) || ScheduledState.STOPPING.equals(state)) 
{
+throw new IllegalArgumentException(String.format("The scheduled state must be one of [%s].", StringUtils.join(EnumSet.of(ScheduledState.RUNNING, ScheduledState.STOPPED), ", ")));
+}
+
+// if the components are not specified, gather all components and 
their current revision
+if (scheduleComponentsEntity.getComponents() == null) {
+// TODO - this will break while clustered until nodes are able 
to process/replicate requests
+// get the current revisions for the components being updated
+final Set<Revision> revisions = serviceFacade.getRevisionsFromGroup(id, group -> {
+final Set<String> componentIds = new HashSet<>();
+
+// ensure authorized for each processor we will attempt to 
schedule
+
group.findAllProcessors().stream().filter(ScheduledState.RUNNING.equals(state) 
? ProcessGroup.SCHEDULABLE_PROCESSORS : 
ProcessGroup.UNSCHEDULABLE_PROCESSORS).forEach(processor -> {
+if (processor.isAuthorized(authorizer, 
RequestAction.WRITE)) {
+componentIds.add(processor.getIdentifier());
+}
+  

[GitHub] nifi pull request: NIFI-1781: UI authorization updates

2016-05-23 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/461#discussion_r64256311
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/FlowResource.java
 ---
@@ -321,6 +320,157 @@ public Response getControllerServices(
 return clusterContext(generateOkResponse(entity)).build();
 }
 
+/**
+ * Updates the specified process group.
+ *
+ * @param httpServletRequest request
+ * @param id The id of the process group.
+ * @param scheduleComponentsEntity A scheduleComponentsEntity.
+ * @return A processGroupEntity.
+ */
+@PUT
+@Consumes(MediaType.APPLICATION_JSON)
+@Produces(MediaType.APPLICATION_JSON)
+@Path("process-groups/{id}")
+// TODO - @PreAuthorize("hasRole('ROLE_DFM')")
+@ApiOperation(
+value = "Updates a process group",
+response = ScheduleComponentsEntity.class,
+authorizations = {
+@Authorization(value = "Data Flow Manager", type = "ROLE_DFM")
+}
+)
+@ApiResponses(
+value = {
+@ApiResponse(code = 400, message = "NiFi was unable to 
complete the request because it was invalid. The request should not be retried 
without modification."),
+@ApiResponse(code = 401, message = "Client could not be 
authenticated."),
+@ApiResponse(code = 403, message = "Client is not authorized 
to make this request."),
+@ApiResponse(code = 404, message = "The specified resource 
could not be found."),
+@ApiResponse(code = 409, message = "The request was valid but 
NiFi was not in the appropriate state to process it. Retrying the same request 
later may be successful.")
+}
+)
+public Response scheduleComponents(
+@Context HttpServletRequest httpServletRequest,
+@ApiParam(
+value = "The process group id.",
+required = true
+)
+@PathParam("id") String id,
+ScheduleComponentsEntity scheduleComponentsEntity) {
+
+authorizeFlow();
+
+// ensure the same id is being used
+if (!id.equals(scheduleComponentsEntity.getId())) {
+throw new IllegalArgumentException(String.format("The process 
group id (%s) in the request body does "
++ "not equal the process group id of the requested 
resource (%s).", scheduleComponentsEntity.getId(), id));
+}
+
+final ScheduledState state;
+if (scheduleComponentsEntity.getState() == null) {
+throw new IllegalArgumentException("The scheduled state must 
be specified.");
+} else {
+try {
+state = 
ScheduledState.valueOf(scheduleComponentsEntity.getState());
+} catch (final IllegalArgumentException iae) {
+throw new IllegalArgumentException(String.format("The scheduled state must be one of [%s].", StringUtils.join(EnumSet.of(ScheduledState.RUNNING, ScheduledState.STOPPED), ", ")));
+}
+}
+
+// ensure its a supported scheduled state
+if (ScheduledState.DISABLED.equals(state) || 
ScheduledState.STARTING.equals(state) || ScheduledState.STOPPING.equals(state)) 
{
+throw new IllegalArgumentException(String.format("The scheduled state must be one of [%s].", StringUtils.join(EnumSet.of(ScheduledState.RUNNING, ScheduledState.STOPPED), ", ")));
+}
+
+// if the components are not specified, gather all components and 
their current revision
+if (scheduleComponentsEntity.getComponents() == null) {
+// TODO - this will break while clustered until nodes are able 
to process/replicate requests
+// get the current revisions for the components being updated
+final Set<Revision> revisions = serviceFacade.getRevisionsFromGroup(id, group -> {
+final Set<String> componentIds = new HashSet<>();
+
+// ensure authorized for each processor we will attempt to 
schedule
+
group.findAllProcessors().stream().filter(ScheduledState.RUNNING.equals(state) 
? ProcessGroup.SCHEDULABLE_PROCESSORS : 
ProcessGroup.UNSCHEDULABLE_PROCESSORS).forEach(processor -> {
+if (processor.isAuthorized(authorizer, 
RequestAction.WRITE)) {
--- End diff --

It seems cleaner to me to use a filter here to check if the processor is 
authorized, rather than mixing str

[GitHub] nifi pull request: NIFI-1781: UI authorization updates

2016-05-23 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/461#discussion_r64249465
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/StandardNiFiServiceFacade.java
 ---
@@ -568,164 +699,50 @@ public void verifyDeleteReportingTask(String 
reportingTaskId) {
 
 
 @Override
-public void verifyUpdateSnippet(SnippetDTO snippetDto) {
+public void verifyUpdateSnippet(SnippetDTO snippetDto, final Set<String> affectedComponentIds) {
 try {
 // if snippet does not exist, then the update request is 
likely creating it
 // so we don't verify since it will fail
 if (snippetDAO.hasSnippet(snippetDto.getId())) {
 snippetDAO.verifyUpdate(snippetDto);
 }
 } catch (final Exception e) {
-revisionManager.cancelClaim(snippetDto.getId());
+affectedComponentIds.forEach(id -> 
revisionManager.cancelClaim(snippetDto.getId()));
 throw e;
 }
 }
 
-private Set<Revision> getRevisionsForGroup(final String groupId) {
-final Set<Revision> revisions = new HashSet<>();
-
-revisions.add(revisionManager.getRevision(groupId));
-final ProcessGroup processGroup = 
processGroupDAO.getProcessGroup(groupId);
-if (processGroup == null) {
-throw new IllegalArgumentException("Snippet contains a 
reference to Process Group with ID " + groupId + " but no Process Group exists 
with that ID");
-}
-
-processGroup.getConnections().stream().map(c -> 
c.getIdentifier()).map(id -> revisionManager.getRevision(id)).forEach(rev -> 
revisions.add(rev));
-processGroup.getFunnels().stream().map(c -> 
c.getIdentifier()).map(id -> revisionManager.getRevision(id)).forEach(rev -> 
revisions.add(rev));
-processGroup.getInputPorts().stream().map(c -> 
c.getIdentifier()).map(id -> revisionManager.getRevision(id)).forEach(rev -> 
revisions.add(rev));
-processGroup.getOutputPorts().stream().map(c -> 
c.getIdentifier()).map(id -> revisionManager.getRevision(id)).forEach(rev -> 
revisions.add(rev));
-processGroup.getLabels().stream().map(c -> 
c.getIdentifier()).map(id -> revisionManager.getRevision(id)).forEach(rev -> 
revisions.add(rev));
-processGroup.getProcessors().stream().map(c -> 
c.getIdentifier()).map(id -> revisionManager.getRevision(id)).forEach(rev -> 
revisions.add(rev));
-processGroup.getRemoteProcessGroups().stream().map(c -> 
c.getIdentifier()).map(id -> revisionManager.getRevision(id)).forEach(rev -> 
revisions.add(rev));
-processGroup.getProcessGroups().stream().map(c -> 
c.getIdentifier()).forEach(id -> revisions.addAll(getRevisionsForGroup(id)));
-
-return revisions;
-}
-
-private Set<Revision> getRevisionsForSnippet(final SnippetDTO snippetDto) {
-final Set<Revision> requiredRevisions = new HashSet<>();
-
requiredRevisions.add(revisionManager.getRevision(snippetDto.getId()));
-snippetDto.getConnections().entrySet().stream()
-.map(entry -> new Revision(entry.getValue().getVersion(), 
entry.getValue().getClientId(), entry.getKey()))
-.forEach(rev -> requiredRevisions.add(rev));
-
-snippetDto.getFunnels().entrySet().stream()
-.map(entry -> new Revision(entry.getValue().getVersion(), 
entry.getValue().getClientId(), entry.getKey()))
-.forEach(rev -> requiredRevisions.add(rev));
-
-snippetDto.getInputPorts().entrySet().stream()
-.map(entry -> new Revision(entry.getValue().getVersion(), 
entry.getValue().getClientId(), entry.getKey()))
-.forEach(rev -> requiredRevisions.add(rev));
-
-snippetDto.getOutputPorts().entrySet().stream()
-.map(entry -> new Revision(entry.getValue().getVersion(), 
entry.getValue().getClientId(), entry.getKey()))
-.forEach(rev -> requiredRevisions.add(rev));
-
-snippetDto.getLabels().entrySet().stream()
-.map(entry -> new Revision(entry.getValue().getVersion(), 
entry.getValue().getClientId(), entry.getKey()))
-.forEach(rev -> requiredRevisions.add(rev));
-
-snippetDto.getProcessors().entrySet().stream()
-.map(entry -> new Revision(entry.getValue().getVersion(), 
entry.getValue().getClientId(), entry.getKey()))
-.forEach(rev -> requiredRevisions.add(rev));
-
-snippetDto.getRemoteProcessGroups().entrySet().stream()
-.map(entry -> new Revis

[GitHub] nifi pull request: NIFI-1781: UI authorization updates

2016-05-23 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/461#discussion_r64248276
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/groups/StandardProcessGroup.java
 ---
@@ -283,38 +283,21 @@ public boolean isRootGroup() {
 public void startProcessing() {
 readLock.lock();
 try {
-for (final ProcessorNode node : processors.values()) {
+
findAllProcessors().stream().filter(SCHEDULABLE_PROCESSORS).forEach(node -> {
--- End diff --

It seems like here, rather than finding all processors and putting them 
into a set, then converting to a stream and filtering, we should allow a 
Predicate to be passed into the findAllProcessors() method? Same for 
input/output ports
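A minimal sketch of the suggestion, with `Node` and its fields as illustrative stand-ins for NiFi's `ProcessorNode` (not the actual API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch: pass a Predicate into findAllProcessors() so
// filtering happens during traversal, instead of collecting every node
// into a Set and stream-filtering it afterwards.
public class FilterDuringTraversal {

    static class Node {
        final String id;
        final boolean running;
        Node(String id, boolean running) { this.id = id; this.running = running; }
        boolean isRunning() { return running; }
    }

    static final List<Node> NODES = Arrays.asList(
            new Node("a", true), new Node("b", false), new Node("c", false));

    // Only matching nodes are ever added; a real implementation would also
    // recurse into child process groups with the same filter.
    static List<Node> findAllProcessors(Predicate<Node> filter) {
        List<Node> matched = new ArrayList<>();
        for (Node n : NODES) {
            if (filter.test(n)) {
                matched.add(n);
            }
        }
        return matched;
    }

    public static void main(String[] args) {
        System.out.println(findAllProcessors(n -> !n.isRunning()).size()); // prints 2
    }
}
```

The win is avoiding the intermediate `Set` allocation when the caller only ever needs the filtered subset.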


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request: NIFI-1781: UI authorization updates

2016-05-23 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/461#discussion_r64248023
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core-api/src/main/java/org/apache/nifi/groups/ProcessGroup.java
 ---
@@ -46,6 +48,26 @@
 public interface ProcessGroup extends Authorizable {
 
 /**
+ * Predicate for filtering schedulable Processors.
+ */
+Predicate<ProcessorNode> SCHEDULABLE_PROCESSORS = node -> !node.isRunning() && node.getScheduledState() != ScheduledState.DISABLED;
+
+/**
+ * Predicate for filtering unschedulable Processors.
+ */
+Predicate<ProcessorNode> UNSCHEDULABLE_PROCESSORS = node -> node.isRunning();
+
+/**
+ * Predicate for filtering schedulable Ports
+ */
+Predicate<Port> SCHEDULABLE_PORTS = port -> port.getScheduledState() != ScheduledState.DISABLED;
+
+/**
+ * Predicate for filtering unschedulable Ports
+ */
+Predicate<Port> UNSCHEDULABLE_PORTS = port -> port.getScheduledState() == ScheduledState.RUNNING;
--- End diff --

Shouldn't this be "port.isRunning()" instead, like we do for Processor? It 
seems like we could actually just have a SCHEDULABLE_TRIGGERABLE and an 
UNSCHEDULABLE_TRIGGERABLE rather than separate ones for ports & processors?
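An illustrative sketch of the unified-predicate idea (the `Triggerable` interface and enum here are simplified stand-ins, not NiFi's actual types): if processors and ports expose `isRunning()` and `getScheduledState()` through a shared interface, one predicate pair covers both.

```java
import java.util.function.Predicate;

// Hypothetical sketch: a shared Triggerable abstraction replaces the
// separate PROCESSOR/PORT predicate pairs.
public class SharedPredicates {

    enum ScheduledState { RUNNING, STOPPED, DISABLED }

    interface Triggerable {
        boolean isRunning();
        ScheduledState getScheduledState();
    }

    // One pair of predicates instead of one pair per component type.
    static final Predicate<Triggerable> SCHEDULABLE =
            t -> !t.isRunning() && t.getScheduledState() != ScheduledState.DISABLED;
    static final Predicate<Triggerable> UNSCHEDULABLE = Triggerable::isRunning;

    // A stopped component, for demonstration.
    static Triggerable stopped() {
        return new Triggerable() {
            public boolean isRunning() { return false; }
            public ScheduledState getScheduledState() { return ScheduledState.STOPPED; }
        };
    }

    public static void main(String[] args) {
        System.out.println(SCHEDULABLE.test(stopped()));   // prints true
        System.out.println(UNSCHEDULABLE.test(stopped())); // prints false
    }
}
```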




[GitHub] nifi pull request: NIFI-1781: UI authorization updates

2016-05-23 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/461#discussion_r64247888
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core-api/src/main/java/org/apache/nifi/groups/ProcessGroup.java
 ---
@@ -46,6 +48,26 @@
 public interface ProcessGroup extends Authorizable {
 
 /**
+ * Predicate for filtering schedulable Processors.
+ */
+Predicate<ProcessorNode> SCHEDULABLE_PROCESSORS = node -> !node.isRunning() && node.getScheduledState() != ScheduledState.DISABLED;
+
+/**
+ * Predicate for filtering unschedulable Processors.
+ */
+Predicate<ProcessorNode> UNSCHEDULABLE_PROCESSORS = node -> node.isRunning();
+
+/**
+ * Predicate for filtering schedulable Ports
+ */
+Predicate<Port> SCHEDULABLE_PORTS = port -> port.getScheduledState() != ScheduledState.DISABLED;
--- End diff --

It seems odd to me that we are checking "!node.isRunning()" for 
ProcessorNode but not checking "!port.isRunning()" for ports.




[GitHub] nifi pull request: NIFI-1660 - Enhance the expression language wit...

2016-05-23 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/303#discussion_r64235176
  
--- Diff: 
nifi-commons/nifi-expression-language/src/test/java/org/apache/nifi/attribute/expression/language/TestQuery.java
 ---
@@ -233,6 +233,24 @@ public void testEmbeddedExpressionsAndQuotes() {
 }
 
 @Test
+public void testJsonPath() {
+final Map<String, String> attributes = new HashMap<>();
+attributes.put("json",
--- End diff --

Would recommend we pull the JSON out into a file in src/test/resources. 
Having this live in the codebase makes it very difficult to read & edit
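A sketch of the recommendation: keep bulky JSON fixtures out of the test source and load them from a file at test time. A temp file stands in here for a file checked into `src/test/resources`; the file and field names are illustrative.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch: externalize a JSON test fixture instead of
// embedding it as an escaped string literal in the test class.
public class ExternalJsonFixture {

    static Path writeTemp(String json) {
        try {
            Path file = Files.createTempFile("query-input", ".json");
            Files.writeString(file, json, StandardCharsets.UTF_8);
            return file;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // In a real test this would read a resource on the classpath, e.g.
    // getClass().getResourceAsStream("/json/input.json").
    static String readFixture(Path file) {
        try {
            return Files.readString(file, StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Path fixture = writeTemp("{\"firstName\":\"John\",\"lastName\":\"Smith\"}");
        System.out.println(readFixture(fixture).contains("firstName")); // prints true
    }
}
```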




[GitHub] nifi pull request: NIFI-1660 - Enhance the expression language wit...

2016-05-23 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/303#discussion_r64234770
  
--- Diff: 
nifi-commons/nifi-expression-language/src/main/java/org/apache/nifi/attribute/expression/language/evaluation/functions/JsonPathEvaluator.java
 ---
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.attribute.expression.language.evaluation.functions;
+
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+
+import org.apache.nifi.attribute.expression.language.evaluation.Evaluator;
+import 
org.apache.nifi.attribute.expression.language.evaluation.QueryResult;
+import 
org.apache.nifi.attribute.expression.language.evaluation.StringEvaluator;
+import 
org.apache.nifi.attribute.expression.language.evaluation.StringQueryResult;
+
+import com.jayway.jsonpath.Configuration;
+import com.jayway.jsonpath.DocumentContext;
+import com.jayway.jsonpath.InvalidJsonException;
+import com.jayway.jsonpath.JsonPath;
+import com.jayway.jsonpath.spi.json.JacksonJsonProvider;
+import com.jayway.jsonpath.spi.json.JsonProvider;
+
+
+public class JsonPathEvaluator extends StringEvaluator {
+
+private static final StringQueryResult NULL_RESULT = new 
StringQueryResult("");
+private static final Configuration STRICT_PROVIDER_CONFIGURATION = 
Configuration.builder().jsonProvider(new JacksonJsonProvider()).build();
+private static final JsonProvider JSON_PROVIDER = 
STRICT_PROVIDER_CONFIGURATION.jsonProvider();
+
+private final Evaluator<String> subject;
+private final Evaluator<String> jsonPathExp;
+
+public JsonPathEvaluator(final Evaluator<String> subject, final Evaluator<String> jsonPathExp) {
+this.subject = subject;
+this.jsonPathExp = jsonPathExp;
+}
+
+@Override
+public QueryResult<String> evaluate(final Map<String, String> attributes) {
+final String subjectValue = 
subject.evaluate(attributes).getValue();
+if (subjectValue == null || subjectValue.length() == 0) {
+return NULL_RESULT;
+}
+DocumentContext documentContext = null;
+try {
+documentContext = 
validateAndEstablishJsonContext(subjectValue);
+} catch (InvalidJsonException e) {
+return NULL_RESULT;
+}
+
+final JsonPath compiledJsonPath = 
JsonPath.compile(jsonPathExp.evaluate(attributes).getValue());
--- End diff --

In the vast majority of use cases, I would expect that the JSON Path will 
be a literal string that does not reference any attributes. In this case, I 
would want to avoid compiling the JSON Path every time. Would recommend that if 
the jsonPathExp is an instance of the StringLiteralEvaluator, we pre-compile 
the JSON Path. We follow a similar pattern with the MatchesEvaluator for 
pre-compiling regular expressions, and it can make a pretty huge difference on 
performance.
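The precompile-when-literal pattern described above can be sketched with `java.util.regex` standing in for JsonPath compilation (NiFi's MatchesEvaluator applies this idea to regular expressions). `Evaluator` and the class names below are simplified stand-ins, not NiFi's actual API.

```java
import java.util.Map;
import java.util.regex.Pattern;

// Hypothetical sketch: compile the expression once in the constructor
// when it is a literal, and only fall back to per-call compilation when
// the expression depends on FlowFile attributes.
public class PrecompileWhenLiteral {

    interface Evaluator { String evaluate(Map<String, String> attrs); }

    static Evaluator literal(String value) { return attrs -> value; }
    static Evaluator attribute(String name) { return attrs -> attrs.get(name); }

    static class MatchEvaluator {
        private final Evaluator subject;
        private final Evaluator patternExp;
        private final Pattern precompiled; // non-null only for literal expressions

        MatchEvaluator(Evaluator subject, Evaluator patternExp, boolean literalExpression) {
            this.subject = subject;
            this.patternExp = patternExp;
            // Compile once, up front, when the expression can never change.
            this.precompiled = literalExpression
                    ? Pattern.compile(patternExp.evaluate(Map.of()))
                    : null;
        }

        boolean matches(Map<String, String> attrs) {
            // Per-call compilation is the slow path, taken only when needed.
            Pattern p = (precompiled != null)
                    ? precompiled
                    : Pattern.compile(patternExp.evaluate(attrs));
            return p.matcher(subject.evaluate(attrs)).matches();
        }
    }

    public static void main(String[] args) {
        MatchEvaluator m = new MatchEvaluator(
                attribute("filename"), literal(".*\\.json"), true);
        System.out.println(m.matches(Map.of("filename", "data.json"))); // prints true
    }
}
```

Since expression compilation dominates evaluation cost for small inputs, hoisting it out of the per-FlowFile path is where the performance win comes from.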




[GitHub] nifi pull request: NIFI-1909 Adding ability to process schemaless ...

2016-05-20 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/459#discussion_r64097658
  
--- Diff: 
nifi-nar-bundles/nifi-avro-bundle/nifi-avro-processors/src/main/java/org/apache/nifi/processors/avro/ConvertAvroToJSON.java
 ---
@@ -128,49 +141,77 @@ public void onTrigger(ProcessContext context, 
ProcessSession session) throws Pro
 // Wrap a single record (inclusive of no records) only when a 
container is being used
 final boolean wrapSingleRecord = 
context.getProperty(WRAP_SINGLE_RECORD).asBoolean() && useContainer;
 
+final String stringSchema = context.getProperty(SCHEMA).getValue();
+final boolean schemaLess = stringSchema != null;
+
 try {
 flowFile = session.write(flowFile, new StreamCallback() {
 @Override
 public void process(final InputStream rawIn, final 
OutputStream rawOut) throws IOException {
-try (final InputStream in = new 
BufferedInputStream(rawIn);
- final OutputStream out = new 
BufferedOutputStream(rawOut);
 - final DataFileStream<GenericRecord> reader = new DataFileStream<>(in, new GenericDatumReader<GenericRecord>())) {
+if (schemaLess) {
+if (schema == null) {
+schema = new 
Schema.Parser().parse(stringSchema);
+}
+try (final InputStream in = new 
BufferedInputStream(rawIn);
+ final OutputStream out = new 
BufferedOutputStream(rawOut)) {
+final DatumReader<GenericRecord> reader = new GenericDatumReader<>(schema);
+final BinaryDecoder decoder = 
DecoderFactory.get().binaryDecoder(in, null);
+final GenericRecord record = reader.read(null, 
decoder);
+
+// Schemaless records are singletons, so both 
useContainer and wrapSingleRecord
+// need to be true before we wrap it with an 
array
+if (useContainer && wrapSingleRecord) {
+out.write('[');
+}
 
-final GenericData genericData = GenericData.get();
+final byte[] outputBytes = (record == null) ? 
EMPTY_JSON_OBJECT : record.toString().getBytes(StandardCharsets.UTF_8);
--- End diff --

I think we need to keep the use of GenericData here. While 
record.toString() does in fact convert the Avro object to JSON, the toString() 
method is not documented as doing so and could change at any time. The 
GenericData object is documented to convert the object into JSON.




[GitHub] nifi pull request: NIFI-1909 Adding ability to process schemaless ...

2016-05-20 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/459#discussion_r64097258
  
--- Diff: 
nifi-nar-bundles/nifi-avro-bundle/nifi-avro-processors/src/main/java/org/apache/nifi/processors/avro/ConvertAvroToJSON.java
 ---
@@ -92,6 +103,7 @@
 .build();
 
 private List properties;
+private Schema schema = null;
--- End diff --

This could be modified by multiple different threads concurrently, so we need 
to make sure that it is protected. Marking it as 'volatile' should take care of 
this.
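A sketch of why the cached field needs `volatile`: `onTrigger` may run on several threads, and without a volatile write one thread can observe a half-published object written by another. A `String` stands in for the Avro `Schema` here; the names are illustrative.

```java
// Hypothetical sketch of volatile lazy initialization ("racy single-check").
// The worst case is parsing twice; the field is never observed half-built.
public class VolatileLazyInit {

    private volatile String schema; // shared, lazily "parsed" once

    // Stand-in for Schema.Parser().parse(...)
    private String parse(String text) { return text.trim(); }

    String getSchema(String text) {
        String local = schema;      // single read of the volatile field
        if (local == null) {
            local = parse(text);    // benign race: at worst, parsed twice
            schema = local;         // safe publication via volatile write
        }
        return local;
    }

    public static void main(String[] args) throws InterruptedException {
        VolatileLazyInit holder = new VolatileLazyInit();
        Thread t1 = new Thread(() -> holder.getSchema(" {\"type\":\"record\"} "));
        Thread t2 = new Thread(() -> holder.getSchema(" {\"type\":\"record\"} "));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(holder.getSchema("unused").startsWith("{")); // prints true
    }
}
```

Parsing twice is acceptable here because `Schema` parsing is deterministic; if it were not, a synchronized block or `AtomicReference.compareAndSet` would be needed instead.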




[GitHub] nifi pull request: NIFI-1745: Refactor how revisions are handled a...

2016-05-20 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/454#discussion_r64043396
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-optimistic-locking/src/main/java/org/apache/nifi/web/revision/NaiveRevisionManager.java
 ---
@@ -423,7 +440,7 @@ public boolean requestWriteLock(final Revision 
proposedRevision) throws ExpiredR
 throw ise;
 }
 
-if (stamp.getClientId() == null || 
stamp.getClientId().equals(proposedRevision.getClientId())) {
+if (stamp.getUser() == null || 
stamp.getUser().equals(user)) {
--- End diff --

@mcgilman good call. I have addressed the issue and updated the PR.




[GitHub] nifi pull request: NIFI-1745: Refactor how revisions are handled a...

2016-05-18 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/454

NIFI-1745: Refactor how revisions are handled at NCM/Distributed to Node

This is part 2 of NIFI-1745, which provides user info to revision 
claims/locks and removes the cluster context, as it is no longer necessary

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-1745-Part2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/454.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #454


commit 55fb1dab52880cfba94ac6ebd3552b4019020b1b
Author: Mark Payne <marka...@hotmail.com>
Date:   2016-05-17T15:51:09Z

NIFI-1745: Refactor how revisions are handled at NCM/Distributed to Node






[GitHub] nifi pull request: NIFI-1866 ProcessException handling in Standard...

2016-05-16 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/439#discussion_r63425823
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/test/java/org/apache/nifi/controller/repository/TestStandardProcessSession.java
 ---
@@ -323,6 +327,37 @@ public void process(final OutputStream outputStream) 
throws IOException {
 assertDisabled(outputStreamHolder.get());
 }
 
+@Test(expected=ProcessException.class)
+public void testExportTo() throws IOException {
+final ContentClaim claim = contentRepo.create(false);
+final FlowFileRecord flowFileRecord = new 
StandardFlowFileRecord.Builder()
+.contentClaim(claim)
+.addAttribute("uuid", "12345678-1234-1234-1234-123456789012")
+.entryDate(System.currentTimeMillis())
+.build();
+flowFileQueue.put(flowFileRecord);
+FlowFile flowFile = session.get();
+assertNotNull(flowFile);
+
+flowFile = session.append(flowFile, new OutputStreamCallback() {
+@Override
+public void process(OutputStream out) throws IOException {
+out.write("Hello World".getBytes());
+}
+});
+
+// should be OK
+ByteArrayOutputStream os = new ByteArrayOutputStream();
+session.exportTo(flowFile, os);
+assertEquals("Hello World", new String(os.toByteArray()));
+os.close();
+
+// should throw ProcessException because of IOException (from 
processor code)
+FileOutputStream mock = Mockito.mock(FileOutputStream.class);
+doThrow(new IOException()).when(mock).write((byte[]) notNull(), 
any(Integer.class), any(Integer.class));
+session.exportTo(flowFile, mock);
--- End diff --

I would recommend wrapping this call in a try/catch and ensuring that 
ProcessException is thrown here. Indicating that it is expected in the @Test 
annotation can be somewhat error-prone, as several other method calls within 
this method could actually throw ProcessException
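The point can be sketched in plain Java (standing in for JUnit; the `exportTo` stub is illustrative, not NiFi's implementation): `@Test(expected = ProcessException.class)` passes if *any* statement in the method throws, so the assertion should be scoped to the single call under test.

```java
// Hypothetical sketch: scope the exception assertion to one call instead
// of annotating the whole test method.
public class ExpectThrowsScoped {

    static class ProcessException extends RuntimeException {
        ProcessException(String msg) { super(msg); }
    }

    // Stub: throws only when handed the "failing" stream.
    static void exportTo(boolean failingStream) {
        if (failingStream) {
            throw new ProcessException("stream write failed");
        }
    }

    // Returns true only if the given call threw ProcessException.
    static boolean throwsProcessException(Runnable call) {
        try {
            call.run();
            return false;
        } catch (ProcessException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        // Setup calls run unguarded: if one of them throws, the test fails
        // loudly instead of silently satisfying an @Test(expected=...) check.
        exportTo(false);

        // Only the call under test sits inside the try/catch.
        System.out.println(throwsProcessException(() -> exportTo(true))); // prints true
    }
}
```

JUnit 4.13+ and JUnit 5 provide `assertThrows` for exactly this scoped form.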




[GitHub] nifi pull request: NIFI-1866 ProcessException handling in Standard...

2016-05-16 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/439#discussion_r63425553
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/StandardProcessSession.java
 ---
@@ -2308,18 +2308,50 @@ public void exportTo(final FlowFile source, final 
Path destination, final boolea
 public void exportTo(final FlowFile source, final OutputStream 
destination) {
 validateRecordState(source);
 final StandardRepositoryRecord record = records.get(source);
+
+if(record.getCurrentClaim() == null) {
+return;
+}
+
 try {
-if (record.getCurrentClaim() == null) {
-return;
+ensureNotAppending(record.getCurrentClaim());
+} catch (final IOException e) {
+throw new FlowFileAccessException("Failed to access 
ContentClaim for " + source.toString(), e);
+}
+
+try (final InputStream rawIn = getInputStream(source, 
record.getCurrentClaim(), record.getCurrentClaimOffset());
+final InputStream limitedIn = new 
LimitedInputStream(rawIn, source.getSize());
+final InputStream disableOnCloseIn = new 
DisableOnCloseInputStream(limitedIn);
+final ByteCountingInputStream countingStream = new 
ByteCountingInputStream(disableOnCloseIn, this.bytesRead)) {
+
+// We want to differentiate between IOExceptions thrown by the 
repository and IOExceptions thrown from
+// Processor code. As a result, we have the FlowFileAccessInputStream that catches IOException from the repository
+// and translates into either FlowFileAccessException or 
ContentNotFoundException. We keep track of any
+// ContentNotFoundException because if it is thrown, the 
Processor code may catch it and do something else with it
+// but in reality, if it is thrown, we want to know about it 
and handle it, even if the Processor code catches it.
+final FlowFileAccessInputStream ffais = new 
FlowFileAccessInputStream(countingStream, source, record.getCurrentClaim());
+boolean cnfeThrown = false;
+
+try {
+recursionSet.add(source);
+StreamUtils.skip(ffais, record.getCurrentClaimOffset());
+StreamUtils.copy(ffais, destination, source.getSize());
+} catch (final ContentNotFoundException cnfe) {
+cnfeThrown = true;
+throw cnfe;
+} finally {
+recursionSet.remove(source);
+IOUtils.closeQuietly(ffais);
+// if cnfeThrown is true, we don't need to re-thrown the 
Exception; it will propagate.
--- End diff --

I think the comment here is supposed to say "need to re-throw" rather than 
"need to re-thrown"




[GitHub] nifi pull request: NIFI-1866 ProcessException handling in Standard...

2016-05-16 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/439#discussion_r63425128
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/StandardProcessSession.java
 ---
@@ -2308,18 +2308,50 @@ public void exportTo(final FlowFile source, final 
Path destination, final boolea
 public void exportTo(final FlowFile source, final OutputStream 
destination) {
 validateRecordState(source);
 final StandardRepositoryRecord record = records.get(source);
+
+if(record.getCurrentClaim() == null) {
+return;
+}
+
 try {
-if (record.getCurrentClaim() == null) {
-return;
+ensureNotAppending(record.getCurrentClaim());
+} catch (final IOException e) {
+throw new FlowFileAccessException("Failed to access 
ContentClaim for " + source.toString(), e);
+}
+
+try (final InputStream rawIn = getInputStream(source, 
record.getCurrentClaim(), record.getCurrentClaimOffset());
+final InputStream limitedIn = new 
LimitedInputStream(rawIn, source.getSize());
+final InputStream disableOnCloseIn = new 
DisableOnCloseInputStream(limitedIn);
+final ByteCountingInputStream countingStream = new 
ByteCountingInputStream(disableOnCloseIn, this.bytesRead)) {
+
+// We want to differentiate between IOExceptions thrown by the 
repository and IOExceptions thrown from
+// Processor code. As a result, we have the FlowFileAccessInputStream that catches IOException from the repository
+// and translates into either FlowFileAccessException or 
ContentNotFoundException. We keep track of any
+// ContentNotFoundException because if it is thrown, the 
Processor code may catch it and do something else with it
+// but in reality, if it is thrown, we want to know about it 
and handle it, even if the Processor code catches it.
+final FlowFileAccessInputStream ffais = new 
FlowFileAccessInputStream(countingStream, source, record.getCurrentClaim());
+boolean cnfeThrown = false;
+
+try {
+recursionSet.add(source);
+StreamUtils.skip(ffais, record.getCurrentClaimOffset());
--- End diff --

I don't believe we want to be skipping here. This is done already in the 
call to getInputStream() above, is it not?




[GitHub] nifi pull request: NIFI-1866 ProcessException handling in Standard...

2016-05-16 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/439#discussion_r63344780
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/StandardProcessSession.java
 ---
@@ -2308,18 +2308,46 @@ public void exportTo(final FlowFile source, final 
Path destination, final boolea
 public void exportTo(final FlowFile source, final OutputStream 
destination) {
 validateRecordState(source);
 final StandardRepositoryRecord record = records.get(source);
+
 try {
-if (record.getCurrentClaim() == null) {
--- End diff --

Can you explain the reasoning behind removing this? If there is no content 
claim, i think it still makes sense to return immediately. This would happen, 
for instance, with some source processors such as ListFile that don't actually 
write any content to the FlowFile.




[GitHub] nifi pull request: NIFI-1801: Scope Templates to Process Groups

2016-05-16 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/446#discussion_r63344217
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core-api/src/main/java/org/apache/nifi/controller/Template.java
 ---
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.controller;
+
+import java.util.HashSet;
+import java.util.Set;
+
+import org.apache.nifi.authorization.AccessDeniedException;
+import org.apache.nifi.authorization.AuthorizationRequest;
+import org.apache.nifi.authorization.AuthorizationResult;
+import org.apache.nifi.authorization.AuthorizationResult.Result;
+import org.apache.nifi.authorization.Authorizer;
+import org.apache.nifi.authorization.RequestAction;
+import org.apache.nifi.authorization.Resource;
+import org.apache.nifi.authorization.resource.Authorizable;
+import org.apache.nifi.authorization.user.NiFiUser;
+import org.apache.nifi.authorization.user.NiFiUserUtils;
+import org.apache.nifi.connectable.Connection;
+import org.apache.nifi.controller.label.Label;
+import org.apache.nifi.groups.ProcessGroup;
+import org.apache.nifi.groups.RemoteProcessGroup;
+import org.apache.nifi.web.api.dto.ConnectionDTO;
+import org.apache.nifi.web.api.dto.ControllerServiceDTO;
+import org.apache.nifi.web.api.dto.FlowSnippetDTO;
+import org.apache.nifi.web.api.dto.LabelDTO;
+import org.apache.nifi.web.api.dto.ProcessGroupDTO;
+import org.apache.nifi.web.api.dto.ProcessorDTO;
+import org.apache.nifi.web.api.dto.RemoteProcessGroupDTO;
+import org.apache.nifi.web.api.dto.TemplateDTO;
+
+public class Template implements Authorizable {
+
+private final TemplateDTO dto;
+private volatile ProcessGroup processGroup;
+
+public Template(final TemplateDTO dto) {
+this.dto = dto;
+}
+
+public String getIdentifier() {
+return dto.getId();
+}
+
+/**
+ * Returns a TemplateDTO object that describes the contents of this 
Template
+ *
+ * @return template dto
+ */
+public TemplateDTO getDetails() {
+return dto;
+}
+
+public void setProcessGroup(final ProcessGroup group) {
+this.processGroup = group;
+}
+
+public ProcessGroup getProcessGroup() {
+return processGroup;
+}
+
+
+@Override
+public Authorizable getParentAuthorizable() {
+return null;
+}
+
+@Override
+public Resource getResource() {
+return new Resource() {
--- End diff --

Good call. Updated PR.




[GitHub] nifi pull request: NIFI-1801: Scope Templates to Process Groups

2016-05-16 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/446

NIFI-1801: Scope Templates to Process Groups

Also includes a few bug fixes that were not necessarily explicitly related 
to templates being scoped to Process Groups but that are required in order to 
test the PR

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-1801

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/446.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #446


commit e3d704637670857a47e33cedb8446a1e52dd42e6
Author: Mark Payne <marka...@hotmail.com>
Date:   2016-05-12T17:09:36Z

NIFI-1801: Scope Templates to Process Groups





