[jira] [Updated] (NIFI-4201) Add Processors for interacting with Kafka 0.11.x

2017-09-22 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-4201:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add Processors for interacting with Kafka 0.11.x
> 
>
> Key: NIFI-4201
> URL: https://issues.apache.org/jira/browse/NIFI-4201
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4201) Add Processors for interacting with Kafka 0.11.x

2017-09-22 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177453#comment-16177453
 ] 

Joseph Witt commented on NIFI-4201:
---

Fixed and updated to the latest client library.  Tested against Kafka 0.11 with 
flows using both record and non-record publishers and consumers.  Header 
support works great with the record processors.  Was not able to get headers 
working on the non-record processors, but that might have been user error.  

All checks out.  +1, merged to master.  Closing now.  Thanks
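For context on what the header tests exercise: the 0.11 publisher maps FlowFile attributes whose names match a configured regex onto Kafka record headers, encoding values with a configured character set. Below is a minimal standalone sketch of that selection logic; the attribute names, regex, and method name are illustrative, not NiFi's actual defaults or API.

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class HeaderSelection {
    // Select attributes whose names match the pattern and encode their
    // values with the configured charset, mirroring how matching FlowFile
    // attributes become Kafka record headers.
    static Map<String, byte[]> selectHeaders(Map<String, String> attributes,
                                             Pattern attributeNameRegex,
                                             Charset headerCharacterSet) {
        Map<String, byte[]> headers = new LinkedHashMap<>();
        if (attributeNameRegex == null) {
            return headers; // header support disabled
        }
        for (Map.Entry<String, String> e : attributes.entrySet()) {
            if (attributeNameRegex.matcher(e.getKey()).matches()) {
                headers.put(e.getKey(), e.getValue().getBytes(headerCharacterSet));
            }
        }
        return headers;
    }

    public static void main(String[] args) {
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("kafka.topic", "events");
        attrs.put("filename", "a.json");
        Map<String, byte[]> headers =
                selectHeaders(attrs, Pattern.compile("kafka\\..*"), StandardCharsets.UTF_8);
        System.out.println(headers.size());
        System.out.println(new String(headers.get("kafka.topic"), StandardCharsets.UTF_8));
    }
}
```

With the regex `kafka\..*`, only the `kafka.topic` attribute is promoted to a header; `filename` is left behind.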

> Add Processors for interacting with Kafka 0.11.x
> 
>
> Key: NIFI-4201
> URL: https://issues.apache.org/jira/browse/NIFI-4201
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>






[jira] [Commented] (NIFI-4201) Add Processors for interacting with Kafka 0.11.x

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177451#comment-16177451
 ] 

ASF GitHub Bot commented on NIFI-4201:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2024


> Add Processors for interacting with Kafka 0.11.x
> 
>
> Key: NIFI-4201
> URL: https://issues.apache.org/jira/browse/NIFI-4201
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>






[GitHub] nifi pull request #2024: NIFI-4201: Initial implementation of processors for...

2017-09-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2024


---


[jira] [Commented] (NIFI-4201) Add Processors for interacting with Kafka 0.11.x

2017-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177449#comment-16177449
 ] 

ASF subversion and git services commented on NIFI-4201:
---

Commit 3fb704c58f44f106779ac536bb6d5802c829f626 in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=3fb704c ]

NIFI-4201: This closes #2024. Implementation of processors for interacting with 
Kafka 0.11

Signed-off-by: joewitt 


> Add Processors for interacting with Kafka 0.11.x
> 
>
> Key: NIFI-4201
> URL: https://issues.apache.org/jira/browse/NIFI-4201
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>






[jira] [Commented] (NIFI-4201) Add Processors for interacting with Kafka 0.11.x

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177428#comment-16177428
 ] 

ASF GitHub Bot commented on NIFI-4201:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2024
  
Have reviewed.  Now running a series of tests.  Also updated to the latest 
Kafka 0.11 client version.


> Add Processors for interacting with Kafka 0.11.x
> 
>
> Key: NIFI-4201
> URL: https://issues.apache.org/jira/browse/NIFI-4201
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>






[GitHub] nifi issue #2024: NIFI-4201: Initial implementation of processors for intera...

2017-09-22 Thread joewitt
Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2024
  
Have reviewed.  Now running a series of tests.  Also updated to the latest 
Kafka 0.11 client version.


---


[jira] [Commented] (NIFI-4201) Add Processors for interacting with Kafka 0.11.x

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177413#comment-16177413
 ] 

ASF GitHub Bot commented on NIFI-4201:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2024
  
I've pushed a new commit that rebases against master and addresses any 
conflicts. Also addressed the issue raised above by Koji.


> Add Processors for interacting with Kafka 0.11.x
> 
>
> Key: NIFI-4201
> URL: https://issues.apache.org/jira/browse/NIFI-4201
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>






[GitHub] nifi issue #2024: NIFI-4201: Initial implementation of processors for intera...

2017-09-22 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2024
  
I've pushed a new commit that rebases against master and addresses any 
conflicts. Also addressed the issue raised above by Koji.


---


[jira] [Commented] (NIFI-4201) Add Processors for interacting with Kafka 0.11.x

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177389#comment-16177389
 ] 

ASF GitHub Bot commented on NIFI-4201:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2024#discussion_r140620330
  
--- Diff: 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-0-11-processors/src/main/java/org/apache/nifi/processors/kafka/pubsub/PublisherLease.java
 ---
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.kafka.pubsub;
+
+import java.io.ByteArrayOutputStream;
+import java.io.Closeable;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.regex.Pattern;
+
+import org.apache.kafka.clients.producer.Callback;
+import org.apache.kafka.clients.producer.Producer;
+import org.apache.kafka.clients.producer.ProducerRecord;
+import org.apache.kafka.clients.producer.RecordMetadata;
+import org.apache.kafka.common.header.Headers;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.RecordSetWriter;
+import org.apache.nifi.serialization.RecordSetWriterFactory;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.RecordSet;
+import org.apache.nifi.stream.io.exception.TokenTooLargeException;
+import org.apache.nifi.stream.io.util.StreamDemarcator;
+
+public class PublisherLease implements Closeable {
+    private final ComponentLog logger;
+    private final Producer producer;
+    private final int maxMessageSize;
+    private final long maxAckWaitMillis;
+    private final boolean useTransactions;
+    private final Pattern attributeNameRegex;
+    private final Charset headerCharacterSet;
+    private volatile boolean poisoned = false;
+    private final AtomicLong messagesSent = new AtomicLong(0L);
+
+    private volatile boolean transactionsInitialized = false;
+    private volatile boolean activeTransaction = false;
+
+    private InFlightMessageTracker tracker;
+
+    public PublisherLease(final Producer producer, final int maxMessageSize, final long maxAckWaitMillis, final ComponentLog logger,
+            final boolean useTransactions, final Pattern attributeNameRegex, final Charset headerCharacterSet) {
+        this.producer = producer;
+        this.maxMessageSize = maxMessageSize;
+        this.logger = logger;
+        this.maxAckWaitMillis = maxAckWaitMillis;
+        this.useTransactions = useTransactions;
+        this.attributeNameRegex = attributeNameRegex;
+        this.headerCharacterSet = headerCharacterSet;
+    }
+
+    protected void poison() {
+        this.poisoned = true;
+    }
+
+    public boolean isPoisoned() {
+        return poisoned;
+    }
+
+    void beginTransaction() {
+        if (!useTransactions) {
+            return;
+        }
+
+        if (!transactionsInitialized) {
+            producer.initTransactions();
+            transactionsInitialized = true;
+        }
+
+        producer.beginTransaction();
+        activeTransaction = true;
+    }
+
+    void rollback() {
+        if (!useTransactions || !activeTransaction) {
+            return;
+        }
+
+        producer.abortTransaction();
+        activeTransaction = false;
+    }
+
+    void fail(final FlowFile 

[GitHub] nifi pull request #2024: NIFI-4201: Initial implementation of processors for...

2017-09-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2024#discussion_r140620330
  
--- Diff: 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-0-11-processors/src/main/java/org/apache/nifi/processors/kafka/pubsub/PublisherLease.java
 ---
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.kafka.pubsub;
+
+import java.io.ByteArrayOutputStream;
+import java.io.Closeable;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.regex.Pattern;
+
+import org.apache.kafka.clients.producer.Callback;
+import org.apache.kafka.clients.producer.Producer;
+import org.apache.kafka.clients.producer.ProducerRecord;
+import org.apache.kafka.clients.producer.RecordMetadata;
+import org.apache.kafka.common.header.Headers;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.RecordSetWriter;
+import org.apache.nifi.serialization.RecordSetWriterFactory;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.RecordSet;
+import org.apache.nifi.stream.io.exception.TokenTooLargeException;
+import org.apache.nifi.stream.io.util.StreamDemarcator;
+
+public class PublisherLease implements Closeable {
+    private final ComponentLog logger;
+    private final Producer producer;
+    private final int maxMessageSize;
+    private final long maxAckWaitMillis;
+    private final boolean useTransactions;
+    private final Pattern attributeNameRegex;
+    private final Charset headerCharacterSet;
+    private volatile boolean poisoned = false;
+    private final AtomicLong messagesSent = new AtomicLong(0L);
+
+    private volatile boolean transactionsInitialized = false;
+    private volatile boolean activeTransaction = false;
+
+    private InFlightMessageTracker tracker;
+
+    public PublisherLease(final Producer producer, final int maxMessageSize, final long maxAckWaitMillis, final ComponentLog logger,
+            final boolean useTransactions, final Pattern attributeNameRegex, final Charset headerCharacterSet) {
+        this.producer = producer;
+        this.maxMessageSize = maxMessageSize;
+        this.logger = logger;
+        this.maxAckWaitMillis = maxAckWaitMillis;
+        this.useTransactions = useTransactions;
+        this.attributeNameRegex = attributeNameRegex;
+        this.headerCharacterSet = headerCharacterSet;
+    }
+
+    protected void poison() {
+        this.poisoned = true;
+    }
+
+    public boolean isPoisoned() {
+        return poisoned;
+    }
+
+    void beginTransaction() {
+        if (!useTransactions) {
+            return;
+        }
+
+        if (!transactionsInitialized) {
+            producer.initTransactions();
+            transactionsInitialized = true;
+        }
+
+        producer.beginTransaction();
+        activeTransaction = true;
+    }
+
+    void rollback() {
+        if (!useTransactions || !activeTransaction) {
+            return;
+        }
+
+        producer.abortTransaction();
+        activeTransaction = false;
+    }
+
+    void fail(final FlowFile flowFile, final Exception cause) {
+        getTracker().fail(flowFile, cause);
+        rollback();
+    }
+
+    void publish(final FlowFile flowFile, final InputStream flowFileContent, final byte[] messageKey, 
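The beginTransaction()/rollback() pair quoted in the diff above maintains two flags: transactionsInitialized (Kafka's initTransactions() may only be called once per producer) and activeTransaction (abortTransaction() is only legal while a transaction is open). Here is a standalone sketch of that state handling, with the producer calls replaced by counters so the lifecycle can be exercised without a Kafka broker; it mirrors the quoted logic but is not the processor's actual code.

```java
public class TxLifecycle {
    private final boolean useTransactions = true;
    private boolean transactionsInitialized = false;
    private boolean activeTransaction = false;
    // Counters standing in for producer.initTransactions(),
    // producer.beginTransaction(), and producer.abortTransaction().
    int initCalls = 0, beginCalls = 0, abortCalls = 0;

    void beginTransaction() {
        if (!useTransactions) {
            return;
        }
        if (!transactionsInitialized) { // initTransactions() only once per producer
            initCalls++;
            transactionsInitialized = true;
        }
        beginCalls++;                   // producer.beginTransaction()
        activeTransaction = true;
    }

    void rollback() {
        if (!useTransactions || !activeTransaction) {
            return;                     // nothing to abort
        }
        abortCalls++;                   // producer.abortTransaction()
        activeTransaction = false;
    }

    public static void main(String[] args) {
        TxLifecycle lease = new TxLifecycle();
        lease.rollback();            // no active transaction: abort is skipped
        lease.beginTransaction();
        lease.beginTransaction();    // init happens only on the first begin
        lease.rollback();
        lease.rollback();            // second rollback is a no-op
        System.out.println(lease.initCalls + " " + lease.beginCalls + " " + lease.abortCalls);
    }
}
```

The guard flags are what keep a poisoned or reused lease from calling initTransactions() twice or aborting a transaction that was never opened.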

[jira] [Commented] (NIFI-2829) PutSQL assumes all Date and Time values are provided in Epoch

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177258#comment-16177258
 ] 

ASF GitHub Bot commented on NIFI-2829:
--

Github user jtstorck commented on the issue:

https://github.com/apache/nifi/pull/2082
  
Reviewing...


> PutSQL assumes all Date and Time values are provided in Epoch
> -
>
> Key: NIFI-2829
> URL: https://issues.apache.org/jira/browse/NIFI-2829
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Paul Gibeault
>Assignee: Peter Wicks
> Fix For: 1.4.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> This bug is the same as NIFI-2576 only extended to data types DATE and TIME.
> https://issues.apache.org/jira/browse/NIFI-2576
> When PutSQL sees a DATE or TIME data type it assumes that it's being provided 
> as a Long in Epoch format.
> This doesn't make much sense, since the Query Database tools that return Avro 
> return DATE and TIME values as strings; thus, following the 
> Avro -> JSON -> JSON-to-SQL route leads to DATE and TIME fields being strings.
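A hedged sketch of the behavior the ticket asks for: accept a DATE value either as epoch milliseconds (the historical PutSQL assumption) or as the string form that the Avro-to-JSON path produces. The method name and format handling below are illustrative, not the processor's actual implementation, which is configurable.

```java
import java.sql.Date;
import java.time.LocalDate;

public class DateParam {
    // Try the epoch-millis interpretation first; fall back to parsing
    // the ISO date string that QueryDatabaseTable-style flows emit.
    static Date toSqlDate(String value) {
        try {
            return new Date(Long.parseLong(value));       // e.g. "1506038400000"
        } catch (NumberFormatException e) {
            return Date.valueOf(LocalDate.parse(value));  // e.g. "2017-09-22"
        }
    }

    public static void main(String[] args) {
        System.out.println(toSqlDate("2017-09-22"));      // string form accepted
        System.out.println(toSqlDate("0").getTime());     // epoch form still works
    }
}
```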





[GitHub] nifi issue #2082: NIFI-2829: Fixed PutSQL time unit test.

2017-09-22 Thread jtstorck
Github user jtstorck commented on the issue:

https://github.com/apache/nifi/pull/2082
  
Reviewing...


---


[jira] [Commented] (NIFI-2829) PutSQL assumes all Date and Time values are provided in Epoch

2017-09-22 Thread Jeff Storck (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177248#comment-16177248
 ] 

Jeff Storck commented on NIFI-2829:
---

Are there any status updates for [PR 
2082|https://github.com/apache/nifi/pull/2082]?  Can we get this JIRA closed 
out?

> PutSQL assumes all Date and Time values are provided in Epoch
> -
>
> Key: NIFI-2829
> URL: https://issues.apache.org/jira/browse/NIFI-2829
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Paul Gibeault
>Assignee: Peter Wicks
> Fix For: 1.4.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> This bug is the same as NIFI-2576 only extended to data types DATE and TIME.
> https://issues.apache.org/jira/browse/NIFI-2576
> When PutSQL sees a DATE or TIME data type it assumes that it's being provided 
> as a Long in Epoch format.
> This doesn't make much sense, since the Query Database tools that return Avro 
> return DATE and TIME values as strings; thus, following the 
> Avro -> JSON -> JSON-to-SQL route leads to DATE and TIME fields being strings.





[jira] [Commented] (NIFI-4289) Implement put processor for InfluxDB

2017-09-22 Thread Jeff Storck (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177239#comment-16177239
 ] 

Jeff Storck commented on NIFI-4289:
---

Removed the fix version; the linked PR has not been reviewed/accepted in time 
for the 1.4.0 release.

> Implement put processor for InfluxDB
> 
>
> Key: NIFI-4289
> URL: https://issues.apache.org/jira/browse/NIFI-4289
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.3.0
> Environment: All
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: insert, measurements, put, timeseries
>
> Support inserting time series measurements into InfluxDB.





[jira] [Updated] (NIFI-4289) Implement put processor for InfluxDB

2017-09-22 Thread Jeff Storck (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Storck updated NIFI-4289:
--
Fix Version/s: (was: 1.4.0)

> Implement put processor for InfluxDB
> 
>
> Key: NIFI-4289
> URL: https://issues.apache.org/jira/browse/NIFI-4289
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.3.0
> Environment: All
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: insert, measurements, put, timeseries
>
> Support inserting time series measurements into InfluxDB.





[jira] [Commented] (NIFI-4333) Dockerize NiFi Toolkit

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177205#comment-16177205
 ] 

ASF GitHub Bot commented on NIFI-4333:
--

Github user jtstorck commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2161#discussion_r140604161
  
--- Diff: nifi-toolkit/nifi-toolkit-assembly/docker/Dockerfile ---
@@ -0,0 +1,48 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+FROM openjdk:8-jre-alpine
+LABEL maintainer "Apache NiFi <dev@nifi.apache.org>"
+
+ARG UID=1000
+ARG GID=1000
+ARG NIFI_TOOLKIT_VERSION=1.4.0-SNAPSHOT
--- End diff --

Will hardcoding the version here (and in Dockerfile.hub) cause extra 
maintenance for releases after 1.4.0?


> Dockerize NiFi Toolkit
> --
>
> Key: NIFI-4333
> URL: https://issues.apache.org/jira/browse/NIFI-4333
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Docker
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
> Fix For: 1.4.0
>
>
> It would be helpful to have a TLS Toolkit image to work in conjunction with 
> NiFi and would help in orchestration of clustered instances as well as 
> provisioning security items.
> As this is one assembly, we should support all tooling provided through one 
> image.





[GitHub] nifi pull request #2161: NIFI-4333: Providing Docker support of the NiFi Too...

2017-09-22 Thread jtstorck
Github user jtstorck commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2161#discussion_r140604161
  
--- Diff: nifi-toolkit/nifi-toolkit-assembly/docker/Dockerfile ---
@@ -0,0 +1,48 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+FROM openjdk:8-jre-alpine
+LABEL maintainer "Apache NiFi <dev@nifi.apache.org>"
+
+ARG UID=1000
+ARG GID=1000
+ARG NIFI_TOOLKIT_VERSION=1.4.0-SNAPSHOT
--- End diff --

Will hardcoding the version here (and in Dockerfile.hub) cause extra 
maintenance for releases after 1.4.0?


---


[jira] [Commented] (NIFI-4201) Add Processors for interacting with Kafka 0.11.x

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177184#comment-16177184
 ] 

ASF GitHub Bot commented on NIFI-4201:
--

Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/2024
  
Excuse me, but please review this if anyone could. I won't be able to 
finish reviewing within the next few days to make it available in the 1.4.0 timeline.


> Add Processors for interacting with Kafka 0.11.x
> 
>
> Key: NIFI-4201
> URL: https://issues.apache.org/jira/browse/NIFI-4201
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>






[GitHub] nifi issue #2024: NIFI-4201: Initial implementation of processors for intera...

2017-09-22 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/2024
  
Excuse me, but please review this if anyone could. I won't be able to 
finish reviewing within the next few days to make it available in the 1.4.0 timeline.


---


[jira] [Assigned] (NIFI-4410) PutElasticHttp

2017-09-22 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reassigned NIFI-4410:
--

Assignee: Matt Burgess

> PutElasticHttp
> --
>
> Key: NIFI-4410
> URL: https://issues.apache.org/jira/browse/NIFI-4410
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Joseph Witt
>Assignee: Matt Burgess
>
> https://github.com/apache/nifi/blob/6b5015e39b4233cf230151fb45bebcb21df03730/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttp.java#L364-L366
> If it cannot extract the reason text, it provides a very generic error and 
> nothing else is logged.  You get no context as to what went wrong, and the 
> condition doesn't cause yielding, so there is just a massive flood of errors 
> in the logs that don't advise the user of the problem.
> We need to make sure the information can be made available to help 
> troubleshoot, and we need to cause yielding so that such cases do not cause 
> continuous floods of errors.





[jira] [Updated] (NIFI-4367) InvokedScriptedProcessor

2017-09-22 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-4367:
--
Fix Version/s: (was: 1.4.0)

> InvokedScriptedProcessor
> 
>
> Key: NIFI-4367
> URL: https://issues.apache.org/jira/browse/NIFI-4367
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.3.0, 1.4.0
> Environment: Linux / Windows
>Reporter: Patrice Freydiere
>  Labels: InvokeScriptedProcessor, validation
> Attachments: 
> 0001-NIFI-4367-Fix-on-processor-for-permit-deriving-scrip.patch
>
>
> InvokeScriptedProcessor passes its ValidationContext to the inner script's 
> validate call (InvokeScriptedProcessor, line 465): final 
> Collection<ValidationResult> instanceResults = instance.validate(context);
>  
> The problem is that InvokeScriptedProcessor passes along the Script File 
> PropertyDescriptor, which then gets validated if the script derives from 
> AbstractConfigurableComponent (because AbstractConfigurableComponent 
> validates all of the context's properties).
> The context should be refined to remove the InvokeScriptedProcessor 
> properties.
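The suggested refinement amounts to filtering the host processor's own property descriptors out of the context before delegating validation to the scripted component. Below is a standalone sketch of that idea; the property names and the plain map representation are illustrative, and NiFi's real ValidationContext API differs.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class RefinedContext {
    // Drop the host processor's own properties (e.g. "Script File") before
    // handing the property map to the scripted component's validate().
    static Map<String, String> refine(Map<String, String> allProperties,
                                      Set<String> hostPropertyNames) {
        Map<String, String> refined = new HashMap<>(allProperties);
        refined.keySet().removeAll(hostPropertyNames);
        return refined;
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("Script File", "/opt/scripts/proc.groovy");
        props.put("My Custom Property", "42");
        Map<String, String> refined = refine(props, Set.of("Script File", "Script Body"));
        System.out.println(refined.size());                       // host property removed
        System.out.println(refined.containsKey("My Custom Property"));
    }
}
```

This way a script extending AbstractConfigurableComponent only ever validates the properties it actually declared.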





[jira] [Commented] (NIFI-4367) InvokedScriptedProcessor

2017-09-22 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177108#comment-16177108
 ] 

Joseph Witt commented on NIFI-4367:
---

removing fix version until we can get review traction on this.

> InvokedScriptedProcessor
> 
>
> Key: NIFI-4367
> URL: https://issues.apache.org/jira/browse/NIFI-4367
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.3.0, 1.4.0
> Environment: Linux / Windows
>Reporter: Patrice Freydiere
>  Labels: InvokeScriptedProcessor, validation
> Attachments: 
> 0001-NIFI-4367-Fix-on-processor-for-permit-deriving-scrip.patch
>
>
> InvokeScriptedProcessor passes its ValidationContext to the inner script's 
> validate call (InvokeScriptedProcessor, line 465): final 
> Collection<ValidationResult> instanceResults = instance.validate(context);
>  
> The problem is that InvokeScriptedProcessor passes along the Script File 
> PropertyDescriptor, which then gets validated if the script derives from 
> AbstractConfigurableComponent (because AbstractConfigurableComponent 
> validates all of the context's properties).
> The context should be refined to remove the InvokeScriptedProcessor 
> properties.





[jira] [Created] (NIFI-4410) PutElasticHttp

2017-09-22 Thread Joseph Witt (JIRA)
Joseph Witt created NIFI-4410:
-

 Summary: PutElasticHttp
 Key: NIFI-4410
 URL: https://issues.apache.org/jira/browse/NIFI-4410
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Joseph Witt


https://github.com/apache/nifi/blob/6b5015e39b4233cf230151fb45bebcb21df03730/nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttp.java#L364-L366

If it cannot extract the reason text, it provides a very generic error and there 
is nothing else logged.  You get no context as to what went wrong, and the 
condition doesn't cause yielding, so there is just a massive flood of errors in 
the logs that don't advise the user of the problem.

We need to make sure the information can be made available to help troubleshoot, 
and we need to cause yielding so that such cases do not cause continuous floods 
of errors.
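A hedged sketch of the two improvements being requested: always include some context in the error (falling back to the raw response body when no reason text can be extracted) and yield on persistent server-side failures. The method names and the status-code threshold are illustrative, not PutElasticsearchHttp's actual API.

```java
public class ErrorContext {
    // Build a log message that always carries some context: prefer the
    // parsed reason, fall back to the raw response body and status code.
    static String describeFailure(int statusCode, String extractedReason, String rawBody) {
        if (extractedReason != null && !extractedReason.isEmpty()) {
            return "Elasticsearch returned " + statusCode + ": " + extractedReason;
        }
        return "Elasticsearch returned " + statusCode
                + " with unparseable body: " + rawBody;
    }

    // Yield on server-side errors so a persistent failure does not
    // produce a continuous flood of identical log messages; in a real
    // processor this decision would drive context.yield().
    static boolean shouldYield(int statusCode) {
        return statusCode >= 500;
    }

    public static void main(String[] args) {
        System.out.println(describeFailure(503, null, "{\"err\":true}"));
        System.out.println(shouldYield(503));
    }
}
```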





[GitHub] nifi-minifi-cpp issue #137: MINIFI-372: Replace leveldb with RocksDB

2017-09-22 Thread phrocker
Github user phrocker commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/137
  
Just dawned on me that I missed editing the license file... will add that 
shortly. 


---


[jira] [Commented] (NIFI-3803) Allow use of up/down arrow keys during selection in Add Processor dialog

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176981#comment-16176981
 ] 

ASF GitHub Bot commented on NIFI-3803:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2170
  
Thanks for the PR @yuri1969! I had tried implementing this awhile back, 
allowing the user to tab into the table and then navigate from there. I wasn't 
thrilled with the solution, as it required the user to tab twice to get focus 
into the first visible row. I believe the double tab was required due to the 
'focusSink' that `SlickGrid` adds to the DOM [1]. This functionality is already 
in your master. In conjunction with your PR, I like the capability much better. 
Now the user can navigate from the filter field, and from the table if they get 
focus into the table via tab (albeit double tab) or by clicking on a row.

Just a couple of minor comments on the PR: 

- I believe `nfFilteredDialogCommon` needs to be included in 
nf-settings for the Add Reporting Task dialog.
- Also, I was wondering if it would be possible for pageDown/pageUp to 
select the first/last visible row after navigating. The behavior seems a little 
inconsistent currently. If the table has/had focus from clicking on a row, then 
the pageDown/pageUp behavior seems right. If it hasn't, pageDown/pageUp 
just navigates the view but does not change the selected row. It would be nice 
if the selected row changed with the view regardless of whether the navigation 
is triggered through `nfFilteredDialogCommon` or through the existing 
capabilities. Let me know if you need me to explain what I'm seeing further.

[1] 
https://github.com/mleibman/SlickGrid/blob/bf4705a96c40fea088216034def4d0256a335e65/slick.grid.js#L237


> Allow use of up/down arrow keys during selection in Add Processor dialog
> 
>
> Key: NIFI-3803
> URL: https://issues.apache.org/jira/browse/NIFI-3803
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.2.0
>Reporter: Andy LoPresto
>Priority: Trivial
>  Labels: ui
>
> In the Add Processor dialog, it would be convenient to be able to use the up 
> and down arrow keys to select the filtered results after typing the first few 
> characters of the desired component name. 





[GitHub] nifi issue #2170: NIFI-3803 - Allow use of up/down arrow keys...

2017-09-22 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2170
  
Thanks for the PR @yuri1969! I had tried implementing this a while back, 
allowing the user to tab into the table and then navigate from there. I wasn't 
thrilled with that solution, as it required the user to tab twice to get focus 
into the first visible row. I believe the double tab was required due to the 
'focusSink' that `SlickGrid` adds to the DOM [1]. This functionality is already 
in your master. In conjunction with your PR, I like the capability much better. 
Now the user can navigate from the filter field, and from the table if they get 
focus into the table via tab (albeit double tab) or by clicking on a row.

Just a couple minor comments on the PR. 

- I believe this `nfFilteredDialogCommon` needs to be included in 
nf-settings for the Add Reporting Task dialog.
- Also, I was wondering if it would be possible for pageDown/pageUp to 
select the first/last visible row after navigating. The behavior currently seems a 
little inconsistent. If the table has/had focus from clicking on a row, then 
the pageDown/pageUp behavior seems right. If it hasn't, pageDown/pageUp 
just navigates the view but does not change the selected row. It would be nice 
if the selected row changed with the view regardless of whether the navigation is 
triggered through `nfFilteredDialogCommon` or through the existing 
capabilities. Let me know if you need me to explain further what I'm seeing.

[1] 
https://github.com/mleibman/SlickGrid/blob/bf4705a96c40fea088216034def4d0256a335e65/slick.grid.js#L237


---


[GitHub] nifi-minifi-cpp pull request #137: MINIFI-372: Replace leveldb with RocksDB

2017-09-22 Thread phrocker
GitHub user phrocker opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/137

MINIFI-372: Replace leveldb with RocksDB

MINIFI-372: Update Cmake
MINIFI-372: Fix tests
MINIFI-372: Include deps
MINIFI-372: Rename data
MINIFI-372: Change cmake cxx flags
MINIFI-372: Fix linter issues

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFI-372

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/137.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #137


commit efc1de6de453c490b70a503783d116afec3e10f5
Author: Marc Parisi 
Date:   2017-08-15T15:34:12Z

MINIFI-372: Replace leveldb with RocksDB

MINIFI-372: Update Cmake
MINIFI-372: Fix tests
MINIFI-372: Include deps
MINIFI-372: Rename data
MINIFI-372: Change cmake cxx flags
MINIFI-372: Fix linter issues




---


[GitHub] nifi issue #2170: NIFI-3803 - Allow use of up/down arrow keys...

2017-09-22 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2170
  
Will review...


---


[jira] [Commented] (NIFI-3803) Allow use of up/down arrow keys during selection in Add Processor dialog

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176914#comment-16176914
 ] 

ASF GitHub Bot commented on NIFI-3803:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2170
  
Will review...


> Allow use of up/down arrow keys during selection in Add Processor dialog
> 
>
> Key: NIFI-3803
> URL: https://issues.apache.org/jira/browse/NIFI-3803
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.2.0
>Reporter: Andy LoPresto
>Priority: Trivial
>  Labels: ui
>
> In the Add Processor dialog, it would be convenient to be able to use the up 
> and down arrow keys to select the filtered results after typing the first few 
> characters of the desired component name. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4395) GenerateTableFetch can't fetch column type by state after instance reboot

2017-09-22 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4395:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> GenerateTableFetch can't fetch column type by state after instance reboot
> -
>
> Key: NIFI-4395
> URL: https://issues.apache.org/jira/browse/NIFI-4395
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Deon Huang
>Assignee: Deon Huang
> Fix For: 1.4.0
>
> Attachments: GenerateTableFetch_Exception.png
>
>
> The problem can easily be reproduced.
> Once GenerateTableFetch stores state and the NiFi instance reboots
> (with the table name set dynamically via expression language),
> the exception will occur.
> The error in the source code is listed below.
> ```
> if (type == null) {
> // This shouldn't happen as we are populating columnTypeMap when the 
> processor is scheduled or when the first maximum is observed
> throw new IllegalArgumentException("No column type found for: " + 
> colName);
> }
> ```
> When this situation happens, the FlowFile is also grabbed and can't be 
> released or observed.
> The processor can't recover the existing column type from *columnTypeMap* 
> across an instance reboot.
> Hence it will inevitably hit this exception, roll back the FlowFile, and never 
> succeed.
> The QueryDatabaseTable processor will not encounter this exception because it 
> runs setup(context) every time,
> while GenerateTableFetch will not pass the condition below and thus tries to 
> fetch the column type from an empty columnTypeMap.
> ---
> if (!isDynamicTableName && !isDynamicMaxValues) {
> super.setup(context);
> }
> ---
> I can take this issue if it is recognized as a bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
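The failure mode above is an in-memory map that is only populated at scheduling time, so it comes back empty after a reboot even though state exists. A minimal sketch of the shape of such a fix is shown below: lazily (re)populate the column-type map on first use instead of throwing. All names here are illustrative, not NiFi's actual API.

```java
import java.util.HashMap;
import java.util.Map;

public class LazyColumnTypeMap {
    private final Map<String, Integer> columnTypeMap = new HashMap<>();

    // Stand-in for querying database metadata for column types; a real
    // processor would issue a metadata query against the configured table.
    private Map<String, Integer> fetchTypesFromDatabase(String table) {
        Map<String, Integer> types = new HashMap<>();
        types.put(table + ".id", java.sql.Types.INTEGER);
        return types;
    }

    Integer columnType(String table, String column) {
        if (columnTypeMap.isEmpty()) {
            // After a reboot the in-memory map is empty even though state
            // exists on disk; re-run the setup step instead of failing.
            columnTypeMap.putAll(fetchTypesFromDatabase(table));
        }
        Integer type = columnTypeMap.get(table + "." + column);
        if (type == null) {
            throw new IllegalArgumentException("No column type found for: " + column);
        }
        return type;
    }

    public static void main(String[] args) {
        LazyColumnTypeMap m = new LazyColumnTypeMap();
        // java.sql.Types.INTEGER == 4
        System.out.println(m.columnType("orders", "id")); // 4
    }
}
```

Because the map is repopulated on demand, the FlowFile is no longer rolled back forever after a restart.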


[jira] [Commented] (NIFI-4395) GenerateTableFetch can't fetch column type by state after instance reboot

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176904#comment-16176904
 ] 

ASF GitHub Bot commented on NIFI-4395:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2166


> GenerateTableFetch can't fetch column type by state after instance reboot
> -
>
> Key: NIFI-4395
> URL: https://issues.apache.org/jira/browse/NIFI-4395
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Deon Huang
>Assignee: Deon Huang
> Fix For: 1.4.0
>
> Attachments: GenerateTableFetch_Exception.png
>
>
> The problem can easily be reproduced.
> Once GenerateTableFetch stores state and the NiFi instance reboots
> (with the table name set dynamically via expression language),
> the exception will occur.
> The error in the source code is listed below.
> ```
> if (type == null) {
> // This shouldn't happen as we are populating columnTypeMap when the 
> processor is scheduled or when the first maximum is observed
> throw new IllegalArgumentException("No column type found for: " + 
> colName);
> }
> ```
> When this situation happens, the FlowFile is also grabbed and can't be 
> released or observed.
> The processor can't recover the existing column type from *columnTypeMap* 
> across an instance reboot.
> Hence it will inevitably hit this exception, roll back the FlowFile, and never 
> succeed.
> The QueryDatabaseTable processor will not encounter this exception because it 
> runs setup(context) every time,
> while GenerateTableFetch will not pass the condition below and thus tries to 
> fetch the column type from an empty columnTypeMap.
> ---
> if (!isDynamicTableName && !isDynamicMaxValues) {
> super.setup(context);
> }
> ---
> I can take this issue if it is recognized as a bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2166: NIFI-4395 - GenerateTableFetch can't fetch column t...

2017-09-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2166


---


[jira] [Updated] (NIFI-4395) GenerateTableFetch can't fetch column type by state after instance reboot

2017-09-22 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4395:
---
Fix Version/s: 1.4.0

> GenerateTableFetch can't fetch column type by state after instance reboot
> -
>
> Key: NIFI-4395
> URL: https://issues.apache.org/jira/browse/NIFI-4395
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Deon Huang
>Assignee: Deon Huang
> Fix For: 1.4.0
>
> Attachments: GenerateTableFetch_Exception.png
>
>
> The problem can easily be reproduced.
> Once GenerateTableFetch stores state and the NiFi instance reboots
> (with the table name set dynamically via expression language),
> the exception will occur.
> The error in the source code is listed below.
> ```
> if (type == null) {
> // This shouldn't happen as we are populating columnTypeMap when the 
> processor is scheduled or when the first maximum is observed
> throw new IllegalArgumentException("No column type found for: " + 
> colName);
> }
> ```
> When this situation happens, the FlowFile is also grabbed and can't be 
> released or observed.
> The processor can't recover the existing column type from *columnTypeMap* 
> across an instance reboot.
> Hence it will inevitably hit this exception, roll back the FlowFile, and never 
> succeed.
> The QueryDatabaseTable processor will not encounter this exception because it 
> runs setup(context) every time,
> while GenerateTableFetch will not pass the condition below and thus tries to 
> fetch the column type from an empty columnTypeMap.
> ---
> if (!isDynamicTableName && !isDynamicMaxValues) {
> super.setup(context);
> }
> ---
> I can take this issue if it is recognized as a bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4395) GenerateTableFetch can't fetch column type by state after instance reboot

2017-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176901#comment-16176901
 ] 

ASF subversion and git services commented on NIFI-4395:
---

Commit a29348f2a4d8ec6576f1ec73c57911b827d46315 in nifi's branch 
refs/heads/master from [~Deon]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=a29348f ]

NIFI-4395 - GenerateTableFetch can't fetch column type by state after instance 
reboot

NIFI-4395: Updated unit test for GenerateTableFetch
Signed-off-by: Matthew Burgess 

This closes #2166


> GenerateTableFetch can't fetch column type by state after instance reboot
> -
>
> Key: NIFI-4395
> URL: https://issues.apache.org/jira/browse/NIFI-4395
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Deon Huang
>Assignee: Deon Huang
> Fix For: 1.4.0
>
> Attachments: GenerateTableFetch_Exception.png
>
>
> The problem can easily be reproduced.
> Once GenerateTableFetch stores state and the NiFi instance reboots
> (with the table name set dynamically via expression language),
> the exception will occur.
> The error in the source code is listed below.
> ```
> if (type == null) {
> // This shouldn't happen as we are populating columnTypeMap when the 
> processor is scheduled or when the first maximum is observed
> throw new IllegalArgumentException("No column type found for: " + 
> colName);
> }
> ```
> When this situation happens, the FlowFile is also grabbed and can't be 
> released or observed.
> The processor can't recover the existing column type from *columnTypeMap* 
> across an instance reboot.
> Hence it will inevitably hit this exception, roll back the FlowFile, and never 
> succeed.
> The QueryDatabaseTable processor will not encounter this exception because it 
> runs setup(context) every time,
> while GenerateTableFetch will not pass the condition below and thus tries to 
> fetch the column type from an empty columnTypeMap.
> ---
> if (!isDynamicTableName && !isDynamicMaxValues) {
> super.setup(context);
> }
> ---
> I can take this issue if it is recognized as a bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4395) GenerateTableFetch can't fetch column type by state after instance reboot

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176900#comment-16176900
 ] 

ASF GitHub Bot commented on NIFI-4395:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2166
  
The unit test doesn't pass when this PR is rebased against master, due to 
[NIFI-3484](https://issues.apache.org/jira/browse/NIFI-3484). I can fix this 
when I merge.

+1 LGTM, verified changes incorporated from @ijokarumawak 's review, built 
and ran (altered) unit tests, and tried with MySQL using multiple tables. Thank 
you for this fix! Merging to master.


> GenerateTableFetch can't fetch column type by state after instance reboot
> -
>
> Key: NIFI-4395
> URL: https://issues.apache.org/jira/browse/NIFI-4395
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Deon Huang
>Assignee: Deon Huang
> Fix For: 1.4.0
>
> Attachments: GenerateTableFetch_Exception.png
>
>
> The problem can easily be reproduced.
> Once GenerateTableFetch stores state and the NiFi instance reboots
> (with the table name set dynamically via expression language),
> the exception will occur.
> The error in the source code is listed below.
> ```
> if (type == null) {
> // This shouldn't happen as we are populating columnTypeMap when the 
> processor is scheduled or when the first maximum is observed
> throw new IllegalArgumentException("No column type found for: " + 
> colName);
> }
> ```
> When this situation happens, the FlowFile is also grabbed and can't be 
> released or observed.
> The processor can't recover the existing column type from *columnTypeMap* 
> across an instance reboot.
> Hence it will inevitably hit this exception, roll back the FlowFile, and never 
> succeed.
> The QueryDatabaseTable processor will not encounter this exception because it 
> runs setup(context) every time,
> while GenerateTableFetch will not pass the condition below and thus tries to 
> fetch the column type from an empty columnTypeMap.
> ---
> if (!isDynamicTableName && !isDynamicMaxValues) {
> super.setup(context);
> }
> ---
> I can take this issue if it is recognized as a bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2166: NIFI-4395 - GenerateTableFetch can't fetch column type by ...

2017-09-22 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2166
  
The unit test doesn't pass when this PR is rebased against master, due to 
[NIFI-3484](https://issues.apache.org/jira/browse/NIFI-3484). I can fix this 
when I merge.

+1 LGTM, verified changes incorporated from @ijokarumawak 's review, built 
and ran (altered) unit tests, and tried with MySQL using multiple tables. Thank 
you for this fix! Merging to master.


---


[GitHub] nifi issue #2134: NIFI-4357 Global improvement of XML unmarshalling

2017-09-22 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2134
  
Thanks @alopresto! I've made some minor adjustments to your PR to address 
the merge conflicts and the error I reported earlier. This has now been merged 
to master.


---


[jira] [Commented] (NIFI-4353) Improve template XML loading

2017-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176892#comment-16176892
 ] 

ASF subversion and git services commented on NIFI-4353:
---

Commit 9e2c7be7d3c6a380c5f61074d9a5a690b617c3dc in nifi's branch 
refs/heads/master from [~alopresto]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=9e2c7be ]

NIFI-4353
- Added XmlUtils class.
- Added unit test.
- Added XXE test resource.
- Refactored JAXB unmarshalling globally to prevent XXE attacks.
- Refactored duplicated/legacy code.
- Cleaned up commented code.
- Switched from FileInputStream back to StreamSource in AuthorizerFactoryBean.
- This closes #2134


> Improve template XML loading
> 
>
> Key: NIFI-4353
> URL: https://issues.apache.org/jira/browse/NIFI-4353
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
> Fix For: 1.4.0
>
>
> We should improve the template loading code (uses JAXBContext). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
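The commit above routes JAXB unmarshalling through a hardened XML reader to prevent XXE. A hedged sketch of that general technique is below, assuming a helper class of my own naming (not NiFi's actual `XmlUtils` API): build an `XMLStreamReader` from a factory that disables DTD support and external entity resolution, then hand that reader to the unmarshaller.

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class SafeXmlReader {
    // Create a stream reader that cannot process DTDs or resolve external
    // entities, which is what makes XXE payloads inert.
    public static XMLStreamReader create(String xml) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newFactory();
        factory.setProperty(XMLInputFactory.SUPPORT_DTD, false);
        factory.setProperty(XMLInputFactory.IS_SUPPORTING_EXTERNAL_ENTITIES, false);
        return factory.createXMLStreamReader(new StringReader(xml));
    }

    public static void main(String[] args) throws Exception {
        // Collect the character content of a small template-like document.
        XMLStreamReader reader = create("<template><name>demo</name></template>");
        StringBuilder text = new StringBuilder();
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.CHARACTERS) {
                text.append(reader.getText());
            }
        }
        System.out.println(text); // demo
    }
}
```

A JAXB `Unmarshaller` accepts such a reader directly (`unmarshaller.unmarshal(reader)`), so the hardening can be applied once and reused at every unmarshalling call site.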


[jira] [Commented] (NIFI-4357) Improve template XML loading globally

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176894#comment-16176894
 ] 

ASF GitHub Bot commented on NIFI-4357:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2134


> Improve template XML loading globally
> -
>
> Key: NIFI-4357
> URL: https://issues.apache.org/jira/browse/NIFI-4357
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
> Fix For: 1.4.0
>
>
> We should improve the template loading code (uses JAXBContext). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3803) Allow use of up/down arrow keys during selection in Add Processor dialog

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176876#comment-16176876
 ] 

ASF GitHub Bot commented on NIFI-3803:
--

GitHub user yuri1969 opened a pull request:

https://github.com/apache/nifi/pull/2170

NIFI-3803 - Allow use of up/down arrow keys...

...during selection in Add Processor dialog

* Added navigation logic to both Add Processor and Add CS dialogs.
* No extending to the SlickGrid library done.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yuri1969/nifi NIFI-3803

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2170.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2170


commit 153b260900789f69e05424439602c7636d1a28b6
Author: yuri1969 <1969yuri1...@gmail.com>
Date:   2017-09-18T20:58:59Z

NIFI-3803 - Allow use of up/down arrow keys...

...during selection in Add Processor dialog

* Added navigation logic to both Add Processor and Add CS dialogs.
* No extending to the SlickGrid library done.




> Allow use of up/down arrow keys during selection in Add Processor dialog
> 
>
> Key: NIFI-3803
> URL: https://issues.apache.org/jira/browse/NIFI-3803
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.2.0
>Reporter: Andy LoPresto
>Priority: Trivial
>  Labels: ui
>
> In the Add Processor dialog, it would be convenient to be able to use the up 
> and down arrow keys to select the filtered results after typing the first few 
> characters of the desired component name. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (MINIFICPP-67) Mergecontent processor for minifi-cpp

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-67?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176801#comment-16176801
 ] 

ASF GitHub Bot commented on MINIFICPP-67:
-

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/133


> Mergecontent processor for minifi-cpp
> -
>
> Key: MINIFICPP-67
> URL: https://issues.apache.org/jira/browse/MINIFICPP-67
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Karthik Narayanan
>
> A simpler processor than the nifi merge content processor. It should support 
> at least binary concatenation. it will basically allow a flow running in 
> minifi to group several events at a time and send them to nifi, to better 
> utilize the network bandwidth. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-minifi-cpp pull request #133: MINIFICPP-67: Merge Content processor

2017-09-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/133


---


[jira] [Updated] (NIFI-4395) GenerateTableFetch can't fetch column type by state after instance reboot

2017-09-22 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4395:
---
Fix Version/s: (was: 1.4.0)
   Status: Patch Available  (was: Open)

> GenerateTableFetch can't fetch column type by state after instance reboot
> -
>
> Key: NIFI-4395
> URL: https://issues.apache.org/jira/browse/NIFI-4395
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Deon Huang
>Assignee: Deon Huang
> Attachments: GenerateTableFetch_Exception.png
>
>
> The problem can easily be reproduced.
> Once GenerateTableFetch stores state and the NiFi instance reboots
> (with the table name set dynamically via expression language),
> the exception will occur.
> The error in the source code is listed below.
> ```
> if (type == null) {
> // This shouldn't happen as we are populating columnTypeMap when the 
> processor is scheduled or when the first maximum is observed
> throw new IllegalArgumentException("No column type found for: " + 
> colName);
> }
> ```
> When this situation happens, the FlowFile is also grabbed and can't be 
> released or observed.
> The processor can't recover the existing column type from *columnTypeMap* 
> across an instance reboot.
> Hence it will inevitably hit this exception, roll back the FlowFile, and never 
> succeed.
> The QueryDatabaseTable processor will not encounter this exception because it 
> runs setup(context) every time,
> while GenerateTableFetch will not pass the condition below and thus tries to 
> fetch the column type from an empty columnTypeMap.
> ---
> if (!isDynamicTableName && !isDynamicMaxValues) {
> super.setup(context);
> }
> ---
> I can take this issue if it is recognized as a bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4345) Add a controller service and a lookup service for MongoDB

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176783#comment-16176783
 ] 

ASF GitHub Bot commented on NIFI-4345:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2123


> Add a controller service and a lookup service for MongoDB
> -
>
> Key: NIFI-4345
> URL: https://issues.apache.org/jira/browse/NIFI-4345
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
> Fix For: 1.4.0
>
>
> - Create a Controller Service that wraps the functionality of the Mongo 
> driver.
> - Create a lookup service that can return elements based on a query.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4345) Add a controller service and a lookup service for MongoDB

2017-09-22 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4345:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add a controller service and a lookup service for MongoDB
> -
>
> Key: NIFI-4345
> URL: https://issues.apache.org/jira/browse/NIFI-4345
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
> Fix For: 1.4.0
>
>
> - Create a Controller Service that wraps the functionality of the Mongo 
> driver.
> - Create a lookup service that can return elements based on a query.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4345) Add a controller service and a lookup service for MongoDB

2017-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176781#comment-16176781
 ] 

ASF subversion and git services commented on NIFI-4345:
---

Commit 9a8e6b2eb150865361dda241d71405c5a969f5e8 in nifi's branch 
refs/heads/master from [~mike.thomsen]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=9a8e6b2 ]

NIFI-4345 Added a MongoDB controller service and a lookup service.

NIFI-4345: Changed Lookup Key to Lookup Value Field
Signed-off-by: Matthew Burgess 

This closes #2123


> Add a controller service and a lookup service for MongoDB
> -
>
> Key: NIFI-4345
> URL: https://issues.apache.org/jira/browse/NIFI-4345
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>
> - Create a Controller Service that wraps the functionality of the Mongo 
> driver.
> - Create a lookup service that can return elements based on a query.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4345) Add a controller service and a lookup service for MongoDB

2017-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176782#comment-16176782
 ] 

ASF subversion and git services commented on NIFI-4345:
---

Commit 9a8e6b2eb150865361dda241d71405c5a969f5e8 in nifi's branch 
refs/heads/master from [~mike.thomsen]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=9a8e6b2 ]

NIFI-4345 Added a MongoDB controller service and a lookup service.

NIFI-4345: Changed Lookup Key to Lookup Value Field
Signed-off-by: Matthew Burgess 

This closes #2123


> Add a controller service and a lookup service for MongoDB
> -
>
> Key: NIFI-4345
> URL: https://issues.apache.org/jira/browse/NIFI-4345
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>
> - Create a Controller Service that wraps the functionality of the Mongo 
> driver.
> - Create a lookup service that can return elements based on a query.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4345) Add a controller service and a lookup service for MongoDB

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176772#comment-16176772
 ] 

ASF GitHub Bot commented on NIFI-4345:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2123
  
I took the liberty of changing the Lookup Key property to Lookup Value 
Field, and updated variables and unit tests and such. +1 LGTM, built and ran 
unit tests, also tried a flow with LookupRecord with various settings (Lookup 
Value Field set and not set, Insert Entire Record and Insert Record Fields, 
etc.).  Thank you for the feature! Merging (after rebasing with my addition) to 
master


> Add a controller service and a lookup service for MongoDB
> -
>
> Key: NIFI-4345
> URL: https://issues.apache.org/jira/browse/NIFI-4345
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>
> - Create a Controller Service that wraps the functionality of the Mongo 
> driver.
> - Create a lookup service that can return elements based on a query.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2123: NIFI-4345 Added a MongoDB controller service and a lookup ...

2017-09-22 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2123
  
I took the liberty of changing the Lookup Key property to Lookup Value 
Field, and updated variables and unit tests and such. +1 LGTM, built and ran 
unit tests, also tried a flow with LookupRecord with various settings (Lookup 
Value Field set and not set, Insert Entire Record and Insert Record Fields, 
etc.).  Thank you for the feature! Merging (after rebasing with my addition) to 
master


---


[jira] [Commented] (NIFI-3116) Remove Jasypt library

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176702#comment-16176702
 ] 

ASF GitHub Bot commented on NIFI-3116:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2108
  
I have verified that when building an older nifi flow with sensitive 
properties I can bring it into the new non-jasypted nifi.  So that is great.

I found Jasypt L&N entries in the following:

- nifi-assembly/NOTICE  This should be removed.  I realize you still have 
it for test but our assembly notice won't need it for test cases since we're not 
bundling those binary deps in that case.
- nifi-nar-bundles/nifi-framework-nar/NOTICE  Can be removed from here now 
too

If you can please rebase to resolve the conflict, address the test failures 
for environments like mine which do not have unlimited JCE installed, and fix 
these L&N entries, I think we're in good shape.

Thanks
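
The replacement direction this ticket argues for (centralized symmetric encryption on Java primitives rather than Jasypt's hard-coded salt and derivation choices) could be sketched roughly as below. This is an illustration, not NiFi's actual implementation: it uses PBKDF2 with an explicit, tunable iteration count and a 128-bit AES key, which also sidesteps the unlimited-JCE-policy issue mentioned above. Class and method names are mine.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class PbeSketch {
    // Derive an AES-128 key from the password with PBKDF2; every parameter
    // (salt, IV, iteration count) is explicit rather than hidden in a library.
    private static byte[] deriveKey(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 160_000, 128);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
    }

    // Returns {salt, iv, ciphertext}; salt and IV are random per message.
    static byte[][] encrypt(char[] password, byte[] plaintext) throws Exception {
        SecureRandom rng = new SecureRandom();
        byte[] salt = new byte[16];
        byte[] iv = new byte[16];
        rng.nextBytes(salt);
        rng.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE,
                new SecretKeySpec(deriveKey(password, salt), "AES"),
                new IvParameterSpec(iv));
        return new byte[][]{salt, iv, cipher.doFinal(plaintext)};
    }

    static byte[] decrypt(char[] password, byte[] salt, byte[] iv, byte[] ct) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.DECRYPT_MODE,
                new SecretKeySpec(deriveKey(password, salt), "AES"),
                new IvParameterSpec(iv));
        return cipher.doFinal(ct);
    }

    public static void main(String[] args) throws Exception {
        byte[][] out = encrypt("s3cret".toCharArray(), "hello".getBytes(StandardCharsets.UTF_8));
        byte[] back = decrypt("s3cret".toCharArray(), out[0], out[1], out[2]);
        System.out.println(new String(back, StandardCharsets.UTF_8)); // hello
    }
}
```

Because nothing here is final or hidden, the key-derivation parameters can evolve over time and the class can be tested in isolation, addressing the testability complaint in the issue description.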


> Remove Jasypt library
> -
>
> Key: NIFI-3116
> URL: https://issues.apache.org/jira/browse/NIFI-3116
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.1.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>  Labels: encryption, kdf, pbe, security
>
> The [Jasypt|http://www.jasypt.org/index.html] library is used internally by 
> NiFi for String encryption operations (specifically password-based encryption 
> (PBE) in {{EncryptContent}} and sensitive processor property protection). I 
> feel there are a number of reasons to remove this library from NiFi and 
> provide centralized symmetric encryption operations using Java cryptographic 
> primitives (and BouncyCastle features where necessary). 
> * The library was last updated February 25, 2014. For comparison, 
> BouncyCastle has been [updated 5 
> times|https://www.bouncycastle.org/releasenotes.html] since then
> * {{StandardPBEStringEncryptor}}, the high-level class wrapped by NiFi's 
> {{StringEncryptor}} is final. This makes it, and features relying on it, 
> difficult to test in isolation
> * Jasypt encapsulates many decisions about {{Cipher}} configuration, 
> specifically salt-generation strategy. This can be a valuable feature for 
> pluggable libraries, but is less than ideal when dealing with encryption and 
> key derivation, which are in constant struggle with evolving attacks and 
> improving hardware. There are hard-coded constants which are not compatible 
> with better decisions available now (i.e. requiring custom implementations of 
> the {{SaltGenerator}} interface to provide new derivations). The existence of 
> these values was opaque to NiFi and led to serious compatibility issues 
> [NIFI-1259], [NIFI-1257], [NIFI-1242], [NIFI-1463], [NIFI-1465], [NIFI-3024]
> * {{StringEncryptor}}, the NiFi class wrapping {{StandardPBEStringEncryptor}} 
> is also final and does not expose methods to instantiate it with only the 
> relevant values (i.e. {{algorithm}}, {{provider}}, and {{password}}) but 
> rather requires an entire {{NiFiProperties}} instance. 
> * {{StringEncryptor.createEncryptor()}} performs an unnecessary "validation 
> check" on instantiation, which was one cause of reported issues where a 
> secure node/cluster blocks on startup on VMs due to lack of entropy in 
> {{/dev/random}}
> * The use of custom salts with PBE means that the internal {{Cipher}} object 
> must be re-created and initialized and the key re-derived from the password 
> on every decryption call. Symmetric keyed encryption with a strong KDF (order 
> of magnitude higher iterations of a stronger algorithm) and unique 
> initialization vector (IV) values would be substantially more resistant to 
> brute force attacks and yet more performant at scale. 
> I have already implemented backwards-compatible code to perform the actions 
> of symmetric key encryption using keys derived from passwords in both the 
> {{ConfigEncryptionTool}} and {{OpenSSLPKCS5CipherProvider}} and 
> {{NiFiLegacyCipherProvider}} classes, which empirical tests confirm are 
> compatible with the Jasypt output. 
> Additional research on some underlying/related issues:
> * [Why does Java allow AES-256 bit encryption on systems without JCE 
> unlimited strength policies if using 
> PBE?|https://security.stackexchange.com/questions/107321/why-does-java-allow-aes-256-bit-encryption-on-systems-without-jce-unlimited-stre]
> * [How To Decrypt OpenSSL-encrypted Data In Apache 
> NiFi|https://community.hortonworks.com/articles/5319/how-to-decrypt-openssl-encrypted-data-in-apache-ni.html]
> * [d...@nifi.apache.org "Passwords in 
> EncryptContent"|https://lists.apache.org/thread.html/b93ced98eff6a77dd0a2a2f0b5785ef42a3b02de2cee5c17607a8c49@%3Cdev.nifi.apache.org%3E]
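
The keyed-encryption approach argued for above -- derive the key once with a strong, expensive KDF, then use only a fresh IV per message instead of re-deriving the key on every call -- can be sketched with the Python standard library's PBKDF2 primitive. This is an illustrative stand-in, not NiFi's actual (Java) implementation; the iteration count, key length, and function names are assumptions.

```python
import hashlib
import os

def derive_key(password: str, salt: bytes, iterations: int = 160_000) -> bytes:
    """Derive a 256-bit symmetric key from a password with PBKDF2-HMAC-SHA256.

    Unlike per-call PBE, this expensive derivation runs once up front; each
    subsequent encryption only needs a fresh random IV, not a re-derived key.
    """
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                               salt, iterations, dklen=32)

def new_iv() -> bytes:
    # A unique IV per encryption call; 12 bytes is the conventional size
    # for AES-GCM.
    return os.urandom(12)

salt = os.urandom(16)          # stored alongside the ciphertext, not secret
key = derive_key("correct horse battery staple", salt)
print(len(key), len(new_iv()))  # 32 12
```

The point of the sketch is the cost model: the KDF is deliberately slow (resisting brute force), but it is paid once per key rather than once per decryption as with Jasypt-style PBE.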



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)



[jira] [Commented] (NIFI-4409) QueryRecord should pass to the Record Writer the schema based on the ResultSet, not the record as it was read

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176696#comment-16176696
 ] 

ASF GitHub Bot commented on NIFI-4409:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2169


> QueryRecord should pass to the Record Writer the schema based on the 
> ResultSet, not the record as it was read
> -
>
> Key: NIFI-4409
> URL: https://issues.apache.org/jira/browse/NIFI-4409
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> When a Record is read for QueryRecord, the schema is passed to the writer as 
> the 'read schema'. However, the resultant data is likely not to follow that 
> schema because of the SELECT clause. We should instead provide the schema 
> that is provided by the ResultSet to the writer.
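
The distinction can be seen with plain SQL outside NiFi: the schema of a query's result set is defined by the SELECT clause, not by the schema the rows were read with. A minimal sketch using Python's sqlite3 module (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT, age INTEGER, city TEXT)")
conn.executemany("INSERT INTO person VALUES (?, ?, ?)",
                 [("alice", 30, "nyc"), ("bob", 25, "nyc")])

def result_schema(cursor):
    # The writer should see this schema -- derived from the result set --
    # not the three-column schema the rows were originally read with.
    return [col[0] for col in cursor.description]

cur = conn.execute(
    "SELECT city, COUNT(*) AS population FROM person GROUP BY city")
print(result_schema(cur))  # ['city', 'population']
```

Handing the writer the reader's `(name, age, city)` schema here would be exactly the mismatch the issue describes.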



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4409) QueryRecord should pass to the Record Writer the schema based on the ResultSet, not the record as it was read

2017-09-22 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4409:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> QueryRecord should pass to the Record Writer the schema based on the 
> ResultSet, not the record as it was read
> -
>
> Key: NIFI-4409
> URL: https://issues.apache.org/jira/browse/NIFI-4409
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> When a Record is read for QueryRecord, the schema is passed to the writer as 
> the 'read schema'. However, the resultant data is likely not to follow that 
> schema because of the SELECT clause. We should instead provide the schema 
> that is provided by the ResultSet to the writer.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Updated] (NIFI-4409) QueryRecord should pass to the Record Writer the schema based on the ResultSet, not the record as it was read

2017-09-22 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4409:
---
Fix Version/s: 1.4.0

> QueryRecord should pass to the Record Writer the schema based on the 
> ResultSet, not the record as it was read
> -
>
> Key: NIFI-4409
> URL: https://issues.apache.org/jira/browse/NIFI-4409
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> When a Record is read for QueryRecord, the schema is passed to the writer as 
> the 'read schema'. However, the resultant data is likely not to follow that 
> schema because of the SELECT clause. We should instead provide the schema 
> that is provided by the ResultSet to the writer.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4409) QueryRecord should pass to the Record Writer the schema based on the ResultSet, not the record as it was read

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176693#comment-16176693
 ] 

ASF GitHub Bot commented on NIFI-4409:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2169
  
+1 LGTM, built and ran unit tests, and tried two queries with different 
result sets, verified the different schemas and values were output correctly. 
Thanks for the improvement! Merging to master


> QueryRecord should pass to the Record Writer the schema based on the 
> ResultSet, not the record as it was read
> -
>
> Key: NIFI-4409
> URL: https://issues.apache.org/jira/browse/NIFI-4409
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> When a Record is read for QueryRecord, the schema is passed to the writer as 
> the 'read schema'. However, the resultant data is likely not to follow that 
> schema because of the SELECT clause. We should instead provide the schema 
> that is provided by the ResultSet to the writer.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4409) QueryRecord should pass to the Record Writer the schema based on the ResultSet, not the record as it was read

2017-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176694#comment-16176694
 ] 

ASF subversion and git services commented on NIFI-4409:
---

Commit 05b5dd14884c3ce27f9978f84b357397f1d6f20a in nifi's branch 
refs/heads/master from [~markap14]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=05b5dd1 ]

NIFI-4409: Ensure that when record schema is inherited, the schema from the 
ResultSet is used instead of the schema from the RecordReader because the 
schema from the RecordReader may not match the actual data

Signed-off-by: Matthew Burgess 

This closes #2169


> QueryRecord should pass to the Record Writer the schema based on the 
> ResultSet, not the record as it was read
> -
>
> Key: NIFI-4409
> URL: https://issues.apache.org/jira/browse/NIFI-4409
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.4.0
>
>
> When a Record is read for QueryRecord, the schema is passed to the writer as 
> the 'read schema'. However, the resultant data is likely not to follow that 
> schema because of the SELECT clause. We should instead provide the schema 
> that is provided by the ResultSet to the writer.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (NIFI-4409) QueryRecord should pass to the Record Writer the schema based on the ResultSet, not the record as it was read

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176671#comment-16176671
 ] 

ASF GitHub Bot commented on NIFI-4409:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2169
  
Reviewing...


> QueryRecord should pass to the Record Writer the schema based on the 
> ResultSet, not the record as it was read
> -
>
> Key: NIFI-4409
> URL: https://issues.apache.org/jira/browse/NIFI-4409
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
>
> When a Record is read for QueryRecord, the schema is passed to the writer as 
> the 'read schema'. However, the resultant data is likely not to follow that 
> schema because of the SELECT clause. We should instead provide the schema 
> that is provided by the ResultSet to the writer.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[GitHub] nifi pull request #2123: NIFI-4345 Added a MongoDB controller service and a ...

2017-09-22 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2123#discussion_r140537987
  
--- Diff: pom.xml ---
@@ -1109,7 +1109,7 @@
 
 
 org.apache.nifi
-nifi-kudu-nar
--- End diff --

This is still an issue with the latest PR


---


[jira] [Commented] (NIFI-4345) Add a controller service and a lookup service for MongoDB

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176618#comment-16176618
 ] 

ASF GitHub Bot commented on NIFI-4345:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2123#discussion_r140531630
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-mongodb-services-bundle/nifi-mongodb-services/src/main/java/org/apache/nifi/mongodb/MongoDBLookupService.java
 ---
@@ -0,0 +1,130 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.mongodb;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.lookup.LookupFailureException;
+import org.apache.nifi.lookup.LookupService;
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.serialization.SimpleRecordSchema;
+import org.apache.nifi.serialization.record.MapRecord;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordField;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.bson.Document;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+
+
+@Tags({"mongo", "mongodb", "lookup", "record"})
+@CapabilityDescription(
+"Provides a lookup service based around MongoDB. Each key that is 
specified \n" +
+"will be added to a query as-is. For example, if you specify the two 
keys, \n" +
+"user and email, the resulting query will be { \"user\": \"tester\", 
\"email\": \"tes...@test.com\" }.\n" +
+"The query is limited to the first result (findOne in the Mongo 
documentation). If no \"lookup key\" is specified " +
+"then the entire Mongo result document minus the _id field will be 
returned as a record and merged."
+)
+public class MongoDBLookupService extends MongoDBControllerService 
implements LookupService {
+
+public static final PropertyDescriptor LOOKUP_KEY = new 
PropertyDescriptor.Builder()
--- End diff --

This doesn't appear to be used as a lookup key; it looks like a Lookup 
Value Field or something -- the name of the field whose value is returned when 
the lookup key matches a record. Can you explain a bit more about this 
property, how it's used, etc., not just in the PR but in the documentation? I was 
quite confused about its usage and wasn't able to test it successfully until I 
configured it as a Lookup Value Field
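
For reference, the query semantics the capability description promises -- every supplied key/value pair ANDed into the query as-is, findOne (first match only), and the `_id` field dropped from the returned record -- can be mimicked in a few lines of pure Python. The real service uses the MongoDB Java driver; the documents and field names below are illustrative stand-ins.

```python
def find_one(collection, coordinates):
    """Return the first document matching every key/value pair, minus _id."""
    for doc in collection:
        # Each lookup coordinate is matched as-is, all ANDed together,
        # mirroring a Mongo query like {"user": ..., "email": ...}.
        if all(doc.get(k) == v for k, v in coordinates.items()):
            return {k: v for k, v in doc.items() if k != "_id"}
    return None  # no match: the lookup yields nothing

docs = [
    {"_id": 1, "user": "tester", "email": "tester@example.com", "role": "admin"},
    {"_id": 2, "user": "other", "email": "other@example.com", "role": "user"},
]
print(find_one(docs, {"user": "tester", "email": "tester@example.com"}))
```

The review question above is about the extra property layered on top of these semantics: whether it selects a single field out of the matched document rather than acting as another query key.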


> Add a controller service and a lookup service for MongoDB
> -
>
> Key: NIFI-4345
> URL: https://issues.apache.org/jira/browse/NIFI-4345
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>
> - Create a Controller Service that wraps the functionality of the Mongo 
> driver.
> - Create a lookup service that can return elements based on a query.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (NIFI-4403) Add group name in bulletin data

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176611#comment-16176611
 ] 

ASF GitHub Bot commented on NIFI-4403:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2167


> Add group name in bulletin data
> ---
>
> Key: NIFI-4403
> URL: https://issues.apache.org/jira/browse/NIFI-4403
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
> Fix For: 1.4.0
>
>
> At the moment a bulletin includes the following information:
> - timestamp
> - id
> - nodeAddress
> - level
> - category
>   - message
> - groupId
>  - sourceId
>  - sourceName
>  - sourceType
> When S2S is used to redirect bulletins to external monitoring tools it'd be 
> useful to also indicate the group name in addition to the group id.
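
As a sketch of the requested change, a bulletin serialized for Site-to-Site would simply carry one extra field alongside groupId. The field names below mirror the list above; `groupName` is the proposed addition, and the helper itself is hypothetical (the real model is Java).

```python
def bulletin_record(group_id, group_name, source_id, source_name, message,
                    level="ERROR", category="Log Message"):
    # groupName is the proposed new field; the rest follow the existing model.
    return {
        "level": level,
        "category": category,
        "message": message,
        "groupId": group_id,
        "groupName": group_name,   # new: human-readable, for monitoring tools
        "sourceId": source_id,
        "sourceName": source_name,
    }

b = bulletin_record("abc-123", "Ingest Flow", "proc-1", "PublishKafka",
                    "send failed")
print(b["groupName"])  # Ingest Flow
```

With only groupId available, an external monitoring tool has to resolve the UUID back through the NiFi API; carrying the name in the bulletin avoids that round trip.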



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4403) Add group name in bulletin data

2017-09-22 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-4403:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add group name in bulletin data
> ---
>
> Key: NIFI-4403
> URL: https://issues.apache.org/jira/browse/NIFI-4403
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
> Fix For: 1.4.0
>
>
> At the moment a bulletin includes the following information:
> - timestamp
> - id
> - nodeAddress
> - level
> - category
>   - message
> - groupId
>  - sourceId
>  - sourceName
>  - sourceType
> When S2S is used to redirect bulletins to external monitoring tools it'd be 
> useful to also indicate the group name in addition to the group id.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4403) Add group name in bulletin data

2017-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176610#comment-16176610
 ] 

ASF subversion and git services commented on NIFI-4403:
---

Commit 50d018566db4d2c6f97995dcde7a180d7e154ae6 in nifi's branch 
refs/heads/master from [~pvillard]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=50d0185 ]

NIFI-4403 - add group name to bulletins model. This closes #2167


> Add group name in bulletin data
> ---
>
> Key: NIFI-4403
> URL: https://issues.apache.org/jira/browse/NIFI-4403
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
> Fix For: 1.4.0
>
>
> At the moment a bulletin includes the following information:
> - timestamp
> - id
> - nodeAddress
> - level
> - category
>   - message
> - groupId
>  - sourceId
>  - sourceName
>  - sourceType
> When S2S is used to redirect bulletins to external monitoring tools it'd be 
> useful to also indicate the group name in addition to the group id.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4403) Add group name in bulletin data

2017-09-22 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-4403:
--
Fix Version/s: 1.4.0

> Add group name in bulletin data
> ---
>
> Key: NIFI-4403
> URL: https://issues.apache.org/jira/browse/NIFI-4403
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Minor
> Fix For: 1.4.0
>
>
> At the moment a bulletin includes the following information:
> - timestamp
> - id
> - nodeAddress
> - level
> - category
>   - message
> - groupId
>  - sourceId
>  - sourceName
>  - sourceType
> When S2S is used to redirect bulletins to external monitoring tools it'd be 
> useful to also indicate the group name in addition to the group id.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2167: NIFI-4403 - add group name to bulletins model & S2S...

2017-09-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2167


---


[jira] [Commented] (MINIFICPP-67) Mergecontent processor for minifi-cpp

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-67?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176581#comment-16176581
 ] 

ASF GitHub Bot commented on MINIFICPP-67:
-

Github user minifirocks commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/133
  
@phrocker thanks for the help


> Mergecontent processor for minifi-cpp
> -
>
> Key: MINIFICPP-67
> URL: https://issues.apache.org/jira/browse/MINIFICPP-67
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Karthik Narayanan
>
> A simpler processor than the nifi merge content processor. It should support 
> at least binary concatenation. it will basically allow a flow running in 
> minifi to group several events at a time and send them to nifi, to better 
> utilize the network bandwidth. 
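
Binary concatenation as described -- batching several small payloads into one flow file to save network round trips -- reduces to joining byte buffers, optionally with a demarcator between them. A hedged Python sketch of that semantics (the actual processor is C++, and the function name here is invented):

```python
def merge_content(payloads, demarcator=b""):
    """Concatenate payloads in arrival order, separated by an optional demarcator."""
    return demarcator.join(payloads)

# Three small events become one payload for a single S2S transfer.
events = [b'{"t":1}', b'{"t":2}', b'{"t":3}']
merged = merge_content(events, demarcator=b"\n")
print(merged)  # b'{"t":1}\n{"t":2}\n{"t":3}'
```

The trade-off is latency for throughput: events wait until a batch fills (or a timeout fires), but each network send amortizes its overhead over many events.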



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (NIFI-4409) QueryRecord should pass to the Record Writer the schema based on the ResultSet, not the record as it was read

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176566#comment-16176566
 ] 

ASF GitHub Bot commented on NIFI-4409:
--

GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/2169

NIFI-4409: Ensure that when record schema is inherited, the schema fr…

…om the ResultSet is used instead of the schema from the RecordReader 
because the schema from the RecordReader may not match the actual data

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-4409

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2169.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2169


commit c6a89c1c068038f2740cf4b2ad4a42983bbf7480
Author: Mark Payne 
Date:   2017-09-22T15:08:22Z

NIFI-4409: Ensure that when record schema is inherited, the schema from the 
ResultSet is used instead of the schema from the RecordReader because the 
schema from the RecordReader may not match the actual data




> QueryRecord should pass to the Record Writer the schema based on the 
> ResultSet, not the record as it was read
> -
>
> Key: NIFI-4409
> URL: https://issues.apache.org/jira/browse/NIFI-4409
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Payne
>Assignee: Mark Payne
>
> When a Record is read for QueryRecord, the schema is passed to the writer as 
> the 'read schema'. However, the resultant data is likely not to follow that 
> schema because of the SELECT clause. We should instead provide the schema 
> that is provided by the ResultSet to the writer.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2169: NIFI-4409: Ensure that when record schema is inheri...

2017-09-22 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/2169

NIFI-4409: Ensure that when record schema is inherited, the schema fr…

…om teh ResultSet is used instead of the schema from the RecordReader 
because the schema from the RecordReader mmay not match the actual data

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI- where  is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-4409

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2169.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2169


commit c6a89c1c068038f2740cf4b2ad4a42983bbf7480
Author: Mark Payne 
Date:   2017-09-22T15:08:22Z

NIFI-4409: Ensure that when record schema is inherited, the schema from the 
ResultSet is used instead of the schema from the RecordReader because the 
schema from the RecordReader may not match the actual data




---


[jira] [Updated] (NIFI-4345) Add a controller service and a lookup service for MongoDB

2017-09-22 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4345:
---
Status: Patch Available  (was: Open)

> Add a controller service and a lookup service for MongoDB
> -
>
> Key: NIFI-4345
> URL: https://issues.apache.org/jira/browse/NIFI-4345
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>
> - Create a Controller Service that wraps the functionality of the Mongo 
> driver.
> - Create a lookup service that can return elements based on a query.
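The lookup-service half of the ticket can be sketched conceptually in plain Java (this is not the NiFi ControllerService/LookupService API, and not the Mongo driver; all names below are illustrative): a lookup takes a set of query coordinates and returns the first matching document.

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Conceptual sketch only: a lookup service matches query coordinates
// against documents, which is what a MongoDB-backed lookup service would
// do against a collection via the Mongo driver.
public class LookupSketch {

    private final List<Map<String, Object>> collection;

    LookupSketch(List<Map<String, Object>> collection) {
        this.collection = collection;
    }

    // Return the first document whose fields match every coordinate.
    Optional<Map<String, Object>> lookup(Map<String, Object> coordinates) {
        return collection.stream()
                .filter(doc -> coordinates.entrySet().stream()
                        .allMatch(e -> e.getValue().equals(doc.get(e.getKey()))))
                .findFirst();
    }

    public static void main(String[] args) {
        LookupSketch svc = new LookupSketch(List.of(
                Map.of("user", "alice", "role", "admin"),
                Map.of("user", "bob", "role", "dev")));
        System.out.println(svc.lookup(Map.of("user", "bob")).get().get("role")); // dev
    }
}
```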



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (NIFI-3915) Add Kerberos support to the Cassandra processors

2017-09-22 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reassigned NIFI-3915:
--

Assignee: (was: Matt Burgess)

> Add Kerberos support to the Cassandra processors
> 
>
> Key: NIFI-3915
> URL: https://issues.apache.org/jira/browse/NIFI-3915
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.1.0, 1.1.1
> Environment: RHEL 7
>Reporter: RAVINDRA
>
> Currently we use the PutCassandraQL processor to persist data into Cassandra.
> We have a requirement to kerberize the Cassandra cluster.
> Since PutCassandraQL does not support Kerberos, we are having issues 
> integrating Cassandra from NiFi.





[jira] [Commented] (NIFI-4297) Immediately actionable dependency upgrades

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176531#comment-16176531
 ] 

ASF GitHub Bot commented on NIFI-4297:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2084
  
@alopresto I believe the exclusions for the Solr processors are incorrect: 
`jackson-core` has been excluded but not explicitly added back. Also, I noticed 
that the change to `jackson-annotations` made in the root pom affects the 
version of `jackson-annotations` that `solr-solrj` brings in transitively. It 
appears that prior to this PR it was also being overridden (2.6.1 vs 2.5.4). 
Because the use of Jackson is so widespread, and because of how dependency 
management handles transitive dependencies, I'm wondering if it makes sense to 
remove `jackson-annotations` and allow individual modules to pull it in using 
the `jackson.version` property introduced here. Thoughts?
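The suggestion above amounts to a small pom change per module; a hypothetical fragment (not taken from the PR) of what an individual module would declare, using the `jackson.version` property, might look like:

```xml
<!-- Hypothetical sketch: drop jackson-annotations from the root
     dependencyManagement and let each module pin it itself via the
     jackson.version property introduced in the PR. -->
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-annotations</artifactId>
    <version>${jackson.version}</version>
</dependency>
```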


> Immediately actionable dependency upgrades
> --
>
> Key: NIFI-4297
> URL: https://issues.apache.org/jira/browse/NIFI-4297
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>  Labels: dependencies, security
>
> The immediately actionable items are:
> * {{org.apache.logging.log4j:log4j-core}} in {{nifi-storm-spout}} 2.1 -> 2.8.2
> * {{org.apache.poi:poi}} in {{nifi-email-processors}} 3.14 -> 3.15
> * {{org.apache.logging.log4j:log4j-core}} in 
> {{nifi-elasticsearch-5-processors}} 2.7 -> 2.8.2
> * {{org.springframework:spring-web}} in {{nifi-jetty}} 4.2.4.RELEASE -> 
> 4.3.10.RELEASE
> * {{org.springframework:spring-web}} in {{nifi-jetty}} 4.2.4.RELEASE -> 
> 4.3.10.RELEASE
> * {{org.apache.derby:derby}} in {{nifi-kite-processors}} 10.11.1.1 -> 
> 10.12.1.1 (already excluded)
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-azure-processors}} 
> 2.6.0 -> 2.8.6
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-expression-language}} 
> 2.6.1 -> 2.8.6
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-standard-utils}} 
> 2.6.2 -> 2.8.6
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-hwx-schema-registry}} 
> 2.7.3 -> 2.8.6
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-solr-processors}} 
> 2.5.4 -> 2.8.6





[GitHub] nifi issue #2084: NIFI-4297 Updated dependency versions

2017-09-22 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2084
  
@alopresto I believe the exclusions for the Solr processors are incorrect: 
`jackson-core` has been excluded but not explicitly added back. Also, I noticed 
that the change to `jackson-annotations` made in the root pom affects the 
version of `jackson-annotations` that `solr-solrj` brings in transitively. It 
appears that prior to this PR it was also being overridden (2.6.1 vs 2.5.4). 
Because the use of Jackson is so widespread, and because of how dependency 
management handles transitive dependencies, I'm wondering if it makes sense to 
remove `jackson-annotations` and allow individual modules to pull it in using 
the `jackson.version` property introduced here. Thoughts?


---


[jira] [Commented] (NIFI-4345) Add a controller service and a lookup service for MongoDB

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176521#comment-16176521
 ] 

ASF GitHub Bot commented on NIFI-4345:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2123
  
@MikeThomsen I have reviewed the updated PR. It looks like all of my 
concerns are addressed. Thanks for the new iteration!  I'm a +1 as long as 
@mattyb149 's concerns are all addressed. Thanks!


> Add a controller service and a lookup service for MongoDB
> -
>
> Key: NIFI-4345
> URL: https://issues.apache.org/jira/browse/NIFI-4345
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>
> - Create a Controller Service that wraps the functionality of the Mongo 
> driver.
> - Create a lookup service that can return elements based on a query.





[GitHub] nifi issue #2123: NIFI-4345 Added a MongoDB controller service and a lookup ...

2017-09-22 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2123
  
@MikeThomsen I have reviewed the updated PR. It looks like all of my 
concerns are addressed. Thanks for the new iteration!  I'm a +1 as long as 
@mattyb149 's concerns are all addressed. Thanks!


---


[jira] [Commented] (MINIFICPP-67) Mergecontent processor for minifi-cpp

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-67?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176498#comment-16176498
 ] 

ASF GitHub Bot commented on MINIFICPP-67:
-

Github user phrocker commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/133
  
@minifirocks thanks! I'll merge to master at some point today


> Mergecontent processor for minifi-cpp
> -
>
> Key: MINIFICPP-67
> URL: https://issues.apache.org/jira/browse/MINIFICPP-67
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Karthik Narayanan
>
> A simpler processor than the NiFi MergeContent processor. It should support 
> at least binary concatenation. It will basically allow a flow running in 
> MiNiFi to group several events at a time and send them to NiFi, to better 
> utilize the network bandwidth.
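The "binary concatenation" strategy the ticket asks for can be illustrated with a minimal, language-agnostic sketch (shown in Java here for readability; the actual processor is C++ and this is not its implementation): several small event payloads are packed into one buffer before being sent on, trading a little latency for fewer network round trips.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.List;

// Minimal illustration of a binary-concatenation merge: append each event's
// bytes back-to-back with no framing or delimiter.
public class BinaryConcatSketch {

    static byte[] concatenate(List<byte[]> events) throws IOException {
        ByteArrayOutputStream merged = new ByteArrayOutputStream();
        for (byte[] event : events) {
            merged.write(event); // straight byte-for-byte append
        }
        return merged.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] out = concatenate(List.of("foo".getBytes(), "bar".getBytes()));
        System.out.println(new String(out)); // foobar
    }
}
```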





[GitHub] nifi-minifi-cpp issue #133: MINIFICPP-67: Merge Content processor

2017-09-22 Thread phrocker
Github user phrocker commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/133
  
@minifirocks thanks! I'll merge to master at some point today


---


[jira] [Commented] (MINIFICPP-67) Mergecontent processor for minifi-cpp

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-67?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176490#comment-16176490
 ] 

ASF GitHub Bot commented on MINIFICPP-67:
-

Github user minifirocks commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/133
  
@phrocker @apiri please take a look and see whether you can merge the PR to 
master. I tested the site-to-site flow as described above.


> Mergecontent processor for minifi-cpp
> -
>
> Key: MINIFICPP-67
> URL: https://issues.apache.org/jira/browse/MINIFICPP-67
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Karthik Narayanan
>
> A simpler processor than the NiFi MergeContent processor. It should support 
> at least binary concatenation. It will basically allow a flow running in 
> MiNiFi to group several events at a time and send them to NiFi, to better 
> utilize the network bandwidth.





[GitHub] nifi-minifi-cpp issue #133: MINIFICPP-67: Merge Content processor

2017-09-22 Thread minifirocks
Github user minifirocks commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/133
  
@phrocker @apiri please take a look and see whether you can merge the PR to 
master. I tested the site-to-site flow as described above.


---


[jira] [Commented] (NIFI-4360) Add support for Azure Data Lake Store (ADLS)

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176476#comment-16176476
 ] 

ASF GitHub Bot commented on NIFI-4360:
--

Github user milanchandna commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2158#discussion_r140503820
  
--- Diff: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/test/java/org/apache/nifi/processors/adls/TestPutADLSFile.java
 ---
@@ -0,0 +1,522 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.adls;
+
+import com.microsoft.azure.datalake.store.ADLStoreClient;
+import com.microsoft.azure.datalake.store.ADLStoreOptions;
+import okhttp3.mockwebserver.MockResponse;
+import okhttp3.mockwebserver.MockWebServer;
+import okhttp3.mockwebserver.QueueDispatcher;
+import okhttp3.mockwebserver.RecordedRequest;
+import org.apache.nifi.processor.Processor;
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.hamcrest.CoreMatchers;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.file.Paths;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+
+public class TestPutADLSFile {
+
+private MockWebServer server;
+private ADLStoreClient client;
+private Processor processor;
+private TestRunner runner;
+Map<String, String> fileAttributes;
+
+@Before
+public void init() throws IOException, InitializationException {
+server = new MockWebServer();
+QueueDispatcher dispatcher = new QueueDispatcher();
+dispatcher.setFailFast(new MockResponse().setResponseCode(400));
+server.setDispatcher(dispatcher);
+server.start();
+String accountFQDN = server.getHostName() + ":" + server.getPort();
+String dummyToken = "testDummyAadToken";
+
+client = ADLStoreClient.createClient(accountFQDN, dummyToken);
+client.setOptions(new ADLStoreOptions().setInsecureTransport());
+
+processor = new PutADLSWithMockClient(client);
+
+runner = TestRunners.newTestRunner(processor);
+
+runner.setProperty(ADLSConstants.ACCOUNT_NAME, accountFQDN);
+runner.setProperty(ADLSConstants.CLIENT_ID, "foobar");
+runner.setProperty(ADLSConstants.CLIENT_SECRET, "foobar");
+runner.setProperty(ADLSConstants.AUTH_TOKEN_ENDPOINT, "foobar");
+runner.setProperty(PutADLSFile.DIRECTORY, "/sample/");
+runner.setProperty(PutADLSFile.CONFLICT_RESOLUTION, 
PutADLSFile.FAIL_RESOLUTION_AV);
+runner.removeProperty(PutADLSFile.UMASK);
+runner.removeProperty(PutADLSFile.ACL);
+
+fileAttributes = new HashMap<>();
+fileAttributes.put("filename", "sample.txt");
+fileAttributes.put("path", "/root/");
+}
+
+@Test
+public void testPutConflictFail() throws IOException, 
InterruptedException {
+
+//for create call
+server.enqueue(new MockResponse().setResponseCode(200));
+//for append call(internal call while writing to stream)
+server.enqueue(new MockResponse().setResponseCode(200));
+//for rename call(since its fail conflict)
+server.enqueue(new 
MockResponse().setResponseCode(200).setBody("{\"boolean\":true}"));
+//for delete call(removing temp file)
+server.enqueue(new MockResponse().setResponseCode(200));
+
+//String bodySimpleFileContent = new 
String(Files.readAllBytes(Paths.get(bodySimpleFileName)));
--- End diff --

yes, removed.


> Add support for Azure Data Lake Store 

[jira] [Commented] (NIFI-4360) Add support for Azure Data Lake Store (ADLS)

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176475#comment-16176475
 ] 

ASF GitHub Bot commented on NIFI-4360:
--

Github user milanchandna commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2158#discussion_r140503615
  
--- Diff: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/adls/PutADLSFile.java
 ---
@@ -0,0 +1,267 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.adls;
+
+import com.microsoft.azure.datalake.store.ADLFileOutputStream;
+import com.microsoft.azure.datalake.store.ADLStoreClient;
+import com.microsoft.azure.datalake.store.IfExists;
+import com.microsoft.azure.datalake.store.acl.AclEntry;
+import org.apache.nifi.annotation.behavior.*;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.StopWatch;
+import org.apache.nifi.util.StringUtils;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.nifi.processors.adls.ADLSConstants.*;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hadoop", "ADLS", "put", "copy", "filesystem", "restricted"})
+@CapabilityDescription("Write FlowFile data to Azure Data Lake Store 
(ADLS)")
+@ReadsAttributes({
+@ReadsAttribute(attribute = "filename", description = "The name of 
the file written to ADLS comes from the value of this attribute."),
+@ReadsAttribute(attribute = "filepath", description = "The 
relative path to the file on ADLS comes from the value of this attribute.")
+})
+@WritesAttributes({
+@WritesAttribute(attribute = "filename", description = "The name 
of the file written to ADLS is stored in this attribute."),
+@WritesAttribute(attribute = "absolute.adls.path", description = 
"The absolute path to the file on ADLS is stored in this attribute.")
+})
+@Restricted("Provides operator the ability to write to any file that NiFi 
has access to in ADLS.")
+public class PutADLSFile extends ADLSAbstractProcessor {
+
+private static final String PART_FILE_EXTENSION = ".nifipart";
+private static final String REPLACE_RESOLUTION = "replace";
+private static final String FAIL_RESOLUTION = "fail";
+private static final String APPEND_RESOLUTION = "append";
+
+protected static final AllowableValue REPLACE_RESOLUTION_AV = new 
AllowableValue(REPLACE_RESOLUTION,
+REPLACE_RESOLUTION, "Replaces the existing file if any.");
+protected static final AllowableValue FAIL_RESOLUTION_AV = new 
AllowableValue(FAIL_RESOLUTION, FAIL_RESOLUTION,
+"Penalizes the flow file and routes it to failure.");
+protected static final AllowableValue APPEND_RESOLUTION_AV = new 
AllowableValue(APPEND_RESOLUTION, APPEND_RESOLUTION,
+"Appends to the existing file if any, creates a new file 
otherwise.");
+
+// properties
+protected static final PropertyDescriptor CONFLICT_RESOLUTION = new 
PropertyDescriptor.Builder()
+

[GitHub] nifi pull request #2158: NIFI-4360 Adding support for ADLS Processors. Featu...

2017-09-22 Thread milanchandna
Github user milanchandna commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2158#discussion_r140503820
  
--- Diff: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/test/java/org/apache/nifi/processors/adls/TestPutADLSFile.java
 ---
@@ -0,0 +1,522 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.adls;
+
+import com.microsoft.azure.datalake.store.ADLStoreClient;
+import com.microsoft.azure.datalake.store.ADLStoreOptions;
+import okhttp3.mockwebserver.MockResponse;
+import okhttp3.mockwebserver.MockWebServer;
+import okhttp3.mockwebserver.QueueDispatcher;
+import okhttp3.mockwebserver.RecordedRequest;
+import org.apache.nifi.processor.Processor;
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.hamcrest.CoreMatchers;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.file.Paths;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+
+public class TestPutADLSFile {
+
+private MockWebServer server;
+private ADLStoreClient client;
+private Processor processor;
+private TestRunner runner;
+Map<String, String> fileAttributes;
+
+@Before
+public void init() throws IOException, InitializationException {
+server = new MockWebServer();
+QueueDispatcher dispatcher = new QueueDispatcher();
+dispatcher.setFailFast(new MockResponse().setResponseCode(400));
+server.setDispatcher(dispatcher);
+server.start();
+String accountFQDN = server.getHostName() + ":" + server.getPort();
+String dummyToken = "testDummyAadToken";
+
+client = ADLStoreClient.createClient(accountFQDN, dummyToken);
+client.setOptions(new ADLStoreOptions().setInsecureTransport());
+
+processor = new PutADLSWithMockClient(client);
+
+runner = TestRunners.newTestRunner(processor);
+
+runner.setProperty(ADLSConstants.ACCOUNT_NAME, accountFQDN);
+runner.setProperty(ADLSConstants.CLIENT_ID, "foobar");
+runner.setProperty(ADLSConstants.CLIENT_SECRET, "foobar");
+runner.setProperty(ADLSConstants.AUTH_TOKEN_ENDPOINT, "foobar");
+runner.setProperty(PutADLSFile.DIRECTORY, "/sample/");
+runner.setProperty(PutADLSFile.CONFLICT_RESOLUTION, 
PutADLSFile.FAIL_RESOLUTION_AV);
+runner.removeProperty(PutADLSFile.UMASK);
+runner.removeProperty(PutADLSFile.ACL);
+
+fileAttributes = new HashMap<>();
+fileAttributes.put("filename", "sample.txt");
+fileAttributes.put("path", "/root/");
+}
+
+@Test
+public void testPutConflictFail() throws IOException, 
InterruptedException {
+
+//for create call
+server.enqueue(new MockResponse().setResponseCode(200));
+//for append call(internal call while writing to stream)
+server.enqueue(new MockResponse().setResponseCode(200));
+//for rename call(since its fail conflict)
+server.enqueue(new 
MockResponse().setResponseCode(200).setBody("{\"boolean\":true}"));
+//for delete call(removing temp file)
+server.enqueue(new MockResponse().setResponseCode(200));
+
+//String bodySimpleFileContent = new 
String(Files.readAllBytes(Paths.get(bodySimpleFileName)));
--- End diff --

yes, removed.


---


[GitHub] nifi pull request #2158: NIFI-4360 Adding support for ADLS Processors. Featu...

2017-09-22 Thread milanchandna
Github user milanchandna commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2158#discussion_r140503615
  
--- Diff: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/adls/PutADLSFile.java
 ---
@@ -0,0 +1,267 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.adls;
+
+import com.microsoft.azure.datalake.store.ADLFileOutputStream;
+import com.microsoft.azure.datalake.store.ADLStoreClient;
+import com.microsoft.azure.datalake.store.IfExists;
+import com.microsoft.azure.datalake.store.acl.AclEntry;
+import org.apache.nifi.annotation.behavior.*;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.StopWatch;
+import org.apache.nifi.util.StringUtils;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.nifi.processors.adls.ADLSConstants.*;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@Tags({"hadoop", "ADLS", "put", "copy", "filesystem", "restricted"})
+@CapabilityDescription("Write FlowFile data to Azure Data Lake Store 
(ADLS)")
+@ReadsAttributes({
+@ReadsAttribute(attribute = "filename", description = "The name of 
the file written to ADLS comes from the value of this attribute."),
+@ReadsAttribute(attribute = "filepath", description = "The 
relative path to the file on ADLS comes from the value of this attribute.")
+})
+@WritesAttributes({
+@WritesAttribute(attribute = "filename", description = "The name 
of the file written to ADLS is stored in this attribute."),
+@WritesAttribute(attribute = "absolute.adls.path", description = 
"The absolute path to the file on ADLS is stored in this attribute.")
+})
+@Restricted("Provides operator the ability to write to any file that NiFi 
has access to in ADLS.")
+public class PutADLSFile extends ADLSAbstractProcessor {
+
+private static final String PART_FILE_EXTENSION = ".nifipart";
+private static final String REPLACE_RESOLUTION = "replace";
+private static final String FAIL_RESOLUTION = "fail";
+private static final String APPEND_RESOLUTION = "append";
+
+protected static final AllowableValue REPLACE_RESOLUTION_AV = new 
AllowableValue(REPLACE_RESOLUTION,
+REPLACE_RESOLUTION, "Replaces the existing file if any.");
+protected static final AllowableValue FAIL_RESOLUTION_AV = new 
AllowableValue(FAIL_RESOLUTION, FAIL_RESOLUTION,
+"Penalizes the flow file and routes it to failure.");
+protected static final AllowableValue APPEND_RESOLUTION_AV = new 
AllowableValue(APPEND_RESOLUTION, APPEND_RESOLUTION,
+"Appends to the existing file if any, creates a new file 
otherwise.");
+
+// properties
+protected static final PropertyDescriptor CONFLICT_RESOLUTION = new 
PropertyDescriptor.Builder()
+.name("Conflict Resolution Strategy")
+.description("Indicates what should happen when a file " +
+"with the same name already exists in the output 
directory")
+.required(true)

[jira] [Commented] (NIFI-4360) Add support for Azure Data Lake Store (ADLS)

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176472#comment-16176472
 ] 

ASF GitHub Bot commented on NIFI-4360:
--

Github user milanchandna commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2158#discussion_r140503030
  
--- Diff: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/adls/ADLSConstants.java
 ---
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.adls;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+
+
+public class ADLSConstants {
+
+public static final int CHUNK_SIZE_IN_BYTES = 400;
+
+public static final PropertyDescriptor PATH_NAME = new 
PropertyDescriptor.Builder()
+.name("Path")
+.description("Path for file in Azure Data Lake, e.g. 
/adlshome/")
+.required(true)
+.expressionLanguageSupported(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor RETRY_ON_FAIL = new 
PropertyDescriptor.Builder()
+.name("Overwrite policy")
+.description("How many times to retry if read fails per chunk 
read, defaults to 3")
+.required(true)
+//TODO add Integer validator
+.defaultValue("3")
+.build();
+
+public static final PropertyDescriptor ACCOUNT_NAME = new 
PropertyDescriptor.Builder()
+.name("Account FQDN")
+.description("Azure account fully qualified domain name eg: 
accountname.azuredatalakestore.net")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CLIENT_ID = new 
PropertyDescriptor.Builder()
+.name("Client ID")
+.description("Azure client ID")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CLIENT_SECRET = new 
PropertyDescriptor.Builder()
+.name("Client secret")
+.description("Azure client secret")
+.required(true)
+.sensitive(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor AUTH_TOKEN_ENDPOINT = new 
PropertyDescriptor.Builder()
+.name("Auth token endpoint")
+.description("Azure client secret")
--- End diff --

Removed it, wasn't being used.


> Add support for Azure Data Lake Store (ADLS)
> 
>
> Key: NIFI-4360
> URL: https://issues.apache.org/jira/browse/NIFI-4360
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Milan Chandna
>Assignee: Milan Chandna
>  Labels: adls, azure, hdfs
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently ingress and egress in ADLS account is possible only using HDFS 
> processors.
> Opening this feature to support separate processors for interaction with ADLS 
> accounts directly.
> Benefits are many like 
> - simple configuration.
> - Helping users not familiar with HDFS 
> - Helping users who currently are accessing ADLS accounts directly.
> - using the ADLS SDK rather than HDFS client, one lesser layer to go through.
> Can be achieved by adding separate ADLS processors.
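The `RETRY_ON_FAIL` property quoted in the diff above ("how many times to retry if read fails per chunk read") implies a per-chunk retry loop; a generic sketch of that pattern (illustrative names, not the processor's actual implementation) could look like:

```java
import java.util.concurrent.Callable;

// Generic per-chunk retry sketch: attempt the read, and on failure retry
// the same chunk up to maxRetries additional times before giving up.
public class RetrySketch {

    static <T> T withRetries(Callable<T> chunkRead, int maxRetries) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return chunkRead.call();
            } catch (Exception e) {
                last = e; // remember the failure and retry this chunk
            }
        }
        throw last; // all attempts exhausted
    }

    public static void main(String[] args) throws Exception {
        int[] failures = {2}; // fail twice, then succeed
        String result = withRetries(() -> {
            if (failures[0]-- > 0) throw new RuntimeException("transient read error");
            return "chunk-ok";
        }, 3);
        System.out.println(result); // chunk-ok
    }
}
```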





[GitHub] nifi pull request #2158: NIFI-4360 Adding support for ADLS Processors. Featu...

2017-09-22 Thread milanchandna
Github user milanchandna commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2158#discussion_r140503030
  
--- Diff: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/adls/ADLSConstants.java
 ---
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.adls;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+
+
+public class ADLSConstants {
+
+public static final int CHUNK_SIZE_IN_BYTES = 400;
+
+public static final PropertyDescriptor PATH_NAME = new 
PropertyDescriptor.Builder()
+.name("Path")
+.description("Path for file in Azure Data Lake, e.g. 
/adlshome/")
+.required(true)
+.expressionLanguageSupported(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor RETRY_ON_FAIL = new 
PropertyDescriptor.Builder()
+.name("Overwrite policy")
+.description("How many times to retry if read fails per chunk 
read, defaults to 3")
+.required(true)
+//TODO add Integer validator
+.defaultValue("3")
+.build();
+
+public static final PropertyDescriptor ACCOUNT_NAME = new 
PropertyDescriptor.Builder()
+.name("Account FQDN")
+.description("Azure account fully qualified domain name eg: 
accountname.azuredatalakestore.net")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CLIENT_ID = new 
PropertyDescriptor.Builder()
+.name("Client ID")
+.description("Azure client ID")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor CLIENT_SECRET = new 
PropertyDescriptor.Builder()
+.name("Client secret")
+.description("Azure client secret")
+.required(true)
+.sensitive(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor AUTH_TOKEN_ENDPOINT = new 
PropertyDescriptor.Builder()
+.name("Auth token endpoint")
+.description("Azure client secret")
--- End diff --

Removed it, wasn't being used.


---


[GitHub] nifi pull request #2158: NIFI-4360 Adding support for ADLS Processors. Featu...

2017-09-22 Thread milanchandna
Github user milanchandna commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2158#discussion_r140502906
  
--- Diff: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/adls/ADLSConstants.java
 ---
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.adls;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+
+
+public class ADLSConstants {
+
+public static final int CHUNK_SIZE_IN_BYTES = 400;
+
+public static final PropertyDescriptor PATH_NAME = new 
PropertyDescriptor.Builder()
+.name("Path")
--- End diff --

Used name and displayname properly wherever applicable.


---


[jira] [Created] (NIFI-4409) QueryRecord should pass to the Record Writer the schema based on the ResultSet, not the record as it was read

2017-09-22 Thread Mark Payne (JIRA)
Mark Payne created NIFI-4409:


 Summary: QueryRecord should pass to the Record Writer the schema 
based on the ResultSet, not the record as it was read
 Key: NIFI-4409
 URL: https://issues.apache.org/jira/browse/NIFI-4409
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Mark Payne
Assignee: Mark Payne


When a Record is read for QueryRecord, the schema is passed to the writer as 
the 'read schema'. However, the resultant data is likely not to follow that 
schema because of the SELECT clause. We should instead provide the schema that 
is provided by the ResultSet to the writer.





[jira] [Commented] (NIFI-4297) Immediately actionable dependency upgrades

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176360#comment-16176360
 ] 

ASF GitHub Bot commented on NIFI-4297:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2084
  
@joewitt Thanks for reviewing the L&N concerns...

@alopresto Reviewing...


> Immediately actionable dependency upgrades
> --
>
> Key: NIFI-4297
> URL: https://issues.apache.org/jira/browse/NIFI-4297
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>  Labels: dependencies, security
>
> The immediately actionable items are:
> * {{org.apache.logging.log4j:log4j-core}} in {{nifi-storm-spout}} 2.1 -> 2.8.2
> * {{org.apache.poi:poi}} in {{nifi-email-processors}} 3.14 -> 3.15
> * {{org.apache.logging.log4j:log4j-core}} in 
> {{nifi-elasticsearch-5-processors}} 2.7 -> 2.8.2
> * {{org.springframework:spring-web}} in {{nifi-jetty}} 4.2.4.RELEASE -> 
> 4.3.10.RELEASE
> * {{org.springframework:spring-web}} in {{nifi-jetty}} 4.2.4.RELEASE -> 
> 4.3.10.RELEASE
> * {{org.apache.derby:derby}} in {{nifi-kite-processors}} 10.11.1.1 -> 
> 10.12.1.1 (already excluded)
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-azure-processors}} 
> 2.6.0 -> 2.8.6
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-expression-language}} 
> 2.6.1 -> 2.8.6
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-standard-utils}} 
> 2.6.2 -> 2.8.6
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-hwx-schema-registry}} 
> 2.7.3 -> 2.8.6
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-solr-processors}} 
> 2.5.4 -> 2.8.6





[GitHub] nifi issue #2084: NIFI-4297 Updated dependency versions

2017-09-22 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2084
  
@joewitt Thanks for reviewing the L&N concerns...

@alopresto Reviewing...


---


[jira] [Commented] (NIFI-3116) Remove Jasypt library

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176359#comment-16176359
 ] 

ASF GitHub Bot commented on NIFI-3116:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2108
  
@alopresto test failures

Tests run: 10, Failures: 0, Errors: 3, Skipped: 4, Time elapsed: 1.727 sec 
<<< FAILURE! - in org.apache.nifi.encrypt.StringEncryptorTest

testPBEncryptionShouldBeInternallyConsistent(org.apache.nifi.encrypt.StringEncryptorTest)
  Time elapsed: 0.046 sec  <<< ERROR!
org.apache.nifi.encrypt.EncryptionException: 
org.apache.nifi.encrypt.EncryptionException: Could not encrypt sensitive value
at javax.crypto.Cipher.checkCryptoPerm(Cipher.java:1039)
at javax.crypto.Cipher.init(Cipher.java:1393)
at javax.crypto.Cipher.init(Cipher.java:1327)
at 
org.apache.nifi.security.util.crypto.OpenSSLPKCS5CipherProvider.getInitializedCipher(OpenSSLPKCS5CipherProvider.java:127)
at 
org.apache.nifi.security.util.crypto.NiFiLegacyCipherProvider.getCipher(NiFiLegacyCipherProvider.java:62)
at 
org.apache.nifi.encrypt.StringEncryptor.encryptPBE(StringEncryptor.java:309)
at 
org.apache.nifi.encrypt.StringEncryptor.encrypt(StringEncryptor.java:278)
at org.apache.nifi.encrypt.StringEncryptor$encrypt$0.call(Unknown 
Source)
at 
org.apache.nifi.encrypt.StringEncryptorTest.testPBEncryptionShouldBeInternallyConsistent(StringEncryptorTest.groovy:140)


testPBEncryptionShouldBeExternallyConsistent(org.apache.nifi.encrypt.StringEncryptorTest)
  Time elapsed: 0.014 sec  <<< ERROR!
java.security.InvalidKeyException: Illegal key size
at javax.crypto.Cipher.checkCryptoPerm(Cipher.java:1039)
at javax.crypto.Cipher.init(Cipher.java:1393)
at javax.crypto.Cipher.init(Cipher.java:1327)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoCachedMethodSite.invoke(PojoMetaMethodSite.java:192)
at 
org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:56)
at 
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:141)
at 
org.apache.nifi.encrypt.StringEncryptorTest.generatePBECipher(StringEncryptorTest.groovy:115)


testPBEncryptionShouldBeConsistentWithLegacyEncryption(org.apache.nifi.encrypt.StringEncryptorTest)
  Time elapsed: 1.631 sec  <<< ERROR!
org.jasypt.exceptions.EncryptionOperationNotPossibleException: Encryption 
raised an exception. A possible cause is you are using strong encryption 
algorithms and you have not installed the Java Cryptography Extension (JCE) 
Unlimited Strength Jurisdiction Policy Files in this Java Virtual Machine
at 
org.jasypt.encryption.pbe.StandardPBEByteEncryptor.handleInvalidKeyException(StandardPBEByteEncryptor.java:1073)
at 
org.jasypt.encryption.pbe.StandardPBEByteEncryptor.encrypt(StandardPBEByteEncryptor.java:924)
at 
org.jasypt.encryption.pbe.StandardPBEStringEncryptor.encrypt(StandardPBEStringEncryptor.java:642)
at org.jasypt.encryption.StringEncryptor$encrypt.call(Unknown Source)
at 
org.apache.nifi.encrypt.StringEncryptorTest.testPBEncryptionShouldBeConsistentWithLegacyEncryption(StringEncryptorTest.groovy:228)



> Remove Jasypt library
> -
>
> Key: NIFI-3116
> URL: https://issues.apache.org/jira/browse/NIFI-3116
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.1.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>  Labels: encryption, kdf, pbe, security
>
> The [Jasypt|http://www.jasypt.org/index.html] library is used internally by 
> NiFi for String encryption operations (specifically password-based encryption 
> (PBE) in {{EncryptContent}} and sensitive processor property protection). I 
> feel there are a number of reasons to remove this library from NiFi and 
> provide centralized symmetric encryption operations using Java cryptographic 
> primitives (and BouncyCastle features where necessary). 
> * The library was last updated February 25, 2014. For comparison, 
> BouncyCastle has been [updated 5 
> times|https://www.bouncycastle.org/releasenotes.html] since then
> * {{StandardPBEStringEncryptor}}, the high-level class wrapped by NiFi's 
> {{StringEncryptor}} is final. This makes it, and features relying on it, 
> difficult to test in isolation
> * Jasypt encapsulates many decisions about 

[GitHub] nifi issue #2108: NIFI-3116 Remove Jasypt

2017-09-22 Thread joewitt
Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2108
  
@alopresto test failures

Tests run: 10, Failures: 0, Errors: 3, Skipped: 4, Time elapsed: 1.727 sec 
<<< FAILURE! - in org.apache.nifi.encrypt.StringEncryptorTest

testPBEncryptionShouldBeInternallyConsistent(org.apache.nifi.encrypt.StringEncryptorTest)
  Time elapsed: 0.046 sec  <<< ERROR!
org.apache.nifi.encrypt.EncryptionException: 
org.apache.nifi.encrypt.EncryptionException: Could not encrypt sensitive value
at javax.crypto.Cipher.checkCryptoPerm(Cipher.java:1039)
at javax.crypto.Cipher.init(Cipher.java:1393)
at javax.crypto.Cipher.init(Cipher.java:1327)
at 
org.apache.nifi.security.util.crypto.OpenSSLPKCS5CipherProvider.getInitializedCipher(OpenSSLPKCS5CipherProvider.java:127)
at 
org.apache.nifi.security.util.crypto.NiFiLegacyCipherProvider.getCipher(NiFiLegacyCipherProvider.java:62)
at 
org.apache.nifi.encrypt.StringEncryptor.encryptPBE(StringEncryptor.java:309)
at 
org.apache.nifi.encrypt.StringEncryptor.encrypt(StringEncryptor.java:278)
at org.apache.nifi.encrypt.StringEncryptor$encrypt$0.call(Unknown 
Source)
at 
org.apache.nifi.encrypt.StringEncryptorTest.testPBEncryptionShouldBeInternallyConsistent(StringEncryptorTest.groovy:140)


testPBEncryptionShouldBeExternallyConsistent(org.apache.nifi.encrypt.StringEncryptorTest)
  Time elapsed: 0.014 sec  <<< ERROR!
java.security.InvalidKeyException: Illegal key size
at javax.crypto.Cipher.checkCryptoPerm(Cipher.java:1039)
at javax.crypto.Cipher.init(Cipher.java:1393)
at javax.crypto.Cipher.init(Cipher.java:1327)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoCachedMethodSite.invoke(PojoMetaMethodSite.java:192)
at 
org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:56)
at 
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:141)
at 
org.apache.nifi.encrypt.StringEncryptorTest.generatePBECipher(StringEncryptorTest.groovy:115)


testPBEncryptionShouldBeConsistentWithLegacyEncryption(org.apache.nifi.encrypt.StringEncryptorTest)
  Time elapsed: 1.631 sec  <<< ERROR!
org.jasypt.exceptions.EncryptionOperationNotPossibleException: Encryption 
raised an exception. A possible cause is you are using strong encryption 
algorithms and you have not installed the Java Cryptography Extension (JCE) 
Unlimited Strength Jurisdiction Policy Files in this Java Virtual Machine
at 
org.jasypt.encryption.pbe.StandardPBEByteEncryptor.handleInvalidKeyException(StandardPBEByteEncryptor.java:1073)
at 
org.jasypt.encryption.pbe.StandardPBEByteEncryptor.encrypt(StandardPBEByteEncryptor.java:924)
at 
org.jasypt.encryption.pbe.StandardPBEStringEncryptor.encrypt(StandardPBEStringEncryptor.java:642)
at org.jasypt.encryption.StringEncryptor$encrypt.call(Unknown Source)
at 
org.apache.nifi.encrypt.StringEncryptorTest.testPBEncryptionShouldBeConsistentWithLegacyEncryption(StringEncryptorTest.groovy:228)


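The "Illegal key size" and "Java Cryptography Extension (JCE) Unlimited Strength" errors in the stack traces above usually mean the JVM is running with the limited crypto policy. A quick, standalone way to check (a plain JDK sketch, not NiFi-specific code):

```java
import javax.crypto.Cipher;

public class JcePolicyCheck {
    public static void main(String[] args) throws Exception {
        // On JVMs without the unlimited-strength policy files this
        // reports 128 for AES, which is what triggers "Illegal key size"
        // when a test tries to use a 256-bit key.
        int max = Cipher.getMaxAllowedKeyLength("AES");
        System.out.println("Max allowed AES key length: " + max);
        System.out.println(max >= 256
                ? "Unlimited-strength policy is active"
                : "Limited policy - install the JCE unlimited policy files");
    }
}
```

Running this on the build machine quickly distinguishes an environment problem from a genuine test regression.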

---


[jira] [Commented] (NIFI-4297) Immediately actionable dependency upgrades

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176357#comment-16176357
 ] 

ASF GitHub Bot commented on NIFI-4297:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2084
  
@mcgilman @alopresto the lack of L&N on that is because we're not 
publishing a bundled artifact for it (like we do with nars).  We're simply 
putting the source up, or the binary of our external dependency, and then 
normal dep/maven magic takes over.  So we're OK as-is, and having a cat-b 
dependency is still fair game.  Once someone pulls in our library (which 
depends on that stuff) they would have to consider the EPL implications.  In 
short, I think we're good here for L&N based on Andy's above assessment.


> Immediately actionable dependency upgrades
> --
>
> Key: NIFI-4297
> URL: https://issues.apache.org/jira/browse/NIFI-4297
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>  Labels: dependencies, security
>
> The immediately actionable items are:
> * {{org.apache.logging.log4j:log4j-core}} in {{nifi-storm-spout}} 2.1 -> 2.8.2
> * {{org.apache.poi:poi}} in {{nifi-email-processors}} 3.14 -> 3.15
> * {{org.apache.logging.log4j:log4j-core}} in 
> {{nifi-elasticsearch-5-processors}} 2.7 -> 2.8.2
> * {{org.springframework:spring-web}} in {{nifi-jetty}} 4.2.4.RELEASE -> 
> 4.3.10.RELEASE
> * {{org.springframework:spring-web}} in {{nifi-jetty}} 4.2.4.RELEASE -> 
> 4.3.10.RELEASE
> * {{org.apache.derby:derby}} in {{nifi-kite-processors}} 10.11.1.1 -> 
> 10.12.1.1 (already excluded)
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-azure-processors}} 
> 2.6.0 -> 2.8.6
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-expression-language}} 
> 2.6.1 -> 2.8.6
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-standard-utils}} 
> 2.6.2 -> 2.8.6
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-hwx-schema-registry}} 
> 2.7.3 -> 2.8.6
> * {{com.fasterxml.jackson.core:jackson-core}} in {{nifi-solr-processors}} 
> 2.5.4 -> 2.8.6





[GitHub] nifi issue #2084: NIFI-4297 Updated dependency versions

2017-09-22 Thread joewitt
Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2084
  
@mcgilman @alopresto the lack of L&N on that is because we're not 
publishing a bundled artifact for it (like we do with nars).  We're simply 
putting the source up, or the binary of our external dependency, and then 
normal dep/maven magic takes over.  So we're OK as-is, and having a cat-b 
dependency is still fair game.  Once someone pulls in our library (which 
depends on that stuff) they would have to consider the EPL implications.  In 
short, I think we're good here for L&N based on Andy's above assessment.


---


[jira] [Created] (NIFI-4408) Add a filter() EL function

2017-09-22 Thread Pierre Villard (JIRA)
Pierre Villard created NIFI-4408:


 Summary: Add a filter() EL function
 Key: NIFI-4408
 URL: https://issues.apache.org/jira/browse/NIFI-4408
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Reporter: Pierre Villard


It could be useful to have a filter function. A use case would be:

Input
'attribute' = "bfoo;bfaa;cfuu;bfii"

Expected output
'attribute' = "prefix_bfoo,prefix_bfaa,prefix_bfii"

With something like:

{code}
${allDelineatedValues("${attribute}", ";"):filter("^b"):replaceAll("(.*)", 
"prefix$1"):join(",")}
{code}

The "filter" function would act like "matches" but would return the subject 
element only if it matches the regular expression. Alternatively, we could 
have a "filter" function that takes an EL expression as its argument to check 
a condition on each element. That would certainly be more useful, but it is 
not clear how each element would be passed to the expression language. 
Something like:

{code}
${allDelineatedValues("${attribute}", 
";"):filter("${_:matches("^b")}"):replaceAll("(.*)", "prefix$1"):join(",")}
{code}
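As a rough illustration of the proposed semantics, the first expression above could behave like the following plain-Java sketch (the `filterPrefixJoin` helper is hypothetical and only mimics the delineate/filter/replace/join chain; it is not the actual EL implementation):

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class FilterSketch {

    // Mimics: allDelineatedValues(attr, ";"):filter("^b")
    //           :replaceAll("(.*)", "prefix_$1"):join(",")
    static String filterPrefixJoin(String attribute) {
        return Arrays.stream(attribute.split(";"))
                // keep only elements matching the regex, like filter("^b")
                .filter(v -> v.matches("^b.*"))
                // prepend the prefix to each surviving element
                .map(v -> "prefix_" + v)
                // join the results back together, like join(",")
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        // → prefix_bfoo,prefix_bfaa,prefix_bfii
        System.out.println(filterPrefixJoin("bfoo;bfaa;cfuu;bfii"));
    }
}
```

This reproduces the expected output from the example above: elements not matching `^b` (here `cfuu`) are dropped rather than merely mapped to false.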






[jira] [Commented] (NIFI-4360) Add support for Azure Data Lake Store (ADLS)

2017-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176249#comment-16176249
 ] 

ASF GitHub Bot commented on NIFI-4360:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2158
  
@milanchandna - you can go in ``nifi-nar-bundles/nifi-azure-bundle`` and 
build from this directory ``mvn clean install -Pcontrib-check``


> Add support for Azure Data Lake Store (ADLS)
> 
>
> Key: NIFI-4360
> URL: https://issues.apache.org/jira/browse/NIFI-4360
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Milan Chandna
>Assignee: Milan Chandna
>  Labels: adls, azure, hdfs
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently, ingress and egress for an ADLS account are possible only using 
> HDFS processors.
> Opening this feature to support separate processors for interacting with ADLS 
> accounts directly.
> The benefits are many, for example:
> - simpler configuration.
> - helping users not familiar with HDFS.
> - helping users who currently access ADLS accounts directly.
> - using the ADLS SDK rather than the HDFS client, one less layer to go through.
> Can be achieved by adding separate ADLS processors.





[GitHub] nifi issue #2158: NIFI-4360 Adding support for ADLS Processors. Feature incl...

2017-09-22 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2158
  
@milanchandna - you can go in ``nifi-nar-bundles/nifi-azure-bundle`` and 
build from this directory ``mvn clean install -Pcontrib-check``


---

