[jira] [Commented] (NIFI-4516) Add FetchSolr processor

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434805#comment-16434805
 ] 

ASF GitHub Bot commented on NIFI-4516:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2517
  
Do this:

1. git checkout master
2. git pull upstream master (whatever you call github.com/apache/nifi 
master)
3. git checkout NIFI-4516
4. git rebase master
5. git push origin NIFI-4516 --force

I just built master and it didn't have that problem. Try a full rebuild 
with `mvn clean install -DskipTests=true` from the root folder.


> Add FetchSolr processor
> ---
>
> Key: NIFI-4516
> URL: https://issues.apache.org/jira/browse/NIFI-4516
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Johannes Peter
>Assignee: Johannes Peter
>Priority: Major
>  Labels: features
>
> The processor shall be capable of:
> * querying Solr within a workflow,
> * making use of standard Solr functionality such as faceting, 
> highlighting, result grouping, etc.,
> * making use of NiFi's expression language to build Solr queries, 
> * handling results as records.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Updated] (NIFI-5070) java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed

2018-04-11 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5070:
---
Affects Version/s: (was: 1.6.0)

> java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed
> --
>
> Key: NIFI-5070
> URL: https://issues.apache.org/jira/browse/NIFI-5070
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Gardella Juan Pablo
>Priority: Major
>
> Discovered during NIFI-5049. According to the [ResultSet.next() 
> javadoc|https://docs.oracle.com/javase/8/docs/api/java/sql/ResultSet.html#next%E2%80%93]:
> _When a call to the {{next}} method returns {{false}}, the cursor is 
> positioned after the last row. Any invocation of a {{ResultSet}} method which 
> requires a current row will result in a {{SQLException}} being thrown. If the 
> result set type is {{TYPE_FORWARD_ONLY}}, it is vendor specified whether 
> their JDBC driver implementation will return {{false}} or throw an 
> {{SQLException}} on a subsequent call to {{next}}._
> With Phoenix Database and QueryDatabaseTable the exception 
> {{java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed}} is raised.
>  
>  
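The defensive pattern suggested for this issue can be sketched as follows. This is an illustrative helper, not NiFi code; the class and method names are invented. The idea is to wrap the call to next() in its own try/catch so a driver that throws after the last row (as Phoenix does) is treated as end-of-data rather than as a fatal error.

```java
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical helper (not NiFi code) illustrating the suggested pattern:
// wrap the call to next() in its own try/catch so a driver that throws
// "ResultSet is closed" after the last row is treated as end-of-data.
class TolerantResultSetReader {
    static int countRows(ResultSet rs) throws SQLException {
        int rows = 0;
        while (true) {
            boolean hasNext;
            try {
                hasNext = rs.next();
            } catch (SQLException e) {
                // Phoenix raises ERROR 1101 (XCL01) here instead of
                // returning false; treat it as "no more rows".
                break;
            }
            if (!hasNext) {
                break;
            }
            rows++; // a real processor would read the current row here
        }
        return rows;
    }
}
```

Any SQLException thrown by row processing itself would still propagate, since only the next() call sits inside the tolerant try/catch.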





[jira] [Updated] (NIFI-5070) java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed

2018-04-11 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5070:
---
Status: Patch Available  (was: Open)

> java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed
> --
>
> Key: NIFI-5070
> URL: https://issues.apache.org/jira/browse/NIFI-5070
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.6.0
>Reporter: Gardella Juan Pablo
>Priority: Major
>
> Discovered during NIFI-5049. According to the [ResultSet.next() 
> javadoc|https://docs.oracle.com/javase/8/docs/api/java/sql/ResultSet.html#next%E2%80%93]:
> _When a call to the {{next}} method returns {{false}}, the cursor is 
> positioned after the last row. Any invocation of a {{ResultSet}} method which 
> requires a current row will result in a {{SQLException}} being thrown. If the 
> result set type is {{TYPE_FORWARD_ONLY}}, it is vendor specified whether 
> their JDBC driver implementation will return {{false}} or throw an 
> {{SQLException}} on a subsequent call to {{next}}._
> With Phoenix Database and QueryDatabaseTable the exception 
> {{java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed}} is raised.
>  
>  





[jira] [Commented] (NIFI-4975) Add support for MongoDB GridFS

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434776#comment-16434776
 ] 

ASF GitHub Bot commented on NIFI-4975:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2546#discussion_r180938216
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/gridfs/DeleteGridFS.java
 ---
@@ -0,0 +1,161 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.nifi.processors.mongodb.gridfs;
+
+import com.mongodb.client.MongoCursor;
+import com.mongodb.client.gridfs.GridFSBucket;
+import com.mongodb.client.gridfs.model.GridFSFile;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.StringUtils;
+import org.bson.Document;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+@CapabilityDescription("Deletes a file from GridFS using a file name or a query.")
+@Tags({"gridfs", "delete", "mongodb"})
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+public class DeleteGridFS extends AbstractGridFSProcessor {
+    static final PropertyDescriptor QUERY = new PropertyDescriptor.Builder()
+            .name("delete-gridfs-query")
+            .displayName("Query")
+            .description("A valid MongoDB query to use to find and delete one or more files from GridFS.")
+            .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+            .addValidator(DOCUMENT_VALIDATOR)
+            .required(false)
+            .build();
+
+    static final PropertyDescriptor FILE_NAME = new PropertyDescriptor.Builder()
+            .name("gridfs-file-name")
+            .displayName("File Name")
--- End diff --

Is this a fully-qualified path or just a file name? I couldn't tell if it 
worked like S3 buckets or not, if so maybe add something to the description for 
a n00b like me?


> Add support for MongoDB GridFS
> --
>
> Key: NIFI-4975
> URL: https://issues.apache.org/jira/browse/NIFI-4975
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> [An overview|https://docs.mongodb.com/manual/core/gridfs/] of what GridFS is.
> Basic CRUD processors for handling GridFS should be added to the MongoDB NAR.
>  
>  





[GitHub] nifi pull request #2546: NIFI-4975 Add GridFS processors

2018-04-11 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2546#discussion_r180937580
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/AbstractMongoProcessor.java
 ---
 @@ -229,7 +252,9 @@ protected MongoDatabase getDatabase(final ProcessContext context, final FlowFile
     }
 
     protected MongoCollection getCollection(final ProcessContext context, final FlowFile flowFile) {
-        final String collectionName = context.getProperty(COLLECTION_NAME).evaluateAttributeExpressions(flowFile).getValue();
+        final String collectionName = flowFile == null
--- End diff --

I think this should be != like the getDatabase() call above?
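For context, the guard under discussion falls back to the raw property value when there is no FlowFile whose attributes could drive expression-language evaluation. A rough standalone sketch of that shape, where evaluate() is a toy stand-in for NiFi's expression-language engine and all names are invented for illustration:

```java
import java.util.Map;

// Illustrative sketch only: CollectionNameResolver and evaluate() are
// stand-ins, not NiFi APIs.
class CollectionNameResolver {
    // Toy substitute for expression-language evaluation: replaces
    // ${attr} references with values from the attribute map.
    static String evaluate(String expression, Map<String, String> attributes) {
        String result = expression;
        for (Map.Entry<String, String> e : attributes.entrySet()) {
            result = result.replace("${" + e.getKey() + "}", e.getValue());
        }
        return result;
    }

    // The null-guard shape being reviewed: with no FlowFile there are no
    // attributes to evaluate against, so fall back to the raw property value.
    static String resolve(String property, Map<String, String> flowFileAttributes) {
        return flowFileAttributes == null
                ? property
                : evaluate(property, flowFileAttributes);
    }
}
```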


---




[jira] [Commented] (NIFI-4975) Add support for MongoDB GridFS

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434775#comment-16434775
 ] 

ASF GitHub Bot commented on NIFI-4975:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2546#discussion_r180937998
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/gridfs/AbstractGridFSProcessor.java
 ---
@@ -0,0 +1,111 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.nifi.processors.mongodb.gridfs;
+
+import com.mongodb.client.gridfs.GridFSBucket;
+import com.mongodb.client.gridfs.GridFSBuckets;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.mongodb.AbstractMongoProcessor;
+import org.apache.nifi.util.StringUtils;
+import org.bson.types.ObjectId;
+
+import java.util.ArrayList;
+import java.util.List;
+
+public abstract class AbstractGridFSProcessor extends AbstractMongoProcessor {
+
+    static final PropertyDescriptor BUCKET_NAME = new PropertyDescriptor.Builder()
+            .name("gridfs-bucket-name")
+            .displayName("Bucket Name")
+            .description("The GridFS bucket where the files will be stored. If left blank, it will use the default value 'fs' " +
+                    "that the MongoDB client driver uses.")
+            .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+            .required(false)
+            .addValidator(Validator.VALID)
+            .build();
+
+    static final PropertyDescriptor FILE_NAME = new PropertyDescriptor.Builder()
+            .name("gridfs-file-name")
+            .displayName("File Name")
+            .description("The name of the file in the bucket that is the target of this processor.")
+            .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+            .required(false)
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .build();
+
+    static final Relationship REL_FAILURE = new Relationship.Builder()
+            .name("failure")
+            .description("When there is a failure processing the flowfile, it goes to this relationship.")
+            .build();
+
+    static final Relationship REL_SUCCESS = new Relationship.Builder()
+            .name("success")
+            .description("When the operation success, the flowfile is sent to this relationship.")
--- End diff --

This wording is a bit awkward (perhaps copy-paste, but here's the chance to 
improve it)


> Add support for MongoDB GridFS
> --
>
> Key: NIFI-4975
> URL: https://issues.apache.org/jira/browse/NIFI-4975
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> [An overview|https://docs.mongodb.com/manual/core/gridfs/] of what GridFS is.
> Basic CRUD processors for handling GridFS should be added to the MongoDB NAR.
>  
>  







[jira] [Commented] (NIFI-4975) Add support for MongoDB GridFS

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434777#comment-16434777
 ] 

ASF GitHub Bot commented on NIFI-4975:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2546#discussion_r180937862
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/QueryHelper.java
 ---
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.nifi.processors.mongodb;
+
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+
+public interface QueryHelper {
+    AllowableValue MODE_ONE_COMMIT = new AllowableValue("all-at-once", "Full Query Fetch",
+            "Fetch the entire query result and then make it available to downstream processors.");
+    AllowableValue MODE_MANY_COMMITS = new AllowableValue("streaming", "Stream Query Results",
+            "As soon as the query start sending results to the downstream processors at regular intervals.");
+
+    PropertyDescriptor OPERATION_MODE = new PropertyDescriptor.Builder()
+            .name("mongo-operation-mode")
+            .displayName("Operation Mode")
+            .allowableValues(MODE_ONE_COMMIT, MODE_MANY_COMMITS)
+            .defaultValue(MODE_ONE_COMMIT.getValue())
+            .required(true)
+            .description("This option controls when results are made available to downstream processors. If streaming mode is enabled, " +
+                    "provenance will not be tracked relative to the input flowfile if an input flowfile is received and starts the query. In streaming mode " +
--- End diff --

Can you explain why provenance would not be tracked relative to the input 
flowfile? Also perhaps refer to "streaming mode" as "Stream Query Results" for 
consistency with what is presented to the user.
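To make the distinction between the two modes concrete, here is a NiFi-free sketch. Session is an invented stand-in for NiFi's ProcessSession, and the method names and batch-size parameter are assumptions for illustration, not the processor's actual implementation.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Illustrative sketch of the two Operation Mode behaviors; Session is a
// stand-in for NiFi's ProcessSession, not the real API.
class OperationModeSketch {
    interface Session {
        void transfer(String result);
        void commit();
    }

    // "Full Query Fetch": buffer everything, emit once, single commit.
    // Downstream sees results only after the whole query has completed.
    static void fullFetch(Iterator<String> results, Session session) {
        List<String> all = new ArrayList<>();
        results.forEachRemaining(all::add);
        all.forEach(session::transfer);
        session.commit();
    }

    // "Stream Query Results": transfer and commit in small batches, so
    // downstream processors see data while the query is still running.
    static void streaming(Iterator<String> results, Session session, int batchSize) {
        int inBatch = 0;
        while (results.hasNext()) {
            session.transfer(results.next());
            if (++inBatch == batchSize) {
                session.commit();
                inBatch = 0;
            }
        }
        if (inBatch > 0) {
            session.commit(); // flush any final partial batch
        }
    }
}
```

The intermediate commits are what complicate provenance in streaming mode: once a batch is committed, later batches can no longer be linked back to an input FlowFile from the same session.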


> Add support for MongoDB GridFS
> --
>
> Key: NIFI-4975
> URL: https://issues.apache.org/jira/browse/NIFI-4975
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> [An overview|https://docs.mongodb.com/manual/core/gridfs/] of what GridFS is.
> Basic CRUD processors for handling GridFS should be added to the MongoDB NAR.
>  
>  









[jira] [Updated] (NIFI-4975) Add support for MongoDB GridFS

2018-04-11 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4975:
---
Status: Patch Available  (was: Open)

> Add support for MongoDB GridFS
> --
>
> Key: NIFI-4975
> URL: https://issues.apache.org/jira/browse/NIFI-4975
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> [An overview|https://docs.mongodb.com/manual/core/gridfs/] of what GridFS is.
> Basic CRUD processors for handling GridFS should be added to the MongoDB NAR.
>  
>  





[jira] [Commented] (NIFI-5070) java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed

2018-04-11 Thread Gardella Juan Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434740#comment-16434740
 ] 

Gardella Juan Pablo commented on NIFI-5070:
---

Patch available.

> java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed
> --
>
> Key: NIFI-5070
> URL: https://issues.apache.org/jira/browse/NIFI-5070
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.6.0
>Reporter: Gardella Juan Pablo
>Priority: Major
>
> Discovered during NIFI-5049. According to the [ResultSet.next() 
> javadoc|https://docs.oracle.com/javase/8/docs/api/java/sql/ResultSet.html#next%E2%80%93]:
> _When a call to the {{next}} method returns {{false}}, the cursor is 
> positioned after the last row. Any invocation of a {{ResultSet}} method which 
> requires a current row will result in a {{SQLException}} being thrown. If the 
> result set type is {{TYPE_FORWARD_ONLY}}, it is vendor specified whether 
> their JDBC driver implementation will return {{false}} or throw an 
> {{SQLException}} on a subsequent call to {{next}}._
> With Phoenix Database and QueryDatabaseTable the exception 
> {{java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed}} is raised.
>  
>  





[GitHub] nifi pull request #2629: NIFI-5070 Fix java.sql.SQLException: ERROR 1101 (XC...

2018-04-11 Thread gardellajuanpablo
GitHub user gardellajuanpablo opened a pull request:

https://github.com/apache/nifi/pull/2629

NIFI-5070 Fix java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is…

… closed

Discovered on Phoenix Database by using the QueryDatabaseTable processor. The 
fix consists of applying Matt Burgess' suggestion:

"I think we'd need a try/catch around the next() only to see if the result 
set is closed,
and an inner try/catch around everything else, to catch other errors not 
related to
this behavior. "

The solution was verified against Phoenix DB. Also added unit tests to 
cover the
change.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gardellajuanpablo/nifi NIFI-5070

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2629.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2629


commit 859616f300986e62bd613cb3fd8cc05d823952c6
Author: Gardella Juan Pablo 
Date:   2018-04-11T23:53:17Z

NIFI-5070 Fix java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed

Discovered on Phoenix Database by using the QueryDatabaseTable processor. The 
fix consists of applying Matt Burgess' suggestion:

"I think we'd need a try/catch around the next() only to see if the result 
set is closed, and an inner try/catch around everything else, to catch other 
errors not related to this behavior."

The solution was verified against Phoenix DB. Also added unit tests to 
cover the
change.




---


[jira] [Commented] (NIFI-5070) java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434738#comment-16434738
 ] 

ASF GitHub Bot commented on NIFI-5070:
--

GitHub user gardellajuanpablo opened a pull request:

https://github.com/apache/nifi/pull/2629

NIFI-5070 Fix java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is…

… closed

Discovered on Phoenix Database by using the QueryDatabaseTable processor. The 
fix consists of applying Matt Burgess' suggestion:

"I think we'd need a try/catch around the next() only to see if the result 
set is closed, and an inner try/catch around everything else, to catch other 
errors not related to this behavior."

The solution was verified against Phoenix DB. Also added unit tests to 
cover the
change.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gardellajuanpablo/nifi NIFI-5070

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2629.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2629


commit 859616f300986e62bd613cb3fd8cc05d823952c6
Author: Gardella Juan Pablo 
Date:   2018-04-11T23:53:17Z

NIFI-5070 Fix java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed

Discovered on Phoenix Database by using the QueryDatabaseTable processor. The 
fix consists of applying Matt Burgess' suggestion:

"I think we'd need a try/catch around the next() only to see if the result 
set is closed, and an inner try/catch around everything else, to catch other 
errors not related to this behavior."

The solution was verified against Phoenix DB. Also added unit tests to 
cover the
change.




> java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed
> --
>
> Key: NIFI-5070
> URL: https://issues.apache.org/jira/browse/NIFI-5070
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.6.0
>Reporter: Gardella Juan Pablo
>Priority: Major
>
> Discovered during NIFI-5049. According to the [ResultSet.next() 
> javadoc|https://docs.oracle.com/javase/8/docs/api/java/sql/ResultSet.html#next--]:
> _When a call to the {{next}} method returns {{false}}, the cursor is 
> positioned after the last row. Any invocation of a {{ResultSet}} method which 
> requires a current row will result in a {{SQLException}} being thrown. If the 
> result set type is {{TYPE_FORWARD_ONLY}}, it is vendor specified whether 
> their JDBC driver implementation will return {{false}} or throw an 
> {{SQLException}} on a subsequent call to {{next}}._
> With Phoenix Database and QueryDatabaseTable the exception 
> {{java.sql.SQLException: ERROR 1101 (XCL01): ResultSet is closed}} is raised.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-450) Provide consumers with the ability to put metrics into a local tsdb to send off device

2018-04-11 Thread marco polo (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

marco polo updated MINIFICPP-450:
-
Description: Metrics can be gathered but are dropped when the device goes 
offline. Provide a controller service ( or similar mechanism ) and reporting 
task so that these metrics can be stored when offline and sent to another 
device or service when available.   (was: Metrics can be gathered but are 
dropped when the device goes offline. Provide a controller service ( or similar 
mechanism ) and reporting task so that these metrics can be send offline to 
another device or service.)

> Provide consumers with the ability to put metrics into a local tsdb to send 
> off device
> --
>
> Key: MINIFICPP-450
> URL: https://issues.apache.org/jira/browse/MINIFICPP-450
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Affects Versions: 0.5.0, 0.4.0
>Reporter: marco polo
>Priority: Major
>
> Metrics can be gathered but are dropped when the device goes offline. Provide 
> a controller service ( or similar mechanism ) and reporting task so that 
> these metrics can be stored when offline and sent to another device or 
> service when available. 





[jira] [Commented] (NIFI-4516) Add FetchSolr processor

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434594#comment-16434594
 ] 

ASF GitHub Bot commented on NIFI-4516:
--

Github user JohannesDaniel commented on the issue:

https://github.com/apache/nifi/pull/2517
  
@MikeThomsen Rebased everything; the build for the Solr processors works 
without any problems. However, when I try to build the whole application, I 
receive the following error:
[ERROR] Failed to execute goal on project nifi-livy-processors: Could not 
resolve dependencies for project 
org.apache.nifi:nifi-livy-processors:jar:1.7.0-SNAPSHOT: Failure to find 
org.apache.nifi:nifi-standard-processors:jar:tests:1.7.0-SNAPSHOT in 
https://repository.apache.org/snapshots was cached in the local repository, 
resolution will not be reattempted until the update interval of 
apache.snapshots has elapsed or updates are forced -> [Help 1]
Is this a known problem?


> Add FetchSolr processor
> ---
>
> Key: NIFI-4516
> URL: https://issues.apache.org/jira/browse/NIFI-4516
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Johannes Peter
>Assignee: Johannes Peter
>Priority: Major
>  Labels: features
>
> The processor shall be capable of:
> * querying Solr within a workflow,
> * making use of standard Solr functionalities such as faceting, 
> highlighting, result grouping, etc.,
> * making use of NiFi's expression language to build Solr queries, 
> * handling results as records.





[GitHub] nifi issue #2517: NIFI-4516 FetchSolr Processor

2018-04-11 Thread JohannesDaniel
Github user JohannesDaniel commented on the issue:

https://github.com/apache/nifi/pull/2517
  
@MikeThomsen Rebased everything; the build for the Solr processors works 
without any problems. However, when I try to build the whole application, I 
receive the following error:
[ERROR] Failed to execute goal on project nifi-livy-processors: Could not 
resolve dependencies for project 
org.apache.nifi:nifi-livy-processors:jar:1.7.0-SNAPSHOT: Failure to find 
org.apache.nifi:nifi-standard-processors:jar:tests:1.7.0-SNAPSHOT in 
https://repository.apache.org/snapshots was cached in the local repository, 
resolution will not be reattempted until the update interval of 
apache.snapshots has elapsed or updates are forced -> [Help 1]
Is this a known problem?


---


[jira] [Commented] (NIFI-543) Provide extensions a way to indicate that they can run only on primary node, if clustered

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434532#comment-16434532
 ] 

ASF GitHub Bot commented on NIFI-543:
-

Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2509#discussion_r18094
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-processor-configuration.js
 ---
@@ -741,8 +742,8 @@
 }
 });
 
-// show the execution node option if we're cluster or 
we're currently configured to run on the primary node only
-if (nfClusterSummary.isClustered() || executionNode 
=== 'PRIMARY') {
+// show the execution node option if we're clustered 
and execution node is not restricted to run only in primary node
+if (nfClusterSummary.isClustered() && 
executionNodeRestricted !== true) {
--- End diff --

I believe the `executionNode === 'PRIMARY'` was in place to ensure the 
currently configured value is shown. If the current value is set to PRIMARY, 
but this instance is no longer clustered, we need to render that fact. Once the 
user reconfigures this value, they will no longer be able to select this option 
(since the node isn't clustered and executionNode would be ALL). Hope this 
makes sense.
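The point above can be reduced to a small predicate. This is a hedged sketch only — the real logic is JavaScript in nf-processor-configuration.js, and the method name here is hypothetical:

```java
public class ExecutionNodeOption {

    // Show the "Execution Node" option when clustered, OR when the component is
    // already configured as PRIMARY -- so a node that left the cluster still
    // displays the value it is actually running with.
    static boolean showExecutionNodeOption(boolean clustered, String executionNode) {
        return clustered || "PRIMARY".equals(executionNode);
    }

    public static void main(String[] args) {
        // Unclustered but configured PRIMARY: must still render the fact.
        System.out.println(showExecutionNodeOption(false, "PRIMARY")); // prints true
        // Unclustered with default ALL: option is hidden after reconfiguration.
        System.out.println(showExecutionNodeOption(false, "ALL"));     // prints false
    }
}
```

Dropping the second condition, as the diff proposes, would hide the configured PRIMARY state on a node that is no longer clustered, which is the reviewer's concern.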


> Provide extensions a way to indicate that they can run only on primary node, 
> if clustered
> -
>
> Key: NIFI-543
> URL: https://issues.apache.org/jira/browse/NIFI-543
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core Framework, Documentation & Website, Extensions
>Reporter: Mark Payne
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>
> There are Processors that are known to be problematic if run from multiple 
> nodes simultaneously. These processors should be able to use a 
> @PrimaryNodeOnly annotation (or something similar) to indicate that they can 
> be scheduled to run only on primary node if run in a cluster.





[GitHub] nifi pull request #2509: NIFI-543 Added annotation to indicate processor sho...

2018-04-11 Thread mcgilman
Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2509#discussion_r18094
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-processor-configuration.js
 ---
@@ -741,8 +742,8 @@
 }
 });
 
-// show the execution node option if we're cluster or 
we're currently configured to run on the primary node only
-if (nfClusterSummary.isClustered() || executionNode 
=== 'PRIMARY') {
+// show the execution node option if we're clustered 
and execution node is not restricted to run only in primary node
+if (nfClusterSummary.isClustered() && 
executionNodeRestricted !== true) {
--- End diff --

I believe the `executionNode === 'PRIMARY'` was in place to ensure the 
currently configured value is shown. If the current value is set to PRIMARY, 
but this instance is no longer clustered, we need to render that fact. Once the 
user reconfigures this value, they will no longer be able to select this option 
(since the node isn't clustered and executionNode would be ALL). Hope this 
makes sense.


---


[jira] [Commented] (NIFI-4997) Actions taken on process groups do not appear in flow configuration history

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434515#comment-16434515
 ] 

ASF GitHub Bot commented on NIFI-4997:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2626


> Actions taken on process groups do not appear in flow configuration history
> ---
>
> Key: NIFI-4997
> URL: https://issues.apache.org/jira/browse/NIFI-4997
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.4.0
> Environment: CentOS 7, Docker
>Reporter: Craig Becker
>Assignee: Matt Gilman
>Priority: Major
>
> Selecting a process group and executing a stop or start action does not cause 
> anything to be logged in the flow configuration history. The current behavior 
> makes it impossible to trace who turned a process group off, or when. 
>  
> The expected/desired behavior would be to either add a log entry per 
> processor that was stopped/started, or to log that the process group was 
> stopped/started.





[GitHub] nifi pull request #2626: NIFI-4997: Fixing process group audit advice

2018-04-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2626


---


[jira] [Commented] (NIFI-4997) Actions taken on process groups do not appear in flow configuration history

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434513#comment-16434513
 ] 

ASF GitHub Bot commented on NIFI-4997:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2626
  
@mcgilman thanks, this definitely simplifies the code a lot, too, and makes 
it more consistent. Was able to verify starting/stopping process 
groups/individual components. Was able to verify changing to different versions 
of a versioned flow, and changing variable registry. All handled the 
authorization properly. Tested all of the above in both standalone mode and 
clustered. +1 will merge to master.


> Actions taken on process groups do not appear in flow configuration history
> ---
>
> Key: NIFI-4997
> URL: https://issues.apache.org/jira/browse/NIFI-4997
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.4.0
> Environment: CentOS 7, Docker
>Reporter: Craig Becker
>Assignee: Matt Gilman
>Priority: Major
>
> Selecting a process group and executing a stop or start action does not cause 
> anything to be logged in the flow configuration history. The current behavior 
> makes it impossible to trace who turned a process group off, or when. 
>  
> The expected/desired behavior would be to either add a log entry per 
> processor that was stopped/started, or to log that the process group was 
> stopped/started.





[jira] [Commented] (NIFI-4997) Actions taken on process groups do not appear in flow configuration history

2018-04-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434514#comment-16434514
 ] 

ASF subversion and git services commented on NIFI-4997:
---

Commit b7272e3f3282c6e42eb7ada86d15f32188527ccf in nifi's branch 
refs/heads/master from [~mcgilman]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=b7272e3 ]

NIFI-4997:
- Fixing process group audit advice.
- Setting spring security user in background threads.
- Removing unnecessary overloaded methods.

This closes #2626.

Signed-off-by: Mark Payne 


> Actions taken on process groups do not appear in flow configuration history
> ---
>
> Key: NIFI-4997
> URL: https://issues.apache.org/jira/browse/NIFI-4997
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.4.0
> Environment: CentOS 7, Docker
>Reporter: Craig Becker
>Assignee: Matt Gilman
>Priority: Major
>
> Selecting a process group and executing a stop or start action does not cause 
> anything to be logged in the flow configuration history. The current behavior 
> makes it impossible to trace who turned a process group off, or when. 
>  
> The expected/desired behavior would be to either add a log entry per 
> processor that was stopped/started, or to log that the process group was 
> stopped/started.





[GitHub] nifi issue #2626: NIFI-4997: Fixing process group audit advice

2018-04-11 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/2626
  
@mcgilman thanks, this definitely simplifies the code a lot, too, and makes 
it more consistent. Was able to verify starting/stopping process 
groups/individual components. Was able to verify changing to different versions 
of a versioned flow, and changing variable registry. All handled the 
authorization properly. Tested all of the above in both standalone mode and 
clustered. +1 will merge to master.


---


[jira] [Commented] (NIFI-5060) UpdateRecord substringAfter and substringAfterLast only increments by 1

2018-04-11 Thread Pierre Villard (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434468#comment-16434468
 ] 

Pierre Villard commented on NIFI-5060:
--

Hi [~ioneethling], welcome to the NiFi community, and thanks for having a look 
at how to contribute. I'd suggest you have a look at the documentation on 
this subject: 
[https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide]

Also, if you pick a Jira you want to work on, I'd recommend ensuring no one is 
already working on it. For this one you can see that a pull request is already 
opened. However, you're more than welcome to help review the PR by giving it 
a try and commenting/sharing your thoughts on the GitHub PR page. For 
NIFI-4908, there are also existing PRs, and I'm sure we'd highly appreciate 
help reviewing the new processors. For NIFI-4517, it's best to discuss your 
thoughts on the Jira if you're not sure how to proceed; otherwise you can go 
ahead and submit a PR on GitHub.

> UpdateRecord substringAfter and substringAfterLast only increments by 1
> ---
>
> Key: NIFI-5060
> URL: https://issues.apache.org/jira/browse/NIFI-5060
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: Chris Green
>Priority: Major
>  Labels: easyfix, newbie
> Attachments: Validate_substringafter_Behavior.xml
>
>
> This is my first submitted issue, so please feel free to point me in the 
> correct direction if I make process mistakes.
> Replication:
> Drag a GenerateFlowFile onto the canvas, set its run schedule to some high 
> value like 600 seconds, and configure this property:
> "Custom Text": {"value": "01230123"}
> Connect GenerateFlowFile with an UpdateAttribute set to add the attribute 
> "avro.schema" with a value of 
>  
> {code:java}
> { 
> "type": "record", 
> "name": "test", 
> "fields" : [{"name": "value", "type": "string"}]
> }
> {code}
>  
>  
> Connect UpdateAttribute to an UpdateRecord processor, and auto-terminate the 
> success and failure relationships. Set the Record Reader to a new 
> JsonTreeReader, and configure the JsonTreeReader to use the "Use 'Schema 
> Text' Attribute" strategy.
> Create a JsonRecordSetWriter and set the Schema Text to:
>  
>  
> {code:java}
> { 
> "type": "record", 
> "name": "test", 
> "fields" : [
> {"name": "value", "type": "string"},
> {"name": "example1", "type": "string"},
> {"name": "example2", "type": "string"},
> {"name": "example3", "type": "string"},
> {"name": "example4", "type": "string"}
> ]
>  }
> {code}
>  
> Add the following properties to UpdateRecord
>  
> ||Property||Value||
> |/example1|substringAfter(/value, "1") |
> |/example2|substringAfter(/value, "123") |
> |/example3|substringAfterLast(/value, "1")|
> |/example4|substringAfterLast(/value, "123")|
>  
> Resulting record currently:
>  
> {code:java}
> [{ 
> "value" : "01230123", 
> "example1" : "230123", 
> "example2" : "30123", 
> "example3" : "23", 
> "example4" : "3" 
> }]
> {code}
>  
>  
>  
> Problem:
> When using the UpdateRecord processor's substringAfter() function, once the 
> search phrase is found the returned substring is advanced by only 1 character 
> rather than by the length of the search term. 
> Based on XPath and other implementations of substringAfter functions I've 
> seen, the value returned should have the whole search term removed, rather 
> than just its first character.
>  
>  
> Resulting record should be:
>  
> {code:java}
> [{ 
> "value" : "01230123", 
> "example1" : "230123", 
> "example2" : "0123", 
> "example3" : "23", 
> "example4" : "" 
> }]
> {code}
>  
>  
> I'm cleaning up a fix with test code that will change the increment from 1 to 
> the length of the search term. 
> It appears substringBefore is not impacted, as it always returns the text 
> before the found search term, which is the expected behavior.
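The fix described above amounts to advancing past the whole search term rather than a single character. Here is a minimal standalone sketch (the method names are illustrative; NiFi's actual record-path functions live elsewhere) that reproduces the expected record:

```java
public class SubstringAfterFix {

    // Advance by term.length() (the fix), not by 1 (the reported bug).
    static String substringAfter(String value, String term) {
        int i = value.indexOf(term);
        return i < 0 ? "" : value.substring(i + term.length());
    }

    static String substringAfterLast(String value, String term) {
        int i = value.lastIndexOf(term);
        return i < 0 ? "" : value.substring(i + term.length());
    }

    public static void main(String[] args) {
        String value = "01230123";
        System.out.println(substringAfter(value, "1"));       // prints 230123
        System.out.println(substringAfter(value, "123"));     // prints 0123
        System.out.println(substringAfterLast(value, "1"));   // prints 23
        System.out.println(substringAfterLast(value, "123")); // prints an empty line
    }
}
```

With value = "01230123", these produce exactly the "should be" record in the report.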





[jira] [Commented] (NIFI-4942) NiFi Toolkit - Allow migration of master key without previous password

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434457#comment-16434457
 ] 

ASF GitHub Bot commented on NIFI-4942:
--

Github user YolandaMDavis commented on the issue:

https://github.com/apache/nifi/pull/2628
  
@alopresto thanks for addressing this, happy to review. It does look like 
Travis is failing on a RAT-check-related error in nifi-toolkit-encrypt-config.


> NiFi Toolkit - Allow migration of master key without previous password
> --
>
> Key: NIFI-4942
> URL: https://issues.apache.org/jira/browse/NIFI-4942
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.5.0
>Reporter: Yolanda M. Davis
>Assignee: Andy LoPresto
>Priority: Major
>
> Currently the encryption cli in nifi toolkit requires that, in order to 
> migrate from one master key to the next, the previous master key or password 
> should be provided. In cases where the provisioning tool doesn't have the 
> previous value available this becomes challenging to provide and may be prone 
> to error. In speaking with [~alopresto] we can allow toolkit to support a 
> mode of execution such that the master key can be updated without requiring 
> the previous password. Also, documentation around its usage should be updated 
> to clearly describe the purpose and the type of environment where this 
> command should be used (admin-only access, etc.).





[GitHub] nifi issue #2628: NIFI-4942 Add capability for encrypt-config tool to use se...

2018-04-11 Thread YolandaMDavis
Github user YolandaMDavis commented on the issue:

https://github.com/apache/nifi/pull/2628
  
@alopresto thanks for addressing this, happy to review. It does look like 
Travis is failing on a RAT-check-related error in nifi-toolkit-encrypt-config.


---


[jira] [Commented] (NIFI-5052) Create a "delete by query" ElasticSearch processor

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434453#comment-16434453
 ] 

ASF GitHub Bot commented on NIFI-5052:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r180870941
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/java/org/apache/nifi/processors/elasticsearch/ElasticSearchRestProcessor.java
 ---
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.elasticsearch.ElasticSearchClientService;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+
+public interface ElasticSearchRestProcessor {
+PropertyDescriptor INDEX = new PropertyDescriptor.Builder()
+.name("el-rest-fetch-index")
+.displayName("Index")
+.description("The name of the index to read from")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+PropertyDescriptor TYPE = new PropertyDescriptor.Builder()
+.name("el-rest-type")
+.displayName("Type")
+.description("The type of this document (used by Elasticsearch 
for indexing and searching)")
+.required(false)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+PropertyDescriptor QUERY = new PropertyDescriptor.Builder()
+.name("el-rest-query")
+.displayName("Query")
+.description("A query in JSON syntax, not Lucene syntax. Ex: " 
+
+"{\n" +
--- End diff --

Done.


> Create a "delete by query" ElasticSearch processor
> --
>
> Key: NIFI-5052
> URL: https://issues.apache.org/jira/browse/NIFI-5052
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>






[jira] [Commented] (NIFI-5052) Create a "delete by query" ElasticSearch processor

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434452#comment-16434452
 ] 

ASF GitHub Bot commented on NIFI-5052:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r180870902
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/java/org/apache/nifi/processors/elasticsearch/ElasticSearchRestProcessor.java
 ---
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.elasticsearch.ElasticSearchClientService;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+
+public interface ElasticSearchRestProcessor {
+PropertyDescriptor INDEX = new PropertyDescriptor.Builder()
+.name("el-rest-fetch-index")
+.displayName("Index")
+.description("The name of the index to read from")
--- End diff --

Done.


> Create a "delete by query" ElasticSearch processor
> --
>
> Key: NIFI-5052
> URL: https://issues.apache.org/jira/browse/NIFI-5052
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>






[jira] [Commented] (NIFI-5052) Create a "delete by query" ElasticSearch processor

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434451#comment-16434451
 ] 

ASF GitHub Bot commented on NIFI-5052:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r180870855
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-client-service/pom.xml
 ---
@@ -137,7 +149,7 @@
 9400
 5.6.2
 90
-
${project.basedir}/src/test/resources/setup.script
+

--- End diff --

Done.


> Create a "delete by query" ElasticSearch processor
> --
>
> Key: NIFI-5052
> URL: https://issues.apache.org/jira/browse/NIFI-5052
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>






[jira] [Commented] (NIFI-5052) Create a "delete by query" ElasticSearch processor

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434450#comment-16434450
 ] 

ASF GitHub Bot commented on NIFI-5052:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r180870831
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-client-service/pom.xml
 ---
@@ -69,11 +69,11 @@
 2.6
 
 
-
-org.elasticsearch.client
-rest
-5.0.1
-
+
+
+
+
+
--- End diff --

Done


> Create a "delete by query" ElasticSearch processor
> --
>
> Key: NIFI-5052
> URL: https://issues.apache.org/jira/browse/NIFI-5052
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>










[GitHub] nifi pull request #2616: NIFI-5052 Added DeleteByQuery ElasticSearch process...

2018-04-11 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r180870941
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/java/org/apache/nifi/processors/elasticsearch/ElasticSearchRestProcessor.java
 ---
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.elasticsearch.ElasticSearchClientService;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+
+public interface ElasticSearchRestProcessor {
+PropertyDescriptor INDEX = new PropertyDescriptor.Builder()
+.name("el-rest-fetch-index")
+.displayName("Index")
+.description("The name of the index to read from")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+PropertyDescriptor TYPE = new PropertyDescriptor.Builder()
+.name("el-rest-type")
+.displayName("Type")
+.description("The type of this document (used by Elasticsearch 
for indexing and searching)")
+.required(false)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+PropertyDescriptor QUERY = new PropertyDescriptor.Builder()
+.name("el-rest-query")
+.displayName("Query")
+.description("A query in JSON syntax, not Lucene syntax. Ex: " 
+
+"{\n" +
--- End diff --

Done.


---
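For context on the "Query" property in the diff above: Elasticsearch expresses queries in its JSON Query DSL rather than as Lucene query strings. As an illustrative sketch only (the field name and value are hypothetical, not from this pull request), a delete-by-query request body might look like:

```json
{
  "query": {
    "match": {
      "status": "obsolete"
    }
  }
}
```

A `match_all` query (`{"query": {"match_all": {}}}`) would target every document in the index, so it is worth double-checking the query before wiring it into a delete processor.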




[jira] [Commented] (NIFI-5052) Create a "delete by query" ElasticSearch processor

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434449#comment-16434449
 ] 

ASF GitHub Bot commented on NIFI-5052:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r180870705
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/java/org/apache/nifi/processors/elasticsearch/ElasticSearchRestProcessor.java
 ---
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.elasticsearch.ElasticSearchClientService;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+
+public interface ElasticSearchRestProcessor {
+PropertyDescriptor INDEX = new PropertyDescriptor.Builder()
+.name("el-rest-fetch-index")
+.displayName("Index")
+.description("The name of the index to read from")
+.required(true)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+PropertyDescriptor TYPE = new PropertyDescriptor.Builder()
+.name("el-rest-type")
+.displayName("Type")
+.description("The type of this document (used by Elasticsearch 
for indexing and searching)")
+.required(false)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+PropertyDescriptor QUERY = new PropertyDescriptor.Builder()
+.name("el-rest-query")
+.displayName("Query")
+.description("A query in JSON syntax, not Lucene syntax. Ex: " 
+
--- End diff --

Done


> Create a "delete by query" ElasticSearch processor
> --
>
> Key: NIFI-5052
> URL: https://issues.apache.org/jira/browse/NIFI-5052
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>








[jira] [Created] (MINIFICPP-450) Provide consumers with the ability to put metrics into a local tsdb to send off device

2018-04-11 Thread marco polo (JIRA)
marco polo created MINIFICPP-450:


 Summary: Provide consumers with the ability to put metrics into a 
local tsdb to send off device
 Key: MINIFICPP-450
 URL: https://issues.apache.org/jira/browse/MINIFICPP-450
 Project: NiFi MiNiFi C++
  Issue Type: Improvement
Affects Versions: 0.4.0, 0.5.0
Reporter: marco polo


Metrics can be gathered but are dropped when the device goes offline. Provide a 
controller service (or a similar mechanism) and a reporting task so that these 
metrics can be sent off the device to another device or service.





[jira] [Commented] (NIFI-4809) Implement a SiteToSiteMetricsReportingTask

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434421#comment-16434421
 ] 

ASF GitHub Bot commented on NIFI-4809:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2575
  
+1 LGTM, ran full build with unit tests and tried Ambari Format as well as 
Record Format with an AvroRecordSetWriter. Verified the records are in the 
prescribed format and the standard S2S reporting task attributes are correct. 
Also the documentation is great, thanks for this addition! Merging to master


> Implement a SiteToSiteMetricsReportingTask
> --
>
> Key: NIFI-4809
> URL: https://issues.apache.org/jira/browse/NIFI-4809
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.7.0
>
>
> At the moment there is an AmbariReportingTask to send the NiFi-related 
> metrics of the host to the Ambari Metrics Service. In a multi-cluster 
> configuration, or when working with MiNiFi (Java) agents, it might not be 
> possible for all the NiFi instances (NiFi and/or MiNiFi) to access the AMS 
> REST API.
> To solve this problem, a solution would be to implement a 
> SiteToSiteMetricsReportingTask to send the data via S2S to the "main" NiFi 
> instance/cluster that will be able to publish the metrics into AMS (using 
> InvokeHTTP). This way, it is possible to have the metrics of all the 
> instances exposed in one AMS instance.
> I propose to send the data formatted as we are doing right now in the Ambari 
> reporting task. If needed, it can be easily converted into another schema 
> using the record processors once received via S2S.



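For readers unfamiliar with the Ambari side of the ticket above: the Ambari Metrics Service collector accepts metrics as a JSON envelope POSTed to its timeline endpoint, which is roughly the shape the Ambari-format output would need to arrive in after transiting S2S. The sketch below is an assumption-laden illustration of the AMS timeline-metrics schema; the metric name, host, and values are made up, not taken from the reporting task's actual output:

```json
{
  "metrics": [
    {
      "metricname": "FlowFilesReceivedLast5Minutes",
      "appid": "nifi",
      "hostname": "nifi-host-1.example.com",
      "timestamp": 1523450000000,
      "starttime": 1523450000000,
      "metrics": { "1523450000000": "42" }
    }
  ]
}
```

If the records arrive in a different schema, the record processors mentioned in the ticket could reshape them into this envelope before the InvokeHTTP call.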


[jira] [Updated] (NIFI-4809) Implement a SiteToSiteMetricsReportingTask

2018-04-11 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4809:
---
Status: Patch Available  (was: Reopened)

> Implement a SiteToSiteMetricsReportingTask
> --
>
> Key: NIFI-4809
> URL: https://issues.apache.org/jira/browse/NIFI-4809
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> At the moment there is an AmbariReportingTask to send the NiFi-related 
> metrics of the host to the Ambari Metrics Service. In a multi-cluster 
> configuration, or when working with MiNiFi (Java) agents, it might not be 
> possible for all the NiFi instances (NiFi and/or MiNiFi) to access the AMS 
> REST API.
> To solve this problem, a solution would be to implement a 
> SiteToSiteMetricsReportingTask to send the data via S2S to the "main" NiFi 
> instance/cluster that will be able to publish the metrics into AMS (using 
> InvokeHTTP). This way, it is possible to have the metrics of all the 
> instances exposed in one AMS instance.
> I propose to send the data formatted as we are doing right now in the Ambari 
> reporting task. If needed, it can be easily converted into another schema 
> using the record processors once received via S2S.





[jira] [Commented] (NIFI-4809) Implement a SiteToSiteMetricsReportingTask

2018-04-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434422#comment-16434422
 ] 

ASF subversion and git services commented on NIFI-4809:
---

Commit 6fbe1515eefd2071dc75a1de2c1fc15cc282da76 in nifi's branch 
refs/heads/master from [~pvillard]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=6fbe151 ]

NIFI-4809 - Implement a SiteToSiteMetricsReportingTask

Fixed dependency issue by providing a local JSON reader

Rebased + fixed conflict + updated versions in pom + EL scope

Signed-off-by: Matthew Burgess 

This closes #2575


> Implement a SiteToSiteMetricsReportingTask
> --
>
> Key: NIFI-4809
> URL: https://issues.apache.org/jira/browse/NIFI-4809
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.7.0
>
>
> At the moment there is an AmbariReportingTask to send the NiFi-related 
> metrics of the host to the Ambari Metrics Service. In a multi-cluster 
> configuration, or when working with MiNiFi (Java) agents, it might not be 
> possible for all the NiFi instances (NiFi and/or MiNiFi) to access the AMS 
> REST API.
> To solve this problem, a solution would be to implement a 
> SiteToSiteMetricsReportingTask to send the data via S2S to the "main" NiFi 
> instance/cluster that will be able to publish the metrics into AMS (using 
> InvokeHTTP). This way, it is possible to have the metrics of all the 
> instances exposed in one AMS instance.
> I propose to send the data formatted as we are doing right now in the Ambari 
> reporting task. If needed, it can be easily converted into another schema 
> using the record processors once received via S2S.





[jira] [Updated] (NIFI-4809) Implement a SiteToSiteMetricsReportingTask

2018-04-11 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4809:
---
Fix Version/s: 1.7.0

> Implement a SiteToSiteMetricsReportingTask
> --
>
> Key: NIFI-4809
> URL: https://issues.apache.org/jira/browse/NIFI-4809
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.7.0
>
>
> At the moment there is an AmbariReportingTask to send the NiFi-related 
> metrics of the host to the Ambari Metrics Service. In a multi-cluster 
> configuration, or when working with MiNiFi (Java) agents, it might not be 
> possible for all the NiFi instances (NiFi and/or MiNiFi) to access the AMS 
> REST API.
> To solve this problem, a solution would be to implement a 
> SiteToSiteMetricsReportingTask to send the data via S2S to the "main" NiFi 
> instance/cluster that will be able to publish the metrics into AMS (using 
> InvokeHTTP). This way, it is possible to have the metrics of all the 
> instances exposed in one AMS instance.
> I propose to send the data formatted as we are doing right now in the Ambari 
> reporting task. If needed, it can be easily converted into another schema 
> using the record processors once received via S2S.





[jira] [Commented] (NIFI-4809) Implement a SiteToSiteMetricsReportingTask

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434423#comment-16434423
 ] 

ASF GitHub Bot commented on NIFI-4809:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2575


> Implement a SiteToSiteMetricsReportingTask
> --
>
> Key: NIFI-4809
> URL: https://issues.apache.org/jira/browse/NIFI-4809
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.7.0
>
>
> At the moment there is an AmbariReportingTask to send the NiFi-related 
> metrics of the host to the Ambari Metrics Service. In a multi-cluster 
> configuration, or when working with MiNiFi (Java) agents, it might not be 
> possible for all the NiFi instances (NiFi and/or MiNiFi) to access the AMS 
> REST API.
> To solve this problem, a solution would be to implement a 
> SiteToSiteMetricsReportingTask to send the data via S2S to the "main" NiFi 
> instance/cluster that will be able to publish the metrics into AMS (using 
> InvokeHTTP). This way, it is possible to have the metrics of all the 
> instances exposed in one AMS instance.
> I propose to send the data formatted as we are doing right now in the Ambari 
> reporting task. If needed, it can be easily converted into another schema 
> using the record processors once received via S2S.






[jira] [Updated] (NIFI-4809) Implement a SiteToSiteMetricsReportingTask

2018-04-11 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4809:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Implement a SiteToSiteMetricsReportingTask
> --
>
> Key: NIFI-4809
> URL: https://issues.apache.org/jira/browse/NIFI-4809
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.7.0
>
>
> At the moment there is an AmbariReportingTask to send the NiFi-related 
> metrics of the host to the Ambari Metrics Service. In a multi-cluster 
> configuration, or when working with MiNiFi (Java) agents, it might not be 
> possible for all the NiFi instances (NiFi and/or MiNiFi) to access the AMS 
> REST API.
> To solve this problem, a solution would be to implement a 
> SiteToSiteMetricsReportingTask to send the data via S2S to the "main" NiFi 
> instance/cluster that will be able to publish the metrics into AMS (using 
> InvokeHTTP). This way, it is possible to have the metrics of all the 
> instances exposed in one AMS instance.
> I propose to send the data formatted as we are doing right now in the Ambari 
> reporting task. If needed, it can be easily converted into another schema 
> using the record processors once received via S2S.







[jira] [Commented] (NIFI-4942) NiFi Toolkit - Allow migration of master key without previous password

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434409#comment-16434409
 ] 

ASF GitHub Bot commented on NIFI-4942:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2628
  
Here are some instructions and expected outputs to demonstrate that the 
tool works as intended: 

```
# CD to $NIFI_HOME/conf because secure_hash.key must be written to 
immediate directory

# Populate sensitive properties in nifi.properties in order for something 
to be encrypted
sed 's/asswd=//' nifi.properties 
>nifi-sensitive.properties

# Initial encryption of nifi.properties

../../../../../nifi-toolkit/nifi-toolkit-assembly/target/nifi-toolkit-1.7.0-SNAPSHOT-bin/nifi-toolkit-1.7.0-SNAPSHOT/bin/encrypt-config.sh
 -v \
-b bootstrap.conf \
-n nifi-sensitive.properties \
-o nifi-encrypted.properties \
-p passwordpassword

# Example hashes for "passwordpassword"
# 
secureHashKey=$s0$100801$H8N5sEErC9hOVpQLxUt+oA$RrwImM1uWD59KuA1AxFamK7oPHlnI1uBXEN2lt4CpbM
# 
secureHashPassword=$s0$100801$dZ04VTEBHxTR8tb6j29q/w$mXsXKxvd3nYXXOSoxobO7gkLaLAdz2dZRqAvPNfOzWE

# Verify secure_hash.key file generated and populated w/ both key and 
password hash
more secure_hash.key

# Derived key for "passwordpassword"
# 
nifi.bootstrap.sensitive.key=A2EA52795B33AB2F21C93E7E820D08369F1448478C877F4C710D6E85FD904AE6

# Verify bootstrap.conf file updated with master key value
more bootstrap.conf

# Verify encryption of sensitive properties occurred
more nifi-sensitive.properties | grep 'assw'
more nifi-encrypted.properties | grep 'assw'

# Migration using raw password

../../../../../nifi-toolkit/nifi-toolkit-assembly/target/nifi-toolkit-1.7.0-SNAPSHOT-bin/nifi-toolkit-1.7.0-SNAPSHOT/bin/encrypt-config.sh
 -v -m \
-b bootstrap.conf \
-n nifi-encrypted.properties \
-o nifi-migrated.properties \
-p thisIsABadPassword \
-w passwordpassword

# Example hashes for "thisIsABadPassword"
# 
secureHashKey=$s0$100801$Y5rcY+pECpOBw5JBT1esMw$OEfnR/cze9u6ZjHMbd6NzvQltz2cC0qskSH8XeiXcp4
# 
secureHashPassword=$s0$100801$rxjtgO5m859l6aI1xHIjpA$jAqTpGrJNiTkcIei6HtbCuZmhkPnqDlC3G4RjxRtf18

# Migration using hashed password (single quote escape hash to avoid 
dollar-sign variable evaluation)

../../../../../nifi-toolkit/nifi-toolkit-assembly/target/nifi-toolkit-1.7.0-SNAPSHOT-bin/nifi-toolkit-1.7.0-SNAPSHOT/bin/encrypt-config.sh
 -v -m \
-b bootstrap.conf \
-n nifi-migrated.properties \
-o nifi-migrated-from-hash.properties \
-p thisIsABadPassword2 \
-z 
'$s0$100801$rxjtgO5m859l6aI1xHIjpA$jAqTpGrJNiTkcIei6HtbCuZmhkPnqDlC3G4RjxRtf18'

# Example output

hw12203:.../nifi/nifi-assembly/target/nifi-1.7.0-SNAPSHOT-bin/nifi-1.7.0-SNAPSHOT/conf
 (NIFI-4942) alopresto
 174714s @ 14:37:45 $ 
../../../../../nifi-toolkit/nifi-toolkit-assembly/target/nifi-toolkit-1.7.0-SNAPSHOT-bin/nifi-toolkit-1.7.0-SNAPSHOT/bin/encrypt-config.sh
 -v \
> -m -b bootstrap.conf \
> -n nifi-migrated.properties \
> -o nifi-migrated-from-hash.properties \
> -p thisIsABadPassword2 \
> -z 
'$s0$100801$rxjtgO5m859l6aI1xHIjpA$jAqTpGrJNiTkcIei6HtbCuZmhkPnqDlC3G4RjxRtf18'
Listening for transport dt_socket at address: 8000
2018/04/11 14:38:28 INFO [main] 
org.apache.nifi.properties.ConfigEncryptionTool: Handling encryption of 
nifi.properties
2018/04/11 14:38:28 INFO [main] 
org.apache.nifi.properties.ConfigEncryptionTool:bootstrap.conf: 
  bootstrap.conf
2018/04/11 14:38:28 INFO [main] 
org.apache.nifi.properties.ConfigEncryptionTool: (src)  nifi.properties:
  nifi-migrated.properties
2018/04/11 14:38:28 INFO [main] 
org.apache.nifi.properties.ConfigEncryptionTool: (dest) nifi.properties:
  nifi-migrated-from-hash.properties
2018/04/11 14:38:28 INFO [main] 
org.apache.nifi.properties.ConfigEncryptionTool: (src)  
login-identity-providers.xml: null
2018/04/11 14:38:28 INFO [main] 
org.apache.nifi.properties.ConfigEncryptionTool: (dest) 
login-identity-providers.xml: null
2018/04/11 14:38:28 INFO [main] 
org.apache.nifi.properties.ConfigEncryptionTool: (src)  authorizers.xml:
  null
2018/04/11 14:38:28 INFO [main] 
org.apache.nifi.properties.ConfigEncryptionTool: (dest) authorizers.xml:
  null
2018/04/11 14:38:28 INFO [main] 
org.apache.nifi.properties.ConfigEncryptionTool: (src)  flow.xml.gz:
  null
2018/04/11 14:38:28 INFO [main] 
org.apache.nifi.properties.ConfigEncryptionTool: (dest) flow.xml.gz:
  null
2018/04/11 14:38:28 INFO [main] 
org.apache.nifi.properties.ConfigEncryptionTool: Key migration mode activated
2018/04/11 14:38:28 INFO [main] 
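The `$s0$...` values in the walkthrough above are scrypt hashes in a modular-crypt-style format. Assuming the lambdaworks-style encoding (`$s0$<params-hex>$<base64 salt>$<base64 hash>`, where the params integer packs `log2(N) << 16 | r << 8 | p`) — an assumption about the format, not something stated in the message — the cost parameters in `100801` can be decoded like this:

```python
def decode_scrypt_params(mcf: str):
    """Decode the cost parameters from an $s0$-format scrypt hash.

    Assumes the lambdaworks-style encoding: the hex field after "s0"
    packs log2(N) in the high bits, then r, then p in the low byte.
    """
    fields = mcf.split("$")
    if len(fields) < 3 or fields[1] != "s0":
        raise ValueError("not an s0-format scrypt hash")
    params = int(fields[2], 16)
    n = 1 << (params >> 16)   # CPU/memory cost factor N
    r = (params >> 8) & 0xFF  # block size
    p = params & 0xFF         # parallelization factor
    return n, r, p

# The example hashes for "passwordpassword" above carry params "100801":
print(decode_scrypt_params(
    "$s0$100801$H8N5sEErC9hOVpQLxUt+oA$RrwImM1uWD59KuA1AxFamK7oPHlnI1uBXEN2lt4CpbM"))
# → (65536, 8, 1): N = 2^16, r = 8, p = 1
```

Because the parameters travel inside the hash string itself, the same decode applies to any hash the tool emits, even if the defaults change between versions.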


[jira] [Commented] (NIFI-4942) NiFi Toolkit - Allow migration of master key without previous password

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434408#comment-16434408
 ] 

ASF GitHub Bot commented on NIFI-4942:
--

GitHub user alopresto opened a pull request:

https://github.com/apache/nifi/pull/2628

NIFI-4942 Add capability for encrypt-config tool to use securely hashed 
key/password for demonstration of previous knowledge

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alopresto/nifi NIFI-4942

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2628.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2628


commit b122f9192394b1d50eb319eb2b6e1999aefafd30
Author: Andy LoPresto 
Date:   2018-03-23T03:47:53Z

NIFI-4942 [WIP] Added skeleton for secure hash handling in encrypt-config 
toolkit.
Added test resource for Python scrypt implementation/verifier.
Added unit tests.

commit 4f7bd03d8730ec50babb66850d6dbdc05c1a4834
Author: Andy LoPresto 
Date:   2018-03-29T02:11:42Z

NIFI-4942 [WIP] More unit tests passing.

commit 5d43edfac6fa99a8f4cba4174628b9133ff5a6f2
Author: Andy LoPresto 
Date:   2018-04-02T23:18:38Z

NIFI-4942 All unit tests pass and test artifacts are cleaned up.

commit c2fc9555704b3df68a63200ff8bc850b2c3730fd
Author: Andy LoPresto 
Date:   2018-04-05T00:25:33Z

NIFI-4942 Added RAT exclusions.

commit 513dadfb29595d9755d220678b6233c9fb2a8c66
Author: Andy LoPresto 
Date:   2018-04-05T00:26:00Z

NIFI-4942 Added Scrypt hash format checker.
Added unit tests.

commit 411b54f15871227d8128446cc131b85da00a56ef
Author: Andy LoPresto 
Date:   2018-04-05T00:26:22Z

NIFI-4942 Added NiFi hash format checker.
Added unit tests.

commit 1b2d6406b94c08b99b434bdc7b47d1cf1eb7319c
Author: Andy LoPresto 
Date:   2018-04-05T23:45:24Z

NIFI-4942 Added check for simultaneous use of -z/-y.
Added logic to check hashed password/key.
Added logic to retrieve secure hash from file to compare.
Added unit tests (125/125).

commit 706015ce3fa745a7de0485e68e7cc96efe529d62
Author: Andy LoPresto 
Date:   2018-04-10T02:48:44Z

NIFI-4942 Added new ExitCode.
Added logic to return current hash params in JSON for Ambari to consume.
Fixed typos in error messages.
Added unit tests (129/129).

commit 6308fd65b994bad637a205a5e661db4712cde811
Author: Andy LoPresto 
Date:   2018-04-10T22:28:49Z

NIFI-4942 Added Scrypt hash format verification for hash check.
Added unit tests.

commit 0b2d12f9a440d207f1d21c06566b839a18efd089
Author: Andy LoPresto 
Date:   2018-04-11T17:54:40Z

NIFI-4942 Fixed RAT checks.




> NiFi Toolkit - Allow migration of master key without previous password
> --
>
> Key: NIFI-4942
> URL: 

[GitHub] nifi pull request #2628: NIFI-4942 Add capability for encrypt-config tool to...

2018-04-11 Thread alopresto
GitHub user alopresto opened a pull request:

https://github.com/apache/nifi/pull/2628

NIFI-4942 Add capability for encrypt-config tool to use securely hashed 
key/password for demonstration of previous knowledge

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alopresto/nifi NIFI-4942

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2628.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2628


commit b122f9192394b1d50eb319eb2b6e1999aefafd30
Author: Andy LoPresto 
Date:   2018-03-23T03:47:53Z

NIFI-4942 [WIP] Added skeleton for secure hash handling in encrypt-config 
toolkit.
Added test resource for Python scrypt implementation/verifier.
Added unit tests.

commit 4f7bd03d8730ec50babb66850d6dbdc05c1a4834
Author: Andy LoPresto 
Date:   2018-03-29T02:11:42Z

NIFI-4942 [WIP] More unit tests passing.

commit 5d43edfac6fa99a8f4cba4174628b9133ff5a6f2
Author: Andy LoPresto 
Date:   2018-04-02T23:18:38Z

NIFI-4942 All unit tests pass and test artifacts are cleaned up.

commit c2fc9555704b3df68a63200ff8bc850b2c3730fd
Author: Andy LoPresto 
Date:   2018-04-05T00:25:33Z

NIFI-4942 Added RAT exclusions.

commit 513dadfb29595d9755d220678b6233c9fb2a8c66
Author: Andy LoPresto 
Date:   2018-04-05T00:26:00Z

NIFI-4942 Added Scrypt hash format checker.
Added unit tests.

commit 411b54f15871227d8128446cc131b85da00a56ef
Author: Andy LoPresto 
Date:   2018-04-05T00:26:22Z

NIFI-4942 Added NiFi hash format checker.
Added unit tests.

commit 1b2d6406b94c08b99b434bdc7b47d1cf1eb7319c
Author: Andy LoPresto 
Date:   2018-04-05T23:45:24Z

NIFI-4942 Added check for simultaneous use of -z/-y.
Added logic to check hashed password/key.
Added logic to retrieve secure hash from file to compare.
Added unit tests (125/125).

commit 706015ce3fa745a7de0485e68e7cc96efe529d62
Author: Andy LoPresto 
Date:   2018-04-10T02:48:44Z

NIFI-4942 Added new ExitCode.
Added logic to return current hash params in JSON for Ambari to consume.
Fixed typos in error messages.
Added unit tests (129/129).

commit 6308fd65b994bad637a205a5e661db4712cde811
Author: Andy LoPresto 
Date:   2018-04-10T22:28:49Z

NIFI-4942 Added Scrypt hash format verification for hash check.
Added unit tests.

commit 0b2d12f9a440d207f1d21c06566b839a18efd089
Author: Andy LoPresto 
Date:   2018-04-11T17:54:40Z

NIFI-4942 Fixed RAT checks.




---


[jira] [Commented] (NIFI-4809) Implement a SiteToSiteMetricsReportingTask

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434300#comment-16434300
 ] 

ASF GitHub Bot commented on NIFI-4809:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2575
  
Done, thanks @mattyb149 !


> Implement a SiteToSiteMetricsReportingTask
> --
>
> Key: NIFI-4809
> URL: https://issues.apache.org/jira/browse/NIFI-4809
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> At the moment there is an AmbariReportingTask to send the NiFi-related 
> metrics of the host to the Ambari Metrics Service. In a multi-cluster 
> configuration, or when working with MiNiFi (Java) agents, it might not be 
> possible for all the NiFi instances (NiFi and/or MiNiFi) to access the AMS 
> REST API.
> To solve this problem, a solution would be to implement a 
> SiteToSiteMetricsReportingTask to send the data via S2S to the "main" NiFi 
> instance/cluster that will be able to publish the metrics into AMS (using 
> InvokeHTTP). This way, it is possible to have the metrics of all the 
> instances exposed in one AMS instance.
> I propose to send the data formatted as we are doing right now in the Ambari 
> reporting task. If needed, it can be easily converted into another schema 
> using the record processors once received via S2S.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2575: NIFI-4809 - Implement a SiteToSiteMetricsReportingTask

2018-04-11 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2575
  
Done, thanks @mattyb149 !


---


[jira] [Resolved] (MINIFICPP-449) Allow cURL to be built and statically linked

2018-04-11 Thread Andrew Christianson (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Christianson resolved MINIFICPP-449.
---
Resolution: Fixed

> Allow cURL to be built and statically linked
> 
>
> Key: MINIFICPP-449
> URL: https://issues.apache.org/jira/browse/MINIFICPP-449
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> Allowing cURL to be built as an external project and linked statically will 
> help support certain embedded deployments and certain portability situations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (NIFI-5066) Support enable and disable component action when multiple components selected or when selecting a process group.

2018-04-11 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman reassigned NIFI-5066:
-

Assignee: Matt Gilman

> Support enable and disable component action when multiple components selected 
> or when selecting a process group.
> 
>
> Key: NIFI-5066
> URL: https://issues.apache.org/jira/browse/NIFI-5066
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.5.0
>Reporter: Matthew Clarke
>Assignee: Matt Gilman
>Priority: Major
>
> Currently NiFi validates all processors that are in a STOPPED state.  To 
> reduce impact when flows contain very large numbers of STOPPED processors, 
> users should be disabling these STOPPED processors.  NiFi's "Enable" and 
> "Disable" buttons do not support being used when more than one processor is 
> selected.  When needing to enable or disable large numbers of processors, 
> this is less than ideal. The Enable and Disable buttons should work similarly 
> to how the Start and Stop buttons work.
> Have multiple components selected or a process group selected.  Select the 
> "Enable" or "Disable" button.  Any eligible component (those that are not 
> running) should be either enabled or disabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-5074) Add section to User Guide explaining how variables are captured when versioning a PG

2018-04-11 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-5074.
--
   Resolution: Fixed
Fix Version/s: 1.7.0

> Add section to User Guide explaining how variables are captured when 
> versioning a PG
> 
>
> Key: NIFI-5074
> URL: https://issues.apache.org/jira/browse/NIFI-5074
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Andrew Lim
>Assignee: Andrew Lim
>Priority: Minor
> Fix For: 1.7.0
>
>
> Issue originally filed as NIFIREG-133.  Filing in NIFI project since addition 
> will be made to the NiFi User Guide.
> Original Jira description:
> Context:
> I have a process group PG that contains an embedded process group PG1. I have 
> a variable defined at PG level that is referenced in a PG1's processor. When 
> versioning both PG and PG1 in NiFi Registry, a copy of the variable defined 
> at PG level will be created at PG1 level.
> Consequently, when importing PG in another environment, the variable needs to 
> be modified at PG1 level because the one created at PG1 level overwrites the 
> one initially defined at PG level.
> > Would be nice to add this behavior in the documentation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5074) Add section to User Guide explaining how variables are captured when versioning a PG

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434214#comment-16434214
 ] 

ASF GitHub Bot commented on NIFI-5074:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2627


> Add section to User Guide explaining how variables are captured when 
> versioning a PG
> 
>
> Key: NIFI-5074
> URL: https://issues.apache.org/jira/browse/NIFI-5074
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Andrew Lim
>Assignee: Andrew Lim
>Priority: Minor
> Fix For: 1.7.0
>
>
> Issue originally filed as NIFIREG-133.  Filing in NIFI project since addition 
> will be made to the NiFi User Guide.
> Original Jira description:
> Context:
> I have a process group PG that contains an embedded process group PG1. I have 
> a variable defined at PG level that is referenced in a PG1's processor. When 
> versioning both PG and PG1 in NiFi Registry, a copy of the variable defined 
> at PG level will be created at PG1 level.
> Consequently, when importing PG in another environment, the variable needs to 
> be modified at PG1 level because the one created at PG1 level overwrites the 
> one initially defined at PG level.
> > Would be nice to add this behavior in the documentation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5074) Add section to User Guide explaining how variables are captured when versioning a PG

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434210#comment-16434210
 ] 

ASF GitHub Bot commented on NIFI-5074:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2627
  
+1, merging to master, thanks @andrewmlim 


> Add section to User Guide explaining how variables are captured when 
> versioning a PG
> 
>
> Key: NIFI-5074
> URL: https://issues.apache.org/jira/browse/NIFI-5074
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Andrew Lim
>Assignee: Andrew Lim
>Priority: Minor
>
> Issue originally filed as NIFIREG-133.  Filing in NIFI project since addition 
> will be made to the NiFi User Guide.
> Original Jira description:
> Context:
> I have a process group PG that contains an embedded process group PG1. I have 
> a variable defined at PG level that is referenced in a PG1's processor. When 
> versioning both PG and PG1 in NiFi Registry, a copy of the variable defined 
> at PG level will be created at PG1 level.
> Consequently, when importing PG in another environment, the variable needs to 
> be modified at PG1 level because the one created at PG1 level overwrites the 
> one initially defined at PG level.
> > Would be nice to add this behavior in the documentation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2627: NIFI-5074 Added section Variables in Versioned Flow...

2018-04-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2627


---


[jira] [Commented] (NIFI-5074) Add section to User Guide explaining how variables are captured when versioning a PG

2018-04-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434211#comment-16434211
 ] 

ASF subversion and git services commented on NIFI-5074:
---

Commit ce0855e9883ef353108a3989350d1e8097ac5191 in nifi's branch 
refs/heads/master from [~andrewmlim]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=ce0855e ]

NIFI-5074 Added section Variables in Versioned Flows to User Guide

Signed-off-by: Pierre Villard 

This closes #2627.


> Add section to User Guide explaining how variables are captured when 
> versioning a PG
> 
>
> Key: NIFI-5074
> URL: https://issues.apache.org/jira/browse/NIFI-5074
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Andrew Lim
>Assignee: Andrew Lim
>Priority: Minor
>
> Issue originally filed as NIFIREG-133.  Filing in NIFI project since addition 
> will be made to the NiFi User Guide.
> Original Jira description:
> Context:
> I have a process group PG that contains an embedded process group PG1. I have 
> a variable defined at PG level that is referenced in a PG1's processor. When 
> versioning both PG and PG1 in NiFi Registry, a copy of the variable defined 
> at PG level will be created at PG1 level.
> Consequently, when importing PG in another environment, the variable needs to 
> be modified at PG1 level because the one created at PG1 level overwrites the 
> one initially defined at PG level.
> > Would be nice to add this behavior in the documentation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2627: NIFI-5074 Added section Variables in Versioned Flows to Us...

2018-04-11 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2627
  
+1, merging to master, thanks @andrewmlim 


---


[jira] [Commented] (NIFI-5052) Create a "delete by query" ElasticSearch processor

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434195#comment-16434195
 ] 

ASF GitHub Bot commented on NIFI-5052:
--

Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r180821403
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/java/org/apache/nifi/processors/elasticsearch/ElasticSearchRestProcessor.java
 ---
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.elasticsearch.ElasticSearchClientService;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+
+public interface ElasticSearchRestProcessor {
+PropertyDescriptor INDEX = new PropertyDescriptor.Builder()
+.name("el-rest-fetch-index")
+.displayName("Index")
+.description("The name of the index to read from")
--- End diff --

I know this is not introduced by you, rather moved over from 
`JsonQueryElasticSearch` but for `JsonQueryElasticSearch`, the description 
matched the intention, because it was just reading but since now the delete 
processor is using this property, changing it to something like "The name of 
the ElasticSearch Index to use" makes sense


> Create a "delete by query" ElasticSearch processor
> --
>
> Key: NIFI-5052
> URL: https://issues.apache.org/jira/browse/NIFI-5052
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5052) Create a "delete by query" ElasticSearch processor

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434197#comment-16434197
 ] 

ASF GitHub Bot commented on NIFI-5052:
--

Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r180815675
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-client-service/pom.xml
 ---
@@ -69,11 +69,11 @@
 2.6
 
 
-<dependency>
-    <groupId>org.elasticsearch.client</groupId>
-    <artifactId>rest</artifactId>
-    <version>5.0.1</version>
-</dependency>
+<!--<dependency>-->
+    <!--<groupId>org.elasticsearch.client</groupId>-->
+    <!--<artifactId>rest</artifactId>-->
+    <!--<version>5.0.1</version>-->
+<!--</dependency>-->
--- End diff --

Comments to be deleted?


> Create a "delete by query" ElasticSearch processor
> --
>
> Key: NIFI-5052
> URL: https://issues.apache.org/jira/browse/NIFI-5052
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5052) Create a "delete by query" ElasticSearch processor

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434198#comment-16434198
 ] 

ASF GitHub Bot commented on NIFI-5052:
--

Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r180815792
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-client-service/pom.xml
 ---
@@ -137,7 +149,7 @@
 9400
 5.6.2
 90
-${project.basedir}/src/test/resources/setup.script
+

--- End diff --

Same here. Can they be removed?


> Create a "delete by query" ElasticSearch processor
> --
>
> Key: NIFI-5052
> URL: https://issues.apache.org/jira/browse/NIFI-5052
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5052) Create a "delete by query" ElasticSearch processor

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434194#comment-16434194
 ] 

ASF GitHub Bot commented on NIFI-5052:
--

Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r180815175
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/java/org/apache/nifi/processors/elasticsearch/ElasticSearchRestProcessor.java
 ---
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.elasticsearch.ElasticSearchClientService;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+
+public interface ElasticSearchRestProcessor {
+PropertyDescriptor INDEX = new PropertyDescriptor.Builder()
+.name("el-rest-fetch-index")
+.displayName("Index")
+.description("The name of the index to read from")
+.required(true)
+.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+PropertyDescriptor TYPE = new PropertyDescriptor.Builder()
+.name("el-rest-type")
+.displayName("Type")
+.description("The type of this document (used by Elasticsearch for indexing and searching)")
+.required(false)
+.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+PropertyDescriptor QUERY = new PropertyDescriptor.Builder()
+.name("el-rest-query")
+.displayName("Query")
+.description("A query in JSON syntax, not Lucene syntax. Ex: " +
--- End diff --

`description` can be more descriptive. If we don't give the query in 
`QUERY` then it reads from the flowfile content. I think that can be added to 
the description.


> Create a "delete by query" ElasticSearch processor
> --
>
> Key: NIFI-5052
> URL: https://issues.apache.org/jira/browse/NIFI-5052
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5052) Create a "delete by query" ElasticSearch processor

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434196#comment-16434196
 ] 

ASF GitHub Bot commented on NIFI-5052:
--

Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r180822675
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/java/org/apache/nifi/processors/elasticsearch/ElasticSearchRestProcessor.java
 ---
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.elasticsearch.ElasticSearchClientService;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+
+public interface ElasticSearchRestProcessor {
+PropertyDescriptor INDEX = new PropertyDescriptor.Builder()
+.name("el-rest-fetch-index")
+.displayName("Index")
+.description("The name of the index to read from")
+.required(true)
+.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+PropertyDescriptor TYPE = new PropertyDescriptor.Builder()
+.name("el-rest-type")
+.displayName("Type")
+.description("The type of this document (used by Elasticsearch for indexing and searching)")
+.required(false)
+.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+PropertyDescriptor QUERY = new PropertyDescriptor.Builder()
+.name("el-rest-query")
+.displayName("Query")
+.description("A query in JSON syntax, not Lucene syntax. Ex: " +
+"{\n" +
--- End diff --

Same here - not something you introduced but moved and mostly a cosmetic 
change. Feel free to ignore this. The `\t` and `\n` won't be rendered in the UI 
so simply writing `{ \"query\": { \"match\": { \"name\": \"John Smith\" } } }"` 
is enough.
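To make the suggestion concrete, here is a small plain-Java sketch (class name hypothetical, not part of the PR) showing that the multi-line escaped form and the compact one-line form carry the same JSON:

```java
public class QueryDescriptionDemo {
    public static void main(String[] args) {
        // Multi-line form as concatenated in the diff above; the embedded
        // tabs and newlines are not rendered by the NiFi UI.
        String multiLine = "{\n" +
                "\t\"query\": {\n" +
                "\t\t\"match\": {\n" +
                "\t\t\t\"name\": \"John Smith\"\n" +
                "\t\t}\n" +
                "\t}\n" +
                "}";
        // Compact one-line form suggested in the review comment.
        String compact = "{ \"query\": { \"match\": { \"name\": \"John Smith\" } } }";
        System.out.println(compact);
        // Collapsing whitespace shows both carry the same JSON structure.
        boolean same = multiLine.replaceAll("\\s+", " ")
                .equals(compact.replaceAll("\\s+", " "));
        System.out.println(same ? "same structure" : "different");
    }
}
```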


> Create a "delete by query" ElasticSearch processor
> --
>
> Key: NIFI-5052
> URL: https://issues.apache.org/jira/browse/NIFI-5052
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2616: NIFI-5052 Added DeleteByQuery ElasticSearch process...

2018-04-11 Thread zenfenan
Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r180815675
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-client-service/pom.xml
 ---
@@ -69,11 +69,11 @@
 2.6
 
 
-        <dependency>
-            <groupId>org.elasticsearch.client</groupId>
-            <artifactId>rest</artifactId>
-            <version>5.0.1</version>
-        </dependency>
+
+
+
+
+
--- End diff --

Comments to be deleted?


---


[GitHub] nifi pull request #2616: NIFI-5052 Added DeleteByQuery ElasticSearch process...

2018-04-11 Thread zenfenan
Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r180822675
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/java/org/apache/nifi/processors/elasticsearch/ElasticSearchRestProcessor.java
 ---
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.elasticsearch.ElasticSearchClientService;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+
+public interface ElasticSearchRestProcessor {
+PropertyDescriptor INDEX = new PropertyDescriptor.Builder()
+.name("el-rest-fetch-index")
+.displayName("Index")
+.description("The name of the index to read from")
+.required(true)
+.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+PropertyDescriptor TYPE = new PropertyDescriptor.Builder()
+.name("el-rest-type")
+.displayName("Type")
+.description("The type of this document (used by Elasticsearch for indexing and searching)")
+.required(false)
+.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+PropertyDescriptor QUERY = new PropertyDescriptor.Builder()
+.name("el-rest-query")
+.displayName("Query")
+.description("A query in JSON syntax, not Lucene syntax. Ex: " +
+"{\n" +
--- End diff --

Same here - not something you introduced but moved and mostly a cosmetic 
change. Feel free to ignore this. The `\t` and `\n` won't be rendered in the UI 
so simply writing `{ \"query\": { \"match\": { \"name\": \"John Smith\" } } }"` 
is enough.


---


[GitHub] nifi pull request #2616: NIFI-5052 Added DeleteByQuery ElasticSearch process...

2018-04-11 Thread zenfenan
Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r180815175
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/java/org/apache/nifi/processors/elasticsearch/ElasticSearchRestProcessor.java
 ---
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.elasticsearch.ElasticSearchClientService;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+
+public interface ElasticSearchRestProcessor {
+PropertyDescriptor INDEX = new PropertyDescriptor.Builder()
+.name("el-rest-fetch-index")
+.displayName("Index")
+.description("The name of the index to read from")
+.required(true)
+.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+PropertyDescriptor TYPE = new PropertyDescriptor.Builder()
+.name("el-rest-type")
+.displayName("Type")
+.description("The type of this document (used by Elasticsearch for indexing and searching)")
+.required(false)
+.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+PropertyDescriptor QUERY = new PropertyDescriptor.Builder()
+.name("el-rest-query")
+.displayName("Query")
+.description("A query in JSON syntax, not Lucene syntax. Ex: " +
--- End diff --

`description` can be more descriptive. If we don't give the query in 
`QUERY`, then it is read from the flowfile content. I think that can be added to 
the description.
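The fallback described here could be sketched as follows (plain Java; the helper name is hypothetical, and the real processor would read the flowfile content through the ProcessSession):

```java
import java.nio.charset.StandardCharsets;

public class QueryFallbackDemo {
    // Sketch of the documented behavior: use the Query property when set,
    // otherwise fall back to the flowfile content.
    static String resolveQuery(String queryProperty, byte[] flowFileContent) {
        if (queryProperty != null && !queryProperty.isEmpty()) {
            return queryProperty;
        }
        return new String(flowFileContent, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] content = "{ \"query\": { \"match_all\": {} } }"
                .getBytes(StandardCharsets.UTF_8);
        // No property set: the flowfile content is used.
        System.out.println(resolveQuery(null, content));
        // Property set: it takes precedence over the content.
        System.out.println(resolveQuery("{ \"query\": { \"term\": {} } }", content));
    }
}
```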


---


[GitHub] nifi pull request #2616: NIFI-5052 Added DeleteByQuery ElasticSearch process...

2018-04-11 Thread zenfenan
Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r180821403
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/java/org/apache/nifi/processors/elasticsearch/ElasticSearchRestProcessor.java
 ---
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.elasticsearch.ElasticSearchClientService;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+
+public interface ElasticSearchRestProcessor {
+PropertyDescriptor INDEX = new PropertyDescriptor.Builder()
+.name("el-rest-fetch-index")
+.displayName("Index")
+.description("The name of the index to read from")
--- End diff --

I know this is not introduced by you, rather moved over from 
`JsonQueryElasticSearch`. For `JsonQueryElasticSearch` the description matched 
the intention, because it was only reading, but since the delete processor now 
uses this property as well, changing it to something like "The name of the 
ElasticSearch Index to use" makes sense.


---


[GitHub] nifi pull request #2616: NIFI-5052 Added DeleteByQuery ElasticSearch process...

2018-04-11 Thread zenfenan
Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r180815792
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-client-service/pom.xml
 ---
@@ -137,7 +149,7 @@
 9400
 5.6.2
 90
-
${project.basedir}/src/test/resources/setup.script
+

--- End diff --

Same here. Can they be removed?


---


[jira] [Commented] (MINIFICPP-449) Allow cURL to be built and statically linked

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434190#comment-16434190
 ] 

ASF GitHub Bot commented on MINIFICPP-449:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/296


> Allow cURL to be built and statically linked
> 
>
> Key: MINIFICPP-449
> URL: https://issues.apache.org/jira/browse/MINIFICPP-449
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> Allowing cURL to be built as an external project and linked statically will 
> help support certain embedded deployments and certain portability situations.





[GitHub] nifi-minifi-cpp pull request #296: MINIFICPP-449 Add cURL external project b...

2018-04-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi-minifi-cpp/pull/296


---


[GitHub] nifi pull request #2627: NIFI-5074 Added section Variables in Versioned Flow...

2018-04-11 Thread andrewmlim
GitHub user andrewmlim opened a pull request:

https://github.com/apache/nifi/pull/2627

NIFI-5074 Added section Variables in Versioned Flows to User Guide



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/andrewmlim/nifi NIFI-5074

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2627.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2627


commit a75876f3b353caa56bd83f4198ac1be0dd32
Author: Andrew Lim 
Date:   2018-04-11T16:24:16Z

NIFI-5074 Added section Variables in Versioned Flows to User Guide




---


[jira] [Commented] (NIFI-5074) Add section to User Guide explaining how variables are captured when versioning a PG

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434137#comment-16434137
 ] 

ASF GitHub Bot commented on NIFI-5074:
--

GitHub user andrewmlim opened a pull request:

https://github.com/apache/nifi/pull/2627

NIFI-5074 Added section Variables in Versioned Flows to User Guide



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/andrewmlim/nifi NIFI-5074

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2627.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2627


commit a75876f3b353caa56bd83f4198ac1be0dd32
Author: Andrew Lim 
Date:   2018-04-11T16:24:16Z

NIFI-5074 Added section Variables in Versioned Flows to User Guide




> Add section to User Guide explaining how variables are captured when 
> versioning a PG
> 
>
> Key: NIFI-5074
> URL: https://issues.apache.org/jira/browse/NIFI-5074
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Andrew Lim
>Assignee: Andrew Lim
>Priority: Minor
>
> Issue originally filed as NIFIREG-133.  Filing in NIFI project since addition 
> will be made to the NiFi User Guide.
> Original Jira description:
> Context:
> I have a process group PG that contains an embedded process group PG1. I have 
> a variable defined at PG level that is referenced in a PG1's processor. When 
> versioning both PG and PG1 in NiFi Registry, a copy of the variable defined 
> at PG level will be created at PG1 level.
> Consequently, when importing PG in another environment, the variable needs to 
> be modified at PG1 level because the one created at PG1 level overwrites the 
> one initially defined at PG level.
> > Would be nice to add this behavior in the documentation.





[jira] [Closed] (NIFIREG-133) [Documentation] Explain how variables are captured when versioning a PG

2018-04-11 Thread Andrew Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lim closed NIFIREG-133.
--
Resolution: Duplicate

Adding this documentation to the NiFi User Guide via NIFI-5074.

> [Documentation] Explain how variables are captured when versioning a PG
> ---
>
> Key: NIFIREG-133
> URL: https://issues.apache.org/jira/browse/NIFIREG-133
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Pierre Villard
>Assignee: Andrew Lim
>Priority: Minor
>
> Context:
> I have a process group PG that contains an embedded process group PG1. I have 
> a variable defined at PG level that is referenced in a PG1's processor. When 
> versioning both PG and PG1 in NiFi Registry, a copy of the variable defined 
> at PG level will be created at PG1 level.
> Consequently, when importing PG in another environment, the variable needs to 
> be modified at PG1 level because the one created at PG1 level overwrites the 
> one initially defined at PG level.
> > Would be nice to add this behavior in the documentation.





[jira] [Assigned] (NIFI-5074) Add section to User Guide explaining how variables are captured when versioning a PG

2018-04-11 Thread Andrew Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Lim reassigned NIFI-5074:


Assignee: Andrew Lim

> Add section to User Guide explaining how variables are captured when 
> versioning a PG
> 
>
> Key: NIFI-5074
> URL: https://issues.apache.org/jira/browse/NIFI-5074
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Andrew Lim
>Assignee: Andrew Lim
>Priority: Minor
>
> Issue originally filed as NIFIREG-133.  Filing in NIFI project since addition 
> will be made to the NiFi User Guide.
> Original Jira description:
> Context:
> I have a process group PG that contains an embedded process group PG1. I have 
> a variable defined at PG level that is referenced in a PG1's processor. When 
> versioning both PG and PG1 in NiFi Registry, a copy of the variable defined 
> at PG level will be created at PG1 level.
> Consequently, when importing PG in another environment, the variable needs to 
> be modified at PG1 level because the one created at PG1 level overwrites the 
> one initially defined at PG level.
> > Would be nice to add this behavior in the documentation.





[jira] [Created] (NIFI-5074) Add section to User Guide explaining how variables are captured when versioning a PG

2018-04-11 Thread Andrew Lim (JIRA)
Andrew Lim created NIFI-5074:


 Summary: Add section to User Guide explaining how variables are 
captured when versioning a PG
 Key: NIFI-5074
 URL: https://issues.apache.org/jira/browse/NIFI-5074
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Documentation & Website
Reporter: Andrew Lim


Issue originally filed as NIFIREG-133.  Filing in NIFI project since addition 
will be made to the NiFi User Guide.

Original Jira description:

Context:

I have a process group PG that contains an embedded process group PG1. I have a 
variable defined at PG level that is referenced in a PG1's processor. When 
versioning both PG and PG1 in NiFi Registry, a copy of the variable defined at 
PG level will be created at PG1 level.

Consequently, when importing PG in another environment, the variable needs to 
be modified at PG1 level because the one created at PG1 level overwrites the 
one initially defined at PG level.

> Would be nice to add this behavior in the documentation.





[GitHub] nifi-minifi-cpp pull request #297: MINIFICPP-446 Add escape/unescape HTML3 E...

2018-04-11 Thread achristianson
GitHub user achristianson opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/297

MINIFICPP-446 Add escape/unescape HTML3 EL functions

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [x] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [x] If applicable, have you updated the LICENSE file?
- [x] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-446

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/297.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #297


commit 096c0682cc4a2e44c22a0c18b682705ecefc569c
Author: Andrew I. Christianson 
Date:   2018-04-11T15:46:13Z

MINIFICPP-446 Add escape/unescape HTML3 EL functions




---


[jira] [Commented] (MINIFICPP-446) Implement escape/unescape HTML3 EL functions

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434091#comment-16434091
 ] 

ASF GitHub Bot commented on MINIFICPP-446:
--

GitHub user achristianson opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/297

MINIFICPP-446 Add escape/unescape HTML3 EL functions

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [x] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [x] If applicable, have you updated the LICENSE file?
- [x] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-446

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/297.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #297


commit 096c0682cc4a2e44c22a0c18b682705ecefc569c
Author: Andrew I. Christianson 
Date:   2018-04-11T15:46:13Z

MINIFICPP-446 Add escape/unescape HTML3 EL functions




> Implement escape/unescape HTML3 EL functions
> 
>
> Key: MINIFICPP-446
> URL: https://issues.apache.org/jira/browse/MINIFICPP-446
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> [Encode/Decode 
> Functions|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#encode]
>  * 
> [escapeHtml3|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapehtml3]
>  * 
> [unescapeHtml3|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapehtml3]
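A minimal sketch of what the two functions do (plain Java; only the four basic entities are shown here, while the real escapeHtml3/unescapeHtml3 also cover the ISO-8859-1 entity set):

```java
public class Html3EscapeDemo {
    // Escape the basic HTML entities; '&' must be handled first so that
    // already-produced entities are not double-escaped.
    static String escapeHtml3(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;")
                .replace(">", "&gt;").replace("\"", "&quot;");
    }

    // Reverse mapping; '&amp;' must be handled last for the same reason.
    static String unescapeHtml3(String s) {
        return s.replace("&quot;", "\"").replace("&gt;", ">")
                .replace("&lt;", "<").replace("&amp;", "&");
    }

    public static void main(String[] args) {
        String escaped = escapeHtml3("a < b & \"c\"");
        System.out.println(escaped);
        boolean roundTrip = unescapeHtml3(escaped).equals("a < b & \"c\"");
        System.out.println(roundTrip ? "round-trip ok" : "mismatch");
    }
}
```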





[jira] [Commented] (NIFI-5049) Fix handling of Phonenix datetime columns in QueryDatabaseTable and GenerateTableFetch

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434038#comment-16434038
 ] 

ASF GitHub Bot commented on NIFI-5049:
--

Github user gardellajuanpablo commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2625#discussion_r180787552
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/db/impl/PhoenixDatabaseAdapter.java
 ---
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard.db.impl;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.processors.standard.db.DatabaseAdapter;
+
+/**
+ * An Apache Phoenix database adapter that generates ANSI SQL.
+ */
+public final class PhoenixDatabaseAdapter implements DatabaseAdapter {
+public static final String NAME = "Phoenix";
+
+@Override
+public String getName() {
+return NAME;
+}
+
+@Override
+public String getDescription() {
+return "Generates Phoenix compliant SQL";
+}
+
+@Override
+public String getSelectStatement(String tableName, String columnNames, String whereClause, String orderByClause,
--- End diff --

Yes, it will prevent duplicate code, but any change introduced in 
GenericDatabaseAdapter may break the Phoenix adapter. Also, PhoenixDatabaseAdapter 
is not (in my personal opinion) a generic adapter. Keeping them separate, there is 
less risk in introducing changes to the Generic/Phoenix adapters. 
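The trade-off being discussed (delegating to the generic adapter versus keeping a standalone implementation) can be sketched like this (plain Java; names simplified and hypothetical):

```java
interface Adapter {
    String getSelectStatement(String tableName, String columnNames);
}

// Delegation variant: shares code, but any change to the generic SQL
// generation silently changes Phoenix output too.
class GenericAdapter implements Adapter {
    public String getSelectStatement(String tableName, String columnNames) {
        String cols = (columnNames == null || columnNames.isEmpty()) ? "*" : columnNames;
        return "SELECT " + cols + " FROM " + tableName;
    }
}

// Standalone variant (the approach defended above): duplicated logic, but
// isolated from future changes to the generic adapter.
class PhoenixAdapter implements Adapter {
    public String getSelectStatement(String tableName, String columnNames) {
        String cols = (columnNames == null || columnNames.isEmpty()) ? "*" : columnNames;
        return "SELECT " + cols + " FROM " + tableName;
    }
}

public class AdapterTradeOffDemo {
    public static void main(String[] args) {
        System.out.println(new PhoenixAdapter().getSelectStatement("USERS", null));
    }
}
```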


> Fix handling of Phonenix datetime columns in QueryDatabaseTable and 
> GenerateTableFetch
> --
>
> Key: NIFI-5049
> URL: https://issues.apache.org/jira/browse/NIFI-5049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Gardella Juan Pablo
>Assignee: Matt Burgess
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> QueryDatabaseAdapter does not work against Phoenix DB if it should convert 
> TIMESTAMP. The error is described below:
> [https://stackoverflow.com/questions/45989678/convert-varchar-to-timestamp-in-hbase]
> Basically, it's required to use TO_TIMESTAMP(MAX_COLUMN) to make it work. 
> See 
> [https://lists.apache.org/thread.html/%3cca+kifscje8ay+uxt_d_vst4qgzf4jxwovboynjgztt4dsbs...@mail.gmail.com%3E]
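A sketch of the workaround described above (plain Java; the helper and column names are illustrative, not the PR's code): the stored maximum value must be wrapped in Phoenix's TO_TIMESTAMP rather than compared as a plain string literal.

```java
public class PhoenixTimestampPredicateDemo {
    // Build a max-value predicate; for Phoenix the literal is wrapped in
    // TO_TIMESTAMP, otherwise a plain quoted literal is used.
    static String maxValuePredicate(String databaseType, String column, String storedMax) {
        if ("Phoenix".equals(databaseType)) {
            return column + " > TO_TIMESTAMP('" + storedMax + "')";
        }
        return column + " > '" + storedMax + "'";
    }

    public static void main(String[] args) {
        System.out.println(maxValuePredicate("Phoenix", "LAST_UPDATED", "2018-04-11 16:24:16"));
        System.out.println(maxValuePredicate("Generic", "LAST_UPDATED", "2018-04-11 16:24:16"));
    }
}
```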





[GitHub] nifi pull request #2625: NIFI-5049 Fix handling of Phonenix datetime columns

2018-04-11 Thread gardellajuanpablo
Github user gardellajuanpablo commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2625#discussion_r180787552
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/db/impl/PhoenixDatabaseAdapter.java
 ---
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard.db.impl;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.processors.standard.db.DatabaseAdapter;
+
+/**
+ * An Apache Phoenix database adapter that generates ANSI SQL.
+ */
+public final class PhoenixDatabaseAdapter implements DatabaseAdapter {
+public static final String NAME = "Phoenix";
+
+@Override
+public String getName() {
+return NAME;
+}
+
+@Override
+public String getDescription() {
+return "Generates Phoenix compliant SQL";
+}
+
+@Override
+public String getSelectStatement(String tableName, String columnNames, String whereClause, String orderByClause,
--- End diff --

Yes, it will prevent duplicate code, but any change introduced in 
GenericDatabaseAdapter may break the Phoenix adapter. Also, PhoenixDatabaseAdapter 
is not (in my personal opinion) a generic adapter. Keeping them separate, there is 
less risk in introducing changes to the Generic/Phoenix adapters. 


---


[jira] [Commented] (NIFI-5049) Fix handling of Phonenix datetime columns in QueryDatabaseTable and GenerateTableFetch

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434037#comment-16434037
 ] 

ASF GitHub Bot commented on NIFI-5049:
--

Github user gardellajuanpablo commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2625#discussion_r180786540
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java
 ---
@@ -436,13 +437,26 @@ protected static String getLiteralByType(int type, String value, String database
 case NVARCHAR:
 case VARCHAR:
 case ROWID:
-case DATE:
-case TIME:
 return "'" + value + "'";
+case TIME:
+if (PhoenixDatabaseAdapter.NAME.equals(databaseType)) {
+return "time '" + value + "'";
+}
+case DATE:
 case TIMESTAMP:
-if (!StringUtils.isEmpty(databaseType) && databaseType.contains("Oracle")) {
-// For backwards compatibility, the type might be TIMESTAMP but the state value is in DATE format. This should be a one-time occurrence as the next maximum value
-// should be stored as a full timestamp. Even so, check to see if the value is missing time-of-day information, and use the "date" coercion rather than the
+// TODO delegate to database adapter the conversion instead of using if in this
--- End diff --

Yes, but it will break the method signature (it is not private). I prefer 
to keep it compatible in this bug fix. Maybe it's better to file an 
IMPROVEMENT to change how the DB-specific behavior is handled. 
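The TIME branch in the diff above boils down to a database-dependent literal format; a minimal standalone sketch (plain Java, method name hypothetical):

```java
public class TimeLiteralDemo {
    // Phoenix requires a typed literal, e.g. time '12:30:45'; most other
    // databases accept a plain quoted string for TIME columns.
    static String timeLiteral(String databaseType, String value) {
        if ("Phoenix".equals(databaseType)) {
            return "time '" + value + "'";
        }
        return "'" + value + "'";
    }

    public static void main(String[] args) {
        System.out.println(timeLiteral("Phoenix", "12:30:45"));
        System.out.println(timeLiteral("Generic", "12:30:45"));
    }
}
```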


> Fix handling of Phonenix datetime columns in QueryDatabaseTable and 
> GenerateTableFetch
> --
>
> Key: NIFI-5049
> URL: https://issues.apache.org/jira/browse/NIFI-5049
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
>Reporter: Gardella Juan Pablo
>Assignee: Matt Burgess
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> QueryDatabaseAdapter does not work against Phoenix DB if it should convert 
> TIMESTAMP. The error is described below:
> [https://stackoverflow.com/questions/45989678/convert-varchar-to-timestamp-in-hbase]
> Basically, it's required to use TO_TIMESTAMP(MAX_COLUMN) to make it work. 
> See 
> [https://lists.apache.org/thread.html/%3cca+kifscje8ay+uxt_d_vst4qgzf4jxwovboynjgztt4dsbs...@mail.gmail.com%3E]





[GitHub] nifi pull request #2625: NIFI-5049 Fix handling of Phonenix datetime columns

2018-04-11 Thread gardellajuanpablo
Github user gardellajuanpablo commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2625#discussion_r180786540
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractDatabaseFetchProcessor.java
 ---
@@ -436,13 +437,26 @@ protected static String getLiteralByType(int type, String value, String database
 case NVARCHAR:
 case VARCHAR:
 case ROWID:
-case DATE:
-case TIME:
 return "'" + value + "'";
+case TIME:
+if (PhoenixDatabaseAdapter.NAME.equals(databaseType)) {
+return "time '" + value + "'";
+}
+case DATE:
 case TIMESTAMP:
-if (!StringUtils.isEmpty(databaseType) && 
databaseType.contains("Oracle")) {
-// For backwards compatibility, the type might be 
TIMESTAMP but the state value is in DATE format. This should be a one-time 
occurrence as the next maximum value
-// should be stored as a full timestamp. Even so, 
check to see if the value is missing time-of-day information, and use the 
"date" coercion rather than the
+// TODO delegate to database adapter the conversion 
instead of using if in this
--- End diff --

Yes, but it will break the method signature (it is not private). I prefer 
to keep it compatible in this bug fixing. Maybe it's better to file a 
IMPROVEMENT to improve how the DB specific behavior is handling. 


---


[jira] [Commented] (NIFI-3576) QueryElasticsearchHttp should have a "Not Found"/"Zero results" relationship

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434025#comment-16434025
 ] 

ASF GitHub Bot commented on NIFI-3576:
--

Github user ottobackwards commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2601#discussion_r180783317
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/QueryElasticsearchHttp.java
 ---
@@ -175,16 +197,21 @@
 .allowableValues(TARGET_FLOW_FILE_CONTENT, 
TARGET_FLOW_FILE_ATTRIBUTES)
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR).build();
 
-private static final Set<Relationship> relationships;
+public static final PropertyDescriptor ROUTING_QUERY_INFO_STRATEGY = 
new PropertyDescriptor.Builder()
+.name("routing-query-info-strategy")
+.displayName("Routing Strategy for Query Info")
+.description("Specifies when to generate and route Query Info 
after a successful query")
+.expressionLanguageSupported(false)
--- End diff --

Fixed thanks


> QueryElasticsearchHttp should have a "Not Found"/"Zero results" relationship
> 
>
> Key: NIFI-3576
> URL: https://issues.apache.org/jira/browse/NIFI-3576
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Percivall
>Assignee: Otto Fowler
>Priority: Minor
>
> In the event of a successful call, QueryElasticsearchHttp always drops the 
> incoming flowfile and then emits pages of results to the success 
> relationship. If the search returns no results then no pages of results are 
> emitted to the success relationship. 
> The processor should offer other options for handling when there are no 
> results returned.
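The requested behavior — route differently when a query matches nothing — boils down to a branch on the hit count. A minimal sketch with assumed names (`ZeroResultsRouting`, the relationship names), not the processor's actual relationships:

```java
// Illustrative routing decision: zero hits go to a dedicated relationship
// instead of silently producing no output flow files.
public class ZeroResultsRouting {
    public static String route(long totalHits) {
        return totalHits == 0 ? "no results" : "success";
    }
}
```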



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp issue #295: MINFICPP-403: Flow Meta tagging

2018-04-11 Thread minifirocks
Github user minifirocks commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/295
  
@phrocker please review
change shared_ptr to unique_ptr
use agent.version and flow.version for meta info



---



[jira] [Commented] (NIFI-3576) QueryElasticsearchHttp should have a "Not Found"/"Zero results" relationship

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433995#comment-16433995
 ] 

ASF GitHub Bot commented on NIFI-3576:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2601#discussion_r180773767
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/QueryElasticsearchHttp.java
 ---
@@ -175,16 +197,21 @@
 .allowableValues(TARGET_FLOW_FILE_CONTENT, 
TARGET_FLOW_FILE_ATTRIBUTES)
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR).build();
 
-private static final Set<Relationship> relationships;
+public static final PropertyDescriptor ROUTING_QUERY_INFO_STRATEGY = 
new PropertyDescriptor.Builder()
+.name("routing-query-info-strategy")
+.displayName("Routing Strategy for Query Info")
+.description("Specifies when to generate and route Query Info 
after a successful query")
+.expressionLanguageSupported(false)
--- End diff --

As of PR #2205 I believe this has to be changed from false to 
ExpressionLanguageScope.NONE
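For context, PR #2205 changed the builder argument from a boolean to an `ExpressionLanguageScope` enum. The sketch below is a self-contained model of that API shape (the real classes live under `org.apache.nifi`, so this is an assumption-laden stand-in, not the actual NiFi code); it shows the `NONE` scope the reviewer is asking for:

```java
// Standalone model of the PropertyDescriptor builder after PR #2205:
// expressionLanguageSupported takes a scope enum instead of a boolean.
public class ElScopeSketch {
    public enum ExpressionLanguageScope { NONE, VARIABLE_REGISTRY, FLOWFILE_ATTRIBUTES }

    public static class PropertyDescriptor {
        public final String name;
        public final ExpressionLanguageScope scope;
        PropertyDescriptor(String name, ExpressionLanguageScope scope) {
            this.name = name;
            this.scope = scope;
        }
        public static class Builder {
            private String name;
            private ExpressionLanguageScope scope = ExpressionLanguageScope.NONE;
            public Builder name(String n) { this.name = n; return this; }
            // Replaces the old expressionLanguageSupported(boolean) overload.
            public Builder expressionLanguageSupported(ExpressionLanguageScope s) {
                this.scope = s;
                return this;
            }
            public PropertyDescriptor build() { return new PropertyDescriptor(name, scope); }
        }
    }

    public static PropertyDescriptor routingStrategy() {
        return new PropertyDescriptor.Builder()
                .name("routing-query-info-strategy")
                .expressionLanguageSupported(ExpressionLanguageScope.NONE)
                .build();
    }
}
```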


> QueryElasticsearchHttp should have a "Not Found"/"Zero results" relationship
> 
>
> Key: NIFI-3576
> URL: https://issues.apache.org/jira/browse/NIFI-3576
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joseph Percivall
>Assignee: Otto Fowler
>Priority: Minor
>
> In the event of a successful call, QueryElasticsearchHttp always drops the 
> incoming flowfile and then emits pages of results to the success 
> relationship. If the search returns no results then no pages of results are 
> emitted to the success relationship. 
> The processor should offer other options for handling when there are no 
> results returned.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (NIFI-4982) Add a DistributedMapCacheLookupService

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433987#comment-16433987
 ] 

ASF GitHub Bot commented on NIFI-4982:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2558#discussion_r180771987
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-lookup-services-bundle/nifi-lookup-services/src/main/java/org/apache/nifi/lookup/DistributedMapCacheLookupService.java
 ---
@@ -0,0 +1,127 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.lookup;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import 
org.apache.nifi.distributed.cache.client.exception.DeserializationException;
+import 
org.apache.nifi.distributed.cache.client.exception.SerializationException;
+
+@Tags({"lookup", "enrich", "key", "value", "map", "cache", "distributed"})
+@CapabilityDescription("Allows to choose a distributed map cache client to 
retrieve the value associated to a key. "
++ "The coordinates that are passed to the lookup must contain the key 
'key'.")
+public class DistributedMapCacheLookupService extends 
AbstractControllerService implements StringLookupService {
+private static final String KEY = "key";
+private static final Set<String> REQUIRED_KEYS = 
Stream.of(KEY).collect(Collectors.toSet());
+
+private volatile DistributedMapCacheClient cache;
+private final Serializer<String> keySerializer = new 
StringSerializer();
+private final Deserializer<String> valueDeserializer = new 
StringDeserializer();
+
+public static final PropertyDescriptor PROP_DISTRIBUTED_CACHE_SERVICE 
= new PropertyDescriptor.Builder()
+.name("distributed-map-cache-service")
+.displayName("Distributed Cache Service")
+.description("The Controller Service that is used to get the 
cached values.")
+.required(true)
+.identifiesControllerService(DistributedMapCacheClient.class)
+.build();
+
+@Override
+protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+return new PropertyDescriptor.Builder()
+.name(propertyDescriptorName)
+.required(false)
+.dynamic(true)
+.addValidator(Validator.VALID)
+.expressionLanguageSupported(true)
+.build();
+}
+
+@OnEnabled
+public void cacheConfiguredValues(final ConfigurationContext context) {
+cache = 
context.getProperty(PROP_DISTRIBUTED_CACHE_SERVICE).asControllerService(DistributedMapCacheClient.class);
+}
+
+@Override
+protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+final List<PropertyDescriptor> descriptors = new ArrayList<>();
+descriptors.add(PROP_DISTRIBUTED_CACHE_SERVICE);
+return descriptors;
+}
+
+@Override
+public Optional<String> lookup(final Map<String, Object> coordinates) {
+if (coordinates == 

[jira] [Commented] (NIFI-4982) Add a DistributedMapCacheLookupService

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433984#comment-16433984
 ] 

ASF GitHub Bot commented on NIFI-4982:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2558#discussion_r180771727
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-lookup-services-bundle/nifi-lookup-services/src/main/java/org/apache/nifi/lookup/DistributedMapCacheLookupService.java
 ---
@@ -0,0 +1,127 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.lookup;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import 
org.apache.nifi.distributed.cache.client.exception.DeserializationException;
+import 
org.apache.nifi.distributed.cache.client.exception.SerializationException;
+
+@Tags({"lookup", "enrich", "key", "value", "map", "cache", "distributed"})
+@CapabilityDescription("Allows to choose a distributed map cache client to 
retrieve the value associated to a key. "
++ "The coordinates that are passed to the lookup must contain the key 
'key'.")
+public class DistributedMapCacheLookupService extends 
AbstractControllerService implements StringLookupService {
+private static final String KEY = "key";
--- End diff --

Does requiring a single key "key" mean we can only do one lookup at a time? 
Perhaps we should not require any keys and let the dynamic properties define 
the lookup keys?
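The lookup contract under discussion — the coordinates map must carry the single required key `"key"` — can be modeled in isolation. This is an illustrative standalone class, not the NiFi controller service:

```java
import java.util.Map;
import java.util.Optional;

// Minimal standalone model of a string lookup keyed on the single
// coordinate "key", backed by an in-memory map instead of a cache client.
public class MapLookupSketch {
    private final Map<String, String> cache;

    public MapLookupSketch(Map<String, String> cache) {
        this.cache = cache;
    }

    public Optional<String> lookup(Map<String, Object> coordinates) {
        // Missing coordinates, or the required "key" coordinate, yield empty.
        if (coordinates == null || !coordinates.containsKey("key")) {
            return Optional.empty();
        }
        String key = String.valueOf(coordinates.get("key"));
        return Optional.ofNullable(cache.get(key));
    }
}
```

The reviewer's alternative — letting dynamic properties define the lookup keys — would replace the hard-coded `"key"` with per-instance configuration.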


> Add a DistributedMapCacheLookupService
> --
>
> Key: NIFI-4982
> URL: https://issues.apache.org/jira/browse/NIFI-4982
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Attachments: distributedMapCacheLookup.xml
>
>
> Add a new lookup controller service that takes a Distributed Map Cache client 
> to look up values based on a key. This allows users to leverage the 
> internal Distributed Map Cache server and/or Redis to perform lookups 
> (with the LookupRecord processor, for instance).
> Attached is a template to test the PR.
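The service's diff references `StringSerializer`/`StringDeserializer` helpers that are not shown. A hedged standalone sketch of the UTF-8 string coding they would perform (the real NiFi `Serializer` writes to an `OutputStream` rather than returning a `byte[]`; this simplified shape is an assumption):

```java
import java.nio.charset.StandardCharsets;

// Illustrative UTF-8 string codec for a map-cache client; not the actual
// StringSerializer/StringDeserializer classes from the PR.
public class StringCodec {
    public static byte[] serialize(String value) {
        return value.getBytes(StandardCharsets.UTF_8);
    }

    public static String deserialize(byte[] input) {
        // A cache miss surfaces as a null payload.
        return input == null ? null : new String(input, StandardCharsets.UTF_8);
    }
}
```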



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4982) Add a DistributedMapCacheLookupService

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433986#comment-16433986
 ] 

ASF GitHub Bot commented on NIFI-4982:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2558#discussion_r180772442
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-lookup-services-bundle/nifi-lookup-services/src/test/java/org/apache/nifi/lookup/TestDistributedMapCacheLookupService.java
 ---
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.lookup;
+
+import static org.hamcrest.CoreMatchers.instanceOf;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertThat;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Optional;
+
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.junit.Test;
+
+public class TestDistributedMapCacheLookupService {
+
+final static Optional<String> EMPTY_STRING = Optional.empty();
+
+@Test
+public void testDistributedMapCacheLookupService() throws 
InitializationException {
+final TestRunner runner = 
TestRunners.newTestRunner(TestProcessor.class);
+final DistributedMapCacheLookupService service = new 
DistributedMapCacheLookupService();
+final DistributedMapCacheClient client = new 
DistributedMapCacheClientImpl();
+
+runner.addControllerService("client", client);
+runner.addControllerService("lookup-service", service);
+runner.setProperty(service, 
DistributedMapCacheLookupService.PROP_DISTRIBUTED_CACHE_SERVICE, "client");
+
+runner.enableControllerService(client);
+runner.enableControllerService(service);
+
+runner.assertValid(service);
+
+assertThat(service, instanceOf(LookupService.class));
+
+final Optional<String> get = 
service.lookup(Collections.singletonMap("key", "myKey"));
+assertEquals(Optional.of("myValue"), get);
+
+final Optional<String> absent = 
service.lookup(Collections.singletonMap("key", "absentKey"));
+assertEquals(EMPTY_STRING, absent);
+}
+
+static final class DistributedMapCacheClientImpl extends 
AbstractControllerService implements DistributedMapCacheClient {
+
+private Map<String, String> map = new HashMap<>();
+
+@OnEnabled
+public void onEnabled(final ConfigurationContext context) {
+map.put("myKey", "myValue");
+}
+
+@Override
+public void close() throws IOException {
+}
+
+@Override
+public void onPropertyModified(final PropertyDescriptor 
descriptor, final String oldValue, final String newValue) {
+}
+
+@Override
+protected java.util.List<PropertyDescriptor> 
getSupportedPropertyDescriptors() {
+return new ArrayList<>();
+}
+
+@Override
+public <K, V> boolean putIfAbsent(final K key, final V value, 
final Serializer<K> keySerializer, final Serializer<V> valueSerializer) throws 
IOException {
+throw new IOException("not implemented");
--- End diff --

Nitpick, but these should probably be UnsupportedOperationExceptions :P 
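The point of the nitpick: `IOException` is a checked exception that callers may treat as a recoverable cache failure, while an unchecked `UnsupportedOperationException` makes it clear the stub was never meant to be called. An illustrative stub in that style (class and method names are assumptions):

```java
// Illustrative test-double stub: unimplemented methods throw an unchecked
// UnsupportedOperationException rather than a checked IOException.
public class StubExample {
    public boolean putIfAbsent(Object key, Object value) {
        throw new UnsupportedOperationException("not implemented");
    }
}
```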

[jira] [Commented] (NIFI-4982) Add a DistributedMapCacheLookupService

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433985#comment-16433985
 ] 

ASF GitHub Bot commented on NIFI-4982:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2558#discussion_r180771298
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-lookup-services-bundle/nifi-lookup-services/src/main/java/org/apache/nifi/lookup/DistributedMapCacheLookupService.java
 ---
@@ -0,0 +1,127 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.lookup;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import 
org.apache.nifi.distributed.cache.client.exception.DeserializationException;
+import 
org.apache.nifi.distributed.cache.client.exception.SerializationException;
+
+@Tags({"lookup", "enrich", "key", "value", "map", "cache", "distributed"})
+@CapabilityDescription("Allows to choose a distributed map cache client to 
retrieve the value associated to a key. "
++ "The coordinates that are passed to the lookup must contain the key 
'key'.")
+public class DistributedMapCacheLookupService extends 
AbstractControllerService implements StringLookupService {
+private static final String KEY = "key";
+private static final Set<String> REQUIRED_KEYS = 
Stream.of(KEY).collect(Collectors.toSet());
+
+private volatile DistributedMapCacheClient cache;
+private final Serializer<String> keySerializer = new 
StringSerializer();
+private final Deserializer<String> valueDeserializer = new 
StringDeserializer();
+
+public static final PropertyDescriptor PROP_DISTRIBUTED_CACHE_SERVICE 
= new PropertyDescriptor.Builder()
+.name("distributed-map-cache-service")
+.displayName("Distributed Cache Service")
+.description("The Controller Service that is used to get the 
cached values.")
+.required(true)
+.identifiesControllerService(DistributedMapCacheClient.class)
+.build();
+
+@Override
+protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+return new PropertyDescriptor.Builder()
+.name(propertyDescriptorName)
+.required(false)
+.dynamic(true)
+.addValidator(Validator.VALID)
+.expressionLanguageSupported(true)
--- End diff --

As of your PR #2205 , this should be changed to indicate the scope :)


> Add a DistributedMapCacheLookupService
> --
>
> Key: NIFI-4982
> URL: https://issues.apache.org/jira/browse/NIFI-4982
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Attachments: distributedMapCacheLookup.xml
>
>
> Add a new lookup controller service that takes a Distributed Map Cache client 
> to lookup for values 

[GitHub] nifi pull request #2558: NIFI-4982 - Add a DistributedMapCacheLookupService

2018-04-11 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2558#discussion_r180771987
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-lookup-services-bundle/nifi-lookup-services/src/main/java/org/apache/nifi/lookup/DistributedMapCacheLookupService.java
 ---
@@ -0,0 +1,127 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.lookup;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import 
org.apache.nifi.distributed.cache.client.exception.DeserializationException;
+import 
org.apache.nifi.distributed.cache.client.exception.SerializationException;
+
+@Tags({"lookup", "enrich", "key", "value", "map", "cache", "distributed"})
+@CapabilityDescription("Allows to choose a distributed map cache client to 
retrieve the value associated to a key. "
++ "The coordinates that are passed to the lookup must contain the key 
'key'.")
+public class DistributedMapCacheLookupService extends 
AbstractControllerService implements StringLookupService {
+private static final String KEY = "key";
+private static final Set<String> REQUIRED_KEYS = 
Stream.of(KEY).collect(Collectors.toSet());
+
+private volatile DistributedMapCacheClient cache;
+private final Serializer<String> keySerializer = new 
StringSerializer();
+private final Deserializer<String> valueDeserializer = new 
StringDeserializer();
+
+public static final PropertyDescriptor PROP_DISTRIBUTED_CACHE_SERVICE 
= new PropertyDescriptor.Builder()
+.name("distributed-map-cache-service")
+.displayName("Distributed Cache Service")
+.description("The Controller Service that is used to get the 
cached values.")
+.required(true)
+.identifiesControllerService(DistributedMapCacheClient.class)
+.build();
+
+@Override
+protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+return new PropertyDescriptor.Builder()
+.name(propertyDescriptorName)
+.required(false)
+.dynamic(true)
+.addValidator(Validator.VALID)
+.expressionLanguageSupported(true)
+.build();
+}
+
+@OnEnabled
+public void cacheConfiguredValues(final ConfigurationContext context) {
+cache = 
context.getProperty(PROP_DISTRIBUTED_CACHE_SERVICE).asControllerService(DistributedMapCacheClient.class);
+}
+
+@Override
+protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+final List<PropertyDescriptor> descriptors = new ArrayList<>();
+descriptors.add(PROP_DISTRIBUTED_CACHE_SERVICE);
+return descriptors;
+}
+
+@Override
+public Optional<String> lookup(final Map<String, Object> coordinates) {
+if (coordinates == null) {
+return Optional.empty();
+}
+
+final String key = coordinates.get(KEY).toString();
+if (key == null) {
+return Optional.empty();
+}
+
+ 

[GitHub] nifi pull request #2558: NIFI-4982 - Add a DistributedMapCacheLookupService

2018-04-11 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2558#discussion_r180772442
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-lookup-services-bundle/nifi-lookup-services/src/test/java/org/apache/nifi/lookup/TestDistributedMapCacheLookupService.java
 ---
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.lookup;
+
+import static org.hamcrest.CoreMatchers.instanceOf;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertThat;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Optional;
+
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.junit.Test;
+
+public class TestDistributedMapCacheLookupService {
+
+final static Optional EMPTY_STRING = Optional.empty();
+
+@Test
+public void testDistributedMapCacheLookupService() throws 
InitializationException {
+final TestRunner runner = 
TestRunners.newTestRunner(TestProcessor.class);
+final DistributedMapCacheLookupService service = new 
DistributedMapCacheLookupService();
+final DistributedMapCacheClient client = new 
DistributedMapCacheClientImpl();
+
+runner.addControllerService("client", client);
+runner.addControllerService("lookup-service", service);
+runner.setProperty(service, 
DistributedMapCacheLookupService.PROP_DISTRIBUTED_CACHE_SERVICE, "client");
+
+runner.enableControllerService(client);
+runner.enableControllerService(service);
+
+runner.assertValid(service);
+
+assertThat(service, instanceOf(LookupService.class));
+
+final Optional get = 
service.lookup(Collections.singletonMap("key", "myKey"));
+assertEquals(Optional.of("myValue"), get);
+
+final Optional absent = 
service.lookup(Collections.singletonMap("key", "absentKey"));
+assertEquals(EMPTY_STRING, absent);
+}
+
+static final class DistributedMapCacheClientImpl extends 
AbstractControllerService implements DistributedMapCacheClient {
+
+private Map map = new HashMap();
+
+@OnEnabled
+public void onEnabled(final ConfigurationContext context) {
+map.put("myKey", "myValue");
+}
+
+@Override
+public void close() throws IOException {
+}
+
+@Override
+public void onPropertyModified(final PropertyDescriptor descriptor, final String oldValue, final String newValue) {
+}
+
+@Override
+protected java.util.List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+return new ArrayList<>();
+}
+
+@Override
+public <K, V> boolean putIfAbsent(final K key, final V value, final Serializer<K> keySerializer, final Serializer<V> valueSerializer) throws IOException {
+throw new IOException("not implemented");
--- End diff --

Nitpick, but these should probably be UnsupportedOperationExceptions :P 
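To make the nitpick concrete, here is a throwaway plain-Java sketch (the class and method names are hypothetical, not NiFi API): an unimplemented test-stub method conventionally throws `UnsupportedOperationException`, which is unchecked and signals "this path should never run", whereas `IOException` implies a genuine I/O failure occurred.

```java
// Hypothetical test double illustrating the suggestion: the unimplemented
// method throws UnsupportedOperationException (unchecked, "never call this")
// rather than IOException (which suggests a real I/O failure).
final class StubCacheClient {
    boolean putIfAbsent(final String key, final String value) {
        throw new UnsupportedOperationException("not implemented in this stub");
    }
}

public class StubDemo {
    public static void main(String[] args) {
        try {
            new StubCacheClient().putIfAbsent("k", "v");
        } catch (UnsupportedOperationException e) {
            // A test that accidentally exercises the stub fails loudly here.
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```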


---


[GitHub] nifi pull request #2558: NIFI-4982 - Add a DistributedMapCacheLookupService

2018-04-11 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2558#discussion_r180771727
  
--- Diff: nifi-nar-bundles/nifi-standard-services/nifi-lookup-services-bundle/nifi-lookup-services/src/main/java/org/apache/nifi/lookup/DistributedMapCacheLookupService.java ---
@@ -0,0 +1,127 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.lookup;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import org.apache.nifi.distributed.cache.client.exception.DeserializationException;
+import org.apache.nifi.distributed.cache.client.exception.SerializationException;
+
+@Tags({"lookup", "enrich", "key", "value", "map", "cache", "distributed"})
+@CapabilityDescription("Allows to choose a distributed map cache client to retrieve the value associated to a key. "
++ "The coordinates that are passed to the lookup must contain the key 'key'.")
+public class DistributedMapCacheLookupService extends AbstractControllerService implements StringLookupService {
+private static final String KEY = "key";
--- End diff --

Does requiring a single key "key" mean we can only do one lookup at a time? 
Perhaps we should not require any keys and let the dynamic properties define 
the lookup keys?
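A rough plain-Java sketch of that suggestion (all names here are hypothetical; a real implementation would live behind NiFi's `LookupService` interface): instead of requiring the fixed coordinate name "key", treat every coordinate value as a candidate cache key.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class DynamicKeyLookupSketch {

    // Try each coordinate value as a cache key and return the first hit,
    // so callers are not tied to a single hard-coded coordinate name.
    static Optional<String> lookup(Map<String, String> cache, Map<String, ?> coordinates) {
        for (Object candidate : coordinates.values()) {
            String value = cache.get(String.valueOf(candidate));
            if (value != null) {
                return Optional.of(value);
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        Map<String, String> cache = new HashMap<>();
        cache.put("myKey", "myValue");

        // Any coordinate name works; only the coordinate value matters.
        System.out.println(lookup(cache, Collections.singletonMap("anyName", "myKey")));
        System.out.println(lookup(cache, Collections.singletonMap("anyName", "absentKey")));
    }
}
```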


---


[GitHub] nifi pull request #2558: NIFI-4982 - Add a DistributedMapCacheLookupService

2018-04-11 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2558#discussion_r180771298
  
--- Diff: nifi-nar-bundles/nifi-standard-services/nifi-lookup-services-bundle/nifi-lookup-services/src/main/java/org/apache/nifi/lookup/DistributedMapCacheLookupService.java ---
@@ -0,0 +1,127 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.lookup;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import org.apache.nifi.distributed.cache.client.exception.DeserializationException;
+import org.apache.nifi.distributed.cache.client.exception.SerializationException;
+
+@Tags({"lookup", "enrich", "key", "value", "map", "cache", "distributed"})
+@CapabilityDescription("Allows to choose a distributed map cache client to retrieve the value associated to a key. "
++ "The coordinates that are passed to the lookup must contain the key 'key'.")
+public class DistributedMapCacheLookupService extends AbstractControllerService implements StringLookupService {
+private static final String KEY = "key";
+private static final Set<String> REQUIRED_KEYS = Stream.of(KEY).collect(Collectors.toSet());
+
+private volatile DistributedMapCacheClient cache;
+private final Serializer<String> keySerializer = new StringSerializer();
+private final Deserializer<String> valueDeserializer = new StringDeserializer();
+
+public static final PropertyDescriptor PROP_DISTRIBUTED_CACHE_SERVICE = new PropertyDescriptor.Builder()
+.name("distributed-map-cache-service")
+.displayName("Distributed Cache Service")
+.description("The Controller Service that is used to get the cached values.")
+.required(true)
+.identifiesControllerService(DistributedMapCacheClient.class)
+.build();
+
+@Override
+protected PropertyDescriptor getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+return new PropertyDescriptor.Builder()
+.name(propertyDescriptorName)
+.required(false)
+.dynamic(true)
+.addValidator(Validator.VALID)
+.expressionLanguageSupported(true)
--- End diff --

As of your PR #2205 , this should be changed to indicate the scope :)
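For reference, a sketch of what the updated dynamic-property descriptor might look like after that change (this fragment depends on nifi-api and is not self-contained; the particular scope chosen here is an assumption, and `FLOWFILE_ATTRIBUTES`, `VARIABLE_REGISTRY`, or `NONE` should be picked based on where the lookup coordinates actually come from):

```java
// Hypothetical replacement for the boolean overload above; scope is an
// assumption and must match how the dynamic property values are resolved.
return new PropertyDescriptor.Builder()
        .name(propertyDescriptorName)
        .required(false)
        .dynamic(true)
        .addValidator(Validator.VALID)
        .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
        .build();
```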


---


[jira] [Commented] (NIFI-4809) Implement a SiteToSiteMetricsReportingTask

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433978#comment-16433978
 ] 

ASF GitHub Bot commented on NIFI-4809:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2575#discussion_r180769501
  
--- Diff: nifi-nar-bundles/nifi-ambari-bundle/nifi-ambari-reporting-task/pom.xml ---
@@ -53,6 +43,11 @@
            <artifactId>nifi-utils</artifactId>
            <version>1.6.0-SNAPSHOT</version>
        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-reporting-utils</artifactId>
+            <version>1.6.0-SNAPSHOT</version>
--- End diff --

You'll need to update these after rebase


> Implement a SiteToSiteMetricsReportingTask
> --
>
> Key: NIFI-4809
> URL: https://issues.apache.org/jira/browse/NIFI-4809
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> At the moment there is an AmbariReportingTask to send the NiFi-related 
> metrics of the host to the Ambari Metrics Service. In a multi-cluster 
> configuration, or when working with MiNiFi (Java) agents, it might not be 
> possible for all the NiFi instances (NiFi and/or MiNiFi) to access the AMS 
> REST API.
> To solve this problem, a solution would be to implement a 
> SiteToSiteMetricsReportingTask to send the data via S2S to the "main" NiFi 
> instance/cluster that will be able to publish the metrics into AMS (using 
> InvokeHTTP). This way, it is possible to have the metrics of all the 
> instances exposed in one AMS instance.
> I propose to send the data formatted as we are doing right now in the Ambari 
> reporting task. If needed, it can be easily converted into another schema 
> using the record processors once received via S2S.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4809) Implement a SiteToSiteMetricsReportingTask

2018-04-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433977#comment-16433977
 ] 

ASF GitHub Bot commented on NIFI-4809:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2575
  
Mind doing another rebase? I will review shortly thereafter, thanks!


> Implement a SiteToSiteMetricsReportingTask
> --
>
> Key: NIFI-4809
> URL: https://issues.apache.org/jira/browse/NIFI-4809
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> At the moment there is an AmbariReportingTask to send the NiFi-related 
> metrics of the host to the Ambari Metrics Service. In a multi-cluster 
> configuration, or when working with MiNiFi (Java) agents, it might not be 
> possible for all the NiFi instances (NiFi and/or MiNiFi) to access the AMS 
> REST API.
> To solve this problem, a solution would be to implement a 
> SiteToSiteMetricsReportingTask to send the data via S2S to the "main" NiFi 
> instance/cluster that will be able to publish the metrics into AMS (using 
> InvokeHTTP). This way, it is possible to have the metrics of all the 
> instances exposed in one AMS instance.
> I propose to send the data formatted as we are doing right now in the Ambari 
> reporting task. If needed, it can be easily converted into another schema 
> using the record processors once received via S2S.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

