[GitHub] nifi pull request #502: Nifi-1972 Apache Ignite Put Cache Processor

2016-06-09 Thread mans2singh
Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/502#discussion_r66547015
  
--- Diff: 
nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/resources/ignite-client.xml
 ---
@@ -0,0 +1,26 @@
+
+
+<beans xmlns="http://www.springframework.org/schema/beans"
+   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+   xsi:schemaLocation="http://www.springframework.org/schema/beans
+   http://www.springframework.org/schema/beans/spring-beans.xsd">
+
+
+
+
--- End diff --

This is the id of the abstract IgniteConfiguration bean (defined in 
ignite-default-client.xml), so that the user can override its properties.
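
For illustration, a minimal, hypothetical sketch of how a user-supplied Spring file that redefines the ignite.cfg bean could be picked up when the client node starts; the file path, class name, and startup code below are illustrative assumptions, not code from this PR. It only shows that whatever properties the user overrides on that bean take effect (the ignite-spring module is assumed to be on the classpath).

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class IgniteClientFromSpringXml {
        public static void main(String[] args) {
            // "conf/my-ignite-client.xml" is a hypothetical user file that overrides
            // properties of the "ignite.cfg" IgniteConfiguration bean.
            IgniteConfiguration cfg =
                    Ignition.loadSpringBean("conf/my-ignite-client.xml", "ignite.cfg");
            cfg.setClientMode(true);
            try (Ignite ignite = Ignition.start(cfg)) {
                System.out.println("Started client node: " + ignite.name());
            }
        }
    }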


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #502: Nifi-1972 Apache Ignite Put Cache Processor

2016-06-09 Thread mans2singh
Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/502#discussion_r66546632
  
--- Diff: 
nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/java/org/apache/nifi/processors/ignite/AbstractIgniteProcessor.java
 ---
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.nifi.processors.ignite;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+
+/**
+ * Base class for Ignite processors
+ */
+public abstract class AbstractIgniteProcessor extends AbstractProcessor  {
+
+/**
+ * Ignite spring configuration file
+ */
+public static final PropertyDescriptor IGNITE_CONFIGURATION_FILE = new 
PropertyDescriptor.Builder()
+.name("Ignite Spring Properties Xml File")
+.description("Ignite spring configuration file, 
/.xml")
--- End diff --

Ok, I will add that to the documentation.
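
As a reference for that documentation change, one possible wording is sketched below; the expanded description, required(false), and the validator are assumptions about the intended behavior, not the PR's actual code.

    public static final PropertyDescriptor IGNITE_CONFIGURATION_FILE = new PropertyDescriptor.Builder()
            .name("Ignite Spring Properties Xml File")
            // Hypothetical expanded description: spell out the fallback behavior.
            .description("Full path to an Ignite Spring configuration XML file. If this property "
                    + "is not set, a bundled default client configuration is used instead.")
            .required(false)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();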


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #502: Nifi-1972 Apache Ignite Put Cache Processor

2016-06-09 Thread mans2singh
Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/502#discussion_r66546338
  
--- Diff: 
nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/java/org/apache/nifi/processors/ignite/cache/PutIgniteCache.java
 ---
@@ -0,0 +1,373 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.ignite.cache;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.AbstractMap;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.ignite.IgniteDataStreamer;
+import org.apache.ignite.lang.IgniteFuture;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.ReadsAttributes;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.stream.io.StreamUtils;
+
+/**
+ * Put cache processors which pushes the flow file content into Ignite 
Cache using
+ * DataStreamer interface
+ */
+@EventDriven
+@SupportsBatching
+@Tags({ "Ignite", "insert", "update", "stream", "write", "put", "cache", 
"key" })
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Stream the contents of a FlowFile to Ignite Cache 
using DataStreamer. " +
+"The processor uses the value of FlowFile attribute " + 
PutIgniteCache.IGNITE_CACHE_ENTRY_KEY + " as the " +
+"cache key and the byte array of the FlowFile as the value of the 
cache entry value.  Both the string key and a " +
+" non-empty byte array value are required otherwise the FlowFile is 
transfered to the failure relation.")
+@WritesAttributes({
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_TOTAL_COUNT, description = "The total 
number of FlowFile in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_ITEM_NUMBER, description = "The item 
number of FlowFile in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_SUCCESSFUL_ITEM_NUMBER, description = 
"The successful FlowFile item number"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_SUCCESSFUL_COUNT, description = "The 
number of successful FlowFiles"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_ITEM_NUMBER, description = "The 
failed FlowFile item number"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_COUNT, description = "The total 
number of failed FlowFiles in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_REASON_ATTRIBUTE_KEY, description 
= "The failed reason attribute key")
+})
+@ReadsAttributes({
+@ReadsAttribute(attribute = PutIgniteCache.IGNITE_CACHE_ENTRY_KEY, 
description = "Ignite cache key"),
+})
+public class 

[GitHub] nifi pull request #517: NIFI-1994: Fixed issues with controller services and...

2016-06-09 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/517

NIFI-1994: Fixed issues with controller services and templates

Fixed issue with Controller Service Fully Qualified Class Names and ensure 
that services are added to the process groups as appropriate when instantiating 
templates

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-1994

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/517.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #517


commit a5fecda5a2ffb35e21d950aa19a07127e19a419e
Author: Bryan Rosander 
Date:   2016-05-27T14:56:02Z

NIFI-1975 - Processor for parsing evtx files

Signed-off-by: Matt Burgess 

This closes #492

commit c120c4982d4fc811b06b672e3983b8ca5fb8ae64
Author: Koji Kawamura 
Date:   2016-06-06T13:19:26Z

NIFI-1857: HTTPS Site-to-Site

- Enable HTTP(S) for Site-to-Site communication
- Support HTTP Proxy in the middle of local and remote NiFi
- Support BASIC and DIGEST auth with Proxy Server
- Provide 2-phase style commit same as existing socket version
- [WIP] Test with the latest cluster env (without NCM) hasn't tested yet

- Fixed Buffer handling issues at asyc http client POST
- Fixed JS error when applying Remote Process Group Port setting from UI
- Use compression setting from UI
- Removed already finished TODO comments

- Added additional buffer draining code after receiving EOF
- Added inspection and assert code to make sure Site-to-Site client has
  written data fully to output
stream
- Changed default nifi.remote.input.secure from true to false

This closes #497.

commit bfebe76d17b2024c8ae90fd3837df71ba77d
Author: Mark Payne 
Date:   2016-06-10T00:39:29Z

NIFI-1994: Fixed issue with Controller Service Fully Qualified Class Names 
and ensure that services are added to the process groups as appropriate when 
instantiating templates




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #502: Nifi-1972 Apache Ignite Put Cache Processor

2016-06-09 Thread mans2singh
Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/502#discussion_r66546103
  
--- Diff: 
nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/java/org/apache/nifi/processors/ignite/cache/AbstractIgniteCacheProcessor.java
 ---
@@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.ignite.cache;
+
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.ignite.IgniteCache;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.ignite.AbstractIgniteProcessor;
+
+/**
+ * Base class of Ignite cache based processor
+ */
+public abstract class AbstractIgniteCacheProcessor extends 
AbstractIgniteProcessor {
+
+/**
+ * Flow File attribute for cache entry key
+ */
+public static final String IGNITE_CACHE_ENTRY_KEY = 
"ignite.cache.entry.key";
--- End diff --

Will update it.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi issue #475: - Add Maven profile to compile nifi-hadoop-libraries-nar us...

2016-06-09 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/475
  
As an optional Maven build profile I would imagine it's OK, but I will defer 
to @joewitt and others. In the meantime I will review and test this profile 
(and its absence) against various Hadoop distros.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #502: Nifi-1972 Apache Ignite Put Cache Processor

2016-06-09 Thread mans2singh
Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/502#discussion_r66545243
  
--- Diff: 
nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/java/org/apache/nifi/processors/ignite/cache/PutIgniteCache.java
 ---
@@ -0,0 +1,373 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.ignite.cache;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.AbstractMap;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.ignite.IgniteDataStreamer;
+import org.apache.ignite.lang.IgniteFuture;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.ReadsAttributes;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.stream.io.StreamUtils;
+
+/**
+ * Put cache processors which pushes the flow file content into Ignite 
Cache using
+ * DataStreamer interface
+ */
+@EventDriven
+@SupportsBatching
+@Tags({ "Ignite", "insert", "update", "stream", "write", "put", "cache", 
"key" })
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Stream the contents of a FlowFile to Ignite Cache 
using DataStreamer. " +
+"The processor uses the value of FlowFile attribute " + 
PutIgniteCache.IGNITE_CACHE_ENTRY_KEY + " as the " +
+"cache key and the byte array of the FlowFile as the value of the 
cache entry value.  Both the string key and a " +
+" non-empty byte array value are required otherwise the FlowFile is 
transfered to the failure relation.")
+@WritesAttributes({
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_TOTAL_COUNT, description = "The total 
number of FlowFile in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_ITEM_NUMBER, description = "The item 
number of FlowFile in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_SUCCESSFUL_ITEM_NUMBER, description = 
"The successful FlowFile item number"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_SUCCESSFUL_COUNT, description = "The 
number of successful FlowFiles"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_ITEM_NUMBER, description = "The 
failed FlowFile item number"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_COUNT, description = "The total 
number of failed FlowFiles in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_REASON_ATTRIBUTE_KEY, description 
= "The failed reason attribute key")
+})
+@ReadsAttributes({
+@ReadsAttribute(attribute = PutIgniteCache.IGNITE_CACHE_ENTRY_KEY, 
description = "Ignite cache key"),
+})
+public class 

[GitHub] nifi pull request #502: Nifi-1972 Apache Ignite Put Cache Processor

2016-06-09 Thread mans2singh
Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/502#discussion_r66545042
  
--- Diff: 
nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/java/org/apache/nifi/processors/ignite/cache/PutIgniteCache.java
 ---
@@ -0,0 +1,373 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.ignite.cache;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.AbstractMap;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.ignite.IgniteDataStreamer;
+import org.apache.ignite.lang.IgniteFuture;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.ReadsAttributes;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.stream.io.StreamUtils;
+
+/**
+ * Put cache processors which pushes the flow file content into Ignite 
Cache using
+ * DataStreamer interface
+ */
+@EventDriven
+@SupportsBatching
+@Tags({ "Ignite", "insert", "update", "stream", "write", "put", "cache", 
"key" })
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Stream the contents of a FlowFile to Ignite Cache 
using DataStreamer. " +
+"The processor uses the value of FlowFile attribute " + 
PutIgniteCache.IGNITE_CACHE_ENTRY_KEY + " as the " +
+"cache key and the byte array of the FlowFile as the value of the 
cache entry value.  Both the string key and a " +
+" non-empty byte array value are required otherwise the FlowFile is 
transfered to the failure relation.")
+@WritesAttributes({
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_TOTAL_COUNT, description = "The total 
number of FlowFile in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_ITEM_NUMBER, description = "The item 
number of FlowFile in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_SUCCESSFUL_ITEM_NUMBER, description = 
"The successful FlowFile item number"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_SUCCESSFUL_COUNT, description = "The 
number of successful FlowFiles"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_ITEM_NUMBER, description = "The 
failed FlowFile item number"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_COUNT, description = "The total 
number of failed FlowFiles in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_REASON_ATTRIBUTE_KEY, description 
= "The failed reason attribute key")
+})
+@ReadsAttributes({
+@ReadsAttribute(attribute = PutIgniteCache.IGNITE_CACHE_ENTRY_KEY, 
description = "Ignite cache key"),
+})
+public class 

[GitHub] nifi pull request #502: Nifi-1972 Apache Ignite Put Cache Processor

2016-06-09 Thread mans2singh
Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/502#discussion_r66544841
  
--- Diff: 
nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/java/org/apache/nifi/processors/ignite/cache/PutIgniteCache.java
 ---
@@ -0,0 +1,373 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.ignite.cache;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.AbstractMap;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.ignite.IgniteDataStreamer;
+import org.apache.ignite.lang.IgniteFuture;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.ReadsAttributes;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.stream.io.StreamUtils;
+
+/**
+ * Put cache processors which pushes the flow file content into Ignite 
Cache using
+ * DataStreamer interface
+ */
+@EventDriven
+@SupportsBatching
+@Tags({ "Ignite", "insert", "update", "stream", "write", "put", "cache", 
"key" })
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Stream the contents of a FlowFile to Ignite Cache 
using DataStreamer. " +
+"The processor uses the value of FlowFile attribute " + 
PutIgniteCache.IGNITE_CACHE_ENTRY_KEY + " as the " +
+"cache key and the byte array of the FlowFile as the value of the 
cache entry value.  Both the string key and a " +
+" non-empty byte array value are required otherwise the FlowFile is 
transfered to the failure relation.")
+@WritesAttributes({
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_TOTAL_COUNT, description = "The total 
number of FlowFile in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_ITEM_NUMBER, description = "The item 
number of FlowFile in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_SUCCESSFUL_ITEM_NUMBER, description = 
"The successful FlowFile item number"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_SUCCESSFUL_COUNT, description = "The 
number of successful FlowFiles"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_ITEM_NUMBER, description = "The 
failed FlowFile item number"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_COUNT, description = "The total 
number of failed FlowFiles in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_REASON_ATTRIBUTE_KEY, description 
= "The failed reason attribute key")
+})
+@ReadsAttributes({
+@ReadsAttribute(attribute = PutIgniteCache.IGNITE_CACHE_ENTRY_KEY, 
description = "Ignite cache key"),
+})
+public class 

[GitHub] nifi issue #475: - Add Maven profile to compile nifi-hadoop-libraries-nar us...

2016-06-09 Thread trixpan
Github user trixpan commented on the issue:

https://github.com/apache/nifi/pull/475
  
@joewitt would you know if this is something that can be merged, or if 
ASF licensing prevents it from happening?

Happy to add profiles for HDP and CDH as well if needed.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi issue #516: NIFI-1993 upgraded CGLIB to 3.2.2

2016-06-09 Thread olegz
Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/516
  
@mattyb149 I got one better ;) Try PR 
https://github.com/apache/nifi/pull/515 as it stands now and then with this 
change. That is how it was discovered. I've attached some notes in JIRA. Let me 
know if you need more details.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi issue #516: NIFI-1993 upgraded CGLIB to 3.2.2

2016-06-09 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/516
  
What's a good test? Successful build? Or something more?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #502: Nifi-1972 Apache Ignite Put Cache Processor

2016-06-09 Thread mans2singh
Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/502#discussion_r66531971
  
--- Diff: 
nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/java/org/apache/nifi/processors/ignite/AbstractIgniteProcessor.java
 ---
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.nifi.processors.ignite;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+
+/**
+ * Base class for Ignite processors
+ */
+public abstract class AbstractIgniteProcessor extends AbstractProcessor  {
+
+/**
+ * Ignite spring configuration file
+ */
+public static final PropertyDescriptor IGNITE_CONFIGURATION_FILE = new 
PropertyDescriptor.Builder()
+.name("Ignite Spring Properties Xml File")
--- End diff --

I will do that @pvillard31 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi issue #497: NIFI-1857: HTTPS Site-to-Site

2016-06-09 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/497
  
@markap14 Thanks for taking the time to review and test; I'm glad to hear 
that it worked!
The CI tests haven't finished yet; maybe that's why this isn't closed. 
I'll check the status again later.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Consuming web services through NiFi

2016-06-09 Thread saikrishnat
I am running NiFi on my laptop. Yes, that's what confuses me: I was able to
download files when I used the GetFTP processor against a public site.
Only GetHTTP and InvokeHTTP are giving me problems; it acts as if the request
can't get out of the network when it goes through NiFi. If a firewall were
blocking it, it should also block it when I try directly.
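
A quick way to narrow this down is to run a small probe from the same machine (and ideally the same user/JVM environment) that NiFi runs under; the target URL and timeouts below are placeholders, nothing NiFi-specific.

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class OutboundHttpCheck {
        public static void main(String[] args) throws Exception {
            // Minimal standalone probe, assuming it is run on the NiFi host.
            URL url = new URL("http://example.com/");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setConnectTimeout(5000);  // roughly the timeout the processors report
            conn.setReadTimeout(5000);
            System.out.println("HTTP status: " + conn.getResponseCode());
            conn.disconnect();
        }
    }

If this also times out, the problem is host-level connectivity (for example a proxy or firewall) rather than GetHTTP/InvokeHTTP settings; a browser that goes through a configured HTTP proxy which the JVM does not use can succeed while the JVM fails.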

On Thu, Jun 9, 2016 at 3:13 PM, Matt Burgess [via Apache NiFi Developer
List]  wrote:

> Do you have NiFi running locally or are you connecting over the browser to
> an instance on a different node? If you're on the same node then I don't
> know why it won't connect, if it's on a different node or VM there might be
> a connectivity problem between that node and the Internet.
>
> Regards,
> Matt
>
> > On Jun 9, 2016, at 4:06 PM, saikrishnat <[hidden email]
> > wrote:
> >
> > Hi Matt,
> > i tried with google.com and cnn to get rss feed as shown in one of the
> > examples online by someone..
> > thing is i can go to the site when i use the browser directly with the
> same
> > URL..but not thru NiFi..
> >
> > do i have to change any settings.??
> >
> > Regards,Sai
> >
> > On Thu, Jun 9, 2016 at 3:02 PM, Matt Burgess [via Apache NiFi Developer
> > List] <[hidden email]
> > wrote:
> >
> >> Which other sites did you try? I've noticed that the randomuser.me API
> >> is often unavailable and gives those SocketTimeout messages.  Also
> >> perhaps try the InvokeHttp processor. I would imagine any site should
> >> work in both GetHttp and InvokeHttp, but I'm curious to see if there
> >> is a difference.
> >>
> >> What version of NiFi are you using?
> >>
> >> Thanks,
> >> Matt
> >>
> >> On Wed, Jun 8, 2016 at 4:42 PM, saikrishnat <[hidden email]
> >> > wrote:
> >>
> >>> Hi Matt,
> >>> Thank you for the reply , when i tried to use the example
> >>> "Working_With_CSV.xml" i am getting
> >>>
> >>> InvokeHTTP[id=a3aab33d-76dd-4169-9a29-fd0aeae219f3] Yielding processor
> >> due
> >>> to exception encountered as a source processor:
> >>> java.net.SocketTimeoutException:
> >>> connect timed out
> >>>
> >>> i could go to the URL when i try from my browser directly. but not
> thru
> >>> NiFi.
> >>> i tried other sites thru GetHTTP process , getting same error..
> >>>
> >>> any idea.??
> >>>
> >>> On Tue, Jun 7, 2016 at 4:55 PM, Matt Burgess [via Apache NiFi
> Developer
> >>> List] <[hidden email]
> >> > wrote:
> >>>
>  saikrishnat,
> 
>  There are multiple processors that can get data from public web
>  services, such as InvokeHttp [1]. There are sample templates at [2],
>  including "Working_With_CSV.xml" which consumes a RESTful web service
>  at http://randomuser.me. If you are looking to discover/invoke
>  WSDL/SOAP services, then there is no general processor to do so,
>  although you can certainly speak SOAP using InvokeHttp. Discovering
>  methods via WSDL then invoking them via SOAP is a more involved
>  process, but I believe NiFi has the processors in place
>  (EvaluateXPath, ReplaceText, etc.) to enable this.
> 
>  Also, besides the official docs [3], there is a great repository of
>  helpful NiFi-related items [4].
> 
>  Regards,
>  Matt
> 
>  [1]
> >>
> https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.InvokeHTTP/index.html
>  [2]
> >>
> https://cwiki.apache.org/confluence/display/NIFI/Example+Dataflow+Templates
>  [3] https://nifi.apache.org/docs.html
>  [4] https://github.com/jfrazee/awesome-nifi
> 
>  On Tue, Jun 7, 2016 at 3:28 PM, saikrishnat <[hidden email]
>  > wrote:
> 
> > Hi ,
> > I am very new to NiFi. So far i can just move files between
> >> directories
>  :).
> > I was trying access public web services like weather service by
> >> calling
>  its
> > methods like getWeather,getCitiesByCountry etc..can i do that using
> >> NiFi
>  and
> > how..? any help is much appreciated.
> >
> > also any good resources for help on NiFi.??
> >
> >
> >
> > --
> > View this message in context:
> >>
> http://apache-nifi-developer-list.39713.n7.nabble.com/Consuming-web-services-through-NiFi-tp11190.html
> > Sent from the Apache NiFi Developer List mailing list archive at
>  Nabble.com.
> 
> 
>  --
>  If you reply to this email, your message will be added to the
> >> discussion
>  below:
> >>
> http://apache-nifi-developer-list.39713.n7.nabble.com/Consuming-web-services-through-NiFi-tp11190p11201.html
>  To unsubscribe from Consuming web services through NiFi, click here
>  <
>  .
>  NAML
>  <
> >>
> 

Re: Consuming web services through NiFi

2016-06-09 Thread Matt Burgess
Do you have NiFi running locally, or are you connecting over the browser to an 
instance on a different node? If you're on the same node, then I don't know why 
it won't connect; if it's on a different node or VM, there might be a 
connectivity problem between that node and the Internet.

Regards,
Matt

> On Jun 9, 2016, at 4:06 PM, saikrishnat  wrote:
> 
> Hi Matt,
> i tried with google.com and cnn to get rss feed as shown in one of the
> examples online by someone..
> thing is i can go to the site when i use the browser directly with the same
> URL..but not thru NiFi..
> 
> do i have to change any settings.??
> 
> Regards,Sai
> 
> On Thu, Jun 9, 2016 at 3:02 PM, Matt Burgess [via Apache NiFi Developer
> List]  wrote:
> 
>> Which other sites did you try? I've noticed that the randomuser.me API
>> is often unavailable and gives those SocketTimeout messages.  Also
>> perhaps try the InvokeHttp processor. I would imagine any site should
>> work in both GetHttp and InvokeHttp, but I'm curious to see if there
>> is a difference.
>> 
>> What version of NiFi are you using?
>> 
>> Thanks,
>> Matt
>> 
>> On Wed, Jun 8, 2016 at 4:42 PM, saikrishnat <[hidden email]
>> > wrote:
>> 
>>> Hi Matt,
>>> Thank you for the reply , when i tried to use the example
>>> "Working_With_CSV.xml" i am getting
>>> 
>>> InvokeHTTP[id=a3aab33d-76dd-4169-9a29-fd0aeae219f3] Yielding processor
>> due
>>> to exception encountered as a source processor:
>>> java.net.SocketTimeoutException:
>>> connect timed out
>>> 
>>> i could go to the URL when i try from my browser directly. but not thru
>>> NiFi.
>>> i tried other sites thru GetHTTP process , getting same error..
>>> 
>>> any idea.??
>>> 
>>> On Tue, Jun 7, 2016 at 4:55 PM, Matt Burgess [via Apache NiFi Developer
>>> List] <[hidden email]
>> > wrote:
>>> 
 saikrishnat,
 
 There are multiple processors that can get data from public web
 services, such as InvokeHttp [1]. There are sample templates at [2],
 including "Working_With_CSV.xml" which consumes a RESTful web service
 at http://randomuser.me. If you are looking to discover/invoke
 WSDL/SOAP services, then there is no general processor to do so,
 although you can certainly speak SOAP using InvokeHttp. Discovering
 methods via WSDL then invoking them via SOAP is a more involved
 process, but I believe NiFi has the processors in place
 (EvaluateXPath, ReplaceText, etc.) to enable this.
 
 Also, besides the official docs [3], there is a great repository of
 helpful NiFi-related items [4].
 
 Regards,
 Matt
 
 [1]
>> https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.InvokeHTTP/index.html
 [2]
>> https://cwiki.apache.org/confluence/display/NIFI/Example+Dataflow+Templates
 [3] https://nifi.apache.org/docs.html
 [4] https://github.com/jfrazee/awesome-nifi
 
 On Tue, Jun 7, 2016 at 3:28 PM, saikrishnat <[hidden email]
 > wrote:
 
> Hi ,
> I am very new to NiFi. So far i can just move files between
>> directories
 :).
> I was trying access public web services like weather service by
>> calling
 its
> methods like getWeather,getCitiesByCountry etc..can i do that using
>> NiFi
 and
> how..? any help is much appreciated.
> 
> also any good resources for help on NiFi.??
> 
> 
> 
> --
> View this message in context:
>> http://apache-nifi-developer-list.39713.n7.nabble.com/Consuming-web-services-through-NiFi-tp11190.html
> Sent from the Apache NiFi Developer List mailing list archive at
 Nabble.com.
 
 
 --
 If you reply to this email, your message will be added to the
>> discussion
 below:
>> http://apache-nifi-developer-list.39713.n7.nabble.com/Consuming-web-services-through-NiFi-tp11190p11201.html
 To unsubscribe from Consuming web services through NiFi, click here
 <
 .
 NAML
 <
>> http://apache-nifi-developer-list.39713.n7.nabble.com/template/NamlServlet.jtp?macro=macro_viewer=instant_html%21nabble%3Aemail.naml=nabble.naml.namespaces.BasicNamespace-nabble.view.web.template.NabbleNamespace-nabble.view.web.template.NodeNamespace=notify_subscribers%21nabble%3Aemail.naml-instant_emails%21nabble%3Aemail.naml-send_instant_email%21nabble%3Aemail.naml>
>> 
>>> 
>>> 
>>> 
>>> 
>>> --
>>> View this message in context:
>> http://apache-nifi-developer-list.39713.n7.nabble.com/Consuming-web-services-through-NiFi-tp11190p11251.html
>>> Sent from the Apache NiFi Developer List mailing list archive at
>> Nabble.com.
>> 
>> 
>> --
>> If you reply to this email, your message will be added to the discussion
>> below:
>> 
>> 

Re: Consuming web services through NiFi

2016-06-09 Thread saikrishnat
Hi Matt,
I tried with google.com and CNN to get an RSS feed, as shown in one of the
examples posted online.
The thing is, I can go to the site when I use the browser directly with the
same URL, but not through NiFi.

Do I have to change any settings?

Regards, Sai

On Thu, Jun 9, 2016 at 3:02 PM, Matt Burgess [via Apache NiFi Developer
List]  wrote:

> Which other sites did you try? I've noticed that the randomuser.me API
> is often unavailable and gives those SocketTimeout messages.  Also
> perhaps try the InvokeHttp processor. I would imagine any site should
> work in both GetHttp and InvokeHttp, but I'm curious to see if there
> is a difference.
>
> What version of NiFi are you using?
>
> Thanks,
> Matt
>
> On Wed, Jun 8, 2016 at 4:42 PM, saikrishnat <[hidden email]
> > wrote:
>
> > Hi Matt,
> > Thank you for the reply , when i tried to use the example
> > "Working_With_CSV.xml" i am getting
> >
> > InvokeHTTP[id=a3aab33d-76dd-4169-9a29-fd0aeae219f3] Yielding processor
> due
> > to exception encountered as a source processor:
> > java.net.SocketTimeoutException:
> > connect timed out
> >
> > i could go to the URL when i try from my browser directly. but not thru
> > NiFi.
> > i tried other sites thru GetHTTP process , getting same error..
> >
> > any idea.??
> >
> > On Tue, Jun 7, 2016 at 4:55 PM, Matt Burgess [via Apache NiFi Developer
> > List] <[hidden email]
> > wrote:
> >
> >> saikrishnat,
> >>
> >> There are multiple processors that can get data from public web
> >> services, such as InvokeHttp [1]. There are sample templates at [2],
> >> including "Working_With_CSV.xml" which consumes a RESTful web service
> >> at http://randomuser.me. If you are looking to discover/invoke
> >> WSDL/SOAP services, then there is no general processor to do so,
> >> although you can certainly speak SOAP using InvokeHttp. Discovering
> >> methods via WSDL then invoking them via SOAP is a more involved
> >> process, but I believe NiFi has the processors in place
> >> (EvaluateXPath, ReplaceText, etc.) to enable this.
> >>
> >> Also, besides the official docs [3], there is a great repository of
> >> helpful NiFi-related items [4].
> >>
> >> Regards,
> >> Matt
> >>
> >> [1]
> >>
> https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.InvokeHTTP/index.html
> >> [2]
> >>
> https://cwiki.apache.org/confluence/display/NIFI/Example+Dataflow+Templates
> >> [3] https://nifi.apache.org/docs.html
> >> [4] https://github.com/jfrazee/awesome-nifi
> >>
> >> On Tue, Jun 7, 2016 at 3:28 PM, saikrishnat <[hidden email]
> >> > wrote:
> >>
> >> > Hi ,
> >> > I am very new to NiFi. So far i can just move files between
> directories
> >> :).
> >> > I was trying access public web services like weather service by
> calling
> >> its
> >> > methods like getWeather,getCitiesByCountry etc..can i do that using
> NiFi
> >> and
> >> > how..? any help is much appreciated.
> >> >
> >> > also any good resources for help on NiFi.??
> >> >
> >> >
> >> >
> >> > --
> >> > View this message in context:
> >>
> http://apache-nifi-developer-list.39713.n7.nabble.com/Consuming-web-services-through-NiFi-tp11190.html
> >> > Sent from the Apache NiFi Developer List mailing list archive at
> >> Nabble.com.
> >>
> >>
> >> --
> >> If you reply to this email, your message will be added to the
> discussion
> >> below:
> >>
> >>
> http://apache-nifi-developer-list.39713.n7.nabble.com/Consuming-web-services-through-NiFi-tp11190p11201.html
> >> To unsubscribe from Consuming web services through NiFi, click here
> >> <
> >> .
> >> NAML
> >> <
> http://apache-nifi-developer-list.39713.n7.nabble.com/template/NamlServlet.jtp?macro=macro_viewer=instant_html%21nabble%3Aemail.naml=nabble.naml.namespaces.BasicNamespace-nabble.view.web.template.NabbleNamespace-nabble.view.web.template.NodeNamespace=notify_subscribers%21nabble%3Aemail.naml-instant_emails%21nabble%3Aemail.naml-send_instant_email%21nabble%3Aemail.naml>
>
> >>
> >
> >
> >
> >
> > --
> > View this message in context:
> http://apache-nifi-developer-list.39713.n7.nabble.com/Consuming-web-services-through-NiFi-tp11190p11251.html
> > Sent from the Apache NiFi Developer List mailing list archive at
> Nabble.com.
>
>
> --
> If you reply to this email, your message will be added to the discussion
> below:
>
> http://apache-nifi-developer-list.39713.n7.nabble.com/Consuming-web-services-through-NiFi-tp11190p11300.html

[GitHub] nifi pull request #509: NIFI-1982: Use Compressed check box value.

2016-06-09 Thread ijokarumawak
Github user ijokarumawak closed the pull request at:

https://github.com/apache/nifi/pull/509


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Consuming web services through NiFi

2016-06-09 Thread Matt Burgess
Which other sites did you try? I've noticed that the randomuser.me API
is often unavailable and gives those SocketTimeout messages.  Also
perhaps try the InvokeHttp processor. I would imagine any site should
work in both GetHttp and InvokeHttp, but I'm curious to see if there
is a difference.

What version of NiFi are you using?

Thanks,
Matt

On Wed, Jun 8, 2016 at 4:42 PM, saikrishnat  wrote:
> Hi Matt,
> Thank you for the reply , when i tried to use the example
> "Working_With_CSV.xml" i am getting
>
> InvokeHTTP[id=a3aab33d-76dd-4169-9a29-fd0aeae219f3] Yielding processor due
> to exception encountered as a source processor:
> java.net.SocketTimeoutException:
> connect timed out
>
> i could go to the URL when i try from my browser directly. but not thru
> NiFi.
> i tried other sites thru GetHTTP process , getting same error..
>
> any idea.??
>
> On Tue, Jun 7, 2016 at 4:55 PM, Matt Burgess [via Apache NiFi Developer
> List]  wrote:
>
>> saikrishnat,
>>
>> There are multiple processors that can get data from public web
>> services, such as InvokeHttp [1]. There are sample templates at [2],
>> including "Working_With_CSV.xml" which consumes a RESTful web service
>> at http://randomuser.me. If you are looking to discover/invoke
>> WSDL/SOAP services, then there is no general processor to do so,
>> although you can certainly speak SOAP using InvokeHttp. Discovering
>> methods via WSDL then invoking them via SOAP is a more involved
>> process, but I believe NiFi has the processors in place
>> (EvaluateXPath, ReplaceText, etc.) to enable this.
>>
>> Also, besides the official docs [3], there is a great repository of
>> helpful NiFi-related items [4].
>>
>> Regards,
>> Matt
>>
>> [1]
>> https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.InvokeHTTP/index.html
>> [2]
>> https://cwiki.apache.org/confluence/display/NIFI/Example+Dataflow+Templates
>> [3] https://nifi.apache.org/docs.html
>> [4] https://github.com/jfrazee/awesome-nifi
>>
>> On Tue, Jun 7, 2016 at 3:28 PM, saikrishnat <[hidden email]
>> > wrote:
>>
>> > Hi ,
>> > I am very new to NiFi. So far i can just move files between directories
>> :).
>> > I was trying access public web services like weather service by calling
>> its
>> > methods like getWeather,getCitiesByCountry etc..can i do that using NiFi
>> and
>> > how..? any help is much appreciated.
>> >
>> > also any good resources for help on NiFi.??
>> >
>> >
>> >
>> > --
>> > View this message in context:
>> http://apache-nifi-developer-list.39713.n7.nabble.com/Consuming-web-services-through-NiFi-tp11190.html
>> > Sent from the Apache NiFi Developer List mailing list archive at
>> Nabble.com.
>>
>>
>> --
>> If you reply to this email, your message will be added to the discussion
>> below:
>>
>> http://apache-nifi-developer-list.39713.n7.nabble.com/Consuming-web-services-through-NiFi-tp11190p11201.html
>>
>
>
>
>
> --
> View this message in context: 
> http://apache-nifi-developer-list.39713.n7.nabble.com/Consuming-web-services-through-NiFi-tp11190p11251.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


[GitHub] nifi pull request #502: Nifi-1972 Apache Ignite Put Cache Processor

2016-06-09 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/502#discussion_r66516313
  
--- Diff: 
nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/resources/ignite-client.xml
 ---
@@ -0,0 +1,26 @@
+
+
+<beans xmlns="http://www.springframework.org/schema/beans"
+   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+   xsi:schemaLocation="http://www.springframework.org/schema/beans
+   http://www.springframework.org/schema/beans/spring-beans.xsd">
+
+
+
+
--- End diff --

What is ignite.cfg for?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #502: Nifi-1972 Apache Ignite Put Cache Processor

2016-06-09 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/502#discussion_r66515866
  
--- Diff: 
nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/java/org/apache/nifi/processors/ignite/AbstractIgniteProcessor.java
 ---
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.nifi.processors.ignite;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+
+/**
+ * Base class for Ignite processors
+ */
+public abstract class AbstractIgniteProcessor extends AbstractProcessor  {
+
+/**
+ * Ignite spring configuration file
+ */
+public static final PropertyDescriptor IGNITE_CONFIGURATION_FILE = new 
PropertyDescriptor.Builder()
+.name("Ignite Spring Properties Xml File")
+.description("Ignite spring configuration file, 
/.xml")
--- End diff --

Shouldn't it be required, so there is no surprise for the user? At the 
least, I think you could document that if this value is not set, a default 
configuration is used that binds to a local Ignite endpoint on 
127.0.0.1:47500..47509.
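
To make that concrete, a hedged sketch of what such a localhost-only default could look like is shown below; the client mode and discovery settings are assumptions based on the 127.0.0.1:47500..47509 range mentioned above, not the configuration actually bundled with the PR.

    import java.util.Collections;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

    public class DefaultLocalIgniteClient {
        public static void main(String[] args) {
            // Restrict discovery to the local default port range.
            TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
            ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));

            TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
            discoverySpi.setIpFinder(ipFinder);

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setClientMode(true);
            cfg.setDiscoverySpi(discoverySpi);

            try (Ignite ignite = Ignition.start(cfg)) {
                System.out.println("Server nodes: " + ignite.cluster().forServers().nodes());
            }
        }
    }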


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #502: Nifi-1972 Apache Ignite Put Cache Processor

2016-06-09 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/502#discussion_r66514230
  
--- Diff: 
nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/java/org/apache/nifi/processors/ignite/cache/AbstractIgniteCacheProcessor.java
 ---
@@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.ignite.cache;
+
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.ignite.IgniteCache;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.ignite.AbstractIgniteProcessor;
+
+/**
+ * Base class of Ignite cache based processor
+ */
+public abstract class AbstractIgniteCacheProcessor extends 
AbstractIgniteProcessor {
+
+/**
+ * Flow File attribute for cache entry key
+ */
+public static final String IGNITE_CACHE_ENTRY_KEY = 
"ignite.cache.entry.key";
--- End diff --

Wouldn't it be better to let the user decide which attribute is used as the key 
in Ignite? I believe this is what we do in the PutDistributedMapCache processor, 
and it would avoid an unnecessary step with an UpdateAttribute processor.
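
For comparison, a sketch of the PutDistributedMapCache-style approach suggested here; the property name, default value, and evaluation call are illustrative assumptions, not code from either processor.

    // Hypothetical property: let the user pick the key via Expression Language
    // instead of a fixed FlowFile attribute.
    public static final PropertyDescriptor CACHE_ENTRY_KEY = new PropertyDescriptor.Builder()
            .name("Cache Entry Identifier")
            .description("A FlowFile attribute, or attribute expression, whose value is used as the cache key.")
            .required(true)
            .expressionLanguageSupported(true)
            .defaultValue("${ignite.cache.entry.key}")
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();

    // In onTrigger(), the key would then be resolved per FlowFile, e.g.:
    // String key = context.getProperty(CACHE_ENTRY_KEY)
    //                     .evaluateAttributeExpressions(flowFile).getValue();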


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request #502: Nifi-1972 Apache Ignite Put Cache Processor

2016-06-09 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/502#discussion_r66514855
  
--- Diff: 
nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/java/org/apache/nifi/processors/ignite/cache/PutIgniteCache.java
 ---
@@ -0,0 +1,373 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.ignite.cache;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.AbstractMap;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.ignite.IgniteDataStreamer;
+import org.apache.ignite.lang.IgniteFuture;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.ReadsAttributes;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.stream.io.StreamUtils;
+
+/**
+ * Put cache processors which pushes the flow file content into Ignite 
Cache using
+ * DataStreamer interface
+ */
+@EventDriven
+@SupportsBatching
+@Tags({ "Ignite", "insert", "update", "stream", "write", "put", "cache", 
"key" })
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Stream the contents of a FlowFile to Ignite Cache 
using DataStreamer. " +
+"The processor uses the value of FlowFile attribute " + 
PutIgniteCache.IGNITE_CACHE_ENTRY_KEY + " as the " +
+"cache key and the byte array of the FlowFile as the value of the 
cache entry value.  Both the string key and a " +
+" non-empty byte array value are required otherwise the FlowFile is 
transfered to the failure relation.")
+@WritesAttributes({
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_TOTAL_COUNT, description = "The total 
number of FlowFile in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_ITEM_NUMBER, description = "The item 
number of FlowFile in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_SUCCESSFUL_ITEM_NUMBER, description = 
"The successful FlowFile item number"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_SUCCESSFUL_COUNT, description = "The 
number of successful FlowFiles"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_ITEM_NUMBER, description = "The 
failed FlowFile item number"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_COUNT, description = "The total 
number of failed FlowFiles in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_REASON_ATTRIBUTE_KEY, description 
= "The failed reason attribute key")
+})
+@ReadsAttributes({
+@ReadsAttribute(attribute = PutIgniteCache.IGNITE_CACHE_ENTRY_KEY, 
description = "Ignite cache key"),
+})
+public class 

[GitHub] nifi pull request #502: Nifi-1972 Apache Ignite Put Cache Processor

2016-06-09 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/502#discussion_r66513705
  
--- Diff: 
nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/java/org/apache/nifi/processors/ignite/cache/PutIgniteCache.java
 ---
@@ -0,0 +1,373 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.ignite.cache;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.AbstractMap;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.ignite.IgniteDataStreamer;
+import org.apache.ignite.lang.IgniteFuture;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.ReadsAttributes;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.stream.io.StreamUtils;
+
+/**
+ * Put cache processors which pushes the flow file content into Ignite 
Cache using
+ * DataStreamer interface
+ */
+@EventDriven
+@SupportsBatching
+@Tags({ "Ignite", "insert", "update", "stream", "write", "put", "cache", 
"key" })
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Stream the contents of a FlowFile to Ignite Cache 
using DataStreamer. " +
+"The processor uses the value of FlowFile attribute " + 
PutIgniteCache.IGNITE_CACHE_ENTRY_KEY + " as the " +
+"cache key and the byte array of the FlowFile as the value of the 
cache entry value.  Both the string key and a " +
+" non-empty byte array value are required otherwise the FlowFile is 
transfered to the failure relation.")
+@WritesAttributes({
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_TOTAL_COUNT, description = "The total 
number of FlowFile in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_ITEM_NUMBER, description = "The item 
number of FlowFile in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_SUCCESSFUL_ITEM_NUMBER, description = 
"The successful FlowFile item number"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_SUCCESSFUL_COUNT, description = "The 
number of successful FlowFiles"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_ITEM_NUMBER, description = "The 
failed FlowFile item number"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_COUNT, description = "The total 
number of failed FlowFiles in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_REASON_ATTRIBUTE_KEY, description 
= "The failed reason attribute key")
+})
+@ReadsAttributes({
+@ReadsAttribute(attribute = PutIgniteCache.IGNITE_CACHE_ENTRY_KEY, 
description = "Ignite cache key"),
+})
+public class 

[GitHub] nifi pull request #502: Nifi-1972 Apache Ignite Put Cache Processor

2016-06-09 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/502#discussion_r66513495
  
--- Diff: 
nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/java/org/apache/nifi/processors/ignite/cache/PutIgniteCache.java
 ---
@@ -0,0 +1,373 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.ignite.cache;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.AbstractMap;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.ignite.IgniteDataStreamer;
+import org.apache.ignite.lang.IgniteFuture;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.ReadsAttributes;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.stream.io.StreamUtils;
+
+/**
+ * Put cache processors which pushes the flow file content into Ignite 
Cache using
+ * DataStreamer interface
+ */
+@EventDriven
+@SupportsBatching
+@Tags({ "Ignite", "insert", "update", "stream", "write", "put", "cache", 
"key" })
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Stream the contents of a FlowFile to Ignite Cache 
using DataStreamer. " +
+"The processor uses the value of FlowFile attribute " + 
PutIgniteCache.IGNITE_CACHE_ENTRY_KEY + " as the " +
+"cache key and the byte array of the FlowFile as the value of the 
cache entry value.  Both the string key and a " +
+" non-empty byte array value are required otherwise the FlowFile is 
transfered to the failure relation.")
+@WritesAttributes({
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_TOTAL_COUNT, description = "The total 
number of FlowFile in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_ITEM_NUMBER, description = "The item 
number of FlowFile in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_SUCCESSFUL_ITEM_NUMBER, description = 
"The successful FlowFile item number"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_SUCCESSFUL_COUNT, description = "The 
number of successful FlowFiles"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_ITEM_NUMBER, description = "The 
failed FlowFile item number"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_COUNT, description = "The total 
number of failed FlowFiles in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_REASON_ATTRIBUTE_KEY, description 
= "The failed reason attribute key")
+})
+@ReadsAttributes({
+@ReadsAttribute(attribute = PutIgniteCache.IGNITE_CACHE_ENTRY_KEY, 
description = "Ignite cache key"),
+})
+public class 

[GitHub] nifi pull request #502: Nifi-1972 Apache Ignite Put Cache Processor

2016-06-09 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/502#discussion_r66513208
  
--- Diff: 
nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/java/org/apache/nifi/processors/ignite/cache/PutIgniteCache.java
 ---
@@ -0,0 +1,373 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.ignite.cache;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.AbstractMap;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.ignite.IgniteDataStreamer;
+import org.apache.ignite.lang.IgniteFuture;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.ReadsAttributes;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.stream.io.StreamUtils;
+
+/**
+ * Put cache processors which pushes the flow file content into Ignite 
Cache using
+ * DataStreamer interface
+ */
+@EventDriven
+@SupportsBatching
+@Tags({ "Ignite", "insert", "update", "stream", "write", "put", "cache", 
"key" })
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Stream the contents of a FlowFile to Ignite Cache 
using DataStreamer. " +
+"The processor uses the value of FlowFile attribute " + 
PutIgniteCache.IGNITE_CACHE_ENTRY_KEY + " as the " +
+"cache key and the byte array of the FlowFile as the value of the 
cache entry value.  Both the string key and a " +
+" non-empty byte array value are required otherwise the FlowFile is 
transfered to the failure relation.")
+@WritesAttributes({
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_TOTAL_COUNT, description = "The total 
number of FlowFile in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_ITEM_NUMBER, description = "The item 
number of FlowFile in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_SUCCESSFUL_ITEM_NUMBER, description = 
"The successful FlowFile item number"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_SUCCESSFUL_COUNT, description = "The 
number of successful FlowFiles"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_ITEM_NUMBER, description = "The 
failed FlowFile item number"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_COUNT, description = "The total 
number of failed FlowFiles in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_REASON_ATTRIBUTE_KEY, description 
= "The failed reason attribute key")
+})
+@ReadsAttributes({
+@ReadsAttribute(attribute = PutIgniteCache.IGNITE_CACHE_ENTRY_KEY, 
description = "Ignite cache key"),
+})
+public class 

[GitHub] nifi pull request #502: Nifi-1972 Apache Ignite Put Cache Processor

2016-06-09 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/502#discussion_r66512938
  
--- Diff: 
nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/java/org/apache/nifi/processors/ignite/cache/PutIgniteCache.java
 ---
@@ -0,0 +1,373 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.ignite.cache;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.AbstractMap;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.ignite.IgniteDataStreamer;
+import org.apache.ignite.lang.IgniteFuture;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.ReadsAttribute;
+import org.apache.nifi.annotation.behavior.ReadsAttributes;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.InputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.stream.io.StreamUtils;
+
+/**
+ * Put cache processors which pushes the flow file content into Ignite 
Cache using
+ * DataStreamer interface
+ */
+@EventDriven
+@SupportsBatching
+@Tags({ "Ignite", "insert", "update", "stream", "write", "put", "cache", 
"key" })
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Stream the contents of a FlowFile to Ignite Cache 
using DataStreamer. " +
+"The processor uses the value of FlowFile attribute " + 
PutIgniteCache.IGNITE_CACHE_ENTRY_KEY + " as the " +
+"cache key and the byte array of the FlowFile as the value of the 
cache entry value.  Both the string key and a " +
+" non-empty byte array value are required otherwise the FlowFile is 
transfered to the failure relation.")
+@WritesAttributes({
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_TOTAL_COUNT, description = "The total 
number of FlowFile in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_ITEM_NUMBER, description = "The item 
number of FlowFile in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_SUCCESSFUL_ITEM_NUMBER, description = 
"The successful FlowFile item number"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_SUCCESSFUL_COUNT, description = "The 
number of successful FlowFiles"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_ITEM_NUMBER, description = "The 
failed FlowFile item number"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_COUNT, description = "The total 
number of failed FlowFiles in the batch"),
+@WritesAttribute(attribute = 
PutIgniteCache.IGNITE_BATCH_FLOW_FILE_FAILED_REASON_ATTRIBUTE_KEY, description 
= "The failed reason attribute key")
+})
+@ReadsAttributes({
+@ReadsAttribute(attribute = PutIgniteCache.IGNITE_CACHE_ENTRY_KEY, 
description = "Ignite cache key"),
+})
+public class 

[GitHub] nifi issue #497: NIFI-1857: HTTPS Site-to-Site

2016-06-09 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/497
  
@ijokarumawak I added the "This closes #497" message to the commit that I 
pushed, but it doesn't seem to have worked... can you close the PR?




[GitHub] nifi pull request #502: Nifi-1972 Apache Ignite Put Cache Processor

2016-06-09 Thread pvillard31
Github user pvillard31 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/502#discussion_r66510540
  
--- Diff: 
nifi-nar-bundles/nifi-ignite-bundle/nifi-ignite-processors/src/main/java/org/apache/nifi/processors/ignite/AbstractIgniteProcessor.java
 ---
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.nifi.processors.ignite;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.ignite.Ignite;
+import org.apache.ignite.Ignition;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+
+/**
+ * Base class for Ignite processors
+ */
+public abstract class AbstractIgniteProcessor extends AbstractProcessor  {
+
+/**
+ * Ignite spring configuration file
+ */
+public static final PropertyDescriptor IGNITE_CONFIGURATION_FILE = new 
PropertyDescriptor.Builder()
+.name("Ignite Spring Properties Xml File")
--- End diff --

Could you use both .name() and .displayName() for backward compatibility?

See 
https://mail-archives.apache.org/mod_mbox/nifi-dev/201605.mbox/%3c5a6fdf1e-1889-46fe-a3c4-5d2f0a905...@apache.org%3E
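
A minimal sketch of the suggested pattern, where .name() keeps the stable identifier 
already persisted in existing flows and .displayName() carries the label shown in the 
UI; the display label below is illustrative:

    import org.apache.nifi.components.PropertyDescriptor;
    import org.apache.nifi.processor.util.StandardValidators;

    public class DisplayNameSketch {

        // Keeping the original .name() preserves references in existing flow.xml files,
        // while .displayName() controls what the UI shows going forward.
        public static final PropertyDescriptor IGNITE_CONFIGURATION_FILE = new PropertyDescriptor.Builder()
                .name("Ignite Spring Properties Xml File")       // stable identifier, unchanged
                .displayName("Ignite Spring Configuration File") // friendlier label (illustrative)
                .description("Path to an Ignite Spring configuration XML file.")
                .required(false)
                .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
                .build();
    }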




[GitHub] nifi issue #502: Nifi-1972 Apache Ignite Put Cache Processor

2016-06-09 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/502
  
@mans2singh Could you check all the lines marked as removed? I believe 
there should not be any and this is probably because of some settings in your 
IDE.




[GitHub] nifi issue #497: NIFI-1857: HTTPS Site-to-Site

2016-06-09 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/497
  
@ijokarumawak this is looking good now! I have pulled down the latest PR, 
rebased against master, and have been able to test this running directly 
against my NiFi instance and while using nginx as a proxy. Nicely done! I have 
pushed this to master.




[GitHub] nifi issue #513: PutHBaseJSON processor treats all values as Strings

2016-06-09 Thread rtempleton
Github user rtempleton commented on the issue:

https://github.com/apache/nifi/pull/513
  
I like that idea.




[GitHub] nifi issue #513: PutHBaseJSON processor treats all values as Strings

2016-06-09 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/513
  
That's a great point about leveraging any changes they make to the Bytes 
class... I'm now thinking: what if we added simple methods to the 
HBaseClientService interface [1] that wrapped the calls to Bytes?

Something like:

`byte[] toBytes(boolean b)
 byte[] toBytes(long l)
 byte[] toBytes(double d)
 byte[] toBytes(String s)
`
Then HBase_1_1_2_ClientService would implement those methods using the 
Bytes class from 1.1.2, and if someone implemented a 0.94 version it would be 
up to them to use the Bytes class from 0.94 or whatever they wanted. The 
processors would never know they are dealing with the Bytes class.

[1] 
https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-hbase-client-service-api/src/main/java/org/apache/nifi/hbase/HBaseClientService.java
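
A rough sketch of what those service-level wrappers could look like, assuming delegation 
to the 1.1.2 Bytes class; the interface and class names here are illustrative, not the 
final API:

    import org.apache.hadoop.hbase.util.Bytes;

    // Hypothetical additions to the client service API: thin wrappers so processors
    // never touch the version-specific Bytes class directly.
    interface HBaseByteConversions {
        byte[] toBytes(boolean b);
        byte[] toBytes(long l);
        byte[] toBytes(double d);
        byte[] toBytes(String s);
    }

    // An HBase 1.1.2-backed implementation simply delegates to that version's Bytes.
    class HBase_1_1_2_ByteConversions implements HBaseByteConversions {
        @Override public byte[] toBytes(boolean b) { return Bytes.toBytes(b); }
        @Override public byte[] toBytes(long l)    { return Bytes.toBytes(l); }
        @Override public byte[] toBytes(double d)  { return Bytes.toBytes(d); }
        @Override public byte[] toBytes(String s)  { return Bytes.toBytes(s); }
    }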




[GitHub] nifi issue #515: NIFI-826 [REVIEW ONLY] Initial commit for deterministic tem...

2016-06-09 Thread olegz
Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/515
  
@mcgilman so, this is the initial commit that essentially demonstrates the 
approach discussed in JIRA. Basically, the new ID, _inceptionId_, is generated once 
and is immutable and perpetual. With such a contract, the serialization of the 
components is now deterministic. There is an initial test that demonstrates this, 
but I'll be adding more. Let's find time to discuss it, and also to see whether we 
need to address purging of the elements that do not belong there as part of this 
effort or a separate one. 




NiFi Script Tester v1.1.1 available

2016-06-09 Thread Matt Burgess
All,

Just wanted to let you know that my NiFi Script Tester has been
updated; it turned out the original version didn't work so well :-P

I've fixed the bugs I found and added some helpful things (such as
Apache Commons IO to the fat JAR, for IOUtils.toString()) and the
"-module" option to add module paths like you would set in the Module
Directory property in the ExecuteScript processor.

Also I finally got around to writing a blog post about it:

http://funnifi.blogspot.com/2016/06/testing-executescript-processor-scripts.html

If you use it, please let me know how/if it works for you :)

Regards,
Matt


[GitHub] nifi issue #509: NIFI-1982: Use Compressed check box value.

2016-06-09 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/509
  
@ijokarumawak you can close this when you get a chance since it has been 
merged, thanks!




[GitHub] nifi issue #513: PutHBaseJSON processor treats all values as Strings

2016-06-09 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/513
  
Ryan, thanks for submitting this PR! 

Is it possible that we could remove the dependency on hbase-client here?

The reason is that the hbase processor bundle purposely didn't 
include the hbase-client and instead used the hbase-client-service (which is 
where the hbase-client dependency is). This means that someone could implement 
an hbase-client-service for another version of HBase, such as 0.94, and the 
processors wouldn't have to change at all. I think adding this dependency here 
would make that impossible, because a specific version of hbase-client would 
now be bundled with the processors.

It seems like the reason for the dependency was to use the Bytes class that 
provides the conversion, which makes total sense. I'm wondering if we could 
look at what that code is doing and possibly write our own util, or, if there is 
some other utility library that does this, maybe we could use that.

Thoughts?




[GitHub] nifi pull request #514: fixes NIFI-1989

2016-06-09 Thread PuspenduBanerjee
GitHub user PuspenduBanerjee opened a pull request:

https://github.com/apache/nifi/pull/514

fixes NIFI-1989

Fixes : https://issues.sonatype.org/browse/MVNCENTRAL-244

> Taking a look at the POM at 
http://repo1.maven.org/maven2/org/apache/commons/commons-io/1.3.2/commons-io-1.3.2.pom,
 the groupId and artifactId are different from the deploy path and it seems the 
identical artifacts are available at 
http://repo1.maven.org/maven2/commons-io/commons-io/1.3.2/
Having the same classes available under two different GAV coordinates may 
lead to a lot of problems should one decide to update the component to a new 
version. I see a lot of old builds could break when we delete from the old 
coordinates, but what about a (correct) relocation POM? Builds would continue 
to work and the mishap would appear on the screen.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/PuspenduBanerjee/nifi NIFI-1989

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/514.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #514


commit e25bdd7b2b455728fb14ab7d0cd9a2c76c271aad
Author: Puspendu Banerjee 
Date:   2016-06-09T18:00:57Z

fixes NIFI-1989






Re: NIFI ListenTCP Processor

2016-06-09 Thread venkat
Thanks Bryan for creating the JIRA task. Yes, it would be very useful for us to have
an option to change the delimiter so that the processor can pick that delimiter up.
Currently, the exact-match scenario works well for my use case.

Thanks,
Venkat



--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/NIFI-ListenTCP-Processor-tp11212p11277.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


[GitHub] nifi issue #492: NIFI-1975 - Processor for parsing evtx files

2016-06-09 Thread brosander
Github user brosander commented on the issue:

https://github.com/apache/nifi/pull/492
  
changes merged upstream




[GitHub] nifi pull request #492: NIFI-1975 - Processor for parsing evtx files

2016-06-09 Thread brosander
Github user brosander closed the pull request at:

https://github.com/apache/nifi/pull/492




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-09 Thread YolandaMDavis
Github user YolandaMDavis commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66475348
  
--- Diff: 
nifi-api/src/main/java/org/apache/nifi/registry/FileVariableRegistry.java ---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.registry;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.Path;
+import java.util.Map;
+
+
+public abstract class FileVariableRegistry extends 
MultiMapVariableRegistry {
+
+public FileVariableRegistry() {
+super();
+}
+
+public FileVariableRegistry(File... files){
+super();
+addVariables(files);
+}
+
+public FileVariableRegistry(Path... paths){
+super();
+addVariables(paths);
+}
+
+@SuppressWarnings({"unchecked", "rawtypes"})
+public void addVariables(File ...files){
+if(files != null) {
+for (final File file : files) {
+try {
+registry.addMap(convertFile(file));
+} catch (IOException iex) {
+throw new IllegalArgumentException("A file provided 
was invalid.", iex);
--- End diff --

After reviewing, I made the suggested change, since removing public from the 
constructor to ensure factory-only creation made the exception thrown from the 
constructor much more palatable :).
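
For context, a generic sketch of the pattern being described: a non-public constructor 
plus a static factory method, so the IOException-to-IllegalArgumentException translation 
lives in a single creation path; the names are illustrative only:

    import java.io.File;
    import java.io.IOException;

    final class FileBackedRegistrySketch {

        // The throwing constructor is hidden; callers must go through the factory.
        private FileBackedRegistrySketch(File file) throws IOException {
            // load and validate the file here
        }

        static FileBackedRegistrySketch create(File file) {
            try {
                return new FileBackedRegistrySketch(file);
            } catch (IOException ioe) {
                throw new IllegalArgumentException("A file provided was invalid: " + file, ioe);
            }
        }
    }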




[GitHub] nifi pull request #513: PutHBaseJSON processor treats all values as Strings

2016-06-09 Thread rtempleton
GitHub user rtempleton opened a pull request:

https://github.com/apache/nifi/pull/513

PutHBaseJSON processor treats all values as Strings

The processor will now inspect the node value to determine its type and convert 
it accordingly:
Numeric, integral - Long (assumes widest type)
Numeric, not integral - Double (assumes widest type)
Logical - Boolean
Everything else (including the current Complex Type logic) - String

Values that represent the row key continue to be implicitly treated as 
Strings by the processor.
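
As a rough illustration of that mapping, assuming a Jackson JsonNode model and HBase's 
Bytes utility for the conversions (a sketch, not the PR's actual code):

    import com.fasterxml.jackson.databind.JsonNode;
    import org.apache.hadoop.hbase.util.Bytes;

    class JsonValueToBytesSketch {

        // Mirrors the mapping above: integral numbers -> Long, other numbers -> Double,
        // logical values -> Boolean, everything else -> String.
        static byte[] toBytes(JsonNode node) {
            if (node.isIntegralNumber()) {
                return Bytes.toBytes(node.asLong());
            } else if (node.isNumber()) {
                return Bytes.toBytes(node.asDouble());
            } else if (node.isBoolean()) {
                return Bytes.toBytes(node.asBoolean());
            } else if (node.isTextual()) {
                return Bytes.toBytes(node.asText());
            } else {
                // complex types (objects/arrays) fall back to their JSON string form
                return Bytes.toBytes(node.toString());
            }
        }
    }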

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rtempleton/nifi NIFI-1895

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/513.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #513


commit f7d494cf968970448f7789d23e2b820a382a0edd
Author: rtempleton 
Date:   2016-06-09T15:16:49Z

PutHBaseJSON processor treats all values as Strings

The operator will now inspect the node value to determine type and convert 
as such.
Numeric integral - Long (assumes widest type)
Numeric not integral - Double (assumes widest type)
Logical - Boolean
everything else (including current Complex Type logic) - String

Values that represent the row key continue to be implicitly treated as 
Strings by the processor






[GitHub] nifi issue #492: NIFI-1975 - Processor for parsing evtx files

2016-06-09 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/492
  
+1 LGTM, built and ran tests and contrib-check. Ran a NiFi flow with 
multiple EVTX files exercising all relationships and granularities.

Great contribution, thanks much!  Merging to master




[GitHub] nifi pull request #512: NIFI-401

2016-06-09 Thread beugley
GitHub user beugley opened a pull request:

https://github.com/apache/nifi/pull/512

NIFI-401



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/beugley/nifi NIFI-401

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/512.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #512


commit 011b5e724648d3f72746b18e17e79be0ebee200d
Author: Brian Eugley 
Date:   2016-06-09T15:32:15Z

NIFI-401






Re: Limiting a queue

2016-06-09 Thread Shaine Berube
I am on GitHub, but not with this project, and there is some proprietary
code contained within the folder I'm developing in, so I'll create a zip
file of the Java files it contains.  You should be able to gather
and link in the libraries yourself; they're mostly pretty standard.

On Thu, Jun 9, 2016 at 9:09 AM, Matt Burgess  wrote:

> Are you working in GitHub? That's a pretty easy way to share code, I
> can fork your repo/branch and issue pull requests, or vice versa. If
> you are working from a fork of the NiFi repo, I think you can make me
> (mattyb149) a contributor to your fork/branch, and we'd go from there.
> The latter is a cleaner solution IMO, since once it's done we can
> squash the commits and issue the PR directly to the NiFi project for
> review/merge.
>
> If you're not on GitHub, we can figure something else out. Please feel
> free to DM me at mattyb...@apache.org :)
>
> Thanks,
> Matt
>
> On Thu, Jun 9, 2016 at 10:51 AM, Shaine Berube
>  wrote:
> > @Matt
> > I would enjoy collaborating on it, I'm thinking perhaps with some
> > additional eyes this data flow can get even faster and get some
> additional
> > bugs worked out.  Presently, the necessity of the situation requiring me
> to
> > mostly hard-code the SQL queries, the data flow only works between an MS
> > SQL and a MySQL database, MS SQL being the source and MySQL being the
> > target, but with some small changes and perhaps an added library that can
> > be changed / fixed so that this data flow will work with any database
> being
> > in source and any database being in target.
> >
> > As far as collaboration, what files should I send and where?
> >
> > On Thu, Jun 9, 2016 at 7:58 AM, Matt Burgess 
> wrote:
> >
> >> Shaine,
> >>
> >> I was about to start work on processor(s) almost exactly like the ones
> >> you are describing. Is this something you'd be interested in
> >> collaborating on? I've worked on the SQL generation piece before
> >> (though not for NiFi).
> >>
> >> Regards,
> >> Matt
> >>
> >> On Wed, Jun 8, 2016 at 5:29 PM, Shaine Berube
> >>  wrote:
> >> > Thanks for your responses,
> >> >
> >> > As far as possibly contributing these back to the community, I need a
> >> good
> >> > way for them to connect to any database and generate that database's
> >> > specific flavor of SQL, so when I get that functionality built out I
> >> would
> >> > be glad to.
> >> >
> >> > As far as the memory goes, step two sends an insert query per flow
> file
> >> > (for migration), and each query is designed to pull out 1000 records,
> if
> >> > that makes more sense.  But good to know that Nifi can handle a lot of
> >> flow
> >> > files, with the back-pressure configured it should wait for the queue
> >> ahead
> >> > to clear out before starting the next table.
> >> > Also forgot to mention, this is interacting with two live databases,
> but
> >> is
> >> > going through my VM, so in other words, it can actually be faster if
> >> placed
> >> > on the machine the target database is running on.
> >> >
> >> > Fun facts - I'm running bench marking now, the speeds I'm seeing are
> >> > because of the concurrent processing functionality of Nifi
> >> > 562,000 records inserted from 1 source table into 1 target table -
> >> speed: 8
> >> > minutes 14 seconds (I'm being throttled by the source database).
> >> > At that speed, it is approximately 1,137 records per second.
> >> >
> >> > Step two is running 1 thread, step three is running 60 threads, step
> 4 is
> >> > running 30 threads.
> >> >
> >> > On Wed, Jun 8, 2016 at 1:23 PM, Mark Payne 
> wrote:
> >> >
> >> >> Shaine,
> >> >>
> >> >> This is a really cool set of functionality! Any chance you would be
> >> >> interested in contributing
> >> >> these processors back to the NiFi community?
> >> >>
> >> >> Regardless, one thing to consider here is that with NiFi, because of
> the
> >> >> way that the repositories
> >> >> are structured, the way that we think about heap utilization is a
> little
> >> >> different than with most projects.
> >> >> As Bryan pointed out, you will want to stream the content directly to
> >> the
> >> >> FlowFile, rather than buffering
> >> >> in memory. The framework will handle the rest. Where we will be more
> >> >> concerned about heap utilization
> >> >> is actually in the number of FlowFiles that are held in memory at any
> >> one
> >> >> time, not the size of those FlowFiles.
> >> >> So you will be better off keeping a smaller number of FlowFiles, each
> >> >> having larger content. So I would
> >> >> recommend making the number of records per FlowFile configurable,
> >> perhaps
> >> >> with a default value of
> >> >> 25,000. This would also result in far fewer JDBC calls, which should
> be
> >> >> beneficial performance-wise.
> >> >> NiFi will handle swapping FlowFiles to disk when they are queued up,
> so
> >> >> you 

[GitHub] nifi issue #509: NIFI-1982: Use Compressed check box value.

2016-06-09 Thread ijokarumawak
Github user ijokarumawak commented on the issue:

https://github.com/apache/nifi/pull/509
  
@bbende Yes, the same change is in PR #497 . This is only for 0.x. Thanks! 




Re: Limiting a queue

2016-06-09 Thread Matt Burgess
Are you working in GitHub? That's a pretty easy way to share code, I
can fork your repo/branch and issue pull requests, or vice versa. If
you are working from a fork of the NiFi repo, I think you can make me
(mattyb149) a contributor to your fork/branch, and we'd go from there.
The latter is a cleaner solution IMO, since once it's done we can
squash the commits and issue the PR directly to the NiFi project for
review/merge.

If you're not on GitHub, we can figure something else out. Please feel
free to DM me at mattyb...@apache.org :)

Thanks,
Matt

On Thu, Jun 9, 2016 at 10:51 AM, Shaine Berube
 wrote:
> @Matt
> I would enjoy collaborating on it, I'm thinking perhaps with some
> additional eyes this data flow can get even faster and get some additional
> bugs worked out.  Presently, the necessity of the situation requiring me to
> mostly hard-code the SQL queries, the data flow only works between an MS
> SQL and a MySQL database, MS SQL being the source and MySQL being the
> target, but with some small changes and perhaps an added library that can
> be changed / fixed so that this data flow will work with any database being
> in source and any database being in target.
>
> As far as collaboration, what files should I send and where?
>
> On Thu, Jun 9, 2016 at 7:58 AM, Matt Burgess  wrote:
>
>> Shaine,
>>
>> I was about to start work on processor(s) almost exactly like the ones
>> you are describing. Is this something you'd be interested in
>> collaborating on? I've worked on the SQL generation piece before
>> (though not for NiFi).
>>
>> Regards,
>> Matt
>>
>> On Wed, Jun 8, 2016 at 5:29 PM, Shaine Berube
>>  wrote:
>> > Thanks for your responses,
>> >
>> > As far as possibly contributing these back to the community, I need a
>> good
>> > way for them to connect to any database and generate that database's
>> > specific flavor of SQL, so when I get that functionality built out I
>> would
>> > be glad to.
>> >
>> > As far as the memory goes, step two sends an insert query per flow file
>> > (for migration), and each query is designed to pull out 1000 records, if
>> > that makes more sense.  But good to know that Nifi can handle a lot of
>> flow
>> > files, with the back-pressure configured it should wait for the queue
>> ahead
>> > to clear out before starting the next table.
>> > Also forgot to mention, this is interacting with two live databases, but
>> is
>> > going through my VM, so in other words, it can actually be faster if
>> placed
>> > on the machine the target database is running on.
>> >
>> > Fun facts - I'm running bench marking now, the speeds I'm seeing are
>> > because of the concurrent processing functionality of Nifi
>> > 562,000 records inserted from 1 source table into 1 target table -
>> speed: 8
>> > minutes 14 seconds (I'm being throttled by the source database).
>> > At that speed, it is approximately 1,137 records per second.
>> >
>> > Step two is running 1 thread, step three is running 60 threads, step 4 is
>> > running 30 threads.
>> >
>> > On Wed, Jun 8, 2016 at 1:23 PM, Mark Payne  wrote:
>> >
>> >> Shaine,
>> >>
>> >> This is a really cool set of functionality! Any chance you would be
>> >> interested in contributing
>> >> these processors back to the NiFi community?
>> >>
>> >> Regardless, one thing to consider here is that with NiFi, because of the
>> >> way that the repositories
>> >> are structured, the way that we think about heap utilization is a little
>> >> different than with most projects.
>> >> As Bryan pointed out, you will want to stream the content directly to
>> the
>> >> FlowFile, rather than buffering
>> >> in memory. The framework will handle the rest. Where we will be more
>> >> concerned about heap utilization
>> >> is actually in the number of FlowFiles that are held in memory at any
>> one
>> >> time, not the size of those FlowFiles.
>> >> So you will be better off keeping a smaller number of FlowFiles, each
>> >> having larger content. So I would
>> >> recommend making the number of records per FlowFile configurable,
>> perhaps
>> >> with a default value of
>> >> 25,000. This would also result in far fewer JDBC calls, which should be
>> >> beneficial performance-wise.
>> >> NiFi will handle swapping FlowFiles to disk when they are queued up, so
>> >> you can certainly queue up
>> >> millions of FlowFiles in a single queue without exhausting your heap
>> >> space. However, if you are buffering
>> >> up all of those FlowFiles in your processor, you may run into problems,
>> so
>> >> using a smaller number of
>> >> FlowFiles, each with many thousand records will likely provide the best
>> >> heap utilization.
>> >>
>> >> Does this help?
>> >>
>> >> Thanks
>> >> -Mark
>> >>
>> >>
>> >> > On Jun 8, 2016, at 2:05 PM, Bryan Bende  wrote:
>> >> >
>> >> > Thank you for the detailed explanation! It sounds like you have built
>> >> 

Re: Limiting a queue

2016-06-09 Thread Shaine Berube
@Matt
I would enjoy collaborating on it; I'm thinking that with some additional eyes
this data flow can get even faster and some additional bugs can get worked out.
Presently, because the situation required me to mostly hard-code the SQL queries,
the data flow only works between an MS SQL and a MySQL database, MS SQL being the
source and MySQL being the target; but with some small changes, and perhaps an
added library, it can be changed/fixed so that this data flow will work with any
source database and any target database.

As far as collaboration, what files should I send and where?

On Thu, Jun 9, 2016 at 7:58 AM, Matt Burgess  wrote:

> Shaine,
>
> I was about to start work on processor(s) almost exactly like the ones
> you are describing. Is this something you'd be interested in
> collaborating on? I've worked on the SQL generation piece before
> (though not for NiFi).
>
> Regards,
> Matt
>
> On Wed, Jun 8, 2016 at 5:29 PM, Shaine Berube
>  wrote:
> > Thanks for your responses,
> >
> > As far as possibly contributing these back to the community, I need a
> good
> > way for them to connect to any database and generate that database's
> > specific flavor of SQL, so when I get that functionality built out I
> would
> > be glad to.
> >
> > As far as the memory goes, step two sends an insert query per flow file
> > (for migration), and each query is designed to pull out 1000 records, if
> > that makes more sense.  But good to know that Nifi can handle a lot of
> flow
> > files, with the back-pressure configured it should wait for the queue
> ahead
> > to clear out before starting the next table.
> > Also forgot to mention, this is interacting with two live databases, but
> is
> > going through my VM, so in other words, it can actually be faster if
> placed
> > on the machine the target database is running on.
> >
> > Fun facts - I'm running bench marking now, the speeds I'm seeing are
> > because of the concurrent processing functionality of Nifi
> > 562,000 records inserted from 1 source table into 1 target table -
> speed: 8
> > minutes 14 seconds (I'm being throttled by the source database).
> > At that speed, it is approximately 1,137 records per second.
> >
> > Step two is running 1 thread, step three is running 60 threads, step 4 is
> > running 30 threads.
> >
> > On Wed, Jun 8, 2016 at 1:23 PM, Mark Payne  wrote:
> >
> >> Shaine,
> >>
> >> This is a really cool set of functionality! Any chance you would be
> >> interested in contributing
> >> these processors back to the NiFi community?
> >>
> >> Regardless, one thing to consider here is that with NiFi, because of the
> >> way that the repositories
> >> are structured, the way that we think about heap utilization is a little
> >> different than with most projects.
> >> As Bryan pointed out, you will want to stream the content directly to
> the
> >> FlowFile, rather than buffering
> >> in memory. The framework will handle the rest. Where we will be more
> >> concerned about heap utilization
> >> is actually in the number of FlowFiles that are held in memory at any
> one
> >> time, not the size of those FlowFiles.
> >> So you will be better off keeping a smaller number of FlowFiles, each
> >> having larger content. So I would
> >> recommend making the number of records per FlowFile configurable,
> perhaps
> >> with a default value of
> >> 25,000. This would also result in far fewer JDBC calls, which should be
> >> beneficial performance-wise.
> >> NiFi will handle swapping FlowFiles to disk when they are queued up, so
> >> you can certainly queue up
> >> millions of FlowFiles in a single queue without exhausting your heap
> >> space. However, if you are buffering
> >> up all of those FlowFiles in your processor, you may run into problems,
> so
> >> using a smaller number of
> >> FlowFiles, each with many thousand records will likely provide the best
> >> heap utilization.
> >>
> >> Does this help?
> >>
> >> Thanks
> >> -Mark
> >>
> >>
> >> > On Jun 8, 2016, at 2:05 PM, Bryan Bende  wrote:
> >> >
> >> > Thank you for the detailed explanation! It sounds like you have built
> >> > something very cool here.
> >> >
> >> > I'm still digesting the different steps and thinking of what can be
> done,
> >> > but something that initially jumped out at me was
> >> > when you mentioned considering how much memory NiFi has and not
> wanting
> >> to
> >> > go over 1000 records per chunk...
> >> >
> >> > You should be able to read and write the chunks in a streaming fashion
> >> and
> >> > never have the entire chunk in memory. For example,
> >> > when creating the chunk you would be looping over a ResultSet from the
> >> > database and writing each record to the OutputStream of the
> >> > FlowFile, never having all 1000 records in memory. On the downstream
> >> > processor you would read the record from the InputStream of the
> >> > 
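
A minimal sketch of what Mark's suggested "records per FlowFile" setting
could look like as a processor property is below; the class name, property
name, description, and the 25,000 default are illustrative only and are not
part of any existing processor:

    import org.apache.nifi.components.PropertyDescriptor;
    import org.apache.nifi.processor.util.StandardValidators;

    public class MigrationProcessorProperties {

        // Hypothetical property following the suggestion above: how many source
        // records to write into the content of each outgoing FlowFile.
        public static final PropertyDescriptor RECORDS_PER_FLOWFILE = new PropertyDescriptor.Builder()
                .name("Records Per FlowFile")
                .description("Number of source records written into the content of each outgoing FlowFile")
                .required(true)
                .defaultValue("25000")
                .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
                .build();
    }

The processor would then read the value in onTrigger with
context.getProperty(RECORDS_PER_FLOWFILE).asInteger() and use it as the row
limit when building each chunk.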

Re: Limiting a queue

2016-06-09 Thread Matt Burgess
Shaine,

I was about to start work on processor(s) almost exactly like the ones
you are describing. Is this something you'd be interested in
collaborating on? I've worked on the SQL generation piece before
(though not for NiFi).

Regards,
Matt

On Wed, Jun 8, 2016 at 5:29 PM, Shaine Berube
 wrote:
> Thanks for your responses,
>
> As far as possibly contributing these back to the community, I need a good
> way for them to connect to any database and generate that database's
> specific flavor of SQL, so when I get that functionality built out I would
> be glad to.
>
> As far as the memory goes, step two sends an insert query per flow file
> (for migration), and each query is designed to pull out 1000 records, if
> that makes more sense.  But good to know that NiFi can handle a lot of flow
> files, with the back-pressure configured it should wait for the queue ahead
> to clear out before starting the next table.
> Also forgot to mention, this is interacting with two live databases, but is
> going through my VM, so in other words, it can actually be faster if placed
> on the machine the target database is running on.
>
> Fun facts - I'm running benchmarking now; the speeds I'm seeing are
> because of the concurrent processing functionality of NiFi.
> 562,000 records inserted from 1 source table into 1 target table - speed: 8
> minutes 14 seconds (I'm being throttled by the source database).
> At that speed, it is approximately 1,137 records per second.
>
> Step two is running 1 thread, step three is running 60 threads, step 4 is
> running 30 threads.
>
> On Wed, Jun 8, 2016 at 1:23 PM, Mark Payne  wrote:
>
>> Shaine,
>>
>> This is a really cool set of functionality! Any chance you would be
>> interested in contributing
>> these processors back to the NiFi community?
>>
>> Regardless, one thing to consider here is that with NiFi, because of the
>> way that the repositories
>> are structured, the way that we think about heap utilization is a little
>> different than with most projects.
>> As Bryan pointed out, you will want to stream the content directly to the
>> FlowFile, rather than buffering
>> in memory. The framework will handle the rest. Where we will be more
>> concerned about heap utilization
>> is actually in the number of FlowFiles that are held in memory at any one
>> time, not the size of those FlowFiles.
>> So you will be better off keeping a smaller number of FlowFiles, each
>> having larger content. So I would
>> recommend making the number of records per FlowFile configurable, perhaps
>> with a default value of
>> 25,000. This would also result in far fewer JDBC calls, which should be
>> beneficial performance-wise.
>> NiFi will handle swapping FlowFiles to disk when they are queued up, so
>> you can certainly queue up
>> millions of FlowFiles in a single queue without exhausting your heap
>> space. However, if you are buffering
>> up all of those FlowFiles in your processor, you may run into problems, so
>> using a smaller number of
>> FlowFiles, each with many thousand records will likely provide the best
>> heap utilization.
>>
>> Does this help?
>>
>> Thanks
>> -Mark
>>
>>
>> > On Jun 8, 2016, at 2:05 PM, Bryan Bende  wrote:
>> >
>> > Thank you for the detailed explanation! It sounds like you have built
>> > something very cool here.
>> >
>> > I'm still digesting the different steps and thinking of what can be done,
>> > but something that initially jumped out at me was
>> > when you mentioned considering how much memory NiFi has and not wanting
>> to
>> > go over 1000 records per chunk...
>> >
>> > You should be able to read and write the chunks in a streaming fashion
>> and
>> > never have the entire chunk in memory. For example,
>> > when creating the chunk you would be looping over a ResultSet from the
>> > database and writing each record to the OutputStream of the
>> > FlowFile, never having all 1000 records in memory. On the downstream
>> > processor you would read the record from the InputStream of the
>> > FlowFile, sending each one to the destination database, again not having
>> > all 1000 records in memory. If you can operate like this then having
>> > 1000 records per chunk, or 100,000 records per chunk, shouldn't change
>> the
>> > memory requirement for NiFi.
>> >
>> > An example of what we do for ExecuteSQL and QueryDatabaseTable is in the
>> > JdbcCommon util where it converts the ResultSet to Avro records by
>> writing
>> > to the OutputStream:
>> >
>> https://github.com/apache/nifi/blob/e4b7e47836edf47042973e604005058c28eed23b/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/JdbcCommon.java#L80
>> >
>> > Another point is that it is not necessarily a bad thing to have say
>> 10,000
>> > Flow Files in a queue. The queue is not actually holding the content of
>> > those FlowFiles, it is only holding pointers to where the content 
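
To make the streaming approach described above concrete, here is a minimal
sketch, assuming a hypothetical helper method and simple line-per-record
formatting (the JdbcCommon utility linked above writes Avro instead), of
writing a chunk of ResultSet rows straight to a FlowFile's OutputStream so
that only one record is held in memory at a time:

    import java.io.BufferedWriter;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.io.OutputStreamWriter;
    import java.nio.charset.StandardCharsets;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    import org.apache.nifi.flowfile.FlowFile;
    import org.apache.nifi.processor.ProcessSession;
    import org.apache.nifi.processor.io.OutputStreamCallback;

    class ResultSetStreamingSketch {

        // Hypothetical helper: writes up to recordsPerFlowFile rows from the
        // ResultSet into a new FlowFile, one row at a time, instead of
        // buffering the whole chunk in heap.
        FlowFile writeChunk(final ProcessSession session, final ResultSet rs,
                            final int recordsPerFlowFile) {
            FlowFile flowFile = session.create();
            flowFile = session.write(flowFile, new OutputStreamCallback() {
                @Override
                public void process(final OutputStream out) throws IOException {
                    final BufferedWriter writer = new BufferedWriter(
                            new OutputStreamWriter(out, StandardCharsets.UTF_8));
                    try {
                        int written = 0;
                        // Each row is serialized as it is read from the cursor,
                        // so the full chunk never sits in memory.
                        while (written < recordsPerFlowFile && rs.next()) {
                            writer.write(rs.getString(1)); // illustrative single-column row
                            writer.newLine();
                            written++;
                        }
                        writer.flush();
                    } catch (final SQLException e) {
                        throw new IOException(e);
                    }
                }
            });
            return flowFile;
        }
    }

A downstream processor would do the mirror image with
session.read(flowFile, InputStreamCallback), consuming one record at a time
while issuing the inserts.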

[GitHub] nifi issue #509: NIFI-1982: Use Compressed check box value.

2016-06-09 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/509
  
+1, looks good, will merge to 0.x... I am assuming this was only meant for 
0.x and that the same fix is in your other PR for HTTP site-to-site.




Re: [DISCUSS] - Markdown option for documentation artifacts

2016-06-09 Thread Joe Skora
+1

On Wed, Jun 8, 2016 at 10:40 AM, Matt Burgess  wrote:

> +1 with template
>
> On Wed, Jun 8, 2016 at 10:39 AM, dan bress  wrote:
> > +1
> >
> > On Wed, Jun 8, 2016 at 7:05 AM Andre  wrote:
> >
> >> +1 on this + a template that matches existing additional.html
> >> On 8 Jun 2016 04:28, "Bryan Rosander"  wrote:
> >>
> >> > Hey all,
> >> >
> >> > When writing documentation (e.g. the additionalDetails.html for a
> >> > processor) it would be nice to have the option to use Markdown
> instead of
> >> > html.
> >> >
> >> > I think Markdown is easier to read and write than raw HTML and for
> simple
> >> > cases does the job pretty well.  It also has the advantage of being
> able
> >> to
> >> > be translated into other document types easily and it would be
> rendered
> >> by
> >> > default in Github when the file is clicked.
> >> >
> >> > There is an MIT-licensed Markdown maven plugin (
> >> > https://github.com/walokra/markdown-page-generator-plugin) that seems
> >> like
> >> > it might work for translating additionalDetails.md (and others) into
> an
> >> > equivalent html page.
> >> >
> >> > Thanks,
> >> > Bryan Rosander
> >> >
> >>
>


[GitHub] nifi issue #501: NIFI-1974 - Support Custom Properties in Expression Languag...

2016-06-09 Thread YolandaMDavis
Github user YolandaMDavis commented on the issue:

https://github.com/apache/nifi/pull/501
  
@markap14 Thanks for reviewing! Concerning the extension of 
VariableRegistryProvider by the ControllerServiceLookup: it came from the need 
to populate the VariableRegistry from NiFiProperties, if available (which was 
provided by implementations of ControllerServiceLookup, including 
FlowController and WebClusterManager), and to provide it to StatePropertyValue 
(which received a ControllerServiceLookup object in its constructor). I 
thought about having a third variable in the StatePropertyValue constructor 
for the variableRegistry; however, when attempting to implement that, I needed 
to interrogate the ControllerServiceLookup anyway in many cases. I definitely 
understand the weirdness, which is why at the least I had it extend the 
interface as opposed to adding the getVariableRegistry method to that 
interface directly.
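
Roughly, the two shapes being weighed look like the sketch below; the
provider interface name is taken from the comment above, while the class
name and constructor shown for the alternative are illustrative only, not
the PR's actual code:

    import org.apache.nifi.controller.ControllerServiceLookup;
    import org.apache.nifi.registry.VariableRegistry;

    // Option described above: the provider extends the lookup, so code that
    // already receives a ControllerServiceLookup can also obtain the registry.
    interface VariableRegistryProvider extends ControllerServiceLookup {
        VariableRegistry getVariableRegistry();
    }

    // Alternative raised in review: pass the registry as an explicit extra
    // constructor argument alongside the lookup.
    class PropertyValueSketch {
        private final ControllerServiceLookup serviceLookup;
        private final VariableRegistry variableRegistry;

        PropertyValueSketch(final ControllerServiceLookup serviceLookup,
                            final VariableRegistry variableRegistry) {
            this.serviceLookup = serviceLookup;
            this.variableRegistry = variableRegistry;
        }
    }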




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-09 Thread YolandaMDavis
Github user YolandaMDavis commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66415203
  
--- Diff: 
nifi-api/src/test/java/org/apache/nifi/registry/TestVariableRegistry.java ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.registry;
+
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Properties;
+
+import org.junit.Test;
+
+import static org.junit.Assert.assertTrue;
+
+public class TestVariableRegistry {
+
+@Test
+public void testReadMap(){
+Map<String, String> variables1 = new HashMap<>();
+variables1.put("fake.property.1","fake test value");
+
+Map<String, String> variables2 = new HashMap<>();
+variables1.put("fake.property.2","fake test value");
+
+VariableRegistry registry = 
VariableRegistryFactory.getInstance(variables1,variables2);
+
+Map<String, String> variables = registry.getVariables();
+assertTrue(variables.size() == 2);
+assertTrue(variables.get("fake.property.1").equals("fake test 
value"));
+
assertTrue(registry.getVariableValue("fake.property.2").equals("fake test 
value"));
+}
+
+@Test
+public void testReadProperties(){
+Properties properties = new Properties();
+properties.setProperty("fake.property.1","fake test value");
+VariableRegistry registry = 
VariableRegistryFactory.getInstance(properties);
+Map<String, String> variables = registry.getVariables();
+assertTrue(variables.get("fake.property.1").equals("fake test 
value"));
+}
+
+@Test
+public void testReadFiles(){
+final Path fooPath = 
Paths.get("src/test/resources/TestVariableRegistry/foobar.properties");
+final Path testPath = 
Paths.get("src/test/resources/TestVariableRegistry/test.properties");
+VariableRegistry registry = 
VariableRegistryFactory.getInstance(fooPath.toFile(),testPath.toFile());
+Map<String, String> variables = registry.getVariables();
+assertTrue(variables.size() == 3);
+assertTrue(variables.get("fake.property.1").equals("test me out 
1"));
+assertTrue(variables.get("fake.property.3").equals("test me out 3, 
test me out 4"));
+}
+
+@Test
+public void testReadPaths(){
+final Path fooPath = 
Paths.get("src/test/resources/TestVariableRegistry/foobar.properties");
+final Path testPath = 
Paths.get("src/test/resources/TestVariableRegistry/test.properties");
+VariableRegistry registry = 
VariableRegistryFactory.getInstance(fooPath,testPath);
+Map<String, String> variables = registry.getVariables();
+assertTrue(variables.size() == 3);
+assertTrue(variables.get("fake.property.1").equals("test me out 
1"));
+assertTrue(variables.get("fake.property.3").equals("test me out 3, 
test me out 4"));
+}
+
+@Test
+public void testAddRegistry(){
+
+final Map<String, String> variables1 = new HashMap<>();
+variables1.put("fake.property.1","fake test value");
+
+
+final Path fooPath = 
Paths.get("src/test/resources/TestVariableRegistry/foobar.properties");
+VariableRegistry pathRegistry = 
VariableRegistryFactory.getInstance(fooPath);
+
+final Path testPath = 
Paths.get("src/test/resources/TestVariableRegistry/test.properties");
+VariableRegistry fileRegistry = 
VariableRegistryFactory.getInstance(testPath.toFile());
+
+Properties properties = new Properties();
+properties.setProperty("fake.property.5","test me out 5");
+VariableRegistry propRegistry = 
VariableRegistryFactory.getInstance(properties);
+
+propRegistry.addRegistry(pathRegistry);
+

[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-09 Thread YolandaMDavis
Github user YolandaMDavis commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66412944
  
--- Diff: 
nifi-api/src/main/java/org/apache/nifi/registry/FileVariableRegistry.java ---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.registry;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.Path;
+import java.util.Map;
+
+
+public abstract class FileVariableRegistry extends 
MultiMapVariableRegistry {
+
+public FileVariableRegistry() {
+super();
+}
+
+public FileVariableRegistry(File... files){
+super();
+addVariables(files);
+}
+
+public FileVariableRegistry(Path... paths){
+super();
+addVariables(paths);
+}
+
+@SuppressWarnings({"unchecked", "rawtypes"})
+public void addVariables(File ...files){
+if(files != null) {
+for (final File file : files) {
+try {
+registry.addMap(convertFile(file));
+} catch (IOException iex) {
+throw new IllegalArgumentException("A file provided 
was invalid.", iex);
--- End diff --

Wrapped it because of its use in the constructor; I wasn't too keen on 
throwing IOException from the constructor either.  Also, as a side note, I 
think I can remove the references to public.




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-09 Thread YolandaMDavis
Github user YolandaMDavis commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66411796
  
--- Diff: 
nifi-api/src/main/java/org/apache/nifi/registry/VariableRegistryUtils.java ---
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.registry;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.nifi.flowfile.FlowFile;
+
+public class VariableRegistryUtils {
+
+public static VariableRegistry createVariableRegistry(){
+VariableRegistry variableRegistry = 
VariableRegistryFactory.getInstance();
+VariableRegistry envRegistry = 
VariableRegistryFactory.getInstance(System.getenv());
+VariableRegistry propRegistry = 
VariableRegistryFactory.getInstance(System.getProperties());
+variableRegistry.addRegistry(envRegistry);
+variableRegistry.addRegistry(propRegistry);
--- End diff --

Yes, that is correct, since the first one found would be the first one matched. 
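
In other words, with the environment and system-property registries added in
that order, a minimal sketch of the resulting lookup might be (the class and
method names here are made up for illustration):

    import org.apache.nifi.registry.VariableRegistry;
    import org.apache.nifi.registry.VariableRegistryUtils;

    class PrecedenceSketch {
        // A name defined both as an environment variable and as a system
        // property would resolve to the environment value, because the
        // environment registry is added first and the first match wins.
        static String resolve(final String name) {
            final VariableRegistry registry = VariableRegistryUtils.createVariableRegistry();
            return registry.getVariableValue(name);
        }
    }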




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-09 Thread YolandaMDavis
Github user YolandaMDavis commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66409919
  
--- Diff: nifi-api/src/main/java/org/apache/nifi/registry/MultiMap.java ---
@@ -0,0 +1,154 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.registry;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+public class MultiMap<K, V> implements Map<K, V> {
+
+private final List<Map<K, V>> maps;
+
+MultiMap() {
+this.maps = new ArrayList<>();
+}
+
+@Override
+public int size() {
+int size = 0;
+for (final Map<K, V> map : maps) {
+size += map.size();
+}
+return size;
--- End diff --

I believe I pulled this from the code that was originally in 
Query.java... Just to confirm the behavior: if a map with a duplicate key 
exists, that duplicate key isn't overwritten, correct? So size in that case 
would be the total size of all the maps in the list of maps. 
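
If that reading is right, a small sketch of the behavior (key and value
strings made up for the example) would be:

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.nifi.registry.VariableRegistry;
    import org.apache.nifi.registry.VariableRegistryFactory;

    class DuplicateKeySketch {
        static void illustrate() {
            final Map<String, String> first = new HashMap<>();
            first.put("greeting", "hello");          // key also present in the second map
            final Map<String, String> second = new HashMap<>();
            second.put("greeting", "bonjour");
            second.put("name", "nifi");

            final VariableRegistry registry =
                    VariableRegistryFactory.getInstance(first, second);

            // size() sums the entries of every backing map, so the duplicate
            // key is counted twice rather than overwritten.
            final int size = registry.getVariables().size();            // 3, not 2
            // Lookups return the first match, so the first map added wins.
            final String value = registry.getVariableValue("greeting"); // "hello"
        }
    }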




[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-09 Thread YolandaMDavis
Github user YolandaMDavis commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66407532
  
--- Diff: 
nifi-api/src/test/java/org/apache/nifi/registry/TestVariableRegistry.java ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.registry;
+
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Properties;
+
+import org.junit.Test;
+
+import static org.junit.Assert.assertTrue;
+
+public class TestVariableRegistry {
+
+@Test
+public void testReadMap(){
+Map<String, String> variables1 = new HashMap<>();
+variables1.put("fake.property.1","fake test value");
+
+Map<String, String> variables2 = new HashMap<>();
+variables1.put("fake.property.2","fake test value");
+
+VariableRegistry registry = 
VariableRegistryFactory.getInstance(variables1,variables2);
+
+Map<String, String> variables = registry.getVariables();
+assertTrue(variables.size() == 2);
+assertTrue(variables.get("fake.property.1").equals("fake test 
value"));
+
assertTrue(registry.getVariableValue("fake.property.2").equals("fake test 
value"));
+}
+
+@Test
+public void testReadProperties(){
+Properties properties = new Properties();
+properties.setProperty("fake.property.1","fake test value");
+VariableRegistry registry = 
VariableRegistryFactory.getInstance(properties);
+Map<String, String> variables = registry.getVariables();
+assertTrue(variables.get("fake.property.1").equals("fake test 
value"));
+}
+
+@Test
+public void testReadFiles(){
+final Path fooPath = 
Paths.get("src/test/resources/TestVariableRegistry/foobar.properties");
+final Path testPath = 
Paths.get("src/test/resources/TestVariableRegistry/test.properties");
+VariableRegistry registry = 
VariableRegistryFactory.getInstance(fooPath.toFile(),testPath.toFile());
+Map<String, String> variables = registry.getVariables();
+assertTrue(variables.size() == 3);
+assertTrue(variables.get("fake.property.1").equals("test me out 
1"));
+assertTrue(variables.get("fake.property.3").equals("test me out 3, 
test me out 4"));
+}
+
+@Test
+public void testReadPaths(){
+final Path fooPath = 
Paths.get("src/test/resources/TestVariableRegistry/foobar.properties");
+final Path testPath = 
Paths.get("src/test/resources/TestVariableRegistry/test.properties");
+VariableRegistry registry = 
VariableRegistryFactory.getInstance(fooPath,testPath);
+Map<String, String> variables = registry.getVariables();
+assertTrue(variables.size() == 3);
+assertTrue(variables.get("fake.property.1").equals("test me out 
1"));
+assertTrue(variables.get("fake.property.3").equals("test me out 3, 
test me out 4"));
+}
+
+@Test
+public void testAddRegistry(){
+
+final Map<String, String> variables1 = new HashMap<>();
+variables1.put("fake.property.1","fake test value");
+
+
+final Path fooPath = 
Paths.get("src/test/resources/TestVariableRegistry/foobar.properties");
+VariableRegistry pathRegistry = 
VariableRegistryFactory.getInstance(fooPath);
+
+final Path testPath = 
Paths.get("src/test/resources/TestVariableRegistry/test.properties");
+VariableRegistry fileRegistry = 
VariableRegistryFactory.getInstance(testPath.toFile());
+
+Properties properties = new Properties();
+properties.setProperty("fake.property.5","test me out 5");
+VariableRegistry propRegistry = 
VariableRegistryFactory.getInstance(properties);
+
+propRegistry.addRegistry(pathRegistry);
+

[GitHub] nifi pull request #501: NIFI-1974 - Support Custom Properties in Expression ...

2016-06-09 Thread YolandaMDavis
Github user YolandaMDavis commented on a diff in the pull request:

https://github.com/apache/nifi/pull/501#discussion_r66406906
  
--- Diff: 
nifi-api/src/test/resources/TestVariableRegistry/foobar.properties ---
@@ -0,0 +1 @@
+fake.property.3=test me out 3, test me out 4
--- End diff --

I already added the RAT check, but I can add the ASF license; that's not a 
problem.

