[jira] [Comment Edited] (NIFIREG-136) Switch to unique human-friendly names for buckets and flows

2018-02-03 Thread Andrew Grande (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351522#comment-16351522
 ] 

Andrew Grande edited comment on NIFIREG-136 at 2/3/18 8:43 PM:
---

Sure, Kevin.

If we keep using UUIDs *internally*, that would still allow for renaming 
support. The point really is to shift the burden of dealing with UUIDs away 
from users, now that they are exposed to them the way NiFi handles PGs, etc.

The UUID really has no meaning when promoting to a different env (a new bucket 
or a different registry instance). I would like to think in terms of:
 # Copy the permalink of the flow version I want to promote. Or, if using a 
serialized flow version JSON file, I don't care about the source UUIDs either.
 # 'Paste' it into the new bucket and/or registry instance (e.g. DEV -> QA). 
The target will generate new UUIDs in any case (ideally transparently, behind 
the scenes, simply by determining that the path doesn't exist).

Should it come to that, I can trade off Flow and Bucket names being immutable 
for having shareable and friendly URIs.


was (Author: aperepel):
Sure Kevin.

If we keep using UUIDs *internally*, that would allow for any renaming support. 
The point really is shift the burden of dealing with uuids away from users, now 
that they are exposed to how NiFi handled PG, etc.

The UUID has no meaning really when promoting to a different env (new bucket or 
different registry instance). I would like to think in terms of:
 # Copy this permalink of the flow version I want to promote.
 # 'Paste' it into the new bucket and/or registry instance (e.g. DEV -> QA). 
The target will be generating new uuids in any case (ideally transparently 
behind the scenes just by determining that path doesn't exist).

Should it come to that, I can trade off Flow and Bucket names being immutable 
for having shareable and friendly URIs.

> Switch to unique human-friendly names for buckets and flows
> ---
>
> Key: NIFIREG-136
> URL: https://issues.apache.org/jira/browse/NIFIREG-136
> Project: NiFi Registry
>  Issue Type: Improvement
>Affects Versions: 0.1.0
>Reporter: Andrew Grande
>Priority: Major
>
> I have been playing with the Registry and using [~bende]'s early CLI to 
> accomplish some automation tasks. I have had a really tough time with UUIDs 
> being used for buckets and flows; they introduce a lot of context switches to 
> locate/save/copy/paste them when using the API.
> I would strongly suggest considering human-friendly names and converting 
> deep links to use those instead. This not only provides an easily portable 
> full URI, but also addresses compatibility issues between instances of the 
> registry, as buckets & flows with the same name are guaranteed to have 
> different UUIDs. A kind of copy/paste between environments.
> I never found the unique-name requirement within a tree-like structure to be 
> an issue when dealing with NiFi. E.g., NiFi and NiFi Registry could 
> transparently reverse-look up the UUID by extracting names from the URI. The 
> goal is to have a great user experience.
> P.S.: spaces in the name in the URI could be substituted with the '+' sign 
> transparently; using %20 would defeat the purpose of smooth ease-of-use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFIREG-136) Switch to unique human-friendly names for buckets and flows

2018-02-03 Thread Andrew Grande (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351522#comment-16351522
 ] 

Andrew Grande commented on NIFIREG-136:
---

Sure, Kevin.

If we keep using UUIDs *internally*, that would still allow for renaming 
support. The point really is to shift the burden of dealing with UUIDs away 
from users, now that they are exposed to them the way NiFi handles PGs, etc.

The UUID really has no meaning when promoting to a different env (a new bucket 
or a different registry instance). I would like to think in terms of:
 # Copy the permalink of the flow version I want to promote.
 # 'Paste' it into the new bucket and/or registry instance (e.g. DEV -> QA). 
The target will generate new UUIDs in any case (ideally transparently, behind 
the scenes, simply by determining that the path doesn't exist).

Should it come to that, I can trade off Flow and Bucket names being immutable 
for having shareable and friendly URIs.
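The promote-by-permalink workflow above can be sketched in a few lines; this is a hypothetical illustration (the field names `identifier`, `bucketIdentifier`, `bucketName`, and `name` are assumptions, not the actual NiFi Registry serialization schema):

```python
import json
import uuid

def promote_flow(serialized_flow, target_bucket):
    """Simulate importing an exported flow version into a new environment:
    discard the source identifiers, let the target mint fresh UUIDs, and
    keep only the human-friendly names. Hypothetical JSON shape."""
    flow = json.loads(serialized_flow)
    flow["identifier"] = str(uuid.uuid4())        # new UUID in the target env
    flow["bucketIdentifier"] = str(uuid.uuid4())  # new bucket UUID as well
    flow["bucketName"] = target_bucket            # e.g. DEV -> QA
    return flow

# Exported from DEV; the source UUIDs carry no meaning in QA.
dev_export = json.dumps({
    "identifier": "5f1a-dev",
    "bucketIdentifier": "9c2b-dev",
    "name": "Ingest Flow",
    "bucketName": "DEV",
})

qa_flow = promote_flow(dev_export, "QA")
```

The names survive the promotion while the UUIDs are regenerated, which is the point of keeping UUIDs purely internal.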






[jira] [Created] (NIFIREG-138) UX: add an easy-to-copy permalink in the UI

2018-02-03 Thread Andrew Grande (JIRA)
Andrew Grande created NIFIREG-138:
-

 Summary: UX: add an easy-to-copy permalink in the UI
 Key: NIFIREG-138
 URL: https://issues.apache.org/jira/browse/NIFIREG-138
 Project: NiFi Registry
  Issue Type: Improvement
Reporter: Andrew Grande


Think of GitHub's `Clone or Download` button as a reference; it should be as 
easy, and take as few clicks, as that one.

We need a very simple and straightforward means to copy a deep link to the flow 
version I'm currently looking at in the UI. The main use cases are sharing this 
link further and automation.
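A copyable deep link of the kind requested could be assembled from the bucket, flow, and version; the `/explorer/...` URL pattern below is purely illustrative (NiFi Registry's actual UI routes may differ):

```python
from urllib.parse import quote_plus

def permalink(base, bucket, flow, version):
    # Hypothetical name-based deep-link pattern for illustration only.
    # Spaces become '+' for readability, as suggested in NIFIREG-136.
    return f"{base}/explorer/{quote_plus(bucket)}/{quote_plus(flow)}/{version}"

link = permalink("https://registry.example.com/nifi-registry",
                 "My Bucket", "Ingest Flow", 3)
# link -> "https://registry.example.com/nifi-registry/explorer/My+Bucket/Ingest+Flow/3"
```

A one-click "copy permalink" button would simply put such a string on the clipboard.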





[jira] [Commented] (NIFIREG-136) Switch to unique human-friendly names for buckets and flows

2018-02-03 Thread Kevin Doran (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351518#comment-16351518
 ] 

Kevin Doran commented on NIFIREG-136:
-

I think we should definitely consider this, especially since we already enforce 
the requirement of unique bucket names and unique flow names within a bucket 
(correct?). My biggest concern is that the human-readable names are, by design, 
changeable, so URLs based on bucket/item names would change. So to introduce 
this safely for those who need/want immutable URIs (e.g., ones used in an 
automation task that should not fail simply because someone renames a flow), we 
would need a way to support permalinks as well.
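The rename concern can be made concrete with a toy in-memory registry (not the real NiFi Registry data model): a name-based URL goes stale after a rename, while a UUID permalink keeps resolving.

```python
import uuid

flows_by_id = {}   # UUID -> current name (permalink resolution)
ids_by_name = {}   # name -> UUID (friendly-URL resolution)

def create_flow(name):
    fid = str(uuid.uuid4())
    flows_by_id[fid] = name
    ids_by_name[name] = fid
    return fid

def rename_flow(fid, new_name):
    old_name = flows_by_id[fid]
    del ids_by_name[old_name]     # the old friendly URL stops resolving
    flows_by_id[fid] = new_name
    ids_by_name[new_name] = fid

fid = create_flow("Ingest Flow")
rename_flow(fid, "Ingest Flow v2")

assert "Ingest Flow" not in ids_by_name      # name-based URL went stale...
assert flows_by_id[fid] == "Ingest Flow v2"  # ...but the UUID permalink holds
```

This is why both URL styles would need to coexist: friendly names for humans, UUID permalinks for automation.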






[jira] [Created] (NIFIREG-137) Make 'Sort by: Newest' the default option

2018-02-03 Thread Andrew Grande (JIRA)
Andrew Grande created NIFIREG-137:
-

 Summary: Make 'Sort by: Newest' the default option
 Key: NIFIREG-137
 URL: https://issues.apache.org/jira/browse/NIFIREG-137
 Project: NiFi Registry
  Issue Type: Improvement
Reporter: Andrew Grande


I'm not sure whether sort-order settings are saved per user, but at least 
initially we should promote the most recently updated flows to the top.
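The requested default amounts to sorting on the last-modified timestamp in descending order; a minimal sketch (the `modified` field name is illustrative):

```python
from datetime import datetime, timezone

# Toy flow listing; in the real UI these would come from the registry API.
flows = [
    {"name": "A", "modified": datetime(2018, 1, 10, tzinfo=timezone.utc)},
    {"name": "B", "modified": datetime(2018, 2, 1, tzinfo=timezone.utc)},
    {"name": "C", "modified": datetime(2017, 12, 5, tzinfo=timezone.utc)},
]

# 'Sort by: Newest' = most recently updated first.
newest_first = sorted(flows, key=lambda f: f["modified"], reverse=True)
# -> B, A, C
```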





[jira] [Commented] (NIFI-4839) Create a CLI in NiFi Toolkit to interact with NiFi Registry/deploy flows

2018-02-03 Thread Bryan Bende (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351516#comment-16351516
 ] 

Bryan Bende commented on NIFI-4839:
---

Work-in-progress here:

https://github.com/bbende/nifi/tree/nifi-registry-toolkit/nifi-toolkit/nifi-toolkit-cli

> Create a CLI in NiFi Toolkit to interact with NiFi Registry/deploy flows
> 
>
> Key: NIFI-4839
> URL: https://issues.apache.org/jira/browse/NIFI-4839
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
>
> Now that we have NiFi Registry and the ability to import/upgrade flows in 
> NiFi, we should offer a command-line tool to interact with these REST 
> endpoints. This could be part of the NiFi Toolkit and would help people 
> potentially automate some of these operations.





[jira] [Updated] (NIFIREG-136) Switch to unique human-friendly names for buckets and flows

2018-02-03 Thread Andrew Grande (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Grande updated NIFIREG-136:
--
Description: 
I have been playing with the Registry and using [~bende]'s early CLI to 
accomplish some automation tasks. I have had a really tough time with UUIDs 
being used for buckets and flows; they introduce a lot of context switches to 
locate/save/copy/paste them when using the API.

I would strongly suggest considering human-friendly names and converting deep 
links to use those instead. This not only provides an easily portable full URI, 
but also addresses compatibility issues between instances of the registry, as 
buckets & flows with the same name are guaranteed to have different UUIDs. A 
kind of copy/paste between environments.

I never found the unique-name requirement within a tree-like structure to be an 
issue when dealing with NiFi. E.g., NiFi and NiFi Registry could transparently 
reverse-look up the UUID by extracting names from the URI. The goal is to have 
a great user experience.

P.S.: spaces in the name in the URI could be substituted with the '+' sign 
transparently; using %20 would defeat the purpose of smooth ease-of-use.

  was:
I have been playing with the Registry and using [~bende] 's early CLI to 
accomplish some automation tasks. Have had really tough times with UUIDs being 
used for buckets and flows, it introduced a lot of context switches to 
locate/save/copy/paste those when using the API.

I would strongly suggest considering the human-friendly names and convert deep 
links to using those instead. This not only provides for an easy portable full 
URI, but also addresses compatibility issues between instances of the registry, 
as buckets & flows with the same name are guaranteed to have different UUIDs. A 
kind of copy/paste between environments.

P.S.: spaces in the name in the URI could be substituted for '+' sign 
transparently, using the %20 would defeat the purpose of smooth ease-of-use.







[jira] [Created] (NIFI-4839) Create a CLI in NiFi Toolkit to interact with NiFi Registry/deploy flows

2018-02-03 Thread Bryan Bende (JIRA)
Bryan Bende created NIFI-4839:
-

 Summary: Create a CLI in NiFi Toolkit to interact with NiFi 
Registry/deploy flows
 Key: NIFI-4839
 URL: https://issues.apache.org/jira/browse/NIFI-4839
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Bryan Bende
Assignee: Bryan Bende


Now that we have NiFi Registry and the ability to import/upgrade flows in NiFi, 
we should offer a command-line tool to interact with these REST endpoints. This 
could be part of the NiFi Toolkit and would help people potentially automate 
some of these operations.





[jira] [Created] (NIFIREG-136) Switch to unique human-friendly names for buckets and flows

2018-02-03 Thread Andrew Grande (JIRA)
Andrew Grande created NIFIREG-136:
-

 Summary: Switch to unique human-friendly names for buckets and 
flows
 Key: NIFIREG-136
 URL: https://issues.apache.org/jira/browse/NIFIREG-136
 Project: NiFi Registry
  Issue Type: Improvement
Affects Versions: 0.1.0
Reporter: Andrew Grande


I have been playing with the Registry and using [~bende]'s early CLI to 
accomplish some automation tasks. I have had a really tough time with UUIDs 
being used for buckets and flows; they introduce a lot of context switches to 
locate/save/copy/paste them when using the API.

I would strongly suggest considering human-friendly names and converting deep 
links to use those instead. This not only provides an easily portable full URI, 
but also addresses compatibility issues between instances of the registry, as 
buckets & flows with the same name are guaranteed to have different UUIDs. A 
kind of copy/paste between environments.

P.S.: spaces in the name in the URI could be substituted with the '+' sign 
transparently; using %20 would defeat the purpose of smooth ease-of-use.
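The '+' vs. %20 distinction in the P.S. is exactly the difference between Python's `quote` and `quote_plus`:

```python
from urllib.parse import quote, quote_plus, unquote_plus

name = "My Ingest Flow"
assert quote(name) == "My%20Ingest%20Flow"    # standard percent-encoding
assert quote_plus(name) == "My+Ingest+Flow"   # the friendlier form suggested
assert unquote_plus("My+Ingest+Flow") == name # round-trips cleanly
```

One caveat: per the URL conventions, '+' means a space only in query strings (the `application/x-www-form-urlencoded` rules); in path segments, %20 is the standard encoding, so transparently accepting '+' in paths would be registry-specific behavior.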





[jira] [Commented] (NIFIREG-89) Add a default URL handler for root '/' instead of returning a 404

2018-02-03 Thread Danny Lane (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-89?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351514#comment-16351514
 ] 

Danny Lane commented on NIFIREG-89:
---

It seems there isn't a consensus yet on how this should be handled.

There is some conversation on the NiFi side about changing the redirect page, 
and the Registry may end up following whatever the outcome of that is.

I've closed the PR and unassigned myself from this for now.

> Add a default URL handler for root '/' instead of returning a 404
> -
>
> Key: NIFIREG-89
> URL: https://issues.apache.org/jira/browse/NIFIREG-89
> Project: NiFi Registry
>  Issue Type: Improvement
>Affects Versions: 0.1.0
>Reporter: Danny Lane
>Priority: Minor
>  Labels: usability
>
> Currently, when you land on the root of a NiFi Registry deployment, you get a 
> 404 response from Jetty.
> It was suggested on the mailing list to add a page similar to NiFi's 'You 
> may have mistyped...' page.
> This is an item to track that suggestion and the associated work.





[jira] [Commented] (NIFIREG-89) Add a default URL handler for root '/' instead of returning a 404

2018-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-89?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351510#comment-16351510
 ] 

ASF GitHub Bot commented on NIFIREG-89:
---

Github user dannylane closed the pull request at:

https://github.com/apache/nifi-registry/pull/74








[jira] [Commented] (NIFIREG-89) Add a default URL handler for root '/' instead of returning a 404

2018-02-03 Thread Andrew Grande (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-89?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351509#comment-16351509
 ] 

Andrew Grande commented on NIFIREG-89:
--

As much as I love NiFi's 'you may have meant /nifi' page, I'm advocating simply 
redirecting the user to the correct context path. It's hard enough to memorize 
dozens of different ports; no need to add the context path to the pile. Thanks!
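The behavior Andrew is asking for reduces to: a GET on '/' returns a 302 pointing at the application's context path instead of Jetty's bare 404. A minimal sketch of that routing decision (the context path name is illustrative; the real implementation would live inside the Registry's Jetty setup, in Java):

```python
CONTEXT_PATH = "/nifi-registry"  # illustrative context path

def handle_root(path):
    """Return (status, location) for a request path: '/' redirects to
    the context path; everything else is handled normally."""
    if path == "/":
        return 302, CONTEXT_PATH
    return 200, None

assert handle_root("/") == (302, "/nifi-registry")
assert handle_root("/nifi-registry/api") == (200, None)
```

With this in place, users only need to remember the host and port, not the context path.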






[jira] [Commented] (NIFI-4538) Add Process Group information to Search results

2018-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351504#comment-16351504
 ] 

ASF GitHub Bot commented on NIFI-4538:
--

Github user yuri1969 commented on the issue:

https://github.com/apache/nifi/pull/2364
  
@mcgilman In my experience, the top-level group information was not really 
as useful as the parent one, so I don't mind the change you proposed.

In any case, @mattyb149 should be asked, since he created the original 
request.


> Add Process Group information to Search results
> ---
>
> Key: NIFI-4538
> URL: https://issues.apache.org/jira/browse/NIFI-4538
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Matt Burgess
>Assignee: Yuri
>Priority: Major
> Attachments: Screenshot from 2017-12-23 21-08-45.png, Screenshot from 
> 2017-12-23 21-42-24.png
>
>
> When querying for components in the Search bar, no Process Group (PG) 
> information is displayed. When copies of PGs are made on the canvas, the 
> search results can be hard to navigate, as you may jump into a different PG 
> than what you're looking for.
> I propose adding (conditionally, based on user permissions) the immediate 
> parent PG name and/or ID, as well as the top-level PG. In this case I mean 
> top-level being the highest parent PG except root, unless the component's 
> immediate parent PG is root, in which case it wouldn't need to be displayed 
> (or could be displayed as the root PG, albeit a duplicate of the immediate).







[jira] [Commented] (NIFI-4289) Implement put processor for InfluxDB

2018-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351433#comment-16351433
 ] 

ASF GitHub Bot commented on NIFI-4289:
--

Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2101#discussion_r165818221
  
[GitHub] nifi pull request #2101: NIFI-4289 - InfluxDB put processor

2018-02-03 Thread mans2singh
Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2101#discussion_r165818221
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/PutInfluxDB.java
 ---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.influxdb.InfluxDB;
+import java.io.ByteArrayOutputStream;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"influxdb", "measurement","insert", "write", "put", "timeseries"})
+@CapabilityDescription("Processor to write the content of a FlowFile (in 
line protocol 
https://docs.influxdata.com/influxdb/v1.3/write_protocols/line_protocol_tutorial/)
 to InfluxDB (https://www.influxdb.com/). "
++ "  The flow file can contain single measurement point or 
multiple measurement points separated by line seperator.  The timestamp (last 
field) should be in nano-seconds resolution.")
+@WritesAttributes({
+@WritesAttribute(attribute = 
AbstractInfluxDBProcessor.INFLUX_DB_ERROR_MESSAGE, description = "InfluxDB 
error message"),
+})
+public class PutInfluxDB extends AbstractInfluxDBProcessor {
+
+public static AllowableValue CONSISTENCY_LEVEL_ALL = new 
AllowableValue("ALL", "All", "Return success when all nodes have responded with 
write success");
+public static AllowableValue CONSISTENCY_LEVEL_ANY = new 
AllowableValue("ANY", "Any", "Return success when any nodes have responded with 
write success");
+public static AllowableValue CONSISTENCY_LEVEL_ONE = new 
AllowableValue("ONE", "One", "Return success when one node has responded with 
write success");
+public static AllowableValue CONSISTENCY_LEVEL_QUORUM = new 
AllowableValue("QUORUM", "Quorum", "Return success when a majority of nodes 
have responded with write success");
+
+public static final PropertyDescriptor CONSISTENCY_LEVEL = new 
PropertyDescriptor.Builder()
+.name("influxdb-consistency-level")
+.displayName("Consistency Level")
+.description("InfluxDB consistency level")
+.required(true)
+.defaultValue(CONSISTENCY_LEVEL_ONE.getValue())
+.expressionLanguageSupported(true)
+.allowableValues(CONSISTENCY_LEVEL_ONE, CONSISTENCY_LEVEL_ANY, 
CONSISTENCY_LEVEL_ALL, CONSISTENCY_LEVEL_QUORUM)
+.build();
+
+public static final PropertyDescriptor RETENTION_POLICY = new 
PropertyDescriptor.Builder()
+.name("influxdb-retention-policy")
+.displayName("Retention Policy")
+.description("Retention policy for the saving the records")
+.defaultValue("autogen")
+ 

[jira] [Commented] (NIFI-4289) Implement put processor for InfluxDB

2018-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351429#comment-16351429
 ] 

ASF GitHub Bot commented on NIFI-4289:
--

Github user mans2singh commented on the issue:

https://github.com/apache/nifi/pull/2101
  
@MikeThomsen - 

If the fields, tags, and timestamp for a measurement are the same, they are 
considered to be the same record.

Regarding the size limit - I did not find any mention of a size limit for 
posting data in the InfluxDB docs. I think this all depends on the use case; 
with the size-limit processor property available, the NiFi admin can configure 
the value based on their InfluxDB cluster and load.

Let me know if I have missed anything or if anything else is required.

Thanks again for your advice/comments.


> Implement put processor for InfluxDB
> 
>
> Key: NIFI-4289
> URL: https://issues.apache.org/jira/browse/NIFI-4289
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.3.0
> Environment: All
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: insert, measurements, put, timeseries
>
> Support inserting time series measurements into InfluxDB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2101: NIFI-4289 - InfluxDB put processor

2018-02-03 Thread mans2singh
Github user mans2singh commented on the issue:

https://github.com/apache/nifi/pull/2101
  
@MikeThomsen - 

If the fields, tags, and timestamp for a measurement are the same, the points 
are considered to be the same record. 

Regarding the size limit - I did not find any mention of a size limit for 
posting data in the InfluxDB docs. I think this all depends on the use case, 
and with the size limit processor property available, the NiFi admin can 
configure the value based on their InfluxDB cluster and load.

Let me know if I have missed anything or if anything else is required.

Thanks again for your advice/comments.


---


[jira] [Updated] (NIFI-4838) Make GetMongo support multiple commits and give some progress indication

2018-02-03 Thread Mike Thomsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-4838:
---
Description: 
It shouldn't wait until the end to do a commit() call because the effect is 
that GetMongo looks like it has hung to a user who is pulling a very large data 
set.

It should also have an option for running a count query to get the current 
approximate count of documents that would match the query and append an 
attribute that indicates where a flowfile stands in the total result count. Ex:

query.progress.point.start = 2500

query.progress.point.end = 5000

query.count.estimate = 17,568,231
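A minimal sketch of how such per-chunk attributes could be computed (the attribute names come from the example above; the `forChunk` method and its parameters are hypothetical, not GetMongo code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ProgressAttributes {

    // Given a chunk starting at `offset` in a result set of approximately
    // `totalCount` documents, build the proposed progress attribute map.
    static Map<String, String> forChunk(long offset, long chunkSize, long totalCount) {
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("query.progress.point.start", Long.toString(offset));
        attrs.put("query.progress.point.end",
                Long.toString(Math.min(offset + chunkSize, totalCount)));
        attrs.put("query.count.estimate", Long.toString(totalCount));
        return attrs;
    }

    public static void main(String[] args) {
        // second chunk of 2500 docs out of an estimated 17,568,231
        System.out.println(forChunk(2500, 2500, 17_568_231L));
    }
}
```

With values matching the example, the second chunk yields start 2500 and end 5000, so a user can see roughly how far the pull has progressed.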

  was:It shouldn't wait until the end to do a commit() call because the effect 
is that GetMongo looks like it has hung to a user who is pulling a very large 
data set.

Summary: Make GetMongo support multiple commits and give some progress 
indication  (was: Make GetMongo support multiple commits)

> Make GetMongo support multiple commits and give some progress indication
> 
>
> Key: NIFI-4838
> URL: https://issues.apache.org/jira/browse/NIFI-4838
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> It shouldn't wait until the end to do a commit() call because the effect is 
> that GetMongo looks like it has hung to a user who is pulling a very large 
> data set.
> It should also have an option for running a count query to get the current 
> approximate count of documents that would match the query and append an 
> attribute that indicates where a flowfile stands in the total result count. 
> Ex:
> query.progress.point.start = 2500
> query.progress.point.end = 5000
> query.count.estimate = 17,568,231



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-4838) Make GetMongo support multiple commits

2018-02-03 Thread Mike Thomsen (JIRA)
Mike Thomsen created NIFI-4838:
--

 Summary: Make GetMongo support multiple commits
 Key: NIFI-4838
 URL: https://issues.apache.org/jira/browse/NIFI-4838
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Mike Thomsen
Assignee: Mike Thomsen


It shouldn't wait until the end to do a commit() call because the effect is 
that GetMongo looks like it has hung to a user who is pulling a very large data 
set.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4289) Implement put processor for InfluxDB

2018-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351378#comment-16351378
 ] 

ASF GitHub Bot commented on NIFI-4289:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2101#discussion_r165812834
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/AbstractInfluxDBProcessor.java
 ---
@@ -45,15 +50,26 @@
 
 public static final PropertyDescriptor INFLUX_DB_URL = new 
PropertyDescriptor.Builder()
 .name("influxdb-url")
-.displayName("InfluxDB connection url")
-.description("InfluxDB url to connect to")
+.displayName("InfluxDB connection URL")
+.description("InfluxDB URL to connect to. Eg: 
http://influxdb:8086")
+.defaultValue("http://localhost:8086")
--- End diff --

Solid improvement here.


> Implement put processor for InfluxDB
> 
>
> Key: NIFI-4289
> URL: https://issues.apache.org/jira/browse/NIFI-4289
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.3.0
> Environment: All
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: insert, measurements, put, timeseries
>
> Support inserting time series measurements into InfluxDB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4289) Implement put processor for InfluxDB

2018-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351379#comment-16351379
 ] 

ASF GitHub Bot commented on NIFI-4289:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2101#discussion_r165812983
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/PutInfluxDB.java
 ---
@@ -78,18 +81,33 @@
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+static final Relationship REL_SUCCESS = new 
Relationship.Builder().name("success")
+.description("Successful FlowFiles that are saved to InfluxDB 
are routed to this relationship").build();
+
+static final Relationship REL_FAILURE = new 
Relationship.Builder().name("failure")
+.description("FlowFiles that were not saved to InfluxDB are routed 
to this relationship").build();
+
+static final Relationship REL_RETRY = new 
Relationship.Builder().name("retry")
+.description("FlowFiles that were not saved to InfluxDB due to a 
retryable exception are routed to this relationship").build();
+
+static final Relationship REL_MAX_SIZE_EXCEEDED = new 
Relationship.Builder().name("failure-max-size")
--- End diff --

Something to consider here...

With PutHBaseRecord and (still pending merge) DeleteHBaseRow, I had a 
similar situation. Users could easily chuck several hundred thousand HBase 
operations at the processor all at once. So what was suggested to me, and what 
I did with both of them, was to break up the incoming flowfile into chunks and 
then add a "retry.index" attribute to the flowfile if it failed. That way, users 
could loop REL_RETRY back to the processor and get everything ingested.

Though that might not apply in this case because InfluxDB doesn't have an 
ID field that I know of. If you replay an event with the same timestamp, does 
it overwrite or does it just add a new one?
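The chunk-and-retry idea above can be sketched roughly as follows (illustrative only; in the real processors the retry.index bookkeeping lives in NiFi session/attribute code, and the String record type and `remainingChunks` helper here are placeholders):

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedRetry {

    // Split the records into fixed-size chunks, skipping everything before
    // retryIndex - i.e. resume where a previously failed attempt left off.
    static List<List<String>> remainingChunks(List<String> records, int chunkSize, int retryIndex) {
        List<List<String>> chunks = new ArrayList<>();
        for (int i = retryIndex; i < records.size(); i += chunkSize) {
            chunks.add(new ArrayList<>(records.subList(i, Math.min(i + chunkSize, records.size()))));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<String> records = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            records.add("op-" + i);
        }
        // a previous attempt wrote 4 records before failing, so the flowfile
        // came back on REL_RETRY carrying retry.index = 4
        System.out.println(remainingChunks(records, 3, 4)); // two chunks: op-4..op-6, op-7..op-9
    }
}
```

For InfluxDB the replay question matters less than for HBase: if points with the same identity are simply overwritten, re-sending an already-written chunk on retry is harmless.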


> Implement put processor for InfluxDB
> 
>
> Key: NIFI-4289
> URL: https://issues.apache.org/jira/browse/NIFI-4289
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.3.0
> Environment: All
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: insert, measurements, put, timeseries
>
> Support inserting time series measurements into InfluxDB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2101: NIFI-4289 - InfluxDB put processor

2018-02-03 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2101#discussion_r165812834
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/AbstractInfluxDBProcessor.java
 ---
@@ -45,15 +50,26 @@
 
 public static final PropertyDescriptor INFLUX_DB_URL = new 
PropertyDescriptor.Builder()
 .name("influxdb-url")
-.displayName("InfluxDB connection url")
-.description("InfluxDB url to connect to")
+.displayName("InfluxDB connection URL")
+.description("InfluxDB URL to connect to. Eg: 
http://influxdb:8086")
+.defaultValue("http://localhost:8086")
--- End diff --

Solid improvement here.


---


[GitHub] nifi pull request #2101: NIFI-4289 - InfluxDB put processor

2018-02-03 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2101#discussion_r165812983
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/PutInfluxDB.java
 ---
@@ -78,18 +81,33 @@
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+static final Relationship REL_SUCCESS = new 
Relationship.Builder().name("success")
+.description("Successful FlowFiles that are saved to InfluxDB 
are routed to this relationship").build();
+
+static final Relationship REL_FAILURE = new 
Relationship.Builder().name("failure")
+.description("FlowFiles that were not saved to InfluxDB are routed 
to this relationship").build();
+
+static final Relationship REL_RETRY = new 
Relationship.Builder().name("retry")
+.description("FlowFiles that were not saved to InfluxDB due to a 
retryable exception are routed to this relationship").build();
+
+static final Relationship REL_MAX_SIZE_EXCEEDED = new 
Relationship.Builder().name("failure-max-size")
--- End diff --

Something to consider here...

With PutHBaseRecord and (still pending merge) DeleteHBaseRow, I had a 
similar situation. Users could easily chuck several hundred thousand HBase 
operations at the processor all at once. So what was suggested to me, and what 
I did with both of them, was to break up the incoming flowfile into chunks and 
then add a "retry.index" attribute to the flowfile if it failed. That way, users 
could loop REL_RETRY back to the processor and get everything ingested.

Though that might not apply in this case because InfluxDB doesn't have an 
ID field that I know of. If you replay an event with the same timestamp, does 
it overwrite or does it just add a new one?


---


[jira] [Commented] (NIFI-4289) Implement put processor for InfluxDB

2018-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351375#comment-16351375
 ] 

ASF GitHub Bot commented on NIFI-4289:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2101#discussion_r165812819
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/PutInfluxDB.java
 ---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.influxdb.InfluxDB;
+import java.io.ByteArrayOutputStream;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"influxdb", "measurement","insert", "write", "put", "timeseries"})
+@CapabilityDescription("Processor to write the content of a FlowFile (in 
line protocol 
https://docs.influxdata.com/influxdb/v1.3/write_protocols/line_protocol_tutorial/)
 to InfluxDB (https://www.influxdb.com/). "
++ "  The FlowFile can contain a single measurement point or 
multiple measurement points separated by a line separator.  The timestamp (last 
field) should be in nanosecond resolution.")
+@WritesAttributes({
+@WritesAttribute(attribute = 
AbstractInfluxDBProcessor.INFLUX_DB_ERROR_MESSAGE, description = "InfluxDB 
error message"),
+})
+public class PutInfluxDB extends AbstractInfluxDBProcessor {
+
+public static AllowableValue CONSISTENCY_LEVEL_ALL = new 
AllowableValue("ALL", "All", "Return success when all nodes have responded with 
write success");
+public static AllowableValue CONSISTENCY_LEVEL_ANY = new 
AllowableValue("ANY", "Any", "Return success when any node has responded with 
write success");
+public static AllowableValue CONSISTENCY_LEVEL_ONE = new 
AllowableValue("ONE", "One", "Return success when one node has responded with 
write success");
+public static AllowableValue CONSISTENCY_LEVEL_QUORUM = new 
AllowableValue("QUORUM", "Quorum", "Return success when a majority of nodes 
have responded with write success");
+
+public static final PropertyDescriptor CONSISTENCY_LEVEL = new 
PropertyDescriptor.Builder()
+.name("influxdb-consistency-level")
+.displayName("Consistency Level")
+.description("InfluxDB consistency level")
+.required(true)
+.defaultValue(CONSISTENCY_LEVEL_ONE.getValue())
+.expressionLanguageSupported(true)
+.allowableValues(CONSISTENCY_LEVEL_ONE, CONSISTENCY_LEVEL_ANY, 
CONSISTENCY_LEVEL_ALL, CONSISTENCY_LEVEL_QUORUM)
+.build();
+
+public static final PropertyDescriptor RETENTION_POLICY = new 

[GitHub] nifi pull request #2101: NIFI-4289 - InfluxDB put processor

2018-02-03 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2101#discussion_r165812819
  
--- Diff: 
nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/PutInfluxDB.java
 ---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.influxdb;
+
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.influxdb.InfluxDB;
+import java.io.ByteArrayOutputStream;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@SupportsBatching
+@Tags({"influxdb", "measurement","insert", "write", "put", "timeseries"})
+@CapabilityDescription("Processor to write the content of a FlowFile (in 
line protocol 
https://docs.influxdata.com/influxdb/v1.3/write_protocols/line_protocol_tutorial/)
 to InfluxDB (https://www.influxdb.com/). "
++ "  The FlowFile can contain a single measurement point or 
multiple measurement points separated by a line separator.  The timestamp (last 
field) should be in nanosecond resolution.")
+@WritesAttributes({
+@WritesAttribute(attribute = 
AbstractInfluxDBProcessor.INFLUX_DB_ERROR_MESSAGE, description = "InfluxDB 
error message"),
+})
+public class PutInfluxDB extends AbstractInfluxDBProcessor {
+
+public static AllowableValue CONSISTENCY_LEVEL_ALL = new 
AllowableValue("ALL", "All", "Return success when all nodes have responded with 
write success");
+public static AllowableValue CONSISTENCY_LEVEL_ANY = new 
AllowableValue("ANY", "Any", "Return success when any node has responded with 
write success");
+public static AllowableValue CONSISTENCY_LEVEL_ONE = new 
AllowableValue("ONE", "One", "Return success when one node has responded with 
write success");
+public static AllowableValue CONSISTENCY_LEVEL_QUORUM = new 
AllowableValue("QUORUM", "Quorum", "Return success when a majority of nodes 
have responded with write success");
+
+public static final PropertyDescriptor CONSISTENCY_LEVEL = new 
PropertyDescriptor.Builder()
+.name("influxdb-consistency-level")
+.displayName("Consistency Level")
+.description("InfluxDB consistency level")
+.required(true)
+.defaultValue(CONSISTENCY_LEVEL_ONE.getValue())
+.expressionLanguageSupported(true)
+.allowableValues(CONSISTENCY_LEVEL_ONE, CONSISTENCY_LEVEL_ANY, 
CONSISTENCY_LEVEL_ALL, CONSISTENCY_LEVEL_QUORUM)
+.build();
+
+public static final PropertyDescriptor RETENTION_POLICY = new 
PropertyDescriptor.Builder()
+.name("influxdb-retention-policy")
+.displayName("Retention Policy")
+.description("Retention policy for saving the records")
+.defaultValue("autogen")
+