Re: Purpose of Disallowing Attribute Expression

2016-05-12 Thread Michael Moser
Hi,

NIFI-1077 [1] has discussed this a bit in the past, when
ConvertCharacterSet was improved to support expression language.  A JIRA
ticket is needed to spur action on these requests.

An interesting case to help this would be to improve the IdentifyMimeType
processor to detect character encodings on text data.  Apache Tika can do
it with an EncodingDetector [2], so why not take advantage since it's
already part of IdentifyMimeType?  I think this would be cool so I wrote
NIFI-1874 [3].

-- Mike

[1] - https://issues.apache.org/jira/browse/NIFI-1077
[2] -
https://tika.apache.org/1.12/api/org/apache/tika/detect/EncodingDetector.html
[3] - https://issues.apache.org/jira/browse/NIFI-1874
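Tika's EncodingDetector takes a stream of bytes and returns a detected java.nio.charset.Charset. As a plain-JDK illustration of the simplest form of such detection — recognizing an explicit Unicode byte-order mark — here is a hedged sketch; Tika's actual detectors go well beyond this, applying statistical heuristics to un-marked text:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class BomSniff {
    // Recognize only explicit Unicode byte-order marks; a real detector
    // (such as Tika's EncodingDetector implementations) also applies
    // statistical heuristics when no BOM is present.
    static Charset sniff(byte[] b) {
        if (b.length >= 3 && (b[0] & 0xFF) == 0xEF && (b[1] & 0xFF) == 0xBB && (b[2] & 0xFF) == 0xBF) {
            return StandardCharsets.UTF_8;
        }
        if (b.length >= 2 && (b[0] & 0xFF) == 0xFE && (b[1] & 0xFF) == 0xFF) {
            return StandardCharsets.UTF_16BE;
        }
        if (b.length >= 2 && (b[0] & 0xFF) == 0xFF && (b[1] & 0xFF) == 0xFE) {
            return StandardCharsets.UTF_16LE;
        }
        return null; // no BOM: a real detector falls back to heuristics
    }

    public static void main(String[] args) {
        byte[] utf8 = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF, 'h', 'i'};
        System.out.println(sniff(utf8)); // UTF-8
    }
}
```

The class and method names are illustrative only; NIFI-1874 would presumably wire Tika's detector into IdentifyMimeType rather than hand-roll anything like this.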



On Thu, May 12, 2016 at 3:52 PM, dale.chang13 
wrote:

> Joe Witt wrote
> > It is generally quite easy to enable for Property Descriptors which
> > accept user supplied strings.  And this is one that does seem like a
> > candidate.  Were you wanting it to look at a flowfile attribute to be
> > the way of indicating the character set?
> >
> > Thinking through this example the challenges that come to mind are:
> > - What to do if the flow file doesn't have the charset indicated as an
> > attribute?
> > - What to do if the charset indicated by the flowfile attribute isn't
> > supported?
> >
> > There are various cases to consider, is all, and your idea is a good one
> > to pursue in my view.  We had wanted to make it an enumerated value at
> > one point so users could only select from known/valid charsets.  But
> > your idea is good too.
>
> Yes, setting the character set or other properties as a flowfile attribute
> would be helpful. I have already tweaked ExtractText to support expression
> language, provide UTF-8 as the default character set, and remove its
> mandatory specification.
>
> I suppose the ExtractText processor could route to an "invalid character
> set" relationship if there is a conflict. That would require a character
> set
> detection service at the least though.
>
> I only asked because our constraint was to use as much out-of-the-box
> functionality and as few custom processors as possible for maintenance's
> sake.
>
> Would it be possible to implement this change (more properties supporting
> expression language) in future releases? I know it would warrant an
> in-depth
> discussion on the goals that NiFi would like to achieve.
>
>
>
> --
> View this message in context:
> http://apache-nifi-developer-list.39713.n7.nabble.com/Purpose-of-Disallowing-Attribute-Expression-tp10221p10227.html
> Sent from the Apache NiFi Developer List mailing list archive at
> Nabble.com.
>


[GitHub] nifi pull request: NIFI-1866 ProcessException handling in Standard...

2016-05-12 Thread pvillard31
GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/439

NIFI-1866 ProcessException handling in StandardProcessSession



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi NIFI-1866

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/439.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #439


commit 925e87ffd64f35ac47ae41c8466d217d0eccad36
Author: Pierre Villard 
Date:   2016-05-12T21:31:45Z

NIFI-1866 ProcessException handling in StandardProcessSession




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Trouble with the LDAP Authentication Provider

2016-05-12 Thread Ricky Saltzer
Using the following provider on 0.6.1, I'm faced with a ClassCastException.
It might also be worth noting that I face the same exception when
attempting to use the KerberosProvider option.

*Provider:*

<provider>
    <identifier>ldap-provider</identifier>
    <class>org.apache.nifi.ldap.LdapProvider</class>
    <property name="Authentication Strategy">SIMPLE</property>

    <property name="Manager DN">dethklok\toki</property>
    <property name="Manager Password">bananasticker</property>

    <property name="TLS - Keystore"></property>
    <property name="TLS - Keystore Password"></property>
    <property name="TLS - Keystore Type"></property>
    <property name="TLS - Truststore"></property>
    <property name="TLS - Truststore Password"></property>
    <property name="TLS - Truststore Type"></property>
    <property name="TLS - Client Auth"></property>
    <property name="TLS - Protocol"></property>
    <property name="TLS - Shutdown Gracefully"></property>

    <property name="Referral Strategy">FOLLOW</property>
    <property name="Connect Timeout">10 secs</property>
    <property name="Read Timeout">10 secs</property>

    <property name="Url">ldap://ldap.metalocalypse.com</property>
    <property name="User Search Base">CN=Users,DC=metalocalypse,DC=local</property>
    <property name="User Search Filter">foo</property>

    <property name="Authentication Expiration">12 hours</property>
</provider>
*Exception:*
Caused by: java.lang.ClassCastException: class org.apache.nifi.ldap.LdapProvider
    at java.lang.Class.asSubclass(Class.java:3208) ~[na:1.7.0_79]
    at org.apache.nifi.authorization.AuthorityProviderFactoryBean.createAuthorityProvider(AuthorityProviderFactoryBean.java:173) ~[na:na]
    at org.apache.nifi.authorization.AuthorityProviderFactoryBean.getObject(AuthorityProviderFactoryBean.java:111) ~[na:na]
    at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:168) ~[na:na]
    ... 75 common frames omitted


Re: Purpose of Disallowing Attribute Expression

2016-05-12 Thread dale.chang13
Joe Witt wrote
> It is generally quite easy to enable for Property Descriptors which
> accept user supplied strings.  And this is one that does seem like a
> candidate.  Were you wanting it to look at a flowfile attribute to be
> the way of indicating the character set?
> 
> Thinking through this example the challenges that come to mind are:
> - What to do if the flow file doesn't have the charset indicated as an
> attribute?
> - What to do if the charset indicated by the flowfile attribute isn't
> supported?
> 
> There are various cases to consider, is all, and your idea is a good one
> to pursue in my view.  We had wanted to make it an enumerated value at
> one point so users could only select from known/valid charsets.  But
> your idea is good too.

Yes, setting the character set or other properties as a flowfile attribute
would be helpful. I have already tweaked ExtractText to support expression
language, provide UTF-8 as the default character set, and remove its
mandatory specification.

I suppose the ExtractText processor could route to an "invalid character
set" relationship if there is a conflict. That would require a character set
detection service at the least though.

I only asked because our constraint was to use as much out-of-the-box
functionality and as few custom processors as possible for maintenance's
sake.

Would it be possible to implement this change (more properties supporting
expression language) in future releases? I know it would warrant an in-depth
discussion on the goals that NiFi would like to achieve.



--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/Purpose-of-Disallowing-Attribute-Expression-tp10221p10227.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


[GitHub] nifi pull request: NIFI-1564: No supported viewer issue

2016-05-12 Thread pvillard31
Github user pvillard31 commented on the pull request:

https://github.com/apache/nifi/pull/421#issuecomment-218876809
  
+1




Re: [DISCUSS] Using the new 'help wanted' tool from comdev

2016-05-12 Thread Sean Busbey
It's been in a few talks at ApacheCon. We could ask on comdev if they
have stats on actual use thus far.

I believe the way helpwanted is supposed to work is a new person
browses to a specific task and then signs up to work on it, so I
suspect an umbrella entry wouldn't work.

On Thu, May 12, 2016 at 1:17 PM, Joe Witt  wrote:
> Sean,
>
> We talked in the past about being good at tagging JIRAs as beginner.
> That still seems like a good approach.  Can we simply have the task be
> a link to a JIRA report of those tagged items?
>
> Do we have any good information showing that folks interested in
> helping apache projects are using that?  Certainly sounds like a good
> idea just curious if it is getting traction.
>
> Thanks
> Joe
>
> On Thu, May 12, 2016 at 4:13 PM, Sean Busbey  wrote:
>> Hi folks!
>>
>> ASF comdev has put up a great new tool for funneling in new folks:
>>
>> https://helpwanted.apache.org/
>>
>> How about we brainstorm a few things here (maybe some beginner JIRAs
>> we can flesh out a little?) and then file?
>>
>> -Sean



-- 
busbey


Re: [DISCUSS] Using the new 'help wanted' tool from comdev

2016-05-12 Thread Joe Witt
Sean,

We talked in the past about being good at tagging JIRAs as beginner.
That still seems like a good approach.  Can we simply have the task be
a link to a JIRA report of those tagged items?

Do we have any good information showing that folks interested in
helping apache projects are using that?  Certainly sounds like a good
idea just curious if it is getting traction.

Thanks
Joe

On Thu, May 12, 2016 at 4:13 PM, Sean Busbey  wrote:
> Hi folks!
>
> ASF comdev has put up a great new tool for funneling in new folks:
>
> https://helpwanted.apache.org/
>
> How about we brainstorm a few things here (maybe some beginner JIRAs
> we can flesh out a little?) and then file?
>
> -Sean


Re: Purpose of Disallowing Attribute Expression

2016-05-12 Thread Joe Witt
Hello

It is generally quite easy to enable for Property Descriptors which
accept user supplied strings.  And this is one that does seem like a
candidate.  Were you wanting it to look at a flowfile attribute to be
the way of indicating the character set?

Thinking through this example the challenges that come to mind are:
- What to do if the flow file doesn't have the charset indicated as an
attribute?
- What to do if the charset indicated by the flowfile attribute isn't supported?

There are various cases to consider, is all, and your idea is a good one
to pursue in my view.  We had wanted to make it an enumerated value at one
point so users could only select from known/valid charsets.  But your idea
is good too.

Thanks
Joe
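The two failure cases Joe lists (missing attribute, unsupported charset) both reduce to "fall back to a sane default." A plain-JDK sketch of the lookup an expression-language-enabled charset property could delegate to — the UTF-8 default mirrors the proposal in this thread; the class and method names are illustrative, not NiFi API:

```java
import java.nio.charset.Charset;
import java.nio.charset.IllegalCharsetNameException;
import java.nio.charset.StandardCharsets;

public class CharsetFallback {
    // Resolve a charset name that may come from a flowfile attribute:
    // fall back to UTF-8 when the attribute is absent, malformed, or
    // names a charset this JVM does not support.
    static Charset resolve(String name) {
        if (name == null || name.isEmpty()) {
            return StandardCharsets.UTF_8; // attribute missing
        }
        try {
            if (!Charset.isSupported(name)) {
                return StandardCharsets.UTF_8; // legal name, but unsupported
            }
        } catch (IllegalCharsetNameException e) {
            return StandardCharsets.UTF_8; // malformed name
        }
        return Charset.forName(name);
    }

    public static void main(String[] args) {
        System.out.println(resolve("ISO-8859-1"));      // ISO-8859-1
        System.out.println(resolve("no such charset")); // UTF-8
    }
}
```

A processor could equally route to a failure relationship instead of defaulting, as suggested elsewhere in the thread; the point is only that both challenge cases are cheap to detect.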

On Thu, May 12, 2016 at 2:58 PM, dale.chang13  wrote:
> What is the purpose of not allowing a Processor property to support
> expression language? Not allowing a property such as "Character set" in the
> ExtractText Processor is proving to be a hindrance. Would it affect NiFi
> under the hood if it were otherwise?
>
>
>
> --
> View this message in context: 
> http://apache-nifi-developer-list.39713.n7.nabble.com/Purpose-of-Disallowing-Attribute-Expression-tp10221.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


[DISCUSS] Using the new 'help wanted' tool from comdev

2016-05-12 Thread Sean Busbey
Hi folks!

ASF comdev has put up a great new tool for funneling in new folks:

https://helpwanted.apache.org/

How about we brainstorm a few things here (maybe some beginner JIRAs
we can flesh out a little?) and then file?

-Sean


Purpose of Disallowing Attribute Expression

2016-05-12 Thread dale.chang13
What is the purpose of not allowing a Processor property to support
expression language? Not allowing a property such as "Character set" in the
ExtractText Processor is proving to be a hindrance. Would it affect NiFi
under the hood if it were otherwise?



--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/Purpose-of-Disallowing-Attribute-Expression-tp10221.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


[GitHub] nifi pull request: NIFI-1755 (0.x branch) Fixed remote process gro...

2016-05-12 Thread pvillard31
GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/438

NIFI-1755 (0.x branch) Fixed remote process group status counts by only considering connected remote ports

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi NIFI-1755-0.x

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/438.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #438


commit b5bcc4d722d3e6e432bbb499eec4313d21d50e50
Author: Pierre Villard 
Date:   2016-05-12T19:53:14Z

NIFI-1755 (0.x branch) Fixed remote process group status counts by only 
considering connected remote ports






[GitHub] nifi pull request: NIFI-1755 Fixed remote process group status cou...

2016-05-12 Thread pvillard31
Github user pvillard31 commented on the pull request:

https://github.com/apache/nifi/pull/347#issuecomment-218867090
  
@olegz Thanks Oleg! Not exactly what I had in mind but it does the trick :) 
(I was trying to set up an environment with two simple processors sending flow 
files to RPG with different frequencies and ensure the correct values of 
counters in RPG status). This PR is updated for master branch and I just sent a 
PR for 0.x branch.




Re: Required, either-or properties

2016-05-12 Thread Russell Bateman
We'll make it work!

Thanks.

On Thu, May 12, 2016 at 11:56 AM, Joe Witt  wrote:

> Russ,
>
> Yeah - I recommend you mark them both optional then use customValidate
> to check for 'should never happen cases' and then provide validation
> errors for these scenarios which explain the proper handling to the user.
> You can also use the PropertyDescriptor of each property to explain
> its intended relationship to other properties.
>
> Does that seem like it will take care of your case?
>
> Thanks
> Joe
>
> On Thu, May 12, 2016 at 1:35 PM, Russell Bateman
>  wrote:
> > Joe,
> >
> > Thanks for your reply.
> >
> > As I'm thinking about it, validation of the property value isn't so much
> my
> > problem. It's documentation.
> >
> > If I mark the property documentation for both properties as required,
> then
> > my consumer will wonder what supplying both would mean. However, one of
> the
> > two, but never both is required. If both are supplied (whatever that
> would
> > mean in the mind of the consumer), I ignore the template on the
> filesystem
> > path since I check for the existence of the direct content property
> first.
> >
> > Is this dilemma a candidate for the /customValidate/ method you mention?
> >
> > Best,
> > Russ
> >
> >
> > On 05/12/2016 11:28 AM, Joe Witt wrote:
> >>
> >> Russell,
> >>
> >> Validators on property descriptors help with validating that property
> >> alone.  But in the processor API there is 'customValidate' method you
> >> can implement which is used to do things like compound/conditional
> >> validation.
> >>
> >> Thanks
> >> Joe
> >>
> >> On Thu, May 12, 2016 at 1:25 PM, Russell Bateman
> >>  wrote:
> >>>
> >>> How are folk specifying processor properties in the case where one of
> two
> >>> properties is required, but not both? I'm just wondering if there's a
> >>> best
> >>> practice here or must I say something in the description?
> >>>
> >>> For example, I've written a processor that implements Apache Velocity
> >>> templating. I require the template content be passed either directly as
> >>> the
> >>> value of a property, "Template content", or a filesystem path to this
> >>> content in a property, "Template filepath".
> >>>
> >
>


Re: Required, either-or properties

2016-05-12 Thread Joe Witt
Russ,

Yeah - I recommend you mark them both optional then use customValidate
to check for 'should never happen cases' and then provide validation
errors for these scenarios which explain the proper handling to the user.
You can also use the PropertyDescriptor of each property to explain
its intended relationship to other properties.

Does that seem like it will take care of your case?

Thanks
Joe
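Marking both properties optional moves the either-or rule into the processor's customValidate(ValidationContext), which returns a collection of ValidationResult objects. The rule itself is plain Java; here is a hedged sketch of just that decision, with the property names taken from Russ's processor and the method shape illustrative rather than the actual NiFi signature:

```java
public class EitherOrValidation {
    // The decision logic behind a customValidate() check for two optional,
    // mutually exclusive properties.  In a real processor this would build
    // ValidationResult objects from a ValidationContext; here it simply
    // returns an error message, or null when the configuration is valid.
    static String checkExactlyOne(String templateContent, String templateFilepath) {
        boolean hasContent = templateContent != null && !templateContent.isEmpty();
        boolean hasPath = templateFilepath != null && !templateFilepath.isEmpty();
        if (hasContent && hasPath) {
            return "Set either 'Template content' or 'Template filepath', not both";
        }
        if (!hasContent && !hasPath) {
            return "One of 'Template content' or 'Template filepath' is required";
        }
        return null; // exactly one supplied: valid
    }

    public static void main(String[] args) {
        System.out.println(checkExactlyOne(null, "/etc/templates/greeting.vm"));
        System.out.println(checkExactlyOne(null, null));
    }
}
```

Each PropertyDescriptor's description can then state the relationship in words, which addresses the documentation concern as well.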

On Thu, May 12, 2016 at 1:35 PM, Russell Bateman
 wrote:
> Joe,
>
> Thanks for your reply.
>
> As I'm thinking about it, validation of the property value isn't so much my
> problem. It's documentation.
>
> If I mark the property documentation for both properties as required, then
> my consumer will wonder what supplying both would mean. However, one of the
> two, but never both is required. If both are supplied (whatever that would
> mean in the mind of the consumer), I ignore the template on the filesystem
> path since I check for the existence of the direct content property first.
>
> Is this dilemma a candidate for the /customValidate/ method you mention?
>
> Best,
> Russ
>
>
> On 05/12/2016 11:28 AM, Joe Witt wrote:
>>
>> Russell,
>>
>> Validators on property descriptors help with validating that property
>> alone.  But in the processor API there is 'customValidate' method you
>> can implement which is used to do things like compound/conditional
>> validation.
>>
>> Thanks
>> Joe
>>
>> On Thu, May 12, 2016 at 1:25 PM, Russell Bateman
>>  wrote:
>>>
>>> How are folk specifying processor properties in the case where one of two
>>> properties is required, but not both? I'm just wondering if there's a
>>> best
>>> practice here or must I say something in the description?
>>>
>>> For example, I've written a processor that implements Apache Velocity
>>> templating. I require the template content be passed either directly as
>>> the
>>> value of a property, "Template content", or a filesystem path to this
>>> content in a property, "Template filepath".
>>>
>


Re: Required, either-or properties

2016-05-12 Thread Russell Bateman

Joe,

Thanks for your reply.

As I'm thinking about it, validation of the property value isn't so much 
my problem. It's documentation.


If I mark the property documentation for both properties as required, 
then my consumer will wonder what supplying both would mean. However, 
one of the two, but never both is required. If both are supplied 
(whatever that would mean in the mind of the consumer), I ignore the 
template on the filesystem path since I check for the existence of the 
direct content property first.


Is this dilemma a candidate for the /customValidate/ method you mention?

Best,
Russ

On 05/12/2016 11:28 AM, Joe Witt wrote:

Russell,

Validators on property descriptors help with validating that property
alone.  But in the processor API there is 'customValidate' method you
can implement which is used to do things like compound/conditional
validation.

Thanks
Joe

On Thu, May 12, 2016 at 1:25 PM, Russell Bateman
 wrote:

How are folk specifying processor properties in the case where one of two
properties is required, but not both? I'm just wondering if there's a best
practice here or must I say something in the description?

For example, I've written a processor that implements Apache Velocity
templating. I require the template content be passed either directly as the
value of a property, "Template content", or a filesystem path to this
content in a property, "Template filepath".





Re: Required, either-or properties

2016-05-12 Thread Joe Witt
Russell,

Validators on property descriptors help with validating that property
alone.  But in the processor API there is 'customValidate' method you
can implement which is used to do things like compound/conditional
validation.

Thanks
Joe

On Thu, May 12, 2016 at 1:25 PM, Russell Bateman
 wrote:
> How are folk specifying processor properties in the case where one of two
> properties is required, but not both? I'm just wondering if there's a best
> practice here or must I say something in the description?
>
> For example, I've written a processor that implements Apache Velocity
> templating. I require the template content be passed either directly as the
> value of a property, "Template content", or a filesystem path to this
> content in a property, "Template filepath".
>


Required, either-or properties

2016-05-12 Thread Russell Bateman
How are folk specifying processor properties in the case where one of 
two properties is required, but not both? I'm just wondering if there's 
a best practice here or must I say something in the description?


For example, I've written a processor that implements Apache Velocity 
templating. I require the template content be passed either directly as 
the value of a property, "Template content", or a filesystem path to 
this content in a property, "Template filepath".




[GitHub] nifi pull request: NIFI-1742: Initial support for component level ...

2016-05-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/435




ReplaceTextWithMapping mapping question

2016-05-12 Thread idioma
Hi,
Is it possible to match the same field value in your input file to 2 or more
different mapping values from your mapping file, or does it only work in a
1-1 fashion?

Thank you




--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/ReplaceTextWithMapping-mapping-question-tp10212.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


[GitHub] nifi pull request: NIFI-1742: Initial support for component level ...

2016-05-12 Thread bbende
Github user bbende commented on the pull request:

https://github.com/apache/nifi/pull/435#issuecomment-218813109
  
+1 Verified component level revisions working as expected, full build 
passes with contrib-check




[GitHub] nifi pull request: NIFI-1742: Initial support for component level ...

2016-05-12 Thread bbende
Github user bbende commented on the pull request:

https://github.com/apache/nifi/pull/435#issuecomment-218805610
  
Reviewing...




[GitHub] nifi pull request: Nifi 1540 - AWS Kinesis Get and Put Processors

2016-05-12 Thread mans2singh
Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/239#discussion_r63047926
  
--- Diff: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/kinesis/consumer/GetKinesis.java
 ---
@@ -0,0 +1,238 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.kinesis.consumer;
+
+import java.io.ByteArrayInputStream;
+import java.net.InetAddress;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.UUID;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnShutdown;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processors.aws.credentials.provider.service.AWSCredentialsProviderService;
+import org.apache.nifi.util.StopWatch;
+
+import com.amazonaws.services.kinesis.clientlibrary.exceptions.InvalidStateException;
+import com.amazonaws.services.kinesis.clientlibrary.exceptions.KinesisClientLibDependencyException;
+import com.amazonaws.services.kinesis.clientlibrary.exceptions.ShutdownException;
+import com.amazonaws.services.kinesis.clientlibrary.exceptions.ThrottlingException;
+import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream;
+import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;
+import com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker;
+import com.amazonaws.services.kinesis.clientlibrary.types.InitializationInput;
+import com.amazonaws.services.kinesis.clientlibrary.types.ProcessRecordsInput;
+import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownInput;
+import com.amazonaws.services.kinesis.model.Record;
+
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_FORBIDDEN)
+@Tags({ "amazon", "aws", "kinesis", "get", "stream" })
+@CapabilityDescription("Get the records from the specified Amazon Kinesis stream")
+@WritesAttributes({
+    @WritesAttribute(attribute = GetKinesis.AWS_KINESIS_CONSUMER_RECORD_APPROX_ARRIVAL_TIMESTAMP, description = "Approximate arrival time of the record"),
+    @WritesAttribute(attribute = GetKinesis.AWS_KINESIS_CONSUMER_RECORD_PARTITION_KEY, description = "Partition key of the record"),
+    @WritesAttribute(attribute = GetKinesis.AWS_KINESIS_CONSUMER_RECORD_SEQUENCE_NUMBER, description = "Sequence number of the record"),
+    @WritesAttribute(attribute = GetKinesis.AWS_KINESIS_CONSUMER_MILLIS_SECONDS_BEHIND, description = "Consumer lag for processing records"),
+    @WritesAttribute(attribute = GetKinesis.KINESIS_CONSUMER_RECORD_START_TIMESTAMP, description = "Timestamp when the particular batch of records was processed"),
+    @WritesAttribute(attribute = GetKinesis.KINESIS_CONSUMER_RECORD_NUBMER, description = "Record number of the record processed in that batch")
+})
+public class GetKinesis extends AbstractKinesisConsumerProcessor implements RecordsHandler {
+
+    /**
+     * Attributes written by processor
+     */
+    public static final String AWS_KINESIS_CONSUMER_RECOR

[GitHub] nifi pull request: Nifi 1540 - AWS Kinesis Get and Put Processors

2016-05-12 Thread mans2singh
Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/239#discussion_r63047805
  
--- Diff: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/AbstractBaseAWSProcessor.java
 ---
@@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessSessionFactory;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.aws.credentials.provider.service.AWSCredentialsProviderService;
+import org.apache.nifi.ssl.SSLContextService;
+
+import com.amazonaws.Protocol;
+import com.amazonaws.regions.Region;
+import com.amazonaws.regions.Regions;
+
+/**
+ * This is a base class for NiFi AWS processors.  This class contains basic property descriptors,
+ * AWS credentials and relationships, and is not dependent on AmazonWebServiceClient classes.
+ * Its subclasses add support for interacting with the respective AWS clients.
+ *
+ * @see AbstractAWSCredentialsProviderProcessor
+ * @see AbstractAWSProcessor
+ * @see AmazonWebServiceClient
+ */
+public abstract class AbstractBaseAWSProcessor extends AbstractSessionFactoryProcessor {
+
+    public static final Relationship REL_SUCCESS = new Relationship.Builder().name("success")
+            .description("FlowFiles are routed to success relationship").build();
+    public static final Relationship REL_FAILURE = new Relationship.Builder().name("failure")
+            .description("FlowFiles are routed to failure relationship").build();
+    public static final Set<Relationship> relationships = Collections.unmodifiableSet(
+            new HashSet<>(Arrays.asList(REL_SUCCESS, REL_FAILURE)));
+    /**
+     * AWS credentials provider service
+     *
+     * @see <a href="http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/AWSCredentialsProvider.html">AWSCredentialsProvider</a>
+     */
+    public static final PropertyDescriptor AWS_CREDENTIALS_PROVIDER_SERVICE = new PropertyDescriptor.Builder()
+            .name("AWS Credentials Provider service")
+            .description("The Controller Service that is used to obtain aws credentials provider")
+            .required(false)
+            .identifiesControllerService(AWSCredentialsProviderService.class)
+            .build();
+
+    public static final PropertyDescriptor CREDENTIALS_FILE = new PropertyDescriptor.Builder()
+            .name("Credentials File")
+            .expressionLanguageSupported(false)
+            .required(false)
+            .addValidator(StandardValidators.FILE_EXISTS_VALIDATOR)
+            .build();
+    public static final PropertyDescriptor ACCESS_KEY = new PropertyDescriptor.Builder()
+            .name("Access Key")
+            .expressionLanguageSupported(true)
+            .required(false)
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .sensitive(true)
+            .build();
+    public static final PropertyDescriptor SECRET_KEY = new PropertyDescrip

[GitHub] nifi pull request: Nifi 1540 - AWS Kinesis Get and Put Processors

2016-05-12 Thread mans2singh
Github user mans2singh commented on a diff in the pull request:

https://github.com/apache/nifi/pull/239#discussion_r63047632
  
--- Diff: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/AbstractBaseAWSProcessor.java
 ---
@@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessSessionFactory;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.aws.credentials.provider.service.AWSCredentialsProviderService;
+import org.apache.nifi.ssl.SSLContextService;
+
+import com.amazonaws.Protocol;
+import com.amazonaws.regions.Region;
+import com.amazonaws.regions.Regions;
+
+/**
+ * This is a base class of Nifi AWS Processors.  This class contains basic property
+ * descriptors, AWS credentials, and relationships, and is not dependent on
+ * AmazonWebServiceClient classes.  Its subclasses add support for interacting with
+ * their respective AWS clients.
+ *
+ * @see AbstractAWSCredentialsProviderProcessor
+ * @see AbstractAWSProcessor
+ * @see AmazonWebServiceClient
+ */
+public abstract class AbstractBaseAWSProcessor extends AbstractSessionFactoryProcessor {
+
+    public static final Relationship REL_SUCCESS = new Relationship.Builder().name("success")
+            .description("FlowFiles are routed to success relationship").build();
+    public static final Relationship REL_FAILURE = new Relationship.Builder().name("failure")
+            .description("FlowFiles are routed to failure relationship").build();
+    public static final Set<Relationship> relationships = Collections.unmodifiableSet(
+            new HashSet<>(Arrays.asList(REL_SUCCESS, REL_FAILURE)));
+    /**
+     * AWS credentials provider service
+     *
+     * @see <a href="http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/AWSCredentialsProvider.html">AWSCredentialsProvider</a>
+     */
+    public static final PropertyDescriptor AWS_CREDENTIALS_PROVIDER_SERVICE = new PropertyDescriptor.Builder()
+            .name("AWS Credentials Provider service")
+            .description("The Controller Service that is used to obtain aws credentials provider")
+            .required(false)
+            .identifiesControllerService(AWSCredentialsProviderService.class)
+            .build();
+
+    public static final PropertyDescriptor CREDENTIALS_FILE = new PropertyDescriptor.Builder()
+            .name("Credentials File")
+            .expressionLanguageSupported(false)
+            .required(false)
+            .addValidator(StandardValidators.FILE_EXISTS_VALIDATOR)
+            .build();
+    public static final PropertyDescriptor ACCESS_KEY = new PropertyDescriptor.Builder()
+            .name("Access Key")
+            .expressionLanguageSupported(true)
+            .required(false)
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .sensitive(true)
+            .build();
+    public static final PropertyDescriptor SECRET_KEY = new PropertyDescrip
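
The Credentials File and Access/Secret Key descriptors above are mutually exclusive in practice, which is the kind of rule a processor's customValidate typically enforces. A minimal, self-contained sketch of that check (class and method names are illustrative, not the actual NiFi AWS processor code):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for the credential-property validation such a
// processor usually performs; not the real AbstractBaseAWSProcessor code.
class AwsCredentialPropertyCheck {

    /** Returns validation problems for the supplied property values (null/empty = unset). */
    static List<String> validate(String credentialsFile, String accessKey, String secretKey) {
        List<String> problems = new ArrayList<>();
        boolean hasFile = credentialsFile != null && !credentialsFile.isEmpty();
        boolean hasAccess = accessKey != null && !accessKey.isEmpty();
        boolean hasSecret = secretKey != null && !secretKey.isEmpty();

        if (hasFile && (hasAccess || hasSecret)) {
            problems.add("Cannot set both Credentials File and Access/Secret Keys");
        } else if (hasAccess != hasSecret) {
            problems.add("Access Key and Secret Key must be set together");
        }
        return problems;
    }
}
```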

[GitHub] nifi pull request: NIFI-1872: Ignore failing unit test for now unt...

2016-05-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/437


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request: NIFI-1872: Ignore failing unit test for now unt...

2016-05-12 Thread joewitt
Github user joewitt commented on the pull request:

https://github.com/apache/nifi/pull/437#issuecomment-218803112
  
i am certainly a +1 on ignoring this given that it provides a 
false-positive test failure making the build unstable.  I think this is both 
a 1.0 and a 0.x thing.




[GitHub] nifi pull request: NIFI-1872: Ignore failing unit test for now unt...

2016-05-12 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/437

NIFI-1872: Ignore failing unit test for now until we can properly address

Since it is on master it's best to ignore, since the problem appears to be 
the lifecycle of the unit test

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-1872

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/437.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #437


commit 687a686b21457df2f5503ff5535c3d18e5949189
Author: Mark Payne 
Date:   2016-05-12T15:57:40Z

NIFI-1872: Ignore failing unit test for now until we can properly address; 
since it is on master it's best to ignore, since the problem appears to be the 
lifecycle of the unit test






[GitHub] nifi pull request: NIFI-1296, NIFI-1680, NIFI-1764, NIFI-1837, NIF...

2016-05-12 Thread joewitt
Github user joewitt commented on the pull request:

https://github.com/apache/nifi/pull/366#issuecomment-218793849
  
Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 441.183 sec 
- in org.apache.nifi.processors.kafka.pubsub.ConsumeKafkaTest
Exception: java.lang.OutOfMemoryError thrown from the 
UncaughtExceptionHandler in thread "main"




[GitHub] nifi pull request: NIFI-1742: Initial support for component level ...

2016-05-12 Thread mcgilman
Github user mcgilman commented on the pull request:

https://github.com/apache/nifi/pull/435#issuecomment-218790360
  
The PR has been updated with the work I described last night.




[GitHub] nifi pull request: NIFI-1296, NIFI-1680, NIFI-1764, NIFI-1837, NIF...

2016-05-12 Thread joewitt
Github user joewitt commented on the pull request:

https://github.com/apache/nifi/pull/366#issuecomment-218787020
  
[INFO] reporting-task.css (2512b) -> reporting-task.css (1264b)[50%] -> 
reporting-task.css.gz (488b)[19%]
Tests run: 5, Failures: 2, Errors: 0, Skipped: 1, Time elapsed: 81.054 sec 
<<< FAILURE! - in 
org.apache.nifi.processors.kafka.pubsub.AbstractKafkaProcessorLifecycelTest

validateConcurrencyWithAllFailures(org.apache.nifi.processors.kafka.pubsub.AbstractKafkaProcessorLifecycelTest)
  Time elapsed: 29.154 sec  <<< FAILURE!
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.nifi.processors.kafka.pubsub.AbstractKafkaProcessorLifecycelTest.validateConcurrencyWithAllFailures(AbstractKafkaProcessorLifecycelTest.java:369)


validateConcurrencyWithAllSuccesses(org.apache.nifi.processors.kafka.pubsub.AbstractKafkaProcessorLifecycelTest)
  Time elapsed: 33.284 sec  <<< FAILURE!
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.nifi.processors.kafka.pubsub.AbstractKafkaProcessorLifecycelTest.validateConcurrencyWithAllSuccesses(AbstractKafkaProcessorLifecycelTest.java:291)





[GitHub] nifi pull request: NIFI-1858 Adding site-to-site reporting bundle

2016-05-12 Thread bbende
Github user bbende commented on the pull request:

https://github.com/apache/nifi/pull/436#issuecomment-218766817
  
@markap14 here is the corresponding PR for merging the site-to-site 
reporting bundle into master




[GitHub] nifi pull request: NIFI-1858 Adding site-to-site reporting bundle

2016-05-12 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/436

NIFI-1858 Adding site-to-site reporting bundle

PR for merging this into master.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi NIFI-1858-MASTER

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/436.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #436


commit ce13a2e02b88ab6f9a51c440c122645ba242a89f
Author: Bryan Bende 
Date:   2016-05-12T13:59:27Z

NIFI-1858 Adding site-to-site reporting bundle






[GitHub] nifi pull request: NIFI-1858 Adding SiteToSiteProvenanceReportingT...

2016-05-12 Thread bbende
Github user bbende commented on the pull request:

https://github.com/apache/nifi/pull/419#issuecomment-218765899
  
@markap14 thanks, going to close this PR and initiate a new one against 
master.




[GitHub] nifi pull request: NIFI-1858 Adding SiteToSiteProvenanceReportingT...

2016-05-12 Thread bbende
Github user bbende closed the pull request at:

https://github.com/apache/nifi/pull/419




[GitHub] nifi pull request: NIFI-1296, NIFI-1680, NIFI-1764, NIFI-1837, NIF...

2016-05-12 Thread joewitt
Github user joewitt commented on the pull request:

https://github.com/apache/nifi/pull/366#issuecomment-218763305
  
applying the patch version of this fails.
merging this branch fails.
Auto-merging 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-pubsub-processors/src/test/resources/zookeeper.properties
CONFLICT (add/add): Merge conflict in 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-pubsub-processors/src/test/resources/zookeeper.properties
Auto-merging 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-pubsub-processors/src/test/resources/server.properties
CONFLICT (add/add): Merge conflict in 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-pubsub-processors/src/test/resources/server.properties
Auto-merging 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-pubsub-processors/src/test/resources/log4j.properties
CONFLICT (add/add): Merge conflict in 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-pubsub-processors/src/test/resources/log4j.properties
Auto-merging 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-pubsub-processors/src/main/resources/docs/org.apache.nifi.processors.kafka.pubsub.PublishKafka/additionalDetails.html
CONFLICT (add/add): Merge conflict in 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-pubsub-processors/src/main/resources/docs/org.apache.nifi.processors.kafka.pubsub.PublishKafka/additionalDetails.html
Auto-merging 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-pubsub-processors/src/main/resources/docs/org.apache.nifi.processors.kafka.pubsub.ConsumeKafka/additionalDetails.html
CONFLICT (add/add): Merge conflict in 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-pubsub-processors/src/main/resources/docs/org.apache.nifi.processors.kafka.pubsub.ConsumeKafka/additionalDetails.html
Auto-merging 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-pubsub-processors/pom.xml
CONFLICT (add/add): Merge conflict in 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-pubsub-processors/pom.xml
Auto-merging 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-pubsub-nar/src/main/resources/META-INF/NOTICE
CONFLICT (add/add): Merge conflict in 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-pubsub-nar/src/main/resources/META-INF/NOTICE
Auto-merging 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-pubsub-nar/src/main/resources/META-INF/LICENSE
CONFLICT (add/add): Merge conflict in 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-pubsub-nar/src/main/resources/META-INF/LICENSE
Automatic merge failed; fix conflicts and then commit the result.

The logs for travis-ci show an unused import (probably fixed by now).  
Please update patch to resolve merge conflicts and I'll try again.  Also, 
likely worthwhile to have an 0.x and a master version of this PR.




Re: Reg: Get files from ftp

2016-05-12 Thread Mark Payne
Sourav,

Some of the older List*** Processors have a property for a "Distributed Cache 
Service." If that is
populated, then the Processor will pull the state from there upon restart. 
However, newer versions
of NiFi do not store state there - they will only restore from there upon 
restart. State is stored only
in ZooKeeper (whichever instance is configured in your conf/state-management.xml 
file). If there is
no ZooKeeper instance that is properly configured there, it will fail to store 
any state and you'll end
up re-listing those files when NiFi restarts.

Thanks
-Mark
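
For reference, the cluster state provider Mark mentions is configured in conf/state-management.xml. A typical ZooKeeper provider entry looks roughly like the following (the connect string, IDs, and property values are placeholders to adapt to your environment):

```xml
<stateManagement>
    <local-provider>
        <id>local-provider</id>
        <class>org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider</class>
        <property name="Directory">./state/local</property>
    </local-provider>
    <cluster-provider>
        <id>zk-provider</id>
        <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
        <property name="Connect String">zk-host1:2181,zk-host2:2181</property>
        <property name="Root Node">/nifi</property>
        <property name="Session Timeout">10 seconds</property>
        <property name="Access Control">Open</property>
    </cluster-provider>
</stateManagement>
```

The cluster-provider's id must match the nifi.state.management.provider.cluster entry in nifi.properties for clustered state to be stored.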


> On May 12, 2016, at 1:13 AM, Sourav Gulati  
> wrote:
> 
> Mark,
> 
> My question is that I am running NIFI in clustered mode but embedded 
> zookeeper is not running. However, I can see that  it is still saving state 
> of the files being listed.
> 
> Since zookeeper is not running, where the state is being saved by default in 
> clustered mode?
> 
> Regards,
> Sourav Gulati
> 
> -Original Message-
> From: Mark Payne [mailto:marka...@hotmail.com]
> Sent: Wednesday, May 11, 2016 7:17 PM
> To: dev@nifi.apache.org
> Subject: Re: Reg: Get files from ftp
> 
> Sourav,
> 
> If you run an embedded zookeeper, then yes, it runs within the NiFi JVM and 
> stores state (by default) in the ./state/zookeeper directory.
> 
> Thanks
> -Mark
> 
> 
>> On May 11, 2016, at 9:14 AM, Sourav Gulati  
>> wrote:
>> 
>> Mark,
>> 
>> Does the zookeeper process run inside the Nifi JVM? If yes, what is the default 
>> path of zookeeper data directory?
>> 
>> Regards,
>> Sourav Gulati
>> 
>> -Original Message-
>> From: Mark Payne [mailto:marka...@hotmail.com]
>> Sent: Wednesday, May 11, 2016 6:37 PM
>> To: dev@nifi.apache.org
>> Subject: Re: Reg: Get files from ftp
>> 
>> Sourav,
>> 
>> If your NiFi instance is clustered, it will store the information in
>> ZooKeeper. If not clustered, it will store the state in a local file.
>> This is done because in a cluster, you typically want to run your
>> List*** Processors on Primary Node only, and this allows another node to 
>> pick up where the previous one left off if the Primary Node changes. Of 
>> course, storing all of the files that have been listed can become very 
>> verbose so it stores only a small amount of data -- the timestamp of the 
>> latest file discovered and the timestamp of the latest file processed/listed. 
>> It can then use this information to determine if files are new or modified 
>> without storing much info.
>> 
>> Thanks
>> -Mark
>> 
>>> On May 11, 2016, at 12:39 AM, Sourav Gulati  
>>> wrote:
>>> 
>>> Thanks Matthew,
>>> A quick question: Where does it store the state of files already listed?
>>> 
>>> 
>>> Regards,
>>> Sourav Gulati
>>> 
>>> -Original Message-
>>> From: Matthew Clarke [mailto:matt.clarke@gmail.com]
>>> Sent: Wednesday, May 11, 2016 3:37 AM
>>> To: dev@nifi.apache.org
>>> Subject: Re: Reg: Get files from ftp
>>> 
>>> The list type processors are designed to use NiFi state management to keep 
>>> from listing the same files twice. The fetch type processors will retrieve 
>>> files based on the FlowFiles it is fed. Typically those FlowFiles it works 
>>> from come from the corresponding list processor.
>>> On May 10, 2016 8:56 AM, "Mark Payne"  wrote:
>>> 
 Sourav,
 
 Sure. Within the nifi-standard-processors bundle are a few classes
 that would be important here.
 First is the AbstractListProcessor. You'll want to use this as your
 base class for ListFTP. Also, FetchFileTransfer will be the class
 that you'll extend for the FetchFTP processor.
 
 The ListSFTP and FetchSFTP are great examples to look at as examples.
 
 Additionally, the GetFTP and GetSFTP are good examples to look at as
 to how the FTP & SFTP implementations differ. They basically differ
 in the Property Descriptors provided and the FileTransfer object
 that is used.
 
 If you have any questions, please feel free to reach out to this
 mailing list. Very happy to help however we can!
 
 Thanks
 -Mark
 
 
> On May 10, 2016, at 1:30 AM, Sourav Gulati
> 
 wrote:
> 
> Sure Mark. I am interested to work on it. Please provide some
> pointers
 regarding that.
> 
> Also, I will check if Sftp can be used. So ListSFTP / FetchSFTP
> won't
 pick files more than once?
> 
> Regards,
> Sourav Gulati
> 
> -Original Message-
> From: Mark Payne [mailto:marka...@hotmail.com]
> Sent: Monday, May 09, 2016 5:34 PM
> To: dev@nifi.apache.org
> Subject: Re: Reg: Get files from ftp
> 
> Sourav,
> 
> We have begun transitioning from many of the Get*** Processors to
 List*** and Fetch*** Processors.
> There is a ListSFTP / FetchSFTP processor set but not currently a
 List/Fetch FTP. Is SFTP a possibility for you? Would you be
 interested in working on a List/Fetch FTP Processor set?
> 
> Thanks
> -Mark
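
Mark's description above of how the List*** processors keep only a timestamp watermark (rather than every file name ever listed) can be sketched as follows. ListingState and shouldList are illustrative names, not the actual AbstractListProcessor implementation:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch: remember only the newest modification timestamp seen,
// plus the names listed at exactly that timestamp, so a restart or re-run
// does not list the same files twice while storing very little state.
class ListingState {
    private long latestTimestamp = Long.MIN_VALUE;
    private final Set<String> namesAtLatest = new HashSet<>();

    /** Returns true if this file is new or modified since the last listing. */
    boolean shouldList(String name, long modifiedTime) {
        if (modifiedTime < latestTimestamp) {
            return false;                      // older than the watermark: already listed
        }
        if (modifiedTime == latestTimestamp) {
            return namesAtLatest.add(name);    // same instant: dedupe by file name
        }
        latestTimestamp = modifiedTime;        // strictly newer: advance the watermark
        namesAtLatest.clear();
        namesAtLatest.add(name);
        return true;
    }
}
```

Persisting just latestTimestamp and namesAtLatest (to ZooKeeper when clustered, or a local file otherwise) is enough for another node to pick up where the previous one left off.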

Re: Exception while restarting the Nifi Cluster

2016-05-12 Thread Mark Payne
Sourav,

This certainly is something that we can improve upon, and there is actually 
already a
ticket created to handle this [1]. The idea is that we want to startup with 
some sort of
placeholder there so that the user is able to replace the processor with 
whatever makes
sense.

Thanks
-Mark


[1] https://issues.apache.org/jira/browse/NIFI-1052 




> On May 12, 2016, at 1:40 AM, Sourav Gulati  
> wrote:
> 
> Hi Nifi Team,
> 
> We analyzed the issue and found that it is because of custom processor 
> library missing in lib.
> 
> Steps to reproduce:
> 1. Create nar of custom processor and add it to the lib.
> 2. Start nifi and create a flow using that custom processor.
> 3. Stop nifi and remove the nar of custom processor
> 4. Start nifi
> 
> While starting, it throws this exception. I think that instead of the Nifi 
> instance failing to start and throwing an error in such a case, it should 
> start with the particular flow that uses this library disabled.
> 
> Regards,
> Sourav Gulati
> 
> -Original Message-
> From: dale.chang13 [mailto:dale.chan...@outlook.com]
> Sent: Wednesday, May 11, 2016 5:24 PM
> To: dev@nifi.apache.org
> Subject: Re: Exception while restarting the Nifi Cluster
> 
> Rahul Dahiya wrote
>> Hi Team,
>> 
>> 
>> I am getting below exception while trying to restart the NiFi nodes :
>> 
>> 
>> java.lang.Exception: Unable to load flow due to: java.io.IOException:
>> org.apache.nifi.cluster.ConnectionException: Failed to connect node to
>> cluster because local flow controller partially updated.
>> Administrator should disconnect node and review flow for corruption.
>>at
>> org.apache.nifi.web.server.JettyServer.start(JettyServer.java:783)
>> ~[nifi-jetty-0.6.1.jar:0.6.1]
>>at org.apache.nifi.NiFi.
>> 
>> (NiFi.java:137) [nifi-runtime-0.6.1.jar:0.6.1]
>>at org.apache.nifi.NiFi.main(NiFi.java:227)
>> [nifi-runtime-0.6.1.jar:0.6.1]
>> 
>> 
>> 
>> Based on the following link :
>> 
>> https://mail-archives.apache.org/mod_mbox/nifi-dev/201508.mbox/%
> 
>> 3CBAY172-W19DF8C4EDE001FC6B8CA0ECE6B0@
> 
>> %3E
>> 
>> 
>> it seems that the issue could be because of trailing white space / 
>> incorrect entries in the following properties files
>> 
>> 
>> nifi.sensitive.props.key
>> nifi.sensitive.props.algorithm
>> nifi.sensitive.props.provider
>> 
>> 
>> I've checked these properties files on all the nodes and they are
>> exactly the same on all nodes with no trailing white space.
>> 
>> 
>> Can someone please help on what could be the root cause of this
>> problem and how can it be resolved .
>> 
>> 
>> Also I don't want to clean the nifi working directories (I know it
>> will work fine on cleaning the directories). Thanks in advance for the help.
>> 
>> 
>> Regards,
>> 
>> Rahul
> 
> Could you give us the rest of the stack trace? It should contain more 
> information that would help us further diagnose your problem
> 
> 
> 
> --
> View this message in context: 
> http://apache-nifi-developer-list.39713.n7.nabble.com/Exception-while-restarting-the-Nifi-Cluster-tp10135p10137.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.
> 
> 
> 
> 
> 
> 
> 
> 
> NOTE: This message may contain information that is confidential, proprietary, 
> privileged or otherwise protected by law. The message is intended solely for 
> the named addressee. If received in error, please destroy and notify the 
> sender. Any use of this email is prohibited when received in error. Impetus 
> does not represent, warrant and/or guarantee, that the integrity of this 
> communication has been maintained nor that the communication is free of 
> errors, virus, interception or interference.