Re: notEmpty

2016-03-22 Thread Andy LoPresto
Paul,

Your comments regarding the accuracy of the expression language documentation 
were in a different message, so it’s entirely possible and likely that Matt has 
not seen them yet. In general, when a member of the team or community has 
responded to a message, the remainder of the community does not reply unless 
they have additional information or are interested in furthering the 
conversation. As your comments were addressed in the other thread, I would not 
expect Matt to respond to them here.

My understanding is that your request is for a single expression language 
method which evaluates *n* terms and returns true if all of the terms are 
populated (i.e., not null, not empty, and not whitespace-only). Thus far, 
composability has been favored: with limited developer resources, it would be 
inefficient for us to try to produce a single method for every possible 
combination. Rather, we have focused on building the common component methods 
which users can combine to form their complex queries. At this point, I don't 
see a significant advantage of “allAttributes(x, y, z):notEmpty()” over 
“allAttributes(x, y, z):isEmpty():not()”, but if you feel differently, please 
explain why. Without looking into it further, I don't believe it would be a 
significant development effort, but right now all the developers are pretty 
heavily loaded with new feature work. Our roadmap is fairly accelerated with 
many highly requested features.

Matt’s answer above does provide the result you are looking for as we 
understand it. I accept that the “English” reading of the expression language 
may indicate otherwise, but the empirical evidence from running the expression 
through the parser indicates it returns true if and only if all attributes 
specified have “non-empty” values as defined above. This is probably another 
opportunity for us to improve the documentation around these methods (changing 
the name of the methods or the order of operation of chained methods would 
break backward compatibility). I wrote a unit test case which specifically 
evaluates the following three scenarios:

* all attributes are empty -> FALSE
* some attributes are empty, some are populated -> FALSE
* all attributes are populated -> TRUE

I have provided a very truncated version of TestQuery.java [1] with only the 
relevant unit test and helper method here [2]. The query you want can be 
expressed as:

Java String: `"${allAttributes(\"attr1\", \"attr2\", 
\"attr3\"):isEmpty():not()}"`

Typed into processor property: `${allAttributes("attr1", "attr2", 
"attr3"):isEmpty():not()}`

Note: Backticks (`) are for formatting and should not be included when typing 
the query.

If it helps for understanding, this is how I mentally parse the chain:

`allAttributes("1", "2", "3"):isEmpty():not()` -> `"1":isEmpty():not() && 
"2":isEmpty():not() && "3":isEmpty():not()`
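To make that distribution concrete, here is a plain-Java sketch of the semantics described above. This is my own illustration, not NiFi code; the `isEmpty` and `allPopulated` names are invented for the example.

```java
import java.util.Arrays;

// Plain-Java illustration (not NiFi code) of the semantics of
// ${allAttributes(...):isEmpty():not()}: the chained functions are applied
// to each attribute value and the per-attribute results are AND-ed together.
public class AllAttributesDemo {

    // Mimics isEmpty(): true when the value is null, empty, or whitespace-only.
    static boolean isEmpty(String value) {
        return value == null || value.trim().isEmpty();
    }

    // Mimics allAttributes(...):isEmpty():not() over the given attribute values.
    static boolean allPopulated(String... values) {
        return Arrays.stream(values).allMatch(v -> !isEmpty(v));
    }

    public static void main(String[] args) {
        System.out.println(allPopulated("", " ", ""));   // all empty -> false
        System.out.println(allPopulated("a", "", "c"));  // some empty -> false
        System.out.println(allPopulated("a", "b", "c")); // all populated -> true
    }
}
```

The three calls in main correspond to the three unit-test scenarios listed earlier.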

One piece of advice: everyone on the lists is trying to help the community and 
offering the expertise they can, and often in their free/personal time. On 
mailing lists like this, typing long phrases in all caps comes across 
aggressively or as “yelling” and can dissuade people from spending their energy 
assisting with requests. Keeping the lists welcoming and encouraging for users 
of all skill levels is a goal of the community in order to encourage as much 
participation as possible.

Please let us know if you have any further issues or suggestions. Thanks.

[1] 
https://github.com/alopresto/nifi/blob/28c2a3e5a65e640935f0259bd162967ef3e0b396/nifi-commons/nifi-expression-language/src/test/java/org/apache/nifi/attribute/expression/language/TestQuery.java
[2] https://gist.github.com/alopresto/faff5051f3c7b7102990

Andy LoPresto
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

> On Mar 22, 2016, at 3:39 PM, Paul Nahay  wrote:
> 
> Thanks for responding, however...
> 
> 
> 
> Your expression below evaluates to true if IT IS NOT THE CASE THAT ALL THREE 
> ATTRIBUTES ARE EMPTY.
> 
> 
> 
> But I need an expression that evaluates to true if IT IS THE CASE THAT ALL 
> THREE ATTRIBUTES ARE NOT EMPTY.
> 
> 
> 
> These are two entirely different things.
> 
> 
> 
> Thus, if attr1 is set (thus is “not empty”), but the others are not (thus 
> they “are empty”), your expression evaluates to true.
> 
> 
> 
> But I need to ensure that NONE of the three “are empty”. I want an expression 
> that in the case given above evaluates to false.
> 
> 
> 
> Actually, I know how to make an expression that does what I need, however, it 
> is more confusing than it needs to be. The simpler expression that one would 
> desire to write REQUIRES that you offered a “notEmpty” function, thus would 
> be expressible as:
> 
> 
> 
> ${allAttributes('attr1','attr2','attr3'):notEmpty()}
> 
> 
> 
> Again, this gives true only if all three attributes are not empty. The 
> expression you suggest gives true if even only ONE of the attributes is not 
> empty!

RE: notEmpty

2016-03-22 Thread Paul Nahay
Thanks for responding, however...

 

Your expression below evaluates to true if IT IS NOT THE CASE THAT ALL THREE 
ATTRIBUTES ARE EMPTY.

 

But I need an expression that evaluates to true if IT IS THE CASE THAT ALL THREE 
ATTRIBUTES ARE NOT EMPTY.

 

These are two entirely different things.

 

Thus, if attr1 is set (thus is “not empty”), but the others are not (thus they 
“are empty”), your expression evaluates to true.

 

But I need to ensure that NONE of the three “are empty”. I want an expression 
that in the case given above evaluates to false.

 

Actually, I know how to make an expression that does what I need, however, it 
is more confusing than it needs to be. The simpler expression that one would 
desire to write REQUIRES that you offered a “notEmpty” function, thus would be 
expressible as:

 

${allAttributes('attr1','attr2','attr3'):notEmpty()}

 

Again, this gives true only if all three attributes are not empty. The 
expression you suggest gives true if even only ONE of the attributes is not 
empty!

 

The lack of “notEmpty()” seems a very grave omission on your part. 

 

And, you have no comment about the errors in your documentation for “isEmpty” 
and “allAttributes” that I told you?

 

--Paul Nahay

 

 

 

From: Matthew Clarke [mailto:matt.clarke@gmail.com] 
Sent: Tuesday, March 22, 2016 11:07 AM
To: dev@nifi.apache.org; Paul Nahay
Subject: Re: notEmpty

 

Paul,

You can achieve what you are trying to do by using the not function.

 

Let's assume the attributes you want to check for a set value are attr1, attr2, 
and attr3.

The expression would be 
${allAttributes('attr1','attr2','attr3'):isEmpty():not()}

Thanks,

Matt

 

On Tue, Mar 22, 2016 at 9:46 AM, Paul Nahay  wrote:

https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#isempty

Why don't you have function "notEmpty"? This would be useful when combined with 
"allAttributes".

I can't see any way, with your current set of functions, to determine that all 
of a set of attributes are not empty, using the "allAttributes" function.

You have "isNull" and "notNull", why don't you have "notEmpty"?

Paul Nahay
pna...@sprynet.com

 



[GitHub] nifi pull request: NIFI-1665 fixed GetKafka to reset consumer in c...

2016-03-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/296


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request: NIFI-1645 refactored PutKafka

2016-03-22 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/295#discussion_r57103757
  
--- Diff: nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/main/java/org/apache/nifi/processors/kafka/PutKafka.java ---
@@ -371,11 +371,12 @@ private FlowFile cleanUpFlowFileIfNecessary(FlowFile flowFile, ProcessSession se
     /**
      *
      */
-    private Object determinePartition(SplittableMessageContext messageContext, ProcessContext context, FlowFile flowFile) {
+    private Integer determinePartition(SplittableMessageContext messageContext, ProcessContext context,
+            FlowFile flowFile) {
         String partitionStrategy = context.getProperty(PARTITION_STRATEGY).getValue();
-        String partitionValue = null;
+        Integer partitionValue = null;
         if (partitionStrategy.equalsIgnoreCase(USER_DEFINED_PARTITIONING.getValue())) {
-            partitionValue = context.getProperty(PARTITION).evaluateAttributeExpressions(flowFile).getValue();
+            partitionValue = Integer.parseInt(context.getProperty(PARTITION).evaluateAttributeExpressions(flowFile).getValue());
--- End diff --

Agreed and done; will wait for your morning once-over and will make one 
last push




[GitHub] nifi pull request: NIFI-1645 refactored PutKafka

2016-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/295#discussion_r57096148
  
--- Diff: nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/main/java/org/apache/nifi/processors/kafka/KafkaPublisher.java ---
@@ -104,21 +128,52 @@ BitSet publish(SplittableMessageContext messageContext, InputStream contentStrea
                 byte[] content = scanner.next().getBytes();
                 if (content.length > 0){
                     byte[] key = messageContext.getKeyBytes();
-                    partitionKey = partitionKey == null ? key : partitionKey; // the whole thing may still be null
                     String topicName = messageContext.getTopicName();
+                    if (partitionKey == null && key != null) {
+                        partitionKey = this.getPartition(key, topicName);
+                    }
                     if (prevFailedSegmentIndexes == null || prevFailedSegmentIndexes.get(segmentCounter)) {
-                        KeyedMessage message = new KeyedMessage(topicName, key, partitionKey, content);
-                        if (!this.toKafka(message)) {
-                            failedSegments.set(segmentCounter);
-                        }
+                        ProducerRecord message = new ProducerRecord(topicName, partitionKey, key, content);
+                        sendFutures.add(this.toKafka(message));
                     }
                 }
                 segmentCounter++;
             }
         }
+        segmentCounter = 0;
+        for (Future future : sendFutures) {
+            try {
+                future.get(this.ackWaitTime, TimeUnit.MILLISECONDS);
--- End diff --

Here, we could end up waiting the ack wait time for each one of the 
messages. If we sent 1,000 messages we could wait 1,000 times the length of the 
ack wait time. Perhaps we should instead do something like:
```
final long endTime = System.currentTimeMillis() + ackWaitTime;
for (Future future : sendFutures) {
    long toWait = endTime - System.currentTimeMillis();
    if (toWait < 0 && !future.isDone()) {
        // consider failure
    } else {
        try {
            future.get(toWait, TimeUnit.MILLISECONDS);
        } catch (...) {
            ...
        }
    }
}
```
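For reference, the sketch above can be fleshed out into a self-contained, runnable form. The class and method names here are illustrative (not from the PR), and CompletableFuture stands in for Kafka's send futures.

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Runnable sketch of the shared-deadline idea: every future draws from one
// ackWaitTime budget instead of each getting the full wait on its own.
public class SharedDeadlineWait {

    // Returns the indexes of the sends that did not complete before the deadline.
    static BitSet awaitAll(List<Future<?>> sendFutures, long ackWaitTimeMillis) {
        BitSet failed = new BitSet();
        long endTime = System.currentTimeMillis() + ackWaitTimeMillis;
        for (int i = 0; i < sendFutures.size(); i++) {
            Future<?> future = sendFutures.get(i);
            long toWait = endTime - System.currentTimeMillis();
            try {
                if (toWait <= 0 && !future.isDone()) {
                    failed.set(i);                            // budget exhausted
                } else {
                    future.get(Math.max(toWait, 0), TimeUnit.MILLISECONDS);
                }
            } catch (TimeoutException e) {
                failed.set(i);                                // ran past the deadline
            } catch (Exception e) {
                failed.set(i);                                // the send itself failed
            }
        }
        return failed;
    }

    // Demo: one completed send and one that never completes.
    static String demo() {
        List<Future<?>> futures = new ArrayList<>();
        futures.add(CompletableFuture.completedFuture("ok"));
        futures.add(new CompletableFuture<String>());         // never completes
        return awaitAll(futures, 50).toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // {1}
    }
}
```

With this approach the total wait is bounded by one ackWaitTime, regardless of how many sends are outstanding.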




Re: Errors in your Documentation

2016-03-22 Thread Andy LoPresto
I’ve also created a ticket [1] for the documentation improvements so you can 
follow the progress on that. As this is an open source project, submissions are 
always welcomed as well. You can follow the steps in the contributor guide [2] 
and quickstart guide [3] for guidance on submitting a pull request.

[1] https://issues.apache.org/jira/browse/NIFI-1670 

[2] https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide
[3] https://nifi.apache.org/quickstart.html

Andy LoPresto
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

> On Mar 22, 2016, at 5:03 PM, Andy LoPresto  wrote:
> 
> Thanks Paul.
> 
> Bugs and feature requests can be submitted here [1] which will ensure they 
> are seen by the entire team/community. The most helpful reports include 
> screenshots when applicable, current system description, and potential use 
> cases or unit tests to verify issue resolution.
> 
> Very quickly, you can accomplish a “non-edit mode” by assigning 
> “ROLE_MONITOR” to a user in the authorized-users.xml configuration file, but 
> currently a feature whereby a single user can switch back and forth between 
> those two modes instantaneously does not exist. Can you describe the use case 
> where you see this being valuable? Are you having trouble and accidentally 
> modifying items on the canvas?
> 
> [1] 
> https://issues.apache.org/jira/browse/NIFI/?selectedTab=com.atlassian.jira.jira-projects-plugin:summary-panel
>  
> 
> 
> Andy LoPresto
> alopresto.apa...@gmail.com 
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
> 
>> On Mar 22, 2016, at 4:55 PM, Paul Nahay wrote:
>> 
>> Great.
>> 
>> I can tell you a list of things I think need to be improved in NiFi. The 
>> most important is having two “modes” that one is always in, an “edit mode”, 
>> which is basically what you have 100% of the time now, and a “non-edit 
>> mode”, where the user cannot move things (inadvertently) around on the 
>> canvas.
>> 
>> Oh, and you desperately need an “undo”.
>> 
>> Paul
>> 
>> From: Andy LoPresto [mailto:alopresto.apa...@gmail.com]
>> Sent: Tuesday, March 22, 2016 7:51 PM
>> To: dev@nifi.apache.org ; Paul Nahay
>> Cc: Jonathan Wood
>> Subject: Re: Errors in your Documentation
>> 
>> Thanks Paul. We always welcome feedback that helps us improve the product 
>> and documentation. I can’t promise those documentation fixes will be 
>> released in 0.6.0 but we will try to get them out as soon as possible.
>> 
>> Andy LoPresto
>> alopresto.apa...@gmail.com 
>> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>> 
>>> On Mar 22, 2016, at 5:52 AM, Paul Nahay wrote:
>>> 
>>> I'm looking at:
>>> 
>>> https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html 
>>> 
>>> 
>>> 
>>> isEmpty
>>> Description: The isEmpty function returns true if the Subject is null or 
>>> contains only white-space (new line, carriage return, space, tab), false 
>>> otherwise.
>>> 
>>> This logically implies that isEmpty returns FALSE if the Subject contains 
>>> NO CHARACTERS AT ALL (not even white-space). This makes no sense at all.
>>> 
>>> 
>>> allAttributes
>>> Description: Checks to see if any of the given attributes, match the given 
>>> condition.
>>> 
>>> Hopefully you actually mean "all", not "any".
>>> 
>>> What's funny here is that THESE TWO functions were the ones I initially 
>>> needed, and your documentation has errors for BOTH of them. I'll expect 
>>> then to find more errors in your documentation, and will report them to you 
>>> as I find them.
>>> 
>>> Feedback confirming the errors will be appreciated.
>>> 
>>> Paul Nahay
>>> pna...@sprynet.com 





Re: Errors in your Documentation

2016-03-22 Thread Andy LoPresto
Thanks Paul.

Bugs and feature requests can be submitted here [1] which will ensure they are 
seen by the entire team/community. The most helpful reports include screenshots 
when applicable, current system description, and potential use cases or unit 
tests to verify issue resolution.

Very quickly, you can accomplish a “non-edit mode” by assigning “ROLE_MONITOR” 
to a user in the authorized-users.xml configuration file, but currently a 
feature whereby a single user can switch back and forth between those two modes 
instantaneously does not exist. Can you describe the use case where you see 
this being valuable? Are you having trouble and accidentally modifying items on 
the canvas?

[1] 
https://issues.apache.org/jira/browse/NIFI/?selectedTab=com.atlassian.jira.jira-projects-plugin:summary-panel
 


Andy LoPresto
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

> On Mar 22, 2016, at 4:55 PM, Paul Nahay  wrote:
> 
> Great.
> 
> I can tell you a list of things I think need to be improved in NiFi. The most 
> important is having two “modes” that one is always in, an “edit mode”, which 
> is basically what you have 100% of the time now, and a “non-edit mode”, where 
> the user cannot move things (inadvertently) around on the canvas.
> 
> Oh, and you desperately need an “undo”.
> 
> Paul
> 
> From: Andy LoPresto [mailto:alopresto.apa...@gmail.com]
> Sent: Tuesday, March 22, 2016 7:51 PM
> To: dev@nifi.apache.org; Paul Nahay
> Cc: Jonathan Wood
> Subject: Re: Errors in your Documentation
> 
> Thanks Paul. We always welcome feedback that helps us improve the product and 
> documentation. I can’t promise those documentation fixes will be released in 
> 0.6.0 but we will try to get them out as soon as possible.
> 
> Andy LoPresto
> alopresto.apa...@gmail.com 
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
> 
>> On Mar 22, 2016, at 5:52 AM, Paul Nahay wrote:
>> 
>> I'm looking at:
>> 
>> https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html 
>> 
>> 
>> 
>> isEmpty
>> Description: The isEmpty function returns true if the Subject is null or 
>> contains only white-space (new line, carriage return, space, tab), false 
>> otherwise.
>> 
>> This logically implies that isEmpty returns FALSE if the Subject contains NO 
>> CHARACTERS AT ALL (not even white-space). This makes no sense at all.
>> 
>> 
>> allAttributes
>> Description: Checks to see if any of the given attributes, match the given 
>> condition.
>> 
>> Hopefully you actually mean "all", not "any".
>> 
>> What's funny here is that THESE TWO functions were the ones I initially 
>> needed, and your documentation has errors for BOTH of them. I'll expect then 
>> to find more errors in your documentation, and will report them to you as I 
>> find them.
>> 
>> Feedback confirming the errors will be appreciated.
>> 
>> Paul Nahay
>> pna...@sprynet.com 




RE: Errors in your Documentation

2016-03-22 Thread Paul Nahay
Great.

 

I can tell you a list of things I think need to be improved in NiFi. The most 
important is having two “modes” that one is always in, an “edit mode”, which is 
basically what you have 100% of the time now, and a “non-edit mode”, where the 
user cannot move things (inadvertently) around on the canvas.

 

Oh, and you desperately need an “undo”.

 

Paul

 

From: Andy LoPresto [mailto:alopresto.apa...@gmail.com] 
Sent: Tuesday, March 22, 2016 7:51 PM
To: dev@nifi.apache.org; Paul Nahay
Cc: Jonathan Wood
Subject: Re: Errors in your Documentation

 

Thanks Paul. We always welcome feedback that helps us improve the product and 
documentation. I can’t promise those documentation fixes will be released in 
0.6.0 but we will try to get them out as soon as possible. 

 

Andy LoPresto

alopresto.apa...@gmail.com

PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

 

On Mar 22, 2016, at 5:52 AM, Paul Nahay  wrote:

 

I'm looking at:

https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html




isEmpty

Description: The isEmpty function returns true if the Subject is null or 
contains only white-space (new line, carriage return, space, tab), false 
otherwise.

This logically implies that isEmpty returns FALSE if the Subject contains NO 
CHARACTERS AT ALL (not even white-space). This makes no sense at all.




allAttributes

Description: Checks to see if any of the given attributes, match the given 
condition.

Hopefully you actually mean "all", not "any".

What's funny here is that THESE TWO functions were the ones I initially needed, 
and your documentation has errors for BOTH of them. I'll expect then to find 
more errors in your documentation, and will report them to you as I find them. 

Feedback confirming the errors will be appreciated.

Paul Nahay
pna...@sprynet.com

 



Re: Errors in your Documentation

2016-03-22 Thread Andy LoPresto
Thanks Paul. We always welcome feedback that helps us improve the product and 
documentation. I can’t promise those documentation fixes will be released in 
0.6.0 but we will try to get them out as soon as possible.

Andy LoPresto
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

> On Mar 22, 2016, at 5:52 AM, Paul Nahay  wrote:
> 
> I'm looking at:
> 
> https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html
> 
>> isEmpty
> Description: The isEmpty function returns true if the Subject is null or 
> contains only white-space (new line, carriage return, space, tab), false 
> otherwise.
> 
> This logically implies that isEmpty returns FALSE if the Subject contains NO 
> CHARACTERS AT ALL (not even white-space). This makes no sense at all.
> 
>> allAttributes
> Description: Checks to see if any of the given attributes, match the given 
> condition.
> 
> Hopefully you actually mean "all", not "any".
> 
> What's funny here is that THESE TWO functions were the ones I initially 
> needed, and your documentation has errors for BOTH of them. I'll expect then 
> to find more errors in your documentation, and will report them to you as I 
> find them.
> 
> Feedback confirming the errors will be appreciated.
> 
> Paul Nahay
> pna...@sprynet.com





[GitHub] nifi pull request: NIFI-1645 refactored PutKafka

2016-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/295#discussion_r57087522
  
--- Diff: nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/main/java/org/apache/nifi/processors/kafka/PutKafka.java ---
@@ -371,11 +371,12 @@ private FlowFile cleanUpFlowFileIfNecessary(FlowFile flowFile, ProcessSession se
     /**
      *
      */
-    private Object determinePartition(SplittableMessageContext messageContext, ProcessContext context, FlowFile flowFile) {
+    private Integer determinePartition(SplittableMessageContext messageContext, ProcessContext context,
+            FlowFile flowFile) {
         String partitionStrategy = context.getProperty(PARTITION_STRATEGY).getValue();
-        String partitionValue = null;
+        Integer partitionValue = null;
         if (partitionStrategy.equalsIgnoreCase(USER_DEFINED_PARTITIONING.getValue())) {
-            partitionValue = context.getProperty(PARTITION).evaluateAttributeExpressions(flowFile).getValue();
+            partitionValue = Integer.parseInt(context.getProperty(PARTITION).evaluateAttributeExpressions(flowFile).getValue());
--- End diff --

We should probably wrap the Integer.parseInt() in a try/catch and if an 
Exception is thrown return null?
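The suggestion above could look something like the following sketch; the helper name is illustrative, not from the actual PR.

```java
// Sketch of the suggestion above: wrap Integer.parseInt() so a malformed
// (or missing) partition value yields null instead of an exception.
// The helper name is illustrative, not from the actual PR.
public class SafeParse {

    static Integer parsePartitionOrNull(String value) {
        try {
            return Integer.parseInt(value);
        } catch (NumberFormatException e) {
            return null; // caller can fall back to default partitioning
        }
    }

    public static void main(String[] args) {
        System.out.println(parsePartitionOrNull("3"));    // 3
        System.out.println(parsePartitionOrNull("abc"));  // null
        System.out.println(parsePartitionOrNull(null));   // null
    }
}
```

Note that Integer.parseInt also throws NumberFormatException for a null argument, so the null case is covered by the same catch.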




[GitHub] nifi pull request: NIFI-1563: Federate requests and merge response...

2016-03-22 Thread mcgilman
Github user mcgilman commented on the pull request:

https://github.com/apache/nifi/pull/294#issuecomment-23951
  
@markap14 Proposed commit for my comments [1]. If this looks good to you, 
I'll include them in your PR.

[1] 
https://github.com/mcgilman/nifi/commit/9796e7620cb064653293d8bc0b2293b8b063f3b7




[GitHub] nifi pull request: NIFI-1420 Fixing minor bugs in GetSplunk

2016-03-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/299




[GitHub] nifi pull request: NIFI-1420 Fixing minor bugs in GetSplunk

2016-03-22 Thread mcgilman
Github user mcgilman commented on the pull request:

https://github.com/apache/nifi/pull/299#issuecomment-199973350
  
Build and contrib check is good. Verified functionality with time zone on 
formatted date string, new time field strategy, state clearing issue, and start 
time to eliminate possible data duplication. +1




[GitHub] nifi pull request: NIFI-1666: Fixed bug with EL evaluation in PutE...

2016-03-22 Thread mattyb149
Github user mattyb149 closed the pull request at:

https://github.com/apache/nifi/pull/298




[GitHub] nifi pull request: NIFI-1666: Fixed bug with EL evaluation in PutE...

2016-03-22 Thread JPercivall
Github user JPercivall commented on the pull request:

https://github.com/apache/nifi/pull/298#issuecomment-199960791
  
+1

Reviewed the code, did a contrib check build and verified the functionality 
in a NiFi instance against ES 2.1.1.




[GitHub] nifi pull request: NIFI-1645 refactored PutKafka

2016-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/295#discussion_r57044106
  
--- Diff: 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/main/java/org/apache/nifi/processors/kafka/Partitioners.java
 ---
@@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.kafka;
+
+import java.util.Random;
+
+import kafka.producer.Partitioner;
+import kafka.utils.VerifiableProperties;
+
+/**
+ * Collection of implementation of common Kafka {@link Partitioner}s.
+ */
+final public class Partitioners {
+
+private Partitioners() {
+}
+/**
+ * {@link Partitioner} that implements 'round-robin' mechanism which evenly
+ * distributes load between all available partitions.
+ */
+public static class RoundRobinPartitioner implements Partitioner {
+private volatile int index;
--- End diff --

OK, I wasn't sure exactly how it was being accessed. Just wanted to 
double-check




[GitHub] nifi pull request: NIFI-1420 Fixing minor bugs in GetSplunk

2016-03-22 Thread mcgilman
Github user mcgilman commented on the pull request:

https://github.com/apache/nifi/pull/299#issuecomment-199951509
  
Reviewing...




[GitHub] nifi pull request: NIFI-1420 Fixing minor bugs in GetSplunk

2016-03-22 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/299

NIFI-1420 Fixing minor bugs in GetSplunk

- Adding a Time Zone property so the Managed time ranges use the provided 
time zone when formatting the date strings
- Adding a Time Field Strategy property to choose between searching event 
time or index time
- Making the next iteration use previousLastTime + 1 ms to avoid overlap
- Fixing bug where GetSplunk incorrectly cleared state on a restart of NiFi

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi NIFI-1420-GetSplunk-Issues

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/299.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #299


commit 3aed0ed8dbc46f4acb302f0cd63f7f9fde2c1713
Author: Bryan Bende 
Date:   2016-03-22T15:14:03Z

NIFI-1420 Fixing minor bugs in GetSplunk
- Adding a Time Zone property so the Managed time ranges use the provided 
time zone when formatting the date strings
- Adding a Time Field Strategy property to choose between searching event 
time or index time
- Making the next iteration use previousLastTime + 1 ms to avoid overlap
- Fixing bug where GetSplunk incorrectly cleared state on a restart of NiFi






[GitHub] nifi pull request: NIFI-1645 refactored PutKafka

2016-03-22 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/295#discussion_r57040700
  
--- Diff: 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/main/java/org/apache/nifi/processors/kafka/Partitioners.java
 ---
@@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.kafka;
+
+import java.util.Random;
+
+import kafka.producer.Partitioner;
+import kafka.utils.VerifiableProperties;
+
+/**
+ * Collection of implementation of common Kafka {@link Partitioner}s.
+ */
+final public class Partitioners {
+
+private Partitioners() {
+}
+/**
+ * {@link Partitioner} that implements 'round-robin' mechanism which 
evenly
+ * distributes load between all available partitions.
+ */
+public static class RoundRobinPartitioner implements Partitioner {
+private volatile int index;
--- End diff --

The use of volatile has two purposes, and in this case it's to ensure 
visibility between threads (basically, T1 sets the value and T2 must see it 
right after it's been set; without volatile that is actually not guaranteed. 
Think eventual consistency). It was never meant to enforce sequential 
distribution, since in a multi-threaded environment that is somewhat meaningless.

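For illustration only (a hypothetical sketch, not the NiFi code under review): volatile gives cross-thread visibility of the counter, but the increment itself is a read-modify-write; if atomic round-robin assignment were ever required, AtomicInteger would provide both properties in one primitive:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch. A volatile int guarantees that a write by one
// thread is visible to others, but `index++` on a volatile is still a
// non-atomic read-modify-write; AtomicInteger gives both visibility
// and atomicity.
public class RoundRobinSketch {
    private final AtomicInteger index = new AtomicInteger();

    public int partition(int numPartitions) {
        // getAndIncrement is atomic, so two concurrent callers can
        // never observe the same counter value
        return Math.floorMod(index.getAndIncrement(), numPartitions);
    }
}
```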



[GitHub] nifi pull request: NIFI-1664 Preferring System.nanoTime to System....

2016-03-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/297




[GitHub] nifi pull request: NIFI-1664 Preferring System.nanoTime to System....

2016-03-22 Thread apiri
Github user apiri commented on the pull request:

https://github.com/apache/nifi/pull/297#issuecomment-199945870
  
Great, thanks everyone for scoping this out.  I'll merge those exclusions 
in with my commit and get them off to master.




[GitHub] nifi pull request: NIFI-1664 Preferring System.nanoTime to System....

2016-03-22 Thread mcgilman
Github user mcgilman commented on the pull request:

https://github.com/apache/nifi/pull/297#issuecomment-199945136
  
+1

The build looks good in both OSX and Windows 10.




Re: Error at startup

2016-03-22 Thread Matt Gilman
Pierre,

The error message is really strange. It appears that a number of classes
did not compile correctly. Was there any unexpected output during the Maven
build? I just verified the build locally on OSX and Windows 10. On Windows
10 I copied the zip, extracted it, and executed run-nifi.bat without
additional configuration.

Matt

On Tue, Mar 22, 2016 at 10:47 AM, Pierre Villard <
pierre.villard...@gmail.com> wrote:

> I did a full build with Maven, took the generated zip
> (nifi-0.6.0-SNAPSHOT-bin.zip), unzipped it, and executed the run-nifi.bat.
> So it is a clean instance. As you said, the local-provider is correctly
> set:
>
> <local-provider>
> <id>local-provider</id>
> <class>
> org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider
> </class>
> <property name="Directory">./state/local</property>
> </local-provider>
>
>
> 2016-03-22 15:38 GMT+01:00 Matt Gilman :
>
> > Pierre,
> >
> > Are you attempting to upgrade an existing instance? If so, what version
> are
> > you coming from? I'm wondering if there is some configuration missing
> after
> > the upgrade. Are you able to start up the built assembly using default
> > configuration?
> >
> > In your <NIFI_HOME>/conf/state-management.xml can you verify the
> > configuration of the local-provider? By default, it's configured to
> > use
> >
> org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider
> > with a Directory property that points to ./state/local.
> >
> > I think the error in the logs is having trouble with that property.
> >
> > Matt
> >
> >
> > On Tue, Mar 22, 2016 at 10:16 AM, Pierre Villard <
> > pierre.villard...@gmail.com> wrote:
> >
> > > OK. Logs are here:
> > > https://raw.githubusercontent.com/pvillard31/share/master/nifi-app.log
> > >
> > > 2016-03-22 15:12 GMT+01:00 Matt Burgess :
> > >
> > > > I can't see them, perhaps the attachment is being removed. Can you
> > paste
> > > > the text from the logs into the email?
> > > >
> > > > Thanks,
> > > > Matt
> > > >
> > > > On Tue, Mar 22, 2016 at 10:10 AM, Pierre Villard <
> > > > pierre.villard...@gmail.com> wrote:
> > > >
> > > > > Erf that's strange, I do see logs from my side.
> > > > > Is it better?
> > > > >
> > > > > 2016-03-22 15:06 GMT+01:00 Oleg Zhurakousky <
> > > > ozhurakou...@hortonworks.com>
> > > > > :
> > > > >
> > > > >> Pierre, no logs ;)
> > > > >>
> > > > >> > On Mar 22, 2016, at 10:03 AM, Pierre Villard <
> > > > >> pierre.villard...@gmail.com> wrote:
> > > > >> >
> > > > >> > Hi,
> > > > >> >
> > > > >> > I updated my local checkout to current master and did a
> successful
> > > > >> maven build. When trying to start generated binaries, I have a
> bunch
> > > of
> > > > >> errors and NIFI does not start. See attached logs.
> > > > >> >
> > > > >> > Does someone experience the same issue?
> > > > >> >
> > > > >> > Thanks,
> > > > >> > Pierre
> > > > >> >
> > > > >>
> > > > >>
> > > > >
> > > >
> > >
> >
>


[GitHub] nifi pull request: NIFI-1666: Fixed bug with EL evaluation in PutE...

2016-03-22 Thread mattyb149
GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/298

NIFI-1666: Fixed bug with EL evaluation in PutElasticsearch processor



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-1666

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/298.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #298


commit 22768c2f8df0de9aa90496e21f5645f6a0741000
Author: Matt Burgess 
Date:   2016-03-22T18:00:25Z

NIFI-1666: Fixed bug with EL evaluation in PutElasticsearch processor






[GitHub] nifi pull request: NIFI-1664 Preferring System.nanoTime to System....

2016-03-22 Thread JPercivall
Github user JPercivall commented on the pull request:

https://github.com/apache/nifi/pull/297#issuecomment-199939742
  
+1

Reviewed the code and did a contrib check build on Windows 8, looks good.




Re: Closing in on the Apache NiFi 0.6.0 release

2016-03-22 Thread Matt Burgess
All,

We found an issue with the PutElasticsearch processor as well, it doesn't
evaluate EL expressions using the flow file attributes for the Index and
Document Type properties. This is a simple fix, I've written it up as
NIFI-1666 [1] and will have a PR very shortly. I'd like to get this into
0.6.0 as well.

Thanks,
Matt

[1] https://issues.apache.org/jira/browse/NIFI-1666

On Tue, Mar 22, 2016 at 1:18 PM, Bryan Bende  wrote:

> All,
>
> While testing the original RC I came across a couple of issues with the new
> GetSplunk processor.
>
> The first issue relates to being able to specify the timezone through a
> processor property, so that the timezone used for searching will match
> Splunk's timezone.
> The second is that GetSplunk incorrectly clears its state when NiFi
> starts up, causing it to pull data it has already pulled.
>
> I'd like to re-open NIFI-1420 and submit fixes for these problems before we
> make the next RC. I should be able to do this shortly.
>
> Thanks,
>
> Bryan
>
> On Tue, Mar 22, 2016 at 11:50 AM, Aldrin Piri 
> wrote:
>
> > Joe,
> >
> > Looking through the associated tickets, both sound like worthwhile
> > additions and can hold off until those items get through reviews.
> >
> > --Aldrin
> >
> > On Tue, Mar 22, 2016 at 11:47 AM, Joe Witt  wrote:
> >
> > > Aldrin,
> > >
> > > NIFI-1665 appears to correct a problematic behavior when pulling from
> > > Kafka and when timeouts can occur.  Definitely think we should get
> > > this in the build.  I also see that NIFI-1645 is up and given the
> > > trouble that is causing for use of delimiter function we should engage
> > > on this.
> > >
> > > Since you're working the windows build issue and these are in play do
> > > you mind waiting a bit before sending the new RC ?
> > >
> > > Thanks
> > > Joe
> > >
> > > On Mon, Mar 21, 2016 at 1:42 PM, Aldrin Piri 
> > wrote:
> > > > All,
> > > >
> > > > It looks like the last ticket for 0.6.0 has been merged and resolved.
> > > >
> > > > I will begin the RC process shortly working off of commit
> > > > 736896246cf021dbed31d4eb1e22e0755e4705f0 [1] [2].
> > > >
> > > > [1]
> > > >
> > >
> >
> https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=commit;h=736896246cf021dbed31d4eb1e22e0755e4705f0
> > > > [2]
> > > >
> > >
> >
> https://github.com/apache/nifi/commit/736896246cf021dbed31d4eb1e22e0755e4705f0
> > > >
> > > > On Mon, Mar 21, 2016 at 1:48 AM, Tony Kurc  wrote:
> > > >
> > > >> The Locale issue was reviewed, confirmed as fixed by reporter and
> > merged
> > > >> in.
> > > >>
> > > >> On Sun, Mar 20, 2016 at 10:35 PM, Joe Witt 
> > wrote:
> > > >>
> > > >> > Team,
> > > >> >
> > > >> > There are a couple finishing touches PRs to fix a big defect in
> > > >> > SplitText for certain input types, improve locale handling and
> test
> > > >> > behavior for Kit bundle, and to clean up content viewing from
> > > >> > connections.
> > > >> >
> > > >> > Getting good input on findings folks have so please keep it coming
> > as
> > > >> > that helps ensure a solid/healthy RC.
> > > >> >
> > > >> > Thanks
> > > >> > Joe
> > > >> >
> > > >> > On Sat, Mar 19, 2016 at 6:21 PM, Tony Kurc 
> > wrote:
> > > >> > > Recommend https://issues.apache.org/jira/browse/NIFI-1651 be
> > > included
> > > >> in
> > > >> > > 0.6.0
> > > >> > >
> > > >> > > On Wed, Mar 16, 2016 at 4:08 PM, Joe Witt 
> > > wrote:
> > > >> > >
> > > >> > >> Team,
> > > >> > >>
> > > >> > >> Ok sooo close.  We have 5 tickets remaining.
> > > >> > >>
> > > >> > >> - Additional functionality/cleanup for SplitText [1]
> > > >> > >> [status] Still in discussions. Recommend we move this change to
> > > 0.7.0.
> > > >> > >> Solid effort on both code contributor and reviewer side but
> this
> > > is a
> > > >> > >> tricky one.
> > > >> > >>
> > > >> > >> - Support Kerberos based authentication to REST API [2]
> > > >> > >> [status] PR is in. Reviewing and PR tweaking appears active.
> > Looks
> > > >> > >> quite close and comments indicate great results.
> > > >> > >>
> > > >> > >> - Add Kerberos support to HBase processors [3]
> > > >> > >> [status] Patch in. Under review.  Running on live test system
> > with
> > > >> > >> great results.
> > > >> > >>
> > > >> > >> - Add support for Spring Context loaded processors (Spring
> > > >> > >> Integrations, Camel, ...) [4]
> > > >> > >> [status] Appears ready. Getting review feedback.
> > > >> > >>
> > > >> > >> - Zookeeper interaction for NiFI state management should limit
> > > state
> > > >> to
> > > >> > >> 1MB [6]
> > > >> > >> [status] Patch is in and review under way.  Looks close.
> > > >> > >>
> > > >> > >> [1] https://issues.apache.org/jira/browse/NIFI-1118
> > > >> > >> [2] https://issues.apache.org/jira/browse/NIFI-1274
> > > >> > >> [3] https://issues.apache.org/jira/browse/NIFI-1488
> > > >> > >> [4] 

[GitHub] nifi pull request: NIFI-1664 Preferring System.nanoTime to System....

2016-03-22 Thread JPercivall
Github user JPercivall commented on the pull request:

https://github.com/apache/nifi/pull/297#issuecomment-199936658
  
With the added pom exclusions the build passes




Re: NiFi-1660

2016-03-22 Thread McDermott, Chris Kevin (MSDU - STaTS/StorefrontRemote)
Hi Aldrin,

I agree that an enhancement to the existing processor could fulfill the same 
goal.  My thinking was that adding to the EL would essentially add the 
functionality to all processors that support the EL.  Of course, adding to 
the EL does not mean that the existing processor could not be enhanced.  In 
fact, I was also planning on doing that as a follow-up.  

I'm still planning on doing the EL extension; it's nearly done.  But if the pull 
request isn't ultimately accepted, that's fine, as it's a good exercise for me.

Cheers,
Chris





On 3/22/16, 12:51 PM, "Aldrin Piri"  wrote:

>Chris,
>
>Awesome that you are ready to dive in and get this fixed up.  The
>functionality is certainly one that would be helpful.
>I do find myself a bit torn on whether the inclusion of this as EL is
>preferred instead of the extension of EvaluateJsonPath as suggested
>in NIFI-1567 [1].  Are there use cases that would not be fulfilled via
>enhancements to the existing processor that I might be overlooking?
>
>Thanks!
>
>[1] https://issues.apache.org/jira/browse/NIFI-1567
>
>On Tue, Mar 22, 2016 at 12:45 PM, Matt Burgess  wrote:
>
>> Chris,
>>
>> Great to hear! For the following steps, I'm assuming it will be a
>> one-argument function called "jsonPath" that takes a String containing a
>> JSON Path expression to be evaluated on the "subject" of the function. So I
>> am picturing it used like:
>>
>> ${ my.json.attribute:jsonPath("$.path.to.my.data") }
>>
>> To add such a function to EL, you'll likely want to do something like the
>> following:
>>
>> 1) Add a token for the function name to the Lexer:
>>
>> https://github.com/apache/nifi/blob/master/nifi-commons/nifi-expression-language/src/main/antlr3/org/apache/nifi/attribute/expression/language/antlr/AttributeExpressionLexer.g#L120
>>
>> 2) Add the token to the oneArgString rule:
>>
>> https://github.com/apache/nifi/blob/master/nifi-commons/nifi-expression-language/src/main/antlr3/org/apache/nifi/attribute/expression/language/antlr/AttributeExpressionParser.g#L77
>>
>> 3) Add a JsonPathEvaluator class (or whatever it's called) to the functions
>> package:
>>
>> https://github.com/apache/nifi/tree/master/nifi-commons/nifi-expression-language/src/main/java/org/apache/nifi/attribute/expression/language/evaluation/functions
>>
>> 4) Add a case to Query.buildFunctionEvaluator() to create and use a new
>> JsonPathEvaluator:
>>
>> https://github.com/apache/nifi/blob/master/nifi-commons/nifi-expression-language/src/main/java/org/apache/nifi/attribute/expression/language/Query.java#L1061
>>
>> 5) Add unit test(s) to TestQuery to test your function:
>>
>> https://github.com/apache/nifi/blob/master/nifi-commons/nifi-expression-language/src/test/java/org/apache/nifi/attribute/expression/language/TestQuery.java
>>
>> Also whatever dependencies you bring in (Jayway, e.g.) will need to be
>> added to the EL module's POM:
>>
>> https://github.com/apache/nifi/blob/master/nifi-commons/nifi-expression-language/pom.xml
>> .
>> If a parent POM has declared a version already (such as the NiFi parent POM
>> declaring jayway 2.0.0 at present), you may want to keep that unless you
>> need to override for some reason.
>>
>> Looking forward to your contribution, please let me/us know if you run into
>> any trouble or have any questions.
>>
>> Cheers,
>> Matt
>>
>>
>> On Tue, Mar 22, 2016 at 10:59 AM, McDermott, Chris Kevin (MSDU -
>> STaTS/StorefrontRemote)  wrote:
>>
>> > Hi folks,
>> >
>> > I’m a newbie NiFi user but long time Java developer.  I’ve entered a Jira
>> > issue to extend the expression language to add a function to evaluate a
>> > Json path against the subject.  Looking at the code I feel this is very
>> > doable, and I am up for task so I’m planning on tackling it.  I’ve read
>> the
>> > developers guide, etc.  Please, any advice and direction is most welcome.
>> >
>> > Thanks,
>> >
>> > Chris McDermott.
>> >
>>


Re: Closing in on the Apache NiFi 0.6.0 release

2016-03-22 Thread Bryan Bende
All,

While testing the original RC I came across a couple of issues with the new
GetSplunk processor.

The first issue relates to being able to specify the timezone through a
processor property, so that the timezone used for searching will match
Splunk's timezone.
The second is that GetSplunk incorrectly clears its state when NiFi
starts up, causing it to pull data it has already pulled.

I'd like to re-open NIFI-1420 and submit fixes for these problems before we
make the next RC. I should be able to do this shortly.

Thanks,

Bryan

On Tue, Mar 22, 2016 at 11:50 AM, Aldrin Piri  wrote:

> Joe,
>
> Looking through the associated tickets, both sound like worthwhile
> additions and can hold off until those items get through reviews.
>
> --Aldrin
>
> On Tue, Mar 22, 2016 at 11:47 AM, Joe Witt  wrote:
>
> > Aldrin,
> >
> > NIFI-1665 appears to correct a problematic behavior when pulling from
> > Kafka and when timeouts can occur.  Definitely think we should get
> > this in the build.  I also see that NIFI-1645 is up and given the
> > trouble that is causing for use of delimiter function we should engage
> > on this.
> >
> > Since you're working the windows build issue and these are in play do
> > you mind waiting a bit before sending the new RC ?
> >
> > Thanks
> > Joe
> >
> > On Mon, Mar 21, 2016 at 1:42 PM, Aldrin Piri 
> wrote:
> > > All,
> > >
> > > It looks like the last ticket for 0.6.0 has been merged and resolved.
> > >
> > > I will begin the RC process shortly working off of commit
> > > 736896246cf021dbed31d4eb1e22e0755e4705f0 [1] [2].
> > >
> > > [1]
> > >
> >
> https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=commit;h=736896246cf021dbed31d4eb1e22e0755e4705f0
> > > [2]
> > >
> >
> https://github.com/apache/nifi/commit/736896246cf021dbed31d4eb1e22e0755e4705f0
> > >
> > > On Mon, Mar 21, 2016 at 1:48 AM, Tony Kurc  wrote:
> > >
> > >> The Locale issue was reviewed, confirmed as fixed by reporter and
> merged
> > >> in.
> > >>
> > >> On Sun, Mar 20, 2016 at 10:35 PM, Joe Witt 
> wrote:
> > >>
> > >> > Team,
> > >> >
> > >> > There are a couple finishing touches PRs to fix a big defect in
> > >> > SplitText for certain input types, improve locale handling and test
> > >> > behavior for Kit bundle, and to clean up content viewing from
> > >> > connections.
> > >> >
> > >> > Getting good input on findings folks have so please keep it coming
> as
> > >> > that helps ensure a solid/healthy RC.
> > >> >
> > >> > Thanks
> > >> > Joe
> > >> >
> > >> > On Sat, Mar 19, 2016 at 6:21 PM, Tony Kurc 
> wrote:
> > >> > > Recommend https://issues.apache.org/jira/browse/NIFI-1651 be
> > included
> > >> in
> > >> > > 0.6.0
> > >> > >
> > >> > > On Wed, Mar 16, 2016 at 4:08 PM, Joe Witt 
> > wrote:
> > >> > >
> > >> > >> Team,
> > >> > >>
> > >> > >> Ok sooo close.  We have 5 tickets remaining.
> > >> > >>
> > >> > >> - Additional functionality/cleanup for SplitText [1]
> > >> > >> [status] Still in discussions. Recommend we move this change to
> > 0.7.0.
> > >> > >> Solid effort on both code contributor and reviewer side but this
> > is a
> > >> > >> tricky one.
> > >> > >>
> > >> > >> - Support Kerberos based authentication to REST API [2]
> > >> > >> [status] PR is in. Reviewing and PR tweaking appears active.
> Looks
> > >> > >> quite close and comments indicate great results.
> > >> > >>
> > >> > >> - Add Kerberos support to HBase processors [3]
> > >> > >> [status] Patch in. Under review.  Running on live test system
> with
> > >> > >> great results.
> > >> > >>
> > >> > >> - Add support for Spring Context loaded processors (Spring
> > >> > >> Integrations, Camel, ...) [4]
> > >> > >> [status] Appears ready. Getting review feedback.
> > >> > >>
> > >> > >> - Zookeeper interaction for NiFI state management should limit
> > state
> > >> to
> > >> > >> 1MB [6]
> > >> > >> [status] Patch is in and review under way.  Looks close.
> > >> > >>
> > >> > >> [1] https://issues.apache.org/jira/browse/NIFI-1118
> > >> > >> [2] https://issues.apache.org/jira/browse/NIFI-1274
> > >> > >> [3] https://issues.apache.org/jira/browse/NIFI-1488
> > >> > >> [4] https://issues.apache.org/jira/browse/NIFI-1571
> > >> > >> [5] https://issues.apache.org/jira/browse/NIFI-1626
> > >> > >>
> > >> > >> Thanks
> > >> > >>
> > >> > >> On Wed, Mar 16, 2016 at 4:04 PM, Joe Witt 
> > wrote:
> > >> > >> > Team,
> > >> > >> >
> > >> > >> > Ok sooo close.  We have 6 tickets remaining.
> > >> > >> >
> > >> > >> > - Additional functionality/cleanup for SplitText [1]
> > >> > >> > [status] Still in discussions. Recommend we move this change to
> > >> 0.7.0.
> > >> > >> > Solid effort on both code contributor and reviewer side but
> this
> > is
> > >> a
> > >> > >> > tricky one.
> > >> > >> >
> > >> > >> > - Support Kerberos based authentication to REST API [2]
> > >> > >> > [status] PR 

[GitHub] nifi pull request: NIFI-1645 refactored PutKafka

2016-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/295#discussion_r57026907
  
--- Diff: 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/main/java/org/apache/nifi/processors/kafka/PutKafka.java
 ---
@@ -446,444 +336,136 @@ protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(final String
 .build();
 }
 
-
 @Override
-public void onTrigger(final ProcessContext context, final 
ProcessSessionFactory sessionFactory) throws ProcessException {
-FlowFileMessageBatch batch;
-while ((batch = completeBatches.poll()) != null) {
-batch.completeSession();
-}
+protected Collection customValidate(final 
ValidationContext validationContext) {
+final List results = new ArrayList<>();
 
-final ProcessSession session = sessionFactory.createSession();
-final FlowFile flowFile = session.get();
-if (flowFile != null){
-Future consumptionFuture = this.executor.submit(new 
Callable() {
-@Override
-public Void call() throws Exception {
-doOnTrigger(context, session, flowFile);
-return null;
-}
-});
-try {
-consumptionFuture.get(this.deadlockTimeout, 
TimeUnit.MILLISECONDS);
-} catch (InterruptedException e) {
-consumptionFuture.cancel(true);
-Thread.currentThread().interrupt();
-getLogger().warn("Interrupted while sending messages", e);
-} catch (ExecutionException e) {
-throw new IllegalStateException(e);
-} catch (TimeoutException e) {
-consumptionFuture.cancel(true);
-getLogger().warn("Timed out after " + this.deadlockTimeout 
+ " milliseconds while sending messages", e);
-}
-} else {
-context.yield();
+final String partitionStrategy = 
validationContext.getProperty(PARTITION_STRATEGY).getValue();
+if 
(partitionStrategy.equalsIgnoreCase(USER_DEFINED_PARTITIONING.getValue())
+&& !validationContext.getProperty(PARTITION).isSet()) {
+results.add(new 
ValidationResult.Builder().subject("Partition").valid(false)
+.explanation("The <Partition> property must be set 
when configured to use the User-Defined Partitioning Strategy")
+.build());
 }
+return results;
 }
 
-private void doOnTrigger(final ProcessContext context, ProcessSession 
session, final FlowFile flowFile) throws ProcessException {
-final String topic = 
context.getProperty(TOPIC).evaluateAttributeExpressions(flowFile).getValue();
-final String key = 
context.getProperty(KEY).evaluateAttributeExpressions(flowFile).getValue();
-final byte[] keyBytes = key == null ? null : 
key.getBytes(StandardCharsets.UTF_8);
-String delimiter = 
context.getProperty(MESSAGE_DELIMITER).evaluateAttributeExpressions(flowFile).getValue();
-if (delimiter != null) {
-delimiter = delimiter.replace("\\n", "\n").replace("\\r", 
"\r").replace("\\t", "\t");
-}
-
-final Producer producer = getProducer();
-
-if (delimiter == null) {
-// Send the entire FlowFile as a single message.
-final byte[] value = new byte[(int) flowFile.getSize()];
-session.read(flowFile, new InputStreamCallback() {
-@Override
-public void process(final InputStream in) throws 
IOException {
-StreamUtils.fillBuffer(in, value);
-}
-});
-
-final Integer partition;
-try {
-partition = getPartition(context, flowFile, topic);
-} catch (final Exception e) {
-getLogger().error("Failed to obtain a partition for {} due 
to {}", new Object[] {flowFile, e});
-session.transfer(session.penalize(flowFile), REL_FAILURE);
-session.commit();
-return;
-}
-
-final ProducerRecord producerRecord = new 
ProducerRecord<>(topic, partition, keyBytes, value);
-
-final FlowFileMessageBatch messageBatch = new 
FlowFileMessageBatch(session, flowFile, topic);
-messageBatch.setNumMessages(1);
-activeBatches.add(messageBatch);
-
-try {
-producer.send(producerRecord, new Callback() {
-@Override
- 

Re: NiFi-1660

2016-03-22 Thread Aldrin Piri
Chris,

Awesome that you are ready to dive in and get this fixed up.  The
functionality is certainly one that would be helpful.
I do find myself a bit torn on whether the inclusion of this as EL is
preferred instead of the extension of EvaluateJsonPath as suggested
in NIFI-1567 [1].  Are there use cases that would not be fulfilled via
enhancements to the existing processor that I might be overlooking?

Thanks!

[1] https://issues.apache.org/jira/browse/NIFI-1567

On Tue, Mar 22, 2016 at 12:45 PM, Matt Burgess  wrote:

> Chris,
>
> Great to hear! For the following steps, I'm assuming it will be a
> one-argument function called "jsonPath" that takes a String containing a
> JSON Path expression to be evaluated on the "subject" of the function. So I
> am picturing it used like:
>
> ${ my.json.attribute:jsonPath("$.path.to.my.data") }
>
> To add such a function to EL, you'll likely want to do something like the
> following:
>
> 1) Add a token for the function name to the Lexer:
>
> https://github.com/apache/nifi/blob/master/nifi-commons/nifi-expression-language/src/main/antlr3/org/apache/nifi/attribute/expression/language/antlr/AttributeExpressionLexer.g#L120
>
> 2) Add the token to the oneArgString rule:
>
> https://github.com/apache/nifi/blob/master/nifi-commons/nifi-expression-language/src/main/antlr3/org/apache/nifi/attribute/expression/language/antlr/AttributeExpressionParser.g#L77
>
> 3) Add a JsonPathEvaluator class (or whatever it's called) to the functions
> package:
>
> https://github.com/apache/nifi/tree/master/nifi-commons/nifi-expression-language/src/main/java/org/apache/nifi/attribute/expression/language/evaluation/functions
>
> 4) Add a case to Query.buildFunctionEvaluator() to create and use a new
> JsonPathEvaluator:
>
> https://github.com/apache/nifi/blob/master/nifi-commons/nifi-expression-language/src/main/java/org/apache/nifi/attribute/expression/language/Query.java#L1061
>
> 5) Add unit test(s) to TestQuery to test your function:
>
> https://github.com/apache/nifi/blob/master/nifi-commons/nifi-expression-language/src/test/java/org/apache/nifi/attribute/expression/language/TestQuery.java
>
> Also whatever dependencies you bring in (Jayway, e.g.) will need to be
> added to the EL module's POM:
>
> https://github.com/apache/nifi/blob/master/nifi-commons/nifi-expression-language/pom.xml
> .
> If a parent POM has declared a version already (such as the NiFi parent POM
> declaring jayway 2.0.0 at present), you may want to keep that unless you
> need to override for some reason.
>
> Looking forward to your contribution, please let me/us know if you run into
> any trouble or have any questions.
>
> Cheers,
> Matt
>
>
> On Tue, Mar 22, 2016 at 10:59 AM, McDermott, Chris Kevin (MSDU -
> STaTS/StorefrontRemote)  wrote:
>
> > Hi folks,
> >
> > I’m a newbie NiFi user but long time Java developer.  I’ve entered a Jira
> > issue to extend the expression language to add a function to evaluate a
> > Json path against the subject.  Looking at the code I feel this is very
> > doable, and I am up for task so I’m planning on tackling it.  I’ve read
> the
> > developers guide, etc.  Please, any advice and direction is most welcome.
> >
> > Thanks,
> >
> > Chris McDermott.
> >
>


Re: NiFi-1660

2016-03-22 Thread Matt Burgess
Chris,

Great to hear! For the following steps, I'm assuming it will be a
one-argument function called "jsonPath" that takes a String containing a
JSON Path expression to be evaluated on the "subject" of the function. So I
am picturing it used like:

${ my.json.attribute:jsonPath("$.path.to.my.data") }

To add such a function to EL, you'll likely want to do something like the
following:

1) Add a token for the function name to the Lexer:
https://github.com/apache/nifi/blob/master/nifi-commons/nifi-expression-language/src/main/antlr3/org/apache/nifi/attribute/expression/language/antlr/AttributeExpressionLexer.g#L120

2) Add the token to the oneArgString rule:
https://github.com/apache/nifi/blob/master/nifi-commons/nifi-expression-language/src/main/antlr3/org/apache/nifi/attribute/expression/language/antlr/AttributeExpressionParser.g#L77

3) Add a JsonPathEvaluator class (or whatever it's called) to the functions
package:
https://github.com/apache/nifi/tree/master/nifi-commons/nifi-expression-language/src/main/java/org/apache/nifi/attribute/expression/language/evaluation/functions

4) Add a case to Query.buildFunctionEvaluator() to create and use a new
JsonPathEvaluator:
https://github.com/apache/nifi/blob/master/nifi-commons/nifi-expression-language/src/main/java/org/apache/nifi/attribute/expression/language/Query.java#L1061

5) Add unit test(s) to TestQuery to test your function:
https://github.com/apache/nifi/blob/master/nifi-commons/nifi-expression-language/src/test/java/org/apache/nifi/attribute/expression/language/TestQuery.java

Also whatever dependencies you bring in (Jayway, e.g.) will need to be
added to the EL module's POM:
https://github.com/apache/nifi/blob/master/nifi-commons/nifi-expression-language/pom.xml.
If a parent POM has declared a version already (such as the NiFi parent POM
declaring jayway 2.0.0 at present), you may want to keep that unless you
need to override for some reason.

Looking forward to your contribution, please let me/us know if you run into
any trouble or have any questions.

Cheers,
Matt


On Tue, Mar 22, 2016 at 10:59 AM, McDermott, Chris Kevin (MSDU -
STaTS/StorefrontRemote)  wrote:

> Hi folks,
>
> I’m a newbie NiFi user but long time Java developer.  I’ve entered a Jira
> issue to extend the expression language to add a function to evaluate a
> Json path against the subject.  Looking at the code I feel this is very
> doable, and I am up for task so I’m planning on tackling it.  I’ve read the
> developers guide, etc.  Please, any advice and direction is most welcome.
>
> Thanks,
>
> Chris McDermott.
>


[GitHub] nifi pull request: NIFI-1664 Preferring System.nanoTime to System....

2016-03-22 Thread apiri
Github user apiri commented on the pull request:

https://github.com/apache/nifi/pull/297#issuecomment-199897484
  
Not sure why this came up, but have included it in 
a2e679bbbc4f612a9fc61a51d47aba693789a44b.  If that checks out, I can squash 
this with the actual changes.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request: NIFI-1664 Preferring System.nanoTime to System....

2016-03-22 Thread mcgilman
Github user mcgilman commented on the pull request:

https://github.com/apache/nifi/pull/297#issuecomment-199894397
  
I am seeing the same contrib check failure that @JPercivall mentioned above, 
running Windows 10.

> Unapproved licenses:
> 
>   
C:/Users/gilman/Documents/sandbox/mcgilman/nifi/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/resources/TestEncryptContent/unsalted_128_raw.enc

My environment: 

> mvn -version
> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 
2015-11-10T11:41:47-05:00)
> Maven home: C:\Program Files\apache-maven-3.3.9\bin\..
> Java version: 1.8.0_73, vendor: Oracle Corporation
> Java home: C:\Program Files\Java\jdk1.8.0_73\jre
> Default locale: en_US, platform encoding: Cp1252
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "dos"




[GitHub] nifi pull request: NIFI-1645 refactored PutKafka

2016-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/295#discussion_r57019067
  
--- Diff: 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/main/java/org/apache/nifi/processors/kafka/Partitioners.java
 ---
@@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.kafka;
+
+import java.util.Random;
+
+import kafka.producer.Partitioner;
+import kafka.utils.VerifiableProperties;
+
+/**
+ * Collection of implementation of common Kafka {@link Partitioner}s.
+ */
+final public class Partitioners {
+
+private Partitioners() {
+}
+/**
+ * {@link Partitioner} that implements 'round-robin' mechanism which 
evenly
+ * distributes load between all available partitions.
+ */
+public static class RoundRobinPartitioner implements Partitioner {
+private volatile int index;
+
+public RoundRobinPartitioner(VerifiableProperties props) {
+}
+
+@Override
+public int partition(Object key, int numberOfPartitions) {
+int partitionIndex = this.next(numberOfPartitions);
+return partitionIndex;
+}
+
+private int next(int numberOfPartitions) {
+if (index == numberOfPartitions) {
+index = 0;
+}
+int indexToReturn = index++;
+return indexToReturn;
+}
+}
+
+/**
+ * {@link Partitioner} that implements 'random' mechanism which 
randomly
+ * distributes the load between all available partitions.
+ */
+public static class RandomPartitioner implements Partitioner {
+private final Random random;
+
+public RandomPartitioner(VerifiableProperties props) {
+this.random = new Random();
+}
+
+@Override
+public int partition(Object key, int numberOfPartitions) {
+return this.random.nextInt(numberOfPartitions);
+}
+}
+
+/**
+ * {@link Partitioner} that implements 'key hash' mechanism which
+ * distributes the load between all available partitions based on 
hashing
+ * the value of the key.
+ */
+public static class HashPartitioner implements Partitioner {
+public HashPartitioner(VerifiableProperties props) {
+}
+
+@Override
+public int partition(Object key, int numberOfPartitions) {
+return (key.hashCode() & Integer.MAX_VALUE) % 
numberOfPartitions;
--- End diff --

What is the logic behind `& Integer.MAX_VALUE`? Since `hashCode()` 
will return an int, doing a bit-wise AND with `Integer.MAX_VALUE` would just 
return the same result as `hashCode()` returned, no?

Also, `key` could be null. Is this going to cause a problem?
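For reference, the mask is only a no-op for non-negative hash codes; ANDing with `Integer.MAX_VALUE` (0x7FFFFFFF) clears the sign bit, which keeps the subsequent modulo from producing a negative partition index. A quick standalone check (illustrative only, not the PR code):

```java
public class MaskDemo {
    public static void main(String[] args) {
        int h = -7; // stands in for a negative hashCode()

        // Clearing the sign bit leaves non-negative ints unchanged,
        // but turns negative ints into non-negative ones.
        int masked = h & Integer.MAX_VALUE;

        System.out.println(masked);     // 2147483641
        System.out.println(masked % 4); // 1  -- a valid partition index
        System.out.println(h % 4);      // -3 -- raw modulo can go negative
    }
}
```

The null-key concern is separate and still valid: calling `hashCode()` on a null key would throw a `NullPointerException`.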




[GitHub] nifi pull request: NIFI-1645 refactored PutKafka

2016-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/295#discussion_r57018795
  
--- Diff: 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/main/java/org/apache/nifi/processors/kafka/Partitioners.java
 ---
@@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.kafka;
+
+import java.util.Random;
+
+import kafka.producer.Partitioner;
+import kafka.utils.VerifiableProperties;
+
+/**
+ * Collection of implementation of common Kafka {@link Partitioner}s.
+ */
+final public class Partitioners {
+
+private Partitioners() {
+}
+/**
+ * {@link Partitioner} that implements 'round-robin' mechanism which 
evenly
+ * distributes load between all available partitions.
+ */
+public static class RoundRobinPartitioner implements Partitioner {
+private volatile int index;
--- End diff --

The use of 'volatile' here implies that this is intended to be thread-safe. 
However, the next() method is not thread-safe. Specifically, Thread 1 could 
check `(index == numberOfPartitions)` and get `false`. Then Thread 2 could do 
the same thing. Then Thread 1 evaluates `indexToReturn = index++` resulting in 
index == numberOfPartitions; Thread 2 then evaluates `indexToReturn = index++` 
which results in indexToReturn == numberOfPartitions, which is larger than it 
should be.
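One common thread-safe alternative (an illustrative sketch, not the PR's code) replaces the volatile int with an `AtomicInteger`:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinIndex {
    private final AtomicInteger index = new AtomicInteger(0);

    // Thread-safe round-robin: getAndIncrement is atomic, and floorMod
    // keeps the result in [0, numberOfPartitions) even after the counter
    // wraps past Integer.MAX_VALUE to negative values.
    public int next(int numberOfPartitions) {
        return Math.floorMod(index.getAndIncrement(), numberOfPartitions);
    }

    public static void main(String[] args) {
        RoundRobinIndex rr = new RoundRobinIndex();
        for (int i = 0; i < 5; i++) {
            System.out.print(rr.next(3) + " "); // 0 1 2 0 1
        }
    }
}
```

This removes the check-then-act race described above, since the read and increment happen as one atomic operation.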




[GitHub] nifi pull request: NIFI-1645 refactored PutKafka

2016-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/295#discussion_r57017510
  
--- Diff: 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/main/java/org/apache/nifi/processors/kafka/KafkaPublisher.java
 ---
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.kafka;
+
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+import java.util.Scanner;
+
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ProcessorLog;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import kafka.javaapi.producer.Producer;
+import kafka.producer.KeyedMessage;
+import kafka.producer.Partitioner;
+import kafka.producer.ProducerConfig;
+
+/**
+ * Wrapper over {@link KafkaProducer} to assist {@link PutKafka} processor 
with
+ * sending content of {@link FlowFile}s to Kafka.
+ */
+public class KafkaPublisher implements AutoCloseable {
+
+private static final Logger logger = 
LoggerFactory.getLogger(KafkaPublisher.class);
+
+private final Producer producer;
+
+private ProcessorLog processLog;
+
+/**
+ * Creates an instance of this class as well as the instance of the
+ * corresponding Kafka {@link KafkaProducer} using provided Kafka
+ * configuration properties.
+ */
+KafkaPublisher(Properties kafkaProperties) {
+ProducerConfig producerConfig = new 
ProducerConfig(kafkaProperties);
+this.producer = new Producer<>(producerConfig);
+}
+
+/**
+ *
+ */
+void setProcessLog(ProcessorLog processLog) {
+this.processLog = processLog;
+}
+
+/**
+ * Publishes messages to Kafka topic. It supports three publishing
+ * mechanisms.
+ * 
+ * Sending the entire content stream as a single Kafka 
message.
+ * Splitting the incoming content stream into chunks and sending
+ * individual chunks as separate Kafka messages.
+ * Splitting the incoming content stream into chunks and sending 
only
+ * the chunks that have failed previously @see
+ * {@link SplittableMessageContext#getFailedSegments()}.
+ * 
+ * This method assumes content stream affinity where it is expected 
that the
+ * content stream that represents the same Kafka message(s) will 
remain the
+ * same across possible retries. This is required specifically for 
cases
+ * where delimiter is used and a single content stream may represent
+ * multiple Kafka messages. The failed segment list will keep the 
index of
+ * of each content stream segment that had failed to be sent to Kafka, 
so
+ * upon retry only the failed segments are sent.
+ *
+ * @param messageContext
+ *instance of {@link SplittableMessageContext} which hold
+ *context information about the message to be sent
+ * @param contentStream
+ *instance of open {@link InputStream} carrying the 
content of
+ *the message(s) to be send to Kafka
+ * @param partitionKey
+ *the value of the partition key. Only relevant is user 
wishes
+ *to provide a custom partition key instead of relying on
+ *variety of provided {@link Partitioner}(s)
+ * @return The list containing the failed segment indexes for messages 
that
+ * failed to be sent to Kafka.
+ */
+List publish(SplittableMessageContext messageContext, 
InputStream contentStream, Object partitionKey) {
+List prevFailedSegmentIndexes = 
messageContext.getFailedSegments();
+List failedSegments = new ArrayList<>();
+int segmentCounter = 

[GitHub] nifi pull request: NIFI-1664 Preferring System.nanoTime to System....

2016-03-22 Thread apiri
Github user apiri commented on the pull request:

https://github.com/apache/nifi/pull/297#issuecomment-199890235
  
I can add this as part of the commit.  Did everything else check out okay 
though?




[GitHub] nifi pull request: NIFI-1614 File Identity Provider implementation

2016-03-22 Thread jvwing
Github user jvwing commented on a diff in the pull request:

https://github.com/apache/nifi/pull/267#discussion_r57017952
  
--- Diff: 
nifi-nar-bundles/nifi-iaa-providers-bundle/nifi-file-identity-provider/src/main/java/org/apache/nifi/authentication/file/FileIdentityProvider.java
 ---
@@ -0,0 +1,216 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.authentication.file;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import javax.xml.XMLConstants;
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBElement;
+import javax.xml.bind.JAXBException;
+import javax.xml.bind.Unmarshaller;
+import javax.xml.bind.ValidationEvent;
+import javax.xml.bind.ValidationEventHandler;
+import javax.xml.transform.stream.StreamSource;
+import javax.xml.validation.Schema;
+import javax.xml.validation.SchemaFactory;
+
+import org.apache.nifi.authentication.AuthenticationResponse;
+import org.apache.nifi.authentication.LoginCredentials;
+import org.apache.nifi.authentication.LoginIdentityProvider;
+import 
org.apache.nifi.authentication.LoginIdentityProviderConfigurationContext;
+import 
org.apache.nifi.authentication.LoginIdentityProviderInitializationContext;
+import org.apache.nifi.authentication.exception.IdentityAccessException;
+import 
org.apache.nifi.authentication.exception.InvalidLoginCredentialsException;
+import org.apache.nifi.authorization.exception.ProviderCreationException;
+import 
org.apache.nifi.authorization.exception.ProviderDestructionException;
+import org.apache.nifi.authentication.file.generated.UserCredentials;
+import org.apache.nifi.authentication.file.generated.UserCredentialsList;
+import org.apache.nifi.util.FormatUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
+import org.springframework.security.crypto.password.PasswordEncoder;
+
+
+/**
+ * Identity provider for simple username/password authentication backed by 
a local credentials file.  The credentials
+ * file contains usernames and password hashes in bcrypt format.  Any 
compatible bcrypt "2a" implementation may be used
+ * to populate the credentials file.
+ * 
+ * The XML format of the credentials file is as follows:
+ * 
+ * {@code
+ * 
+ * 
+ * 
+ * 
+ * 
+ * 
+ * }
+ * 
+ */
+public class FileIdentityProvider implements LoginIdentityProvider {
+
+static final String PROPERTY_CREDENTIALS_FILE = "Credentials File";
+static final String PROPERTY_EXPIRATION_PERIOD = "Authentication 
Expiration";
+
+private static final Logger logger = 
LoggerFactory.getLogger(FileIdentityProvider.class);
+private static final String CREDENTIALS_XSD = "/credentials.xsd";
+private static final String JAXB_GENERATED_PATH = 
"org.apache.nifi.authentication.file.generated";
+private static final JAXBContext JAXB_CONTEXT = 
initializeJaxbContext();
+
+private String issuer;
+private long expirationPeriodMilliseconds;
+private String credentialsFilePath;
+private PasswordEncoder passwordEncoder = new BCryptPasswordEncoder();
+private String identifier;
+
+private static JAXBContext initializeJaxbContext() {
+try {
+return JAXBContext.newInstance(JAXB_GENERATED_PATH,  
FileIdentityProvider.class.getClassLoader());
+} catch (JAXBException e) {
+throw new RuntimeException("Failed creating JAXBContext for " 
+ FileIdentityProvider.class.getCanonicalName());
+}
+}
+
+private static ValidationEventHandler defaultValidationEventHandler = 
new ValidationEventHandler() {
+@Override
+public boolean handleEvent(ValidationEvent event) {
+return false;

[GitHub] nifi pull request: NIFI-1664 Preferring System.nanoTime to System....

2016-03-22 Thread JPercivall
Github user JPercivall commented on the pull request:

https://github.com/apache/nifi/pull/297#issuecomment-199888509
  
A very weird error with contrib check that I am only getting on Windows 8: an 
unapproved license for:

nifi/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/resources/TestEncryptContent/unsalted_128_raw.enc

It seems this should be excluded in the POM for standard processors, but it 
isn't. What is more confusing is that neither Mac nor Travis fails this 
contrib check. Even more so, it looks like salted_128_raw.enc should fail as 
well, but it doesn't on any tested machine.
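If the file is indeed missing from the exclusions, the fix would be an entry like the following in the standard-processors POM (a sketch, assuming the usual apache-rat-plugin setup; the path is taken from the error message above):

```xml
<plugin>
    <groupId>org.apache.rat</groupId>
    <artifactId>apache-rat-plugin</artifactId>
    <configuration>
        <excludes combine.children="append">
            <!-- binary test fixture; carries no license header -->
            <exclude>src/test/resources/TestEncryptContent/unsalted_128_raw.enc</exclude>
        </excludes>
    </configuration>
</plugin>
```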




[GitHub] nifi pull request: NIFI-1645 refactored PutKafka

2016-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/295#discussion_r57016835
  
--- Diff: 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/main/java/org/apache/nifi/processors/kafka/KafkaPublisher.java
 ---
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.kafka;
+
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+import java.util.Scanner;
+
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ProcessorLog;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import kafka.javaapi.producer.Producer;
+import kafka.producer.KeyedMessage;
+import kafka.producer.Partitioner;
+import kafka.producer.ProducerConfig;
+
+/**
+ * Wrapper over {@link KafkaProducer} to assist {@link PutKafka} processor 
with
+ * sending content of {@link FlowFile}s to Kafka.
+ */
+public class KafkaPublisher implements AutoCloseable {
+
+private static final Logger logger = 
LoggerFactory.getLogger(KafkaPublisher.class);
+
+private final Producer producer;
+
+private ProcessorLog processLog;
+
+/**
+ * Creates an instance of this class as well as the instance of the
+ * corresponding Kafka {@link KafkaProducer} using provided Kafka
+ * configuration properties.
+ */
+KafkaPublisher(Properties kafkaProperties) {
+ProducerConfig producerConfig = new 
ProducerConfig(kafkaProperties);
+this.producer = new Producer<>(producerConfig);
+}
+
+/**
+ *
+ */
+void setProcessLog(ProcessorLog processLog) {
+this.processLog = processLog;
+}
+
+/**
+ * Publishes messages to Kafka topic. It supports three publishing
+ * mechanisms.
+ * 
+ * Sending the entire content stream as a single Kafka 
message.
+ * Splitting the incoming content stream into chunks and sending
+ * individual chunks as separate Kafka messages.
+ * Splitting the incoming content stream into chunks and sending 
only
+ * the chunks that have failed previously @see
+ * {@link SplittableMessageContext#getFailedSegments()}.
+ * 
+ * This method assumes content stream affinity where it is expected 
that the
+ * content stream that represents the same Kafka message(s) will 
remain the
+ * same across possible retries. This is required specifically for 
cases
+ * where delimiter is used and a single content stream may represent
+ * multiple Kafka messages. The failed segment list will keep the 
index of
+ * of each content stream segment that had failed to be sent to Kafka, 
so
+ * upon retry only the failed segments are sent.
+ *
+ * @param messageContext
+ *instance of {@link SplittableMessageContext} which hold
+ *context information about the message to be sent
+ * @param contentStream
+ *instance of open {@link InputStream} carrying the 
content of
+ *the message(s) to be send to Kafka
+ * @param partitionKey
+ *the value of the partition key. Only relevant is user 
wishes
+ *to provide a custom partition key instead of relying on
+ *variety of provided {@link Partitioner}(s)
+ * @return The list containing the failed segment indexes for messages 
that
+ * failed to be sent to Kafka.
+ */
+List publish(SplittableMessageContext messageContext, 
InputStream contentStream, Object partitionKey) {
+List prevFailedSegmentIndexes = 
messageContext.getFailedSegments();
+List failedSegments = new ArrayList<>();
+int segmentCounter = 

[GitHub] nifi pull request: NIFI-1645 refactored PutKafka

2016-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/295#discussion_r57015759
  
--- Diff: 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/main/java/org/apache/nifi/processors/kafka/KafkaPublisher.java
 ---
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.kafka;
+
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+import java.util.Scanner;
+
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ProcessorLog;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import kafka.javaapi.producer.Producer;
+import kafka.producer.KeyedMessage;
+import kafka.producer.Partitioner;
+import kafka.producer.ProducerConfig;
+
+/**
+ * Wrapper over {@link KafkaProducer} to assist {@link PutKafka} processor 
with
+ * sending content of {@link FlowFile}s to Kafka.
+ */
+public class KafkaPublisher implements AutoCloseable {
+
+private static final Logger logger = 
LoggerFactory.getLogger(KafkaPublisher.class);
+
+private final Producer producer;
+
+private ProcessorLog processLog;
+
+/**
+ * Creates an instance of this class as well as the instance of the
+ * corresponding Kafka {@link KafkaProducer} using provided Kafka
+ * configuration properties.
+ */
+KafkaPublisher(Properties kafkaProperties) {
+ProducerConfig producerConfig = new 
ProducerConfig(kafkaProperties);
+this.producer = new Producer<>(producerConfig);
+}
+
+/**
+ *
+ */
+void setProcessLog(ProcessorLog processLog) {
+this.processLog = processLog;
+}
+
+/**
+ * Publishes messages to Kafka topic. It supports three publishing
+ * mechanisms.
+ * 
+ * Sending the entire content stream as a single Kafka 
message.
+ * Splitting the incoming content stream into chunks and sending
+ * individual chunks as separate Kafka messages.
+ * Splitting the incoming content stream into chunks and sending 
only
+ * the chunks that have failed previously @see
+ * {@link SplittableMessageContext#getFailedSegments()}.
+ * 
+ * This method assumes content stream affinity where it is expected 
that the
+ * content stream that represents the same Kafka message(s) will 
remain the
+ * same across possible retries. This is required specifically for 
cases
+ * where delimiter is used and a single content stream may represent
+ * multiple Kafka messages. The failed segment list will keep the 
index of
+ * of each content stream segment that had failed to be sent to Kafka, 
so
+ * upon retry only the failed segments are sent.
+ *
+ * @param messageContext
+ *instance of {@link SplittableMessageContext} which hold
+ *context information about the message to be sent
+ * @param contentStream
+ *instance of open {@link InputStream} carrying the 
content of
+ *the message(s) to be send to Kafka
+ * @param partitionKey
+ *the value of the partition key. Only relevant is user 
wishes
+ *to provide a custom partition key instead of relying on
+ *variety of provided {@link Partitioner}(s)
+ * @return The list containing the failed segment indexes for messages 
that
+ * failed to be sent to Kafka.
+ */
+List publish(SplittableMessageContext messageContext, 
InputStream contentStream, Object partitionKey) {
+List prevFailedSegmentIndexes = 
messageContext.getFailedSegments();
--- End diff --

Rather than `List<Integer>` here, did you consider using a `BitSet`? 
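For illustration (a standalone sketch, not the PR code), a `BitSet` records failed segment indexes compactly and supports constant-time membership checks plus direct iteration over the set bits:

```java
import java.util.BitSet;

public class FailedSegments {
    public static void main(String[] args) {
        BitSet failed = new BitSet();
        failed.set(2); // segment 2 failed to send
        failed.set(5); // segment 5 failed to send

        // On retry, resend only the failed segments.
        for (int i = failed.nextSetBit(0); i >= 0; i = failed.nextSetBit(i + 1)) {
            System.out.println("resend segment " + i);
        }

        System.out.println(failed.get(3)); // false: segment 3 succeeded
    }
}
```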

[GitHub] nifi pull request: NIFI-1645 refactored PutKafka

2016-03-22 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/295#discussion_r57015299
  
--- Diff: 
nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-processors/src/main/java/org/apache/nifi/processors/kafka/KafkaPublisher.java
 ---
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.kafka;
+
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+import java.util.Scanner;
+
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ProcessorLog;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import kafka.javaapi.producer.Producer;
+import kafka.producer.KeyedMessage;
+import kafka.producer.Partitioner;
+import kafka.producer.ProducerConfig;
+
+/**
+ * Wrapper over {@link KafkaProducer} to assist {@link PutKafka} processor 
with
+ * sending content of {@link FlowFile}s to Kafka.
+ */
+public class KafkaPublisher implements AutoCloseable {
+
+private static final Logger logger = 
LoggerFactory.getLogger(KafkaPublisher.class);
+
+private final Producer producer;
+
+private ProcessorLog processLog;
+
+/**
+ * Creates an instance of this class as well as the instance of the
+ * corresponding Kafka {@link KafkaProducer} using provided Kafka
+ * configuration properties.
+ */
+KafkaPublisher(Properties kafkaProperties) {
+ProducerConfig producerConfig = new 
ProducerConfig(kafkaProperties);
+this.producer = new Producer<>(producerConfig);
+}
+
+/**
+ *
+ */
+void setProcessLog(ProcessorLog processLog) {
+this.processLog = processLog;
+}
+
+/**
+ * Publishes messages to Kafka topic. It supports three publishing
+ * mechanisms.
+ * 
+ * Sending the entire content stream as a single Kafka 
message.
+ * Splitting the incoming content stream into chunks and sending
+ * individual chunks as separate Kafka messages.
+ * Splitting the incoming content stream into chunks and sending 
only
+ * the chunks that have failed previously @see
+ * {@link SplittableMessageContext#getFailedSegments()}.
+ * 
+ * This method assumes content stream affinity where it is expected 
that the
+ * content stream that represents the same Kafka message(s) will 
remain the
+ * same across possible retries. This is required specifically for 
cases
+ * where delimiter is used and a single content stream may represent
+ * multiple Kafka messages. The failed segment list will keep the 
index of
+ * of each content stream segment that had failed to be sent to Kafka, 
so
+ * upon retry only the failed segments are sent.
+ *
+ * @param messageContext
+ *instance of {@link SplittableMessageContext} which hold
+ *context information about the message to be sent
+ * @param contentStream
+ *instance of open {@link InputStream} carrying the 
content of
+ *the message(s) to be send to Kafka
+ * @param partitionKey
+ *the value of the partition key. Only relevant is user 
wishes
+ *to provide a custom partition key instead of relying on
+ *variety of provided {@link Partitioner}(s)
+ * @return The list containing the failed segment indexes for messages 
that
+ * failed to be sent to Kafka.
+ */
+List publish(SplittableMessageContext messageContext, 
InputStream contentStream, Object partitionKey) {
+List prevFailedSegmentIndexes = 
messageContext.getFailedSegments();
+List failedSegments = new ArrayList<>();
+int segmentCounter = 

[GitHub] nifi pull request: NIFI-1664 Preferring System.nanoTime to System....

2016-03-22 Thread pvillard31
Github user pvillard31 commented on the pull request:

https://github.com/apache/nifi/pull/297#issuecomment-199871907
  
Testing on Windows 7


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request: NIFI-1664 Preferring System.nanoTime to System....

2016-03-22 Thread pvillard31
Github user pvillard31 commented on the pull request:

https://github.com/apache/nifi/pull/297#issuecomment-199874747
  
I'm a +1, all tests previously in failure are now OK.




Re: Closing in on the Apache NiFi 0.6.0 release

2016-03-22 Thread Joe Witt
Aldrin,

NIFI-1665 appears to correct a problematic behavior when pulling from
Kafka when timeouts can occur.  Definitely think we should get
this in the build.  I also see that NIFI-1645 is up and, given the
trouble it is causing for use of the delimiter function, we should engage
on this.

Since you're working the windows build issue and these are in play, do
you mind waiting a bit before sending the new RC?

Thanks
Joe

On Mon, Mar 21, 2016 at 1:42 PM, Aldrin Piri  wrote:
> All,
>
> It looks like the last ticket for 0.6.0 has been merged and resolved.
>
> I will begin the RC process shortly working off of commit
> 736896246cf021dbed31d4eb1e22e0755e4705f0 [1] [2].
>
> [1]
> https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=commit;h=736896246cf021dbed31d4eb1e22e0755e4705f0
> [2]
> https://github.com/apache/nifi/commit/736896246cf021dbed31d4eb1e22e0755e4705f0
>
> On Mon, Mar 21, 2016 at 1:48 AM, Tony Kurc  wrote:
>
>> The Locale issue was reviewed, confirmed as fixed by reporter and merged
>> in.
>>
>> On Sun, Mar 20, 2016 at 10:35 PM, Joe Witt  wrote:
>>
>> > Team,
>> >
>> > There are a couple finishing touches PRs to fix a big defect in
>> > SplitText for certain input types, improve locale handling and test
>> > behavior for Kit bundle, and to clean up content viewing from
>> > connections.
>> >
>> > Getting good input on findings folks have so please keep it coming as
>> > that helps ensure a solid/healthy RC.
>> >
>> > Thanks
>> > Joe
>> >
>> > On Sat, Mar 19, 2016 at 6:21 PM, Tony Kurc  wrote:
>> > > Recommend https://issues.apache.org/jira/browse/NIFI-1651 be included
>> in
>> > > 0.6.0
>> > >
>> > > On Wed, Mar 16, 2016 at 4:08 PM, Joe Witt  wrote:
>> > >
>> > >> Team,
>> > >>
>> > >> Ok sooo close.  We have 5 tickets remaining.
>> > >>
>> > >> - Additional functionality/cleanup for SplitText [1]
>> > >> [status] Still in discussions. Recommend we move this change to 0.7.0.
>> > >> Solid effort on both code contributor and reviewer side but this is a
>> > >> tricky one.
>> > >>
>> > >> - Support Kerberos based authentication to REST API [2]
>> > >> [status] PR is in. Reviewing and PR tweaking appears active.  Looks
>> > >> quite close and comments indicate great results.
>> > >>
>> > >> - Add Kerberos support to HBase processors [3]
>> > >> [status] Patch in. Under review.  Running on live test system with
>> > >> great results.
>> > >>
>> > >> - Add support for Spring Context loaded processors (Spring
>> > >> Integrations, Camel, ...) [4]
>> > >> [status] Appears ready. Getting review feedback.
>> > >>
>> > >> - Zookeeper interaction for NiFi state management should limit state
>> to
>> > >> 1MB [6]
>> > >> [status] Patch is in and review under way.  Looks close.
>> > >>
>> > >> [1] https://issues.apache.org/jira/browse/NIFI-1118
>> > >> [2] https://issues.apache.org/jira/browse/NIFI-1274
>> > >> [3] https://issues.apache.org/jira/browse/NIFI-1488
>> > >> [4] https://issues.apache.org/jira/browse/NIFI-1571
>> > >> [5] https://issues.apache.org/jira/browse/NIFI-1626
>> > >>
>> > >> Thanks
>> > >>
>> > >> On Wed, Mar 16, 2016 at 4:04 PM, Joe Witt  wrote:
>> > >> > Team,
>> > >> >
>> > >> > Ok sooo close.  We have 6 tickets remaining.
>> > >> >
>> > >> > - Additional functionality/cleanup for SplitText [1]
>> > >> > [status] Still in discussions. Recommend we move this change to
>> 0.7.0.
>> > >> > Solid effort on both code contributor and reviewer side but this is
>> a
>> > >> > tricky one.
>> > >> >
>> > >> > - Support Kerberos based authentication to REST API [2]
>> > >> > [status] PR is in. Reviewing and PR tweaking appears active.  Looks
>> > >> > quite close and comments indicate great results.
>> > >> >
>> > >> > - Add Kerberos support to HBase processors [3]
>> > >> > [status] Patch in. Under review.  Running on live test system with
>> > >> > great results.
>> > >> >
>> > >> > - Add support for Spring Context loaded processors (Spring
>> > >> > Integrations, Camel, ...) [4]
>> > >> > [status] Appears ready. Getting review feedback.
>> > >> >
>> > >> > - Support basic database snapshot/query/change capture [5]
>> > >> > [status] Appears ready. Needs review.
>> > >> >
>> > >> > - Zookeeper interaction for NiFi state management should limit state
>> > to
>> > >> 1MB [6]
>> > >> > [status] Patch is in and review under way.  Looks close.
>> > >> >
>> > >> > [1] https://issues.apache.org/jira/browse/NIFI-1118
>> > >> > [2] https://issues.apache.org/jira/browse/NIFI-1274
>> > >> > [3] https://issues.apache.org/jira/browse/NIFI-1488
>> > >> > [4] https://issues.apache.org/jira/browse/NIFI-1571
>> > >> > [5] https://issues.apache.org/jira/browse/NIFI-1575
>> > >> > [6] https://issues.apache.org/jira/browse/NIFI-1626
>> > >> >
>> > >> > Thanks
>> > >> > Joe
>> > >> >
>> > >> > On Tue, Mar 15, 2016 at 11:56 AM, Aldrin Piri > >
>> > >> wrote:
>> > >> >> Sounds 

[GitHub] nifi pull request: NIFI-1664 Preferring System.nanoTime to System....

2016-03-22 Thread JPercivall
Github user JPercivall commented on the pull request:

https://github.com/apache/nifi/pull/297#issuecomment-199870316
  
I am testing on Windows 8.




[GitHub] nifi pull request: NIFI-1664 Preferring System.nanoTime to System....

2016-03-22 Thread mcgilman
Github user mcgilman commented on the pull request:

https://github.com/apache/nifi/pull/297#issuecomment-199867101
  
I can verify the tests on Windows 10.




[GitHub] nifi pull request: NIFI-1664 Preferring System.nanoTime to System....

2016-03-22 Thread apiri
GitHub user apiri opened a pull request:

https://github.com/apache/nifi/pull/297

NIFI-1664 Preferring System.nanoTime to System.currentTimeMillis and …

…providing explicit handling of timestamps for files in those tests that 
are testing other attributes of the ListFile process besides timestamp which 
could lead to erroneous transmissions depending on exactly when files were 
created.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apiri/incubator-nifi NIFI-1664

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/297.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #297


commit e5fb07ca82ed3a192cf4810ea331ccfc21a5d36f
Author: Aldrin Piri 
Date:   2016-03-22T15:10:08Z

NIFI-1664 Preferring System.nanoTime to System.currentTimeMillis and 
providing explicit handling of timestamps for files in those tests that are 
testing other attributes of the ListFile process besides timestamp which could 
lead to erroneous transmissions depending on exactly when files were created.






[GitHub] nifi pull request: NIFI-1665 fixed GetKafka to reset consumer in c...

2016-03-22 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/296

NIFI-1665 fixed GetKafka to reset consumer in case of timeout



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1665

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/296.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #296


commit 4840991d1feb9f10413adddcbd82bca563268999
Author: Oleg Zhurakousky 
Date:   2016-03-22T15:19:39Z

NIFI-1665 fixed GetKafka to reset consumer in case of timeout






Re: notEmpty

2016-03-22 Thread Matthew Clarke
Paul,
You can achieve what you are trying to do by using the not function.

Let's assume the attributes you want to check to make sure they have a value
set are attr1, attr2, and attr3.

The expression would be
*${allAttributes('attr1','attr2','attr3'):isEmpty():not()}*

Thanks,
Matt

On Tue, Mar 22, 2016 at 9:46 AM, Paul Nahay  wrote:

>
> https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#isempty
>
> Why don't you have function "notEmpty"? This would be useful when combined
> with "allAttributes".
>
> I can't see any way, with your current set of functions, to determine that
> all of a set of attributes are not empty, using the "allAttributes"
> function.
>
> You have "isNull" and "notNull", why don't you have "notEmpty"?
>
> Paul Nahay
> pna...@sprynet.com
>


[GitHub] nifi pull request: NIFI-1645 refactored PutKafka

2016-03-22 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/295

NIFI-1645 refactored PutKafka

- used newest API available in 0.8.* version
- added PutKafka integration tests
- Kafka module code coverage is at 85%

NIFI-1645 polishing

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1645

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/295.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #295


commit ed182df8629ab821d97ca437c2824d6788146c8d
Author: Oleg Zhurakousky 
Date:   2016-03-22T14:50:07Z

NIFI-1645 refactored PutKafka
- used newest API available in 0.8.* version
- added PutKafka integration tests
- Kafka module code coverage is at 85%

NIFI-1645 polishing






Re: Error at startup

2016-03-22 Thread Pierre Villard
I did a full build with Maven, took the generated zip
(nifi-0.6.0-SNAPSHOT-bin.zip), unzipped it, and executed the run-nifi.bat.
So it is a clean instance. As you said, the local-provider is correctly set:


<local-provider>
    <id>local-provider</id>
    <class>org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider</class>
    <property name="Directory">./state/local</property>
</local-provider>



2016-03-22 15:38 GMT+01:00 Matt Gilman :

> Pierre,
>
> Are you attempting to upgrade an existing instance? If so, what version are
> you coming from? I'm wondering if there is some configuration missing after
> the upgrade. Are you able to start up the built assembly using default
> configuration?
>
> In your /conf/state-management.xml can you verify the
> configuration of the local-provider? By default, it's configured to
> use
> org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider
> with a Directory property that points to ./state/local.
>
> I think the error in the logs is having trouble with that property.
>
> Matt
>
>
> On Tue, Mar 22, 2016 at 10:16 AM, Pierre Villard <
> pierre.villard...@gmail.com> wrote:
>
> > OK. Logs are here:
> > https://raw.githubusercontent.com/pvillard31/share/master/nifi-app.log
> >
> > 2016-03-22 15:12 GMT+01:00 Matt Burgess :
> >
> > > I can't see them, perhaps the attachment is being removed. Can you
> paste
> > > the text from the logs into the email?
> > >
> > > Thanks,
> > > Matt
> > >
> > > On Tue, Mar 22, 2016 at 10:10 AM, Pierre Villard <
> > > pierre.villard...@gmail.com> wrote:
> > >
> > > > Erf that's strange, I do see logs from my side.
> > > > Is it better?
> > > >
> > > > 2016-03-22 15:06 GMT+01:00 Oleg Zhurakousky <
> > > ozhurakou...@hortonworks.com>
> > > > :
> > > >
> > > >> Pierre, no logs ;)
> > > >>
> > > >> > On Mar 22, 2016, at 10:03 AM, Pierre Villard <
> > > >> pierre.villard...@gmail.com> wrote:
> > > >> >
> > > >> > Hi,
> > > >> >
> > > >> > I updated my local checkout to current master and did a successful
> > > >> maven build. When trying to start generated binaries, I have a bunch
> > of
> > > >> errors and NIFI does not start. See attached logs.
> > > >> >
> > > >> > Does someone experience the same issue?
> > > >> >
> > > >> > Thanks,
> > > >> > Pierre
> > > >> >
> > > >>
> > > >>
> > > >
> > >
> >
>


Re: Error at startup

2016-03-22 Thread Matt Gilman
Pierre,

Are you attempting to upgrade an existing instance? If so, what version are
you coming from? I'm wondering if there is some configuration missing after
the upgrade. Are you able to start up the built assembly using default
configuration?

In your /conf/state-management.xml can you verify the
configuration of the local-provider? By default, it's configured to
use 
org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider
with a Directory property that points to ./state/local.

I think the error in the logs is having trouble with that property.

Matt


On Tue, Mar 22, 2016 at 10:16 AM, Pierre Villard <
pierre.villard...@gmail.com> wrote:

> OK. Logs are here:
> https://raw.githubusercontent.com/pvillard31/share/master/nifi-app.log
>
> 2016-03-22 15:12 GMT+01:00 Matt Burgess :
>
> > I can't see them, perhaps the attachment is being removed. Can you paste
> > the text from the logs into the email?
> >
> > Thanks,
> > Matt
> >
> > On Tue, Mar 22, 2016 at 10:10 AM, Pierre Villard <
> > pierre.villard...@gmail.com> wrote:
> >
> > > Erf that's strange, I do see logs from my side.
> > > Is it better?
> > >
> > > 2016-03-22 15:06 GMT+01:00 Oleg Zhurakousky <
> > ozhurakou...@hortonworks.com>
> > > :
> > >
> > >> Pierre, no logs ;)
> > >>
> > >> > On Mar 22, 2016, at 10:03 AM, Pierre Villard <
> > >> pierre.villard...@gmail.com> wrote:
> > >> >
> > >> > Hi,
> > >> >
> > >> > I updated my local checkout to current master and did a successful
> > >> maven build. When trying to start generated binaries, I have a bunch
> of
> > >> errors and NIFI does not start. See attached logs.
> > >> >
> > >> > Does someone experience the same issue?
> > >> >
> > >> > Thanks,
> > >> > Pierre
> > >> >
> > >>
> > >>
> > >
> >
>


[GitHub] nifi pull request: NIFI-1563: Federate requests and merge response...

2016-03-22 Thread mcgilman
Github user mcgilman commented on the pull request:

https://github.com/apache/nifi/pull/294#issuecomment-199837163
  
Also, in nf-cluster-search.js we can delete lines 72 and 172. We no longer 
need to set:

`nf.SummaryTable.systemDiagnosticsUrl`

Setting the clusterNodeId on the preceding line should be sufficient. 




[GitHub] nifi pull request: NIFI-1563: Federate requests and merge response...

2016-03-22 Thread mcgilman
Github user mcgilman commented on the pull request:

https://github.com/apache/nifi/pull/294#issuecomment-199835774
  
@markap14 Unfortunately, the changeset is too big for me to comment on 
directly. But in nf-counters-table.js the Ajax request on line 281 does not 
need to set the nodewise flag to true. We do not currently provide a node by 
node breakdown of the Counters. When we do offer that, we should add that flag 
back in.




Re: Error at startup

2016-03-22 Thread Pierre Villard
OK. Logs are here:
https://raw.githubusercontent.com/pvillard31/share/master/nifi-app.log

2016-03-22 15:12 GMT+01:00 Matt Burgess :

> I can't see them, perhaps the attachment is being removed. Can you paste
> the text from the logs into the email?
>
> Thanks,
> Matt
>
> On Tue, Mar 22, 2016 at 10:10 AM, Pierre Villard <
> pierre.villard...@gmail.com> wrote:
>
> > Erf that's strange, I do see logs from my side.
> > Is it better?
> >
> > 2016-03-22 15:06 GMT+01:00 Oleg Zhurakousky <
> ozhurakou...@hortonworks.com>
> > :
> >
> >> Pierre, no logs ;)
> >>
> >> > On Mar 22, 2016, at 10:03 AM, Pierre Villard <
> >> pierre.villard...@gmail.com> wrote:
> >> >
> >> > Hi,
> >> >
> >> > I updated my local checkout to current master and did a successful
> >> maven build. When trying to start generated binaries, I have a bunch of
> >> errors and NIFI does not start. See attached logs.
> >> >
> >> > Does someone experience the same issue?
> >> >
> >> > Thanks,
> >> > Pierre
> >> >
> >>
> >>
> >
>


Re: Error at startup

2016-03-22 Thread Matt Burgess
I can't see them, perhaps the attachment is being removed. Can you paste
the text from the logs into the email?

Thanks,
Matt

On Tue, Mar 22, 2016 at 10:10 AM, Pierre Villard <
pierre.villard...@gmail.com> wrote:

> Erf that's strange, I do see logs from my side.
> Is it better?
>
> 2016-03-22 15:06 GMT+01:00 Oleg Zhurakousky 
> :
>
>> Pierre, no logs ;)
>>
>> > On Mar 22, 2016, at 10:03 AM, Pierre Villard <
>> pierre.villard...@gmail.com> wrote:
>> >
>> > Hi,
>> >
>> > I updated my local checkout to current master and did a successful
>> maven build. When trying to start generated binaries, I have a bunch of
>> errors and NIFI does not start. See attached logs.
>> >
>> > Does someone experience the same issue?
>> >
>> > Thanks,
>> > Pierre
>> >
>>
>>
>


Re: Error at startup

2016-03-22 Thread Pierre Villard
Erf that's strange, I do see logs from my side.
Is it better?

2016-03-22 15:06 GMT+01:00 Oleg Zhurakousky :

> Pierre, no logs ;)
>
> > On Mar 22, 2016, at 10:03 AM, Pierre Villard <
> pierre.villard...@gmail.com> wrote:
> >
> > Hi,
> >
> > I updated my local checkout to current master and did a successful maven
> build. When trying to start generated binaries, I have a bunch of errors
> and NIFI does not start. See attached logs.
> >
> > Does someone experience the same issue?
> >
> > Thanks,
> > Pierre
> >
>
>


Re: Error at startup

2016-03-22 Thread Oleg Zhurakousky
Pierre, no logs ;)

> On Mar 22, 2016, at 10:03 AM, Pierre Villard  
> wrote:
> 
> Hi,
> 
> I updated my local checkout to current master and did a successful maven 
> build. When trying to start generated binaries, I have a bunch of errors and 
> NIFI does not start. See attached logs.
> 
> Does someone experience the same issue?
> 
> Thanks,
> Pierre
> 



Error at startup

2016-03-22 Thread Pierre Villard
Hi,

I updated my local checkout to current master and did a successful maven
build. When trying to start generated binaries, I have a bunch of errors
and NIFI does not start. See attached logs.

Does someone experience the same issue?

Thanks,
Pierre


[GitHub] nifi pull request: NIFI-1563: Federate requests and merge response...

2016-03-22 Thread mcgilman
Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/294#discussion_r56988394
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-client-dto/src/main/java/org/apache/nifi/web/api/dto/CounterDTO.java
 ---
@@ -98,4 +98,12 @@ public void setValueCount(Long valueCount) {
 this.valueCount = valueCount;
 }
 
+@Override
+public CounterDTO clone() {
--- End diff --

@markap14 There appear to be about 5 instances of super.clone(). Given the 
ambiguous specification on the implementation of Object.clone and the 
inconsistency with the manual clone done explicitly elsewhere, can we update these?
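
For readers following the review comment, the contrast at issue can be sketched in a few lines. CounterSketch below is a hypothetical stand-in, not the real CounterDTO:

```java
// Illustrates the two cloning styles the comment compares: relying on
// Object.clone's field-by-field shallow copy vs. an explicit manual copy.
public class CounterSketch implements Cloneable {
    private final String context;
    private final Long valueCount;

    public CounterSketch(String context, Long valueCount) {
        this.context = context;
        this.valueCount = valueCount;
    }

    // Option A: super.clone() -- terse, but the copied fields are implicit.
    @Override
    public CounterSketch clone() {
        try {
            return (CounterSketch) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen: we implement Cloneable
        }
    }

    // Option B: explicit manual copy -- every field assignment is visible,
    // which is the consistency the review comment is asking for.
    public CounterSketch copy() {
        return new CounterSketch(context, valueCount);
    }

    public String getContext() { return context; }
    public Long getValueCount() { return valueCount; }
}
```

Both produce equivalent shallow copies for flat DTOs like this; the difference only matters for readability and for fields that need deep copying.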




notEmpty

2016-03-22 Thread Paul Nahay
https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#isempty

Why don't you have function "notEmpty"? This would be useful when combined with 
"allAttributes". 

I can't see any way, with your current set of functions, to determine that all 
of a set of attributes are not empty, using the "allAttributes" function. 

You have "isNull" and "notNull", why don't you have "notEmpty"?

Paul Nahay
pna...@sprynet.com


Re: [VOTE] Release Apache NiFi 0.6.0 (RC1)

2016-03-22 Thread Aldrin Piri
All,

Given the issues I am cancelling this vote.  There were some miscues in the
artifact generation process that have caused issues and clouded the
process.  Please look for an updated vote RC and helper.

Thank you for your efforts and time and apologies for the hiccups.

On Tue, Mar 22, 2016 at 8:00 AM, Aldrin Piri  wrote:

> Drat, the announce email was wrong (likely glossed over it copying and
> pasting), but the md5sum provided to the repository is correct
> (nifi-0.6.0-source-release.zip.md5). I did an md5 against the artifact as
> provided to SVN using the actually prepared md5 checksum.
>
> # apiri in ~/Development/verify/release-0.6.0/dist/nifi/nifi-0.6.0
> [7:55:56]
> $ cat nifi-0.6.0-source-release.zip.md5
> 1597a93574b928b7c78e3014d1eca416%
>
> # apiri in ~/Development/verify/release-0.6.0/dist/nifi/nifi-0.6.0
> [7:55:59]
> $ md5sum nifi-0.6.0-source-release.zip
> 1597a93574b928b7c78e3014d1eca416  nifi-0.6.0-source-release.zip
>
> On Tue, Mar 22, 2016 at 7:37 AM, Matt Burgess  wrote:
>
>> I'm getting an MD5 checksum mismatch on the ZIP:
>>
>> < MD5(nifi-0.6.0-source-release.zip)= 1559157db000d53221aeabc5dd607cfc
>> ---
>> > MD5(nifi-0.6.0-source-release.zip)= 1597a93574b928b7c78e3014d1eca416
>>
>> On Mon, Mar 21, 2016 at 11:08 PM, Andy LoPresto <
>> alopresto.apa...@gmail.com>
>> wrote:
>>
>> > Thanks to Aldrin and Matt Burgess, we were able to push a new signature
>> > for each artifact to the repository and verify it. Please resume release
>> > verification.
>> >
>> > hw12203:/Users/alopresto/Workspace/scratch/release_verification/0.6.0
>> > alopresto
>> >  0s @ 20:01:08 $ rmf nifi-0.6.0-source-release.zip.asc
>> > nifi-0.6.0-source-release.zip.asc
>> > hw12203:/Users/alopresto/Workspace/scratch/release_verification/0.6.0
>> > alopresto
>> >  0s @ 20:01:14 $ wget
>> >
>> https://dist.apache.org/repos/dist/dev/nifi/nifi-0.6.0/nifi-0.6.0-source-release.zip.asc
>> > --2016-03-21 20:01:16--
>> >
>> https://dist.apache.org/repos/dist/dev/nifi/nifi-0.6.0/nifi-0.6.0-source-release.zip.asc
>> > Resolving dist.apache.org... 209.188.14.144
>> > Connecting to dist.apache.org|209.188.14.144|:443... connected.
>> > HTTP request sent, awaiting response... 200 OK
>> > Length: 801 [text/plain]
>> > Saving to: 'nifi-0.6.0-source-release.zip.asc'
>> >
>> > nifi-0.6.0-source-release.zip.asc
>> > 100%[>]
>> > 801  --.-KB/sin 0s
>> >
>> > 2016-03-21 20:01:16 (34.7 MB/s) - 'nifi-0.6.0-source-release.zip.asc'
>> > saved [801/801]
>> >
>> > hw12203:/Users/alopresto/Workspace/scratch/release_verification/0.6.0
>> > alopresto
>> >  0s @ 20:01:17 $ diff nifi-0.6.0-source-release.zip.asc aldrin-new.asc
>> > hw12203:/Users/alopresto/Workspace/scratch/release_verification/0.6.0
>> > alopresto
>> >  0s @ 20:01:29 $ gpg --verify -v aldrin-new.asc
>> > nifi-0.6.0-source-release.zip
>> > gpg: Signature made Mon Mar 21 19:39:05 2016 PDT using RSA key ID
>> 4CFE5D00
>> > gpg: using PGP trust model
>> > gpg: Good signature from "Aldrin Piri (Code Signing Key) <
>> > ald...@apache.org>" [full]
>> > gpg: binary signature, digest algorithm SHA512
>> > hw12203:/Users/alopresto/Workspace/scratch/release_verification/0.6.0
>> > alopresto
>> >  0s @ 20:01:38 $ gpg --verify -v nifi-0.6.0-source-release.zip.asc
>> > nifi-0.6.0-source-release.zip
>> > gpg: Signature made Mon Mar 21 19:39:05 2016 PDT using RSA key ID
>> 4CFE5D00
>> > gpg: using PGP trust model
>> > gpg: Good signature from "Aldrin Piri (Code Signing Key) <
>> > ald...@apache.org>" [full]
>> > gpg: binary signature, digest algorithm SHA512
>> > hw12203:/Users/alopresto/Workspace/scratch/release_verification/0.6.0
>> > alopresto
>> >  0s @ 20:01:50 $
>> >
>> > hw12203:/Users/alopresto/Workspace/scratch/release_verification/0.6.0
>> > alopresto
>> >  0s @ 20:01:50 $ wget
>> >
>> https://dist.apache.org/repos/dist/dev/nifi/nifi-0.6.0/nifi-0.6.0-bin.zip.asc
>> > --2016-03-21 20:03:43--
>> >
>> https://dist.apache.org/repos/dist/dev/nifi/nifi-0.6.0/nifi-0.6.0-bin.zip.asc
>> > Resolving dist.apache.org... 209.188.14.144
>> > Connecting to dist.apache.org|209.188.14.144|:443... connected.
>> > HTTP request sent, awaiting response... 200 OK
>> > Length: 801 [text/plain]
>> > Saving to: 'nifi-0.6.0-bin.zip.asc'
>> >
>> > nifi-0.6.0-bin.zip.asc
>> > 100%[>]
>> > 801  --.-KB/sin 0s
>> >
>> > 2016-03-21 20:03:43 (27.3 MB/s) - 'nifi-0.6.0-bin.zip.asc' saved
>> [801/801]
>> >
>> > hw12203:/Users/alopresto/Workspace/scratch/release_verification/0.6.0
>> > alopresto
>> >  0s @ 20:03:44 $ wget
>> >
>> https://dist.apache.org/repos/dist/dev/nifi/nifi-0.6.0/nifi-0.6.0-bin.zip
>> > --2016-03-21 20:03:47--
>> >
>> https://dist.apache.org/repos/dist/dev/nifi/nifi-0.6.0/nifi-0.6.0-bin.zip
>> > Resolving dist.apache.org... 209.188.14.144
>> > Connecting to dist.apache.org|209.188.14.144|:443... connected.
>> > HTTP 

Re: [VOTE] Release Apache NiFi 0.6.0 (RC1)

2016-03-22 Thread Matt Burgess
I'm getting an MD5 checksum mismatch on the ZIP:

< MD5(nifi-0.6.0-source-release.zip)= 1559157db000d53221aeabc5dd607cfc
---
> MD5(nifi-0.6.0-source-release.zip)= 1597a93574b928b7c78e3014d1eca416
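
For anyone reproducing this check without md5sum on hand, the comparison can be done with the standard MessageDigest API. A minimal sketch (file path and expected digest are supplied by the verifier, not taken from this thread):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

// Compute an MD5 hex digest of a file and compare it to the published
// checksum, mirroring the md5sum check discussed above.
public class Md5Check {

    static String md5Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] artifact = Files.readAllBytes(Path.of(args[0]));
        String expected = args[1].trim(); // value from the .md5 file
        String actual = md5Hex(artifact);
        System.out.println(actual.equals(expected) ? "OK" : "MISMATCH: " + actual);
    }
}
```

A mismatch like the one reported here usually means the local download is stale or corrupt, so re-fetching the artifact is the first thing to try.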

On Mon, Mar 21, 2016 at 11:08 PM, Andy LoPresto 
wrote:

> Thanks to Aldrin and Matt Burgess, we were able to push a new signature
> for each artifact to the repository and verify it. Please resume release
> verification.
>
> hw12203:/Users/alopresto/Workspace/scratch/release_verification/0.6.0
> alopresto
>  0s @ 20:01:08 $ rmf nifi-0.6.0-source-release.zip.asc
> nifi-0.6.0-source-release.zip.asc
> hw12203:/Users/alopresto/Workspace/scratch/release_verification/0.6.0
> alopresto
>  0s @ 20:01:14 $ wget
> https://dist.apache.org/repos/dist/dev/nifi/nifi-0.6.0/nifi-0.6.0-source-release.zip.asc
> --2016-03-21 20:01:16--
> https://dist.apache.org/repos/dist/dev/nifi/nifi-0.6.0/nifi-0.6.0-source-release.zip.asc
> Resolving dist.apache.org... 209.188.14.144
> Connecting to dist.apache.org|209.188.14.144|:443... connected.
> HTTP request sent, awaiting response... 200 OK
> Length: 801 [text/plain]
> Saving to: 'nifi-0.6.0-source-release.zip.asc'
>
> nifi-0.6.0-source-release.zip.asc
> 100%[>]
> 801  --.-KB/sin 0s
>
> 2016-03-21 20:01:16 (34.7 MB/s) - 'nifi-0.6.0-source-release.zip.asc'
> saved [801/801]
>
> hw12203:/Users/alopresto/Workspace/scratch/release_verification/0.6.0
> alopresto
>  0s @ 20:01:17 $ diff nifi-0.6.0-source-release.zip.asc aldrin-new.asc
> hw12203:/Users/alopresto/Workspace/scratch/release_verification/0.6.0
> alopresto
>  0s @ 20:01:29 $ gpg --verify -v aldrin-new.asc
> nifi-0.6.0-source-release.zip
> gpg: Signature made Mon Mar 21 19:39:05 2016 PDT using RSA key ID 4CFE5D00
> gpg: using PGP trust model
> gpg: Good signature from "Aldrin Piri (Code Signing Key) <
> ald...@apache.org>" [full]
> gpg: binary signature, digest algorithm SHA512
> hw12203:/Users/alopresto/Workspace/scratch/release_verification/0.6.0
> alopresto
>  0s @ 20:01:38 $ gpg --verify -v nifi-0.6.0-source-release.zip.asc
> nifi-0.6.0-source-release.zip
> gpg: Signature made Mon Mar 21 19:39:05 2016 PDT using RSA key ID 4CFE5D00
> gpg: using PGP trust model
> gpg: Good signature from "Aldrin Piri (Code Signing Key) <
> ald...@apache.org>" [full]
> gpg: binary signature, digest algorithm SHA512
> hw12203:/Users/alopresto/Workspace/scratch/release_verification/0.6.0
> alopresto
>  0s @ 20:01:50 $
>
> hw12203:/Users/alopresto/Workspace/scratch/release_verification/0.6.0
> alopresto
>  0s @ 20:01:50 $ wget
> https://dist.apache.org/repos/dist/dev/nifi/nifi-0.6.0/nifi-0.6.0-bin.zip.asc
> --2016-03-21 20:03:43--
> https://dist.apache.org/repos/dist/dev/nifi/nifi-0.6.0/nifi-0.6.0-bin.zip.asc
> Resolving dist.apache.org... 209.188.14.144
> Connecting to dist.apache.org|209.188.14.144|:443... connected.
> HTTP request sent, awaiting response... 200 OK
> Length: 801 [text/plain]
> Saving to: 'nifi-0.6.0-bin.zip.asc'
>
> nifi-0.6.0-bin.zip.asc
> 100%[>]
> 801  --.-KB/sin 0s
>
> 2016-03-21 20:03:43 (27.3 MB/s) - 'nifi-0.6.0-bin.zip.asc' saved [801/801]
>
> hw12203:/Users/alopresto/Workspace/scratch/release_verification/0.6.0
> alopresto
>  0s @ 20:03:44 $ wget
> https://dist.apache.org/repos/dist/dev/nifi/nifi-0.6.0/nifi-0.6.0-bin.zip
> --2016-03-21 20:03:47--
> https://dist.apache.org/repos/dist/dev/nifi/nifi-0.6.0/nifi-0.6.0-bin.zip
> Resolving dist.apache.org... 209.188.14.144
> Connecting to dist.apache.org|209.188.14.144|:443... connected.
> HTTP request sent, awaiting response... 200 OK
> Length: 441687095 (421M) [application/octet-stream]
> Saving to: 'nifi-0.6.0-bin.zip'
>
> nifi-0.6.0-bin.zip
> 100%[>]
> 421.23M  10.7MB/sin 44s
>
> 2016-03-21 20:04:31 (9.59 MB/s) - 'nifi-0.6.0-bin.zip' saved
> [441687095/441687095]
>
> hw12203:/Users/alopresto/Workspace/scratch/release_verification/0.6.0
> alopresto
>  44s @ 20:04:32 $ gpg --verify -v nifi-0.6.0-bin.zip.asc
> gpg: assuming signed data in 'nifi-0.6.0-bin.zip'
> gpg: Signature made Mon Mar 21 19:49:21 2016 PDT using RSA key ID 4CFE5D00
> gpg: using PGP trust model
> gpg: Good signature from "Aldrin Piri (Code Signing Key) <
> ald...@apache.org>" [full]
> gpg: binary signature, digest algorithm SHA512
> hw12203:/Users/alopresto/Workspace/scratch/release_verification/0.6.0
> alopresto
>  2s @ 20:05:04 $
>
> hw12203:/Users/alopresto/Workspace/scratch/release_verification/0.6.0
> alopresto
>  2s @ 20:05:04 $ wget
> https://dist.apache.org/repos/dist/dev/nifi/nifi-0.6.0/nifi-0.6.0-bin.tar.gz
> --2016-03-21 20:06:34--
> https://dist.apache.org/repos/dist/dev/nifi/nifi-0.6.0/nifi-0.6.0-bin.tar.gz
> Resolving dist.apache.org... 209.188.14.144
> Connecting to dist.apache.org|209.188.14.144|:443... connected.
> HTTP request sent, awaiting response... 200 OK
> 

[GitHub] nifi pull request: NIFI-1620 Allow empty Content-Type in InvokeHTT...

2016-03-22 Thread pvillard31
Github user pvillard31 commented on the pull request:

https://github.com/apache/nifi/pull/272#issuecomment-199714780
  
@taftster Thanks Adam. I updated the PR regarding your comments.

