[GitHub] ijokarumawak commented on a change in pull request #3246: NIFI-5929 Support for IBM MQ multi-instance queue managers

2019-01-07 Thread GitBox
ijokarumawak commented on a change in pull request #3246: NIFI-5929 Support for 
IBM MQ multi-instance queue managers
URL: https://github.com/apache/nifi/pull/3246#discussion_r245868656
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/cf/JMSConnectionFactoryProvider.java
 ##
 @@ -185,23 +185,31 @@ public void disable() {
  * service configuration. For example, 'channel' property will correspond 
to
  * 'setChannel(..) method and 'queueManager' property will correspond to
  * setQueueManager(..) method with a single argument.
- * 
+ * 
 
 Review comment:
   Separated paragraphs can provide better readability for this long Javadoc. I'd leave the `<p>` tag as is, or even add more of them.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ijokarumawak commented on a change in pull request #3246: NIFI-5929 Support for IBM MQ multi-instance queue managers

2019-01-07 Thread GitBox
ijokarumawak commented on a change in pull request #3246: NIFI-5929 Support for 
IBM MQ multi-instance queue managers
URL: https://github.com/apache/nifi/pull/3246#discussion_r245882407
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/cf/JMSConnectionFactoryProvider.java
 ##
@@ -210,17 +218,23 @@ private void setConnectionFactoryProperties(ConfigurationContext context) {
             if (descriptor.isDynamic()) {
                 this.setProperty(propertyName, entry.getValue());
             } else {
-                if (propertyName.equals(BROKER)) {
-                    String brokerValue = context.getProperty(descriptor).evaluateAttributeExpressions().getValue();
-                    if (context.getProperty(CONNECTION_FACTORY_IMPL).evaluateAttributeExpressions().getValue().startsWith("org.apache.activemq")) {
+                if (descriptor == BROKER_URI) {
+                    String brokerValue = context.getProperty(BROKER_URI).evaluateAttributeExpressions().getValue();
+                    String connectionFactoryValue = context.getProperty(CONNECTION_FACTORY_IMPL).evaluateAttributeExpressions().getValue();
+                    if (connectionFactoryValue.startsWith("org.apache.activemq")) {
                         this.setProperty("brokerURL", brokerValue);
+                    } else if (connectionFactoryValue.startsWith("com.tibco.tibjms")) {
+                        this.setProperty("serverUrl", brokerValue);
                     } else {
+                        // Try to parse broker URI as colon separated host/port pair
                         String[] hostPort = brokerValue.split(":");
+                        // If broker URI indeed was colon separated host/port pair
                         if (hostPort.length == 2) {
                             this.setProperty("hostName", hostPort[0]);
                             this.setProperty("port", hostPort[1]);
-                        } else if (hostPort.length != 2) {
-                            this.setProperty("serverUrl", brokerValue); // for tibco
+                        } else if (connectionFactoryValue.startsWith("com.ibm.mq.jms")) {
+                            // Assuming IBM MQ style broker was specified, e.g. "myhost(1414)" and "myhost01(1414),myhost02(1414)"
+                            this.setProperty("connectionNameList", brokerValue);
 
 Review comment:
   After reading some articles and the API spec for the connectionNameList property, I'm wondering whether setting `connectionNameList` via a dynamic property works. Did you try that before working on this PR?
   
   What if both the `host` and `port` pair and `connectionNameList` are set?
   
   > Specifies the hosts to which the client will attempt to reconnect to after its connection is broken.
   
   https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_7.1.0/com.ibm.mq.javadoc.doc/WMQJMSClasses/com/ibm/mq/jms/MQConnectionFactory.html#setConnectionNameList_java.lang.String_
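For reference while reviewing, the branching introduced in the diff above can be exercised in isolation. The sketch below is a hypothetical distillation of that dispatch logic; the class and the `resolve` helper are illustrative, not part of the NiFi code:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

public class BrokerUriDispatch {

    // Mirrors the dispatch in the diff: ActiveMQ and Tibco take the broker
    // value as-is (brokerURL / serverUrl), a colon separated pair becomes
    // hostName/port, and an IBM MQ style list such as
    // "myhost01(1414),myhost02(1414)" goes to connectionNameList.
    static Map.Entry<String, String> resolve(String cfImpl, String broker) {
        if (cfImpl.startsWith("org.apache.activemq")) {
            return new SimpleEntry<>("brokerURL", broker);
        } else if (cfImpl.startsWith("com.tibco.tibjms")) {
            return new SimpleEntry<>("serverUrl", broker);
        }
        String[] hostPort = broker.split(":");
        if (hostPort.length == 2) {
            return new SimpleEntry<>("hostName/port", broker);
        } else if (cfImpl.startsWith("com.ibm.mq.jms")) {
            return new SimpleEntry<>("connectionNameList", broker);
        }
        return new SimpleEntry<>("unmapped", broker);
    }

    public static void main(String[] args) {
        // An IBM MQ multi-instance list contains no single ':' pair,
        // so it maps to connectionNameList; prints "connectionNameList"
        System.out.println(resolve("com.ibm.mq.jms.MQQueueConnectionFactory",
                "myhost01(1414),myhost02(1414)").getKey());
        // An ActiveMQ impl takes the URI as-is; prints "brokerURL"
        System.out.println(resolve("org.apache.activemq.ActiveMQConnectionFactory",
                "tcp://myhost:61616").getKey());
    }
}
```

This also makes the reviewer's question visible: a broker value like `myhost:1414` would hit the host/port branch even for an IBM MQ factory, so the IBM branch only triggers for the multi-instance form.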




[GitHub] ijokarumawak commented on a change in pull request #3246: NIFI-5929 Support for IBM MQ multi-instance queue managers

2019-01-07 Thread GitBox
ijokarumawak commented on a change in pull request #3246: NIFI-5929 Support for 
IBM MQ multi-instance queue managers
URL: https://github.com/apache/nifi/pull/3246#discussion_r245879771
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/test/java/org/apache/nifi/jms/cf/JMSConnectionFactoryProviderTest.java
 ##
@@ -101,4 +101,157 @@ public void validateGetConnectionFactoryFailureIfServiceNotConfigured() throws E
         new JMSConnectionFactoryProvider().getConnectionFactory();
     }
 
+    @Test
+    public void validWithSingleTestBroker() throws Exception {
+        TestRunner runner = TestRunners.newTestRunner(mock(Processor.class));
+
+        JMSConnectionFactoryProvider cfProvider = new JMSConnectionFactoryProvider();
+        runner.addControllerService("cfProvider", cfProvider);
+
+        String clientLib = this.getClass().getResource("/dummy-lib.jar").toURI().toString();
+
+        runner.setProperty(cfProvider, JMSConnectionFactoryProvider.BROKER_URI, "myhost:1234");
+        runner.setProperty(cfProvider, JMSConnectionFactoryProvider.CLIENT_LIB_DIR_PATH, clientLib);
+        runner.setProperty(cfProvider, JMSConnectionFactoryProvider.CONNECTION_FACTORY_IMPL,
+                "org.apache.nifi.jms.testcflib.TestConnectionFactory");
+
+        runner.assertValid(cfProvider);
 
 Review comment:
   Since JMSConnectionFactoryProvider doesn't implement the `customValidate` method, and the NonEmptyBrokerURIValidator used for validating the BROKER_URI property doesn't check the connection factory impl value, these test variations do not assert what is expected.
   
   An alternative approach would be to create a sub-class of JMSConnectionFactoryProvider, only for testing purposes, which overrides `setProperty(String propertyName, Object propertyValue)` and performs the assertion within the overridden setProperty method. This way, we can verify which property is set for each combination of broker URI and connection factory impl.
   
   What do you think?
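A minimal sketch of that test-subclass idea. `ProviderBase` below is a stand-in for JMSConnectionFactoryProvider (not reproduced here); only the overridable `setProperty` hook matters for the pattern:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for JMSConnectionFactoryProvider: the real class resolves the
// property name to a setter on the underlying ConnectionFactory.
class ProviderBase {
    protected void setProperty(String propertyName, Object propertyValue) {
        // real implementation omitted
    }
}

// Test-only subclass that records which bean property was set, so a test
// can assert on the broker URI / connection factory impl combination.
class RecordingProvider extends ProviderBase {
    final Map<String, Object> recorded = new HashMap<>();

    @Override
    protected void setProperty(String propertyName, Object propertyValue) {
        recorded.put(propertyName, propertyValue);
    }
}

public class Main {
    public static void main(String[] args) {
        RecordingProvider provider = new RecordingProvider();
        // Simulate what the provider would do for an IBM MQ multi-instance broker URI
        provider.setProperty("connectionNameList", "myhost01(1414),myhost02(1414)");

        if (!"myhost01(1414),myhost02(1414)".equals(provider.recorded.get("connectionNameList"))) {
            throw new AssertionError("connectionNameList was not recorded");
        }
        System.out.println("recorded: " + provider.recorded);
    }
}
```

In a real test the recording subclass would be configured through the NiFi TestRunner as in the diff above, and the assertion would run after the service is enabled.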




[GitHub] ijokarumawak commented on a change in pull request #3246: NIFI-5929 Support for IBM MQ multi-instance queue managers

2019-01-07 Thread GitBox
ijokarumawak commented on a change in pull request #3246: NIFI-5929 Support for 
IBM MQ multi-instance queue managers
URL: https://github.com/apache/nifi/pull/3246#discussion_r245880875
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/cf/JMSConnectionFactoryProvider.java
 ##
@@ -185,23 +185,31 @@ public void disable() {
  * service configuration. For example, 'channel' property will correspond to
  * 'setChannel(..) method and 'queueManager' property will correspond to
  * setQueueManager(..) method with a single argument.
- * 
+ * 
  * There are also few adjustments to accommodate well known brokers. For
  * example ActiveMQ ConnectionFactory accepts address of the Message Broker
  * in a form of URL while IBMs in the form of host/port pair (more common).
  * So this method will use value retrieved from the 'BROKER_URI' static
  * property 'as is' if ConnectionFactory implementation is coming from
- * ActiveMQ and for all others (for now) the 'BROKER_URI' value will be
+ * ActiveMQ or Tibco. For all others (for now) the 'BROKER_URI' value will be
  * split on ':' and the resulting pair will be used to execute
  * setHostName(..) and setPort(..) methods on the provided
- * ConnectionFactory. This may need to be maintained and adjusted to
- * accommodate other implementation of ConnectionFactory, but only for
- * URL/Host/Port issue. All other properties are set as dynamic properties
- * where user essentially provides both property name and value, The bean
- * convention is also explained in user manual for this component with links
- * pointing to documentation of various ConnectionFactories.
+ * ConnectionFactory. An exception to this if the ConnectionFactory
+ * implementation is coming from IBM MQ and multiple brokers are listed,
+ * in this case setConnectionNameList(..) method is executed.
+ * This may need to be maintained and adjusted to accommodate other
+ * implementation of ConnectionFactory, but only for URL/Host/Port issue.
+ * All other properties are set as dynamic properties where user essentially
+ * provides both property name and value, The bean convention is also
+ * explained in user manual for this component with links pointing to
+ * documentation of various ConnectionFactories.
 
 Review comment:
   Thanks for elaborating the Javadocs. They are very helpful for understanding how the same property can be used differently depending on the target MQ broker system.
   
   The sad thing is that this helpful information will not be seen by general NiFi users from the NiFi UI.
   
   I think this is a good time to add more to the 'Additional Details' page for this component. As a user, I'd like to see the following information laid out nicely on the additional doc page:
   
   - Target MQ System: i.e. ActiveMQ, IBM, Tibco, etc.
   - MQ ConnectionFactory Implementation
   - Broker URI: especially for the IBM case, covering both a single queue manager instance and multiple ones.
   
   We can link to external doc pages for each vendor-specific detail from there.
   
   What do you think?
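   A sketch of what such a table could look like (the broker URI forms come from this PR's discussion; the fully qualified class names are illustrative assumptions, to be verified against each vendor's client library):
   
   | Target MQ System | MQ ConnectionFactory Implementation | Broker URI example |
   |---|---|---|
   | ActiveMQ | org.apache.activemq.ActiveMQConnectionFactory | tcp://myhost:61616 |
   | IBM MQ (single queue manager) | com.ibm.mq.jms.MQQueueConnectionFactory | myhost:1414 |
   | IBM MQ (multi-instance) | com.ibm.mq.jms.MQQueueConnectionFactory | myhost01(1414),myhost02(1414) |
   | Tibco EMS | com.tibco.tibjms.TibjmsConnectionFactory | tcp://myhost:7222 |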




[jira] [Commented] (NIFI-5920) Processor to tag an existing S3 object

2019-01-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736673#comment-16736673
 ] 

ASF subversion and git services commented on NIFI-5920:
---

Commit a8e59e52af5144c6238230bd792d01ca6daafadf in nifi's branch 
refs/heads/master from Stephen Goodman
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=a8e59e5 ]

NIFI-5920: Tagging an object in S3

Unit tests and functionality for tagging an object in S3.

Set FlowFile attributes directly from tags retrieved from S3.

Add guard clauses to ensure evaluated properties are not blank.

This closes #3239.

Signed-off-by: Koji Kawamura 


> Processor to tag an existing S3 object
> --
>
> Key: NIFI-5920
> URL: https://issues.apache.org/jira/browse/NIFI-5920
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.9.0
>Reporter: Stephen Goodman
>Priority: Minor
> Fix For: 1.9.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> A common use case for data aggregation projects is to use AWS S3 as a data 
> staging zone. Files are uploaded to S3 buckets from various sources, and NiFi 
> is used to fetch the S3 objects and perform additional processing.
> AWS S3 lifecycle management policies allow transitioning S3 objects from 
> standard storage to Glacier storage for cost savings in long-term storage. A 
> lifecycle management rule can be set to transition an object that has a 
> particular tag. 
> Currently, the only way to tag an S3 object via NiFi is through the 
> PutS3Object processor. For the above use case, where an S3 object was 
> uploaded by a second or third party, it would be very helpful to have a 
> TagS3Object processor that can tag an existing S3 object, so that the S3 
> lifecycle management policy can be triggered after NiFi has processed the S3 object.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-5928) Use a serializable version of NiFiDataPacket in the Flink source

2019-01-07 Thread ambition (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ambition resolved NIFI-5928.

Resolution: Not A Bug

> Use a serializable version of NiFiDataPacket in the Flink source
> 
>
> Key: NIFI-5928
> URL: https://issues.apache.org/jira/browse/NIFI-5928
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.8.0
> Environment: jdk 1.8
> scala 2.11
> flink 1.6.2
> nifi 1.8 
>Reporter: ambition
>Priority: Minor
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Use a serializable version of NiFiDataPacket in the Flink source, like the 
> existing Spark receiver.





[jira] [Resolved] (NIFI-5920) Processor to tag an existing S3 object

2019-01-07 Thread Koji Kawamura (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura resolved NIFI-5920.
-
Resolution: Fixed

Thanks again [~sgoodman]! I've added your Jira account to the NiFi contributor 
role. You can assign yourself to any NiFi JIRAs now. Looking forward to seeing 
more contributions!

> Processor to tag an existing S3 object
> --
>
> Key: NIFI-5920
> URL: https://issues.apache.org/jira/browse/NIFI-5920
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.9.0
>Reporter: Stephen Goodman
>Assignee: Stephen Goodman
>Priority: Minor
> Fix For: 1.9.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> A common use case for data aggregation projects is to use AWS S3 as a data 
> staging zone. Files are uploaded to S3 buckets from various sources, and NiFi 
> is used to fetch the S3 objects and perform additional processing.
> AWS S3 lifecycle management policies allow transitioning S3 objects from 
> standard storage to Glacier storage for cost savings in long-term storage. A 
> lifecycle management rule can be set to transition an object that has a 
> particular tag. 
> Currently, the only way to tag an S3 object via NiFi is through the 
> PutS3Object processor. For the above use case, where an S3 object was 
> uploaded by a second or third party, it would be very helpful to have a 
> TagS3Object processor that can tag an existing S3 object, so that the S3 
> lifecycle management policy can be triggered after NiFi has processed the S3 object.
>  





[jira] [Assigned] (NIFI-5920) Processor to tag an existing S3 object

2019-01-07 Thread Koji Kawamura (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura reassigned NIFI-5920:
---

Assignee: Stephen Goodman

> Processor to tag an existing S3 object
> --
>
> Key: NIFI-5920
> URL: https://issues.apache.org/jira/browse/NIFI-5920
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.9.0
>Reporter: Stephen Goodman
>Assignee: Stephen Goodman
>Priority: Minor
> Fix For: 1.9.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> A common use case for data aggregation projects is to use AWS S3 as a data 
> staging zone. Files are uploaded to S3 buckets from various sources, and NiFi 
> is used to fetch the S3 objects and perform additional processing.
> AWS S3 lifecycle management policies allow transitioning S3 objects from 
> standard storage to Glacier storage for cost savings in long-term storage. A 
> lifecycle management rule can be set to transition an object that has a 
> particular tag. 
> Currently, the only way to tag an S3 object via NiFi is through the 
> PutS3Object processor. For the above use case, where an S3 object was 
> uploaded by a second or third party, it would be very helpful to have a 
> TagS3Object processor that can tag an existing S3 object, so that the S3 
> lifecycle management policy can be triggered after NiFi has processed the S3 object.
>  





[GitHub] asfgit closed pull request #3239: NIFI-5920: Unit tests and functionality for tagging an object in S3.

2019-01-07 Thread GitBox
asfgit closed pull request #3239: NIFI-5920: Unit tests and functionality for 
tagging an object in S3.
URL: https://github.com/apache/nifi/pull/3239
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/TagS3Object.java b/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/TagS3Object.java
new file mode 100644
index 00..6c9d72a143
--- /dev/null
+++ b/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/TagS3Object.java
@@ -0,0 +1,201 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.s3;
+
+import com.amazonaws.AmazonServiceException;
+import com.amazonaws.services.s3.AmazonS3;
+import com.amazonaws.services.s3.model.GetObjectTaggingRequest;
+import com.amazonaws.services.s3.model.GetObjectTaggingResult;
+import com.amazonaws.services.s3.model.ObjectTagging;
+import com.amazonaws.services.s3.model.SetObjectTaggingRequest;
+import com.amazonaws.services.s3.model.Tag;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.StringUtils;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import java.util.regex.Pattern;
+import java.util.stream.Collectors;
+
+
+@SupportsBatching
+@WritesAttributes({
+        @WritesAttribute(attribute = "s3.tag.___", description = "The tags associated with the S3 object will be " +
+                "written as part of the FlowFile attributes")})
+@SeeAlso({PutS3Object.class, FetchS3Object.class, ListS3.class})
+@Tags({"Amazon", "S3", "AWS", "Archive", "Tag"})
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Sets tags on a FlowFile within an Amazon S3 Bucket. " +
+        "If attempting to tag a file that does not exist, FlowFile is routed to success.")
+public class TagS3Object extends AbstractS3Processor {
+
+    public static final PropertyDescriptor TAG_KEY = new PropertyDescriptor.Builder()
+            .name("tag-key")
+            .displayName("Tag Key")
+            .description("The key of the tag that will be set on the S3 Object")
+            .addValidator(new StandardValidators.StringLengthValidator(1, 127))
+            .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+            .required(true)
+            .build();
+
+    public static final PropertyDescriptor TAG_VALUE = new PropertyDescriptor.Builder()
+            .name("tag-value")
+            .displayName("Tag Value")
+            .description("The value of the tag that will be set on the S3 Object")
+            .addValidator(new StandardValidators.StringLengthValidator(1, 255))
+            .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+            .required(true)
+            .build();
+
+    public static final PropertyDescriptor APPEND_TAG = new PropertyDescriptor.Builder()
+            .name("append-tag")
+            .displayName("Append Tag")
+ 

[GitHub] ijokarumawak commented on issue #3239: NIFI-5920: Unit tests and functionality for tagging an object in S3.

2019-01-07 Thread GitBox
ijokarumawak commented on issue #3239: NIFI-5920: Unit tests and functionality 
for tagging an object in S3.
URL: https://github.com/apache/nifi/pull/3239#issuecomment-452159949
 
 
   Thanks for the updates. LGTM +1. I confirmed the processor works as expected with my S3 bucket. Thanks @sbgoodm, merging to master!




[jira] [Updated] (NIFI-5887) JOLTTransformJSON advanced window JSON format validation should accept EL without double quote

2019-01-07 Thread Koji Kawamura (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-5887:

Component/s: Extensions

> JOLTTransformJSON advanced window JSON format validation should accept EL 
> without double quote
> --
>
> Key: NIFI-5887
> URL: https://issues.apache.org/jira/browse/NIFI-5887
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.7.0
>Reporter: meh
>Priority: Major
>  Labels: attribute, jolt, json
> Attachments: NIFI-5887_JOLT_with_JSON_attribute_value.xml
>
>
The JOLTTransformJSON advanced configuration window allows the user to design a Jolt 
spec using NiFi EL. It also provides a feature to set FlowFile attributes in order 
to confirm the conversion result for testing.
However, the advanced window's spec JSON format validation doesn't allow an EL 
expression without double quotes. Because of this, users cannot try a JOLT spec 
using EL which references a FlowFile attribute holding a valid JSON object. See 
details below.
It would be more user friendly if the advanced window allowed a Jolt spec even 
if it is not valid JSON. Adding a toggle switch to disable the JSON validation 
might be a solution.
> 
I have an attribute (produced by a REST service and caught by an InvokeHTTP 
processor) whose value is JSON, like this:
{code:java}
test => {"key":"value"}{code}
I then want to put it into the flow's JSON content using the JOLT processor; my 
content is something like this:
{code:java}
{ "id": 123, "user": "foo" }{code}
and my JOLT specification is this:
{code:java}
[{ "operation": "default", "spec": { "interest": "${test}" } }]{code}
The problem is that, in the JOLT advanced window with the test attribute, NiFi 
cannot put the JSON object and shows this error:
{quote}*"Error occurred during transformation"*
{quote}
and when the processor runs, this detailed error is alerted:
{quote}*"unable to unmarshal json to an object"*
{quote}

My desired result is this:
{code:java}
{ "id": 123, "user": "foo", "interest": {"key":"value"} }{code}
Is this a bug, or am I wrong?





[jira] [Commented] (NIFI-5887) unable to unmarshal json to an object

2019-01-07 Thread Koji Kawamura (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736583#comment-16736583
 ] 

Koji Kawamura commented on NIFI-5887:
-

[~mehrdad22] There are a few things that are better discussed separately on this 
JIRA:

1. Referring to a FlowFile attribute via EL within a JOLT spec, where the attribute value is JSON

I was able to do that without error. Since the attribute value is the string 
representation of a JSON object, you don't have to wrap the EL part with double 
quotes. By removing the double quotes, JoltTransformJSON works fine. The 
resulting FlowFile content is what you expected. Please check the attached 
template for details.

2. The JOLTTransformJSON advanced window JSON format validation doesn't allow EL 
without double quotes

The current validation doesn't allow:
{code}
"interest": ${test}
{code}
It has to be double quoted in order to pass the JSON format validation:
{code}
"interest": "${test}"
{code}
But if we wrap the EL with double quotes, we can't refer to a JSON formatted 
string value.

3. The JoltTransformJSON advanced window doesn't accept "F"

There is already a JIRA for this: NIFI-5238.

Based on the above, I'm going to change this JIRA's title to focus only on 
addressing the 2nd point of the above list.
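To make the working form from point 1 concrete, with the attribute value from this issue ({{test => {"key":"value"}}}), the spec that produces the expected result is the unquoted variant:
{code:java}
[{ "operation": "default", "spec": { "interest": ${test} } }]
{code}
Applied to the content {{{ "id": 123, "user": "foo" }}}, EL substitution yields the desired {{{ "id": 123, "user": "foo", "interest": {"key":"value"} }}}. Note that this unquoted spec is exactly what the advanced window's JSON format validation currently rejects, which is the 2nd point above.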

> unable to unmarshal json to an object
> -
>
> Key: NIFI-5887
> URL: https://issues.apache.org/jira/browse/NIFI-5887
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: meh
>Priority: Major
>  Labels: attribute, jolt, json
> Attachments: NIFI-5887_JOLT_with_JSON_attribute_value.xml
>
>
> I have an attribute (produced by a REST service and caught by an InvokeHTTP 
> processor) whose value is JSON, like this:
> {code:java}
> test => {"key":"value"}{code}
> I then want to put it into the flow's JSON content using the JOLT processor; my 
> content is something like this:
> {code:java}
> { "id": 123, "user": "foo" }{code}
> and my JOLT specification is this:
> {code:java}
> [{ "operation": "default", "spec": { "interest": "${test}" } }]{code}
> The problem is that, in the JOLT advanced window with the test attribute, NiFi 
> cannot put the JSON object and shows this error:
> {quote}*"Error occurred during transformation"*
> {quote}
> and when the processor runs, this detailed error is alerted:
> {quote}*"unable to unmarshal json to an object"*
> {quote}
> 
> My desired result is this:
> {code:java}
> { "id": 123, "user": "foo", "interest": {"key":"value"} }{code}
> Is this a bug, or am I wrong?





[jira] [Updated] (NIFI-5887) JOLTTransformJSON advanced window JSON format validation should accept EL without double quote

2019-01-07 Thread Koji Kawamura (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-5887:

Summary: JOLTTransformJSON advanced window JSON format validation should 
accept EL without double quote  (was: unable to unmarshal json to an object)

> JOLTTransformJSON advanced window JSON format validation should accept EL 
> without double quote
> --
>
> Key: NIFI-5887
> URL: https://issues.apache.org/jira/browse/NIFI-5887
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.7.0
>Reporter: meh
>Priority: Major
>  Labels: attribute, jolt, json
> Attachments: NIFI-5887_JOLT_with_JSON_attribute_value.xml
>
>
> I have an attribute (produced by a REST service and caught by an InvokeHTTP 
> processor) whose value is JSON, like this:
> {code:java}
> test => {"key":"value"}{code}
> I then want to put it into the flow's JSON content using the JOLT processor; my 
> content is something like this:
> {code:java}
> { "id": 123, "user": "foo" }{code}
> and my JOLT specification is this:
> {code:java}
> [{ "operation": "default", "spec": { "interest": "${test}" } }]{code}
> The problem is that, in the JOLT advanced window with the test attribute, NiFi 
> cannot put the JSON object and shows this error:
> {quote}*"Error occurred during transformation"*
> {quote}
> and when the processor runs, this detailed error is alerted:
> {quote}*"unable to unmarshal json to an object"*
> {quote}
> 
> My desired result is this:
> {code:java}
> { "id": 123, "user": "foo", "interest": {"key":"value"} }{code}
> Is this a bug, or am I wrong?





[jira] [Updated] (NIFI-5887) JOLTTransformJSON advanced window JSON format validation should accept EL without double quote

2019-01-07 Thread Koji Kawamura (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-5887:

Description: 
The JOLTTransformJSON advanced configuration window allows the user to design a Jolt 
spec using NiFi EL. It also provides a feature to set FlowFile attributes in order 
to confirm the conversion result for testing.

However, the advanced window's spec JSON format validation doesn't allow an EL 
expression without double quotes. Because of this, users cannot try a JOLT spec 
using EL which references a FlowFile attribute holding a valid JSON object. See 
details below.

It would be more user friendly if the advanced window allowed a Jolt spec even 
if it is not valid JSON. Adding a toggle switch to disable the JSON validation 
might be a solution.





I have an attribute (produced by a REST service and caught by an InvokeHTTP 
processor) whose value is JSON, like this:
{code:java}
test => {"key":"value"}{code}
I then want to put it into the flow's JSON content using the JOLT processor; my 
content is something like this:
{code:java}
{ "id": 123, "user": "foo" }{code}
and my JOLT specification is this:
{code:java}
[{ "operation": "default", "spec": { "interest": "${test}" } }]{code}
The problem is that, in the JOLT advanced window with the test attribute, NiFi 
cannot put the JSON object and shows this error:
{quote}*"Error occurred during transformation"*
{quote}
and when the processor runs, this detailed error is alerted:
{quote}*"unable to unmarshal json to an object"*
{quote}

My desired result is this:
{code:java}
{ "id": 123, "user": "foo", "interest": {"key":"value"} }{code}
Is this a bug, or am I wrong?

  was:


I have an attribute (produced by a REST service and caught by an InvokeHTTP 
processor) whose value is JSON, like this:
{code:java}
test => {"key":"value"}{code}
I then want to put it into the flow's JSON content using the JOLT processor; my 
content is something like this:
{code:java}
{ "id": 123, "user": "foo" }{code}
and my JOLT specification is this:
{code:java}
[{ "operation": "default", "spec": { "interest": "${test}" } }]{code}
The problem is that, in the JOLT advanced window with the test attribute, NiFi 
cannot put the JSON object and shows this error:
{quote}*"Error occurred during transformation"*
{quote}
and when the processor runs, this detailed error is alerted:
{quote}*"unable to unmarshal json to an object"*
{quote}

My desired result is this:
{code:java}
{ "id": 123, "user": "foo", "interest": {"key":"value"} }{code}
Is this a bug, or am I wrong?


> JOLTTransformJSON advanced window JSON format validation should accept EL 
> without double quote
> --
>
> Key: NIFI-5887
> URL: https://issues.apache.org/jira/browse/NIFI-5887
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.7.0
>Reporter: meh
>Priority: Major
>  Labels: attribute, jolt, json
> Attachments: NIFI-5887_JOLT_with_JSON_attribute_value.xml
>
>
> The JOLTTransformJSON advanced configuration window allows the user to design a Jolt 
> spec using NiFi EL. It also provides a feature to set FlowFile attributes in order 
> to confirm the conversion result for testing.
> However, the advanced window's spec JSON format validation doesn't allow an EL 
> expression without double quotes. Because of this, users cannot try a JOLT 
> spec using EL which references a FlowFile attribute holding a valid JSON 
> object. See details below.
> It would be more user friendly if the advanced window allowed a Jolt spec even 
> if it is not valid JSON. Adding a toggle switch to disable the JSON validation 
> might be a solution.
> 
> I have an attribute (produced by a REST service and captured by the InvokeHTTP 
> processor) in JSON format like this:
> {code:java}
> test => {"key":"value"}{code}
> I then want to put it into the FlowFile's JSON content using the JOLT processor; 
> my content is something like this:
> {code:java}
> { "id": 123, "user": "foo" }{code}
> and my JOLT specification is this:
> {code:java}
> [{ "operation": "default", "spec": { "interest": "${test}" } }]{code}
> The problem is that, in the JOLT advanced window, NiFi cannot substitute the 
> JSON object from the test attribute and shows this error:
> {quote}*"Error occurred during transformation"*
> {quote}
> and when the processor runs, this more detailed error is reported:
> {quote}*"unable to unmarshal json to an object"*
> {quote}
> 
> My desired result is this:
> {code:java}
> { "id": 123, "user": "foo", "interest": {"key":"value"} }{code}
> Is this a bug, or am I doing something wrong?
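To make the validation point above concrete, here is a minimal sketch (plain Java string substitution as a hypothetical stand-in for NiFi's actual EL engine) of why a quoted `${test}` reference in the spec produces malformed JSON after substitution, while the unquoted form yields exactly the desired object:

```java
public class JoltSpecSubstitution {

    // Hypothetical stand-in for NiFi's EL substitution of ${test} into the spec.
    static String substitute(String spec, String attributeValue) {
        return spec.replace("${test}", attributeValue);
    }

    public static void main(String[] args) {
        String attr = "{\"key\":\"value\"}";

        // EL reference wrapped in quotes, as the JSON validation requires:
        // the object lands inside a string literal, producing malformed JSON
        // with unescaped inner quotes.
        String quoted = substitute("{ \"interest\": \"${test}\" }", attr);
        System.out.println(quoted);   // { "interest": "{"key":"value"}" }

        // EL reference without quotes (rejected by the validation), yet the
        // substituted result is the desired, well-formed JSON.
        String unquoted = substitute("{ \"interest\": ${test} }", attr);
        System.out.println(unquoted); // { "interest": {"key":"value"} }
    }
}
```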



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5887) JOLTTransformJSON advanced window JSON format validation should accept EL without double quote

2019-01-07 Thread Koji Kawamura (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-5887:

Description: 


I have an attribute (produced by a REST service and captured by the InvokeHTTP 
processor) in JSON format like this:
{code:java}
test => {"key":"value"}{code}
I then want to put it into the FlowFile's JSON content using the JOLT processor; 
my content is something like this:
{code:java}
{ "id": 123, "user": "foo" }{code}
and my JOLT specification is this:
{code:java}
[{ "operation": "default", "spec": { "interest": "${test}" } }]{code}
The problem is that, in the JOLT advanced window, NiFi cannot substitute the 
JSON object from the test attribute and shows this error:
{quote}*"Error occurred during transformation"*
{quote}
and when the processor runs, this more detailed error is reported:
{quote}*"unable to unmarshal json to an object"*
{quote}

My desired result is this:
{code:java}
{ "id": 123, "user": "foo", "interest": {"key":"value"} }{code}
Is this a bug, or am I doing something wrong?

  was:
I have an attribute (produced by a REST service and captured by the InvokeHTTP 
processor) in JSON format like this:
{code:java}
test => {"key":"value"}{code}
I then want to put it into the FlowFile's JSON content using the JOLT processor; 
my content is something like this:
{code:java}
{ "id": 123, "user": "foo" }{code}
and my JOLT specification is this:
{code:java}
[{ "operation": "default", "spec": { "interest": "${test}" } }]{code}
The problem is that, in the JOLT advanced window, NiFi cannot substitute the 
JSON object from the test attribute and shows this error:
{quote}*"Error occurred during transformation"*
{quote}
and when the processor runs, this more detailed error is reported:
{quote}*"unable to unmarshal json to an object"*
{quote}

My desired result is this:
{code:java}
{ "id": 123, "user": "foo", "interest": {"key":"value"} }{code}
Is this a bug, or am I doing something wrong?


> JOLTTransformJSON advanced window JSON format validation should accept EL 
> without double quote
> --
>
> Key: NIFI-5887
> URL: https://issues.apache.org/jira/browse/NIFI-5887
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.7.0
>Reporter: meh
>Priority: Major
>  Labels: attribute, jolt, json
> Attachments: NIFI-5887_JOLT_with_JSON_attribute_value.xml
>
>
> I have an attribute (produced by a REST service and captured by the InvokeHTTP 
> processor) in JSON format like this:
> {code:java}
> test => {"key":"value"}{code}
> I then want to put it into the FlowFile's JSON content using the JOLT processor; 
> my content is something like this:
> {code:java}
> { "id": 123, "user": "foo" }{code}
> and my JOLT specification is this:
> {code:java}
> [{ "operation": "default", "spec": { "interest": "${test}" } }]{code}
> The problem is that, in the JOLT advanced window, NiFi cannot substitute the 
> JSON object from the test attribute and shows this error:
> {quote}*"Error occurred during transformation"*
> {quote}
> and when the processor runs, this more detailed error is reported:
> {quote}*"unable to unmarshal json to an object"*
> {quote}
> 
> My desired result is this:
> {code:java}
> { "id": 123, "user": "foo", "interest": {"key":"value"} }{code}
> Is this a bug, or am I doing something wrong?





[jira] [Updated] (NIFI-5887) unable to unmarshal json to an object

2019-01-07 Thread Koji Kawamura (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-5887:

Issue Type: Improvement  (was: Bug)

> unable to unmarshal json to an object
> -
>
> Key: NIFI-5887
> URL: https://issues.apache.org/jira/browse/NIFI-5887
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.7.0
>Reporter: meh
>Priority: Major
>  Labels: attribute, jolt, json
> Attachments: NIFI-5887_JOLT_with_JSON_attribute_value.xml
>
>
> I have an attribute (produced by a REST service and captured by the InvokeHTTP 
> processor) in JSON format like this:
> {code:java}
> test => {"key":"value"}{code}
> I then want to put it into the FlowFile's JSON content using the JOLT processor; 
> my content is something like this:
> {code:java}
> { "id": 123, "user": "foo" }{code}
> and my JOLT specification is this:
> {code:java}
> [{ "operation": "default", "spec": { "interest": "${test}" } }]{code}
> The problem is that, in the JOLT advanced window, NiFi cannot substitute the 
> JSON object from the test attribute and shows this error:
> {quote}*"Error occurred during transformation"*
> {quote}
> and when the processor runs, this more detailed error is reported:
> {quote}*"unable to unmarshal json to an object"*
> {quote}
> 
> My desired result is this:
> {code:java}
> { "id": 123, "user": "foo", "interest": {"key":"value"} }{code}
> Is this a bug, or am I doing something wrong?





[jira] [Updated] (NIFI-5887) unable to unmarshal json to an object

2019-01-07 Thread Koji Kawamura (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Kawamura updated NIFI-5887:

Attachment: NIFI-5887_JOLT_with_JSON_attribute_value.xml

> unable to unmarshal json to an object
> -
>
> Key: NIFI-5887
> URL: https://issues.apache.org/jira/browse/NIFI-5887
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: meh
>Priority: Major
>  Labels: attribute, jolt, json
> Attachments: NIFI-5887_JOLT_with_JSON_attribute_value.xml
>
>
> I have an attribute (produced by a REST service and captured by the InvokeHTTP 
> processor) in JSON format like this:
> {code:java}
> test => {"key":"value"}{code}
> I then want to put it into the FlowFile's JSON content using the JOLT processor; 
> my content is something like this:
> {code:java}
> { "id": 123, "user": "foo" }{code}
> and my JOLT specification is this:
> {code:java}
> [{ "operation": "default", "spec": { "interest": "${test}" } }]{code}
> The problem is that, in the JOLT advanced window, NiFi cannot substitute the 
> JSON object from the test attribute and shows this error:
> {quote}*"Error occurred during transformation"*
> {quote}
> and when the processor runs, this more detailed error is reported:
> {quote}*"unable to unmarshal json to an object"*
> {quote}
> 
> My desired result is this:
> {code:java}
> { "id": 123, "user": "foo", "interest": {"key":"value"} }{code}
> Is this a bug, or am I doing something wrong?





[GitHub] ambition119 closed pull request #3244: NIFI-5928 Use NiFiDataPacket in the Flink source

2019-01-07 Thread GitBox
ambition119 closed pull request #3244: NIFI-5928 Use  NiFiDataPacket in the 
Flink source
URL: https://github.com/apache/nifi/pull/3244
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/nifi-external/nifi-flink-source/pom.xml 
b/nifi-external/nifi-flink-source/pom.xml
new file mode 100644
index 00..d2e356f64e
--- /dev/null
+++ b/nifi-external/nifi-flink-source/pom.xml
@@ -0,0 +1,45 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi-external</artifactId>
+        <version>1.9.0-SNAPSHOT</version>
+    </parent>
+    <groupId>org.apache.nifi</groupId>
+    <artifactId>nifi-flink-source</artifactId>
+
+    <properties>
+        <flink.version>1.7.1</flink.version>
+        <scala.version>2.12</scala.version>
+    </properties>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.flink</groupId>
+            <artifactId>flink-streaming-java_${scala.version}</artifactId>
+            <version>${flink.version}</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-site-to-site-client</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+    </dependencies>
+</project>
diff --git 
a/nifi-external/nifi-flink-source/src/main/java/org/apache/nifi/flink/NiFiDataPacket.java
 
b/nifi-external/nifi-flink-source/src/main/java/org/apache/nifi/flink/NiFiDataPacket.java
new file mode 100644
index 00..d772205bc6
--- /dev/null
+++ 
b/nifi-external/nifi-flink-source/src/main/java/org/apache/nifi/flink/NiFiDataPacket.java
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.flink;
+
+import java.util.Map;
+
+/**
+ * <p>
+ * The NiFiDataPacket provides a packaging around a NiFi FlowFile. It wraps both
+ * a FlowFile's content and its attributes so that they can be processed by Flink.
+ * </p>
+ */
+public interface NiFiDataPacket {
+
+/**
+ * @return the contents of a NiFi FlowFile
+ */
+byte[] getContent();
+
+/**
+ * @return a Map of attributes that are associated with the NiFi FlowFile
+ */
+Map<String, String> getAttributes();
+}
diff --git 
a/nifi-external/nifi-flink-source/src/main/java/org/apache/nifi/flink/NiFiSource.java
 
b/nifi-external/nifi-flink-source/src/main/java/org/apache/nifi/flink/NiFiSource.java
new file mode 100644
index 00..6658afcb93
--- /dev/null
+++ 
b/nifi-external/nifi-flink-source/src/main/java/org/apache/nifi/flink/NiFiSource.java
@@ -0,0 +1,196 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.flink;
+
+import org.apache.flink.api.common.functions.StoppableFunction;
+import org.apache.flink.configuration.Configuration;
+import 
org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;
+import org.apache.nifi.remote.Transaction;
+import org.apache.nifi.remote.TransferDirection;
+import org.apache.nifi.remote.client.SiteToSiteClient;
+import org.apache.nifi.remote.client.SiteToSiteClientConfig;
+import org.apache.nifi.remote.protocol.DataPacket;
+import org.apache.nifi.stream.io.StreamUtils;
+
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * 
+ * The NiFiSource is a Reliable Receiver that provides a way to
+ * pull data from Apache NiFi so that it can be processed by Flink.
+ * The NiFi Receiver co

[GitHub] ambition119 commented on issue #3244: NIFI-5928 Use NiFiDataPacket in the Flink source

2019-01-07 Thread GitBox
ambition119 commented on issue #3244: NIFI-5928 Use  NiFiDataPacket in the 
Flink source
URL: https://github.com/apache/nifi/pull/3244#issuecomment-452140487
 
 
   > @ambition119 currently the flink nifi connectors are part of flink, so 
shouldn't this PR be against the flink repo?
   > 
   > 
https://github.com/apache/flink/tree/master/flink-connectors/flink-connector-nifi
   
   ok


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] kei-miyauchi opened a new pull request #3249: NIFI-5841 Fix memory leak of PutHive3Streaming

2019-01-07 Thread GitBox
kei-miyauchi opened a new pull request #3249: NIFI-5841 Fix memory leak of 
PutHive3Streaming
URL: https://github.com/apache/nifi/pull/3249
 
 
   PutHive3Streaming must not call `ShutdownHookManager.addShutdownHook` 
because Hive 3.1 already adds a shutdown hook within connect(). See 
[HiveStreamingConnection.java](https://github.com/apache/hive/blob/bcc7df95824831a8d2f1524e4048dfc23ab98c19/streaming/src/java/org/apache/hive/streaming/HiveStreamingConnection.java#L335).
   The redundant shutdown hook registered for each connection causes a memory leak.




[GitHub] erichanson5 opened a new pull request #3248: NIFI-6311: Added ability to set Transaction Isolation Level on Database connections for QueryDatabaseTable processor

2019-01-07 Thread GitBox
erichanson5 opened a new pull request #3248: NIFI-6311: Added ability to set 
Transaction Isolation Level on Database connections for QueryDatabaseTable 
processor
URL: https://github.com/apache/nifi/pull/3248
 
 
   NIFI-6311: Added ability to set Transaction Isolation Level on Database 
connections for QueryDatabaseTable processor
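For reviewers' reference, the standard JDBC hook for this is `Connection.setTransactionIsolation`. A minimal sketch follows; the property-value-to-constant mapping and method names here are illustrative, not the PR's actual code:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Map;

public class IsolationLevelMapping {

    // Illustrative mapping from a processor property value to the
    // corresponding standard JDBC constant.
    static final Map<String, Integer> LEVELS = Map.of(
            "TRANSACTION_NONE", Connection.TRANSACTION_NONE,
            "TRANSACTION_READ_UNCOMMITTED", Connection.TRANSACTION_READ_UNCOMMITTED,
            "TRANSACTION_READ_COMMITTED", Connection.TRANSACTION_READ_COMMITTED,
            "TRANSACTION_REPEATABLE_READ", Connection.TRANSACTION_REPEATABLE_READ,
            "TRANSACTION_SERIALIZABLE", Connection.TRANSACTION_SERIALIZABLE);

    // Apply the configured level to a freshly obtained connection.
    static void applyIsolation(Connection conn, String level) throws SQLException {
        Integer jdbcLevel = LEVELS.get(level);
        if (jdbcLevel == null) {
            throw new IllegalArgumentException("Unknown isolation level: " + level);
        }
        conn.setTransactionIsolation(jdbcLevel); // standard JDBC call
    }

    public static void main(String[] args) {
        System.out.println("Supported levels: " + LEVELS.keySet());
    }
}
```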




[jira] [Commented] (NIFI-5934) NiFi privilege - Allow modify but not operate component

2019-01-07 Thread Andy LoPresto (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736190#comment-16736190
 ] 

Andy LoPresto commented on NIFI-5934:
-

There is some context to this request in the discussion 
[here|https://community.hortonworks.com/questions/231143/nifi-privilege-allow-modify-but-not-operate-compon.html?childToView=231308#comment-231308].
 I'm not sure that this level of separation makes sense. I understand the 
desire to separate roles between configuration and operation, but the way that 
this is implemented in the back-end may make providing this challenging. I 
would not expect to see this in a 1.x release. 

> NiFi privilege - Allow modify but not operate component
> ---
>
> Key: NIFI-5934
> URL: https://issues.apache.org/jira/browse/NIFI-5934
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core UI
>Affects Versions: 1.8.0
>Reporter: Yahya
>Priority: Major
>
> Create a NiFi privilege to modify a component only, without the ability to 
> operate it. This is needed to segregate duties where one user will 
> create/modify the components/flow and another user will run them. 
> Using the "Modify component" privilege, the user is able to operate the 
> component as well, even if the "Operate" privilege is removed.
>  
> The "Modify" privilege should not include the "Operate" privilege; both 
> privileges should be selected if you want to modify and operate.





[jira] [Updated] (NIFI-5935) Handle Error during Ldap User/Group Sync

2019-01-07 Thread Matt Gilman (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-5935:
--
Status: Patch Available  (was: Open)

> Handle Error during Ldap User/Group Sync 
> -
>
> Key: NIFI-5935
> URL: https://issues.apache.org/jira/browse/NIFI-5935
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Need to address error handling in the background thread that performs user/group 
> sync. If an error occurs, it's possible that subsequent syncs will not be performed.
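The failure mode described here is a well-known property of `ScheduledExecutorService`: an uncaught exception from a `scheduleWithFixedDelay` task silently cancels all subsequent runs. A minimal sketch of the fix (hypothetical names, not NiFi's actual code), catching at the top of the task so the schedule survives:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SafeLdapSync {

    /** Runs the given sync task on a fixed delay, surviving task failures. */
    static ScheduledExecutorService scheduleResilient(Runnable syncTask, long delayMs) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Without the try/catch, the first exception would silently cancel
        // every subsequent execution of this fixed-delay task.
        scheduler.scheduleWithFixedDelay(() -> {
            try {
                syncTask.run();
            } catch (Throwable t) {
                System.err.println("sync failed, retrying next interval: " + t);
            }
        }, 0, delayMs, TimeUnit.MILLISECONDS);
        return scheduler;
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger attempts = new AtomicInteger();
        ScheduledExecutorService s = scheduleResilient(() -> {
            attempts.incrementAndGet();
            throw new RuntimeException("LDAP unavailable"); // every run fails
        }, 10);
        Thread.sleep(200);
        s.shutdownNow();
        // Later syncs still ran despite every run throwing.
        System.out.println("attempts: " + attempts.get());
    }
}
```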





[GitHub] mcgilman opened a new pull request #3247: NIFI-5935: Handle error in LDAP sync background thread

2019-01-07 Thread GitBox
mcgilman opened a new pull request #3247: NIFI-5935: Handle error in LDAP sync 
background thread
URL: https://github.com/apache/nifi/pull/3247
 
 
   NIFI-5935:
   - Ensuring exceptions are handled in the ldap user/group sync background 
thread.
   - Adding additional logging around what users/groups were discovered.




[jira] [Created] (NIFI-5936) MockProcessSession remove() does not report a "DROP" provenance event

2019-01-07 Thread Joseph Percivall (JIRA)
Joseph Percivall created NIFI-5936:
--

 Summary: MockProcessSession remove() does not report a "DROP" 
provenance event
 Key: NIFI-5936
 URL: https://issues.apache.org/jira/browse/NIFI-5936
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Joseph Percivall


The StandardProcessSession remove method emits a "DROPPED" provenance event[1] 
whereas the MockProcessSession does not[2]. MockProcessSession should mimic 
StandardProcessSession as closely as possible. 


[1] 
https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/StandardProcessSession.java#L2002
[2] 
https://github.com/apache/nifi/blob/master/nifi-mock/src/main/java/org/apache/nifi/util/MockProcessSession.java#L620
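A toy sketch of the parity being requested (hypothetical classes, not the NiFi API): `remove()` should record a DROP provenance event the way the standard session does:

```java
import java.util.ArrayList;
import java.util.List;

public class MockSessionSketch {

    /** Toy stand-in for a process session's provenance reporter. */
    static class ProvenanceEvents {
        final List<String> events = new ArrayList<>();
        void reportDrop(String flowFileId) {
            events.add("DROP:" + flowFileId);
        }
    }

    /** Toy stand-in for a mock process session. */
    static class Session {
        final ProvenanceEvents provenance = new ProvenanceEvents();
        final List<String> flowFiles = new ArrayList<>();

        void remove(String flowFileId) {
            flowFiles.remove(flowFileId);
            // The parity fix: mirror StandardProcessSession by emitting a
            // DROP provenance event whenever a FlowFile is removed.
            provenance.reportDrop(flowFileId);
        }
    }

    public static void main(String[] args) {
        Session session = new Session();
        session.flowFiles.add("ff-1");
        session.remove("ff-1");
        System.out.println(session.provenance.events); // [DROP:ff-1]
    }
}
```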





[GitHub] asfgit closed pull request #3207: NIFI-5879: Fixed bug in FileSystemRepository that can occur if an Inp…

2019-01-07 Thread GitBox
asfgit closed pull request #3207: NIFI-5879: Fixed bug in FileSystemRepository 
that can occur if an Inp…
URL: https://github.com/apache/nifi/pull/3207
 
 
   


diff --git 
a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/FileSystemRepository.java
 
b/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/FileSystemRepository.java
index c041f5c91b..125cd500e8 100644
--- 
a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/FileSystemRepository.java
+++ 
b/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/FileSystemRepository.java
@@ -864,9 +864,16 @@ public InputStream read(final ContentClaim claim) throws 
IOException {
 
 }
 
-// see javadocs for claim.getLength() as to why we do this.
+// A claim length of -1 indicates that the claim is still being 
written to and we don't know
+// the length. In this case, we don't limit the Input Stream. If the 
Length has been populated, though,
+// it is possible that the Length could then be extended. However, we 
do want to avoid ever allowing the
+// stream to read past the end of the Content Claim. To accomplish 
this, we use a LimitedInputStream but
+// provide a LongSupplier for the length instead of a Long value. this 
allows us to continue reading until
+// we get to the end of the Claim, even if the Claim grows. This may 
happen, for instance, if we obtain an
+// InputStream for this claim, then read from it, write more to the 
claim, and then attempt to read again. In
+// such a case, since we have written to that same Claim, we should 
still be able to read those bytes.
 if (claim.getLength() >= 0) {
-return new LimitedInputStream(fis, claim.getLength());
+return new LimitedInputStream(fis, claim::getLength);
 } else {
 return fis;
 }
diff --git 
a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/StandardProcessSession.java
 
b/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/StandardProcessSession.java
index 4354dc416b..cc3ac19905 100644
--- 
a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/StandardProcessSession.java
+++ 
b/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/StandardProcessSession.java
@@ -2267,7 +2267,9 @@ public InputStream read(FlowFile source) {
 final StandardRepositoryRecord record = getRecord(source);
 
 try {
-ensureNotAppending(record.getCurrentClaim());
+final ContentClaim currentClaim = record.getCurrentClaim();
+ensureNotAppending(currentClaim);
+claimCache.flush(currentClaim);
 } catch (final IOException e) {
 throw new FlowFileAccessException("Failed to access ContentClaim 
for " + source.toString(), e);
 }
diff --git 
a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/io/LimitedInputStream.java
 
b/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/io/LimitedInputStream.java
index 74597ae51e..7c32cc8c08 100644
--- 
a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/io/LimitedInputStream.java
+++ 
b/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/io/LimitedInputStream.java
@@ -18,21 +18,36 @@
 
 import java.io.IOException;
 import java.io.InputStream;
+import java.util.Objects;
+import java.util.function.LongSupplier;
 
 public class LimitedInputStream extends InputStream {
 
 private final InputStream in;
-private long limit;
+private final long limit;
+private final LongSupplier limitSupplier;
 private long bytesRead = 0;
+private long markOffset = -1L;
+
+public LimitedInputStream(final InputStream in, final LongSupplier 
limitSupplier) {
+this.in = in;
+this.limitSupplier = Objects.requireNonNull(limitSupplier);
+this.limit = -1;
+}
 
 public LimitedInputStream(final Input
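The key idea in the diff above is replacing a fixed `long` limit with a `LongSupplier` that is re-read on every call, so the readable window can grow as the Content Claim grows. A self-contained sketch of just that mechanism (simplified names, not the NiFi class):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.LongSupplier;

/**
 * Minimal sketch of a supplier-based limit: consulting the supplier on each
 * read lets the stream resume once the underlying claim has grown.
 */
class GrowableLimitStream extends InputStream {
    private final InputStream in;
    private final LongSupplier limitSupplier;
    private long bytesRead = 0;

    GrowableLimitStream(InputStream in, LongSupplier limitSupplier) {
        this.in = in;
        this.limitSupplier = limitSupplier;
    }

    @Override
    public int read() throws IOException {
        if (bytesRead >= limitSupplier.getAsLong()) {
            return -1; // at the current end of the claim
        }
        int b = in.read();
        if (b >= 0) {
            bytesRead++;
        }
        return b;
    }
}

public class GrowableLimitDemo {
    public static void main(String[] args) throws IOException {
        byte[] data = "abcdef".getBytes();
        AtomicLong claimLength = new AtomicLong(3);
        GrowableLimitStream s =
                new GrowableLimitStream(new ByteArrayInputStream(data), claimLength::get);

        // Reads stop at the current claim length of 3...
        while (s.read() != -1) { }

        // ...but resume after the claim grows, unlike a fixed long limit.
        claimLength.set(6);
        StringBuilder rest = new StringBuilder();
        int b;
        while ((b = s.read()) != -1) {
            rest.append((char) b);
        }
        System.out.println(rest); // prints "def"
    }
}
```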

[jira] [Commented] (NIFI-5879) ContentNotFoundException thrown if a FlowFile's content claim is read, then written to, then read again, within the same ProcessSession

2019-01-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736004#comment-16736004
 ] 

ASF subversion and git services commented on NIFI-5879:
---

Commit cf41c10546d940aa86d0287bbeb2cdaf4a6c8a2a in nifi's branch 
refs/heads/master from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=cf41c10 ]

NIFI-5879: Fixed bug in FileSystemRepository that can occur if an InputStream 
is obtained, then more data is written to the Content Claim - the InputStream 
would end before allowing the sequential data to be read. Also fixed bugs in 
LimitedInputStream related to available(), mark(), and reset() and the 
corresponding unit tests. Additionally, found that one call to 
StandardProcessSession.read() was not properly flushing the output of any 
Content Claim that has been written to before attempting to read it.

Signed-off-by: Matthew Burgess 

This closes #3207


> ContentNotFoundException thrown if a FlowFile's content claim is read, then 
> written to, then read again, within the same ProcessSession
> ---
>
> Key: NIFI-5879
> URL: https://issues.apache.org/jira/browse/NIFI-5879
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following Processor can be used to replicate the issue.
> If a processor reads content, then attempts to write to the content, then 
> read what was just written, a ContentNotFoundException will be thrown.
>  
> /*
>  * Licensed to the Apache Software Foundation (ASF) under one or more
>  * contributor license agreements. See the NOTICE file distributed with
>  * this work for additional information regarding copyright ownership.
>  * The ASF licenses this file to You under the Apache License, Version 2.0
>  * (the "License"); you may not use this file except in compliance with
>  * the License. You may obtain a copy of the License at
>  *
>  * http://www.apache.org/licenses/LICENSE-2.0
>  *
>  * Unless required by applicable law or agreed to in writing, software
>  * distributed under the License is distributed on an "AS IS" BASIS,
>  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>  * See the License for the specific language governing permissions and
>  * limitations under the License.
>  */
> package org.apache.nifi.processors.standard;
> import org.apache.nifi.components.PropertyDescriptor;
> import org.apache.nifi.components.PropertyDescriptor.Builder;
> import org.apache.nifi.flowfile.FlowFile;
> import org.apache.nifi.processor.AbstractProcessor;
> import org.apache.nifi.processor.ProcessContext;
> import org.apache.nifi.processor.ProcessSession;
> import org.apache.nifi.processor.Relationship;
> import org.apache.nifi.processor.exception.ProcessException;
> import org.apache.nifi.stream.io.StreamUtils;
> import java.io.IOException;
> import java.io.InputStream;
> import java.util.ArrayList;
> import java.util.Collections;
> import java.util.List;
> import java.util.Set;
> import static org.apache.nifi.expression.ExpressionLanguageScope.NONE;
> import static 
> org.apache.nifi.processor.util.StandardValidators.POSITIVE_INTEGER_VALIDATOR;
> public class ReplicateWeirdness extends AbstractProcessor {
>  static final PropertyDescriptor CLONE_ITERATIONS = new Builder()
>  .name("Iterations")
>  .displayName("Iterations")
>  .description("Number of Iterations")
>  .required(true)
>  .addValidator(POSITIVE_INTEGER_VALIDATOR)
>  .expressionLanguageSupported(NONE)
>  .defaultValue("1")
>  .build();
>  static final PropertyDescriptor WRITE_ITERATIONS = new Builder()
>  .name("Write Iterations")
>  .displayName("Write Iterations")
>  .description("Write Iterations")
>  .required(true)
>  .addValidator(POSITIVE_INTEGER_VALIDATOR)
>  .expressionLanguageSupported(NONE)
>  .defaultValue("2")
>  .build();
>  static final PropertyDescriptor READ_FIRST = new Builder()
>  .name("Read First")
>  .displayName("Read First")
>  .description("Read First")
>  .required(true)
>  .allowableValues("true", "false")
>  .expressionLanguageSupported(NONE)
>  .defaultValue("false")
>  .build();
>  static final Relationship REL_SUCCESS = new Relationship.Builder()
>  .name("success")
>  .build();
>  @Override
>  public Set<Relationship> getRelationships() {
>  return Collections.singleton(REL_SUCCESS);
>  }
>  @Override
>  protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
>  final List<PropertyDescriptor> properties = new ArrayList<>();
>  properties.add(CLONE_ITERATIONS);
>  properties.add(WRITE_ITERATIONS);
>  properties.add(READ_FIRST);
>  return properties;
>  }
>  @Over

[jira] [Updated] (NIFI-5879) ContentNotFoundException thrown if a FlowFile's content claim is read, then written to, then read again, within the same ProcessSession

2019-01-07 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5879:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> ContentNotFoundException thrown if a FlowFile's content claim is read, then 
> written to, then read again, within the same ProcessSession
> ---
>
> Key: NIFI-5879
> URL: https://issues.apache.org/jira/browse/NIFI-5879
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.9.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The following Processor can be used to replicate the issue.
> If a processor reads content, then attempts to write to the content, then 
> read what was just written, a ContentNotFoundException will be thrown.
>  
> /*
>  * Licensed to the Apache Software Foundation (ASF) under one or more
>  * contributor license agreements. See the NOTICE file distributed with
>  * this work for additional information regarding copyright ownership.
>  * The ASF licenses this file to You under the Apache License, Version 2.0
>  * (the "License"); you may not use this file except in compliance with
>  * the License. You may obtain a copy of the License at
>  *
>  * http://www.apache.org/licenses/LICENSE-2.0
>  *
>  * Unless required by applicable law or agreed to in writing, software
>  * distributed under the License is distributed on an "AS IS" BASIS,
>  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>  * See the License for the specific language governing permissions and
>  * limitations under the License.
>  */
> package org.apache.nifi.processors.standard;
> import org.apache.nifi.components.PropertyDescriptor;
> import org.apache.nifi.components.PropertyDescriptor.Builder;
> import org.apache.nifi.flowfile.FlowFile;
> import org.apache.nifi.processor.AbstractProcessor;
> import org.apache.nifi.processor.ProcessContext;
> import org.apache.nifi.processor.ProcessSession;
> import org.apache.nifi.processor.Relationship;
> import org.apache.nifi.processor.exception.ProcessException;
> import org.apache.nifi.stream.io.StreamUtils;
> import java.io.IOException;
> import java.io.InputStream;
> import java.util.ArrayList;
> import java.util.Collections;
> import java.util.List;
> import java.util.Set;
> import static org.apache.nifi.expression.ExpressionLanguageScope.NONE;
> import static 
> org.apache.nifi.processor.util.StandardValidators.POSITIVE_INTEGER_VALIDATOR;
> public class ReplicateWeirdness extends AbstractProcessor {
>  static final PropertyDescriptor CLONE_ITERATIONS = new Builder()
>  .name("Iterations")
>  .displayName("Iterations")
>  .description("Number of Iterations")
>  .required(true)
>  .addValidator(POSITIVE_INTEGER_VALIDATOR)
>  .expressionLanguageSupported(NONE)
>  .defaultValue("1")
>  .build();
>  static final PropertyDescriptor WRITE_ITERATIONS = new Builder()
>  .name("Write Iterations")
>  .displayName("Write Iterations")
>  .description("Write Iterations")
>  .required(true)
>  .addValidator(POSITIVE_INTEGER_VALIDATOR)
>  .expressionLanguageSupported(NONE)
>  .defaultValue("2")
>  .build();
>  static final PropertyDescriptor READ_FIRST = new Builder()
>  .name("Read First")
>  .displayName("Read First")
>  .description("Read First")
>  .required(true)
>  .allowableValues("true", "false")
>  .expressionLanguageSupported(NONE)
>  .defaultValue("false")
>  .build();
>  static final Relationship REL_SUCCESS = new Relationship.Builder()
>  .name("success")
>  .build();
>  @Override
>  public Set<Relationship> getRelationships() {
>  return Collections.singleton(REL_SUCCESS);
>  }
>  @Override
>  protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
>  final List<PropertyDescriptor> properties = new ArrayList<>();
>  properties.add(CLONE_ITERATIONS);
>  properties.add(WRITE_ITERATIONS);
>  properties.add(READ_FIRST);
>  return properties;
>  }
>  @Override
>  public void onTrigger(final ProcessContext context, final ProcessSession 
> session) throws ProcessException {
>  FlowFile original = session.get();
>  if (original == null) {
>  return;
>  }
>  try (final InputStream in = session.read(original)) {
>  final long originalLength = countBytes(in);
>  getLogger().info("Original FlowFile is " + originalLength + " bytes");
>  } catch (final IOException e) {
>  throw new ProcessException(e);
>  }
>  final int cloneIterations = 
> context.getProperty(CLONE_ITERATIONS).asInteger();
>  final int writeIterations = 
> context.getProperty(WRITE_ITERATIONS).asInteger();
>  final boolean readFirst = context.getProperty(READ_FIRST).asBoolean();
>  for (int i=0; i < cloneIterations; i++) {
>  FlowFile clo

[GitHub] mattyb149 commented on issue #3207: NIFI-5879: Fixed bug in FileSystemRepository that can occur if an Inp…

2019-01-07 Thread GitBox
mattyb149 commented on issue #3207: NIFI-5879: Fixed bug in 
FileSystemRepository that can occur if an Inp…
URL: https://github.com/apache/nifi/pull/3207#issuecomment-451985507
 
 
   +1 LGTM, ran build with unit tests. Thanks for the improvement! Merging to 
master


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NIFI-5854) Enhance time unit features

2019-01-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735995#comment-16735995
 ] 

ASF subversion and git services commented on NIFI-5854:
---

Commit b59fa5af1f3232581e1b3903e3e2f408d9daa323 in nifi's branch 
refs/heads/master from Andy LoPresto
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=b59fa5a ]

NIFI-5854 Added skeleton logic to convert decimal time units.
Added helper methods.
Added unit tests.

NIFI-5854 [WIP] Cleaned up logic.
Resolved failing unit tests due to error message change.

NIFI-5854 [WIP] All helper method unit tests pass.

NIFI-5854 [WIP] FormatUtils#getPreciseTimeDuration() now handles all tested 
inputs correctly.
Added unit tests.

NIFI-5854 [WIP] FormatUtils#getTimeDuration() still using long.
Added unit tests.
Renamed existing unit tests to reflect method under test.

NIFI-5854 FormatUtils#getTimeDuration() returns long but now accepts decimal 
inputs.
Added @Deprecation warnings (will update callers where possible).
All unit tests pass.

NIFI-5854 Fixed unit tests (ran in IDE but not Maven) due to int overflows.
Fixed checkstyle issues.

NIFI-5854 Fixed typo in Javadoc.

NIFI-5854 Fixed typo in Javadoc.

Signed-off-by: Matthew Burgess 

This closes #3193


> Enhance time unit features
> --
>
> Key: NIFI-5854
> URL: https://issues.apache.org/jira/browse/NIFI-5854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.8.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: parsing, time, units
>
> There is some ambiguity with time units (specifically around processor 
> properties). Two features which I think should be added:
> * Currently only whole numbers are parsed correctly. For example, {{10 
> milliseconds}} and {{0.010 seconds}} are functionally equivalent, but only 
> the former will be parsed. This is due to the regex used in 
> {{StandardValidators.TIME_PERIOD_VALIDATOR}} which relies on 
> {{FormatUtils.TIME_DURATION_REGEX}} (see below). Decimal amounts should be 
> parsed
> * The enumerated time units are *nanoseconds, milliseconds, seconds, minutes, 
> hours, days, weeks*. While I don't intend to extend this to "millennia", etc. 
> as every unit including and above *months* would be ambiguous, *microseconds* 
> seems like a valid and missing unit
> *Definition of {{FormatUtils.TIME_DURATION_REGEX}}:*
> {code}
> public static final String TIME_DURATION_REGEX = "(\\d+)\\s*(" + 
> VALID_TIME_UNITS + ")";
> public static final Pattern TIME_DURATION_PATTERN = 
> Pattern.compile(TIME_DURATION_REGEX);
> {code}
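> The whole-number limitation is easy to demonstrate directly against the
> pattern. Below is a minimal, self-contained sketch; note the unit
> alternation is abbreviated here as an assumption — the real
> {{FormatUtils.VALID_TIME_UNITS}} covers more aliases — and the second
> pattern mirrors the {{([\d.]+)}} amount group the fix introduces:

```java
import java.util.regex.Pattern;

class TimeDurationRegexDemo {
    // Abbreviated stand-in for FormatUtils.VALID_TIME_UNITS (the real
    // constant enumerates more unit aliases).
    static final String VALID_TIME_UNITS =
            "ns|nanos|nanoseconds|ms|millis|milliseconds|s|sec|secs|second|seconds";

    // Pre-fix pattern: the amount group (\d+) only accepts whole numbers.
    static final Pattern WHOLE =
            Pattern.compile("(\\d+)\\s*(" + VALID_TIME_UNITS + ")");

    // Post-fix pattern: [\d.]+ also accepts decimal amounts.
    static final Pattern DECIMAL =
            Pattern.compile("([\\d.]+)\\s*(" + VALID_TIME_UNITS + ")");

    public static void main(String[] args) {
        System.out.println(WHOLE.matcher("10 milliseconds").matches());  // true
        System.out.println(WHOLE.matcher("0.010 seconds").matches());    // false: '.' is not in \d
        System.out.println(DECIMAL.matcher("0.010 seconds").matches());  // true
    }
}
```

> So "10 milliseconds" validates under the old regex while the functionally
> equivalent "0.010 seconds" is rejected, which is exactly the ambiguity
> described above.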



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-5854) Enhance time unit features

2019-01-07 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess resolved NIFI-5854.

Resolution: Fixed

> Enhance time unit features
> --
>
> Key: NIFI-5854
> URL: https://issues.apache.org/jira/browse/NIFI-5854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.8.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: parsing, time, units
> Fix For: 1.9.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There is some ambiguity with time units (specifically around processor 
> properties). Two features which I think should be added:
> * Currently only whole numbers are parsed correctly. For example, {{10 
> milliseconds}} and {{0.010 seconds}} are functionally equivalent, but only 
> the former will be parsed. This is due to the regex used in 
> {{StandardValidators.TIME_PERIOD_VALIDATOR}} which relies on 
> {{FormatUtils.TIME_DURATION_REGEX}} (see below). Decimal amounts should be 
> parsed
> * The enumerated time units are *nanoseconds, milliseconds, seconds, minutes, 
> hours, days, weeks*. While I don't intend to extend this to "millennia", etc. 
> as every unit including and above *months* would be ambiguous, *microseconds* 
> seems like a valid and missing unit
> *Definition of {{FormatUtils.TIME_DURATION_REGEX}}:*
> {code}
> public static final String TIME_DURATION_REGEX = "(\\d+)\\s*(" + 
> VALID_TIME_UNITS + ")";
> public static final Pattern TIME_DURATION_PATTERN = 
> Pattern.compile(TIME_DURATION_REGEX);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5854) Enhance time unit features

2019-01-07 Thread Matt Burgess (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5854:
---
Fix Version/s: 1.9.0

> Enhance time unit features
> --
>
> Key: NIFI-5854
> URL: https://issues.apache.org/jira/browse/NIFI-5854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.8.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: parsing, time, units
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There is some ambiguity with time units (specifically around processor 
> properties). Two features which I think should be added:
> * Currently only whole numbers are parsed correctly. For example, {{10 
> milliseconds}} and {{0.010 seconds}} are functionally equivalent, but only 
> the former will be parsed. This is due to the regex used in 
> {{StandardValidators.TIME_PERIOD_VALIDATOR}} which relies on 
> {{FormatUtils.TIME_DURATION_REGEX}} (see below). Decimal amounts should be 
> parsed
> * The enumerated time units are *nanoseconds, milliseconds, seconds, minutes, 
> hours, days, weeks*. While I don't intend to extend this to "millennia", etc. 
> as every unit including and above *months* would be ambiguous, *microseconds* 
> seems like a valid and missing unit
> *Definition of {{FormatUtils.TIME_DURATION_REGEX}}:*
> {code}
> public static final String TIME_DURATION_REGEX = "(\\d+)\\s*(" + 
> VALID_TIME_UNITS + ")";
> public static final Pattern TIME_DURATION_PATTERN = 
> Pattern.compile(TIME_DURATION_REGEX);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] mattyb149 commented on issue #3193: NIFI-5854 Added TimeUnit enhancements (microseconds, decimal parsing)

2019-01-07 Thread GitBox
mattyb149 commented on issue #3193: NIFI-5854 Added TimeUnit enhancements 
(microseconds, decimal parsing)
URL: https://github.com/apache/nifi/pull/3193#issuecomment-451981741
 
 
   +1 LGTM, ran full build with unit tests, tried some additional valid and 
invalid values. Thanks for the improvement! Merging to master




[GitHub] asfgit closed pull request #3193: NIFI-5854 Added TimeUnit enhancements (microseconds, decimal parsing)

2019-01-07 Thread GitBox
asfgit closed pull request #3193: NIFI-5854 Added TimeUnit enhancements 
(microseconds, decimal parsing)
URL: https://github.com/apache/nifi/pull/3193
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/nifi-commons/nifi-utils/src/main/java/org/apache/nifi/util/FormatUtils.java 
b/nifi-commons/nifi-utils/src/main/java/org/apache/nifi/util/FormatUtils.java
index 1c9140b7d9..7d2992f34a 100644
--- 
a/nifi-commons/nifi-utils/src/main/java/org/apache/nifi/util/FormatUtils.java
+++ 
b/nifi-commons/nifi-utils/src/main/java/org/apache/nifi/util/FormatUtils.java
@@ -17,12 +17,13 @@
 package org.apache.nifi.util;
 
 import java.text.NumberFormat;
+import java.util.Arrays;
+import java.util.List;
 import java.util.concurrent.TimeUnit;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
 public class FormatUtils {
-
 private static final String UNION = "|";
 
 // for Data Sizes
@@ -41,8 +42,9 @@
 private static final String WEEKS = join(UNION, "w", "wk", "wks", "week", 
"weeks");
 
 private static final String VALID_TIME_UNITS = join(UNION, NANOS, MILLIS, 
SECS, MINS, HOURS, DAYS, WEEKS);
-public static final String TIME_DURATION_REGEX = "(\\d+)\\s*(" + 
VALID_TIME_UNITS + ")";
+public static final String TIME_DURATION_REGEX = "([\\d.]+)\\s*(" + 
VALID_TIME_UNITS + ")";
 public static final Pattern TIME_DURATION_PATTERN = 
Pattern.compile(TIME_DURATION_REGEX);
+private static final List<Long> TIME_UNIT_MULTIPLIERS = 
Arrays.asList(1000L, 1000L, 1000L, 60L, 60L, 24L);
 
 /**
  * Formats the specified count by adding commas.
@@ -58,7 +60,7 @@ public static String formatCount(final long count) {
  * Formats the specified duration in 'mm:ss.SSS' format.
  *
  * @param sourceDuration the duration to format
- * @param sourceUnit the unit to interpret the duration
+ * @param sourceUnit the unit to interpret the duration
  * @return representation of the given time data in minutes/seconds
  */
 public static String formatMinutesSeconds(final long sourceDuration, final 
TimeUnit sourceUnit) {
@@ -79,7 +81,7 @@ public static String formatMinutesSeconds(final long 
sourceDuration, final TimeU
  * Formats the specified duration in 'HH:mm:ss.SSS' format.
  *
  * @param sourceDuration the duration to format
- * @param sourceUnit the unit to interpret the duration
+ * @param sourceUnit the unit to interpret the duration
  * @return representation of the given time data in hours/minutes/seconds
  */
 public static String formatHoursMinutesSeconds(final long sourceDuration, 
final TimeUnit sourceUnit) {
@@ -139,65 +141,230 @@ public static String formatDataSize(final double 
dataSize) {
 return format.format(dataSize) + " bytes";
 }
 
+/**
+ * Returns a time duration in the requested {@link TimeUnit} after parsing 
the {@code String}
+ * input. If the resulting value is a decimal (i.e.
+ * {@code 25 hours -> TimeUnit.DAYS = 1.04}), the value is rounded.
+ *
+ * @param value the raw String input (i.e. "28 minutes")
+ * @param desiredUnit the requested output {@link TimeUnit}
+ * @return the whole number value of this duration in the requested units
+ * @deprecated As of Apache NiFi 1.9.0, because this method only returns 
whole numbers, use {@link #getPreciseTimeDuration(String, TimeUnit)} when 
possible.
+ */
+@Deprecated
 public static long getTimeDuration(final String value, final TimeUnit 
desiredUnit) {
+return Math.round(getPreciseTimeDuration(value, desiredUnit));
+}
+
+/**
+ * Returns the parsed and converted input in the requested units.
+ * 
+ * If the value is {@code 0 <= x < 1} in the provided units, the units 
will first be converted to a smaller unit to get a value >= 1 (i.e. 0.5 seconds 
-> 500 milliseconds).
+ * This is because the underlying unit conversion cannot handle decimal 
values.
+ * 
+ * If the value is {@code x >= 1} but x is not a whole number, the units 
will first be converted to a smaller unit to attempt to get a whole number 
value (i.e. 1.5 seconds -> 1500 milliseconds).
+ * 
+ * If the value is {@code x < 1000} and the units are {@code 
TimeUnit.NANOSECONDS}, the result will be a whole number of nanoseconds, 
rounded (i.e. 123.4 ns -> 123 ns).
+ * 
+ * This method handles decimal values over {@code 1 ns}, but {@code < 1 
ns} will return {@code 0} in any other unit.
+ * 
+ * Examples:
+ * 
+ * "10 seconds", {@code TimeUnit.MILLISECONDS} -> 10_000.0
+ * "0.010 s", {@code TimeUnit.MILLISECONDS} -> 10.0
+ * "0.010 s", {@code TimeUnit.SECONDS} -> 0.010
+ * "0.010 ns", {@code TimeUnit.NANOSECONDS} -> 1
+ * "0
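The conversion strategy described in that Javadoc -- scale a fractional value down to a smaller unit before delegating to TimeUnit, which only handles whole numbers -- can be sketched as follows. This is a simplified, hypothetical helper covering only seconds-to-milliseconds, not the NiFi implementation:

```java
import java.util.concurrent.TimeUnit;

public class DecimalDurationDemo {
    // TimeUnit.convert only handles whole numbers, so a fractional value is
    // first scaled down to a smaller unit (0.5 s -> 500 ms), mirroring the
    // strategy described in the Javadoc above.
    static double secondsToMillis(double seconds) {
        if (seconds != Math.floor(seconds)) {
            return seconds * 1000.0; // scale before converting
        }
        return TimeUnit.MILLISECONDS.convert((long) seconds, TimeUnit.SECONDS);
    }
}
```

With this sketch, 0.5 seconds yields 500.0 ms, whereas naively truncating to a long before calling TimeUnit.MILLISECONDS.convert would yield 0.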

[jira] [Commented] (NIFI-5854) Enhance time unit features

2019-01-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735987#comment-16735987
 ] 

ASF subversion and git services commented on NIFI-5854:
---

Commit b59fa5af1f3232581e1b3903e3e2f408d9daa323 in nifi's branch 
refs/heads/master from Andy LoPresto
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=b59fa5a ]

NIFI-5854 Added skeleton logic to convert decimal time units.
Added helper methods.
Added unit tests.

NIFI-5854 [WIP] Cleaned up logic.
Resolved failing unit tests due to error message change.

NIFI-5854 [WIP] All helper method unit tests pass.

NIFI-5854 [WIP] FormatUtils#getPreciseTimeDuration() now handles all tested 
inputs correctly.
Added unit tests.

NIFI-5854 [WIP] FormatUtils#getTimeDuration() still using long.
Added unit tests.
Renamed existing unit tests to reflect method under test.

NIFI-5854 FormatUtils#getTimeDuration() returns long but now accepts decimal 
inputs.
Added @Deprecation warnings (will update callers where possible).
All unit tests pass.

NIFI-5854 Fixed unit tests (ran in IDE but not Maven) due to int overflows.
Fixed checkstyle issues.

NIFI-5854 Fixed typo in Javadoc.

NIFI-5854 Fixed typo in Javadoc.

Signed-off-by: Matthew Burgess 

This closes #3193


> Enhance time unit features
> --
>
> Key: NIFI-5854
> URL: https://issues.apache.org/jira/browse/NIFI-5854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.8.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: parsing, time, units
>
> There is some ambiguity with time units (specifically around processor 
> properties). Two features which I think should be added:
> * Currently only whole numbers are parsed correctly. For example, {{10 
> milliseconds}} and {{0.010 seconds}} are functionally equivalent, but only 
> the former will be parsed. This is due to the regex used in 
> {{StandardValidators.TIME_PERIOD_VALIDATOR}} which relies on 
> {{FormatUtils.TIME_DURATION_REGEX}} (see below). Decimal amounts should be 
> parsed
> * The enumerated time units are *nanoseconds, milliseconds, seconds, minutes, 
> hours, days, weeks*. While I don't intend to extend this to "millennia", etc. 
> as every unit including and above *months* would be ambiguous, *microseconds* 
> seems like a valid and missing unit
> *Definition of {{FormatUtils.TIME_DURATION_REGEX}}:*
> {code}
> public static final String TIME_DURATION_REGEX = "(\\d+)\\s*(" + 
> VALID_TIME_UNITS + ")";
> public static final Pattern TIME_DURATION_PATTERN = 
> Pattern.compile(TIME_DURATION_REGEX);
> {code}
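The difference between the two regexes is visible with a couple of inputs. The sketch below uses a reduced unit alternation for brevity; the real VALID_TIME_UNITS covers nanoseconds through weeks:

```java
import java.util.regex.Pattern;

public class TimeRegexDemo {
    // Reduced unit alternation for illustration only.
    static final String UNITS = "ns|ms|s|sec|secs|second|seconds";
    // Original regex: whole numbers only.
    static final Pattern WHOLE = Pattern.compile("(\\d+)\\s*(" + UNITS + ")");
    // NIFI-5854 change: allow a decimal point in the numeric group.
    static final Pattern DECIMAL = Pattern.compile("([\\d.]+)\\s*(" + UNITS + ")");

    static boolean matchesWhole(String input) {
        return WHOLE.matcher(input).matches();
    }

    static boolean matchesDecimal(String input) {
        return DECIMAL.matcher(input).matches();
    }
}
```

"10 seconds" satisfies both patterns, but "0.010 seconds" only matches the decimal-aware version, since `\d+` cannot consume the '.' character.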



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] apiri commented on a change in pull request #465: MINIFICPP-700: Add MSI Support via CPACK

2019-01-07 Thread GitBox
apiri commented on a change in pull request #465: MINIFICPP-700: Add MSI 
Support via CPACK
URL: https://github.com/apache/nifi-minifi-cpp/pull/465#discussion_r245314405
 
 

 ##
 File path: msi/LICENSE.txt
 ##
 @@ -0,0 +1,1633 @@
+
 
 Review comment:
  Not pertinent to this PR specifically, but this did jog the mind a bit.  
While our license is correct for the source of all the third-party libs we use, 
this is not necessarily true for the resultant binary, since we can select which 
modules are enabled.  In full correctness, I suspect the LICENSE needs to 
reflect which modules were enabled, to avoid being unnecessarily onerous about 
the binary that is produced.  If that thinking is correct, I can file an issue to 
address this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (NIFI-5935) Handle Error during Ldap User/Group Sync

2019-01-07 Thread Matt Gilman (JIRA)
Matt Gilman created NIFI-5935:
-

 Summary: Handle Error during Ldap User/Group Sync 
 Key: NIFI-5935
 URL: https://issues.apache.org/jira/browse/NIFI-5935
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Matt Gilman
Assignee: Matt Gilman


Need to address error handling in the background thread performing user/group sync. 
If an error occurs, it's possible that subsequent syncs will not be performed.
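The failure mode described here is typical of ScheduledExecutorService: an uncaught exception in a scheduleAtFixedRate task silently cancels all subsequent executions. A minimal, hypothetical sketch of guarding the sync body (not the actual NiFi LDAP code):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class GuardedSyncDemo {
    // Schedules a task that always throws; the try/catch keeps the schedule
    // alive, so the task still reaches 'targetRuns' executions.
    static int runGuarded(int targetRuns) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger runs = new AtomicInteger();
        CountDownLatch latch = new CountDownLatch(targetRuns);
        scheduler.scheduleAtFixedRate(() -> {
            try {
                runs.incrementAndGet();
                latch.countDown();
                throw new RuntimeException("simulated LDAP failure");
            } catch (RuntimeException e) {
                // Log and swallow: an escaped exception would cancel all
                // subsequent scheduled syncs.
            }
        }, 0, 10, TimeUnit.MILLISECONDS);
        latch.await(2, TimeUnit.SECONDS);
        scheduler.shutdownNow();
        return runs.get();
    }
}
```

Without the try/catch, the first simulated failure would end the schedule after a single run.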





[GitHub] bbende commented on issue #3244: NIFI-5928 Use NiFiDataPacket in the Flink source

2019-01-07 Thread GitBox
bbende commented on issue #3244: NIFI-5928 Use NiFiDataPacket in the Flink 
source
URL: https://github.com/apache/nifi/pull/3244#issuecomment-451957626
 
 
   @ambition119 currently the flink nifi connectors are part of flink, so 
shouldn't this PR be against the flink repo?
   
   
https://github.com/apache/flink/tree/master/flink-connectors/flink-connector-nifi
   




[GitHub] sbgoodm commented on a change in pull request #3239: NIFI-5920: Unit tests and functionality for tagging an object in S3.

2019-01-07 Thread GitBox
sbgoodm commented on a change in pull request #3239: NIFI-5920: Unit tests and 
functionality for tagging an object in S3.
URL: https://github.com/apache/nifi/pull/3239#discussion_r245662811
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/TagS3Object.java
 ##
 @@ -0,0 +1,174 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.s3;
+
+import com.amazonaws.AmazonServiceException;
+import com.amazonaws.services.s3.AmazonS3;
+import com.amazonaws.services.s3.model.GetObjectTaggingRequest;
+import com.amazonaws.services.s3.model.GetObjectTaggingResult;
+import com.amazonaws.services.s3.model.ObjectTagging;
+import com.amazonaws.services.s3.model.SetObjectTaggingRequest;
+import com.amazonaws.services.s3.model.Tag;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.StringUtils;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import java.util.regex.Pattern;
+import java.util.stream.Collectors;
+
+
+@SupportsBatching
+@SeeAlso({PutS3Object.class, FetchS3Object.class, ListS3.class})
+@Tags({"Amazon", "S3", "AWS", "Archive", "Tag"})
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Sets tags on a FlowFile within an Amazon S3 Bucket. " +
+"If attempting to tag a file that does not exist, FlowFile is routed 
to success.")
+public class TagS3Object extends AbstractS3Processor {
+
+public static final PropertyDescriptor TAG_KEY = new 
PropertyDescriptor.Builder()
+.name("tag-key")
+.displayName("Tag Key")
+.description("The key of the tag that will be set on the S3 
Object")
+.addValidator(new StandardValidators.StringLengthValidator(1, 127))
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.required(true)
+.build();
+
+public static final PropertyDescriptor TAG_VALUE = new 
PropertyDescriptor.Builder()
+.name("tag-value")
+.displayName("Tag Value")
+.description("The value of the tag that will be set on the S3 
Object")
+.addValidator(new StandardValidators.StringLengthValidator(1, 255))
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.required(true)
+.build();
+
+public static final PropertyDescriptor APPEND_TAG = new 
PropertyDescriptor.Builder()
+.name("append-tag")
+.displayName("Append Tag")
+.description("If set to true, the tag will be appended to the 
existing set of tags on the S3 object. " +
+"Any existing tags with the same key as the new tag will 
be updated with the specified value. If " +
+"set to false, the existing tags will be removed and the 
new tag will be set on the S3 object.")
+.addValidator(StandardValidators.BOOLEAN_VALIDATOR)
+.allowableValues("true", "false")
+.expressionLanguageSupported(ExpressionLanguageScope.NONE)
+.required(true)
+.defaultValue("true")
+.build();
+
+public static final PropertyDescriptor VERSION_ID = new 
PropertyDescriptor.Builder()
+.name("Version")
+.displayName("Version ID")
+.description("The Version of the Object to tag")

[GitHub] sbgoodm commented on a change in pull request #3239: NIFI-5920: Unit tests and functionality for tagging an object in S3.

2019-01-07 Thread GitBox
sbgoodm commented on a change in pull request #3239: NIFI-5920: Unit tests and 
functionality for tagging an object in S3.
URL: https://github.com/apache/nifi/pull/3239#discussion_r245658674
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/TagS3Object.java
 ##
 @@ -0,0 +1,174 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.s3;
+
+import com.amazonaws.AmazonServiceException;
+import com.amazonaws.services.s3.AmazonS3;
+import com.amazonaws.services.s3.model.GetObjectTaggingRequest;
+import com.amazonaws.services.s3.model.GetObjectTaggingResult;
+import com.amazonaws.services.s3.model.ObjectTagging;
+import com.amazonaws.services.s3.model.SetObjectTaggingRequest;
+import com.amazonaws.services.s3.model.Tag;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.StringUtils;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import java.util.regex.Pattern;
+import java.util.stream.Collectors;
+
+
+@SupportsBatching
+@SeeAlso({PutS3Object.class, FetchS3Object.class, ListS3.class})
+@Tags({"Amazon", "S3", "AWS", "Archive", "Tag"})
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Sets tags on a FlowFile within an Amazon S3 Bucket. " +
+"If attempting to tag a file that does not exist, FlowFile is routed 
to success.")
+public class TagS3Object extends AbstractS3Processor {
 
 Review comment:
   will do.




[GitHub] sbgoodm commented on a change in pull request #3239: NIFI-5920: Unit tests and functionality for tagging an object in S3.

2019-01-07 Thread GitBox
sbgoodm commented on a change in pull request #3239: NIFI-5920: Unit tests and 
functionality for tagging an object in S3.
URL: https://github.com/apache/nifi/pull/3239#discussion_r245658602
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/TagS3Object.java
 ##
 @@ -0,0 +1,174 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.s3;
+
+import com.amazonaws.AmazonServiceException;
+import com.amazonaws.services.s3.AmazonS3;
+import com.amazonaws.services.s3.model.GetObjectTaggingRequest;
+import com.amazonaws.services.s3.model.GetObjectTaggingResult;
+import com.amazonaws.services.s3.model.ObjectTagging;
+import com.amazonaws.services.s3.model.SetObjectTaggingRequest;
+import com.amazonaws.services.s3.model.Tag;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.StringUtils;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import java.util.regex.Pattern;
+import java.util.stream.Collectors;
+
+
+@SupportsBatching
+@SeeAlso({PutS3Object.class, FetchS3Object.class, ListS3.class})
+@Tags({"Amazon", "S3", "AWS", "Archive", "Tag"})
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Sets tags on a FlowFile within an Amazon S3 Bucket. " +
+"If attempting to tag a file that does not exist, FlowFile is routed 
to success.")
+public class TagS3Object extends AbstractS3Processor {
+
+public static final PropertyDescriptor TAG_KEY = new 
PropertyDescriptor.Builder()
+.name("tag-key")
+.displayName("Tag Key")
+.description("The key of the tag that will be set on the S3 
Object")
+.addValidator(new StandardValidators.StringLengthValidator(1, 127))
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.required(true)
+.build();
+
+public static final PropertyDescriptor TAG_VALUE = new 
PropertyDescriptor.Builder()
+.name("tag-value")
+.displayName("Tag Value")
+.description("The value of the tag that will be set on the S3 
Object")
+.addValidator(new StandardValidators.StringLengthValidator(1, 255))
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.required(true)
+.build();
+
+public static final PropertyDescriptor APPEND_TAG = new 
PropertyDescriptor.Builder()
+.name("append-tag")
+.displayName("Append Tag")
+.description("If set to true, the tag will be appended to the 
existing set of tags on the S3 object. " +
+"Any existing tags with the same key as the new tag will 
be updated with the specified value. If " +
+"set to false, the existing tags will be removed and the 
new tag will be set on the S3 object.")
+.addValidator(StandardValidators.BOOLEAN_VALIDATOR)
+.allowableValues("true", "false")
+.expressionLanguageSupported(ExpressionLanguageScope.NONE)
+.required(true)
+.defaultValue("true")
+.build();
+
+public static final PropertyDescriptor VERSION_ID = new 
PropertyDescriptor.Builder()
+.name("Version")
+.displayName("Version ID")
+.description("The Version of the Object to tag")

[GitHub] sbgoodm commented on a change in pull request #3239: NIFI-5920: Unit tests and functionality for tagging an object in S3.

2019-01-07 Thread GitBox
sbgoodm commented on a change in pull request #3239: NIFI-5920: Unit tests and 
functionality for tagging an object in S3.
URL: https://github.com/apache/nifi/pull/3239#discussion_r245655917
 
 

 ##
 File path: 
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/TagS3Object.java
 ##
 @@ -0,0 +1,174 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.s3;
+
+import com.amazonaws.AmazonServiceException;
+import com.amazonaws.services.s3.AmazonS3;
+import com.amazonaws.services.s3.model.GetObjectTaggingRequest;
+import com.amazonaws.services.s3.model.GetObjectTaggingResult;
+import com.amazonaws.services.s3.model.ObjectTagging;
+import com.amazonaws.services.s3.model.SetObjectTaggingRequest;
+import com.amazonaws.services.s3.model.Tag;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.util.StringUtils;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import java.util.regex.Pattern;
+import java.util.stream.Collectors;
+
+
+@SupportsBatching
+@SeeAlso({PutS3Object.class, FetchS3Object.class, ListS3.class})
+@Tags({"Amazon", "S3", "AWS", "Archive", "Tag"})
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@CapabilityDescription("Sets tags on a FlowFile within an Amazon S3 Bucket. " +
+"If attempting to tag a file that does not exist, FlowFile is routed 
to success.")
+public class TagS3Object extends AbstractS3Processor {
+
+public static final PropertyDescriptor TAG_KEY = new 
PropertyDescriptor.Builder()
+.name("tag-key")
+.displayName("Tag Key")
+.description("The key of the tag that will be set on the S3 
Object")
+.addValidator(new StandardValidators.StringLengthValidator(1, 127))
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.required(true)
+.build();
+
+public static final PropertyDescriptor TAG_VALUE = new 
PropertyDescriptor.Builder()
+.name("tag-value")
+.displayName("Tag Value")
+.description("The value of the tag that will be set on the S3 
Object")
+.addValidator(new StandardValidators.StringLengthValidator(1, 255))
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.required(true)
+.build();
+
+public static final PropertyDescriptor APPEND_TAG = new 
PropertyDescriptor.Builder()
+.name("append-tag")
+.displayName("Append Tag")
+.description("If set to true, the tag will be appended to the 
existing set of tags on the S3 object. " +
+"Any existing tags with the same key as the new tag will 
be updated with the specified value. If " +
+"set to false, the existing tags will be removed and the 
new tag will be set on the S3 object.")
+.addValidator(StandardValidators.BOOLEAN_VALIDATOR)
+.allowableValues("true", "false")
+.expressionLanguageSupported(ExpressionLanguageScope.NONE)
+.required(true)
+.defaultValue("true")
+.build();
+
+public static final PropertyDescriptor VERSION_ID = new 
PropertyDescriptor.Builder()
+.name("Version")
 
 Review comment:
   will do



[jira] [Commented] (NIFI-5887) unable to unmarshal json to an object

2019-01-07 Thread meh (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735768#comment-16735768
 ] 

meh commented on NIFI-5887:
---

One other interesting thing: you cannot type a capital "F" in JOLT advanced mode.
When you type it, the cursor moves to the beginning of the line.

> unable to unmarshal json to an object
> -
>
> Key: NIFI-5887
> URL: https://issues.apache.org/jira/browse/NIFI-5887
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: meh
>Priority: Major
>  Labels: attribute, jolt, json
>
> I have an attribute (produced by a REST service and captured by an InvokeHTTP 
> processor) in JSON format like this:
> {code:java}
> test => {"key":"value"}{code}
> I then want to put it into the flow's JSON content using the JOLT processor; my 
> content is something like this:
> {code:java}
> { "id": 123, "user": "foo" }{code}
> My JOLT specification is this:
> {code:java}
> [{ "operation": "default", "spec": { "interest": "${test}" } }]{code}
> The problem is that, in the JOLT advanced window, NiFi cannot insert the test 
> attribute as a JSON object, and this error is shown:
> {quote}*"Error occurred during transformation"*
> {quote}
> and when the processor runs, this detailed error is raised:
> {quote}*"unable to unmarshal json to an object"*
> {quote}
> 
> My desired result is this:
> {code:java}
> { "id": 123, "user": "foo", "interest": {"key":"value"} }{code}
> Is this a bug, or am I wrong?
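One likely cause of the unmarshal error (an assumption, not a confirmed diagnosis): Expression Language expands ${test} inside the double quotes of the spec, so the attribute's raw JSON is pasted into a quoted string, and its unescaped inner quotes break the spec's JSON syntax. A sketch of the two embeddings (hypothetical helper names):

```java
public class JoltEmbedDemo {
    // "${test}" inside quotes: the attribute's raw JSON lands inside a string
    // literal, and its unescaped quotes break the spec's JSON syntax.
    static String embedQuoted(String attrJson) {
        return "[{ \"operation\": \"default\", \"spec\": { \"interest\": \"" + attrJson + "\" } }]";
    }

    // Without the surrounding quotes, the attribute is spliced in as a JSON
    // value, which yields the desired nested object.
    static String embedRaw(String attrJson) {
        return "[{ \"operation\": \"default\", \"spec\": { \"interest\": " + attrJson + " } }]";
    }
}
```

embedQuoted produces `"interest": "{"key":"value"}"`, where the inner quote terminates the string early and the remainder is unparseable, consistent with "unable to unmarshal json to an object".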





[GitHub] asfgit closed pull request #468: MINIFICPP-704 - Tests: hostname of localhost is not guaranteed to be …

2019-01-07 Thread GitBox
asfgit closed pull request #468: MINIFICPP-704 - Tests: hostname of localhost 
is not guaranteed to be …
URL: https://github.com/apache/nifi-minifi-cpp/pull/468
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/libminifi/test/unit/SocketTests.cpp 
b/libminifi/test/unit/SocketTests.cpp
index 352e6fd6..0809211f 100644
--- a/libminifi/test/unit/SocketTests.cpp
+++ b/libminifi/test/unit/SocketTests.cpp
@@ -33,7 +33,7 @@
 TEST_CASE("TestSocket", "[TestSocket1]") {
   org::apache::nifi::minifi::io::Socket 
socket(std::make_shared(std::make_shared()),
 "localhost", 8183);
   REQUIRE(-1 == socket.initialize());
-  REQUIRE("localhost" == socket.getHostname());
+  REQUIRE(socket.getHostname().rfind("localhost", 0) == 0);
   socket.closeStream();
 }
 
@@ -82,7 +82,7 @@ TEST_CASE("TestGetHostName", "[TestSocket4]") {
   REQUIRE(org::apache::nifi::minifi::io::Socket::getMyHostName().length() > 0);
 }
 
-TEST_CASE("TestWriteEndian64", "[TestSocket4]") {
+TEST_CASE("TestWriteEndian64", "[TestSocket5]") {
   std::vector buffer;
   buffer.push_back('a');
   std::shared_ptr socket_context 
= 
std::make_shared(std::make_shared());
@@ -108,7 +108,7 @@ TEST_CASE("TestWriteEndian64", "[TestSocket4]") {
   client.closeStream();
 }
 
-TEST_CASE("TestWriteEndian32", "[TestSocket5]") {
+TEST_CASE("TestWriteEndian32", "[TestSocket6]") {
   std::vector buffer;
   buffer.push_back('a');
 
@@ -143,7 +143,7 @@ TEST_CASE("TestWriteEndian32", "[TestSocket5]") {
   client.closeStream();
 }
 
-TEST_CASE("TestSocketWriteTestAfterClose", "[TestSocket6]") {
+TEST_CASE("TestSocketWriteTestAfterClose", "[TestSocket7]") {
   std::vector buffer;
   buffer.push_back('a');
 
@@ -194,7 +194,7 @@ bool createSocket() {
  * This test will create 20 threads that attempt to create contexts
  * to ensure we no longer see the segfaults.
  */
-TEST_CASE("TestTLSContextCreation", "[TestSocket6]") {
+TEST_CASE("TestTLSContextCreation", "[TestSocket8]") {
   utils::ThreadPool pool(20, true);
 
   std::vector> futures;
@@ -217,7 +217,7 @@ TEST_CASE("TestTLSContextCreation", "[TestSocket6]") {
  * MINIFI-329 was created in regards to an option existing but not
  * being properly evaluated.
  */
-TEST_CASE("TestTLSContextCreation2", "[TestSocket7]") {
+TEST_CASE("TestTLSContextCreation2", "[TestSocket9]") {
   std::shared_ptr configure = 
std::make_shared();
   configure->set("nifi.remote.input.secure", "false");
   auto factory = minifi::io::StreamFactory::getInstance(configure);
@@ -231,7 +231,7 @@ TEST_CASE("TestTLSContextCreation2", "[TestSocket7]") {
  * MINIFI-329 was created in regards to an option existing but not
  * being properly evaluated.
  */
-TEST_CASE("TestTLSContextCreationNullptr", "[TestSocket7]") {
+TEST_CASE("TestTLSContextCreationNullptr", "[TestSocket10]") {
   std::shared_ptr configure = 
std::make_shared();
   configure->set("nifi.remote.input.secure", "false");
   auto factory = minifi::io::StreamFactory::getInstance(configure);


 




[GitHub] phrocker opened a new pull request #469: MINIFICPP-705: Previous fix for some travis failures. Will help isola…

2019-01-07 Thread GitBox
phrocker opened a new pull request #469: MINIFICPP-705: Previous fix for some 
travis failures. Will help isola…
URL: https://github.com/apache/nifi-minifi-cpp/pull/469
 
 
   …te any others
   
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA
number you are trying to resolve? Pay particular attention to the hyphen "-"
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   




[jira] [Created] (MINIFICPP-705) Travis failure has come back

2019-01-07 Thread Mr TheSegfault (JIRA)
Mr TheSegfault created MINIFICPP-705:


 Summary: Travis failure has come back
 Key: MINIFICPP-705
 URL: https://issues.apache.org/jira/browse/MINIFICPP-705
 Project: NiFi MiNiFi C++
  Issue Type: Bug
Reporter: Mr TheSegfault
Assignee: Mr TheSegfault


Seems that curl tests will sporadically fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] asfgit closed pull request #466: MINIFICPP-701 - update alpine in docker image to 3.8

2019-01-07 Thread GitBox
asfgit closed pull request #466: MINIFICPP-701 - update alpine in docker image 
to 3.8
URL: https://github.com/apache/nifi-minifi-cpp/pull/466
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docker/Dockerfile b/docker/Dockerfile
index e66b5682..2b414f25 100644
--- a/docker/Dockerfile
+++ b/docker/Dockerfile
@@ -18,7 +18,7 @@
 
 # First stage: the build environment
 # Edge required for rocksdb
-FROM alpine:3.5 AS builder
+FROM alpine:3.8 AS builder
 MAINTAINER Apache NiFi 
 
 ARG UID
@@ -73,7 +73,7 @@ RUN cd $MINIFI_BASE_DIR \
 
 # Second stage: the runtime image
 # Edge required for rocksdb
-FROM alpine:3.5
+FROM alpine:3.8
 
 ARG UID
 ARG GID
diff --git 
a/thirdparty/libarchive-3.3.2/libarchive/archive_openssl_hmac_private.h 
b/thirdparty/libarchive-3.3.2/libarchive/archive_openssl_hmac_private.h
index 59f95b80..b4718f2a 100644
--- a/thirdparty/libarchive-3.3.2/libarchive/archive_openssl_hmac_private.h
+++ b/thirdparty/libarchive-3.3.2/libarchive/archive_openssl_hmac_private.h
@@ -28,7 +28,8 @@
 #include <openssl/hmac.h>
 #include <openssl/opensslv.h>
 
-#if OPENSSL_VERSION_NUMBER < 0x10100000L || defined(LIBRESSL_VERSION_NUMBER)
+#if OPENSSL_VERSION_NUMBER < 0x10100000L || \
+  (defined(LIBRESSL_VERSION_NUMBER) && LIBRESSL_VERSION_NUMBER < 0x2070000fL)
 #include <stdlib.h> /* malloc, free */
 #include <string.h> /* memset */
 static inline HMAC_CTX *HMAC_CTX_new(void)


 




[GitHub] asfgit closed pull request #464: MINIFICPP-699 - fix unsigned/signed comparison

2019-01-07 Thread GitBox
asfgit closed pull request #464: MINIFICPP-699 - fix unsigned/signed comparison
URL: https://github.com/apache/nifi-minifi-cpp/pull/464
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/libminifi/include/processors/HashContent.h 
b/libminifi/include/processors/HashContent.h
index 4fdc68c3..bcfdd347 100644
--- a/libminifi/include/processors/HashContent.h
+++ b/libminifi/include/processors/HashContent.h
@@ -46,7 +46,7 @@ namespace {
 
   std::string digestToString(const unsigned char * const digest, size_t size) {
 std::stringstream ss;
-for(int i = 0; i < size; i++)
+for(size_t i = 0; i < size; i++)
 {
   ss << std::uppercase << std::hex << std::setw(2) << std::setfill('0') << 
(int)digest[i];
 }


 




[GitHub] ottobackwards commented on issue #2956: NIFI-5537 Create Neo4J cypher execution processor

2019-01-07 Thread GitBox
ottobackwards commented on issue #2956: NIFI-5537 Create Neo4J cypher execution 
processor
URL: https://github.com/apache/nifi/pull/2956#issuecomment-451913289
 
 
   It would be nice to have an experimental annotation




[GitHub] vkcelik opened a new pull request #3246: NIFI-5929 Support for IBM MQ multi-instance queue managers

2019-01-07 Thread GitBox
vkcelik opened a new pull request #3246: NIFI-5929 Support for IBM MQ 
multi-instance queue managers
URL: https://github.com/apache/nifi/pull/3246
 
 
   Thank you for submitting a contribution to Apache NiFi.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number
you are trying to resolve? Pay particular attention to the hyphen "-" character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [x] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
   - [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
   - [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [x] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   




[GitHub] vkcelik closed pull request #3246: NIFI-5929 Support for IBM MQ multi-instance queue managers

2019-01-07 Thread GitBox
vkcelik closed pull request #3246: NIFI-5929 Support for IBM MQ multi-instance 
queue managers
URL: https://github.com/apache/nifi/pull/3246
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/cf/JMSConnectionFactoryProvider.java
 
b/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/cf/JMSConnectionFactoryProvider.java
index ecb4e7a538..0528b77ab3 100644
--- 
a/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/cf/JMSConnectionFactoryProvider.java
+++ 
b/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/cf/JMSConnectionFactoryProvider.java
@@ -51,10 +51,10 @@
  * 
  * It accomplishes it by adjusting current classpath by adding to it the
  * additional resources (i.e., JMS client libraries) provided by the user via
- * {@link JMSConnectionFactoryProviderDefinition#CLIENT_LIB_DIR_PATH}, allowing
+ * {@link JMSConnectionFactoryProvider#CLIENT_LIB_DIR_PATH}, allowing
  * it then to create an instance of the target {@link ConnectionFactory} based
  * on the provided
- * {@link JMSConnectionFactoryProviderDefinition#CONNECTION_FACTORY_IMPL} which
+ * {@link JMSConnectionFactoryProvider#CONNECTION_FACTORY_IMPL} which
  * can be than access via {@link #getConnectionFactory()} method.
  * 
  */
@@ -105,8 +105,8 @@
 public static final PropertyDescriptor BROKER_URI = new 
PropertyDescriptor.Builder()
 .name(BROKER)
 .displayName("Broker URI")
-.description("URI pointing to the network location of the JMS 
Message broker. For example, "
-+ "'tcp://myhost:61616' for ActiveMQ or 'myhost:1414' for 
IBM MQ")
+.description("URI pointing to the network location of the JMS 
Message broker. Example for ActiveMQ: "
++ "'tcp://myhost:61616'. Examples for IBM MQ: 
'myhost:1414' and 'myhost01(1414),myhost02(1414)'")
 .addValidator(new NonEmptyBrokerURIValidator())
 .required(true)
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
@@ -185,23 +185,31 @@ public void disable() {
  * service configuration. For example, 'channel' property will correspond 
to
  * 'setChannel(..) method and 'queueManager' property will correspond to
  * setQueueManager(..) method with a single argument.
- * 
+ * 
  * There are also few adjustments to accommodate well known brokers. For
  * example ActiveMQ ConnectionFactory accepts address of the Message Broker
  * in a form of URL while IBMs in the form of host/port pair (more common).
  * So this method will use value retrieved from the 'BROKER_URI' static
  * property 'as is' if ConnectionFactory implementation is coming from
- * ActiveMQ and for all others (for now) the 'BROKER_URI' value will be
+ * ActiveMQ or Tibco. For all others (for now) the 'BROKER_URI' value will 
be
  * split on ':' and the resulting pair will be used to execute
  * setHostName(..) and setPort(..) methods on the provided
- * ConnectionFactory. This may need to be maintained and adjusted to
- * accommodate other implementation of ConnectionFactory, but only for
- * URL/Host/Port issue. All other properties are set as dynamic properties
- * where user essentially provides both property name and value, The bean
- * convention is also explained in user manual for this component with 
links
- * pointing to documentation of various ConnectionFactories.
+ * ConnectionFactory. An exception to this if the ConnectionFactory
+ * implementation is coming from IBM MQ and multiple brokers are listed,
+ * in this case setConnectionNameList(..) method is executed.
+ * This may need to be maintained and adjusted to accommodate other
+ * implementation of ConnectionFactory, but only for URL/Host/Port issue.
+ * All other properties are set as dynamic properties where user 
essentially
+ * provides both property name and value, The bean convention is also
+ * explained in user manual for this component with links pointing to
+ * documentation of various ConnectionFactories.
  *
- * @see #setProperty(String, String) method
+ * @see <a href="http://activemq.apache.org/maven/apidocs/org/apache/activemq/ActiveMQConnectionFactory.html#setBrokerURL-java.lang.String-">setBrokerURL(String brokerURL)</a>
+ * @see <a href="https://docs.tibco.com/pub/enterprise_message_service/8.1.0/doc/html/tib_ems_api_reference/api/javadoc/com/tibco/tibjms/TibjmsConnectionFactory.html#setServerUrl(java.lang.String)">setServerUrl(String serverUrl)</a>
+ * @see https://www.ibm.com/s

[GitHub] vkcelik commented on issue #3246: NIFI-5929 Support for IBM MQ multi-instance queue managers

2019-01-07 Thread GitBox
vkcelik commented on issue #3246: NIFI-5929 Support for IBM MQ multi-instance 
queue managers
URL: https://github.com/apache/nifi/pull/3246#issuecomment-451911945
 
 
   Requesting review. It might make sense that one of @ijokarumawak, 
@pvillard31 and @rwhittington does the review, because they previously worked 
on _nifi-jms-bundle_




[GitHub] arpadboda opened a new pull request #468: MINIFICPP-704 - Tests: hostname of localhost is not guaranteed to be …

2019-01-07 Thread GitBox
arpadboda opened a new pull request #468: MINIFICPP-704 - Tests: hostname of 
localhost is not guaranteed to be …
URL: https://github.com/apache/nifi-minifi-cpp/pull/468
 
 
   …"localhost"
   
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA
number you are trying to resolve? Pay particular attention to the hyphen "-"
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   




[jira] [Created] (MINIFICPP-704) Tests: hostname of localhost is not guaranteed to be "localhost"

2019-01-07 Thread Arpad Boda (JIRA)
Arpad Boda created MINIFICPP-704:


 Summary: Tests: hostname of localhost is not guaranteed to be 
"localhost"
 Key: MINIFICPP-704
 URL: https://issues.apache.org/jira/browse/MINIFICPP-704
 Project: NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Arpad Boda
Assignee: Arpad Boda
 Fix For: 0.6.0


On latest Ubuntu LTS hostname of localhost is "localhost.localdomain".

This shouldn't fail SocketTests.





[jira] [Updated] (NIFI-5929) Support for IBM MQ multi-instance queue managers (brokers)

2019-01-07 Thread Veli Kerim Celik (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Veli Kerim Celik updated NIFI-5929:
---
Description: 
Currently connections provided by JMSConnectionFactoryProvider controller 
service can connect to just a single IBM MQ queue manager. This is problematic 
when the queue manager is part of a [multi-instance queue 
manager|https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.con.doc/q018140_.html]
 setup and goes from active to standby.

The goal of this issue is to support multiple queue managers, detect 
standby/broken instance and switch to active instance. This behavior is already 
implemented in [official Java 
library|https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_7.1.0/com.ibm.mq.javadoc.doc/WMQJMSClasses/com/ibm/mq/jms/MQConnectionFactory.html#setConnectionNameList_java.lang.String_]
 and should be leveraged.

Syntax used to specify multiple queue managers: myhost01(1414),myhost02(1414)

  was:
Currently connections provided by JMSConnectionFactoryProvider controller 
service can connect to just a single IBM MQ queue manager. This is problematic 
when the queue manager is part of a [multi-instance queue 
manager|https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.con.doc/q018140_.html]
 setup and goes from active to standby.

The goal of this issue is to support multiple queue managers, detect 
standby/broken instance and switch to active instance. This behavior is already 
implemented in [official Java 
library|https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_7.1.0/com.ibm.mq.javadoc.doc/WMQJMSClasses/com/ibm/mq/jms/MQConnectionFactory.html#setConnectionNameList_java.lang.String_]
 and should be leveraged.

Syntax used to specify multiple queue managers: myhost01(1414),myhost02(1414)

+I have already implemented this improvement and are about to submit a pull 
request!+


> Support for IBM MQ multi-instance queue managers (brokers)
> --
>
> Key: NIFI-5929
> URL: https://issues.apache.org/jira/browse/NIFI-5929
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.0.0, 0.6.0, 0.7.0, 0.6.1, 1.1.0, 0.7.1, 1.2.0, 1.1.1, 
> 1.0.1, 1.3.0, 1.4.0, 0.7.4, 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.7.1
>Reporter: Veli Kerim Celik
>Priority: Major
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently connections provided by JMSConnectionFactoryProvider controller 
> service can connect to just a single IBM MQ queue manager. This is 
> problematic when the queue manager is part of a [multi-instance queue 
> manager|https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.con.doc/q018140_.html]
>  setup and goes from active to standby.
> The goal of this issue is to support multiple queue managers, detect 
> standby/broken instance and switch to active instance. This behavior is 
> already implemented in [official Java 
> library|https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_7.1.0/com.ibm.mq.javadoc.doc/WMQJMSClasses/com/ibm/mq/jms/MQConnectionFactory.html#setConnectionNameList_java.lang.String_]
>  and should be leveraged.
> Syntax used to specify multiple queue managers: myhost01(1414),myhost02(1414)


