[jira] [Commented] (NIFI-4443) Allow HDFS' processors level of flexibility for StoreInKiteDataset

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191937#comment-16191937
 ] 

ASF GitHub Bot commented on NIFI-4443:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2186


> Allow HDFS' processors level of flexibility for StoreInKiteDataset
> --
>
> Key: NIFI-4443
> URL: https://issues.apache.org/jira/browse/NIFI-4443
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Affects Versions: 1.3.0
> Reporter: Giovanni Lanzani
> Priority: Minor
> Labels: features, security
> Fix For: 1.5.0
>
>
> Currently, StoreInKiteDataset is missing several properties that are crucial for
> writing to HDFS, namely the ability to alter the CLASSPATH and a validator that
> allows Expression Language when pointing to Hadoop configuration files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4443) Allow HDFS' processors level of flexibility for StoreInKiteDataset

2017-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191936#comment-16191936
 ] 

ASF subversion and git services commented on NIFI-4443:
---

Commit f1bd866005e8e06e502ebf67fa6540bf379adfcf in nifi's branch 
refs/heads/master from [~lanzani]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=f1bd866 ]

NIFI-4443 Increase StoreInKiteDataset flexibility

* The configuration property CONF_XML_FILES now supports Expression
  Language and reuses a Hadoop validator;
* The ADDITIONAL_CLASSPATH_RESOURCES property has been added, so that
  things such as writing to Azure Blob Storage should become possible.

This closes #2186.

Signed-off-by: Bryan Bende 
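
Not the merged code, but a rough sketch of what those two changes amount to (the
validator constant, display names, and descriptions below are assumptions, not
necessarily what the commit uses):

```java
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.util.StandardValidators;
import org.apache.nifi.processors.hadoop.HadoopValidators; // assumed: reused validator from nifi-hadoop-utils

abstract class KiteHadoopPropertiesSketch extends AbstractProcessor {

    // Existing property, now evaluating Expression Language and reusing a Hadoop validator.
    static final PropertyDescriptor CONF_XML_FILES = new PropertyDescriptor.Builder()
            .name("Hadoop configuration files")
            .displayName("Hadoop configuration Resources")
            .description("A comma-separated list of Hadoop configuration files, e.g. core-site.xml, hdfs-site.xml")
            .required(false)
            .expressionLanguageSupported(true)
            .addValidator(HadoopValidators.ONE_OR_MORE_FILE_EXISTS_VALIDATOR)
            .build();

    // New property: dynamicallyModifiesClasspath(true) makes NiFi add the listed resources
    // (for example the Azure Blob Storage client JARs) to this processor's classloader.
    static final PropertyDescriptor ADDITIONAL_CLASSPATH_RESOURCES = new PropertyDescriptor.Builder()
            .name("additional-classpath-resources")
            .displayName("Additional Classpath Resources")
            .description("A comma-separated list of files and/or directories to add to the classpath")
            .required(false)
            .dynamicallyModifiesClasspath(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();
}
```

As discussed further down in this thread, the existing "Hadoop configuration files"
name is kept so that flows built with older versions remain valid; only the
displayName shown in the UI changes.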


> Allow HDFS' processors level of flexibility for StoreInKiteDataset
> --
>
> Key: NIFI-4443
> URL: https://issues.apache.org/jira/browse/NIFI-4443
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Affects Versions: 1.3.0
> Reporter: Giovanni Lanzani
> Priority: Minor
> Labels: features, security
>
> Currently, StoreInKiteDataset is missing several properties that are crucial for
> writing to HDFS, namely the ability to alter the CLASSPATH and a validator that
> allows Expression Language when pointing to Hadoop configuration files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4443) Allow HDFS' processors level of flexibility for StoreInKiteDataset

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191877#comment-16191877
 ] 

ASF GitHub Bot commented on NIFI-4443:
--

Github user gglanzani commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2186#discussion_r142768212
  
--- Diff: nifi-nar-bundles/nifi-kite-bundle/nifi-kite-processors/src/main/java/org/apache/nifi/processors/kite/AbstractKiteProcessor.java ---
@@ -47,33 +47,16 @@
 abstract class AbstractKiteProcessor extends AbstractProcessor {
 
     private static final Splitter COMMA = Splitter.on(',').trimResults();
-    protected static final Validator FILES_EXIST = new Validator() {
-        @Override
-        public ValidationResult validate(String subject, String configFiles,
-                ValidationContext context) {
-            if (configFiles != null && !configFiles.isEmpty()) {
-                for (String file : COMMA.split(configFiles)) {
-                    ValidationResult result = StandardValidators.FILE_EXISTS_VALIDATOR
-                            .validate(subject, file, context);
-                    if (!result.isValid()) {
-                        return result;
-                    }
-                }
-            }
-            return new ValidationResult.Builder()
-                    .subject(subject)
-                    .input(configFiles)
-                    .explanation("Files exist")
-                    .valid(true)
-                    .build();
-        }
-    };
 
     protected static final PropertyDescriptor CONF_XML_FILES
             = new PropertyDescriptor.Builder()
-            .name("Hadoop configuration files")
-            .description("A comma-separated list of Hadoop configuration files")
-            .addValidator(FILES_EXIST)
+            .name("hadoop-configuration-resources")
--- End diff --

Good point, fixed


> Allow HDFS' processors level of flexibility for StoreInKiteDataset
> --
>
> Key: NIFI-4443
> URL: https://issues.apache.org/jira/browse/NIFI-4443
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Affects Versions: 1.3.0
> Reporter: Giovanni Lanzani
> Priority: Minor
> Labels: features, security
>
> Currently, StoreInKiteDataset is missing several properties that are crucial for
> writing to HDFS, namely the ability to alter the CLASSPATH and a validator that
> allows Expression Language when pointing to Hadoop configuration files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4443) Allow HDFS' processors level of flexibility for StoreInKiteDataset

2017-10-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191607#comment-16191607
 ] 

ASF GitHub Bot commented on NIFI-4443:
--

Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2186#discussion_r142735370
  
--- Diff: nifi-nar-bundles/nifi-kite-bundle/nifi-kite-processors/src/main/java/org/apache/nifi/processors/kite/AbstractKiteProcessor.java ---
@@ -47,33 +47,16 @@
 abstract class AbstractKiteProcessor extends AbstractProcessor {
 
     private static final Splitter COMMA = Splitter.on(',').trimResults();
-    protected static final Validator FILES_EXIST = new Validator() {
-        @Override
-        public ValidationResult validate(String subject, String configFiles,
-                ValidationContext context) {
-            if (configFiles != null && !configFiles.isEmpty()) {
-                for (String file : COMMA.split(configFiles)) {
-                    ValidationResult result = StandardValidators.FILE_EXISTS_VALIDATOR
-                            .validate(subject, file, context);
-                    if (!result.isValid()) {
-                        return result;
-                    }
-                }
-            }
-            return new ValidationResult.Builder()
-                    .subject(subject)
-                    .input(configFiles)
-                    .explanation("Files exist")
-                    .valid(true)
-                    .build();
-        }
-    };
 
     protected static final PropertyDescriptor CONF_XML_FILES
             = new PropertyDescriptor.Builder()
-            .name("Hadoop configuration files")
-            .description("A comma-separated list of Hadoop configuration files")
-            .addValidator(FILES_EXIST)
+            .name("hadoop-configuration-resources")
--- End diff --

I'd recommend we leave the name the same, since changing the name will cause 
the processor to become invalid when someone upgrades an existing flow.

We can still introduce displayName if you'd like to call it "Hadoop 
configuration Resources" as opposed to "Hadoop configuration files", but we'd 
leave the name as "Hadoop configuration files".
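
A minimal sketch of that suggestion (not the final code; the displayName label is
just an example):

```java
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.util.StandardValidators;

abstract class RenameSketch extends AbstractProcessor {

    static final PropertyDescriptor CONF_XML_FILES = new PropertyDescriptor.Builder()
            // "name" is what gets persisted in existing flows, so it stays the same for compatibility
            .name("Hadoop configuration files")
            // "displayName" only changes the label shown in the UI
            .displayName("Hadoop configuration Resources")
            .description("A comma-separated list of Hadoop configuration files")
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();
}
```

Since NiFi resolves properties by their name when loading a flow, existing flows keep
working and only the label changes for users.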


> Allow HDFS' processors level of flexibility for StoreInKiteDataset
> --
>
> Key: NIFI-4443
> URL: https://issues.apache.org/jira/browse/NIFI-4443
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Affects Versions: 1.3.0
> Reporter: Giovanni Lanzani
> Priority: Minor
> Labels: features, security
>
> Currently, StoreInKiteDataset is missing several properties that are crucial for
> writing to HDFS, namely the ability to alter the CLASSPATH and a validator that
> allows Expression Language when pointing to Hadoop configuration files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4443) Allow HDFS' processors level of flexibility for StoreInKiteDataset

2017-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187154#comment-16187154
 ] 

ASF GitHub Bot commented on NIFI-4443:
--

GitHub user gglanzani opened a pull request:

https://github.com/apache/nifi/pull/2186

[NIFI-4443] Increase StoreInKiteDataset flexibility

* The configuration property CONF_XML_FILES now supports Expression
  Language and reuses a Hadoop validator;
* The ADDITIONAL_CLASSPATH_RESOURCES property has been added, so that
  things such as writing to Azure Blob Storage should become possible.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

Yes. The Jira also mentions Kerberos support. I am not feeling so 
adventurous, however (I'm a Python programmer). I can update the Jira and split 
it, as I believe there is still value in this PR.

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?

Note: `nifi-scripting-processors` is giving an error that is, to me, completely 
unrelated to the proposed changes:

```java
java.lang.ClassCastException: Cannot cast jdk.nashorn.internal.objects.NativeArray to java.util.Collection
```

- [x] Have you written or updated unit tests to verify your changes?

I had to put a Dataset creation step into the test because otherwise I couldn't 
use `runner.assertValid()` (see the sketch at the end of this section). If there 
is a simpler way to do this, please let me know and I'll update accordingly.

- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [x] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?
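
For reference, the Dataset-creation workaround mentioned above might look roughly
like the sketch below (the dataset URI and the "Target dataset URI" property name
are assumptions, not the actual test code):

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.nifi.processors.kite.StoreInKiteDataset;
import org.apache.nifi.util.TestRunner;
import org.apache.nifi.util.TestRunners;
import org.junit.Test;
import org.kitesdk.data.DatasetDescriptor;
import org.kitesdk.data.Datasets;

public class StoreInKiteDatasetValidityTest {

    @Test
    public void propertiesValidateAgainstExistingDataset() {
        // The dataset URI only validates against a dataset that actually exists,
        // so create one on the local filesystem before calling assertValid().
        Schema schema = SchemaBuilder.record("User").fields().requiredString("name").endRecord();
        Datasets.create("dataset:file:/tmp/nifi-kite-test/users",
                new DatasetDescriptor.Builder().schema(schema).build());

        TestRunner runner = TestRunners.newTestRunner(StoreInKiteDataset.class);
        runner.setProperty("Target dataset URI", "dataset:file:/tmp/nifi-kite-test/users"); // assumed property name
        runner.assertValid();
    }
}
```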

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gglanzani/nifi master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2186.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2186


commit 6ab60ee2f14700fb3e8dd2849bef4391f536fca8
Author: Giovanni Lanzani 
Date:   2017-09-29T21:21:03Z

[NIFI-4443] Increase StoreInKiteDataset flexibility

* The configuration property CONF_XML_FILES now supports Expression
  Language and reuses a Hadoop validator;
* The ADDITIONAL_CLASSPATH_RESOURCES property has been added, so that
  things such as writing to Azure Blob Storage should become possible.




> Allow HDFS' processors level of flexibility for StoreInKiteDataset
> --
>
> Key: NIFI-4443
> URL: https://issues.apache.org/jira/browse/NIFI-4443
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Affects Versions: 1.3.0
> Reporter: Giovanni Lanzani
> Priority: Minor
> Labels: features, security
>
> Currently, StoreInKiteDataset is missing several properties that are crucial for
> writing to HDFS, namely Kerberos support and the ability to alter the CLASSPATH.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)