[jira] [Updated] (NIFI-4959) HandleHttpRequest processor doesn't close/release incomplete message error

2018-04-26 Thread Joseph Percivall (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Percivall updated NIFI-4959:
---
Affects Version/s: (was: 1.6.0)

> HandleHttpRequest processor doesn't close/release incomplete message error
> --
>
> Key: NIFI-4959
> URL: https://issues.apache.org/jira/browse/NIFI-4959
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: Linux, all versions of nifi-1.X
>Reporter: Wynner
>Priority: Major
> Fix For: 1.6.0
>
>
> I am doing some testing with the HandleHttpRequest processor. My specific 
> test involves sending an incomplete request and closing the connection from 
> the sending system. Initially, it throws the error I expect, but it keeps 
> throwing the error over and over, based on the request expiration configured 
> in the StandardHttpContextMap controller service.
>  The only way to stop the error message is to stop the processor. In my test, 
> I saw one failed request throw an error six times before I stopped the 
> processor.
> It doesn't seem to terminate the request on the NiFi side.
> Sample HTTP request
>  
> POST / HTTP/1.1
> Host: foo.com
> Content-Type: text/plain
> Content-Length: 130
> say=Hi
>  
> I use the telnet command to connect to the system where the processor is 
> listening, post the message above, close the connection, and then the 
> processor starts throwing the following error indefinitely:
> 2018-03-10 01:36:37,111 ERROR [Timer-Driven Process Thread-6] 
> o.a.n.p.standard.HandleHttpRequest 
> HandleHttpRequest[id=0d8547f7-0162-1000-9b84-129af2382259] 
> HandleHttpRequest[id=0d8547f7-0162-1000-9b84-129af2382259] failed to process 
> session due to org.apache.nifi.processor.exception.FlowFileAccessException: 
> Failed to import data from 
> HttpInputOverHTTP@46e7d12e[c=15,q=0,[0]=null,s=EARLY_EOF] for 
> StandardFlowFileRecord[uuid=32bb182d-f619-4b98-b6f8-c1ed50c2736a,claim=,offset=0,name=9714775822613527,size=0]
>  due to org.apache.nifi.processor.exception.FlowFileAccessException: Unable 
> to create ContentClaim due to org.eclipse.jetty.io.EofException: Early EOF: {}
>  org.apache.nifi.processor.exception.FlowFileAccessException: Failed to 
> import data from HttpInputOverHTTP@46e7d12e[c=15,q=0,[0]=null,s=EARLY_EOF] 
> for 
> StandardFlowFileRecord[uuid=32bb182d-f619-4b98-b6f8-c1ed50c2736a,claim=,offset=0,name=9714775822613527,size=0]
>  due to org.apache.nifi.processor.exception.FlowFileAccessException: Unable 
> to create ContentClaim due to org.eclipse.jetty.io.EofException: Early EOF
>  at 
> org.apache.nifi.controller.repository.StandardProcessSession.importFrom(StandardProcessSession.java:2942)
>  at 
> org.apache.nifi.processors.standard.HandleHttpRequest.onTrigger(HandleHttpRequest.java:517)
>  at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1123)
>  at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
>  at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>  at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)
>  Caused by: org.apache.nifi.processor.exception.FlowFileAccessException: 
> Unable to create ContentClaim due to org.eclipse.jetty.io.EofException: Early 
> EOF
>  at 
> org.apache.nifi.controller.repository.StandardProcessSession.importFrom(StandardProcessSession.java:2935)
>  ... 13 common frames omitted
>  Caused by: org.eclipse.jetty.io.EofException: Early EOF
>  at org.eclipse.jetty.server.HttpInput$3.getError(HttpInput.java:1104)
>  at org.eclipse.jetty.server.HttpInput$3.noContent(HttpInput.java:1093)
>  at org.eclipse.jetty.server.HttpInput.read(HttpInput.java:307)
>  at java.io.InputStream.read(InputStream.java:101)
>  at org.apache.nifi.stream.io.StreamUtils.copy(StreamUtils.java:35)
>  at 
> 
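The report above can be reproduced without telnet: the key is that the declared Content-Length (130) is larger than the body actually sent ("say=Hi", 6 bytes), so Jetty hits an early EOF when the client hangs up. A minimal sketch, assuming a local HandleHttpRequest listener (the host and port below are placeholders, not from the report):

```python
import socket

# Raw HTTP request whose Content-Length overstates the body size.
body = b"say=Hi"
request = (
    b"POST / HTTP/1.1\r\n"
    b"Host: foo.com\r\n"
    b"Content-Type: text/plain\r\n"
    b"Content-Length: 130\r\n"
    b"\r\n" + body
)

# The server waits for 130 bytes but only ever receives 6, so closing
# the socket early produces the EarlyEOF condition in the stack trace.
declared = 130
assert declared > len(body)

def send_incomplete(host="localhost", port=8080):
    """Send the truncated request, then hang up (placeholder host/port)."""
    with socket.create_connection((host, port)) as s:
        s.sendall(request)
    # Connection closes here, before the remaining 124 bytes arrive.
```

Calling `send_incomplete()` against a flow with HandleHttpRequest should trigger one EarlyEOF; the bug is that the error then repeats until the processor is stopped.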

[jira] [Updated] (NIFI-5033) Cannot update variable referenced in restricted components

2018-04-26 Thread Joseph Percivall (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Percivall updated NIFI-5033:
---
Affects Version/s: (was: 1.6.0)

> Cannot update variable referenced in restricted components
> --
>
> Key: NIFI-5033
> URL: https://issues.apache.org/jira/browse/NIFI-5033
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Pierre Villard
>Assignee: Matt Gilman
>Priority: Blocker
> Fix For: 1.6.0
>
>
> When updating a variable at the process group level that is referenced by a 
> restricted component, the update fails. The code appears to be the same for 
> secured and unsecured instances, and it fails when NiFi is unsecured because 
> the user is unknown.
> This seems to have been introduced by NIFI-4885.
> {noformat}
> 2018-03-29 21:10:30,913 INFO [Variable Registry Update Thread] 
> o.a.nifi.web.api.ProcessGroupResource In order to update Variable Registry 
> for Process Group with ID 731bbdde-0162-1000-0f00-db6543c34b50, Stopping 
> Processors
> 2018-03-29 21:10:30,913 INFO [Variable Registry Update Thread] 
> o.a.nifi.web.api.ProcessGroupResource In order to update Variable Registry 
> for Process Group with ID 731bbdde-0162-1000-0f00-db6543c34b50, Disabling 
> Controller Services
> 2018-03-29 21:10:30,913 INFO [Variable Registry Update Thread] 
> o.a.nifi.web.api.ProcessGroupResource In order to update Variable Registry 
> for Process Group with ID 731bbdde-0162-1000-0f00-db6543c34b50, Applying 
> updates to Variable Registry
> 2018-03-29 21:10:30,915 ERROR [Variable Registry Update Thread] 
> o.a.nifi.web.api.ProcessGroupResource Failed to update variable registry for 
> Process Group with ID 731bbdde-0162-1000-0f00-db6543c34b50
> org.apache.nifi.authorization.AccessDeniedException: Unknown user.
> at 
> org.apache.nifi.authorization.resource.RestrictedComponentsAuthorizableFactory$2.checkAuthorization(RestrictedComponentsAuthorizableFactory.java:68)
> at 
> org.apache.nifi.controller.ConfiguredComponent.checkAuthorization(ConfiguredComponent.java:129)
> at 
> org.apache.nifi.authorization.resource.Authorizable.checkAuthorization(Authorizable.java:183)
> at 
> org.apache.nifi.authorization.resource.Authorizable.isAuthorized(Authorizable.java:70)
> at 
> org.apache.nifi.web.api.dto.DtoFactory.createPermissionsDto(DtoFactory.java:1798)
> at 
> org.apache.nifi.web.api.dto.DtoFactory.createPermissionsDto(DtoFactory.java:1785)
> at 
> org.apache.nifi.web.api.dto.DtoFactory.lambda$createAffectedComponentEntities$73(DtoFactory.java:2485)
> at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> at 
> java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1540)
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
> org.apache.nifi.web.api.dto.DtoFactory.createAffectedComponentEntities(DtoFactory.java:2489)
> at 
> org.apache.nifi.web.api.dto.DtoFactory.createVariableRegistryDto(DtoFactory.java:2507)
> at 
> org.apache.nifi.web.StandardNiFiServiceFacade.lambda$updateVariableRegistry$36(StandardNiFiServiceFacade.java:950)
> at 
> org.apache.nifi.web.StandardNiFiServiceFacade$1.update(StandardNiFiServiceFacade.java:721)
> at 
> org.apache.nifi.web.revision.NaiveRevisionManager.updateRevision(NaiveRevisionManager.java:120)
> at 
> org.apache.nifi.web.StandardNiFiServiceFacade.updateComponent(StandardNiFiServiceFacade.java:712)
> at 
> org.apache.nifi.web.StandardNiFiServiceFacade.updateVariableRegistry(StandardNiFiServiceFacade.java:947)
> at 
> org.apache.nifi.web.StandardNiFiServiceFacade$$FastClassBySpringCGLIB$$358780e0.invoke()
> at 
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
> at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
> at 
> org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:85)
> at 
> org.apache.nifi.web.NiFiServiceFacadeLock.proceedWithWriteLock(NiFiServiceFacadeLock.java:173)
> at 
> 

[jira] [Updated] (NIFI-4795) AllowableValues for AccessPolicySummaryDTO are incorrect

2018-04-26 Thread Joseph Percivall (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Percivall updated NIFI-4795:
---
Affects Version/s: (was: 1.6.0)

> AllowableValues for AccessPolicySummaryDTO are incorrect
> 
>
> Key: NIFI-4795
> URL: https://issues.apache.org/jira/browse/NIFI-4795
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Sébastien Bouchex Bellomié
>Priority: Minor
> Fix For: 1.6.0
>
>
> org.apache.nifi.authorization.ActionEnum defines the lowercase values "read" 
> and "write"; however, org.apache.nifi.web.api.dto.AccessPolicySummaryDTO 
> (action) defines the allowed values as "READ" and "WRITE" (uppercase).
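The mismatch is purely one of case. A small illustrative sketch (not NiFi's actual validation code) of why a case-sensitive comparison against the DTO's documented values rejects the strings the enum really uses:

```python
# Values as defined in the two places the report mentions.
action_enum_values = {"read", "write"}      # ActionEnum (lowercase)
dto_allowable_values = {"READ", "WRITE"}    # AccessPolicySummaryDTO docs

# A case-sensitive check against the DTO's documented values rejects
# the strings the enum actually produces...
assert "read" not in dto_allowable_values

# ...while folding case (or correcting the DTO annotation to lowercase)
# makes the two definitions agree.
assert {v.lower() for v in dto_allowable_values} == action_enum_values
```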



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4893) Cannot convert Avro schemas to Record schemas with default value in arrays

2018-04-26 Thread Joseph Percivall (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Percivall updated NIFI-4893:
---
Affects Version/s: (was: 1.6.0)

> Cannot convert Avro schemas to Record schemas with default value in arrays
> --
>
> Key: NIFI-4893
> URL: https://issues.apache.org/jira/browse/NIFI-4893
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: issue1.zip
>
>
> Given an Avro schema that has a default array value defined, it cannot be 
> converted to a NiFi Record schema.
> To reproduce the bug, try to convert the following Avro schema to a Record 
> schema:
> {code}
> {
>     "type": "record",
>     "name": "Foo1",
>     "namespace": "foo.namespace",
>     "fields": [
>         {
>             "name": "listOfInt",
>             "type": {
>                 "type": "array",
>                 "items": "int"
>             },
>             "doc": "array of ints",
>             "default": 0
>         }
>     ]
> }
> {code}
>  
> The conversion uses the org.apache.nifi.avro.AvroTypeUtil class. A Maven 
> project that reproduces the issue, along with the fix, is attached.
> * To reproduce the bug, run "mvn clean test".
> * To test the fix, run "mvn clean test -Ppatch".
>  
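Note that in the schema above, the field's type is an array but its default is the scalar 0. A rough, hedged sketch of the shape check involved (plain Python, not NiFi's AvroTypeUtil logic):

```python
import json

schema = json.loads("""
{
  "type": "record",
  "name": "Foo1",
  "namespace": "foo.namespace",
  "fields": [
    {"name": "listOfInt",
     "type": {"type": "array", "items": "int"},
     "doc": "array of ints",
     "default": 0}
  ]
}
""")

def default_matches_type(field):
    """Very rough check: an array-typed field needs a list default."""
    ftype = field["type"]
    if isinstance(ftype, dict) and ftype.get("type") == "array":
        return isinstance(field.get("default"), list)
    return True

# The reporter's schema declares default 0 for an array field; this is
# the shape of input that tripped the conversion.
assert not default_matches_type(schema["fields"][0])
```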





[jira] [Updated] (NIFI-5126) QueryDatabaseTable state key leads to unexpected behavior when table name changes

2018-04-26 Thread Nicholas Carenza (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Carenza updated NIFI-5126:
---
Attachment: image-2018-04-26-10-36-45-879.png

> QueryDatabaseTable state key leads to unexpected behavior when table name 
> changes
> -
>
> Key: NIFI-5126
> URL: https://issues.apache.org/jira/browse/NIFI-5126
> Project: Apache NiFi
>  Issue Type: Bug
> Environment: Nifi 1.3.0
>Reporter: Nicholas Carenza
>Priority: Major
> Attachments: image-2018-04-26-10-36-45-879.png
>
>
> I renamed a table in my database and updated my QueryDatabaseTable to match, 
> thinking it would resume from where it left off, but the state variable name 
> changed along with the table name property.
> !image-2018-04-26-10-36-45-879.png!
> So it ended up querying the full table all over again. Can we add some 
> configuration to control this behavior, or remove the table name from the 
> state variable by default? Since the processor can only query one table 
> anyway, I am not sure why the table name needs to be saved to state.
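The behavior described follows from the state key embedding the table name. A sketch of the effect, with a hypothetical key format (illustrative only, not QueryDatabaseTable's exact implementation):

```python
def state_key(table_name, max_value_column):
    # Hypothetical key format; the real keys also embed the table name,
    # which is what causes the reset described above.
    return f"{table_name.lower()}@!@{max_value_column.lower()}"

# State recorded while the table had its old name.
state = {state_key("old_orders", "id"): 41250}

# After renaming the table, the lookup key no longer matches, so the
# processor finds no stored maximum value and re-reads the whole table.
assert state_key("new_orders", "id") not in state
assert state_key("old_orders", "id") in state
```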





[jira] [Created] (NIFI-5126) QueryDatabaseTable state key leads to unexpected behavior when table name changes

2018-04-26 Thread Nicholas Carenza (JIRA)
Nicholas Carenza created NIFI-5126:
--

 Summary: QueryDatabaseTable state key leads to unexpected behavior 
when table name changes
 Key: NIFI-5126
 URL: https://issues.apache.org/jira/browse/NIFI-5126
 Project: Apache NiFi
  Issue Type: Bug
 Environment: Nifi 1.3.0
Reporter: Nicholas Carenza
 Attachments: image-2018-04-26-10-36-45-879.png

I renamed a table in my database and updated my QueryDatabaseTable to match, 
thinking it would resume from where it left off, but the state variable name 
changed along with the table name property.

!image-2018-04-26-10-32-55-718.png!

So it ended up querying the full table all over again. Can we add some 
configuration to control this behavior, or remove the table name from the state 
variable by default? Since the processor can only query one table anyway, 
I am not sure why the table name needs to be saved to state.





[jira] [Updated] (NIFI-5126) QueryDatabaseTable state key leads to unexpected behavior when table name changes

2018-04-26 Thread Nicholas Carenza (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Carenza updated NIFI-5126:
---
Description: 
I renamed a table in my database and updated my QueryDatabaseTable to match, 
thinking it would resume from where it left off, but the state variable name 
changed along with the table name property.

!image-2018-04-26-10-36-45-879.png!

So it ended up querying the full table all over again. Can we add some 
configuration to control this behavior, or remove the table name from the state 
variable by default? Since the processor can only query one table anyway, 
I am not sure why the table name needs to be saved to state.

  was:
I renamed a table in my database and updated my QueryDatabaseTable to match 
thinking it would resume from where it left off but the state variable name 
changed with along with the table name property.

!image-2018-04-26-10-32-55-718.png!

So it ended up querying the full table all over again. Can we add some 
configuration to control this behavior or remove the table name from the state 
variable by default? Since the processor can only query from one table anyways, 
I am not sure why the table name needs to be saved to state.


> QueryDatabaseTable state key leads to unexpected behavior when table name 
> changes
> -
>
> Key: NIFI-5126
> URL: https://issues.apache.org/jira/browse/NIFI-5126
> Project: Apache NiFi
>  Issue Type: Bug
> Environment: Nifi 1.3.0
>Reporter: Nicholas Carenza
>Priority: Major
> Attachments: image-2018-04-26-10-36-45-879.png
>
>
> I renamed a table in my database and updated my QueryDatabaseTable to match, 
> thinking it would resume from where it left off, but the state variable name 
> changed along with the table name property.
> !image-2018-04-26-10-36-45-879.png!
> So it ended up querying the full table all over again. Can we add some 
> configuration to control this behavior, or remove the table name from the 
> state variable by default? Since the processor can only query one table 
> anyway, I am not sure why the table name needs to be saved to state.





[jira] [Updated] (NIFI-4958) Travis job will be successful when Maven build fails + add atlas profile

2018-04-26 Thread Joseph Percivall (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Percivall updated NIFI-4958:
---
Affects Version/s: (was: 1.6.0)

> Travis job will be successful when Maven build fails + add atlas profile
> 
>
> Key: NIFI-4958
> URL: https://issues.apache.org/jira/browse/NIFI-4958
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 1.6.0
>
>
> With 
> [NIFI-4936|https://github.com/apache/nifi/commit/c71409fb5d0a3aef95b05fca9538258d2e2fb907],
>  the output of the build has been reduced, but we lose the exit code of the 
> Maven build command. The profile to build the Atlas bundle is also missing.





[jira] [Updated] (NIFI-5009) PutParquet processor requires "read filesystem" restricted component permission but should be "write filesystem" permission instead

2018-04-26 Thread Joseph Percivall (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Percivall updated NIFI-5009:
---
Affects Version/s: (was: 1.6.0)

> PutParquet processor requires "read filesystem" restricted component 
> permission but should be "write filesystem" permission instead
> ---
>
> Key: NIFI-5009
> URL: https://issues.apache.org/jira/browse/NIFI-5009
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Core UI
>Reporter: Andrew Lim
>Assignee: Matt Gilman
>Priority: Minor
> Fix For: 1.6.0
>
> Attachments: PutParquet_permission.jpg
>
>
> Similar to the other Put*** restricted processors, this is a write 
> processor, so it should require "write filesystem" permissions.
>  





[GitHub] nifi pull request #2616: NIFI-5052 Added DeleteByQuery ElasticSearch process...

2018-04-26 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r184432400
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/java/org/apache/nifi/processors/elasticsearch/DeleteByQueryElasticsearch.java
 ---
@@ -0,0 +1,145 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.elasticsearch.DeleteOperationResponse;
+import org.apache.nifi.elasticsearch.ElasticSearchClientService;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.util.StringUtils;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_ALLOWED)
+@CapabilityDescription("Delete from an ElasticSearch index using a query. 
The query can be loaded from a flowfile body " +
+"or from the Query parameter.")
+@Tags({ "elastic", "elasticsearch", "delete", "query"})
+@WritesAttributes({
+@WritesAttribute(attribute = "elasticsearch.delete.took"),
--- End diff --

These should have descriptions as they will be displayed in the processor 
documentation


---


[jira] [Commented] (NIFI-5052) Create a "delete by query" ElasticSearch processor

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454405#comment-16454405
 ] 

ASF GitHub Bot commented on NIFI-5052:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r184432659
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-client-service/pom.xml
 ---
@@ -120,6 +115,18 @@
 1.7.0-SNAPSHOT
 provided
 
+
+org.elasticsearch
+elasticsearch
+5.6.8
+compile
--- End diff --

Not sure these need to be explicitly scoped? But if it works, no harm no 
foul...


> Create a "delete by query" ElasticSearch processor
> --
>
> Key: NIFI-5052
> URL: https://issues.apache.org/jira/browse/NIFI-5052
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>






[GitHub] nifi pull request #2616: NIFI-5052 Added DeleteByQuery ElasticSearch process...

2018-04-26 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r184433040
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-client-service/src/test/java/org/apache/nifi/elasticsearch/integration/ElasticSearchClientService_IT.java
 ---
@@ -102,4 +129,43 @@ public void testBasicSearch() throws Exception {
 Assert.assertEquals(String.format("%s did not match", key), 
expected.get(key), docCount);
 }
 }
+
+@Test
+public void testDeleteByQuery() throws Exception {
+String query = "{\n" +
--- End diff --

Nitpick, but if ES doesn't require the query be pretty-printed, it might be 
easier to read if it were just a one-line string.


---


[jira] [Commented] (NIFI-5052) Create a "delete by query" ElasticSearch processor

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454403#comment-16454403
 ] 

ASF GitHub Bot commented on NIFI-5052:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r184433040
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-client-service/src/test/java/org/apache/nifi/elasticsearch/integration/ElasticSearchClientService_IT.java
 ---
@@ -102,4 +129,43 @@ public void testBasicSearch() throws Exception {
 Assert.assertEquals(String.format("%s did not match", key), 
expected.get(key), docCount);
 }
 }
+
+@Test
+public void testDeleteByQuery() throws Exception {
+String query = "{\n" +
--- End diff --

Nitpick, but if ES doesn't require the query be pretty-printed, it might be 
easier to read if it were just a one-line string.


> Create a "delete by query" ElasticSearch processor
> --
>
> Key: NIFI-5052
> URL: https://issues.apache.org/jira/browse/NIFI-5052
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>






[jira] [Commented] (NIFI-5052) Create a "delete by query" ElasticSearch processor

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454404#comment-16454404
 ] 

ASF GitHub Bot commented on NIFI-5052:
--

Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r184432400
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-restapi-processors/src/main/java/org/apache/nifi/processors/elasticsearch/DeleteByQueryElasticsearch.java
 ---
@@ -0,0 +1,145 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.elasticsearch;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.elasticsearch.DeleteOperationResponse;
+import org.apache.nifi.elasticsearch.ElasticSearchClientService;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.util.StringUtils;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_ALLOWED)
+@CapabilityDescription("Delete from an ElasticSearch index using a query. 
The query can be loaded from a flowfile body " +
+"or from the Query parameter.")
+@Tags({ "elastic", "elasticsearch", "delete", "query"})
+@WritesAttributes({
+@WritesAttribute(attribute = "elasticsearch.delete.took"),
--- End diff --

These should have descriptions as they will be displayed in the processor 
documentation


> Create a "delete by query" ElasticSearch processor
> --
>
> Key: NIFI-5052
> URL: https://issues.apache.org/jira/browse/NIFI-5052
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>






[GitHub] nifi pull request #2616: NIFI-5052 Added DeleteByQuery ElasticSearch process...

2018-04-26 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2616#discussion_r184432659
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-client-service/pom.xml
 ---
@@ -120,6 +115,18 @@
 1.7.0-SNAPSHOT
 provided
 
+
+org.elasticsearch
+elasticsearch
+5.6.8
+compile
--- End diff --

Not sure these need to be explicitly scoped? But if it works, no harm no 
foul...


---


[GitHub] nifi issue #2662: NIFI-5124: Upgrading commons-fileupload

2018-04-26 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2662
  
Reviewing...


---


[jira] [Commented] (NIFI-5124) Upgrade commons-fileupload

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454500#comment-16454500
 ] 

ASF GitHub Bot commented on NIFI-5124:
--

GitHub user mcgilman opened a pull request:

https://github.com/apache/nifi/pull/2662

NIFI-5124: Upgrading commons-fileupload

NIFI-5124: 
- Upgrading to the latest version of commons-fileupload.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mcgilman/nifi NIFI-5124

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2662.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2662


commit 00a7b96e4ab07a9b0c98849379d13617a097e105
Author: Matt Gilman 
Date:   2018-04-26T16:35:02Z

NIFI-5124:
- Upgrading to the latest version of commons-fileupload.




> Upgrade commons-fileupload
> --
>
> Key: NIFI-5124
> URL: https://issues.apache.org/jira/browse/NIFI-5124
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Major
>
> Would like to upgrade to a newer version of commons-fileupload. The latest 
> version appears to be 1.3.3.





[jira] [Resolved] (NIFI-5120) AbstractListenEventProcessor should support expression language

2018-04-26 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-5120.
--
   Resolution: Fixed
Fix Version/s: 1.7.0

EL support added for the port property

> AbstractListenEventProcessor should support expression language
> ---
>
> Key: NIFI-5120
> URL: https://issues.apache.org/jira/browse/NIFI-5120
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.6.0
> Environment: All
>Reporter: Sébastien Bouchex Bellomié
>Priority: Minor
> Fix For: 1.7.0
>
>
> The current implementation of AbstractListenEventProcessor only supports a 
> fixed value, whereas expression language support would be useful when using 
> properties.





[GitHub] nifi issue #2003: NIFI-4181 CSVReader and CSVRecordSetWriter can be used by ...

2018-04-26 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2003
  
@Wesley-Lawrence is this ready for "final" review? If so, do you mind 
rebasing this PR against the latest master? There are some conflicts listed 
above. Please and thanks!


---


[GitHub] nifi pull request #2509: NIFI-543 Added annotation to indicate processor sho...

2018-04-26 Thread zenfenan
Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2509#discussion_r184442089
  
--- Diff: nifi-docs/src/main/asciidoc/developer-guide.adoc ---
@@ -1751,6 +1751,12 @@ will handle your Processor:
will always be set to `1`. This does *not*, however, mean that 
the Processor does not have to be thread-safe,
as the thread that is executing `onTrigger` may change between 
invocations.
 
+- `PrimaryNodeOnly`: Apache NiFi, when clustered, offers two modes of 
execution for Processors: "Primary Node" and
+"All Nodes". Although running in all the nodes offers better 
parallelism, some Processors are known to cause unintended
+behaviors when run in multiple nodes. For instance, some 
Processors lists or reads files from remote filesystems. If such
--- End diff --

Fixed


---


[jira] [Commented] (NIFI-543) Provide extensions a way to indicate that they can run only on primary node, if clustered

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454441#comment-16454441
 ] 

ASF GitHub Bot commented on NIFI-543:
-

Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2509#discussion_r184442473
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-processor-configuration.js
 ---
@@ -741,8 +742,8 @@
 }
 });
 
-// show the execution node option if we're cluster or 
we're currently configured to run on the primary node only
-if (nfClusterSummary.isClustered() || executionNode 
=== 'PRIMARY') {
+// show the execution node option if we're clustered 
and execution node is not restricted to run only in primary node
+if (nfClusterSummary.isClustered() && 
executionNodeRestricted !== true) {
--- End diff --

Yep. It's better. I've added it and pushed the commit. Appreciate if you 
could test it out and confirm the changes :)


> Provide extensions a way to indicate that they can run only on primary node, 
> if clustered
> -
>
> Key: NIFI-543
> URL: https://issues.apache.org/jira/browse/NIFI-543
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core Framework, Documentation & Website, Extensions
>Reporter: Mark Payne
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>
> There are Processors that are known to be problematic if run from multiple 
> nodes simultaneously. These processors should be able to use a 
> @PrimaryNodeOnly annotation (or something similar) to indicate that they can 
> be scheduled to run only on primary node if run in a cluster.
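The mechanism being discussed can be sketched with plain Java annotations and reflection; the annotation name matches the one proposed above, but the surrounding scheduler check and processor names are hypothetical stand-ins, not the final NiFi API:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class PrimaryNodeOnlyDemo {
    // Hypothetical marker annotation: a processor carrying it may only be
    // scheduled on the cluster's primary node.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface PrimaryNodeOnly {}

    // A listing-style processor that would misbehave on "All Nodes".
    @PrimaryNodeOnly
    static class ListRemoteFiles {}

    // A stateless processor that is safe to run everywhere.
    static class RouteRecords {}

    // What a scheduler would check before offering "All Nodes" execution.
    static boolean restrictedToPrimaryNode(Class<?> processorClass) {
        return processorClass.isAnnotationPresent(PrimaryNodeOnly.class);
    }

    public static void main(String[] args) {
        System.out.println(restrictedToPrimaryNode(ListRemoteFiles.class)); // true
        System.out.println(restrictedToPrimaryNode(RouteRecords.class));    // false
    }
}
```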





[GitHub] nifi pull request #2662: NIFI-5124: Upgrading commons-fileupload

2018-04-26 Thread mcgilman
GitHub user mcgilman opened a pull request:

https://github.com/apache/nifi/pull/2662

NIFI-5124: Upgrading commons-fileupload

NIFI-5124: 
- Upgrading to the latest version of commons-fileupload.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mcgilman/nifi NIFI-5124

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2662.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2662


commit 00a7b96e4ab07a9b0c98849379d13617a097e105
Author: Matt Gilman 
Date:   2018-04-26T16:35:02Z

NIFI-5124:
- Upgrading to the latest version of commons-fileupload.




---


[jira] [Commented] (NIFIREG-162) Add Git backed persistence provider

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454567#comment-16454567
 ] 

ASF GitHub Bot commented on NIFIREG-162:


Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi-registry/pull/112#discussion_r184458388
  
--- Diff: 
nifi-registry-framework/src/main/java/org/apache/nifi/registry/provider/flow/git/GitFlowMetaData.java
 ---
@@ -0,0 +1,384 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.registry.provider.flow.git;
+
+import org.eclipse.jgit.api.Git;
+import org.eclipse.jgit.api.PushCommand;
+import org.eclipse.jgit.api.Status;
+import org.eclipse.jgit.api.errors.GitAPIException;
+import org.eclipse.jgit.api.errors.NoHeadException;
+import org.eclipse.jgit.lib.ObjectId;
+import org.eclipse.jgit.lib.Repository;
+import org.eclipse.jgit.lib.UserConfig;
+import org.eclipse.jgit.revwalk.RevCommit;
+import org.eclipse.jgit.revwalk.RevTree;
+import org.eclipse.jgit.storage.file.FileRepositoryBuilder;
+import org.eclipse.jgit.transport.CredentialsProvider;
+import org.eclipse.jgit.transport.PushResult;
+import org.eclipse.jgit.transport.RemoteConfig;
+import org.eclipse.jgit.transport.UsernamePasswordCredentialsProvider;
+import org.eclipse.jgit.treewalk.TreeWalk;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.yaml.snakeyaml.Yaml;
+
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStreamWriter;
+import java.io.Writer;
+import java.nio.charset.StandardCharsets;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.stream.Collectors;
+
+import static java.lang.String.format;
+import static org.apache.commons.lang3.StringUtils.isEmpty;
+
+class GitFlowMetaData {
+
+static final int CURRENT_LAYOUT_VERSION = 1;
+
+static final String LAYOUT_VERSION = "layoutVer";
+static final String BUCKET_ID = "bucketId";
+static final String FLOWS = "flows";
+static final String VER = "ver";
+static final String FILE = "file";
+static final String BUCKET_FILENAME = "bucket.yml";
+
+private static final Logger logger = 
LoggerFactory.getLogger(GitFlowMetaData.class);
+
+private Repository gitRepo;
+private String remoteToPush;
+private CredentialsProvider credentialsProvider;
+
+/**
+ * Bucket ID to Bucket.
+ */
+private Map buckets = new HashMap<>();
+
+public void setRemoteToPush(String remoteToPush) {
+this.remoteToPush = remoteToPush;
+}
+
+public void setRemoteCredential(String userName, String password) {
+this.credentialsProvider = new 
UsernamePasswordCredentialsProvider(userName, password);
+}
+
+/**
+ * Open a Git repository using the specified directory.
+ * @param gitProjectRootDir a root directory of a Git project
+ * @return created Repository
+ * @throws IOException thrown when the specified directory does not 
exist,
+ * does not have read/write privilege or not containing .git directory
+ */
+private Repository openRepository(final File gitProjectRootDir) throws 
IOException {
+
+// Instead of using 
FileUtils.ensureDirectoryExistAndCanReadAndWrite, check availability manually 
here.
+// Because the util will try to create a dir if not exist.
+// The git dir should be initialized and configured by users.
+if (!gitProjectRootDir.isDirectory()) {
+throw new IOException(format("'%s' is not a directory or does 
not exist.", gitProjectRootDir));
+}
+
+if (!(gitProjectRootDir.canRead() && 

[GitHub] nifi-registry issue #112: NIFIREG-162: Support Git backed PersistenceProvide...

2018-04-26 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/112
  
Been testing this and has been looking good so far...

I was doing a test where I configured to push to a remote, but I didn't 
supply a username/password because I honestly wasn't sure if the push would 
leverage my cached credentials at the OS level. 

So the push failed with an exception like:
```

Caused by: org.eclipse.jgit.errors.TransportException: 
https://github.com/bbende/nifi-versioned-flows.git: Authentication is required 
but no CredentialsProvider has been registered
at 
org.eclipse.jgit.transport.TransportHttp.connect(TransportHttp.java:522) 
~[org.eclipse.jgit-4.11.0.201803080745-r.jar:4.11.0.201803080745-r]
at 
org.eclipse.jgit.transport.TransportHttp.openPush(TransportHttp.java:435) 
~[org.eclipse.jgit-4.11.0.201803080745-r.jar:4.11.0.201803080745-r]
at org.eclipse.jgit.transport.PushProcess.execute(PushProcess.java:160) 
~[org.eclipse.jgit-4.11.0.201803080745-r.jar:4.11.0.201803080745-r]
at org.eclipse.jgit.transport.Transport.push(Transport.java:1344) 
~[org.eclipse.jgit-4.11.0.201803080745-r.jar:4.11.0.201803080745-r]
at org.eclipse.jgit.api.PushCommand.call(PushCommand.java:169) 
~[org.eclipse.jgit-4.11.0.201803080745-r.jar:4.11.0.201803080745-r]
... 109 common frames omitted
```
This makes sense, but the result was that the flow was committed to the 
local repo, but because of the error when pushing, the error was thrown out of 
the REST layer and the response to NiFi indicated that starting version control 
failed. So it was left in a weird state where NiFi now thinks the process group 
is not under version control, but the local repo does have the first version 
saved.

Do you think we should catch any exceptions around the push inside the 
persistence provider and log them, but maybe not throw an error?

That would leave things in a more consistent state, but I'm kind of torn 
because then it may not be obvious to users that the pushes are failing unless 
they look in the registry logs.
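The trade-off described above (keep the local commit, but don't let a failed push fail the whole request) could look roughly like this; `pushToRemote` stands in for the actual JGit push call and is an assumption, not the provider's real method:

```java
import java.util.concurrent.Callable;
import java.util.logging.Level;
import java.util.logging.Logger;

public class PushErrorHandling {
    private static final Logger logger =
            Logger.getLogger(PushErrorHandling.class.getName());

    /**
     * Runs the push for an already-committed snapshot, swallowing failures so
     * the caller's request still succeeds. Returns true if the push worked.
     * The trade-off: the local repo stays consistent with what NiFi believes,
     * but users must watch the registry logs to notice failing pushes.
     */
    static boolean pushQuietly(Callable<Void> pushToRemote) {
        try {
            pushToRemote.call();
            return true;
        } catch (Exception e) {
            logger.log(Level.WARNING,
                    "Push to remote failed; commit remains local only", e);
            return false;
        }
    }

    public static void main(String[] args) {
        boolean ok = pushQuietly(() -> {
            throw new IllegalStateException("no CredentialsProvider");
        });
        System.out.println(ok); // false: failure logged, not rethrown
    }
}
```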


---


[GitHub] nifi-registry issue #112: NIFIREG-162: Support Git backed PersistenceProvide...

2018-04-26 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/112
  
I noticed when I ran "git log" that the commit was made by "anonymous" 
which is correct since I was in an unsecure instance, but the email address of 
the commit ended up using the email from my ~/.gitconfig so I ended up with:

```
nifi-versioned-flows$ git log
commit 153690a2bd06d57ec416cb19b0582e2f7b138771 (HEAD -> master)
Author: anonymous 
Date:   Thu Apr 26 10:58:40 2018 -0400

Test
```
This is technically correct since that is the email address that should be 
found when there is not a more specific one, but should we add a property to 
the provider config like "Commit Email Address"? Or should we just leave it up 
to users to set up their gitconfig appropriately?

Ultimately we won't be able to have per-user email addresses anyway because 
when secured we will be using the identity of the proxied-entity as the author, 
and we have no way of knowing their email address. 
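Resolving the commit email with an optional provider-level override could be sketched like this; the "Commit Email Address" property is only the idea floated above, not an existing config key:

```java
import java.util.Optional;

public class CommitAuthorResolver {
    /**
     * Picks the email to use for a registry commit:
     *   1) an explicit "Commit Email Address" provider property, if configured;
     *   2) otherwise whatever the host's gitconfig supplies (user.email).
     * Author *names* would come from the (possibly proxied) registry identity,
     * so per-user email addresses are not attainable either way.
     */
    static String resolveEmail(Optional<String> providerProperty, String gitConfigEmail) {
        return providerProperty.filter(s -> !s.isBlank()).orElse(gitConfigEmail);
    }

    public static void main(String[] args) {
        // Override configured: use it.
        System.out.println(resolveEmail(Optional.of("flows@example.com"), "me@home.org"));
        // No override: fall back to the gitconfig value.
        System.out.println(resolveEmail(Optional.empty(), "me@home.org"));
    }
}
```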


---


[jira] [Commented] (NIFI-5123) Enable extensions to use SchemaRegistryService

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454423#comment-16454423
 ] 

ASF GitHub Bot commented on NIFI-5123:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2661
  
Ran `contrib-check` and verified using the "Provenance Stream Record 
ReaderWriter XML AVRO JSON CSV (1.5.0+)" template [from the 
wiki](https://cwiki.apache.org/confluence/display/NIFI/Example+Dataflow+Templates).
 Had to update the query to look for `DROP` events rather than `FORK`, but 
everything looks good. Running a full build, if all tests pass, will merge. 


> Enable extensions to use SchemaRegistryService
> --
>
> Key: NIFI-5123
> URL: https://issues.apache.org/jira/browse/NIFI-5123
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> Currently SchemaRegistryService is in the nifi-record-serialization-services 
> NAR, which means that other extensions (unless they had 
> nifi-record-serialization-services-nar as a parent which is not recommended) 
> cannot make use of this abstract class. The class uses utilities from 
> nifi-avro-record-utils such as SchemaAccessUtils, and is a very helpful base 
> class used to offer a consistent user experience for selecting schemas (by 
> name, by text, from a schema registry, etc.). Other extensions wishing to 
> provide access to a schema registry would duplicate much of the logic in 
> SchemaRegistryService, which offers challenges for consistency and 
> maintainability.
> This Jira proposes to move SchemaRegistryService into nifi-avro-record-utils, 
> where it can be leveraged by any extension that depends on it, and thus we 
> can strongly recommend that record-aware processors that will interact with a 
> schema registry use SchemaRegistryService as a parent class.
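After the move, an extension could pull the base class in with an ordinary jar dependency instead of inheriting nifi-record-serialization-services-nar as a NAR parent; roughly (coordinates inferred from the module names in the description, version property illustrative):

```xml
<!-- Illustrative: depend on the utils jar directly rather than using a NAR
     parent, so SchemaRegistryService is available at compile time. -->
<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-avro-record-utils</artifactId>
    <version>${nifi.version}</version>
</dependency>
```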





[jira] [Commented] (NIFI-5123) Enable extensions to use SchemaRegistryService

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454493#comment-16454493
 ] 

ASF GitHub Bot commented on NIFI-5123:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2661
  
+1, merged. 


> Enable extensions to use SchemaRegistryService
> --
>
> Key: NIFI-5123
> URL: https://issues.apache.org/jira/browse/NIFI-5123
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> Currently SchemaRegistryService is in the nifi-record-serialization-services 
> NAR, which means that other extensions (unless they had 
> nifi-record-serialization-services-nar as a parent which is not recommended) 
> cannot make use of this abstract class. The class uses utilities from 
> nifi-avro-record-utils such as SchemaAccessUtils, and is a very helpful base 
> class used to offer a consistent user experience for selecting schemas (by 
> name, by text, from a schema registry, etc.). Other extensions wishing to 
> provide access to a schema registry would duplicate much of the logic in 
> SchemaRegistryService, which offers challenges for consistency and 
> maintainability.
> This Jira proposes to move SchemaRegistryService into nifi-avro-record-utils, 
> where it can be leveraged by any extension that depends on it, and thus we 
> can strongly recommend that record-aware processors that will interact with a 
> schema registry use SchemaRegistryService as a parent class.





[GitHub] nifi pull request #2661: NIFI-5123: Move SchemaRegistryService to nifi-avro-...

2018-04-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2661


---


[jira] [Commented] (NIFI-5123) Enable extensions to use SchemaRegistryService

2018-04-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454491#comment-16454491
 ] 

ASF subversion and git services commented on NIFI-5123:
---

Commit 159b64b4c8bb1b32f6ec9ddba3b98e0faa82c72a in nifi's branch 
refs/heads/master from [~ca9mbu]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=159b64b ]

NIFI-5123: Move SchemaRegistryService to nifi-avro-record-utils

This closes #2661.

Signed-off-by: Andy LoPresto 


> Enable extensions to use SchemaRegistryService
> --
>
> Key: NIFI-5123
> URL: https://issues.apache.org/jira/browse/NIFI-5123
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> Currently SchemaRegistryService is in the nifi-record-serialization-services 
> NAR, which means that other extensions (unless they had 
> nifi-record-serialization-services-nar as a parent which is not recommended) 
> cannot make use of this abstract class. The class uses utilities from 
> nifi-avro-record-utils such as SchemaAccessUtils, and is a very helpful base 
> class used to offer a consistent user experience for selecting schemas (by 
> name, by text, from a schema registry, etc.). Other extensions wishing to 
> provide access to a schema registry would duplicate much of the logic in 
> SchemaRegistryService, which offers challenges for consistency and 
> maintainability.
> This Jira proposes to move SchemaRegistryService into nifi-avro-record-utils, 
> where it can be leveraged by any extension that depends on it, and thus we 
> can strongly recommend that record-aware processors that will interact with a 
> schema registry use SchemaRegistryService as a parent class.





[jira] [Commented] (NIFI-5123) Enable extensions to use SchemaRegistryService

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454494#comment-16454494
 ] 

ASF GitHub Bot commented on NIFI-5123:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2661


> Enable extensions to use SchemaRegistryService
> --
>
> Key: NIFI-5123
> URL: https://issues.apache.org/jira/browse/NIFI-5123
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> Currently SchemaRegistryService is in the nifi-record-serialization-services 
> NAR, which means that other extensions (unless they had 
> nifi-record-serialization-services-nar as a parent which is not recommended) 
> cannot make use of this abstract class. The class uses utilities from 
> nifi-avro-record-utils such as SchemaAccessUtils, and is a very helpful base 
> class used to offer a consistent user experience for selecting schemas (by 
> name, by test, from a schema registry, etc.). Other extensions wishing to 
> provide access to a schema registry would duplicate much of the logic in 
> SchemaRegistryService, which offers challenges for consistency and 
> maintainability.
> This Jira proposes to move SchemaRegistryService into nifi-avro-record-utils, 
> where it can be leveraged by any extension that depends on it, and thus we 
> can strongly recommend that record-aware processors that will interact with a 
> schema registry use SchemaRegistryService as a parent class.





[jira] [Commented] (NIFIREG-162) Add Git backed persistence provider

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454379#comment-16454379
 ] 

ASF GitHub Bot commented on NIFIREG-162:


Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/112
  
I noticed when I ran "git log" that the commit was made by "anonymous" 
which is correct since I was in an unsecure instance, but the email address of 
the commit ended up using the email from my ~/.gitconfig so I ended up with:

```
nifi-versioned-flows$ git log
commit 153690a2bd06d57ec416cb19b0582e2f7b138771 (HEAD -> master)
Author: anonymous 
Date:   Thu Apr 26 10:58:40 2018 -0400

Test
```
This is technically correct since that is the email address that should be 
found when there is not a more specific one, but should we add a property to 
the provider config like "Commit Email Address"? Or should we just leave it up 
to users to set up their gitconfig appropriately?

Ultimately we won't be able to have per-user email addresses anyway because 
when secured we will be using the identity of the proxied-entity as the author, 
and we have no way of knowing their email address. 


> Add Git backed persistence provider
> ---
>
> Key: NIFIREG-162
> URL: https://issues.apache.org/jira/browse/NIFIREG-162
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>Priority: Major
>
> Currently, NiFi Registry provides FileSystemFlowPersistenceProvider, which 
> stores Flow snapshot files in the local file system. It simply manages snapshot 
> versions by creating directories with version numbers.
> While it works, there is also demand for using Git as the version control and 
> persistence mechanism.
> A Git-backed persistence repository would be beneficial in the following aspects:
> * Git is an SCM (Source Control Management) tool that natively manages commits, 
> branches, file diffs, and patches, and provides ways to contribute and apply 
> changes among users
> * Local and remote Git repositories together can form distributed, reliable 
> storage
> * There are several Git repository services on the internet that can serve as 
> remote Git repositories and thus as backup storage
> There are a few things in the current NiFi Registry framework and the existing 
> FileSystemFlowPersistenceProvider that may not be Git friendly:
> * Bucket ids and Flow ids are UUIDs and not human-recognizable; if those 
> files had human-readable names, many Git commands and tools could be used 
> more easily.
> * Currently serialized Flow snapshots are binary files with header bytes and 
> XML-encoded flow contents. If they were a pure ASCII format, Git could provide 
> better diffs between commits, giving a better UX for 
> controlling Flow snapshot versions
> * The NiFi Registry userid, which could be used as the Git commit author, is not 
> available in FlowSnapshotContext
> Also, if we are going to add another Persistence Provider implementation, we 
> need to provide a way to migrate existing persisted files so that they 
> can be used by the new one.





[jira] [Commented] (NIFI-5052) Create a "delete by query" ElasticSearch processor

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454397#comment-16454397
 ] 

ASF GitHub Bot commented on NIFI-5052:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2616
  
Reviewing...


> Create a "delete by query" ElasticSearch processor
> --
>
> Key: NIFI-5052
> URL: https://issues.apache.org/jira/browse/NIFI-5052
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>






[GitHub] nifi pull request #2509: NIFI-543 Added annotation to indicate processor sho...

2018-04-26 Thread zenfenan
Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2509#discussion_r184442473
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-processor-configuration.js
 ---
@@ -741,8 +742,8 @@
 }
 });
 
-// show the execution node option if we're cluster or 
we're currently configured to run on the primary node only
-if (nfClusterSummary.isClustered() || executionNode 
=== 'PRIMARY') {
+// show the execution node option if we're clustered 
and execution node is not restricted to run only in primary node
+if (nfClusterSummary.isClustered() && 
executionNodeRestricted !== true) {
--- End diff --

Yep. It's better. I've added it and pushed the commit. Appreciate if you 
could test it out and confirm the changes :)


---


[jira] [Commented] (NIFI-543) Provide extensions a way to indicate that they can run only on primary node, if clustered

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454436#comment-16454436
 ] 

ASF GitHub Bot commented on NIFI-543:
-

Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2509#discussion_r184442089
  
--- Diff: nifi-docs/src/main/asciidoc/developer-guide.adoc ---
@@ -1751,6 +1751,12 @@ will handle your Processor:
will always be set to `1`. This does *not*, however, mean that 
the Processor does not have to be thread-safe,
as the thread that is executing `onTrigger` may change between 
invocations.
 
+- `PrimaryNodeOnly`: Apache NiFi, when clustered, offers two modes of 
execution for Processors: "Primary Node" and
+"All Nodes". Although running in all the nodes offers better 
parallelism, some Processors are known to cause unintended
+behaviors when run in multiple nodes. For instance, some 
Processors lists or reads files from remote filesystems. If such
--- End diff --

Fixed


> Provide extensions a way to indicate that they can run only on primary node, 
> if clustered
> -
>
> Key: NIFI-543
> URL: https://issues.apache.org/jira/browse/NIFI-543
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core Framework, Documentation & Website, Extensions
>Reporter: Mark Payne
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>
> There are Processors that are known to be problematic if run from multiple 
> nodes simultaneously. These processors should be able to use a 
> @PrimaryNodeOnly annotation (or something similar) to indicate that they can 
> be scheduled to run only on primary node if run in a cluster.





[GitHub] nifi issue #2003: NIFI-4181 CSVReader and CSVRecordSetWriter can be used by ...

2018-04-26 Thread Wesley-Lawrence
Github user Wesley-Lawrence commented on the issue:

https://github.com/apache/nifi/pull/2003
  
@mattyb149 Looks like a lot has changed in the last 9 months. 

The code I added here leveraged classes related to Schema registries, but 
those classes have been moved into a more Avro specific package 
(`nifi-avro-record-utils`), where the CSV stuff is still under a standard 
package (`nifi-standard-record-utils`). Looks like it'll take some work to 
abstract the schema-registry specific stuff away from Avro, so the CSV 
reader/writers can leverage it.

Sadly, I don't have the time to get back deep in NiFi right now, so I'm OK 
with closing this PR so a more updated solution can be worked on.


---


[jira] [Commented] (NIFI-4181) CSVReader and CSVRecordSetWriter services should be able to work given an explicit list of columns.

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454540#comment-16454540
 ] 

ASF GitHub Bot commented on NIFI-4181:
--

Github user Wesley-Lawrence commented on the issue:

https://github.com/apache/nifi/pull/2003
  
@mattyb149 Looks like a lot has changed in the last 9 months. 

The code I added here leveraged classes related to Schema registries, but 
those classes have been moved into a more Avro specific package 
(`nifi-avro-record-utils`), where the CSV stuff is still under a standard 
package (`nifi-standard-record-utils`). Looks like it'll take some work to 
abstract the schema-registry specific stuff away from Avro, so the CSV 
reader/writers can leverage it.

Sadly, I don't have the time to get back deep in NiFi right now, so I'm OK 
with closing this PR so a more updated solution can be worked on.


> CSVReader and CSVRecordSetWriter services should be able to work given an 
> explicit list of columns.
> ---
>
> Key: NIFI-4181
> URL: https://issues.apache.org/jira/browse/NIFI-4181
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Wesley L Lawrence
>Priority: Minor
> Attachments: NIFI-4181.patch
>
>
> Currently, to read or write a CSV file with *Record processors, the CSVReader 
> and CSVRecordSetWriters need to be given an Avro schema. For CSV, a simple 
> column definition can also work.





[GitHub] nifi issue #2616: NIFI-5052 Added DeleteByQuery ElasticSearch processor.

2018-04-26 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2616
  
Reviewing...


---


[jira] [Created] (NIFI-5125) Unable to select Controller Service UUID from Bulletin Board

2018-04-26 Thread Mark Bean (JIRA)
Mark Bean created NIFI-5125:
---

 Summary: Unable to select Controller Service UUID from Bulletin 
Board
 Key: NIFI-5125
 URL: https://issues.apache.org/jira/browse/NIFI-5125
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.6.0
Reporter: Mark Bean


When selecting the linked UUID value of a Controller Service from the Bulletin 
Board, the result is "Error: Unable to find the specified component." The expected 
behavior would be to open the appropriate Controller Services tab (either 
Component or Reporting Tasks).

Perhaps related is the fact that Controller Service UUID values are not 
searchable from the toolbar. This ticket may need to add this capability as 
well.





[GitHub] nifi issue #2661: NIFI-5123: Move SchemaRegistryService to nifi-avro-record-...

2018-04-26 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2661
  
+1, merged. 


---


[jira] [Commented] (NIFIREG-162) Add Git backed persistence provider

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454369#comment-16454369
 ] 

ASF GitHub Bot commented on NIFIREG-162:


Github user bbende commented on the issue:

https://github.com/apache/nifi-registry/pull/112
  
Been testing this and has been looking good so far...

I was doing a test where I configured to push to a remote, but I didn't 
supply a username/password because I honestly wasn't sure if the push would 
leverage my cached credentials at the OS level. 

So the push failed with an exception like:
```

Caused by: org.eclipse.jgit.errors.TransportException: 
https://github.com/bbende/nifi-versioned-flows.git: Authentication is required 
but no CredentialsProvider has been registered
at 
org.eclipse.jgit.transport.TransportHttp.connect(TransportHttp.java:522) 
~[org.eclipse.jgit-4.11.0.201803080745-r.jar:4.11.0.201803080745-r]
at 
org.eclipse.jgit.transport.TransportHttp.openPush(TransportHttp.java:435) 
~[org.eclipse.jgit-4.11.0.201803080745-r.jar:4.11.0.201803080745-r]
at org.eclipse.jgit.transport.PushProcess.execute(PushProcess.java:160) 
~[org.eclipse.jgit-4.11.0.201803080745-r.jar:4.11.0.201803080745-r]
at org.eclipse.jgit.transport.Transport.push(Transport.java:1344) 
~[org.eclipse.jgit-4.11.0.201803080745-r.jar:4.11.0.201803080745-r]
at org.eclipse.jgit.api.PushCommand.call(PushCommand.java:169) 
~[org.eclipse.jgit-4.11.0.201803080745-r.jar:4.11.0.201803080745-r]
... 109 common frames omitted
```
This makes sense, but the result was that the flow was committed to the 
local repo, but because of the error when pushing, the error was thrown out of 
the REST layer and the response to NiFi indicated that starting version control 
failed. So it was left in a weird state where NiFi now thinks the process group 
is not under version control, but the local repo does have the first version 
saved.

Do you think we should catch any exceptions around the push inside the 
persistence provider and log them, but maybe not throw an error?

That would leave things in a more consistent state, but I'm kind of torn 
because then it may not be obvious to users that the pushes are failing unless 
they look in the registry logs.


> Add Git backed persistence provider
> ---
>
> Key: NIFIREG-162
> URL: https://issues.apache.org/jira/browse/NIFIREG-162
> Project: NiFi Registry
>  Issue Type: Improvement
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>Priority: Major
>
> Currently, NiFi Registry provides FileSystemFlowPersistenceProvider, which 
> stores Flow snapshot files into the local file system. It simply manages 
> snapshot versions by creating directories with version numbers.
> While it works, there is also demand for using Git as the version control and 
> persistence mechanism.
> A Git-backed persistence repository would be beneficial in the following aspects:
> * Git is an SCM (Source Control Management) tool that natively manages commits, 
> branches, file diffs, and patches, and provides ways to contribute and apply 
> changes among users
> * Local and remote Git repositories can form distributed, reliable storage
> * There are several Git repository services on the internet that can serve as 
> remote Git repositories and as backup storage
> There are a few things in the current NiFi Registry framework and the existing 
> FileSystemFlowPersistenceProvider that may not be Git friendly:
> * Bucket and Flow ids are UUIDs that are not human readable; if those files 
> had human-readable names, many Git commands and tools would be easier to use.
> * Current serialized Flow snapshots are binary files with header bytes and 
> XML-encoded flow content. If those were a pure ASCII format, Git could provide 
> better diffs between commits, giving a better UX for controlling Flow 
> snapshot versions
> * The NiFi Registry userid, which could be used as the commit author in Git, 
> is not available in FlowSnapshotContext
> Also, if we are going to add another Persistence Provider implementation, we 
> need to provide a way to migrate existing persisted files so that they can be 
> used by the new one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2661: NIFI-5123: Move SchemaRegistryService to nifi-avro-record-...

2018-04-26 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2661
  
Ran `contrib-check` and verified using the "Provenance Stream Record 
ReaderWriter XML AVRO JSON CSV (1.5.0+)" template [from the 
wiki](https://cwiki.apache.org/confluence/display/NIFI/Example+Dataflow+Templates).
 Had to update the query to look for `DROP` events rather than `FORK`, but 
everything looks good. Running a full build; if all tests pass, I will merge. 


---


[jira] [Updated] (NIFI-5123) Enable extensions to use SchemaRegistryService

2018-04-26 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-5123:

Resolution: Fixed
Fix Version/s: 1.7.0
Status: Resolved (was: Patch Available)

> Enable extensions to use SchemaRegistryService
> --
>
> Key: NIFI-5123
> URL: https://issues.apache.org/jira/browse/NIFI-5123
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 1.7.0
>
>
> Currently SchemaRegistryService is in the nifi-record-serialization-services 
> NAR, which means that other extensions (unless they had 
> nifi-record-serialization-services-nar as a parent which is not recommended) 
> cannot make use of this abstract class. The class uses utilities from 
> nifi-avro-record-utils such as SchemaAccessUtils, and is a very helpful base 
> class used to offer a consistent user experience for selecting schemas (by 
> name, by text, from a schema registry, etc.). Other extensions wishing to 
> provide access to a schema registry would duplicate much of the logic in 
> SchemaRegistryService, which offers challenges for consistency and 
> maintainability.
> This Jira proposes to move SchemaRegistryService into nifi-avro-record-utils, 
> where it can be leveraged by any extension that depends on it, and thus we 
> can strongly recommend that record-aware processors that will interact with a 
> schema registry use SchemaRegistryService as a parent class.





[jira] [Commented] (NIFI-5124) Upgrade commons-fileupload

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454502#comment-16454502
 ] 

ASF GitHub Bot commented on NIFI-5124:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2662
  
Reviewing...


> Upgrade commons-fileupload
> --
>
> Key: NIFI-5124
> URL: https://issues.apache.org/jira/browse/NIFI-5124
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Major
>
> Would like to upgrade to a newer version of commons-fileupload. The latest 
> version appears to be 1.3.3.





[GitHub] nifi-registry pull request #112: NIFIREG-162: Support Git backed Persistence...

2018-04-26 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi-registry/pull/112#discussion_r184458388
  
--- Diff: 
nifi-registry-framework/src/main/java/org/apache/nifi/registry/provider/flow/git/GitFlowMetaData.java
 ---
@@ -0,0 +1,384 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.registry.provider.flow.git;
+
+import org.eclipse.jgit.api.Git;
+import org.eclipse.jgit.api.PushCommand;
+import org.eclipse.jgit.api.Status;
+import org.eclipse.jgit.api.errors.GitAPIException;
+import org.eclipse.jgit.api.errors.NoHeadException;
+import org.eclipse.jgit.lib.ObjectId;
+import org.eclipse.jgit.lib.Repository;
+import org.eclipse.jgit.lib.UserConfig;
+import org.eclipse.jgit.revwalk.RevCommit;
+import org.eclipse.jgit.revwalk.RevTree;
+import org.eclipse.jgit.storage.file.FileRepositoryBuilder;
+import org.eclipse.jgit.transport.CredentialsProvider;
+import org.eclipse.jgit.transport.PushResult;
+import org.eclipse.jgit.transport.RemoteConfig;
+import org.eclipse.jgit.transport.UsernamePasswordCredentialsProvider;
+import org.eclipse.jgit.treewalk.TreeWalk;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.yaml.snakeyaml.Yaml;
+
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStreamWriter;
+import java.io.Writer;
+import java.nio.charset.StandardCharsets;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.stream.Collectors;
+
+import static java.lang.String.format;
+import static org.apache.commons.lang3.StringUtils.isEmpty;
+
+class GitFlowMetaData {
+
+static final int CURRENT_LAYOUT_VERSION = 1;
+
+static final String LAYOUT_VERSION = "layoutVer";
+static final String BUCKET_ID = "bucketId";
+static final String FLOWS = "flows";
+static final String VER = "ver";
+static final String FILE = "file";
+static final String BUCKET_FILENAME = "bucket.yml";
+
+private static final Logger logger = 
LoggerFactory.getLogger(GitFlowMetaData.class);
+
+private Repository gitRepo;
+private String remoteToPush;
+private CredentialsProvider credentialsProvider;
+
+/**
+ * Bucket ID to Bucket.
+ */
+private Map buckets = new HashMap<>();
+
+public void setRemoteToPush(String remoteToPush) {
+this.remoteToPush = remoteToPush;
+}
+
+public void setRemoteCredential(String userName, String password) {
+this.credentialsProvider = new 
UsernamePasswordCredentialsProvider(userName, password);
+}
+
+/**
+ * Open a Git repository using the specified directory.
+ * @param gitProjectRootDir a root directory of a Git project
+ * @return created Repository
+ * @throws IOException thrown when the specified directory does not exist,
+ * does not have read/write privileges, or does not contain a .git directory
+ */
+private Repository openRepository(final File gitProjectRootDir) throws 
IOException {
+
+// Instead of using FileUtils.ensureDirectoryExistAndCanReadAndWrite, check
+// availability manually here, because that util will try to create the dir
+// if it does not exist.
+// The git dir should be initialized and configured by users.
+if (!gitProjectRootDir.isDirectory()) {
+throw new IOException(format("'%s' is not a directory or does 
not exist.", gitProjectRootDir));
+}
+
+if (!(gitProjectRootDir.canRead() && 
gitProjectRootDir.canWrite())) {
+throw new IOException(format("Directory '%s' does not have 
read/write privilege.", gitProjectRootDir));
+}
+
+// Search .git dir but avoid searching parent directories.
  

[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453609#comment-16453609
 ] 

ASF GitHub Bot commented on NIFI-4637:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184272408
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/DeleteHBaseCells.java
 ---
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Scanner;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@WritesAttributes({
+@WritesAttribute(attribute = "error.line", description = "The line 
number of the error."),
+@WritesAttribute(attribute = "error.msg", description = "The 
message explaining the error.")
+})
+@Tags({"hbase", "delete", "cell", "cells", "visibility"})
+@CapabilityDescription("This processor allows the user to delete 
individual HBase cells by specifying one or more lines " +
+"in the flowfile content that are a sequence composed of row ID, 
column family, column qualifier and associated visibility labels " +
+"if visibility labels are enabled and in use. A user-defined 
separator is used to separate each of these pieces of data on each " +
+"line, with  being the default separator.")
+public class DeleteHBaseCells extends AbstractDeleteHBase {
+static final PropertyDescriptor SEPARATOR = new 
PropertyDescriptor.Builder()
+.name("delete-hbase-cell-separator")
+.displayName("Separator")
+.description("Each line of the flowfile content is separated 
into components for building a delete using this" +
+"separator. It should be something other than a single 
colon or a comma because these are values that " +
+"are associated with columns and visibility labels 
respectively. To delete a row with ID xyz, column family abc, " +
+"column qualifier def and visibility label PII, 
one would specify xyzabcdefPII given the default " +
+"value")
+.required(true)
+.defaultValue("")
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.build();
+
+static final String ERROR_LINE = "error.line";
+static final String ERROR_MSG  = "error.msg";
+
+@Override
+protected List getSupportedPropertyDescriptors() {
+final List properties = new ArrayList<>();
+properties.add(HBASE_CLIENT_SERVICE);
+properties.add(TABLE_NAME);
+properties.add(SEPARATOR);
+
+return properties;
+}
+
+private FlowFile writeErrorAttributes(int line, String msg, FlowFile 
file, ProcessSession session) {
+file = session.putAttribute(file, ERROR_LINE, 
String.valueOf(line));
+file = session.putAttribute(file, ERROR_MSG, msg != null ? msg : 
"");
+return file;
+}
+
+private 

[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453608#comment-16453608
 ] 

ASF GitHub Bot commented on NIFI-4637:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184271984
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/AbstractPutHBase.java
 ---
@@ -131,6 +133,59 @@ public void onScheduled(final ProcessContext context) {
 clientService = 
context.getProperty(HBASE_CLIENT_SERVICE).asControllerService(HBaseClientService.class);
 }
 
+@Override
+protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
--- End diff --

Thanks for making this use dynamic properties; it became more powerful! 
Would you add some docs on this functionality? You can use the `@DynamicProperty` 
annotation to do so. Please refer to the HBase client service for how it is 
displayed in the docs.

https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-hbase_1_1_2-client-service-nar/1.6.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html
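For reference, a hedged illustration of what that might look like (the wording 
and placeholder names here are assumptions, not the final docs):

```java
// Illustrative fragment only: documents the dynamic
// "visibility.<family>[.<qualifier>]" properties via NiFi's
// @DynamicProperty annotation.
@DynamicProperty(
        name = "visibility.<COLUMN FAMILY> or visibility.<COLUMN FAMILY>.<COLUMN QUALIFIER>",
        value = "visibility label expression",
        description = "Default visibility label expression applied to the named "
                + "column family (or column qualifier) when no more specific "
                + "setting is provided")
```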


> Add support for HBase visibility labels to HBase processors and controller 
> services
> ---
>
> Key: NIFI-4637
> URL: https://issues.apache.org/jira/browse/NIFI-4637
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> HBase supports visibility labels, but you can't use them from NiFi because 
> there is no way to set them. The existing processors and services should be 
> upgraded to handle this capability.





[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453617#comment-16453617
 ] 

ASF GitHub Bot commented on NIFI-4637:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184279789
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
@@ -142,6 +161,16 @@
 .allowableValues(NULL_FIELD_EMPTY, NULL_FIELD_SKIP)
 .build();
 
+protected static final PropertyDescriptor VISIBILITY_RECORD_PATH = new 
PropertyDescriptor.Builder()
+.name("put-hb-rec-visibility-record-path")
+.displayName("Visibility String Record Path Root")
+.description("A record path that points to part of the record 
which contains a path to a mapping of visibility strings to record paths")
+.required(false)
+.addValidator(Validator.VALID)
+.build();
--- End diff --

At first I thought the record path pointed to a record field containing a 
single visibility expression String value. But this expects the record path 
target to be a Map, whose keys specify the column qualifier to apply the 
visibility to and whose values are the visibility expressions.

I propose describing that at least. Moreover, it would be helpful if we 
provide an Additional Details page with a sample input record or JSON, sample 
configurations, and the resulting HBase cells. This feature can be useful but 
fairly complex to use the first time.




[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453616#comment-16453616
 ] 

ASF GitHub Bot commented on NIFI-4637:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184278583
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
@@ -75,6 +83,17 @@
 
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
+protected static final PropertyDescriptor DEFAULT_VISIBILITY_STRING = 
new PropertyDescriptor.Builder()
--- End diff --

Is there any reason not to use `pickVisibilityString` in PutHBaseRecord?




[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453604#comment-16453604
 ] 

ASF GitHub Bot commented on NIFI-4637:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184266326
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/main/java/org/apache/nifi/hbase/HBase_1_1_2_ClientService.java
 ---
@@ -297,6 +305,8 @@ protected Connection createConnection(final 
ConfigurationContext context) throws
 keyTab = credentialsService.getKeytab();
 }
 
+this.principal = principal; //Set so it is usable from 
getLabelsForCurrentUser
--- End diff --

This `principal` field is not used any longer? I couldn't find 
`getLabelsForCurrentUser`.




[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453607#comment-16453607
 ] 

ASF GitHub Bot commented on NIFI-4637:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184269182
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/main/java/org/apache/nifi/hbase/HBase_1_1_2_ClientService.java
 ---
@@ -336,51 +348,86 @@ public void shutdown() {
 }
 }
 
+private static final byte[] EMPTY_VIS_STRING;
+
+static {
+try {
+EMPTY_VIS_STRING = "".getBytes("UTF-8");
+} catch (UnsupportedEncodingException e) {
+throw new RuntimeException(e);
+}
+}
+
+private List buildPuts(byte[] rowKey, List columns) {
+List retVal = new ArrayList<>();
+
+try {
+Put put = null;
+
+for (final PutColumn column : columns) {
+if (put == null || (put.getCellVisibility() == null && 
column.getVisibility() != null) || ( put.getCellVisibility() != null
+&& 
!put.getCellVisibility().getExpression().equals(column.getVisibility())
+)) {
+put = new Put(rowKey);
+
+if (column.getVisibility() != null) {
+put.setCellVisibility(new 
CellVisibility(column.getVisibility()));
+}
+retVal.add(put);
+}
+
+if (column.getTimestamp() != null) {
+put.addColumn(
+column.getColumnFamily(),
+column.getColumnQualifier(),
+column.getTimestamp(),
+column.getBuffer());
+} else {
+put.addColumn(
+column.getColumnFamily(),
+column.getColumnQualifier(),
+column.getBuffer());
+}
+}
+} catch (DeserializationException de) {
+getLogger().error("Error writing cell visibility statement.", 
de);
+throw new RuntimeException(de);
+}
+
+return retVal;
+}
+
 @Override
 public void put(final String tableName, final Collection 
puts) throws IOException {
 try (final Table table = 
connection.getTable(TableName.valueOf(tableName))) {
 // Create one Put per row
 final Map rowPuts = new HashMap<>();
+final Map sorted = new HashMap<>();
+final List newPuts = new ArrayList<>();
+
 for (final PutFlowFile putFlowFile : puts) {
-//this is used for the map key as a byte[] does not work 
as a key.
 final String rowKeyString = new 
String(putFlowFile.getRow(), StandardCharsets.UTF_8);
-Put put = rowPuts.get(rowKeyString);
-if (put == null) {
-put = new Put(putFlowFile.getRow());
-rowPuts.put(rowKeyString, put);
+List columns = sorted.get(rowKeyString);
+if (columns == null) {
+columns = new ArrayList<>();
+sorted.put(rowKeyString, columns);
 }
 
-for (final PutColumn column : putFlowFile.getColumns()) {
-if (column.getTimestamp() != null) {
-put.addColumn(
-column.getColumnFamily(),
-column.getColumnQualifier(),
-column.getTimestamp(),
-column.getBuffer());
-} else {
-put.addColumn(
-column.getColumnFamily(),
-column.getColumnQualifier(),
-column.getBuffer());
-}
-}
+columns.addAll(putFlowFile.getColumns());
+}
+
+for (final Map.Entry entry : 
sorted.entrySet()) {
+
newPuts.addAll(buildPuts(entry.getKey().getBytes(StandardCharsets.UTF_8), 
entry.getValue()));
 }
 
-table.put(new ArrayList<>(rowPuts.values()));
+table.put(new ArrayList<>(newPuts)); 

[GitHub] nifi pull request #2518: NIFI-4637 Added support for visibility labels to th...

2018-04-26 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184266796
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase-client-service-api/src/main/java/org/apache/nifi/hbase/HBaseClientService.java
 ---
@@ -127,6 +139,25 @@
 
 void delete(String tableName, List rowIds) throws IOException;
 
+/**
+ * Deletes a list of cells from HBase. This is intended to be used 
with granular delete operations.
+ *
+ * @param tableName the name of an HBase table.
+ * @param deletes a list of DeleteRequest objects.
+ * @throws IOException thrown when there are communication errors with 
HBase
+ */
+void deleteCells(String tableName, List deletes) throws 
IOException;
+
+/**
+ * Deletes a list of rows in HBase. All cells are deleted.
--- End diff --

The 'All cells are deleted' should be updated to state that only matched 
cells are deleted if target cells have visibility label expression.


---




[GitHub] nifi pull request #2518: NIFI-4637 Added support for visibility labels to th...

2018-04-26 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184288776
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/AbstractPutHBase.java
 ---
@@ -131,6 +133,59 @@ public void onScheduled(final ProcessContext context) {
 clientService = 
context.getProperty(HBASE_CLIENT_SERVICE).asControllerService(HBaseClientService.class);
 }
 
+@Override
+protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+if (propertyDescriptorName.startsWith("visibility.")) {
+String[] parts = propertyDescriptorName.split("\\.");
+String displayName;
+String description;
+
+if (parts.length == 2) {
+displayName = String.format("Column Family %s Default 
Visibility", parts[1]);
+description = String.format("Default visibility setting 
for %s", parts[1]);
+} else if (parts.length == 3) {
+displayName = String.format("Column Qualifier %s.%s 
Default Visibility", parts[1], parts[2]);
+description = String.format("Default visibility setting 
for %s.%s", parts[1], parts[2]);
+} else {
+return null;
+}
+
+return new PropertyDescriptor.Builder()
+.name(propertyDescriptorName)
+.displayName(displayName)
+.description(description)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.dynamic(true)
+.build();
+}
+
+return null;
+}
+
+protected String pickVisibilityString(String columnFamily, String 
columnQualifier, FlowFile flowFile, ProcessContext context) {
+final String lookupKey = String.format("visibility.%s.%s", 
columnFamily, columnQualifier);
+final String fromAttribute = flowFile.getAttribute(lookupKey);
+if (fromAttribute != null) {
+return fromAttribute;
+} else {
+PropertyValue descriptor = context.getProperty(lookupKey);
+if (descriptor == null) {
+descriptor = 
context.getProperty(String.format("visibility.%s", columnFamily));
+}
+
+String retVal = descriptor != null ? descriptor.getValue() : 
null;
+
+return retVal;
+}
+}
+
+protected String pickVisibilityString(String defaultVisibilityString, 
String columnFamily, String columnQualifier, FlowFile flowFile) {
--- End diff --

This method is not used any longer. Please remove it.


---
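To restate the lookup precedence in the first `pickVisibilityString` above, here 
is a self-contained sketch (plain `Map`s stand in for the FlowFile attributes 
and ProcessContext properties; `VisibilityLookup` is a made-up name):

```java
import java.util.Map;

// Hedged sketch of the lookup order the reviewed pickVisibilityString uses:
// 1) flowfile attribute "visibility.<family>.<qualifier>",
// 2) dynamic property "visibility.<family>.<qualifier>",
// 3) dynamic property "visibility.<family>" (family-level default).
class VisibilityLookup {

    static String pick(String family, String qualifier,
                       Map<String, String> attributes, Map<String, String> properties) {
        final String key = String.format("visibility.%s.%s", family, qualifier);
        final String fromAttribute = attributes.get(key);
        if (fromAttribute != null) {
            return fromAttribute;  // per-flowfile attribute wins
        }
        String value = properties.get(key);
        if (value == null) {
            value = properties.get("visibility." + family);  // family-level default
        }
        return value;  // may be null when nothing is configured
    }
}
```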


[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453611#comment-16453611
 ] 

ASF GitHub Bot commented on NIFI-4637:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184272910
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/DeleteHBaseCells.java
 ---
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Scanner;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@WritesAttributes({
+@WritesAttribute(attribute = "error.line", description = "The line 
number of the error."),
+@WritesAttribute(attribute = "error.msg", description = "The 
message explaining the error.")
+})
+@Tags({"hbase", "delete", "cell", "cells", "visibility"})
+@CapabilityDescription("This processor allows the user to delete 
individual HBase cells by specifying one or more lines " +
+"in the flowfile content that are a sequence composed of row ID, 
column family, column qualifier and associated visibility labels " +
+"if visibility labels are enabled and in use. A user-defined 
separator is used to separate each of these pieces of data on each " +
+"line, with  being the default separator.")
+public class DeleteHBaseCells extends AbstractDeleteHBase {
--- End diff --

Do you think it would be helpful if this processor and DeleteHBaseRow 
supported a default visibility label expression, as the PutHBaseXX processors do?


> Add support for HBase visibility labels to HBase processors and controller 
> services
> ---
>
> Key: NIFI-4637
> URL: https://issues.apache.org/jira/browse/NIFI-4637
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> HBase supports visibility labels, but you can't use them from NiFi because 
> there is no way to set them. The existing processors and services should be 
> upgraded to handle this capability.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453618#comment-16453618
 ] 

ASF GitHub Bot commented on NIFI-4637:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184287966
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/AbstractPutHBase.java
 ---
@@ -131,6 +133,59 @@ public void onScheduled(final ProcessContext context) {
 clientService = 
context.getProperty(HBASE_CLIENT_SERVICE).asControllerService(HBaseClientService.class);
 }
 
+@Override
+protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+if (propertyDescriptorName.startsWith("visibility.")) {
+String[] parts = propertyDescriptorName.split("\\.");
+String displayName;
+String description;
+
+if (parts.length == 2) {
+displayName = String.format("Column Family %s Default 
Visibility", parts[1]);
+description = String.format("Default visibility setting 
for %s", parts[1]);
+} else if (parts.length == 3) {
+displayName = String.format("Column Qualifier %s.%s 
Default Visibility", parts[1], parts[2]);
+description = String.format("Default visibility setting 
for %s.%s", parts[1], parts[2]);
+} else {
+return null;
+}
+
+return new PropertyDescriptor.Builder()
+.name(propertyDescriptorName)
+.displayName(displayName)
+.description(description)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.dynamic(true)
+.build();
+}
+
+return null;
+}
+
+protected String pickVisibilityString(String columnFamily, String 
columnQualifier, FlowFile flowFile, ProcessContext context) {
+final String lookupKey = String.format("visibility.%s.%s", 
columnFamily, columnQualifier);
+final String fromAttribute = flowFile.getAttribute(lookupKey);
+if (fromAttribute != null) {
+return fromAttribute;
+} else {
+PropertyValue descriptor = context.getProperty(lookupKey);
+if (descriptor == null) {
--- End diff --

This condition does not work as expected, because even if only 
'visibility.family' is defined, `context.getProperty(visibility.family.qualifier)` 
returns a non-null object whose value is null. Please change this to 
`(descriptor == null || !descriptor.isSet())`. Also please add some unit tests.
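The failure mode described above can be illustrated with a minimal, self-contained sketch. The `PropertyValue` and `getProperty` below are stand-in stubs, not the real NiFi API; they only mimic the behavior the reviewer points out (a non-null wrapper whose inner value is null):

```java
import java.util.HashMap;
import java.util.Map;

public class VisibilityFallbackSketch {
    // Stand-in stub for org.apache.nifi.components.PropertyValue.
    static class PropertyValue {
        private final String value;
        PropertyValue(String value) { this.value = value; }
        String getValue() { return value; }
        boolean isSet() { return value != null; }
    }

    // Stand-in for ProcessContext.getProperty: it always returns a wrapper,
    // with a null inner value when the property was never defined. This is
    // why a plain `descriptor == null` check never triggers the fallback.
    static PropertyValue getProperty(Map<String, String> props, String name) {
        return new PropertyValue(props.get(name));
    }

    static String pickVisibility(Map<String, String> props, String family, String qualifier) {
        PropertyValue descriptor = getProperty(props, "visibility." + family + "." + qualifier);
        if (descriptor == null || !descriptor.isSet()) { // the suggested fix
            descriptor = getProperty(props, "visibility." + family);
        }
        return descriptor != null ? descriptor.getValue() : null;
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("visibility.family", "PII");
        // Only the family-level default exists; the qualifier-level lookup
        // yields an unset wrapper, so we must fall back to the family default.
        System.out.println(pickVisibility(props, "family", "qualifier")); // prints "PII"
    }
}
```

With the original `descriptor == null` check, the unset qualifier-level wrapper would be returned and the family-level default silently ignored.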




[GitHub] nifi pull request #2518: NIFI-4637 Added support for visibility labels to th...

2018-04-26 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184269182
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/main/java/org/apache/nifi/hbase/HBase_1_1_2_ClientService.java
 ---
@@ -336,51 +348,86 @@ public void shutdown() {
 }
 }
 
+private static final byte[] EMPTY_VIS_STRING;
+
+static {
+try {
+EMPTY_VIS_STRING = "".getBytes("UTF-8");
+} catch (UnsupportedEncodingException e) {
+throw new RuntimeException(e);
+}
+}
+
+private List<Put> buildPuts(byte[] rowKey, List<PutColumn> columns) {
+List<Put> retVal = new ArrayList<>();
+
+try {
+Put put = null;
+
+for (final PutColumn column : columns) {
+if (put == null || (put.getCellVisibility() == null && 
column.getVisibility() != null) || ( put.getCellVisibility() != null
+&& 
!put.getCellVisibility().getExpression().equals(column.getVisibility())
+)) {
+put = new Put(rowKey);
+
+if (column.getVisibility() != null) {
+put.setCellVisibility(new 
CellVisibility(column.getVisibility()));
+}
+retVal.add(put);
+}
+
+if (column.getTimestamp() != null) {
+put.addColumn(
+column.getColumnFamily(),
+column.getColumnQualifier(),
+column.getTimestamp(),
+column.getBuffer());
+} else {
+put.addColumn(
+column.getColumnFamily(),
+column.getColumnQualifier(),
+column.getBuffer());
+}
+}
+} catch (DeserializationException de) {
+getLogger().error("Error writing cell visibility statement.", 
de);
+throw new RuntimeException(de);
+}
+
+return retVal;
+}
+
 @Override
 public void put(final String tableName, final Collection<PutFlowFile> puts) throws IOException {
 try (final Table table = 
connection.getTable(TableName.valueOf(tableName))) {
 // Create one Put per row
 final Map<String, Put> rowPuts = new HashMap<>();
+final Map<String, List<PutColumn>> sorted = new HashMap<>();
+final List<Put> newPuts = new ArrayList<>();
+
 for (final PutFlowFile putFlowFile : puts) {
-//this is used for the map key as a byte[] does not work 
as a key.
 final String rowKeyString = new 
String(putFlowFile.getRow(), StandardCharsets.UTF_8);
-Put put = rowPuts.get(rowKeyString);
-if (put == null) {
-put = new Put(putFlowFile.getRow());
-rowPuts.put(rowKeyString, put);
+List<PutColumn> columns = sorted.get(rowKeyString);
+if (columns == null) {
+columns = new ArrayList<>();
+sorted.put(rowKeyString, columns);
 }
 
-for (final PutColumn column : putFlowFile.getColumns()) {
-if (column.getTimestamp() != null) {
-put.addColumn(
-column.getColumnFamily(),
-column.getColumnQualifier(),
-column.getTimestamp(),
-column.getBuffer());
-} else {
-put.addColumn(
-column.getColumnFamily(),
-column.getColumnQualifier(),
-column.getBuffer());
-}
-}
+columns.addAll(putFlowFile.getColumns());
+}
+
+for (final Map.Entry<String, List<PutColumn>> entry : sorted.entrySet()) {
+
newPuts.addAll(buildPuts(entry.getKey().getBytes(StandardCharsets.UTF_8), 
entry.getValue()));
 }
 
-table.put(new ArrayList<>(rowPuts.values()));
+table.put(new ArrayList<>(newPuts)); /*rowPuts.values()));*/
--- End diff --

Do we have to wrap it with a new ArrayList here? If not, let's just pass 
`newPuts` to avoid unnecessary object creation.


---


[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453610#comment-16453610
 ] 

ASF GitHub Bot commented on NIFI-4637:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184267868
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/pom.xml
 ---
@@ -25,7 +25,7 @@
 nifi-hbase_1_1_2-client-service
 jar
 
-1.1.2
+1.1.13
--- End diff --

@bbende Is this gap between here and the nar name going to be an issue 
somewhere? Do we need to create nifi-hbase_1_1_13-client-service?




[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453620#comment-16453620
 ] 

ASF GitHub Bot commented on NIFI-4637:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184276347
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseCell.java
 ---
@@ -82,6 +79,8 @@ protected PutFlowFile createPut(final ProcessSession 
session, final ProcessConte
 final String columnQualifier = 
context.getProperty(COLUMN_QUALIFIER).evaluateAttributeExpressions(flowFile).getValue();
 final String timestampValue = 
context.getProperty(TIMESTAMP).evaluateAttributeExpressions(flowFile).getValue();
 
+final String visibilityStringToUse = 
pickVisibilityString(columnFamily, columnQualifier, flowFile, context);
--- End diff --

Since PutHBaseCell mutates only one cell, it would be more intuitive to 
provide a configuration property that directly specifies a visibility label 
expression, IN ADDITION to the defaults set via dynamic properties. Thoughts?




[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453612#comment-16453612
 ] 

ASF GitHub Bot commented on NIFI-4637:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184274827
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---
@@ -381,6 +384,7 @@ public void onTrigger(ProcessContext context, 
ProcessSession session) throws Pro
 limitRows,
 isReversed,
 columns,
+authorizations,
--- End diff --

Do you know if the HBase scan API lets the ResultHandler get the visibility 
labels set on the cells retrieved by this scan? If so, do we want to include 
them in the output contents? I think if HBase returns them, we can embed them in 
RowSerializer implementations such as JsonFullRowSerializer.

It would make a scan -> delete flow easier.




[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453603#comment-16453603
 ] 

ASF GitHub Bot commented on NIFI-4637:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184265359
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/DeleteHBaseCells.java
 ---
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Scanner;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@WritesAttributes({
+@WritesAttribute(attribute = "error.line", description = "The line 
number of the error."),
+@WritesAttribute(attribute = "error.msg", description = "The 
message explaining the error.")
+})
+@Tags({"hbase", "delete", "cell", "cells", "visibility"})
+@CapabilityDescription("This processor allows the user to delete 
individual HBase cells by specifying one or more lines " +
+"in the flowfile content that are a sequence composed of row ID, 
column family, column qualifier and associated visibility labels " +
+"if visibility labels are enabled and in use. A user-defined 
separator is used to separate each of these pieces of data on each " +
+"line, with  being the default separator.")
+public class DeleteHBaseCells extends AbstractDeleteHBase {
+static final PropertyDescriptor SEPARATOR = new 
PropertyDescriptor.Builder()
+.name("delete-hbase-cell-separator")
+.displayName("Separator")
+.description("Each line of the flowfile content is separated 
into components for building a delete using this" +
+"separator. It should be something other than a single 
colon or a comma because these are values that " +
+"are associated with columns and visibility labels 
respectively. To delete a row with ID xyz, column family abc, " +
+"column qualifier def and visibility label PII, 
one would specify xyzabcdefPII given the default " +
+"value")
+.required(true)
+.defaultValue("")
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.build();
+
+static final String ERROR_LINE = "error.line";
+static final String ERROR_MSG  = "error.msg";
+
+@Override
+protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+final List<PropertyDescriptor> properties = new ArrayList<>();
+properties.add(HBASE_CLIENT_SERVICE);
+properties.add(TABLE_NAME);
+properties.add(SEPARATOR);
+
+return properties;
+}
+
+private FlowFile writeErrorAttributes(int line, String msg, FlowFile 
file, ProcessSession session) {
+file = session.putAttribute(file, ERROR_LINE, 
String.valueOf(line));
+file = session.putAttribute(file, ERROR_MSG, msg != null ? msg : 
"");
+return file;
+}
+
+private 
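The separator-based line format that DeleteHBaseCells consumes (described in the capability description above) can be sketched as a simple split. Note the separator value itself was lost in this archive, so the `"::::"` below is a hypothetical value used purely for illustration:

```java
import java.util.regex.Pattern;

public class DeleteLineSketch {
    // Splits one flowfile line into its delete components:
    // rowId<sep>family<sep>qualifier[<sep>visibility].
    // Pattern.quote() keeps regex metacharacters in the separator literal.
    static String[] parseLine(String line, String separator) {
        return line.split(Pattern.quote(separator));
    }

    public static void main(String[] args) {
        String sep = "::::"; // hypothetical separator for illustration
        String[] parts = parseLine("xyz" + sep + "abc" + sep + "def" + sep + "PII", sep);
        // parts: rowId=xyz, family=abc, qualifier=def, visibility=PII
        System.out.println(parts.length); // prints "4"
        System.out.println(parts[0]);     // prints "xyz"
    }
}
```

This also shows why the processor warns against a single colon or comma as the separator: those characters routinely appear inside column specifications and visibility label expressions, so splitting on them would break the fields apart.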

[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453615#comment-16453615
 ] 

ASF GitHub Bot commented on NIFI-4637:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184277876
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
@@ -368,9 +412,18 @@ protected PutFlowFile createPut(ProcessContext 
context, Record record, RecordSch
 timestamp = null;
 }
 
+RecordField visField = null;
+Map visSettings = null;
+if (recordPath != null) {
+final RecordPathResult result = 
recordPath.evaluate(record);
+FieldValue fv = 
result.getSelectedFields().findFirst().get();
+visField = fv.getField();
+visSettings = (Map)fv.getValue();
+}
+
 List columns = new ArrayList<>();
 for (String name : schema.getFieldNames()) {
-if (name.equals(rowFieldName) || 
name.equals(timestampFieldName)) {
+if (name.equals(rowFieldName) || name.equals(timestampFieldName) || (visField != null && name.equals(visField.getFieldName()))) {
--- End diff --

Good stuff, I'd expect PutHBaseJson to have the same capability.




[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453621#comment-16453621
 ] 

ASF GitHub Bot commented on NIFI-4637:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184288776
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/AbstractPutHBase.java
 ---
@@ -131,6 +133,59 @@ public void onScheduled(final ProcessContext context) {
 clientService = 
context.getProperty(HBASE_CLIENT_SERVICE).asControllerService(HBaseClientService.class);
 }
 
+@Override
+protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+if (propertyDescriptorName.startsWith("visibility.")) {
+String[] parts = propertyDescriptorName.split("\\.");
+String displayName;
+String description;
+
+if (parts.length == 2) {
+displayName = String.format("Column Family %s Default 
Visibility", parts[1]);
+description = String.format("Default visibility setting 
for %s", parts[1]);
+} else if (parts.length == 3) {
+displayName = String.format("Column Qualifier %s.%s 
Default Visibility", parts[1], parts[2]);
+description = String.format("Default visibility setting 
for %s.%s", parts[1], parts[2]);
+} else {
+return null;
+}
+
+return new PropertyDescriptor.Builder()
+.name(propertyDescriptorName)
+.displayName(displayName)
+.description(description)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.dynamic(true)
+.build();
+}
+
+return null;
+}
+
+protected String pickVisibilityString(String columnFamily, String 
columnQualifier, FlowFile flowFile, ProcessContext context) {
+final String lookupKey = String.format("visibility.%s.%s", 
columnFamily, columnQualifier);
+final String fromAttribute = flowFile.getAttribute(lookupKey);
+if (fromAttribute != null) {
+return fromAttribute;
+} else {
+PropertyValue descriptor = context.getProperty(lookupKey);
+if (descriptor == null) {
+descriptor = 
context.getProperty(String.format("visibility.%s", columnFamily));
+}
+
+String retVal = descriptor != null ? descriptor.getValue() : 
null;
+
+return retVal;
+}
+}
+
+protected String pickVisibilityString(String defaultVisibilityString, 
String columnFamily, String columnQualifier, FlowFile flowFile) {
--- End diff --

This method is not used any longer. Please remove it.




[GitHub] nifi pull request #2518: NIFI-4637 Added support for visibility labels to th...

2018-04-26 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184268871
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/main/java/org/apache/nifi/hbase/HBase_1_1_2_ClientService.java
 ---
@@ -336,51 +348,86 @@ public void shutdown() {
 }
 }
 
+private static final byte[] EMPTY_VIS_STRING;
+
+static {
+try {
+EMPTY_VIS_STRING = "".getBytes("UTF-8");
+} catch (UnsupportedEncodingException e) {
+throw new RuntimeException(e);
+}
+}
+
+private List<Put> buildPuts(byte[] rowKey, List<PutColumn> columns) {
+List<Put> retVal = new ArrayList<>();
+
+try {
+Put put = null;
+
+for (final PutColumn column : columns) {
+if (put == null || (put.getCellVisibility() == null && 
column.getVisibility() != null) || ( put.getCellVisibility() != null
+&& 
!put.getCellVisibility().getExpression().equals(column.getVisibility())
+)) {
+put = new Put(rowKey);
+
+if (column.getVisibility() != null) {
+put.setCellVisibility(new 
CellVisibility(column.getVisibility()));
+}
+retVal.add(put);
+}
+
+if (column.getTimestamp() != null) {
+put.addColumn(
+column.getColumnFamily(),
+column.getColumnQualifier(),
+column.getTimestamp(),
+column.getBuffer());
+} else {
+put.addColumn(
+column.getColumnFamily(),
+column.getColumnQualifier(),
+column.getBuffer());
+}
+}
+} catch (DeserializationException de) {
+getLogger().error("Error writing cell visibility statement.", 
de);
+throw new RuntimeException(de);
+}
+
+return retVal;
+}
+
 @Override
 public void put(final String tableName, final Collection<PutFlowFile> puts) throws IOException {
 try (final Table table = 
connection.getTable(TableName.valueOf(tableName))) {
 // Create one Put per row
 final Map<String, Put> rowPuts = new HashMap<>();
--- End diff --

This `rowPuts` is not used any longer. Please remove it.


---


[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453619#comment-16453619
 ] 

ASF GitHub Bot commented on NIFI-4637:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184285304
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/AbstractPutHBase.java
 ---
@@ -131,6 +133,59 @@ public void onScheduled(final ProcessContext context) {
 clientService = 
context.getProperty(HBASE_CLIENT_SERVICE).asControllerService(HBaseClientService.class);
 }
 
+@Override
+protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+if (propertyDescriptorName.startsWith("visibility.")) {
+String[] parts = propertyDescriptorName.split("\\.");
+String displayName;
+String description;
+
+if (parts.length == 2) {
+displayName = String.format("Column Family %s Default 
Visibility", parts[1]);
+description = String.format("Default visibility setting 
for %s", parts[1]);
+} else if (parts.length == 3) {
+displayName = String.format("Column Qualifier %s.%s 
Default Visibility", parts[1], parts[2]);
+description = String.format("Default visibility setting 
for %s.%s", parts[1], parts[2]);
+} else {
+return null;
+}
+
+return new PropertyDescriptor.Builder()
+.name(propertyDescriptorName)
+.displayName(displayName)
+.description(description)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.dynamic(true)
+.build();
--- End diff --

These properties should be able to support EL from incoming FlowFiles, in 
case a user would like to define a default visibility based on FlowFile content 
type, etc.






[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453614#comment-16453614
 ] 

ASF GitHub Bot commented on NIFI-4637:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184275993
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/AbstractPutHBase.java
 ---
@@ -131,6 +133,59 @@ public void onScheduled(final ProcessContext context) {
 clientService = 
context.getProperty(HBASE_CLIENT_SERVICE).asControllerService(HBaseClientService.class);
 }
 
+@Override
+protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+if (propertyDescriptorName.startsWith("visibility.")) {
+String[] parts = propertyDescriptorName.split("\\.");
+String displayName;
+String description;
+
+if (parts.length == 2) {
+displayName = String.format("Column Family %s Default 
Visibility", parts[1]);
+description = String.format("Default visibility setting 
for %s", parts[1]);
+} else if (parts.length == 3) {
+displayName = String.format("Column Qualifier %s.%s 
Default Visibility", parts[1], parts[2]);
+description = String.format("Default visibility setting 
for %s.%s", parts[1], parts[2]);
+} else {
+return null;
+}
+
+return new PropertyDescriptor.Builder()
+.name(propertyDescriptorName)
+.displayName(displayName)
+.description(description)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.dynamic(true)
+.build();
+}
+
+return null;
+}
+
+protected String pickVisibilityString(String columnFamily, String 
columnQualifier, FlowFile flowFile, ProcessContext context) {
+final String lookupKey = String.format("visibility.%s.%s", 
columnFamily, columnQualifier);
--- End diff --

`visibility.%s.%s` will be `visibility.f1.` if the column family is `f1` and 
columnQualifier is null. I think it should be `visibility.f1`, without the 
trailing dot, in that case. What do you think?
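The suggested fix amounts to choosing the format string based on whether a qualifier is present. A minimal sketch (the method name `lookupKey` is illustrative, not from the PR):

```java
public class LookupKeySketch {
    // Builds the dynamic-property lookup key, dropping the trailing dot when
    // the qualifier is null, as suggested in the review.
    static String lookupKey(String family, String qualifier) {
        return qualifier == null
                ? String.format("visibility.%s", family)
                : String.format("visibility.%s.%s", family, qualifier);
    }

    public static void main(String[] args) {
        System.out.println(lookupKey("f1", null));  // prints "visibility.f1"
        System.out.println(lookupKey("f1", "q1"));  // prints "visibility.f1.q1"
    }
}
```

Without the null check, the family-level key would come out as `visibility.f1.` and never match the `visibility.f1` dynamic property the user actually defined.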




[jira] [Commented] (MINIFICPP-475) Allow preferred image dimensions/mode to be set in GetUSBCamera properties

2018-04-26 Thread JIRA

[ 
https://issues.apache.org/jira/browse/MINIFICPP-475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453576#comment-16453576
 ] 

Iyán Méndez Veiga commented on MINIFICPP-475:
-

Yes, some additional optional properties could be helpful. I would suggest having a look at fswebcam:

[https://github.com/fsphil/fswebcam]

[https://man.cx/fswebcam%281%29]

Some properties that I used and found useful are:
 * resolution
 * palette
 * fps
 * frames
 * skip
 * delay

Regarding the frames and skip options: it would be useful to take some pictures and drop them before sending them to the next processor, because some webcams need a few shots to focus and to calibrate brightness, color temperature, etc.

> Allow preferred image dimensions/mode to be set in GetUSBCamera properties
> --
>
> Key: MINIFICPP-475
> URL: https://issues.apache.org/jira/browse/MINIFICPP-475
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
>
> GetUSBCamera currently selects the highest-quality image format for a given 
> FPS. This optimizes for image quality, but can be suboptimal for performance 
> on embedded devices where users may need to have low FPS and low/small image 
> quality.
> Add additional optional properties to GetUSBCamera to allow specification of 
> preferred image dimensions/quality, and have this override automatic 
> selection if the properties are set.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2518: NIFI-4637 Added support for visibility labels to th...

2018-04-26 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184278583
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
@@ -75,6 +83,17 @@
 
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
+protected static final PropertyDescriptor DEFAULT_VISIBILITY_STRING = new PropertyDescriptor.Builder()
--- End diff --

Is there any reason not to use `pickVisibilityString` in PutHBaseRecord?


---


[GitHub] nifi pull request #2518: NIFI-4637 Added support for visibility labels to th...

2018-04-26 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184265359
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/DeleteHBaseCells.java
 ---
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Scanner;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@WritesAttributes({
+    @WritesAttribute(attribute = "error.line", description = "The line number of the error."),
+    @WritesAttribute(attribute = "error.msg", description = "The message explaining the error.")
+})
+@Tags({"hbase", "delete", "cell", "cells", "visibility"})
+@CapabilityDescription("This processor allows the user to delete individual HBase cells by specifying one or more lines " +
+    "in the flowfile content that are a sequence composed of row ID, column family, column qualifier and associated visibility labels " +
+    "if visibility labels are enabled and in use. A user-defined separator is used to separate each of these pieces of data on each " +
+    "line, with  being the default separator.")
+public class DeleteHBaseCells extends AbstractDeleteHBase {
+    static final PropertyDescriptor SEPARATOR = new PropertyDescriptor.Builder()
+        .name("delete-hbase-cell-separator")
+        .displayName("Separator")
+        .description("Each line of the flowfile content is separated into components for building a delete using this" +
+            "separator. It should be something other than a single colon or a comma because these are values that " +
+            "are associated with columns and visibility labels respectively. To delete a row with ID xyz, column family abc, " +
+            "column qualifier def and visibility label PII, one would specify xyzabcdefPII given the default " +
+            "value")
+        .required(true)
+        .defaultValue("")
+        .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+        .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+        .build();
+
+    static final String ERROR_LINE = "error.line";
+    static final String ERROR_MSG  = "error.msg";
+
+    @Override
+    protected List getSupportedPropertyDescriptors() {
+        final List properties = new ArrayList<>();
+        properties.add(HBASE_CLIENT_SERVICE);
+        properties.add(TABLE_NAME);
+        properties.add(SEPARATOR);
+
+        return properties;
+    }
+
+    private FlowFile writeErrorAttributes(int line, String msg, FlowFile file, ProcessSession session) {
+        file = session.putAttribute(file, ERROR_LINE, String.valueOf(line));
+        file = session.putAttribute(file, ERROR_MSG, msg != null ? msg : "");
+        return file;
+    }
+
+    private void logCell(String rowId, String family, String column, String visibility) {
+        StringBuilder sb = new StringBuilder()
+            .append("Assembling cell delete for...\t")
+            .append(String.format("Row 

[GitHub] nifi pull request #2518: NIFI-4637 Added support for visibility labels to th...

2018-04-26 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184279789
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
@@ -142,6 +161,16 @@
 .allowableValues(NULL_FIELD_EMPTY, NULL_FIELD_SKIP)
 .build();
 
+protected static final PropertyDescriptor VISIBILITY_RECORD_PATH = new PropertyDescriptor.Builder()
+    .name("put-hb-rec-visibility-record-path")
+    .displayName("Visibility String Record Path Root")
+    .description("A record path that points to part of the record which contains a path to a mapping of visibility strings to record paths")
+    .required(false)
+    .addValidator(Validator.VALID)
+    .build();
--- End diff --

At first I thought the record path was for pointing at a record field containing a single visibility expression String value. But this expects the record path target to be a Map whose keys specify the column qualifier to apply the visibility to and whose values are the visibility expressions.

I propose describing that at least. Moreover, it would be helpful if we provided an Additional Details page with a sample input record or JSON, sample configurations, and the resulting HBase cells. This feature can be useful but fairly complex to use at first.
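To illustrate the expected shape, a hypothetical example of the map-valued field the record path should select: keys are column qualifiers, values are visibility expressions. The field names (`ssn`, `name`) and labels (`PII&ADMIN`, `OPEN`) are made up for illustration:

```java
import java.util.Map;

public class VisibilityMapShape {
    // Example of the map-shaped value the "Visibility String Record Path Root"
    // property is expected to point at: column qualifier -> visibility expression.
    static Map<String, String> exampleVisibilitySettings() {
        return Map.of(
                "ssn", "PII&ADMIN",   // the "ssn" cell gets visibility PII&ADMIN
                "name", "OPEN");      // the "name" cell gets visibility OPEN
    }
}
```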


---


[GitHub] nifi pull request #2518: NIFI-4637 Added support for visibility labels to th...

2018-04-26 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184285304
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/AbstractPutHBase.java
 ---
@@ -131,6 +133,59 @@ public void onScheduled(final ProcessContext context) {
 clientService = context.getProperty(HBASE_CLIENT_SERVICE).asControllerService(HBaseClientService.class);
 }
 
+@Override
+protected PropertyDescriptor getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+    if (propertyDescriptorName.startsWith("visibility.")) {
+        String[] parts = propertyDescriptorName.split("\\.");
+        String displayName;
+        String description;
+
+        if (parts.length == 2) {
+            displayName = String.format("Column Family %s Default Visibility", parts[1]);
+            description = String.format("Default visibility setting for %s", parts[1]);
+        } else if (parts.length == 3) {
+            displayName = String.format("Column Qualifier %s.%s Default Visibility", parts[1], parts[2]);
+            description = String.format("Default visibility setting for %s.%s", parts[1], parts[2]);
+        } else {
+            return null;
+        }
+
+        return new PropertyDescriptor.Builder()
+            .name(propertyDescriptorName)
+            .displayName(displayName)
+            .description(description)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+            .dynamic(true)
+            .build();
--- End diff --

These properties should be able to support EL from incoming FlowFiles, in case a user would like to define default visibility based on FlowFile content type, etc.


---


[GitHub] nifi pull request #2518: NIFI-4637 Added support for visibility labels to th...

2018-04-26 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184277876
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
@@ -368,9 +412,18 @@ protected PutFlowFile createPut(ProcessContext context, Record record, RecordSch
             timestamp = null;
         }
 
+        RecordField visField = null;
+        Map visSettings = null;
+        if (recordPath != null) {
+            final RecordPathResult result = recordPath.evaluate(record);
+            FieldValue fv = result.getSelectedFields().findFirst().get();
+            visField = fv.getField();
+            visSettings = (Map) fv.getValue();
+        }
+
         List columns = new ArrayList<>();
         for (String name : schema.getFieldNames()) {
-            if (name.equals(rowFieldName) || name.equals(timestampFieldName)) {
+            if (name.equals(rowFieldName) || name.equals(timestampFieldName) || (visField != null && name.equals(visField.getFieldName()))) {
--- End diff --

Good stuff, I'd expect PutHBaseJson has the same capability.


---


[GitHub] nifi pull request #2518: NIFI-4637 Added support for visibility labels to th...

2018-04-26 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184277181
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
@@ -142,6 +161,16 @@
 .allowableValues(NULL_FIELD_EMPTY, NULL_FIELD_SKIP)
 .build();
 
+protected static final PropertyDescriptor VISIBILITY_RECORD_PATH = new PropertyDescriptor.Builder()
+    .name("put-hb-rec-visibility-record-path")
+    .displayName("Visibility String Record Path Root")
+    .description("A record path that points to part of the record which contains a path to a mapping of visibility strings to record paths")
+    .required(false)
+    .addValidator(Validator.VALID)
+    .build();
--- End diff --

I like this configuration. Should we add a similar property to PutHBaseJson, in order to pass the visibility label within the input JSON tree? Similar to the `Column Family` property. What do you think?


---


[GitHub] nifi pull request #2518: NIFI-4637 Added support for visibility labels to th...

2018-04-26 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184271984
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/AbstractPutHBase.java
 ---
@@ -131,6 +133,59 @@ public void onScheduled(final ProcessContext context) {
 clientService = context.getProperty(HBASE_CLIENT_SERVICE).asControllerService(HBaseClientService.class);
 }
 
+@Override
+protected PropertyDescriptor getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
--- End diff --

Thanks for making this use dynamic properties; it became more powerful! Would you add some docs on this functionality? You can use the `@DynamicProperty` annotation to do so. Please refer to the HBase client service for how it is displayed in the docs.

https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-hbase_1_1_2-client-service-nar/1.6.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html


---


[GitHub] nifi pull request #2518: NIFI-4637 Added support for visibility labels to th...

2018-04-26 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184272408
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/DeleteHBaseCells.java
 ---
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Scanner;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@WritesAttributes({
+    @WritesAttribute(attribute = "error.line", description = "The line number of the error."),
+    @WritesAttribute(attribute = "error.msg", description = "The message explaining the error.")
+})
+@Tags({"hbase", "delete", "cell", "cells", "visibility"})
+@CapabilityDescription("This processor allows the user to delete individual HBase cells by specifying one or more lines " +
+    "in the flowfile content that are a sequence composed of row ID, column family, column qualifier and associated visibility labels " +
+    "if visibility labels are enabled and in use. A user-defined separator is used to separate each of these pieces of data on each " +
+    "line, with  being the default separator.")
+public class DeleteHBaseCells extends AbstractDeleteHBase {
+    static final PropertyDescriptor SEPARATOR = new PropertyDescriptor.Builder()
+        .name("delete-hbase-cell-separator")
+        .displayName("Separator")
+        .description("Each line of the flowfile content is separated into components for building a delete using this" +
+            "separator. It should be something other than a single colon or a comma because these are values that " +
+            "are associated with columns and visibility labels respectively. To delete a row with ID xyz, column family abc, " +
+            "column qualifier def and visibility label PII, one would specify xyzabcdefPII given the default " +
+            "value")
+        .required(true)
+        .defaultValue("")
+        .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+        .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+        .build();
+
+    static final String ERROR_LINE = "error.line";
+    static final String ERROR_MSG  = "error.msg";
+
+    @Override
+    protected List getSupportedPropertyDescriptors() {
+        final List properties = new ArrayList<>();
+        properties.add(HBASE_CLIENT_SERVICE);
+        properties.add(TABLE_NAME);
+        properties.add(SEPARATOR);
+
+        return properties;
+    }
+
+    private FlowFile writeErrorAttributes(int line, String msg, FlowFile file, ProcessSession session) {
+        file = session.putAttribute(file, ERROR_LINE, String.valueOf(line));
+        file = session.putAttribute(file, ERROR_MSG, msg != null ? msg : "");
+        return file;
+    }
+
+    private void logCell(String rowId, String family, String column, String visibility) {
+        StringBuilder sb = new StringBuilder()
+            .append("Assembling cell delete for...\t")
+            .append(String.format("Row 

[GitHub] nifi pull request #2518: NIFI-4637 Added support for visibility labels to th...

2018-04-26 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184266326
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/main/java/org/apache/nifi/hbase/HBase_1_1_2_ClientService.java
 ---
@@ -297,6 +305,8 @@ protected Connection createConnection(final ConfigurationContext context) throws
 keyTab = credentialsService.getKeytab();
 }
 
+this.principal = principal; //Set so it is usable from getLabelsForCurrentUser
--- End diff --

Is this `principal` field no longer used? I couldn't find `getLabelsForCurrentUser`.


---


[GitHub] nifi pull request #2518: NIFI-4637 Added support for visibility labels to th...

2018-04-26 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184275993
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/AbstractPutHBase.java
 ---
@@ -131,6 +133,59 @@ public void onScheduled(final ProcessContext context) {
 clientService = context.getProperty(HBASE_CLIENT_SERVICE).asControllerService(HBaseClientService.class);
 }
 
+@Override
+protected PropertyDescriptor getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+    if (propertyDescriptorName.startsWith("visibility.")) {
+        String[] parts = propertyDescriptorName.split("\\.");
+        String displayName;
+        String description;
+
+        if (parts.length == 2) {
+            displayName = String.format("Column Family %s Default Visibility", parts[1]);
+            description = String.format("Default visibility setting for %s", parts[1]);
+        } else if (parts.length == 3) {
+            displayName = String.format("Column Qualifier %s.%s Default Visibility", parts[1], parts[2]);
+            description = String.format("Default visibility setting for %s.%s", parts[1], parts[2]);
+        } else {
+            return null;
+        }
+
+        return new PropertyDescriptor.Builder()
+            .name(propertyDescriptorName)
+            .displayName(displayName)
+            .description(description)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+            .dynamic(true)
+            .build();
+    }
+
+    return null;
+}
+
+protected String pickVisibilityString(String columnFamily, String columnQualifier, FlowFile flowFile, ProcessContext context) {
+    final String lookupKey = String.format("visibility.%s.%s", columnFamily, columnQualifier);
--- End diff --

`visibility.%s.%s` will produce `visibility.f1.` if the column family is `f1` and columnQualifier is null. I think it should be `visibility.f1`, without the trailing dot, in that case. What do you think?


---


[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453613#comment-16453613
 ] 

ASF GitHub Bot commented on NIFI-4637:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184277181
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseRecord.java
 ---
@@ -142,6 +161,16 @@
 .allowableValues(NULL_FIELD_EMPTY, NULL_FIELD_SKIP)
 .build();
 
+protected static final PropertyDescriptor VISIBILITY_RECORD_PATH = new PropertyDescriptor.Builder()
+    .name("put-hb-rec-visibility-record-path")
+    .displayName("Visibility String Record Path Root")
+    .description("A record path that points to part of the record which contains a path to a mapping of visibility strings to record paths")
+    .required(false)
+    .addValidator(Validator.VALID)
+    .build();
--- End diff --

I like this configuration. Should we add a similar property to PutHBaseJson, in order to pass the visibility label within the input JSON tree? Similar to the `Column Family` property. What do you think?


> Add support for HBase visibility labels to HBase processors and controller 
> services
> ---
>
> Key: NIFI-4637
> URL: https://issues.apache.org/jira/browse/NIFI-4637
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
>
> HBase supports visibility labels, but you can't use them from NiFi because 
> there is no way to set them. The existing processors and services should be 
> upgraded to handle this capability.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2518: NIFI-4637 Added support for visibility labels to th...

2018-04-26 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184267868
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/pom.xml
 ---
@@ -25,7 +25,7 @@
 nifi-hbase_1_1_2-client-service
 jar
 
-1.1.2
+1.1.13
--- End diff --

@bbende Is this gap between here and the nar name going to be an issue somewhere? Do we need to create nifi-hbase_1_1_13-client-service?


---


[GitHub] nifi pull request #2518: NIFI-4637 Added support for visibility labels to th...

2018-04-26 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184287966
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/AbstractPutHBase.java
 ---
@@ -131,6 +133,59 @@ public void onScheduled(final ProcessContext context) {
 clientService = context.getProperty(HBASE_CLIENT_SERVICE).asControllerService(HBaseClientService.class);
 }
 
+@Override
+protected PropertyDescriptor getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+    if (propertyDescriptorName.startsWith("visibility.")) {
+        String[] parts = propertyDescriptorName.split("\\.");
+        String displayName;
+        String description;
+
+        if (parts.length == 2) {
+            displayName = String.format("Column Family %s Default Visibility", parts[1]);
+            description = String.format("Default visibility setting for %s", parts[1]);
+        } else if (parts.length == 3) {
+            displayName = String.format("Column Qualifier %s.%s Default Visibility", parts[1], parts[2]);
+            description = String.format("Default visibility setting for %s.%s", parts[1], parts[2]);
+        } else {
+            return null;
+        }
+
+        return new PropertyDescriptor.Builder()
+            .name(propertyDescriptorName)
+            .displayName(displayName)
+            .description(description)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+            .dynamic(true)
+            .build();
+    }
+
+    return null;
+}
+
+protected String pickVisibilityString(String columnFamily, String columnQualifier, FlowFile flowFile, ProcessContext context) {
+    final String lookupKey = String.format("visibility.%s.%s", columnFamily, columnQualifier);
+    final String fromAttribute = flowFile.getAttribute(lookupKey);
+    if (fromAttribute != null) {
+        return fromAttribute;
+    } else {
+        PropertyValue descriptor = context.getProperty(lookupKey);
+        if (descriptor == null) {
--- End diff --

This condition does not work as expected, because even if only 'visibility.family' is defined, `context.getProperty(visibility.family.qualifier)` returns a non-null object whose value is null. Please change this to `(descriptor == null || !descriptor.isSet())`. Also please add some unit tests.
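A standalone sketch of the attribute-then-property lookup with the suggested "is set" check. FlowFile attributes and processor properties are modeled here as plain maps; in the real code the second check corresponds to `descriptor == null || !descriptor.isSet()` on a NiFi `PropertyValue`. This is an illustration, not the actual NiFi implementation:

```java
import java.util.Map;

public class VisibilityLookup {
    // Resolves a visibility string: a FlowFile attribute wins; otherwise fall
    // back to a processor property, but only if that property is actually set.
    // An empty string models a PropertyValue that exists but is not set.
    static String pickVisibility(String lookupKey,
                                 Map<String, String> flowFileAttributes,
                                 Map<String, String> properties) {
        String fromAttribute = flowFileAttributes.get(lookupKey);
        if (fromAttribute != null) {
            return fromAttribute;
        }
        String fromProperty = properties.get(lookupKey);
        // Equivalent of (descriptor == null || !descriptor.isSet())
        if (fromProperty == null || fromProperty.isEmpty()) {
            return null;
        }
        return fromProperty;
    }
}
```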


---


[GitHub] nifi pull request #2518: NIFI-4637 Added support for visibility labels to th...

2018-04-26 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184272910
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/DeleteHBaseCells.java
 ---
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Scanner;
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@WritesAttributes({
+@WritesAttribute(attribute = "error.line", description = "The line 
number of the error."),
+@WritesAttribute(attribute = "error.msg", description = "The 
message explaining the error.")
+})
+@Tags({"hbase", "delete", "cell", "cells", "visibility"})
+@CapabilityDescription("This processor allows the user to delete 
individual HBase cells by specifying one or more lines " +
+"in the flowfile content that are a sequence composed of row ID, 
column family, column qualifier and associated visibility labels " +
+"if visibility labels are enabled and in use. A user-defined 
separator is used to separate each of these pieces of data on each " +
+"line, with  being the default separator.")
+public class DeleteHBaseCells extends AbstractDeleteHBase {
--- End diff --

Do you think it would be helpful if this processor and DeleteHBaseRow supported a default visibility label expression, as the PutHBaseXX processors do?


---


[GitHub] nifi pull request #2518: NIFI-4637 Added support for visibility labels to th...

2018-04-26 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184274827
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/ScanHBase.java
 ---
@@ -381,6 +384,7 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro
 limitRows,
 isReversed,
 columns,
+authorizations,
--- End diff --

Do you know if the HBase scan API lets the ResultHandler get the visibility label that is set on the cells retrieved by this scan? If so, do we want to include that in the output contents? I think if HBase returns it, we can embed those labels in RowSerializer implementations such as JsonFullRowSerializer.

It can make a scan -> delete flow easier.


---


[GitHub] nifi pull request #2518: NIFI-4637 Added support for visibility labels to th...

2018-04-26 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2518#discussion_r184276347
  
--- Diff: 
nifi-nar-bundles/nifi-hbase-bundle/nifi-hbase-processors/src/main/java/org/apache/nifi/hbase/PutHBaseCell.java
 ---
@@ -82,6 +79,8 @@ protected PutFlowFile createPut(final ProcessSession session, final ProcessConte
 final String columnQualifier = context.getProperty(COLUMN_QUALIFIER).evaluateAttributeExpressions(flowFile).getValue();
 final String timestampValue = context.getProperty(TIMESTAMP).evaluateAttributeExpressions(flowFile).getValue();
 
+final String visibilityStringToUse = pickVisibilityString(columnFamily, columnQualifier, flowFile, context);
--- End diff --

Since PutHBaseCell mutates only one cell, it would be more intuitive to provide a configuration property directly specifying a visibility label expression, in addition to the defaults set via dynamic properties. Thoughts?


---


[jira] [Assigned] (NIFI-5044) SelectHiveQL accept only one statement

2018-04-26 Thread Ed Berezitsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ed Berezitsky reassigned NIFI-5044:
---

Assignee: Ed Berezitsky

> SelectHiveQL accept only one statement
> --
>
> Key: NIFI-5044
> URL: https://issues.apache.org/jira/browse/NIFI-5044
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Davide Isoardi
>Assignee: Ed Berezitsky
>Priority: Critical
>
> In [this 
> commit|https://github.com/apache/nifi/commit/bbc714e73ba245de7bc32fd9958667c847101f7d]
> it is claimed that support was added for running multiple statements in both 
> SelectHiveQL and PutHiveQL; instead, it adds the support only to PutHiveQL, 
> so SelectHiveQL still lacks this important feature. @Matt Burgess, I saw that 
> you worked on that, is there any reason for this? If not, can we support it?
> If I try to execute this query:
> {quote}set hive.vectorized.execution.enabled = false; SELECT * FROM table_name
> {quote}
> I have this error:
>  
> {quote}2018-04-05 13:35:40,572 ERROR [Timer-Driven Process Thread-146] 
> o.a.nifi.processors.hive.SelectHiveQL 
> SelectHiveQL[id=243d4c17-b1fe-14af--ee8ce15e] Unable to execute 
> HiveQL select query set hive.vectorized.execution.enabled = false; SELECT * 
> FROM table_name for 
> StandardFlowFileRecord[uuid=0e035558-07ce-473b-b0d4-ac00b8b1df93,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1522824912161-2753, 
> container=default, section=705], offset=838441, 
> length=25],offset=0,name=cliente_attributi.csv,size=25] due to 
> org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: 
> The query did not generate a result set!; routing to failure: {}
>  org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: 
> The query did not generate a result set!
>  at 
> org.apache.nifi.processors.hive.SelectHiveQL$2.process(SelectHiveQL.java:305)
>  at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2529)
>  at 
> org.apache.nifi.processors.hive.SelectHiveQL.onTrigger(SelectHiveQL.java:275)
>  at 
> org.apache.nifi.processors.hive.SelectHiveQL.lambda$onTrigger$0(SelectHiveQL.java:215)
>  at 
> org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
>  at 
> org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:106)
>  at 
> org.apache.nifi.processors.hive.SelectHiveQL.onTrigger(SelectHiveQL.java:215)
>  at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
>  at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
>  at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>  at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)
>  Caused by: java.sql.SQLException: The query did not generate a result set!
>  at org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:438)
>  at 
> org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
>  at 
> org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
>  at 
> org.apache.nifi.processors.hive.SelectHiveQL$2.process(SelectHiveQL.java:293)
> {quote}
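The failure above happens because `executeQuery()` is called on a script whose first statement (`set ...`) returns no result set. A minimal sketch of one possible fix, splitting the script so that everything before the final statement could be run with `execute()` and only the last with `executeQuery()` — `HiveQlSplitter` is illustrative only (a naive split that ignores semicolons inside string literals and comments), not NiFi's implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: split a multi-statement HiveQL script into individual
// statements. All but the last would go through Statement.execute(); only the
// final statement returns a result set via executeQuery().
public class HiveQlSplitter {
    public static List<String> split(String script) {
        List<String> statements = new ArrayList<>();
        // Naive split on ';' — does not handle semicolons inside
        // string literals or comments.
        for (String part : script.split(";")) {
            String trimmed = part.trim();
            if (!trimmed.isEmpty()) {
                statements.add(trimmed);
            }
        }
        return statements;
    }

    public static void main(String[] args) {
        List<String> stmts = split(
            "set hive.vectorized.execution.enabled = false; SELECT * FROM table_name");
        System.out.println(stmts);
    }
}
```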



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5041) Add convenient SPNEGO/Kerberos authentication support to LivySessionController

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453688#comment-16453688
 ] 

ASF GitHub Bot commented on NIFI-5041:
--

Github user peter-toth commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2630#discussion_r184309195
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-hadoop-utils/src/main/java/org/apache/nifi/hadoop/KerberosConfiguration.java
 ---
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hadoop;
+
+import org.apache.hadoop.security.authentication.util.KerberosUtil;
+
+import javax.security.auth.login.AppConfigurationEntry;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * Modified Kerberos configuration class from {@link 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.KerberosConfiguration}
+ * that requires authentication from a keytab.
+ */
+public class KerberosConfiguration extends 
javax.security.auth.login.Configuration {
--- End diff --

I've added the new entries.


> Add convenient SPNEGO/Kerberos authentication support to LivySessionController
> --
>
> Key: NIFI-5041
> URL: https://issues.apache.org/jira/browse/NIFI-5041
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.5.0
>Reporter: Peter Toth
>Priority: Minor
>
> Livy requires SPNEGO/Kerberos authentication on a secured cluster. Initiating 
> such an authentication from NiFi is viable by providing a 
> java.security.auth.login.config system property 
> (https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/lab/part6.html),
>  but this is a bit cumbersome and requires kinit to be run outside of NiFi.
> An alternative and more sophisticated solution would be to do the SPNEGO 
> negotiation programmatically.
>  * This solution would add some new properties to the LivySessionController 
> to fetch kerberos principal and password/keytab
>  * Add the required HTTP Negotiate header (with an SPNEGO token) to the 
> HttpURLConnection to do the authentication programmatically 
> (https://tools.ietf.org/html/rfc4559)
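The second bullet — attaching the Negotiate header per RFC 4559 — can be sketched as follows. `SpnegoHeader` and the dummy token bytes are illustrative only; obtaining a real SPNEGO token via JAAS/GSS-API requires a Kerberos environment and is not shown:

```java
import java.net.HttpURLConnection;
import java.util.Base64;

// Hypothetical sketch: given a SPNEGO token (obtained elsewhere via GSS-API),
// build the "Authorization: Negotiate <base64 token>" header described in
// RFC 4559 and attach it to an HttpURLConnection.
public class SpnegoHeader {
    public static String negotiateHeader(byte[] spnegoToken) {
        return "Negotiate " + Base64.getEncoder().encodeToString(spnegoToken);
    }

    public static void apply(HttpURLConnection conn, byte[] spnegoToken) {
        conn.setRequestProperty("Authorization", negotiateHeader(spnegoToken));
    }

    public static void main(String[] args) {
        byte[] dummyToken = {0x60, 0x06}; // placeholder bytes, not a real GSS token
        System.out.println(negotiateHeader(dummyToken));
    }
}
```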



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[GitHub] nifi-minifi-cpp pull request #313: MINIFICPP-403: Add version into flow attr...

2018-04-26 Thread minifirocks
GitHub user minifirocks opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/313

MINIFICPP-403: Add version into flow attributes

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/minifirocks/nifi-minifi-cpp 
add_version_to_flow

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/313.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #313






---


[jira] [Commented] (MINIFICPP-403) Enable tagging of flowfiles with flow metadata information in C++

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16455601#comment-16455601
 ] 

ASF GitHub Bot commented on MINIFICPP-403:
--

GitHub user minifirocks opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/313

MINIFICPP-403: Add version into flow attributes

> Enable tagging of flowfiles with flow metadata information in C++
> -
>
> Key: MINIFICPP-403
> URL: https://issues.apache.org/jira/browse/MINIFICPP-403
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: bqiu
>Assignee: bqiu
>Priority: Minor
> Fix For: 1.0.0
>
>
> Provide framework level support to tag flowfiles with metadata about the flow 
> that created them.
> Design proposal
> Right now, MiNiFi supports core attributes like:
> // FlowFile Attribute
> enum FlowAttribute {
>  // The flowfile's path indicates the relative directory to which a FlowFile 
> belongs and does not contain the filename
>  PATH = 0,
>  // The flowfile's absolute path indicates the absolute directory to which a 
> FlowFile belongs and does not contain the filename
>  ABSOLUTE_PATH,
>  // The filename of the FlowFile. The filename should not contain any 
> directory structure.
>  FILENAME,
>  // A unique UUID assigned to this FlowFile.
>  UUID,
>  // A numeric value indicating the FlowFile priority
>  PRIORITY,
>  // The MIME Type of this FlowFile
>  MIME_TYPE,
>  // Specifies the reason that a FlowFile is being discarded
>  DISCARD_REASON,
>  // Indicates an identifier other than the FlowFile's UUID that is known to 
> refer to this FlowFile.
>  ALTERNATE_IDENTIFIER,
>  MAX_FLOW_ATTRIBUTES
> };
> So one approach is to specify, in the flow YAML file, the list of core flow 
> attributes along with the processors that inject/import/create the flow files.
> When a flow file is created/imported/injected by such a processor, we can 
> apply these core attributes to the new flow file.
> Users can also define their own core attribute templates and expression 
> language to populate values for these core attributes, for example protocol, 
> TTL, record route (expected route), key, version, etc.
> In the current implementation, FILENAME, PATH and UUID are required attributes 
> when a flow file is created; the others are optional:
> // Populate the default attributes
> addKeyedAttribute(FILENAME,
> std::to_string(getTimeNano()));
> addKeyedAttribute(PATH, DEFAULT_FLOWFILE_PATH);
> addKeyedAttribute(UUID,
> getUUIDStr());
> So if users specify the optional meta flow info section for the processor 
> with key/value pairs as above, MiNiFi will add these key attributes to the 
> flow file when it is created by that processor.
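Under the proposal above, a flow YAML fragment might look like the following. Every key and value here is hypothetical, sketched for illustration; this is not an existing MiNiFi C++ configuration schema:

```yaml
Processors:
    - name: GetFileExample
      class: org.apache.nifi.processors.standard.GetFile
      # Hypothetical optional section from the proposal: core attributes
      # applied to every flow file this processor creates.
      meta flow attributes:
        protocol: sftp
        version: "1.0"
        record route: expected-route
```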



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi-minifi-cpp pull request #314: Update C2 representations.

2018-04-26 Thread phrocker
GitHub user phrocker opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/314

Update C2 representations. 

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [ ] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/phrocker/nifi-minifi-cpp C2FINISH

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/314.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #314


commit 59f46695535dacc8683cc81416aa649ec99f2711
Author: Marc Parisi 
Date:   2018-01-28T02:20:10Z

MINIFICPP-418: Add build time information, ability to deploy, and run time 
information
to c2 response and build output
MINIFICPP-395: Add transfer capability that runs a rollback command on 
failure
MINIFICPP-417: Resolve issue with RapidJson changes.

MINIFICPP-417: Resolve linter issues

MINIFICPP-418: Update package name

commit ccdddcb7cddfa4f6d58871e1357670389b4db05e
Author: Marc Parisi 
Date:   2018-04-23T12:28:38Z

MINIFICPP-418: Add flow URI introspection and update flow version

commit 9b10c31e09a26a0270f371681b3aaebc3e0572be
Author: Marc Parisi 
Date:   2018-04-23T23:06:58Z

MINIFICPP-468: Add configurable agent information and update readme




---


[jira] [Commented] (NIFI-5073) JMSConnectionFactory doesn't resolve 'variables' properly

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453791#comment-16453791
 ] 

ASF GitHub Bot commented on NIFI-5073:
--

Github user zenfenan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2653#discussion_r184338421
  
--- Diff: 
nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/cf/JMSConnectionFactoryProvider.java
 ---
@@ -97,7 +96,7 @@
 .description("Path to the directory with additional resources 
(i.e., JARs, configuration files etc.) to be added "
 + "to the classpath. Such resources typically 
represent target MQ client libraries for the "
 + "ConnectionFactory implementation.")
-.addValidator(new ClientLibValidator())
+.addValidator(StandardValidators.createListValidator(true, 
true, StandardValidators.createURLorFileValidator()))
--- End diff --

Yep. Thanks for pointing out. I have modified the processor to accept a 
comma-separated list of paths that can be added to the classpath; moreover, it 
leverages `ClassLoaderUtils`, thereby avoiding duplication.


> JMSConnectionFactory doesn't resolve 'variables' properly
> -
>
> Key: NIFI-5073
> URL: https://issues.apache.org/jira/browse/NIFI-5073
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Matthew Clarke
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
> Attachments: 
> 0001-NIFI-5073-JMSConnectionFactoryProvider-now-resolves-.patch
>
>
> Create a new process group.
> Add "Variables" to the process group, for example:
> broker_uri=tcp://localhost:4141
> client_libs=/NiFi/custom-lib-dir/MQlib
> con_factory=blah
> Then, while that process group is selected, create a controller service:
> a JMSConnectionFactoryProvider.
> Configure this controller service to use EL for the PG-defined variables above:
> ${con_factory}, ${client_libs}, and ${broker_uri}
> The controller service will remain invalid because the EL statements are not 
> properly resolved to their set values.
> Doing the exact same thing above using the external NiFi registry file works 
> as expected.
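For illustration, what "resolving variables" means here can be sketched with a toy substitution over a variable map. `VariableResolver` is illustrative only and is not NiFi's implementation; in NiFi this is done via `PropertyValue.evaluateAttributeExpressions()`, which is the step the report says the controller service skips:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy illustration of resolving ${...} references against a process
// group's variable registry. Unknown variables resolve to the empty string.
public class VariableResolver {
    private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)}");

    public static String resolve(String value, Map<String, String> registry) {
        Matcher m = VAR.matcher(value);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(sb,
                Matcher.quoteReplacement(registry.getOrDefault(m.group(1), "")));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> vars = Map.of("broker_uri", "tcp://localhost:4141");
        System.out.println(resolve("${broker_uri}", vars));
    }
}
```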



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Updated] (NIFI-5123) Enable extensions to use SchemaRegistryService

2018-04-26 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-5123:
---
Status: Patch Available  (was: In Progress)

> Enable extensions to use SchemaRegistryService
> --
>
> Key: NIFI-5123
> URL: https://issues.apache.org/jira/browse/NIFI-5123
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> Currently SchemaRegistryService is in the nifi-record-serialization-services 
> NAR, which means that other extensions (unless they had 
> nifi-record-serialization-services-nar as a parent which is not recommended) 
> cannot make use of this abstract class. The class uses utilities from 
> nifi-avro-record-utils such as SchemaAccessUtils, and is a very helpful base 
> class used to offer a consistent user experience for selecting schemas (by 
> name, by test, from a schema registry, etc.). Other extensions wishing to 
> provide access to a schema registry would duplicate much of the logic in 
> SchemaRegistryService, which offers challenges for consistency and 
> maintainability.
> This Jira proposes to move SchemaRegistryService into nifi-avro-record-utils, 
> where it can be leveraged by any extension that depends on it, and thus we 
> can strongly recommend that record-aware processors that will interact with a 
> schema registry use SchemaRegistryService as a parent class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5123) Enable extensions to use SchemaRegistryService

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16454285#comment-16454285
 ] 

ASF GitHub Bot commented on NIFI-5123:
--

GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/2661

NIFI-5123: Move SchemaRegistryService to nifi-avro-record-utils

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-5123

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2661.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2661


commit ee54d9156f23ec9738bf3f1724fbdabdaaa31540
Author: Matthew Burgess 
Date:   2018-04-26T14:19:41Z

NIFI-5123: Move SchemaRegistryService to nifi-avro-record-utils




> Enable extensions to use SchemaRegistryService
> --
>
> Key: NIFI-5123
> URL: https://issues.apache.org/jira/browse/NIFI-5123
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> Currently SchemaRegistryService is in the nifi-record-serialization-services 
> NAR, which means that other extensions (unless they had 
> nifi-record-serialization-services-nar as a parent which is not recommended) 
> cannot make use of this abstract class. The class uses utilities from 
> nifi-avro-record-utils such as SchemaAccessUtils, and is a very helpful base 
> class used to offer a consistent user experience for selecting schemas (by 
> name, by test, from a schema registry, etc.). Other extensions wishing to 
> provide access to a schema registry would duplicate much of the logic in 
> SchemaRegistryService, which offers challenges for consistency and 
> maintainability.
> This Jira proposes to move SchemaRegistryService into nifi-avro-record-utils, 
> where it can be leveraged by any extension that depends on it, and thus we 
> can strongly recommend that record-aware processors that will interact with a 
> schema registry use SchemaRegistryService as a parent class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (NIFI-5120) AbstractListenEventProcessor should support expression language

2018-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453926#comment-16453926
 ] 

ASF GitHub Bot commented on NIFI-5120:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2659


> AbstractListenEventProcessor should support expression language
> ---
>
> Key: NIFI-5120
> URL: https://issues.apache.org/jira/browse/NIFI-5120
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.6.0
> Environment: All
>Reporter: Sébastien Bouchex Bellomié
>Priority: Minor
>
> The current implementation of AbstractListenEventProcessor only supports fixed 
> values, whereas supporting expression language would be useful when using properties.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5120) AbstractListenEventProcessor should support expression language

2018-04-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453925#comment-16453925
 ] 

ASF subversion and git services commented on NIFI-5120:
---

Commit 3719a6268c3ff020ab3083751bd48e652e668695 in nifi's branch 
refs/heads/master from [~sbouc...@infovista.com]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=3719a62 ]

NIFI-5120 AbstractListenEventProcessor supports expression language

Signed-off-by: Pierre Villard 

This closes #2659.


> AbstractListenEventProcessor should support expression language
> ---
>
> Key: NIFI-5120
> URL: https://issues.apache.org/jira/browse/NIFI-5120
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.6.0
> Environment: All
>Reporter: Sébastien Bouchex Bellomié
>Priority: Minor
>
> The current implementation of AbstractListenEventProcessor only supports fixed 
> values, whereas supporting expression language would be useful when using properties.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Created] (NIFI-5123) Enable extensions to use SchemaRegistryService

2018-04-26 Thread Matt Burgess (JIRA)
Matt Burgess created NIFI-5123:
--

 Summary: Enable extensions to use SchemaRegistryService
 Key: NIFI-5123
 URL: https://issues.apache.org/jira/browse/NIFI-5123
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Matt Burgess


Currently SchemaRegistryService is in the nifi-record-serialization-services 
NAR, which means that other extensions (unless they had 
nifi-record-serialization-services-nar as a parent which is not recommended) 
cannot make use of this abstract class. The class uses utilities from 
nifi-avro-record-utils such as SchemaAccessUtils, and is a very helpful base 
class used to offer a consistent user experience for selecting schemas (by 
name, by test, from a schema registry, etc.). Other extensions wishing to 
provide access to a schema registry would duplicate much of the logic in 
SchemaRegistryService, which offers challenges for consistency and 
maintainability.

This Jira proposes to move SchemaRegistryService into nifi-avro-record-utils, 
where it can be leveraged by any extension that depends on it, and thus we can 
strongly recommend that record-aware processors that will interact with a 
schema registry use SchemaRegistryService as a parent class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (NIFI-5123) Enable extensions to use SchemaRegistryService

2018-04-26 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reassigned NIFI-5123:
--

Assignee: Matt Burgess

> Enable extensions to use SchemaRegistryService
> --
>
> Key: NIFI-5123
> URL: https://issues.apache.org/jira/browse/NIFI-5123
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> Currently SchemaRegistryService is in the nifi-record-serialization-services 
> NAR, which means that other extensions (unless they had 
> nifi-record-serialization-services-nar as a parent which is not recommended) 
> cannot make use of this abstract class. The class uses utilities from 
> nifi-avro-record-utils such as SchemaAccessUtils, and is a very helpful base 
> class used to offer a consistent user experience for selecting schemas (by 
> name, by test, from a schema registry, etc.). Other extensions wishing to 
> provide access to a schema registry would duplicate much of the logic in 
> SchemaRegistryService, which offers challenges for consistency and 
> maintainability.
> This Jira proposes to move SchemaRegistryService into nifi-avro-record-utils, 
> where it can be leveraged by any extension that depends on it, and thus we 
> can strongly recommend that record-aware processors that will interact with a 
> schema registry use SchemaRegistryService as a parent class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2615: NIFI-5051 Created ElasticSearch lookup service.

2018-04-26 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2615#discussion_r184396268
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-client-service/pom.xml
 ---
@@ -57,6 +57,18 @@
 provided
 
 
+
+org.apache.nifi
+nifi-lookup-service-api
+provided
+
+
+
+org.apache.avro
+avro
+1.8.2
--- End diff --

Won't nifi-avro-record-utils bring in the Avro library?


---

