[jira] [Commented] (NIFI-6484) sed: can't read /opt/nifi/nifi-current/conf/nifi.properties

2019-07-25 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892927#comment-16892927
 ] 

Michael Moser commented on NIFI-6484:
-

I would make a ConfigMap that contains nifi.properties and any of the other 
configuration files in the conf/ directory that you want to modify.  Then 
modify nifi.properties in your ConfigMap to persist the flow.xml.gz into a new 
location.  For example, set 
nifi.flow.configuration.file=/opt/nifi/nifi-current/persistent-conf/flow.xml.gz 
and 
nifi.flow.configuration.archive.dir=/opt/nifi/nifi-current/persistent-conf/archive/
 in nifi.properties.  Your persistent volume would then be mounted at 
/opt/nifi/nifi-current/persistent-conf.
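
A minimal sketch of that idea, for illustration only (the ConfigMap name, key 
layout and subPath mount are my assumptions, not something from your deployment):
{code:yaml}
# ConfigMap carrying an edited nifi.properties that points the flow to the
# persistent-conf directory instead of the image's conf/ directory.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nifi-conf
data:
  nifi.properties: |
    # ...rest of nifi.properties copied from the image, unchanged...
    nifi.flow.configuration.file=/opt/nifi/nifi-current/persistent-conf/flow.xml.gz
    nifi.flow.configuration.archive.dir=/opt/nifi/nifi-current/persistent-conf/archive/
{code}
In the Deployment you would then mount that ConfigMap entry at 
/opt/nifi/nifi-current/conf/nifi.properties (for example with a subPath mount) 
and mount your existing nificonf-claim at /opt/nifi/nifi-current/persistent-conf 
instead of over conf/.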

> sed: can't read /opt/nifi/nifi-current/conf/nifi.properties
> ---
>
> Key: NIFI-6484
> URL: https://issues.apache.org/jira/browse/NIFI-6484
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Docker
>Affects Versions: 1.9.2
> Environment: image: "apache/nifi:latest", GKE (Google Kubernetes 
> Engine)
>Reporter: thuy le
>Assignee: Michael Moser
>Priority: Major
>
>  I ran into an issue when trying to deploy Apache NiFi on GKE (Google Kubernetes 
> Engine)
>   sed: can't read /opt/nifi/nifi-current/conf/nifi.properties
>  
> [!https://user-images.githubusercontent.com/24593553/61722336-17c3ea00-ad38-11e9-830a-e4fe56a7acb6.png|width=470,height=136!|https://user-images.githubusercontent.com/24593553/61722336-17c3ea00-ad38-11e9-830a-e4fe56a7acb6.png]
>  
> yaml file
> ---
> apiVersion: v1
> kind: PersistentVolumeClaim
> metadata:
>   name: nificonf-claim
> spec:
>   accessModes: [ "ReadWriteOnce" ]
>   storageClassName: "standard"
>   resources:
>     requests:
>       storage: 3Gi
> ---
> apiVersion: "apps/v1"
> kind: "Deployment"
> metadata:
>   name: "nificonf"
>   namespace: "default"
>   labels:
>     app: "nificonf"
> spec:
>   replicas: 1
>   selector:
>     matchLabels:
>       app: "nificonf"
>   template:
>     metadata:
>       labels:
>         app: "nificonf"
>     spec:
>       securityContext:
>         runAsUser: 1000
>         fsGroup: 1000
>       containers:
>       - name: "nificonf"
>         image: "apache/nifi:latest"
>         ports:
>         - containerPort: 8080
>         volumeMounts:
>         - name: nificonf-data
>           mountPath: /opt/nifi/nifi-current/conf
>       volumes:
>       - name: nificonf-data
>         persistentVolumeClaim:
>           claimName: nificonf-claim
> ---
> apiVersion: "autoscaling/v2beta1"
> kind: "HorizontalPodAutoscaler"
> metadata:
>   name: "nificonf-hpa"
>   namespace: "default"
>   labels:
>     app: "nificonf"
> spec:
>   scaleTargetRef:
>     kind: "Deployment"
>     name: "nificonf"
>     apiVersion: "apps/v1"
>   minReplicas: 1
>   maxReplicas: 1
>   metrics:
>   - type: "Resource"
>     resource:
>       name: "cpu"
>       targetAverageUtilization: 80



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (NIFI-6484) sed: can't read /opt/nifi/nifi-current/conf/nifi.properties

2019-07-25 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892778#comment-16892778
 ] 

Michael Moser commented on NIFI-6484:
-

Hi [~thuylevn]!  It looks like you have mounted a persistent volume on top of 
the container's /opt/nifi/nifi-current/conf directory.  Since this is where 
NiFi looks for its nifi.properties and other configuration files on startup, 
you have to make sure that your persistent volume has those files.  I'm not 
exactly sure of the best way to initialize your persistent volume with those 
files.

You might also look at NIFI-6071
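
If you do want to keep a volume mounted at conf/, one possibility (an untested 
sketch on my part; the copy command and the /seed mount path are assumptions) is 
an initContainer that seeds the volume from the image's default conf/ files 
before NiFi starts:
{code:yaml}
# Added to the Deployment's pod template: the init container mounts the same
# PVC at /seed and copies the image's default conf/ contents into it.
# cp -n avoids overwriting files that already exist on the volume.
initContainers:
- name: seed-conf
  image: apache/nifi:latest
  command: ["sh", "-c", "cp -an /opt/nifi/nifi-current/conf/. /seed/"]
  volumeMounts:
  - name: nificonf-data
    mountPath: /seed
{code}
The main container would keep its existing mount of nificonf-data at 
/opt/nifi/nifi-current/conf, so NiFi finds nifi.properties there on startup.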

 

> sed: can't read /opt/nifi/nifi-current/conf/nifi.properties
> ---
>
> Key: NIFI-6484
> URL: https://issues.apache.org/jira/browse/NIFI-6484
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Docker
>Affects Versions: 1.9.2
> Environment: image: "apache/nifi:latest", GKE (Google Kubernetes 
> Engine)
>Reporter: thuy le
>Assignee: Michael Moser
>Priority: Major
>
>  I ran into an issue when trying to deploy Apache NiFi on GKE (Google Kubernetes 
> Engine)
>   sed: can't read /opt/nifi/nifi-current/conf/nifi.properties
>  
> [!https://user-images.githubusercontent.com/24593553/61722336-17c3ea00-ad38-11e9-830a-e4fe56a7acb6.png|width=470,height=136!|https://user-images.githubusercontent.com/24593553/61722336-17c3ea00-ad38-11e9-830a-e4fe56a7acb6.png]
>  
> yaml file
> ---
> apiVersion: v1
> kind: PersistentVolumeClaim
> metadata:
>   name: nificonf-claim
> spec:
>   accessModes: [ "ReadWriteOnce" ]
>   storageClassName: "standard"
>   resources:
>     requests:
>       storage: 3Gi
> ---
> apiVersion: "apps/v1"
> kind: "Deployment"
> metadata:
>   name: "nificonf"
>   namespace: "default"
>   labels:
>     app: "nificonf"
> spec:
>   replicas: 1
>   selector:
>     matchLabels:
>       app: "nificonf"
>   template:
>     metadata:
>       labels:
>         app: "nificonf"
>     spec:
>       securityContext:
>         runAsUser: 1000
>         fsGroup: 1000
>       containers:
>       - name: "nificonf"
>         image: "apache/nifi:latest"
>         ports:
>         - containerPort: 8080
>         volumeMounts:
>         - name: nificonf-data
>           mountPath: /opt/nifi/nifi-current/conf
>       volumes:
>       - name: nificonf-data
>         persistentVolumeClaim:
>           claimName: nificonf-claim
> ---
> apiVersion: "autoscaling/v2beta1"
> kind: "HorizontalPodAutoscaler"
> metadata:
>   name: "nificonf-hpa"
>   namespace: "default"
>   labels:
>     app: "nificonf"
> spec:
>   scaleTargetRef:
>     kind: "Deployment"
>     name: "nificonf"
>     apiVersion: "apps/v1"
>   minReplicas: 1
>   maxReplicas: 1
>   metrics:
>   - type: "Resource"
>     resource:
>       name: "cpu"
>       targetAverageUtilization: 80



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (NIFI-6484) sed: can't read /opt/nifi/nifi-current/conf/nifi.properties

2019-07-25 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser reassigned NIFI-6484:
---

Assignee: Michael Moser

> sed: can't read /opt/nifi/nifi-current/conf/nifi.properties
> ---
>
> Key: NIFI-6484
> URL: https://issues.apache.org/jira/browse/NIFI-6484
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Docker
>Affects Versions: 1.9.2
> Environment: image: "apache/nifi:latest", GKE (Google Kubernetes 
> Engine)
>Reporter: thuy le
>Assignee: Michael Moser
>Priority: Major
>
>  I ran into an issue when trying to deploy Apache NiFi on GKE (Google Kubernetes 
> Engine)
>   sed: can't read /opt/nifi/nifi-current/conf/nifi.properties
>  
> [!https://user-images.githubusercontent.com/24593553/61722336-17c3ea00-ad38-11e9-830a-e4fe56a7acb6.png|width=470,height=136!|https://user-images.githubusercontent.com/24593553/61722336-17c3ea00-ad38-11e9-830a-e4fe56a7acb6.png]
>  
> yaml file
> ---
> apiVersion: v1
> kind: PersistentVolumeClaim
> metadata:
>   name: nificonf-claim
> spec:
>   accessModes: [ "ReadWriteOnce" ]
>   storageClassName: "standard"
>   resources:
>     requests:
>       storage: 3Gi
> ---
> apiVersion: "apps/v1"
> kind: "Deployment"
> metadata:
>   name: "nificonf"
>   namespace: "default"
>   labels:
>     app: "nificonf"
> spec:
>   replicas: 1
>   selector:
>     matchLabels:
>       app: "nificonf"
>   template:
>     metadata:
>       labels:
>         app: "nificonf"
>     spec:
>       securityContext:
>         runAsUser: 1000
>         fsGroup: 1000
>       containers:
>       - name: "nificonf"
>         image: "apache/nifi:latest"
>         ports:
>         - containerPort: 8080
>         volumeMounts:
>         - name: nificonf-data
>           mountPath: /opt/nifi/nifi-current/conf
>       volumes:
>       - name: nificonf-data
>         persistentVolumeClaim:
>           claimName: nificonf-claim
> ---
> apiVersion: "autoscaling/v2beta1"
> kind: "HorizontalPodAutoscaler"
> metadata:
>   name: "nificonf-hpa"
>   namespace: "default"
>   labels:
>     app: "nificonf"
> spec:
>   scaleTargetRef:
>     kind: "Deployment"
>     name: "nificonf"
>     apiVersion: "apps/v1"
>   minReplicas: 1
>   maxReplicas: 1
>   metrics:
>   - type: "Resource"
>     resource:
>       name: "cpu"
>       targetAverageUtilization: 80



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (NIFI-6190) identifiesControllerService does not work when not inheriting from nar-bundles

2019-04-08 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-6190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16812443#comment-16812443
 ] 

Michael Moser commented on NIFI-6190:
-

Hello [~EdR].

You should be able to use your company's standard parent pom without problem.  
Just be sure that your local pom.xml references the nifi-nar-maven-plugin, as 
described here [NiFi Developers Guide - 
NARs|http://nifi.apache.org/docs/nifi-docs/html/developer-guide.html#nars]. Be 
sure to use the latest version of nifi-nar-maven-plugin.

In order to use controller services in your custom NiFi components, you must 
reference the NAR that contains the interfaces for those services.  You do this 
by adding the below dependency to the pom.xml that builds your custom NAR.  
This example makes available the services that implement the 
DistributedMapCacheClient, which you used in your example.
{noformat}
<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-standard-services-api-nar</artifactId>
    <version>1.8.0</version>
    <type>nar</type>
</dependency>
{noformat}

> identifiesControllerService does not work when not inheriting from nar-bundles
> --
>
> Key: NIFI-6190
> URL: https://issues.apache.org/jira/browse/NIFI-6190
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Ed R
>Priority: Major
>
> My company requires that we inherit from our standard parent pom, so our 
> custom NiFi processors just bring in the required NiFi components as 
> dependencies. This works fine, except that we cannot integrate with 
> controller services.
> When we have a property that uses identifiesControllerService(), NiFi 
> attempts to find that controller in the custom processor's package and 
> version, and not the package/version of the class passed to that method.
> So if the custom processor is in package "com.company.product" with version 
> 1.0-SNAPSHOT, and it pulls in NiFi 1.8.0 components like 
> DistributedMapCacheClient, the processor builds just fine of course, but when 
> trying to configure an instance of the processor in NiFi's UI, it is unable 
> to find or create any instances of the controller service for that processor 
> because it says it can't find com.company.product.DistributedMapCacheClient 
> 1.0-SNAPSHOT.
> I even tried calling 
> context.getControllerServiceLookup().getControllerServiceIdentifiers(DistributedMapCacheClient.class)
>  and it returns an empty set.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-5660) JMSPublisher should not set header properties directly in the message

2019-02-21 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser resolved NIFI-5660.
-
   Resolution: Fixed
Fix Version/s: 1.10.0

> JMSPublisher should not set header properties directly in the message
> -
>
> Key: NIFI-5660
> URL: https://issues.apache.org/jira/browse/NIFI-5660
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.7.1
>Reporter: Mark Bean
>Assignee: Mark Bean
>Priority: Major
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> JMS clients cannot set most header properties directly in the message, and 
> they have to call other methods to change the default values. Most header 
> properties are set indirectly when the provider code publishes a message. The 
> defaults for QOS properties (delivery mode, expiration and priority) have to 
> be changed by explicit calls to the Spring JMSTemplate class. The only header 
> values that can be set directly by the client code are JMSReplyTo, 
> JMSCorrelationID and JMSType. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-6038) OIDC TokenRequest fails with some OAuth2 providers

2019-02-14 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-6038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-6038:

Status: Patch Available  (was: Open)

> OIDC TokenRequest fails with some OAuth2 providers
> --
>
> Key: NIFI-6038
> URL: https://issues.apache.org/jira/browse/NIFI-6038
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0
>Reporter: Michael Moser
>Assignee: Michael Moser
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I tried to integrate NiFi with a third party OAuth2 provider using OIDC, and 
> I encountered problems. In particular I was working with ForgeRock Access 
> Manager (AM) ([AM OIDC 
> Guide|https://backstage.forgerock.com/docs/am/6/oidc1-guide/]). ForgeRock AM 
> complains that the Access Token Request sent by NiFi incorrectly contains a 
> scope parameter. Apparently it decides not to ignore the extra parameter and 
> fails instead.
> The [RFC-6749|https://tools.ietf.org/html/rfc6749#page-29] and [OAuth2 
> documentation|https://www.oauth.com/oauth2-servers/access-tokens/authorization-code-request/]
>  don't mention using a scope parameter in the Access Token Request.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-6038) OIDC TokenRequest fails with some OAuth2 providers

2019-02-14 Thread Michael Moser (JIRA)
Michael Moser created NIFI-6038:
---

 Summary: OIDC TokenRequest fails with some OAuth2 providers
 Key: NIFI-6038
 URL: https://issues.apache.org/jira/browse/NIFI-6038
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.8.0
Reporter: Michael Moser
Assignee: Michael Moser


I tried to integrate NiFi with a third party OAuth2 provider using OIDC, and I 
encountered problems. In particular I was working with ForgeRock Access Manager 
(AM) ([AM OIDC Guide|https://backstage.forgerock.com/docs/am/6/oidc1-guide/]). 
ForgeRock AM complains that the Access Token Request sent by NiFi incorrectly 
contains a scope parameter. Apparently it decides not to ignore the extra 
parameter and fails instead.

The [RFC-6749|https://tools.ietf.org/html/rfc6749#page-29] and [OAuth2 
documentation|https://www.oauth.com/oauth2-servers/access-tokens/authorization-code-request/]
 don't mention using a scope parameter in the Access Token Request.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5941) LogMessage routes to nonexistent failure when log level is below logback allowed

2019-01-10 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16739545#comment-16739545
 ] 

Michael Moser commented on NIFI-5941:
-

You're correct [~coffeethulhu], it's not a good user experience to require a 
logback.xml change in order to log DEBUG or TRACE level messages from this 
processor.  I would recommend in this case that we change the default 
[logback.xml|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/conf/logback.xml#L89]
 to allow TRACE level messages from both LogMessage and LogAttribute.

> LogMessage routes to nonexistent failure when log level is below logback 
> allowed
> 
>
> Key: NIFI-5941
> URL: https://issues.apache.org/jira/browse/NIFI-5941
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.7.0, 1.8.0, 1.7.1
>Reporter: Matthew Dinep
>Priority: Major
>
> When using the LogMessage processor, if a message is configured to log at a 
> level that is below what is set in logback.xml (for example logging at "info" 
> when the default log level is "warn"), the message doesn't get logged and an 
> error is thrown because the flowfile is unable to be routed to failure (the 
> only available route on the processor is Success). Since this is a user 
> configurable log level for a specific case, the level for the message should 
> be able to override the global log level in logback.xml so as to avoid this 
> behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5941) LogMessage routes to nonexistent failure when log level is below logback allowed

2019-01-09 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16738685#comment-16738685
 ] 

Michael Moser commented on NIFI-5941:
-

This is possibly the same issue as NIFI-5652.  Can you check that out 
[~coffeethulhu], and let us know?  Thank you.

> LogMessage routes to nonexistent failure when log level is below logback 
> allowed
> 
>
> Key: NIFI-5941
> URL: https://issues.apache.org/jira/browse/NIFI-5941
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.7.0, 1.7.1
>Reporter: Matthew Dinep
>Priority: Major
>
> When using the LogMessage processor, if a message is configured to log at a 
> level that is below what is set in logback.xml (for example logging at "info" 
> when the default log level is "warn"), the message doesn't get logged and an 
> error is thrown because the flowfile is unable to be routed to failure (the 
> only available route on the processor is Success). Since this is a user 
> configurable log level for a specific case, the level for the message should 
> be able to override the global log level in logback.xml so as to avoid this 
> behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-5181) Update default Provenance Repository to WriteAheadProvenanceRepository

2018-10-24 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser resolved NIFI-5181.
-
   Resolution: Fixed
Fix Version/s: 1.8.0

Resolved by NIFI-5482

> Update default Provenance Repository to WriteAheadProvenanceRepository
> --
>
> Key: NIFI-5181
> URL: https://issues.apache.org/jira/browse/NIFI-5181
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.8.0
>
>
> The WriteAheadProvenanceRepository was written several releases ago in order 
> to replace the PersistentProvenanceRepository. The 
> PersistentProvenanceRepository remained the default in nifi.properties to 
> ensure that the WriteAheadProvenanceRepository was stable enough before 
> jumping right in. Time has shown that the WriteAheadProvenanceRepository is 
> not only far faster than the old implementation but that it is also more 
> stable. We should update the default now to the 
> WriteAheadProvenanceRepository.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5635) Description PutEmail - multiple senders/recipients

2018-10-07 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-5635:

   Resolution: Fixed
Fix Version/s: 1.8.0
   Status: Resolved  (was: Patch Available)

> Description PutEmail - multiple senders/recipients
> --
>
> Key: NIFI-5635
> URL: https://issues.apache.org/jira/browse/NIFI-5635
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Trivial
> Fix For: 1.8.0
>
>
> Improve the description of properties in the PutEmail processor: multiple 
> senders/recipients can be set when configuring the processor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-5654) NiFi Documentation is shown as insecure in Firefox 62

2018-10-05 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser resolved NIFI-5654.
-
Resolution: Resolved

Will be resolved when version 1.8.0 is released.

> NiFi Documentation is shown as insecure in Firefox 62
> -
>
> Key: NIFI-5654
> URL: https://issues.apache.org/jira/browse/NIFI-5654
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Reporter: Julian Feinauer
>Priority: Major
> Attachments: image-2018-10-01-20-41-16-531.png
>
>
> While browsing the documentation I observed that my browser (Firefox 62) 
> lists the page as insecure when going to https://nifi.apache.org/docs.html 
> and clicking on "User Guide"; see the screenshot attached.
>  !image-2018-10-01-20-41-16-531.png! 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5654) NiFi Documentation is shown as insecure in Firefox 62

2018-10-05 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16639982#comment-16639982
 ] 

Michael Moser commented on NIFI-5654:
-

Thanks for reporting this!  Looks to me like this is caused by a link to a 
missing image.  It's fixed by NIFI-5330.  The fix will be deployed when the 
1.8.0 release is published.

> NiFi Documentation is shown as insecure in Firefox 62
> -
>
> Key: NIFI-5654
> URL: https://issues.apache.org/jira/browse/NIFI-5654
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Reporter: Julian Feinauer
>Priority: Major
> Attachments: image-2018-10-01-20-41-16-531.png
>
>
> While browsing the documentation I observed that my browser (Firefox 62) 
> lists the page as insecure when going to https://nifi.apache.org/docs.html 
> and clicking on "User Guide"; see the screenshot attached.
>  !image-2018-10-01-20-41-16-531.png! 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-3993) Upgrade embedded ZooKeeper version

2018-10-05 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-3993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16639948#comment-16639948
 ] 

Michael Moser commented on NIFI-3993:
-

If it's possible to upgrade the embedded ZK to 3.4.10, yet keep NiFi compatible 
with external ZK 3.4.6, then perhaps that's a valid compromise.

> Upgrade embedded ZooKeeper version
> --
>
> Key: NIFI-3993
> URL: https://issues.apache.org/jira/browse/NIFI-3993
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.2.0
>Reporter: Mark Bean
>Priority: Major
>
> In a Cluster configuration, Nodes are periodically disconnected from the 
> Cluster, and then reconnected. These events correspond to the following error:
> ERROR [CommitProcessor:1] o.apache.zookeeper.server.NIOServerCnxn Unexpected 
> Exception:
> java.nio.channels.CancelledKeyException: null
> at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73)
> at sun.nio.ch.SelectionKeyImpl.interestOps(SelectionKeyImpl.java:77)
> at 
> org.apache.zookeeper.server.NIOServerCnxn.sendBuffer(NIOServerCnxn.java:151)
> at 
> org.apache.zookeeper.server.NIOServerCnxn.sendResponse(NIOServerCnxn.java:1081)
> at 
> org.apache.zookeeper.server.FinalRequestProcessor.processRequest(FinalRequestProcessor.java:404)
> at 
> org.apache.zookeeper.server.quorum.CommitProcessor.run(CommitProcessor.java:74)
> This error was reported in ZooKeeper JIRA [1], and reported as fixed in 
> version 3.4.10, the current stable build. As additional confirmation, when 
> using a stand-alone ZK, 3.4.10, rather than the embedded ZK, the above error 
> was no longer observed.
> Update NiFi to use ZK 3.4.10
> [1] https://issues.apache.org/jira/browse/ZOOKEEPER-2044



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5644) Fix typo in AbstractDatabaseFetchProcessor.java

2018-09-28 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631999#comment-16631999
 ] 

Michael Moser commented on NIFI-5644:
-

Hi [~Diego Queiroz], you can take a look at the Apache NiFi Contributor Guide, 
which describes the process.

[https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide#ContributorGuide-Supplyingacontribution]

I think you want to fork the GitHub apache/nifi repository into your own GitHub 
space, push your code changes to a branch in that space, and then use GitHub to 
submit a pull request from your fork's branch into the apache/nifi master branch.

Thanks for contributing!

> Fix typo in AbstractDatabaseFetchProcessor.java
> ---
>
> Key: NIFI-5644
> URL: https://issues.apache.org/jira/browse/NIFI-5644
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Extensions
>Reporter: Diego Queiroz
>Priority: Trivial
>
> dbAdaper -> dbAdapter



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5598) Allow ConsumeJMS and PublishJMS processors to use JNDI to lookup Connection Factory

2018-09-14 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615012#comment-16615012
 ] 

Michael Moser commented on NIFI-5598:
-

Resolving this issue will likely also resolve NIFI-2701

> Allow ConsumeJMS and PublishJMS processors to use JNDI to lookup Connection 
> Factory
> ---
>
> Key: NIFI-5598
> URL: https://issues.apache.org/jira/browse/NIFI-5598
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-5438) Declare volumes for the NiFi docker container for persistence by default

2018-09-07 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser resolved NIFI-5438.
-
Resolution: Fixed

marking this as Resolved

> Declare volumes for the NiFi docker container for persistence by default
> 
>
> Key: NIFI-5438
> URL: https://issues.apache.org/jira/browse/NIFI-5438
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Docker
>Affects Versions: 1.8.0
>Reporter: Peter Wilcsinszky
>Assignee: Peter Wilcsinszky
>Priority: Major
> Fix For: 1.8.0
>
>
> Volume declarations are missing from the NiFi Dockerfiles.
> *Without* them, all data gets lost after removing the container when no 
> explicit volume declarations were defined.
> *With* volume declarations, all the data on the declared volumes gets 
> persisted on the host without the user having to define any explicit volume 
> declarations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4878) Update Docker docs to include all environment variables used on startup

2018-08-31 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-4878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-4878:

Fix Version/s: (was: 1.5.0)

> Update Docker docs to include all environment variables used on startup
> ---
>
> Key: NIFI-4878
> URL: https://issues.apache.org/jira/browse/NIFI-4878
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Docker, Documentation & Website
>Reporter: Aldrin Piri
>Priority: Major
>
> -NIFI-4824- provided updates to allow specification of the various ports in 
> the container on startup via varying environment variables, to help with the 
> issue of host whitelisting.
> It does not appear this information is readily available in our docs, and it 
> has caused some confusion for users when trying to connect to an instance.  We 
> need to update the docs to enumerate these scenarios as well as the 
> environment variables that are anticipated.
> We could additionally enhance the experience by performing some logical 
> checks.  One example could be verifying that if a custom port is set and we 
> are running secure, there is also an environment variable provided.  
> Additionally, providing variables that would be unused, such as 
> NIFI_WEB_HTTP_PORT when we are running secure, could also cause an 
> error/warning. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4426) Remove custom jBCrypt implementation because Java 7 is no longer supported

2018-08-30 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16597852#comment-16597852
 ] 

Michael Moser commented on NIFI-4426:
-

Very minor nit, but is there another BCrypt.java to take 
care of in 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/pom.xml?

> Remove custom jBCrypt implementation because Java 7 is no longer supported
> --
>
> Key: NIFI-4426
> URL: https://issues.apache.org/jira/browse/NIFI-4426
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Andy LoPresto
>Assignee: Nathan Gough
>Priority: Minor
>  Labels: security
> Fix For: 1.8.0
>
> Attachments: Screen Shot 2018-08-28 at 6.38.16 PM.png, Screen Shot 
> 2018-08-28 at 6.38.31 PM.png
>
>
> The {{jBCrypt}} library is included and slightly modified in order to provide 
> Java 7 compatibility because the external module is compiled for Java 8. Now 
> that NiFi doesn't support Java 7, this modification can be removed and the 
> standalone module can be depended upon via Maven as per normal. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5060) UpdateRecord substringAfter and substringAfterLast only increments by 1

2018-08-28 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-5060:

   Resolution: Fixed
Fix Version/s: 1.7.0
   Status: Resolved  (was: Patch Available)

> UpdateRecord substringAfter and substringAfterLast only increments by 1
> ---
>
> Key: NIFI-5060
> URL: https://issues.apache.org/jira/browse/NIFI-5060
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Chris Green
>Priority: Major
>  Labels: easyfix, newbie
> Fix For: 1.7.0
>
> Attachments: Validate_substringafter_Behavior.xml
>
>
> This is my first submitted issue, so please feel free to point me in the 
> correct direction if I make process mistakes.
> Replication:
> Drag a GenerateFlowFile onto the canvas and configure this property, and set 
> run schedule to some high value like 600 seconds
> "Custom Text" \{"value": "01230123"}
> Connect GenerateFlowFile with an UpdateAttribute set to add the attribute 
> "avro.schema" with a value of 
>  
> {code:java}
> { 
> "type": "record", 
> "name": "test", 
> "fields" : [{"name": "value", "type": "string"}]
> }
> {code}
>  
>  
> Connect UpdateAttribute to an UpdateRecord onto the canvas, Autoterminate 
> success and failure. Set the Record Reader to a new JSONTreeReader. On the 
> JsonTreeReader configure it to use the "Use 'Schema Text' Attribute".
> Create a JsonRecordSetWriter and set the Schema Text to:
>  
>  
> {code:java}
> { 
> "type": "record", 
> "name": "test", 
> "fields" : [
> {"name": "value", "type": "string"},
> {"name": "example1", "type": "string"},
> {"name": "example2", "type": "string"},
> {"name": "example3", "type": "string"},
> {"name": "example4", "type": "string"}
> ]
>  }
> {code}
>  
> Add the following properties to UpdateRecord
>  
> ||Property (record path)||Value (expression)||
> |/example1|substringAfter(/value, "1") |
> |/example2|substringAfter(/value, "123") |
> |/example3|substringAfterLast(/value, "1")|
> |/example4|substringAfterLast(/value, "123")|
>  
> Resulting record currently:
>  
> {code:java}
> [{ 
> "value" : "01230123", 
> "example1" : "230123", 
> "example2" : "30123", 
> "example3" : "23", 
> "example4" : "3" 
> }]
> {code}
>  
>  
>  
> Problem:
> When using the UpdateRecord processor with the substringAfter() function, 
> once the search phrase is found the returned substring only advances by 1 
> character rather than by the length of the search term. 
> Based on XPath and other implementations of substringAfter functions I've 
> seen, the returned value should remove the whole search term rather than just 
> the first character of the search term.
>  
>  
> Resulting record should be:
>  
> {code:java}
> [{ 
> "value" : "01230123", 
> "example1" : "230123", 
> "example2" : "0123", 
> "example3" : "23", 
> "example4" : "" 
> }]
> {code}
>  
>  
> I'm cleaning up a fix with test code that will change the increment from 1 to 
> the length of the search term. 
> It appears substringBefore is not impacted by this behavior, as it always 
> returns the index before the found search term, which is the expected behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-813) UpdateAttribute does not have a failure relationship for handling EL failures

2018-08-14 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16580442#comment-16580442
 ] 

Michael Moser commented on NIFI-813:


Right on, hehe.  This ticket was first!

> UpdateAttribute does not have a failure relationship for handling EL failures
> -
>
> Key: NIFI-813
> URL: https://issues.apache.org/jira/browse/NIFI-813
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions
>Affects Versions: 0.2.1
>Reporter: Aldrin Piri
>Priority: Major
> Attachments: NIFI-813_Example.xml
>
>
> UpdateAttribute's genesis predates expression language, and as originally 
> created would have made failure improbable (impossible?) in context of 
> typical processing.  As a result, when EL was introduced, there was also a 
> means wherein the processor could fail.  Currently, there is no failure 
> relationship.  As a result, the only way to remove files that are failing due 
> to an EL issue and placed back onto the queue is through an expiration on 
> the incoming connection. One particular instance is that of formatting a bad 
> date input.  Template showing example to follow.
> It could be the case this is a deficiency in just this particular EL 
> function, but should EL become an extension point, it is entirely conceivable 
> that failure cases become an occurrence that needs to be handled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-813) UpdateAttribute does not have a failure relationship for handling EL failures

2018-08-14 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16580305#comment-16580305
 ] 

Michael Moser commented on NIFI-813:


[~aldrin] I think this ticket should be resolved as Duplicate (or something) 
due to the work on NIFI-5448.  Do you concur?

> UpdateAttribute does not have a failure relationship for handling EL failures
> -
>
> Key: NIFI-813
> URL: https://issues.apache.org/jira/browse/NIFI-813
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Extensions
>Affects Versions: 0.2.1
>Reporter: Aldrin Piri
>Priority: Major
> Attachments: NIFI-813_Example.xml
>
>
> UpdateAttribute's genesis predates expression language, and as originally 
> created would have made failure improbable (impossible?) in context of 
> typical processing.  As a result, when EL was introduced, there was also a 
> means wherein the processor could fail.  Currently, there is no failure 
> relationship.  As a result, the only way to remove files that are failing due 
> to an EL issue and placed back onto the queue is through an expiration on 
> the incoming connection. One particular instance is that of formatting a bad 
> date input.  Template showing example to follow.
> It could be the case this is a deficiency in just this particular EL 
> function, but should EL become an extension point, it is entirely conceivable 
> that failure cases become an occurrence that needs to be handled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-3672) Older versions of IBM MQ allowed integer value to be set as String

2018-08-14 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-3672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16580189#comment-16580189
 ] 

Michael Moser commented on NIFI-3672:
-

I just realized I forgot to write documentation for this feature.  I'll do that 
soon.

> Older versions of IBM MQ allowed integer value to be set as String
> --
>
> Key: NIFI-3672
> URL: https://issues.apache.org/jira/browse/NIFI-3672
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.1.1
>Reporter: Andy LoPresto
>Assignee: Michael Moser
>Priority: Minor
>  Labels: beginner, jms, third-party
>
> As reported in this [Stack Overflow 
> question|https://stackoverflow.com/questions/43213416/publishjms-processor-failing-for-writing-message-to-ibm-websphere-mq],
>  the {{PublishJMS}} processor was failing to connect to an IBM MQ JMS server 
> with the exception {{com.ibm.msg.client.jms.DetailedMessageFormatException: 
> JMSCC0051: The property 'JMS_IBM_MsgType' should be set using type 
> 'java.lang.Integer', not 'java.lang.String'.}} As noted in the answer, this 
> is a [known IBM issue 
> IT02814|https://www-01.ibm.com/support/docview.wss?uid=swg1IT02814] in which 
> older (pre-7.0) versions allowed integer properties to be set as a 
> {{java.lang.String}} but new versions require it to be a 
> {{java.lang.Integer}}. The property descriptor should be changed to validate 
> an integer value and return it with the correct type when setting the 
> configuration. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-3672) Older versions of IBM MQ allowed integer value to be set as String

2018-08-14 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-3672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-3672:

Status: Patch Available  (was: Open)

> Older versions of IBM MQ allowed integer value to be set as String
> --
>
> Key: NIFI-3672
> URL: https://issues.apache.org/jira/browse/NIFI-3672
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.1.1
>Reporter: Andy LoPresto
>Assignee: Michael Moser
>Priority: Minor
>  Labels: beginner, jms, third-party
>
> As reported in this [Stack Overflow 
> question|https://stackoverflow.com/questions/43213416/publishjms-processor-failing-for-writing-message-to-ibm-websphere-mq],
>  the {{PublishJMS}} processor was failing to connect to an IBM MQ JMS server 
> with the exception {{com.ibm.msg.client.jms.DetailedMessageFormatException: 
> JMSCC0051: The property 'JMS_IBM_MsgType' should be set using type 
> 'java.lang.Integer', not 'java.lang.String'.}} As noted in the answer, this 
> is a [known IBM issue 
> IT02814|https://www-01.ibm.com/support/docview.wss?uid=swg1IT02814] in which 
> older (pre-7.0) versions allowed integer properties to be set as a 
> {{java.lang.String}} but new versions require it to be a 
> {{java.lang.Integer}}. The property descriptor should be changed to validate 
> an integer value and return it with the correct type when setting the 
> configuration. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (NIFI-3672) Older versions of IBM MQ allowed integer value to be set as String

2018-08-14 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-3672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser reassigned NIFI-3672:
---

Assignee: Michael Moser

> Older versions of IBM MQ allowed integer value to be set as String
> --
>
> Key: NIFI-3672
> URL: https://issues.apache.org/jira/browse/NIFI-3672
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.1.1
>Reporter: Andy LoPresto
>Assignee: Michael Moser
>Priority: Minor
>  Labels: beginner, jms, third-party
>
> As reported in this [Stack Overflow 
> question|https://stackoverflow.com/questions/43213416/publishjms-processor-failing-for-writing-message-to-ibm-websphere-mq],
>  the {{PublishJMS}} processor was failing to connect to an IBM MQ JMS server 
> with the exception {{com.ibm.msg.client.jms.DetailedMessageFormatException: 
> JMSCC0051: The property 'JMS_IBM_MsgType' should be set using type 
> 'java.lang.Integer', not 'java.lang.String'.}} As noted in the answer, this 
> is a [known IBM issue 
> IT02814|https://www-01.ibm.com/support/docview.wss?uid=swg1IT02814] in which 
> older (pre-7.0) versions allowed integer properties to be set as a 
> {{java.lang.String}} but new versions require it to be a 
> {{java.lang.Integer}}. The property descriptor should be changed to validate 
> an integer value and return it with the correct type when setting the 
> configuration. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-3993) Upgrade embedded ZooKeeper version

2018-08-14 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-3993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16580091#comment-16580091
 ] 

Michael Moser commented on NIFI-3993:
-

There are a couple of CVEs against ZK 3.4.6 which are resolved in 3.4.10, so 
it's probably a good idea to give this some serious attention.

> Upgrade embedded ZooKeeper version
> --
>
> Key: NIFI-3993
> URL: https://issues.apache.org/jira/browse/NIFI-3993
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.2.0
>Reporter: Mark Bean
>Priority: Major
>
> In a Cluster configuration, Nodes are periodically disconnected from the 
> Cluster, and then reconnected. These events correspond to the following error:
> ERROR [CommitProcessor:1] o.apache.zookeeper.server.NIOServerCnxn Unexpected 
> Exception:
> java.nio.channels.CancelledKeyException: null
> at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73)
> at sun.nio.ch.SelectionKeyImpl.interestOps(SelectionKeyImpl.java:77)
> at 
> org.apache.zookeeper.server.NIOServerCnxn.sendBuffer(NIOServerCnxn.java:151)
> at 
> org.apache.zookeeper.server.NIOServerCnxn.sendResponse(NIOServerCnxn.java:1081)
> at 
> org.apache.zookeeper.server.FinalRequestProcessor.processRequest(FinalRequestProcessor.java:404)
> at 
> org.apache.zookeeper.server.quorum.CommitProcessor.run(CommitProcessor.java:74)
> This error was reported in ZooKeeper JIRA [1], and reported as fixed in 
> version 3.4.10, the current stable build. As additional confirmation, when 
> using a stand-alone ZK, 3.4.10, rather than the embedded ZK, the above error 
> was no longer observed.
> Update NiFi to use ZK 3.4.10
> [1] https://issues.apache.org/jira/browse/ZOOKEEPER-2044



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-5489) Support Attribute Expressions with AMQP Processors

2018-08-09 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser resolved NIFI-5489.
-
   Resolution: Fixed
Fix Version/s: 1.8.0

> Support Attribute Expressions with AMQP Processors
> --
>
> Key: NIFI-5489
> URL: https://issues.apache.org/jira/browse/NIFI-5489
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.7.1
>Reporter: Daniel
>Priority: Major
> Fix For: 1.8.0
>
>
> Particularly the fields: host, virtualhost and username.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-5118) AMQP Consumers/Processors add support for Nifi Expression Language

2018-08-09 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575232#comment-16575232
 ] 

Michael Moser edited comment on NIFI-5118 at 8/9/18 6:12 PM:
-

Resolved by NIFI-5489


was (Author: mosermw):
Resolved by NIFI-5489
[|https://issues.apache.org/jira/secure/AddComment!default.jspa?id=13127031]

> AMQP Consumers/Processors add support for Nifi Expression Language
> --
>
> Key: NIFI-5118
> URL: https://issues.apache.org/jira/browse/NIFI-5118
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Edward Armes
>Priority: Minor
> Fix For: 1.8.0
>
>
> The AMQP Consumer and Producer processors don't currently support the NiFi 
> Expression Language; this prevents them from using the variable registry or 
> service components that provide configuration properties.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-5118) AMQP Consumers/Processors add support for Nifi Expression Language

2018-08-09 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser resolved NIFI-5118.
-
   Resolution: Duplicate
Fix Version/s: 1.8.0

Resolved by NIFI-5489
[|https://issues.apache.org/jira/secure/AddComment!default.jspa?id=13127031]

> AMQP Consumers/Processors add support for Nifi Expression Language
> --
>
> Key: NIFI-5118
> URL: https://issues.apache.org/jira/browse/NIFI-5118
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Edward Armes
>Priority: Minor
> Fix For: 1.8.0
>
>
> The AMQP Consumer and Producer processors don't currently support the NiFi 
> Expression Language; this prevents them from using the variable registry or 
> service components that provide configuration properties.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-4723) PublishAMQP 1.4.0 support Expression Language

2018-08-09 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-4723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser resolved NIFI-4723.
-
   Resolution: Duplicate
Fix Version/s: 1.8.0

Resolved by NIFI-5489

> PublishAMQP 1.4.0 support Expression Language 
> --
>
> Key: NIFI-4723
> URL: https://issues.apache.org/jira/browse/NIFI-4723
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Examples, Extensions
>Reporter: bat-hen
>Priority: Major
> Fix For: 1.8.0
>
>
> Hi,
> there is no support for Expression Language in PublishAMQP for:
> Host Name
> Port
> User Name
> Password
> We want to do a deployment with a template and we can't use Expression 
> Language with RabbitMQ.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5489) Support Attribute Expressions with AMQP Processors

2018-08-07 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16572090#comment-16572090
 ] 

Michael Moser commented on NIFI-5489:
-

See also NIFI-4723 and NIFI-5118

> Support Attribute Expressions with AMQP Processors
> --
>
> Key: NIFI-5489
> URL: https://issues.apache.org/jira/browse/NIFI-5489
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.7.1
>Reporter: Daniel
>Priority: Major
>
> Particularly the fields: host, virtualhost and username.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-5350) Add a way to provide arbitrary Java options in shell scripts

2018-08-06 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser resolved NIFI-5350.
-
   Resolution: Fixed
Fix Version/s: 1.8.0

> Add a way to provide arbitrary Java options in shell scripts
> 
>
> Key: NIFI-5350
> URL: https://issues.apache.org/jira/browse/NIFI-5350
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.7.0
>Reporter: Lars Francke
>Assignee: Lars Francke
>Priority: Minor
> Fix For: 1.8.0
>
>
> I wanted to change the location of the bootstrap config file which can be 
> done using the System property {{org.apache.nifi.bootstrap.config.file}}. 
> Unfortunately there's no easy way to set that using the default {{nifi.sh}} 
> script.
> It can be done using the {{BOOTSTRAP_DEBUG_PARAMS}} environment variable but 
> that doesn't feel right and can break if anyone actually uses that variable.
> I suggest adding an optional environment variable {{BOOTSTRAP_JAVA_OPTS}} 
> that can be used to pass in extra options to Java



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-3531) modify session.recover behaviour in nifi-jms-processors to cope with high-traffic JMS

2018-08-01 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-3531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-3531:

Assignee: Michael Moser
  Status: Patch Available  (was: Open)

> modify session.recover behaviour in nifi-jms-processors to cope with 
> high-traffic JMS
> -
>
> Key: NIFI-3531
> URL: https://issues.apache.org/jira/browse/NIFI-3531
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Dominik Benz
>Assignee: Michael Moser
>Priority: Major
>
> As described in this mailing list post
> http://apache-nifi-developer-list.39713.n7.nabble.com/session-recover-behaviour-in-nifi-jms-processor-td14940.html
> the current implementation of nifi-jms-processor, especially of JMSConsumer
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSConsumer.java
> causes frequent re-delivery of JMS messages due to the following behaviour:
> 1) several threads perform the JMS session callback in parallel
> 2) each callback performs a session.recover
> 3) during high traffic, the situation arises that the ACKs from another 
> thread may not (yet) have arrived at the JMS server
> 4) this implies that the pointer of session.recover will reconsume the 
> not-yet-acked message from another thread 
> I understood Nifi prefers message duplication over message loss - however, in 
> our case the number of re-delivered messages is "piling up" over time, 
> leading to a growing number of un-acked messages in JMS and finally to 
> storage problems on the JMS server side.
> It would be great to modify the nifi-jms-processor package to reliably cope 
> with higher-traffic JMS sources. Ideas from my side would be / include:
> 1) make synchronous / asynchronous delivery configurable
> 2) add configuration option for custom ACKing modes (i.e. non-standard JMS 
> modes like individual ACKing or NO_ACK)
> 3) add configuration option for JMS message selectors
> From other projects, I found the JMS communication options in this 
> spark-jms-receiver package
>   https://github.com/tbfenet/spark-jms-receiver
> very helpful (in which situations are synchronous / asynchronous approaches 
> reliable, how to buffer messages before acking them, how/when to ack, ...). 
> Here's also a java port of a subset:
>   https://github.com/bernhardschaefer/spark-jms-receiver-java
> For any mentioned option I'm also happy to help!
> Best & thanks for a great piece of software,
>   Dominik



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-3531) modify session.recover behaviour in nifi-jms-processors to cope with high-traffic JMS

2018-07-30 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-3531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16562370#comment-16562370
 ] 

Michael Moser commented on NIFI-3531:
-

I have another scenario where the session.recover() causes problems. If the 
consumer has prefetch enabled, then the call to session.recover() causes 
messages sent but not acknowledged to be marked as "redelivered". This can 
cause JMS providers to retransmit these messages to the client.  Also, some 
providers limit the number of times a message can be redelivered, so a message 
could actually be lost.

I'm having a really hard time justifying the call to session.recover() before 
each call to consumer.receive().  It seems to cause known problems rather than 
limit possible message loss.

> modify session.recover behaviour in nifi-jms-processors to cope with 
> high-traffic JMS
> -
>
> Key: NIFI-3531
> URL: https://issues.apache.org/jira/browse/NIFI-3531
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Dominik Benz
>Priority: Major
>
> As described in this mailing list post
> http://apache-nifi-developer-list.39713.n7.nabble.com/session-recover-behaviour-in-nifi-jms-processor-td14940.html
> the current implementation of nifi-jms-processor, especially of JMSConsumer
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/JMSConsumer.java
> causes frequent re-delivery of JMS messages due to the following behaviour:
> 1) several threads perform the JMS session callback in parallel
> 2) each callback performs a session.recover
> 3) during high traffic, the situation arises that the ACKs from another 
> thread may not (yet) have arrived at the JMS server
> 4) this implies that the pointer of session.recover will reconsume the 
> not-yet-acked message from another thread 
> I understood Nifi prefers message duplication over message loss - however, in 
> our case the number of re-delivered messages is "piling up" over time, 
> leading to a growing number of un-acked messages in JMS and finally to 
> storage problems on the JMS server side.
> It would be great to modify the nifi-jms-processor package to reliably cope 
> with higher-traffic JMS sources. Ideas from my side would be / include:
> 1) make synchronous / asynchronous delivery configurable
> 2) add configuration option for custom ACKing modes (i.e. non-standard JMS 
> modes like individual ACKing or NO_ACK)
> 3) add configuration option for JMS message selectors
> From other projects, I found the JMS communication options in this 
> spark-jms-receiver package
>   https://github.com/tbfenet/spark-jms-receiver
> very helpful (in which situations are synchronous / asynchronous approaches 
> reliable, how to buffer messages before acking them, how/when to ack, ...). 
> Here's also a java port of a subset:
>   https://github.com/bernhardschaefer/spark-jms-receiver-java
> For any mentioned option I'm also happy to help!
> Best & thanks for a great piece of software,
>   Dominik



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5451) Modify NiFiGroovyTest.groovy to avoid needing unlimited strength JCE policy

2018-07-25 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-5451:

   Resolution: Fixed
Fix Version/s: 1.8.0
   Status: Resolved  (was: Patch Available)

Thank you [~alopresto]!

> Modify NiFiGroovyTest.groovy to avoid needing unlimited strength JCE policy
> ---
>
> Key: NIFI-5451
> URL: https://issues.apache.org/jira/browse/NIFI-5451
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0
>Reporter: Michael Moser
>Assignee: Andy LoPresto
>Priority: Minor
>  Labels: encryption, jce_policy
> Fix For: 1.8.0
>
>
> When running the NiFiGroovyTest unit test with a JDK that lacks the unlimited 
> strength JCE policy, the test fails.
>  
> [ERROR] 
> testInitializePropertiesShouldSetBootstrapKeyFromFile(org.apache.nifi.NiFiGroovyTest)
>   Time elapsed: 0.121 s  <<< ERROR!
> java.lang.IllegalArgumentException: There was an issue decrypting protected 
> properties
> at 
> org.apache.nifi.NiFiGroovyTest.testInitializePropertiesShouldSetBootstrapKeyFromFile(NiFiGroovyTest.groovy:166)
> Caused by: org.apache.nifi.properties.SensitivePropertyProtectionException: 
> The key must be a valid hexadecimal key
> at 
> org.apache.nifi.NiFiGroovyTest.testInitializePropertiesShouldSetBootstrapKeyFromFile(NiFiGroovyTest.groovy:166)
>  
> The test uses a nifi.properties resource that does aes/gcm/256 encryption.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5451) Modify NiFiGroovyTest.groovy to avoid needing unlimited strength JCE policy

2018-07-24 Thread Michael Moser (JIRA)
Michael Moser created NIFI-5451:
---

 Summary: Modify NiFiGroovyTest.groovy to avoid needing unlimited 
strength JCE policy
 Key: NIFI-5451
 URL: https://issues.apache.org/jira/browse/NIFI-5451
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Reporter: Michael Moser


When running the NiFiGroovyTest unit test with a JDK that lacks the unlimited 
strength JCE policy, the test fails.

 
[ERROR] 
testInitializePropertiesShouldSetBootstrapKeyFromFile(org.apache.nifi.NiFiGroovyTest)
  Time elapsed: 0.121 s  <<< ERROR!
java.lang.IllegalArgumentException: There was an issue decrypting protected 
properties
at 
org.apache.nifi.NiFiGroovyTest.testInitializePropertiesShouldSetBootstrapKeyFromFile(NiFiGroovyTest.groovy:166)
Caused by: org.apache.nifi.properties.SensitivePropertyProtectionException: The 
key must be a valid hexadecimal key
at 
org.apache.nifi.NiFiGroovyTest.testInitializePropertiesShouldSetBootstrapKeyFromFile(NiFiGroovyTest.groovy:166)
 
The test uses a nifi.properties resource that is protected with AES/GCM 256-bit encryption.
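
One common way for a test to detect a restricted JCE policy at runtime (a sketch of the general technique, not necessarily the change that was applied here) is to query the maximum allowed AES key length and skip or adjust the test accordingly:
{code:java}
import javax.crypto.Cipher;

public class JcePolicyCheck {
    public static void main(String[] args) throws Exception {
        // Integer.MAX_VALUE indicates the unlimited strength policy is installed;
        // 128 indicates the restricted default policy.
        int maxAesKeyLength = Cipher.getMaxAllowedKeyLength("AES");
        boolean unlimited = maxAesKeyLength >= 256;
        System.out.println("Max AES key length: " + maxAesKeyLength + ", unlimited JCE: " + unlimited);
    }
}
{code}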



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5400) NiFiHostnameVerifier should be replaced

2018-07-24 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16554255#comment-16554255
 ] 

Michael Moser commented on NIFI-5400:
-

Sorry, I thought this was related to the recent discussions on the DEV list 
about wildcard SSL certificate restrictions for configurations of the REST API 
and UI.  I withdraw the comment for this ticket.

> NiFiHostnameVerifier should be replaced
> ---
>
> Key: NIFI-5400
> URL: https://issues.apache.org/jira/browse/NIFI-5400
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.7.0
>Reporter: Andy LoPresto
>Priority: Major
>  Labels: certificate, hostname, security, tls
>
> The {{NiFiHostnameVerifier}} does not handle wildcard certificates or complex 
> {{SubjectAlternativeNames}}. It should be replaced with a more full-featured 
> implementation, like {{OkHostnameVerifier}} from {{okhttp}} or 
> {{DefaultHostnameVerifier}} from {{http-client}}. Either of these options 
> requires introducing a new Maven dependency to {{nifi-commons}} and requires 
> further investigation. 
> *Note:* the {{sun.net.www.protocol.https.DefaultHostnameVerifier}} simply 
> returns {{false}} on all inputs and is not a valid solution. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5400) NiFiHostnameVerifier should be replaced

2018-07-23 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553237#comment-16553237
 ] 

Michael Moser commented on NIFI-5400:
-

I think it would also be valuable to allow NiFi users to create and inject 
their own HostnameVerifier should they decide that is necessary.
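
For illustration, injecting a user-supplied verifier is straightforward with the standard javax.net.ssl API; a minimal sketch (the host names and wildcard rule below are hypothetical, not NiFi code):
{code:java}
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLSession;
import java.net.URL;

public class CustomVerifierExample {
    public static void main(String[] args) throws Exception {
        // A user-supplied rule: accept the exact host or any host under example.com.
        HostnameVerifier verifier = (String hostname, SSLSession session) ->
                hostname.equals("nifi.example.com") || hostname.endsWith(".example.com");

        HttpsURLConnection connection =
                (HttpsURLConnection) new URL("https://nifi.example.com:8443/nifi").openConnection();
        connection.setHostnameVerifier(verifier);
        connection.connect();
        System.out.println("Response code: " + connection.getResponseCode());
        connection.disconnect();
    }
}
{code}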

> NiFiHostnameVerifier should be replaced
> ---
>
> Key: NIFI-5400
> URL: https://issues.apache.org/jira/browse/NIFI-5400
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.7.0
>Reporter: Andy LoPresto
>Priority: Major
>  Labels: certificate, hostname, security, tls
>
> The {{NiFiHostnameVerifier}} does not handle wildcard certificates or complex 
> {{SubjectAlternativeNames}}. It should be replaced with a more full-featured 
> implementation, like {{OkHostnameVerifier}} from {{okhttp}} or 
> {{DefaultHostnameVerifier}} from {{http-client}}. Either of these options 
> requires introducing a new Maven dependency to {{nifi-commons}} and requires 
> further investigation. 
> *Note:* the {{sun.net.www.protocol.https.DefaultHostnameVerifier}} simply 
> returns {{false}} on all inputs and is not a valid solution. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-3672) Older versions of IBM MQ allowed integer value to be set as String

2018-07-13 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-3672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543624#comment-16543624
 ] 

Michael Moser commented on NIFI-3672:
-

+1 for adding this feature to PublishJMS.

The deprecated PutJMS processor supported the {{javax.jms.Message}} methods 
{{setIntProperty}}, {{setBooleanProperty}}, {{setShortProperty}}, 
{{setLongProperty}}, {{setByteProperty}}, {{setDoubleProperty}} and 
{{setFloatProperty}}.  PublishJMS could also support them.
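
As a sketch of the idea in plain JMS terms (not the PublishJMS code itself), the publisher could pick the setter based on a declared type for each attribute; the per-attribute type mapping here is hypothetical:
{code:java}
import javax.jms.JMSException;
import javax.jms.Message;

public class TypedPropertySetter {

    // Sets a JMS property using the requested type; "type" would come from
    // processor configuration, e.g. a per-attribute type mapping (hypothetical).
    public static void setProperty(Message message, String name, String value, String type)
            throws JMSException {
        switch (type) {
            case "int":     message.setIntProperty(name, Integer.parseInt(value)); break;
            case "long":    message.setLongProperty(name, Long.parseLong(value)); break;
            case "short":   message.setShortProperty(name, Short.parseShort(value)); break;
            case "byte":    message.setByteProperty(name, Byte.parseByte(value)); break;
            case "double":  message.setDoubleProperty(name, Double.parseDouble(value)); break;
            case "float":   message.setFloatProperty(name, Float.parseFloat(value)); break;
            case "boolean": message.setBooleanProperty(name, Boolean.parseBoolean(value)); break;
            default:        message.setStringProperty(name, value);
        }
    }
}
{code}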

> Older versions of IBM MQ allowed integer value to be set as String
> --
>
> Key: NIFI-3672
> URL: https://issues.apache.org/jira/browse/NIFI-3672
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.1.1
>Reporter: Andy LoPresto
>Priority: Minor
>  Labels: beginner, jms, third-party
>
> As reported in this [Stack Overflow 
> question|https://stackoverflow.com/questions/43213416/publishjms-processor-failing-for-writing-message-to-ibm-websphere-mq],
>  the {{PublishJMS}} processor was failing to connect to an IBM MQ JMS server 
> with the exception {{com.ibm.msg.client.jms.DetailedMessageFormatException: 
> JMSCC0051: The property 'JMS_IBM_MsgType' should be set using type 
> 'java.lang.Integer', not 'java.lang.String'.}} As noted in the answer, this 
> is a [known IBM issue 
> IT02814|https://www-01.ibm.com/support/docview.wss?uid=swg1IT02814] in which 
> older (pre-7.0) versions allowed integer properties to be set as a 
> {{java.lang.String}} but new versions require it to be a 
> {{java.lang.Integer}}. The property descriptor should be changed to validate 
> an integer value and return it with the correct type when setting the 
> configuration. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5196) AbstractJMSProcessor can leave connection hanging open

2018-07-13 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543485#comment-16543485
 ] 

Michael Moser commented on NIFI-5196:
-

I have another scenario when this happens.  If the framework throws an 
exception from the ProcessSession commit() method, then that exception 
propagates out of  rendezvousWithJms() and the AbstractJMSProcessor 
onTrigger().  Situations like an OutOfMemoryError or IOException: No space left 
on device can cause this.

In this situation, I don't think we should close and reopen connections to the 
JMS hub in each onTrigger.  So I propose a solution like this:
{code:java}
@Override
public void onTrigger(ProcessContext context, ProcessSession session) throws 
ProcessException {
T worker = workerPool.poll();
if (worker == null) {
worker = buildTargetResource(context);
}

try {
rendezvousWithJms(context, session, worker);
}
finally {
workerPool.offer(worker);
}
}
{code}
In the case where OOM or out of disk space occurs, the valid connection simply 
remains open. In the case where the connection configuration is wrong, the NiFi 
manager should be able to notice the problem and stop the processor, which will 
call the @OnStopped method and close the bad connection.

How does this sound?

> AbstractJMSProcessor can leave connection hanging open
> --
>
> Key: NIFI-5196
> URL: https://issues.apache.org/jira/browse/NIFI-5196
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: Nick Coleman
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>  Labels: JMS
>
> ConsumeJMS and PublishJMS are based on AbstractJMSProcessor.  They can cause 
> a connection to the MQ Server to be opened and not be closed until the NiFi 
> server is rebooted.
> This can create a problem for an MQ when the initial setup entered is wrong 
> for an IBM MQ system that only allows one connection per user.  Subsequent 
> connections are blocked as the first remains open.  Another potential problem 
> even if the subsequent connection works is the original connection is still 
> open and taking up resources.
> A simple change to the AbstractJMSProcessor would be in the onTrigger() 
> function:
>  
> {code:java}
> @Override
> public void onTrigger(ProcessContext context, ProcessSession session) throws 
> ProcessException {
> T worker = workerPool.poll();
> if (worker == null) {
> worker = buildTargetResource(context);
> }
> boolean offered = false;
> try {
> rendezvousWithJms(context, session, worker);
> offered = workerPool.offer(worker);
> }
> finally {
> if (!offered) {
> worker.shutdown();
> }
> }
> }{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5275) PostHTTP - Hung connections and zero reuse of existing connections

2018-07-11 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-5275:

Status: Patch Available  (was: Open)

> PostHTTP - Hung connections and zero reuse of existing connections
> --
>
> Key: NIFI-5275
> URL: https://issues.apache.org/jira/browse/NIFI-5275
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: Steven Youtsey
>Assignee: Michael Moser
>Priority: Major
>
> Connection setups, the HEAD request, and the DELETE request do not have any 
> timeout associated with them. When the remote server goes sideways, these 
> actions will wait indefinitely and appear as being hung. See 
> https://issues.apache.org/jira/browse/HTTPCLIENT-1892 for an explanation as 
> to why the initial connection setups are not timing out.
> Connections, though pooled, are not being re-used. A new connection is 
> established for every POST. This creates a burden on highly loaded remote 
> listener servers. Verified by both netstat and turning on Debug for 
> org.apache.http.impl.conn.
>  
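
For reference, the usual way to bound these operations with Apache HttpClient 4.x is a RequestConfig applied to the client; a generic sketch (the timeout values are arbitrary):
{code:java}
import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class TimeoutConfigExample {
    public static CloseableHttpClient buildClient(int timeoutMillis) {
        RequestConfig config = RequestConfig.custom()
                .setConnectTimeout(timeoutMillis)           // TCP connect
                .setSocketTimeout(timeoutMillis)            // time between data packets
                .setConnectionRequestTimeout(timeoutMillis) // wait for a pooled connection
                .build();
        // Every request issued by this client (HEAD, POST, DELETE, ...) inherits these limits.
        return HttpClients.custom().setDefaultRequestConfig(config).build();
    }
}
{code}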



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-1336) PostHTTP does not route to failure in case of Connection failure

2018-06-15 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-1336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514143#comment-16514143
 ] 

Michael Moser edited comment on NIFI-1336 at 6/15/18 5:38 PM:
--

I think I may have resolved this by using a retry handler and better connection 
reuse (POST and DELETE use the same connection) in NIFI-5275

[https://github.com/apache/nifi/pull/2796]

 


was (Author: mosermw):
I think I map have resolved this by using a retry handler and better connection 
reuse (POST and DELETE use the same connection) in NIFI-5275

[https://github.com/apache/nifi/pull/2796]

 

> PostHTTP does not route to failure in case of Connection failure
> 
>
> Key: NIFI-1336
> URL: https://issues.apache.org/jira/browse/NIFI-1336
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Andre F de Miranda
>Priority: Major
>
> When unable to communicate with a remote NiFi instance, PostHTTP continually 
> rolls back the session (without penalizing) instead of routing to failure. 
> This results in filling the logs with messages like:
> 2015-12-29 14:09:02,023 WARN [Timer-Driven Process Thread-11] 
> o.a.nifi.processors.standard.PostHTTP 
> PostHTTP[id=015d5ace-3aa3-3e51-8aae-4776d3bd3897] Failed to delete Hold that 
> destination placed on 
> [StandardFlowFileRecord[uuid=220fbcac-d84e-41ed-9cc7-6e558ea207ee,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1451397424256-4, container=pub1, 
> section=4], offset=52418, 
> length=19672005],offset=0,name=GeoLite2-City.mmdb.gz,size=19672005]] due to 
> org.apache.http.conn.HttpHostConnectException: Connect to processing-3:5000 
> [processing-3/172.172.172.172] failed: Connection refused: 
> org.apache.http.conn.HttpHostConnectException: Connect to processing-3:5000 
> [processing-3.demo.onyara.com/172.172.172.172] failed: Connection refused



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-1336) PostHTTP does not route to failure in case of Connection failure

2018-06-15 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-1336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16514143#comment-16514143
 ] 

Michael Moser commented on NIFI-1336:
-

I think I may have resolved this by using a retry handler and better connection 
reuse (POST and DELETE use the same connection) in NIFI-5275

[https://github.com/apache/nifi/pull/2796]

 

> PostHTTP does not route to failure in case of Connection failure
> 
>
> Key: NIFI-1336
> URL: https://issues.apache.org/jira/browse/NIFI-1336
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Andre F de Miranda
>Priority: Major
>
> When unable to communicate with a remote NiFi instance, PostHTTP continually 
> rolls back the session (without penalizing) instead of routing to failure. 
> This results in filling the logs with messages like:
> 2015-12-29 14:09:02,023 WARN [Timer-Driven Process Thread-11] 
> o.a.nifi.processors.standard.PostHTTP 
> PostHTTP[id=015d5ace-3aa3-3e51-8aae-4776d3bd3897] Failed to delete Hold that 
> destination placed on 
> [StandardFlowFileRecord[uuid=220fbcac-d84e-41ed-9cc7-6e558ea207ee,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1451397424256-4, container=pub1, 
> section=4], offset=52418, 
> length=19672005],offset=0,name=GeoLite2-City.mmdb.gz,size=19672005]] due to 
> org.apache.http.conn.HttpHostConnectException: Connect to processing-3:5000 
> [processing-3/172.172.172.172] failed: Connection refused: 
> org.apache.http.conn.HttpHostConnectException: Connect to processing-3:5000 
> [processing-3.demo.onyara.com/172.172.172.172] failed: Connection refused



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5275) PostHTTP - Hung connections and zero reuse of existing connections

2018-06-14 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513003#comment-16513003
 ] 

Michael Moser commented on NIFI-5275:
-

I spent a good bit of time testing this, and I learned that normal connections 
were reused properly by the connection pool, but HTTPS connections were *not* 
being reused.

> PostHTTP - Hung connections and zero reuse of existing connections
> --
>
> Key: NIFI-5275
> URL: https://issues.apache.org/jira/browse/NIFI-5275
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: Steven Youtsey
>Assignee: Michael Moser
>Priority: Major
>
> Connection setups, the HEAD request, and the DELETE request do not have any 
> timeout associated with them. When the remote server goes sideways, these 
> actions will wait indefinitely and appear as being hung. See 
> https://issues.apache.org/jira/browse/HTTPCLIENT-1892 for an explanation as 
> to why the initial connection setups are not timing out.
> Connections, though pooled, are not being re-used. A new connection is 
> established for every POST. This creates a burden on highly loaded remote 
> listener servers. Verified by both netstat and turning on Debug for 
> org.apache.http.impl.conn.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (NIFI-5275) PostHTTP - Hung connections and zero reuse of existing connections

2018-06-08 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser reassigned NIFI-5275:
---

Assignee: Michael Moser

> PostHTTP - Hung connections and zero reuse of existing connections
> --
>
> Key: NIFI-5275
> URL: https://issues.apache.org/jira/browse/NIFI-5275
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: Steven Youtsey
>Assignee: Michael Moser
>Priority: Major
>
> Connection setups, the HEAD request, and the DELETE request do not have any 
> timeout associated with them. When the remote server goes sideways, these 
> actions will wait indefinitely and appear as being hung. See 
> https://issues.apache.org/jira/browse/HTTPCLIENT-1892 for an explanation as 
> to why the initial connection setups are not timing out.
> Connections, though pooled, are not being re-used. A new connection is 
> established for every POST. This creates a burden on highly loaded remote 
> listener servers. Verified by both netstat and turning on Debug for 
> org.apache.http.impl.conn.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5274) ReplaceText can produce StackOverflowError which causes admin yield

2018-06-06 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-5274:

Status: Patch Available  (was: Open)

> ReplaceText can produce StackOverflowError which causes admin yield
> ---
>
> Key: NIFI-5274
> URL: https://issues.apache.org/jira/browse/NIFI-5274
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: Michael Moser
>Assignee: Michael Moser
>Priority: Major
>
> Regex Replace mode can easily produce StackOverflowError. Certain regular 
> expressions are implemented using recursion, which when used on large input 
> text can cause StackOverflowError.  This causes the ReplaceText processor to 
> rollback and admin yield, which causes the input flowfile to get stuck in the 
> input queue.
> We should be able to catch this condition and route the flowfile to failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5274) ReplaceText can produce StackOverflowError which causes admin yield

2018-06-06 Thread Michael Moser (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16503756#comment-16503756
 ] 

Michael Moser commented on NIFI-5274:
-

Haha, no [~ottobackwards] you didn't introduce this.  I was in the middle of 
fixing it when your PR came in.  So I reviewed yours to get it committed before 
I put up a PR for this one!

> ReplaceText can produce StackOverflowError which causes admin yield
> ---
>
> Key: NIFI-5274
> URL: https://issues.apache.org/jira/browse/NIFI-5274
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.6.0
>Reporter: Michael Moser
>Assignee: Michael Moser
>Priority: Major
>
> Regex Replace mode can easily produce StackOverflowError. Certain regular 
> expressions are implemented using recursion, which when used on large input 
> text can cause StackOverflowError.  This causes the ReplaceText processor to 
> rollback and admin yield, which causes the input flowfile to get stuck in the 
> input queue.
> We should be able to catch this condition and route the flowfile to failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-5274) ReplaceText can produce StackOverflowError which causes admin yield

2018-06-06 Thread Michael Moser (JIRA)
Michael Moser created NIFI-5274:
---

 Summary: ReplaceText can produce StackOverflowError which causes 
admin yield
 Key: NIFI-5274
 URL: https://issues.apache.org/jira/browse/NIFI-5274
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.6.0
Reporter: Michael Moser
Assignee: Michael Moser


Regex Replace mode can easily produce StackOverflowError. Certain regular 
expressions are implemented using recursion, which when used on large input 
text can cause StackOverflowError.  This causes the ReplaceText processor to 
rollback and admin yield, which causes the input flowfile to get stuck in the 
input queue.

We should be able to catch this condition and route the flowfile to failure.
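
A minimal illustration of the general approach (not the exact patch): StackOverflowError is an Error, but it can be caught around the regex evaluation so the FlowFile can be routed to failure instead of rolling back. Whether this particular pattern actually overflows depends on the thread stack size.
{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CatchOverflowExample {
    public static void main(String[] args) {
        // Alternation inside a repeated group is matched recursively by java.util.regex,
        // so a sufficiently large input may exhaust the thread stack.
        String input = new String(new char[50_000]).replace("\0", "ab");
        Matcher matcher = Pattern.compile("(a|b)*").matcher(input);
        try {
            String result = matcher.replaceAll("x");
            System.out.println("Replaced, length=" + result.length());
        } catch (StackOverflowError e) {
            // In ReplaceText this is where the FlowFile would be routed to 'failure'
            // instead of rolling back and admin-yielding.
            System.out.println("Regex evaluation overflowed the stack; routing to failure");
        }
    }
}
{code}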



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-4272) ReplaceText processor does not properly iterate multiple replacement values when EL is used

2018-06-06 Thread Michael Moser (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-4272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser resolved NIFI-4272.
-
   Resolution: Fixed
Fix Version/s: 1.7.0

> ReplaceText processor does not properly iterate multiple replacement values 
> when EL is used
> ---
>
> Key: NIFI-4272
> URL: https://issues.apache.org/jira/browse/NIFI-4272
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.1.0, 1.2.0, 1.3.0
>Reporter: Matthew Clarke
>Assignee: Otto Fowler
>Priority: Major
> Fix For: 1.7.0
>
>
> I am using the ReplaceText processor to take a string input (example:   
> {"name":"Smith","middle":"nifi","firstname":"John"} ) and change all the 
> field names to all uppercase.
> Using above input as an example, I expect output like 
> {"NAME":"Smith","MIDDLE":"nifi","FIRSTNAME":"John"}
> I expect I should be able to do this with ReplaceText processor; however, I 
> see some unexpected behavior:
> ---
> Test 1:  (uses EL in the replacement value property)
> Search value:  \"([a-z]+?)\":\"(.+?)\"
> Replacement Value: \"*${'$1':toUpper()}*":\"$2\"
> Result: {"NAME":"Smith","NAME":"nifi","NAME":"John"}
> ---
> Test 2:  (Does not use EL in the replacement Value property)
> Search value:  \"([a-z]+?)\":\"(.+?)\"
> Replacement Value: \"new$1":\"$2\"
> Result: {"newname":"Smith","newmiddle":"nifi","newfirstname":"John"}
> 
> As you can see, if I use a NiFi Expression Language statement in the 
> Replacement Value property it no longer iterates as expected through the 
> various $1 captured values. It repeatedly uses the EL result from the first 
> EL evaluation in every iteration while $2 correctly iterates through the 
> search values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-3492) Allow configuration of default back pressure

2018-05-24 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser resolved NIFI-3492.
-
Resolution: Duplicate

Fixed in NIFI-3599

> Allow configuration of default back pressure
> 
>
> Key: NIFI-3492
> URL: https://issues.apache.org/jira/browse/NIFI-3492
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.1.1
>Reporter: Brandon DeVries
>Priority: Major
>
> NiFi 1.x sets a default back pressure of 10K files / 1 GB (hardcoded in 
> StandardFlowFileQueue) instead of the "unlimited" default in 0.x. This is 
> better in a lot of ways... however those values are potentially a bit 
> arbitrary, and not appropriate for every system.
> These values should be configurable, and exposed in nifi.properties.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5167) Reporting Task Controller Service UI failure

2018-05-08 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467506#comment-16467506
 ] 

Michael Moser commented on NIFI-5167:
-

More information ... this produces a JavaScript error that reads:

Uncaught TypeError: Cannot read property 'reload' of undefined
  at Object. (nf-canvas-all.js?1.6.0:53)

> Reporting Task Controller Service UI failure
> 
>
> Key: NIFI-5167
> URL: https://issues.apache.org/jira/browse/NIFI-5167
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.6.0
>Reporter: Mark Bean
>Priority: Major
>
> The UI for Controller Services related to a Reporting Task can become 
> non-responsive. Test case: Create a Reporting Task which requires a 
> Controller Service (e.g. SiteToSiteBulletinReportingTask.) When configuring 
> the properties for the Reporting Task, select the Controller Service property 
> (e.g. SSL Context Service) and choose "create new service". Then, configure 
> the resultant Controller Service (e.g. StandardRestrictedSSLContextService.) 
> When choosing "Apply", the properties popup window is not dismissed. Yet, the 
> properties appear to apply successfully because on subsequent configuration, 
> the properties remain as previously set. Also, a message in the app.log 
> indicates:
> "INFO [Flow Service Tasks Thread-2] o.a.nifi.controller.StandardFlowService 
> Saved flow controller org.apache.nifi.controller.FlowController@443b57d7 // 
> Another save pending = false"
> The Controller Service configuration window should dismiss appropriately when 
> "Apply" button is selected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-4658) MergeContent Max Number of Entries resetting to default value

2018-04-02 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser resolved NIFI-4658.
-
   Resolution: Fixed
Fix Version/s: 1.6.0

> MergeContent Max Number of Entries resetting to default value
> -
>
> Key: NIFI-4658
> URL: https://issues.apache.org/jira/browse/NIFI-4658
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Brian Ghigiarelli
>Assignee: Mark Bean
>Priority: Major
> Fix For: 1.6.0
>
>
> Prior to and including 1.4.0, the MergeContent processor supports a property 
> called "Maximum Number of Entries". It has a default value of 1,000. Prior to 
> 1.4.0, and according to the description of this property, if the property is 
> not set there will be no maximum number of files to include in a bundle.
> However, with the release of 1.4.0, if you clear the value of this property 
> in order to have an unlimited number of files in the bundle and "Apply" the 
> change, the next time that you open the configuration of the processor, it 
> will again be set to the default value of 1,000. The expectation is that the 
> cleared value will remain cleared, maintaining a configuration for an 
> unlimited number of files in a merge bundle.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-4950) MergeContent: Defragment can improperly reassemble

2018-03-16 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser resolved NIFI-4950.
-
   Resolution: Fixed
Fix Version/s: 1.6.0

> MergeContent: Defragment can improperly reassemble
> --
>
> Key: NIFI-4950
> URL: https://issues.apache.org/jira/browse/NIFI-4950
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
>Reporter: Brandon DeVries
>Assignee: Mark Bean
>Priority: Minor
> Fix For: 1.6.0
>
>
> In Defragment mode, MergeContent can improperly reassemble the pieces of a 
> split file.  I understand this was previously discussed in NIFI-378, and the 
> outcome was to update the documentation for fragment.index [1]: 
> {quote} Applicable only if the  property is set to 
> Defragment. This attribute indicates the order in which the fragments should 
> be assembled. This attribute must be present on all FlowFiles when using the 
> Defragment Merge Strategy and must be a unique (i.e., unique across all 
> FlowFiles that have the same value for the "fragment.identifier" attribute) 
> integer between 0 and the value of the fragment.count attribute. If two or 
> more FlowFiles have the same value for the "fragment.identifier" attribute 
> and the same value for the "fragment.index" attribute, the behavior of this 
> Processor is undefined. 
> {quote}
> I believe this could (and probably should) be improved upon.  Specifically, 
> the discussion around NIFI-378 focused on the "improper" use of MergeContent, 
> in using the same fragment.identifier to "pair up" files.  The situation I've 
> encountered isn't really unusual in any way...
> I have a file being split and sent via PostHTTP to another NiFi instance.  
> If something "goes wrong", the sending NiFi may not get an acknowledgement of 
> success even if the file made it to the receiving NiFi.  It then sends the 
> segment again.  NiFi favors duplication over loss, so this is not unexpected. 
>  However, I now have a file broken into X fragments arriving on the other 
> side as X+1 (or more).  The reassembly may work... or both duplicates may be 
> chosen, and result in an incorrectly recreated file.
> To satisfy the contract as it exists, you would need to use a DetectDuplicate 
> before the MergeContent to filter these out.  However, that could potentially 
> incur a great deal of overhead.  In contrast, simply checking that there are no 
> duplicate fragment id's in a bin should be relatively straightforward.  How 
> to handle duplicates is a legitimate question... are they ignored, or are 
> they discarded (if they're actually the same)?  If the duplicate id's aren't 
> identical, what is the behavior? Personally, I would say if you have actual 
> duplicates, drop one and continue with the merge... if you have unequal 
> "duplicates", fail the bin.  But there's room for discussion there.
> The point is, in this circumstance it is very easy for a user to do a very 
> reasonable thing and end up with a corrupt file for reasons that are somewhat 
> esoteric.  Then, we would need to explain to them why "defragment" doesn't 
> actually defragment, but just kind of sorts a bin of matching things.  I 
> think we can do better than that.
>  [1] 
> [http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.5.0/org.apache.nifi.processors.standard.MergeContent/index.html]
>   
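
A sketch of the duplicate check suggested above: before merging a bin, verify that no two FlowFiles carry the same fragment.index (the attribute names follow the standard fragment.* convention; representing the bin as a list of attribute maps is a simplification for illustration):
{code:java}
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class FragmentIndexCheck {

    // Returns true if every FlowFile (represented here by its attribute map)
    // carries a distinct, non-null fragment.index value.
    public static boolean hasUniqueFragmentIndexes(List<Map<String, String>> binnedAttributes) {
        Set<String> seen = new HashSet<>();
        for (Map<String, String> attributes : binnedAttributes) {
            String index = attributes.get("fragment.index");
            if (index == null || !seen.add(index)) {
                return false; // missing or duplicate index -> fail (or specially handle) the bin
            }
        }
        return true;
    }
}
{code}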



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (NIFI-4950) MergeContent: Defragment can improperly reassemble

2018-03-16 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser reassigned NIFI-4950:
---

Assignee: Mark Bean

> MergeContent: Defragment can improperly reassemble
> --
>
> Key: NIFI-4950
> URL: https://issues.apache.org/jira/browse/NIFI-4950
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
>Reporter: Brandon DeVries
>Assignee: Mark Bean
>Priority: Minor
> Fix For: 1.6.0
>
>
> In Defragment mode, MergeContent can improperly reassemble the pieces of a 
> split file.  I understand this was previously discussed in NIFI-378, and the 
> outcome was to update the documentation for fragment.index [1]: 
> {quote} Applicable only if the  property is set to 
> Defragment. This attribute indicates the order in which the fragments should 
> be assembled. This attribute must be present on all FlowFiles when using the 
> Defragment Merge Strategy and must be a unique (i.e., unique across all 
> FlowFiles that have the same value for the "fragment.identifier" attribute) 
> integer between 0 and the value of the fragment.count attribute. If two or 
> more FlowFiles have the same value for the "fragment.identifier" attribute 
> and the same value for the "fragment.index" attribute, the behavior of this 
> Processor is undefined. 
> {quote}
> I believe this could (and probably should) be improved upon.  Specifically, 
> the discussion around NIFI-378 focused on the "improper" use of MergeContent, 
> in using the same fragment.identifier to "pair up" files.  The situation I've 
> encountered isn't really unusual in any way...
> I have a file being split and sent via PostHTTP to another NiFi instance.  
> If something "goes wrong", the sending NiFi may not get an acknowledgement of 
> success even if the file made it to the receiving NiFi.  It then sends the 
> segment again.  NiFi favors duplication over loss, so this is not unexpected. 
>  However, I now have a file broken into X fragments arriving on the other 
> side as X+1 (or more).  The reassembly may work... or both duplicates may be 
> chosen, and result in an incorrectly recreated file.
> To satisfy the contract as it exists, you would need to use a DetectDuplicate 
> before the MergeContent to filter these out.  However, that could potentially 
> incur a great deal of overhead.  In contrast, simply checking that there are no 
> duplicate fragment id's in a bin should be relatively straightforward.  How 
> to handle duplicates is a legitimate question... are they ignored, or are 
> they discarded (if they're actually the same)?  If the duplicate id's aren't 
> identical, what is the behavior? Personally, I would say if you have actual 
> duplicates, drop one and continue with the merge... if you have unequal 
> "duplicates", fail the bin.  But there's room for discussion there.
> The point is, in this circumstance it is very easy for a user to do a very 
> reasonable thing and end up with a corrupt file for reasons that are somewhat 
> esoteric.  Then, we would need to explain to them why "defragment" doesn't 
> actually defragment, but just kind of sorts a bin of matching things.  I 
> think we can do better than that.
>  [1] 
> [http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.5.0/org.apache.nifi.processors.standard.MergeContent/index.html]
>   



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-3039) Provenance Repository - Fix PurgeOldEvent and Rollover Size Limits

2018-03-09 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-3039:

   Resolution: Fixed
Fix Version/s: 1.6.0
   Status: Resolved  (was: Patch Available)

> Provenance Repository - Fix PurgeOldEvent and Rollover Size Limits
> --
>
> Key: NIFI-3039
> URL: https://issues.apache.org/jira/browse/NIFI-3039
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.0.0, 1.1.0, 0.8.0, 0.7.1
>Reporter: Joe Skora
>Assignee: Joe Skora
>Priority: Major
> Fix For: 1.6.0
>
>
> Current {{purgeOldEvents}} logic triggers cleanup when 90% of space is used, 
> but it only removes one file if usage is under 100%, causing thrashing around 
> 100% usage.  In testing, cleaning up down to 70% usage after hitting 90% makes 
> the system run more smoothly.
> Also, {{rollover}} will not trigger cleanup unless 110% of the allowed space is 
> in use; changing this to 100% also makes a difference in testing.
> Before these changes, a test system that generates huge amounts of provenance 
> would become unstable and stop processing provenance until restarted.  With 
> these changes, the system consistently recovers even under heavy load.
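
The thresholds described above amount to simple hysteresis; roughly (illustrative numbers and names only, not the repository code):
{code:java}
public class PurgeThresholds {
    private static final double TRIGGER_RATIO = 0.90; // start purging at 90% usage
    private static final double TARGET_RATIO  = 0.70; // keep purging until usage drops to 70%

    // Returns how many bytes must be reclaimed, or 0 if no purge is needed.
    public static long bytesToPurge(long usedBytes, long maxBytes) {
        if (usedBytes < (long) (maxBytes * TRIGGER_RATIO)) {
            return 0L;
        }
        return usedBytes - (long) (maxBytes * TARGET_RATIO);
    }
}
{code}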



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-3599) Add nifi.properties value to globally set the default backpressure size threshold for each connection

2018-02-27 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-3599:

Status: Patch Available  (was: Open)

> Add nifi.properties value to globally set the default backpressure size 
> threshold for each connection
> -
>
> Key: NIFI-3599
> URL: https://issues.apache.org/jira/browse/NIFI-3599
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jeremy Dyer
>Assignee: Michael Moser
>Priority: Major
>
> By default each new connection added to the workflow canvas will have a 
> default backpressure size threshold of 10,000 objects. While the threshold 
> can be changed on a connection level it would be convenient to have a global 
> mechanism for setting that value to something other than 10,000. This 
> enhancement would add a property to nifi.properties that would allow for this 
> threshold to be set globally unless otherwise overridden at the connection 
> level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (NIFI-3599) Add nifi.properties value to globally set the default backpressure size threshold for each connection

2018-02-27 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser reassigned NIFI-3599:
---

Assignee: Michael Moser  (was: Jeremy Dyer)

> Add nifi.properties value to globally set the default backpressure size 
> threshold for each connection
> -
>
> Key: NIFI-3599
> URL: https://issues.apache.org/jira/browse/NIFI-3599
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jeremy Dyer
>Assignee: Michael Moser
>Priority: Major
>
> By default each new connection added to the workflow canvas will have a 
> default backpressure size threshold of 10,000 objects. While the threshold 
> can be changed on a connection level it would be convenient to have a global 
> mechanism for setting that value to something other than 10,000. This 
> enhancement would add a property to nifi.properties that would allow for this 
> threshold to be set globally unless otherwise overridden at the connection 
> level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-3599) Add nifi.properties value to globally set the default backpressure size threshold for each connection

2018-02-23 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374892#comment-16374892
 ] 

Michael Moser commented on NIFI-3599:
-

Thank you both for the feedback.  I agree showing the actual values in those 
fields is best, and I'll look into modifying the /nifi-api/flow/about endpoint 
response to provide them.

> Add nifi.properties value to globally set the default backpressure size 
> threshold for each connection
> -
>
> Key: NIFI-3599
> URL: https://issues.apache.org/jira/browse/NIFI-3599
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jeremy Dyer
>Assignee: Jeremy Dyer
>Priority: Major
>
> By default each new connection added to the workflow canvas will have a 
> default backpressure size threshold of 10,000 objects. While the threshold 
> can be changed on a connection level it would be convenient to have a global 
> mechanism for setting that value to something other than 10,000. This 
> enhancement would add a property to nifi.properties that would allow for this 
> threshold to be set globally unless otherwise overridden at the connection 
> level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-3599) Add nifi.properties value to globally set the default backpressure size threshold for each connection

2018-02-23 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374737#comment-16374737
 ] 

Michael Moser commented on NIFI-3599:
-

I have an approach to resolve this, but I would like to get [~mcgilman] and/or 
[~scottyaslan] to comment, because there are UI/UX implications.

It's fairly easy to move default back pressure Object and Data Size threshold 
settings from server-side code (StandardFlowFileQueue.java) to nifi.properties 
and make the back end use them.  However, the UI also has default back pressure 
set in the nf-connection-configuration.js code.  The UI does not seem to have 
access to nifi.properties in order to read settings from there.

When a new connection is drawn, I propose setting these two back pressure 
fields to 'default' in the UI, or leave them empty.  If a user doesn't change 
them, the JS would send to the server a null value in the JSON for these two 
fields.  The server would recognize this and use the nifi.properties default 
back pressure settings.  If a user makes changes to these fields, the JSON sent 
to the server would contain those changes.

I tested this approach and it works.  I'll be happy to submit a PR.  But is 
this an acceptable approach?  Thanks for feedback.
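
For illustration, the end state described here would be a pair of entries in nifi.properties that the server falls back to when the UI sends no explicit thresholds; the property names below are placeholders, not necessarily the final ones:
{code}
# default back pressure applied to newly created connections
# when the connection itself does not specify thresholds
nifi.queue.backpressure.count=10000
nifi.queue.backpressure.size=1 GB
{code}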

> Add nifi.properties value to globally set the default backpressure size 
> threshold for each connection
> -
>
> Key: NIFI-3599
> URL: https://issues.apache.org/jira/browse/NIFI-3599
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jeremy Dyer
>Assignee: Jeremy Dyer
>Priority: Major
>
> By default each new connection added to the workflow canvas will have a 
> default backpressure size threshold of 10,000 objects. While the threshold 
> can be changed on a connection level it would be convenient to have a global 
> mechanism for setting that value to something other than 10,000. This 
> enhancement would add a property to nifi.properties that would allow for this 
> threshold to be set globally unless otherwise overridden at the connection 
> level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4881) Provide TLS "auto-secure" feature

2018-02-16 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16367423#comment-16367423
 ] 

Michael Moser commented on NIFI-4881:
-

I have just a couple of observations that I wanted to share.

I feel that security definitions should not change in NiFi without an explicit 
administrator action.  Without that, I could see a definitions change causing 
clients to lose access to NiFi, and the administrator would not know why, 
because in their mind they had changed nothing on the system.

I feel that once security definitions are in place, NiFi should be able to 
startup and run properly even when access to an HTTPS endpoint where those 
definitions are updated is not available.

> Provide TLS "auto-secure" feature
> -
>
> Key: NIFI-4881
> URL: https://issues.apache.org/jira/browse/NIFI-4881
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Configuration Management, Core Framework
>Affects Versions: 1.5.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: security, tls
>
> As documented in the Apache NiFi Wiki Feature Roadmap (Security), I have 
> wanted to implement for some time a feature where the administrator of a NiFi 
> instance does not have to know the intricate details of TLS configuration in 
> order to deploy a secure instance. What I propose is the following:
>  * The administrator can set the TLS security settings to *high*, *medium*, 
> and *low*
>  * These settings have accompanying descriptions explaining that "*high* 
> means most secure (with lower backwards compatibility)", "*medium* tries to 
> strike a balance between security and compatibility", and "*low* allows for 
> more widespread legacy compatibility with less emphasis on security". 
>  * The cipher suite lists and protocol versions for each would be downloaded 
> from the [Mozilla TLS 
> Observatory|https://wiki.mozilla.org/Security/Server_Side_TLS#Recommended_configurations],
>  which publishes updated definitions for each of "modern", "intermediate", 
> and "legacy" options at initial startup and at regular intervals. 
>  * For deployments in an airgapped or other environment without desired 
> connectivity to this service, or where the source is preferred to be 
> controlled by an organizational entity, an alternate service endpoint can be 
> configured (for proof of concept, this could even be the NiFi instance itself 
> reading a file from disk and returning JSON over an HTTP endpoint). 
>  * When a new definition is received or selected, the application framework 
> would either:
>  ** Restart the Jetty server automatically in a pre-configured timeframe
>  ** Wait for all queues to empty and then restart
>  ** Provide a visual alert (bulletin, etc.) that new definitions were 
> received and a manual restart is required
>  * This setting could be set in a variety of ways:
>  ** Directly in {{nifi.properties}} before the first application launch 
>  ** Given as an argument to an enhanced TLS Toolkit (i.e. 
> {{./bin/tls-toolkit.sh standalone ... -S high}}) and then the resulting 
> {{nifi.properties}} placed in the correct location
>  ** Through a UI configuration option (this would need to be restricted to 
> the appropriate NiFi permissions and would require a Jetty server restart)
>  * The definitions would "grow" with the ecosystem (i.e. as a new 
> vulnerability is discovered or a new protocol version/cipher suite is made 
> available, it is automatically added/removed from the definition, thus 
> continually improving the security stance of the application without 
> requiring active monitoring and input from an administrator)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4855) The layout of NiFi API document is broken

2018-02-14 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16364855#comment-16364855
 ] 

Michael Moser commented on NIFI-4855:
-

Hi [~tasanuma0829], would you mind sharing which browser and version was used 
to capture the screenshot?  Thank you.

> The layout of NiFi API document is broken
> -
>
> Key: NIFI-4855
> URL: https://issues.apache.org/jira/browse/NIFI-4855
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.5.0
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: broken_ui.png
>
>
> This is reported by Hiroaki Nawa.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4699) PostHTTP: modify to use FlowFileFilter

2018-02-08 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-4699:

   Resolution: Fixed
Fix Version/s: 1.6.0
   Status: Resolved  (was: Patch Available)

> PostHTTP: modify to use FlowFileFilter
> --
>
> Key: NIFI-4699
> URL: https://issues.apache.org/jira/browse/NIFI-4699
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.4.0
>Reporter: Brandon DeVries
>Assignee: Michael Moser
>Priority: Minor
> Fix For: 1.6.0
>
> Attachments: PostHTTP-onTrigger-mod.java
>
>
> PostHTTP does batching of FlowFiles, but only if the "batch" is contiguous on 
> the queue.  Modify to use a FlowFileFilter instead.  See attached for 
> proposed changes to the onTrigger() method.
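
For context, ProcessSession.get(FlowFileFilter) lets a processor pull a non-contiguous batch from the queue; a minimal sketch of a filter that accepts up to a fixed number of FlowFiles sharing a destination (simplified, not the PostHTTP implementation; the "target.url" attribute is hypothetical):
{code:java}
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.FlowFileFilter;

public class BatchFilterExample {

    // Accepts FlowFiles destined for the given URL until the batch is full,
    // skipping (but not terminating on) FlowFiles for other destinations.
    // usage inside onTrigger: List<FlowFile> batch = session.get(forDestination(url, 50));
    public static FlowFileFilter forDestination(String url, int maxBatchSize) {
        AtomicInteger accepted = new AtomicInteger(0);
        return flowFile -> {
            if (!url.equals(flowFile.getAttribute("target.url"))) {   // hypothetical attribute
                return FlowFileFilter.FlowFileFilterResult.REJECT_AND_CONTINUE;
            }
            return accepted.incrementAndGet() < maxBatchSize
                    ? FlowFileFilter.FlowFileFilterResult.ACCEPT_AND_CONTINUE
                    : FlowFileFilter.FlowFileFilterResult.ACCEPT_AND_TERMINATE;
        };
    }
}
{code}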



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4699) PostHTTP: modify to use FlowFileFilter

2018-02-08 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16357428#comment-16357428
 ] 

Michael Moser commented on NIFI-4699:
-

Thanks [~devriesb], with your review and +1, I pushed to master.

> PostHTTP: modify to use FlowFileFilter
> --
>
> Key: NIFI-4699
> URL: https://issues.apache.org/jira/browse/NIFI-4699
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.4.0
>Reporter: Brandon DeVries
>Assignee: Michael Moser
>Priority: Minor
> Fix For: 1.6.0
>
> Attachments: PostHTTP-onTrigger-mod.java
>
>
> PostHTTP does batching of FlowFiles, but only if the "batch" is contiguous on 
> the queue.  Modify to use a FlowFileFilter instead.  See attached for 
> proposed changes to the onTrigger() method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4700) PostHTTP: close client

2018-02-05 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16353038#comment-16353038
 ] 

Michael Moser commented on NIFI-4700:
-

OK, thanks.  [~m-hogue] if you also agree, do you mind closing PR 
[https://github.com/apache/nifi/pull/2434]?

> PostHTTP: close client
> --
>
> Key: NIFI-4700
> URL: https://issues.apache.org/jira/browse/NIFI-4700
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Brandon DeVries
>Assignee: Michael Hogue
>Priority: Major
> Fix For: 1.6.0
>
>
> In PostHTTP, the CloseableHttpClient never actually appears to be closed...
> Additionally, we could leverage CloseableHttpResponse to close responses.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4700) PostHTTP: close client

2018-02-05 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16352895#comment-16352895
 ] 

Michael Moser commented on NIFI-4700:
-

I also ran the PostHTTP to ListenHTTP test while monitoring with jvisualvm.  I 
didn't see any problems with the HttpClient leaking objects or exerting any 
unusual pressure on memory usage.

So if you concur [~devriesb] then I suggest we close this ticket.

> PostHTTP: close client
> --
>
> Key: NIFI-4700
> URL: https://issues.apache.org/jira/browse/NIFI-4700
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Brandon DeVries
>Assignee: Michael Hogue
>Priority: Major
> Fix For: 1.6.0
>
>
> In PostHTTP, the CloseableHttpClient never actually appears to be closed...
> Additionally, we could leverage CloseableHttpResponse to close responses.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4700) PostHTTP: close client

2018-02-02 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351023#comment-16351023
 ] 

Michael Moser commented on NIFI-4700:
-

Hi [~m-hogue] I did some testing of PostHTTP to see if a problem is caused by 
not closing the CloseableHttpClient.  There is some evidence that closing 
CloseableHttpClient, in certain conditions, will shut down the connection pool 
... here is one 
[EXAMPLE|https://stackoverflow.com/questions/25889925/apache-poolinghttpclientconnectionmanager-throwing-illegal-state-exception].
 I think if you want to manage closing the clients yourself, as of 
HttpComponents version 4.4, you need to call 
HttpClientBuilder.setConnectionManagerShared(true).
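
A sketch of that combination with HttpComponents 4.4+ (illustrative only): a shared pooling manager plus setConnectionManagerShared(true), so closing an individual client does not tear down the pool:
{code:java}
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class SharedPoolExample {
    private static final PoolingHttpClientConnectionManager POOL =
            new PoolingHttpClientConnectionManager();

    public static CloseableHttpClient newClient() {
        return HttpClients.custom()
                .setConnectionManager(POOL)
                .setConnectionManagerShared(true) // client.close() leaves the pool running
                .build();
    }
}
{code}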

During my testing of a PostHTTP with 2 threads sending files to a ListenHTTP, I 
observed that 2 connections were opened and were reused.  netstat did show that 
sometimes, a connection would be closed and a new one opened, so that was a bit 
odd.  But neither lsof nor netstat show any connection leak.  Even when I 
stopped and started both PostHTTP and ListenHttp, netstat and lsof showed 
connections closing, going away, and new ones appearing as needed.

So I'm a bit skeptical that there is a problem here.

> PostHTTP: close client
> --
>
> Key: NIFI-4700
> URL: https://issues.apache.org/jira/browse/NIFI-4700
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Brandon DeVries
>Assignee: Michael Hogue
>Priority: Major
> Fix For: 1.6.0
>
>
> In PostHTTP, the CloseableHttpClient never actually appears to be closed...
> Additionally, we could leverage CloseableHttpResponse to close responses.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-2630) Allow PublishJMS processor to create TextMessages

2018-02-01 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349370#comment-16349370
 ] 

Michael Moser commented on NIFI-2630:
-

This patch no longer applies cleanly, so I took the liberty of adding a commit 
on top of the patch to clean it up.  I will submit a PR after NIFI-4834 is 
complete, because that one touches the same files as this one.

> Allow PublishJMS processor to create TextMessages
> -
>
> Key: NIFI-2630
> URL: https://issues.apache.org/jira/browse/NIFI-2630
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.6.0
>Reporter: James Anderson
>Assignee: Michael Moser
>Priority: Minor
>  Labels: patch
> Attachments: 
> 0001-NIFI-2630-Allow-PublishJMS-processor-to-create-TextM.patch
>
>
> Create a new configuration option for PublishJMS that allows the processor to 
> be configured to emit instances of TextMessages as well as BytesMessage.
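
A sketch of the difference in plain JMS terms (not the processor code): the proposed option would switch between the two factory methods below based on configuration; the option name is hypothetical:
{code:java}
import javax.jms.BytesMessage;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;
import javax.jms.TextMessage;
import java.nio.charset.StandardCharsets;

public class MessageTypeExample {

    // "messageType" stands in for the new PublishJMS configuration option (hypothetical name).
    public static Message build(Session session, byte[] content, String messageType)
            throws JMSException {
        if ("text".equalsIgnoreCase(messageType)) {
            TextMessage message = session.createTextMessage();
            message.setText(new String(content, StandardCharsets.UTF_8));
            return message;
        }
        BytesMessage message = session.createBytesMessage();
        message.writeBytes(content);
        return message;
    }
}
{code}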



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (NIFI-2630) Allow PublishJMS processor to create TextMessages

2018-02-01 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser reassigned NIFI-2630:
---

Assignee: Michael Moser

> Allow PublishJMS processor to create TextMessages
> -
>
> Key: NIFI-2630
> URL: https://issues.apache.org/jira/browse/NIFI-2630
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.6.0
>Reporter: James Anderson
>Assignee: Michael Moser
>Priority: Minor
>  Labels: patch
> Attachments: 
> 0001-NIFI-2630-Allow-PublishJMS-processor-to-create-TextM.patch
>
>
> Create a new configuration option for PublishJMS that allows the processor to 
> be configured to emit instances of TextMessages as well as BytesMessage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-2630) Allow PublishJMS processor to create TextMessages

2018-02-01 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-2630:

Fix Version/s: (was: 0.6.0)

> Allow PublishJMS processor to create TextMessages
> -
>
> Key: NIFI-2630
> URL: https://issues.apache.org/jira/browse/NIFI-2630
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.6.0
>Reporter: James Anderson
>Assignee: Michael Moser
>Priority: Minor
>  Labels: patch
> Attachments: 
> 0001-NIFI-2630-Allow-PublishJMS-processor-to-create-TextM.patch
>
>
> Create a new configuration option for PublishJMS that allows the processor to 
> be configured to emit instances of TextMessages as well as BytesMessage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4699) PostHTTP: modify to use FlowFileFilter

2018-01-18 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-4699:

Status: Patch Available  (was: Open)

> PostHTTP: modify to use FlowFileFilter
> --
>
> Key: NIFI-4699
> URL: https://issues.apache.org/jira/browse/NIFI-4699
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.4.0
>Reporter: Brandon DeVries
>Assignee: Michael Moser
>Priority: Minor
> Attachments: PostHTTP-onTrigger-mod.java
>
>
> PostHTTP does batching of FlowFiles, but only if the "batch" is contiguous on 
> the queue.  Modify to use a FlowFileFilter instead.  See attached for 
> proposed changes to the onTrigger() method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (NIFI-4699) PostHTTP: modify to use FlowFileFilter

2018-01-16 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser reassigned NIFI-4699:
---

Assignee: Michael Moser

> PostHTTP: modify to use FlowFileFilter
> --
>
> Key: NIFI-4699
> URL: https://issues.apache.org/jira/browse/NIFI-4699
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.4.0
>Reporter: Brandon DeVries
>Assignee: Michael Moser
>Priority: Minor
> Attachments: PostHTTP-onTrigger-mod.java
>
>
> PostHTTP does batching of FlowFiles, but only if the "batch" is contiguous on 
> the queue.  Modify to use a FlowFileFilter instead.  See attached for 
> proposed changes to the onTrigger() method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4697) PostHTTP: correct documentation

2018-01-05 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16313962#comment-16313962
 ] 

Michael Moser commented on NIFI-4697:
-

Hi [~m-hogue] I noticed that you assigned this ticket to yourself, and I hope 
you don't mind that I went ahead and put up a PR for it, since it was simple.  
I added a few words about connection pooling, which I thought were useful and 
appropriate.

> PostHTTP: correct documentation
> ---
>
> Key: NIFI-4697
> URL: https://issues.apache.org/jira/browse/NIFI-4697
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.4.0
>Reporter: Brandon DeVries
>Assignee: Michael Hogue
>Priority: Minor
>
> The description of the URL property of PostHTTP says 
> "The URL to POST to. The first part of the URL must be static. However, the 
> path of the URL may be defined using the Attribute Expression Language. "  
> This does not appear to be true.  We should modify to something like "The URL 
> to POST to."
> ...or, someone can point out to me why I'm wrong in saying it's wrong...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4697) PostHTTP: correct documentation

2018-01-05 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-4697:

Status: Patch Available  (was: Open)

> PostHTTP: correct documentation
> ---
>
> Key: NIFI-4697
> URL: https://issues.apache.org/jira/browse/NIFI-4697
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.4.0
>Reporter: Brandon DeVries
>Assignee: Michael Hogue
>Priority: Minor
>
> The description of the URL property of PostHTTP says 
> "The URL to POST to. The first part of the URL must be static. However, the 
> path of the URL may be defined using the Attribute Expression Language. "  
> This does not appear to be true.  We should modify to something like "The URL 
> to POST to."
> ...or, someone can point out to me why I'm wrong in saying it's wrong...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4504) SimpleMapCache/PersistentMapCache: Add removeAndGet and removeByPatternAndGet

2018-01-05 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16313813#comment-16313813
 ] 

Michael Moser commented on NIFI-4504:
-

I'm a bit late, but thank you Koji for reviewing this!

> SimpleMapCache/PersistentMapCache: Add removeAndGet and removeByPatternAndGet
> -
>
> Key: NIFI-4504
> URL: https://issues.apache.org/jira/browse/NIFI-4504
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.4.0
>Reporter: Jon Kessler
>Assignee: Michael Moser
>Priority: Minor
> Fix For: 1.5.0
>
>
> Typical map implementations return the value that was removed when performing 
> a remove. Because you couldn't update the existing remove methods without it 
> being a breaking change I suggest adding new versions of the remove and 
> removeByPattern methods that return the removed value(s).
> These changes should also be applied up the chain to any class that makes use 
> of these classes such as the MapCacheServer and 
> AtomicDistributedMapCacheClient.
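For context, a sketch of the kind of additions being proposed; the method names follow the ticket title, but the generic signatures here are illustrative and do not necessarily match the NiFi cache API:

{code}
import java.io.IOException;
import java.util.Map;

// Illustrative only: remove operations that also return what was removed.
public interface RemoveAndGetSupport<K, V> {

    /** Remove the entry for the given key and return the removed value, or null if no entry existed. */
    V removeAndGet(K key) throws IOException;

    /** Remove all entries whose keys match the regex and return the removed key/value pairs. */
    Map<K, V> removeByPatternAndGet(String regex) throws IOException;
}
{code}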



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4505) MapCache/SimpleMapCache/PersistentMapCache: Add keyset method

2017-11-21 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-4505:

Status: Patch Available  (was: Open)

See PR below, combining changes for both NIFI-4504 and NIFI-4505

https://github.com/apache/nifi/pull/2284


> MapCache/SimpleMapCache/PersistentMapCache: Add keyset method
> -
>
> Key: NIFI-4505
> URL: https://issues.apache.org/jira/browse/NIFI-4505
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.4.0
>Reporter: Jon Kessler
>Assignee: Michael Moser
>Priority: Minor
>
> Suggest adding a keySet method to the MapCache and its implementations, as well 
> as to any client/interface that makes use of a MapCache.
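A minimal illustration of the suggested addition; the generic signature is assumed rather than taken from the actual interface:

{code}
import java.io.IOException;
import java.util.Set;

// Illustrative only: expose the keys currently held by a map cache.
public interface KeySetSupport<K> {
    Set<K> keySet() throws IOException;
}
{code}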



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4504) SimpleMapCache/PersistentMapCache: Add removeAndGet and removeByPatternAndGet

2017-11-21 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-4504:

Status: Patch Available  (was: Open)

> SimpleMapCache/PersistentMapCache: Add removeAndGet and removeByPatternAndGet
> -
>
> Key: NIFI-4504
> URL: https://issues.apache.org/jira/browse/NIFI-4504
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.4.0
>Reporter: Jon Kessler
>Assignee: Michael Moser
>Priority: Minor
>
> Typical map implementations return the value that was removed when performing 
> a remove. Because you couldn't update the existing remove methods without it 
> being a breaking change I suggest adding new versions of the remove and 
> removeByPattern methods that return the removed value(s).
> These changes should also be applied up the chain to any class that makes use 
> of these classes such as the MapCacheServer and 
> AtomicDistributedMapCacheClient.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (NIFI-4504) SimpleMapCache/PersistentMapCache: Add removeAndGet and removeByPatternAndGet

2017-11-21 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser reassigned NIFI-4504:
---

Assignee: Michael Moser

> SimpleMapCache/PersistentMapCache: Add removeAndGet and removeByPatternAndGet
> -
>
> Key: NIFI-4504
> URL: https://issues.apache.org/jira/browse/NIFI-4504
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.4.0
>Reporter: Jon Kessler
>Assignee: Michael Moser
>Priority: Minor
>
> Typical map implementations return the value that was removed when performing 
> a remove. Because you couldn't update the existing remove methods without it 
> being a breaking change I suggest adding new versions of the remove and 
> removeByPattern methods that return the removed value(s).
> These changes should also be applied up the chain to any class that makes use 
> of these classes such as the MapCacheServer and 
> AtomicDistributedMapCacheClient.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4589) Allow multiple keys in FetchDistributedMapCache

2017-11-21 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-4589:

   Resolution: Fixed
Fix Version/s: 1.5.0
   Status: Resolved  (was: Patch Available)

> Allow multiple keys in FetchDistributedMapCache
> ---
>
> Key: NIFI-4589
> URL: https://issues.apache.org/jira/browse/NIFI-4589
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
> Fix For: 1.5.0
>
>
> Currently (1.4.0) FetchDistributedMapCache will look up a single key and 
> place the value (if present) in either the flow file content or a user-named 
> attribute, depending on the value of the "Put Cache Value In Attribute" 
> property. If the user wishes to look up more than one value, they need 
> multiple FetchDistributedMapCache processors, and each would make one call to 
> the server to get a single key.
> A useful improvement would be to allow multiple keys to be retrieved at once 
> by FetchDistributedMapCache. This would likely involve the following:
> 1) Update documentation and code to accept a comma-separated list of Cache 
> Key Identifiers
> 2) If a single Cache Key Identifier is specified, current behavior is 
> retained (i.e. output to flow file content or a specified attribute)
> 3) If multiple Cache Key Identifiers are specified, then Put Cache Value In 
> Attribute must be set, and the attributes will be prefixed by the value of 
> said property, followed by a period, followed by the evaluated cache key. So 
> if Cache Key Identifier is set to "field1, field2" and Put Cache Value In 
> Attribute is set to "myattrs", then the value for field1 will be placed in 
> the "myattrs.field1" attribute, and field2's value in "myattrs.field2" 
> respectively.
> 4) Due to the possible presence of Expression Language in the Cache Key 
> Identifier property, it may not be possible to determine whether multiple 
> cache keys are present (i.e. a single EL function that generates a 
> comma-separated list), so the requirement on Put Cache Value In Attribute 
> being set must be checked at validation time (if possible) and also run-time
> 5) To make this fetch efficient, a method "subMap" can be created on the 
> DistributedMapCache API, so multiple keys can be passed and multiple 
> key/value pairs can be returned in a single call to the cache server.
> 6) #5 implies a new protocol version (would be 3 at the time of this writing) 
> be added to the DistributedMapCache API
> 7) If protocol negotiation results in a lower version being used, then the 
> client should gracefully degrade into using the "subMap" operation to make 
> multiple calls to the "get" operation, and fill in the result map manually.
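Item 5 above proposes a bulk lookup; purely as an illustration (not the merged API), a subMap-style client method could look like this:

{code}
import java.io.IOException;
import java.util.Map;
import java.util.Set;

// Illustrative only: fetch several keys in one round trip to the cache server,
// returning a key/value map instead of issuing one "get" call per key.
public interface SubMapSupport<K, V> {
    Map<K, V> subMap(Set<K> keys) throws IOException;
}
{code}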



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4166) Create toolkit module to generate and build Swagger API library for NiFi

2017-11-17 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-4166:

Status: Patch Available  (was: Open)

> Create toolkit module to generate and build Swagger API library for NiFi
> 
>
> Key: NIFI-4166
> URL: https://issues.apache.org/jira/browse/NIFI-4166
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Tools and Build
>Reporter: Joe Skora
>Assignee: Joe Skora
>Priority: Minor
>
> Create a new toolkit module to generate the Swagger API library based on the 
> current REST API annotations in the NiFi source by way of the Swagger Codegen 
> Maven Plugin.  This should make it easier to access the REST API from 
> Java code or Groovy scripts.
> Swagger Codegen supports other languages, so this could be expanded to 
> additional API client types.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (NIFI-4505) MapCache/SimpleMapCache/PersistentMapCache: Add keyset method

2017-11-06 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser reassigned NIFI-4505:
---

Assignee: Michael Moser

> MapCache/SimpleMapCache/PersistentMapCache: Add keyset method
> -
>
> Key: NIFI-4505
> URL: https://issues.apache.org/jira/browse/NIFI-4505
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.4.0
>Reporter: Jon Kessler
>Assignee: Michael Moser
>Priority: Minor
>
> Suggest adding a keyset method to the MapCache and implementations as well as 
> to any client/interface that make use of a MapCache.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4550) Add an InferCharacterSet processor

2017-11-03 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238451#comment-16238451
 ] 

Michael Moser commented on NIFI-4550:
-

Perhaps somewhat related to NIFI-1874?

> Add an InferCharacterSet processor
> --
>
> Key: NIFI-4550
> URL: https://issues.apache.org/jira/browse/NIFI-4550
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Priority: Minor
>
> Sometimes in a NiFi flow it is not known what character set an incoming flow 
> file is using. This can make it difficult for downstream processing if the 
> processors expect a particular charset (whether the user can configure it or 
> not). There is a ConvertCharacterSet processor, but it expects an explicit 
> value for Input Character Set, when this might not be known.
> I propose an InferCharacterSet processor, which would presumably use some 
> license-friendly third-party library (there is a discussion 
> [here|https://stackoverflow.com/questions/499010/java-how-to-determine-the-correct-charset-encoding-of-a-stream])
>  to guess the character set, perhaps adding it as an attribute for use 
> downstream in ConvertCharacterSet.
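To illustrate the idea (no such processor exists yet, and the library named here is only one possible choice), a sketch of charset detection using ICU4J's CharsetDetector:

{code}
import com.ibm.icu.text.CharsetDetector;
import com.ibm.icu.text.CharsetMatch;

// Sketch only: guess the character set of a byte payload and return the best match,
// which a processor could then write to an attribute for use downstream.
static String inferCharset(final byte[] content) {
    final CharsetDetector detector = new CharsetDetector();
    detector.setText(content);
    final CharsetMatch match = detector.detect();    // may be null for tiny or ambiguous input
    return match != null ? match.getName() : null;   // e.g. "UTF-8", "ISO-8859-1"
}
{code}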



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4523) AWS S3 Processors should support arbitrary regions

2017-10-24 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16217759#comment-16217759
 ] 

Michael Moser commented on NIFI-4523:
-

Hi, I believe you can override the Region setting of the S3 processors by using 
a region-specific URL in the Endpoint Override URL property.  This property 
does support expression language.

s3-<region>.amazonaws.com
for example:
s3-us-west-1.amazonaws.com

Does this work for you?

> AWS S3 Processors should support arbitrary regions
> --
>
> Key: NIFI-4523
> URL: https://issues.apache.org/jira/browse/NIFI-4523
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.4.0
>Reporter: Benjamin Garrett
>Priority: Minor
>
> Currently the ListS3 processor uses the REGION PropertyDescriptor defined in 
> AbstractAWSProcessor.  This uses ".allowableValues()" which forces the region 
> names to come from a hard coded list.  AWS does occasionally bring new 
> regions online.  Every time there is a new region then we have to either wait 
> for a new nifi upgrade or else override the AbstractAWSProcessor (as well as 
> the necessary child classes which extend it). 
> It is simple enough to just let us type in arbitrary text into the S3 
> processor.  For example you could just comment out line 97 in 
> AbstractAWSProcessor.
> //.allowableValues(getAvailableRegions())
> If you did this, typically you also have to add an appropriate validator, 
> e.g.:  .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
> A different approach would be to expand the NiFi framework to allow you to 
> specify ".allowableValues()" but also to allow someone to type in 
> arbitrary text.  From a UI perspective, you would show the user a 
> choice list but then also make it editable so someone can type in arbitrary 
> text.  There have been other instances where I thought this feature would be 
> useful.  Maybe you would use a different method name instead of 
> allowableValues, such as 'possibleValues()', and if you did this then that 
> would be an indicator that the user gets an editable choice list (as opposed 
> to an uneditable hard-coded choice list).
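To make the suggestion concrete, a sketch of a free-text region property built with NiFi's PropertyDescriptor.Builder; this descriptor is illustrative, not the processor's current definition:

{code}
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

// Illustrative: a region property that accepts arbitrary text instead of a
// hard-coded allowable-values list, so new AWS regions work without an upgrade.
public static final PropertyDescriptor REGION = new PropertyDescriptor.Builder()
        .name("Region")
        .description("The AWS region, entered as free text.")
        .required(true)
        // .allowableValues(getAvailableRegions())  // the hard-coded list the ticket suggests dropping
        .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
        .build();
{code}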



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-1490) Add multipart request support to ListenHTTP Processor

2017-08-29 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16145310#comment-16145310
 ] 

Michael Moser commented on NIFI-1490:
-

+1 to the separate processor to handle multipart/form-data, so that both 
ListenHTTP and HandleHttpRequest (see NIFI-3469) can be used to receive data.

> Add multipart request support to ListenHTTP Processor
> -
>
> Key: NIFI-1490
> URL: https://issues.apache.org/jira/browse/NIFI-1490
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Andrew Serff
>
> The current ListenHTTP processor does not seem to support multipart requests 
> that are encoded with multipart/form-data.  When a multipart request is 
> received, the ListenHTTPServlet just copies the request InputStream to the 
> FlowFile's content, which leaves the form-encoding wrapper in the content and 
> in turn makes the file invalid.
> Specifically, we want to be able to support file uploads in a multipart 
> request. 
> See this thread in the mailing list for more info: 
> http://mail-archives.apache.org/mod_mbox/nifi-users/201602.mbox/%3C6DE9CEEF-2A37-480F-8D3C-5028C590FD9E%40acesinc.net%3E
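For context on what multipart handling involves on the servlet side, a minimal sketch using the standard Servlet 3.1 Part API; this is not the ListenHTTPServlet code, and it assumes multipart parsing has been enabled on the servlet:

{code}
import java.io.IOException;
import java.io.InputStream;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.Part;

// Sketch only: read each uploaded file part instead of copying the raw request body,
// so the multipart/form-data wrapper never ends up in the FlowFile content.
static void readUploadedParts(final HttpServletRequest request) throws IOException, ServletException {
    for (final Part part : request.getParts()) {
        if (part.getSubmittedFileName() != null) {       // a file field, not a plain form field
            try (InputStream in = part.getInputStream()) {
                // hand 'in' to the framework to become the FlowFile content (omitted here)
            }
        }
    }
}
{code}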



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-3686) EOFException on swap in causes tight loop in polling for flowfiles

2017-08-04 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-3686:

Issue Type: Improvement  (was: Bug)

> EOFException on swap in causes tight loop in polling for flowfiles
> --
>
> Key: NIFI-3686
> URL: https://issues.apache.org/jira/browse/NIFI-3686
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.1.1
>Reporter: Michael Moser
>
> If the flowfile_repository partition fills to 100% while swapping files out to a new 
> swap file, then this swap file becomes corrupt (partially written).  When 
> NiFi tries to swap this file in, an EOFException happens and we get the following 
> ERROR, which is nice.
> 2017-04-10 18:02:58,855 ERROR [Timer-Driven Process Thread-3] 
> o.a.n.controller.StandardFlowFileQueue Failed to swap in FlowFiles from Swap 
> File 
> /local/mwmoser/nifi-1.2.0-SNAPSHOT/./flowfile_repository/swap/1491574631605-2840b630-57fc-4f49-615b-0b37d77bec66-5dbc0ad0-921c-483e-a05d-5c65d014fa48.swap;
>  Swap File appears to be corrupt!
> However, once all other dataflow stops, the queue now shows 1 flowfiles 
> in it.  The processor reading from this queue constantly has its onTrigger() 
> called, and session.get() polls the queue and gets 0 files returned.  This 
> happens in a tight loop, with no other errors.
> To a user it appears that the processor is doing lots of work but just not 
> processing those 1 files.  The error message above only appears once in 
> the nifi-app.log, so you don't see anything wrong if you tail the log. 
>  When you restart NiFi, the error message above appears again, but the user 
> experience of 1 files not processing remains.
> The new SchemaSwapDeserializer does not (and perhaps cannot) implement the 
> IncompleteSwapFileException that the old SimpleSwapDeserializer does.  So, 
> reading a swap file is currently all-or-nothing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-3686) EOFException on swap in causes tight loop in polling for flowfiles

2017-08-04 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-3686:

Priority: Minor  (was: Major)

> EOFException on swap in causes tight loop in polling for flowfiles
> --
>
> Key: NIFI-3686
> URL: https://issues.apache.org/jira/browse/NIFI-3686
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.1.1
>Reporter: Michael Moser
>Priority: Minor
>
> If the flowfile_repository partition fills to 100% while swapping files out to a new 
> swap file, then this swap file becomes corrupt (partially written).  When 
> NiFi tries to swap this file in, an EOFException happens and we get the following 
> ERROR, which is nice.
> 2017-04-10 18:02:58,855 ERROR [Timer-Driven Process Thread-3] 
> o.a.n.controller.StandardFlowFileQueue Failed to swap in FlowFiles from Swap 
> File 
> /local/mwmoser/nifi-1.2.0-SNAPSHOT/./flowfile_repository/swap/1491574631605-2840b630-57fc-4f49-615b-0b37d77bec66-5dbc0ad0-921c-483e-a05d-5c65d014fa48.swap;
>  Swap File appears to be corrupt!
> However, once all other dataflow stops, the queue now shows 1 flowfiles 
> in it.  The processor reading from this queue constantly has its onTrigger() 
> called, and session.get() polls the queue and gets 0 files returned.  This 
> happens in a tight loop, with no other errors.
> To a user it appears that the processor is doing lots of work but just not 
> processing those 1 files.  The error message above only appears once in 
> the nifi-app.log, so you don't see anything wrong if you tail the log. 
>  When you restart NiFi, the error message above appears again, but the user 
> experience of 1 files not processing remains.
> The new SchemaSwapDeserializer does not (and perhaps cannot) implement the 
> IncompleteSwapFileException that the old SimpleSwapDeserializer does.  So, 
> reading a swap file is currently all-or-nothing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3376) Content repository disk usage is not close to reported size in Status Bar

2017-07-27 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103952#comment-16103952
 ] 

Michael Moser commented on NIFI-3376:
-

I also modified the title to describe the observations rather than propose a 
solution.  Thanks to all who are interested in investigating this!

> Content repository disk usage is not close to reported size in Status Bar
> -
>
> Key: NIFI-3376
> URL: https://issues.apache.org/jira/browse/NIFI-3376
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 0.7.1, 1.1.1
>Reporter: Michael Moser
>Assignee: Michael Hogue
> Attachments: NIFI-3376_Content_Repo_size_demo.xml
>
>
> On NiFi systems that deal with many files whose size is less than 1 MB, we 
> often see that the actual disk usage of the content_repository is much 
> greater than the size of flowfiles that NiFi reports are in its queues.  As 
> an example, NiFi may report "50,000 / 12.5 GB" but the content_repository 
> takes up 240 GB of its file system.  This leads to scenarios where a 500 GB 
> content_repository file system gets 100% full, but "I only had 40 GB of data 
> in my NiFi!"
> When several content claims exist in a single resource claim, and most but 
> not all content claims are terminated, the entire resource claim is still not 
> eligible for deletion or archive.  This could mean that only one 10 KB 
> content claim out of a 1 MB resource claim is counted by NiFi as existing in 
> its queues.
> If a particular flow has a slow egress point where flowfiles could back up 
> and remain on the system longer than expected, this problem is exacerbated.
> A potential solution is to compact resource claim files on disk. A background 
> thread could examine all resource claims, and for those that get "old" and 
> whose active content claim usage drops below a threshold, then rewrite the 
> resource claim file.
> A potential work-around is to allow modification of the FileSystemRepository 
> MAX_APPENDABLE_CLAIM_LENGTH to make it a smaller number.  This would increase 
> the probability that the content claims reference count in a resource claim 
> would reach 0 and the resource claim becomes eligible for deletion/archive.  
> Let users trade-off performance for more accurate accounting of NiFi queue 
> size to content repository size.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-3376) Content repository disk usage is not close to reported size in Status Bar

2017-07-27 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-3376:

Summary: Content repository disk usage is not close to reported size in 
Status Bar  (was: Implement content repository ResourceClaim compaction)

> Content repository disk usage is not close to reported size in Status Bar
> -
>
> Key: NIFI-3376
> URL: https://issues.apache.org/jira/browse/NIFI-3376
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 0.7.1, 1.1.1
>Reporter: Michael Moser
>Assignee: Michael Hogue
> Attachments: NIFI-3376_Content_Repo_size_demo.xml
>
>
> On NiFi systems that deal with many files whose size is less than 1 MB, we 
> often see that the actual disk usage of the content_repository is much 
> greater than the size of flowfiles that NiFi reports are in its queues.  As 
> an example, NiFi may report "50,000 / 12.5 GB" but the content_repository 
> takes up 240 GB of its file system.  This leads to scenarios where a 500 GB 
> content_repository file system gets 100% full, but "I only had 40 GB of data 
> in my NiFi!"
> When several content claims exist in a single resource claim, and most but 
> not all content claims are terminated, the entire resource claim is still not 
> eligible for deletion or archive.  This could mean that only one 10 KB 
> content claim out of a 1 MB resource claim is counted by NiFi as existing in 
> its queues.
> If a particular flow has a slow egress point where flowfiles could back up 
> and remain on the system longer than expected, this problem is exacerbated.
> A potential solution is to compact resource claim files on disk. A background 
> thread could examine all resource claims, and for those that get "old" and 
> whose active content claim usage drops below a threshold, then rewrite the 
> resource claim file.
> A potential work-around is to allow modification of the FileSystemRepository 
> MAX_APPENDABLE_CLAIM_LENGTH to make it a smaller number.  This would increase 
> the probability that the content claims reference count in a resource claim 
> would reach 0 and the resource claim becomes eligible for deletion/archive.  
> Let users trade-off performance for more accurate accounting of NiFi queue 
> size to content repository size.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3736) NiFi not honoring the "nifi.content.claim.max.appendable.size" and "nifi.content.claim.max.flow.files" properties

2017-07-27 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16103804#comment-16103804
 ] 

Michael Moser commented on NIFI-3736:
-

While building a template to show the effect of NIFI-3376, I found 2 things 
about this issue.

1. The nifi.content.claim.max.flow.files property setting has no effect on 
claim size.  Shall I reopen this JIRA to address this or make a new one?
2. In 1.3.0, the effective nifi.content.claim.max.appendable.size was hard 
coded to 1 MB.  Now that the property is used, the default for that property is 
10 MB.  This is a pretty significant change to default behavior of the content 
repository.  I think we should set the default for this property to 1 MB to 
match 1.3.0 behavior.  Shall I make a new JIRA to address this?

> NiFi not honoring the "nifi.content.claim.max.appendable.size" and 
> "nifi.content.claim.max.flow.files" properties
> -
>
> Key: NIFI-3736
> URL: https://issues.apache.org/jira/browse/NIFI-3736
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Michael Hogue
> Fix For: 1.4.0
>
>
> The nifi.properties file has two properties for controlling how many 
> FlowFiles to jam into one Content Claim. Unfortunately, it looks like this is 
> no longer honored in FileSystemRepository.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3376) Implement content repository ResourceClaim compaction

2017-07-24 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16099055#comment-16099055
 ] 

Michael Moser commented on NIFI-3376:
-

I don't think the degree of difficulty should stop us from giving a good faith 
effort to resolve issues like this.  When you have a 99% full content_repo, 
it's not a good user experience to wonder what NiFi could possibly be keeping 
in there.  I agree that compaction is going to be a hard problem to solve, but 
if we can't work out an alternative that gets us most of the way there, then we 
should seriously reconsider moving forward with it.

So, in the interest of an 80% solution, what does everyone think of this 
proposal?  The way I currently understand it, if we've placed 99 files (for 
example) into a single content claim but it has not yet reached 
MAX_APPENDABLE_CLAIM_LENGTH, and the next file we want to store is very large, 
then that large file could become the 100th file in that claim.  The deletion 
of that large file from content_repo is now dependent on the disposition of the 
99 smaller files.  Can we always just write files that are larger than 
MAX_APPENDABLE_CLAIM_LENGTH into their own claim?  Now that 
MAX_APPENDABLE_CLAIM_LENGTH is configurable again, NiFi users would have fine 
control over this behavior, and it's easy to explain.
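A minimal sketch of the rule being proposed here, written only to make it concrete (this is not the FileSystemRepository implementation, and the parameter names are assumptions):

{code}
// Sketch of the proposed rule: content larger than the configured max appendable claim
// length gets its own resource claim, so its deletion never depends on unrelated
// smaller files that happen to share a claim.
static boolean writeToDedicatedClaim(final long contentLength, final long maxAppendableClaimLengthBytes) {
    return contentLength > maxAppendableClaimLengthBytes;
}
{code}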

> Implement content repository ResourceClaim compaction
> -
>
> Key: NIFI-3376
> URL: https://issues.apache.org/jira/browse/NIFI-3376
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 0.7.1, 1.1.1
>Reporter: Michael Moser
>Assignee: Michael Hogue
>
> On NiFi systems that deal with many files whose size is less than 1 MB, we 
> often see that the actual disk usage of the content_repository is much 
> greater than the size of flowfiles that NiFi reports are in its queues.  As 
> an example, NiFi may report "50,000 / 12.5 GB" but the content_repository 
> takes up 240 GB of its file system.  This leads to scenarios where a 500 GB 
> content_repository file system gets 100% full, but "I only had 40 GB of data 
> in my NiFi!"
> When several content claims exist in a single resource claim, and most but 
> not all content claims are terminated, the entire resource claim is still not 
> eligible for deletion or archive.  This could mean that only one 10 KB 
> content claim out of a 1 MB resource claim is counted by NiFi as existing in 
> its queues.
> If a particular flow has a slow egress point where flowfiles could back up 
> and remain on the system longer than expected, this problem is exacerbated.
> A potential solution is to compact resource claim files on disk. A background 
> thread could examine all resource claims, and for those that get "old" and 
> whose active content claim usage drops below a threshold, then rewrite the 
> resource claim file.
> A potential work-around is to allow modification of the FileSystemRepository 
> MAX_APPENDABLE_CLAIM_LENGTH to make it a smaller number.  This would increase 
> the probability that the content claims reference count in a resource claim 
> would reach 0 and the resource claim becomes eligible for deletion/archive.  
> Let users trade-off performance for more accurate accounting of NiFi queue 
> size to content repository size.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3376) Implement content repository ResourceClaim compaction

2017-07-14 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087459#comment-16087459
 ] 

Michael Moser commented on NIFI-3376:
-

I think making MAX_APPENDABLE_CLAIM_LENGTH configurable should 
definitely happen via NIFI-3736, and with documentation cleanup that should also 
resolve NIFI-1603.  Once those are done, this ticket's priority can be 
lowered, but it will still be viable.

> Implement content repository ResourceClaim compaction
> -
>
> Key: NIFI-3376
> URL: https://issues.apache.org/jira/browse/NIFI-3376
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 0.7.1, 1.1.1
>Reporter: Michael Moser
>
> On NiFi systems that deal with many files whose size is less than 1 MB, we 
> often see that the actual disk usage of the content_repository is much 
> greater than the size of flowfiles that NiFi reports are in its queues.  As 
> an example, NiFi may report "50,000 / 12.5 GB" but the content_repository 
> takes up 240 GB of its file system.  This leads to scenarios where a 500 GB 
> content_repository file system gets 100% full, but "I only had 40 GB of data 
> in my NiFi!"
> When several content claims exist in a single resource claim, and most but 
> not all content claims are terminated, the entire resource claim is still not 
> eligible for deletion or archive.  This could mean that only one 10 KB 
> content claim out of a 1 MB resource claim is counted by NiFi as existing in 
> its queues.
> If a particular flow has a slow egress point where flowfiles could back up 
> and remain on the system longer than expected, this problem is exacerbated.
> A potential solution is to compact resource claim files on disk. A background 
> thread could examine all resource claims, and for those that get "old" and 
> whose active content claim usage drops below a threshold, then rewrite the 
> resource claim file.
> A potential work-around is to allow modification of the FileSystemRepository 
> MAX_APPENDABLE_CLAIM_LENGTH to make it a smaller number.  This would increase 
> the probability that the content claims reference count in a resource claim 
> would reach 0 and the resource claim becomes eligible for deletion/archive.  
> Let users trade-off performance for more accurate accounting of NiFi queue 
> size to content repository size.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3503) Create a 'SplitCSV' processor

2017-06-29 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16068426#comment-16068426
 ] 

Michael Moser commented on NIFI-3503:
-

concur

> Create a 'SplitCSV' processor
> -
>
> Key: NIFI-3503
> URL: https://issues.apache.org/jira/browse/NIFI-3503
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Wesley L Lawrence
>Priority: Minor
>
> While the 'SplitText' processor helps break up newline separated records into 
> individual files, it's not uncommon to have CSV files where records span 
> multiple lines, and 'SplitText' isn't able or meant to handle this.
> Currently, one can replace, remove, or escape newline characters that exist 
> in a single CSV record by searching within quoted columns with 'ReplaceText', 
> before passing the data onto 'SplitText'. However, this may not work in all 
> cases, or could potentially remove the valid newline character at the end of 
> a CSV record, if all edge cases aren't properly covered with regex.
> Having a dedicated 'SplitCSV' processor will solve this problem, and be a 
> simpler approach for users.
> See the following [Apache NiFi user email 
> thread|https://mail-archives.apache.org/mod_mbox/nifi-users/201702.mbox/%3CCAFuL2BbgymFXwu5fRyd8pP-zu6WkToqPE2Ek7bkyBg0_-cknqQ%40mail.gmail.com%3E]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-1452) Yield Duration can short circuit long Timer Driven Run Schedule

2017-05-19 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-1452:

Status: Patch Available  (was: Open)

> Yield Duration can short circuit long Timer Driven Run Schedule
> ---
>
> Key: NIFI-1452
> URL: https://issues.apache.org/jira/browse/NIFI-1452
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.4.1
>Reporter: Michael Moser
>Assignee: Michael Moser
>Priority: Trivial
>
> This may be a rare use case, but I configured a GetFile processor to be Timer 
> Driven with a Run Schedule of 30 secs.  Its Yield Duration was default 1 sec. 
>  I expected GetFile onTrigger() to be called every 30 secs, but it was being 
> called every 1 sec most of the time.
> GetFile will call context.yield() when it polls a directory and gets an empty 
> list in return.  It appears that a yield will ignore the Run Schedule.  Many 
> standard processors call context.yield() when they have no work to do.
> I changed my scheduling strategy to CRON Driven with its run schedule every 
> 30 seconds, and the onTrigger() was called every 30 seconds, even after a 
> yield.  So CRON Driven scheduling is working as expected after a yield.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-1452) Yield Duration can short circuit long Timer Driven Run Schedule

2017-05-19 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16017931#comment-16017931
 ] 

Michael Moser commented on NIFI-1452:
-

This also affects GetHTTP, which yields when the HTTP server responds with 304 
NOT_MODIFIED.  So a GetHTTP with a Run Schedule of 10 minutes and a Yield Duration 
of 1 second actually hits the HTTP server every 1 second when the server 
replies with 304.

An example server that exhibits this behavior is 
http://archive.apache.org/icons/blank.gif

> Yield Duration can short circuit long Timer Driven Run Schedule
> ---
>
> Key: NIFI-1452
> URL: https://issues.apache.org/jira/browse/NIFI-1452
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.4.1
>Reporter: Michael Moser
>Assignee: Michael Moser
>Priority: Trivial
>
> This may be a rare use case, but I configured a GetFile processor to be Timer 
> Driven with a Run Schedule of 30 secs.  Its Yield Duration was default 1 sec. 
>  I expected GetFile onTrigger() to be called every 30 secs, but it was being 
> called every 1 sec most of the time.
> GetFile will call context.yield() when it polls a directory and gets an empty 
> list in return.  It appears that a yield will ignore the Run Schedule.  Many 
> standard processors call context.yield() when they have no work to do.
> I changed my scheduling strategy to CRON Driven with its run schedule every 
> 30 seconds, and the onTrigger() was called every 30 seconds, even after a 
> yield.  So CRON Driven scheduling is working as expected after a yield.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (NIFI-3824) Perform Release Manager duties for 0.7.3 Release

2017-05-18 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser resolved NIFI-3824.
-
   Resolution: Delivered
Fix Version/s: 0.7.3

> Perform Release Manager duties for 0.7.3 Release
> 
>
> Key: NIFI-3824
> URL: https://issues.apache.org/jira/browse/NIFI-3824
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Michael Moser
>Assignee: Michael Moser
>Priority: Minor
> Fix For: 0.7.3
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (NIFI-1452) Yield Duration can short circuit long Timer Driven Run Schedule

2017-05-17 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser reassigned NIFI-1452:
---

Assignee: Michael Moser

> Yield Duration can short circuit long Timer Driven Run Schedule
> ---
>
> Key: NIFI-1452
> URL: https://issues.apache.org/jira/browse/NIFI-1452
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.4.1
>Reporter: Michael Moser
>Assignee: Michael Moser
>Priority: Trivial
>
> This may be a rare use case, but I configured a GetFile processor to be Timer 
> Driven with a Run Schedule of 30 secs.  Its Yield Duration was default 1 sec. 
>  I expected GetFile onTrigger() to be called every 30 secs, but it was being 
> called every 1 sec most of the time.
> GetFile will call context.yield() when it polls a directory and gets an empty 
> list in return.  It appears that a yield will ignore the Run Schedule.  Many 
> standard processors call context.yield() when they have no work to do.
> I changed my scheduling strategy to CRON Driven with its run schedule every 
> 30 seconds, and the onTrigger() was called every 30 seconds, even after a 
> yield.  So CRON Driven scheduling is working as expected after a yield.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (NIFI-1917) Upgrade jul-to-slf4j to 1.7.13 or later

2017-05-16 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser resolved NIFI-1917.
-
   Resolution: Fixed
Fix Version/s: 1.2.0

> Upgrade jul-to-slf4j to 1.7.13 or later
> ---
>
> Key: NIFI-1917
> URL: https://issues.apache.org/jira/browse/NIFI-1917
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 0.6.1
>Reporter: Michael Moser
>Priority: Minor
> Fix For: 1.2.0
>
>
> When building a custom processor, we encountered 
> [SLF4J-337|http://jira.qos.ch/browse/SLF4J-337] which is a bug in 
> jul-to-slf4j version 1.7.12.  Please upgrade SLF4J to help us resolve this.  
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (NIFI-3911) Improve documentation for Controller Services scoping

2017-05-16 Thread Michael Moser (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16012646#comment-16012646
 ] 

Michael Moser commented on NIFI-3911:
-

Recommend improving the Migration Guidance confluence page for "Migrating from 
0.7.x to 1.0.0" to include a sentence about this as well.

> Improve documentation for Controller Services scoping
> -
>
> Key: NIFI-3911
> URL: https://issues.apache.org/jira/browse/NIFI-3911
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Documentation & Website
>Reporter: Andrew Lim
>Assignee: Andrew Lim
>Priority: Minor
>
> Filing to revisit and improve the documentation on Controller Services 
> scoping, which has caused some confusion for users.  There have been some 
> improvements made to the UX (NIFI-3128), but the doc can perhaps do a 
> better job of highlighting this change in functionality from 0.x NiFi.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (NIFI-3862) ListenHTTPServlet should set the issuerDN as well

2017-05-14 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-3862:

Fix Version/s: (was: 0.7.3)

> ListenHTTPServlet should set the issuerDN as well
> -
>
> Key: NIFI-3862
> URL: https://issues.apache.org/jira/browse/NIFI-3862
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 0.7.1
>Reporter: Vincent Russell
>Priority: Minor
>  Labels: security
>
> The ListenHTTPServlet currently sets the remote user DN as an attribute on 
> the flowfile, but it really should set the remote issuer DN as well.  This 
> will allow us to verify that the remote host actually has permission to hit 
> our server, using those two pieces of information.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (NIFI-2685) Builds fails due to "java.awt.AWTError: Can't connect to X11 window server "

2017-05-11 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-2685:

Fix Version/s: 0.7.3

> Builds fails due to "java.awt.AWTError: Can't connect to X11 window server "
> 
>
> Key: NIFI-2685
> URL: https://issues.apache.org/jira/browse/NIFI-2685
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Andre F de Miranda
>Assignee: Andre F de Miranda
>Priority: Trivial
> Fix For: 1.1.0, 0.8.0, 0.7.3
>
>
> When building NiFi on a machine with ssh X11 forwarding enabled, maven may 
> fail to build due to the following error:
> {code}
> testResize(org.apache.nifi.processors.image.TestResizeImage)  Time elapsed: 
> 2.22 sec  <<< FAILURE!
> java.lang.AssertionError: java.awt.AWTError: Can't connect to X11 window 
> server using 'localhost:12.0' as the value of the DISPLAY variable.
> at 
> org.apache.nifi.util.StandardProcessorTestRunner.run(StandardProcessorTestRunner.java:192)
> at 
> org.apache.nifi.util.StandardProcessorTestRunner.run(StandardProcessorTestRunner.java:151)
> at 
> org.apache.nifi.util.StandardProcessorTestRunner.run(StandardProcessorTestRunner.java:146)
> at 
> org.apache.nifi.util.StandardProcessorTestRunner.run(StandardProcessorTestRunner.java:141)
> at 
> org.apache.nifi.util.StandardProcessorTestRunner.run(StandardProcessorTestRunner.java:136)
> at 
> org.apache.nifi.processors.image.TestResizeImage.testResize(TestResizeImage.java:44)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (NIFI-1125) ERROR [NiFi Web Server-22655] o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: java.lang.NullPointerException

2017-05-11 Thread Michael Moser (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Moser updated NIFI-1125:

Fix Version/s: 0.7.3

> ERROR [NiFi Web Server-22655] o.a.nifi.web.api.config.ThrowableMapper An 
> unexpected error has occurred: java.lang.NullPointerException
> --
>
> Key: NIFI-1125
> URL: https://issues.apache.org/jira/browse/NIFI-1125
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 0.4.0
> Environment: Centos 6.7
>Reporter: Mark Petronic
>Assignee: Koji Kawamura
>Priority: Minor
> Fix For: 0.8.0, 1.2.0, 0.7.3
>
> Attachments: StatsIngestFlow.xml
>
>
> Running with a latest master branch build off commit
> dbf0c7893fef964bfbb3a4c039c756396587ce12.
> Steps to recreate:
> 1. All processors stopped
> 2. Add uuid to "Attributes to Send" in the InvokeHttp Processor
> 3. Save and close config dialog
> 4. Open the same config dialog and remove the uuid field so that it is "No value set"
> 5. Apply the changes; the web UI will crash and indicate an error has occurred,
> and the error below will be seen in the user log
> 6. Hit F5 and browser reloads UI just fine
> See attached template.
> 2015-11-07 19:42:58,369 ERROR [NiFi Web Server-22655]
> o.a.nifi.web.api.config.ThrowableMapper An unexpected error has
> occurred: java.lang.NullPointerException. Returning Internal Server
> Error response.
> java.lang.NullPointerException: null
> at 
> org.apache.nifi.processors.standard.InvokeHTTP.onPropertyModified(InvokeHTTP.java:121)
> ~[na:na]
> at 
> org.apache.nifi.controller.AbstractConfiguredComponent.removeProperty(AbstractConfiguredComponent.java:163)
> ~[nifi-framework-core-api-0.3.1-SNAPSHOT.jar:0.3.1-SNAPSHOT]
> at 
> org.apache.nifi.web.dao.impl.StandardProcessorDAO.configureProcessor(StandardProcessorDAO.java:174)
> ~[classes/:na]
> at 
> org.apache.nifi.web.dao.impl.StandardProcessorDAO.updateProcessor(StandardProcessorDAO.java:391)
> ~[classes/:na]
> at 
> org.apache.nifi.web.dao.impl.StandardProcessorDAO$$FastClassBySpringCGLIB$$779e089b.invoke()
> ~[spring-core-4.1.6.RELEASE.jar:na]
> at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
> ~[spring-core-4.1.6.RELEASE.jar:4.1.6.RELEASE]
> at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:717)
> ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
> ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
> at 
> org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:85)
> ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
> at 
> org.apache.nifi.audit.ProcessorAuditor.updateProcessorAdvice(ProcessorAuditor.java:120)
> ~[classes/:na]
> at sun.reflect.GeneratedMethodAccessor176.invoke(Unknown Source) ~[na:na]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> ~[na:1.8.0_51]
> at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_51]
> at 
> org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:621)
> ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
> at 
> org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:610)
> ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
> at 
> org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:68)
> ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:168)
> ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
> at 
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
> ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
> ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
> at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:653)
> ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
> at 
> org.apache.nifi.web.dao.impl.StandardProcessorDAO$$EnhancerBySpringCGLIB$$f8bfa279.updateProcessor()
> ~[spring-core-4.1.6.RELEASE.jar:na]
> at 
> org.apache.nifi.web.StandardNiFiServiceFacade$2.execute(StandardNiFiServiceFacade.java:412)
> ~[classes/:0.3.1-SNAPSHOT]
> at 
> org.apache.nifi.web.StandardOptimisticLockingManager.configureFlow(StandardOptimisticLockingManager.java:83)
> ~[nifi-web-optimistic-locking-0.3.1-SNAPSHOT.jar:0.3.1-SNAPSHOT]
> at 
> 
