[jira] [Commented] (NIFI-7588) InvokeHTTP ignoring custom parameters when stop+finalize+start

2021-08-05 Thread Firenz (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17393745#comment-17393745
 ] 

Firenz commented on NIFI-7588:
--

I think I had the same issue with my old NiFi 1.8.0; see NIFI-5964.

> InvokeHTTP ignoring custom parameters when stop+finalize+start
> --
>
> Key: NIFI-7588
> URL: https://issues.apache.org/jira/browse/NIFI-7588
> Project: Apache NiFi
>  Issue Type: Bug
> Environment: Amazon Linux
>Reporter: Alejandro Fiel Martínez
>Assignee: Joseph Gresock
>Priority: Major
> Attachments: invokeHTTP_NiFi_bug.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> I have an InvokeHTTP processor with 3 custom parameters to be passed as 
> headers. If I add an SSL Context Service and then remove it, the processor stops 
> using those 3 parameters, and I have to delete and recreate them. They are 
> still there, but I see in DEBUG that they are not used in the GET request.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-9011) Tags Processor Kafka 2.6 / 2.5

2021-08-05 Thread Firenz (Jira)
Firenz created NIFI-9011:


 Summary: Tags Processor Kafka 2.6 / 2.5
 Key: NIFI-9011
 URL: https://issues.apache.org/jira/browse/NIFI-9011
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.13.2, 1.13.1, 1.14.0, 1.13.0
Reporter: Firenz


The Kafka 2.6 processors are still described as "2.5".

One example, in 
/nifi/nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-2-6-processors/src/main/java/org/apache/nifi/processors/kafka/pubsub:
{code:java}
@CapabilityDescription("Sends the contents of a FlowFile as a message to Apache Kafka using the Kafka 2.5 Producer API."
        + "The messages to send may be individual FlowFiles or may be delimited, using a "
        + "user-specified delimiter, such as a new-line. "
        + "The complementary NiFi processor for fetching messages is ConsumeKafka_2_6.")
{code}
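A possible fix, sketched below (an assumption on my part: only the version number in the text changes, and the exact final wording is up to the committer):

```java
// Hypothetical corrected description for the 2.6 processor (sketch, not the committed fix):
@CapabilityDescription("Sends the contents of a FlowFile as a message to Apache Kafka using the Kafka 2.6 Producer API."
        + "The messages to send may be individual FlowFiles or may be delimited, using a "
        + "user-specified delimiter, such as a new-line. "
        + "The complementary NiFi processor for fetching messages is ConsumeKafka_2_6.")
```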
 





[jira] [Created] (NIFI-8992) KafkaRecordSink_2_* - Unable to use PLAIN auth mechanism

2021-08-03 Thread Firenz (Jira)
Firenz created NIFI-8992:


 Summary: KafkaRecordSink_2_* - Unable to use PLAIN auth mechanism
 Key: NIFI-8992
 URL: https://issues.apache.org/jira/browse/NIFI-8992
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.14.0, 1.13.0
Reporter: Firenz


The KafkaRecordSink services cannot use the PLAIN authentication mechanism.

If we select "PLAIN", an error asks us to fill in username and password properties, 
but these properties do not exist. Adding them dynamically does not work 
either, and the Kafka JAAS config (set with JVM args) is not used.

 

(There is a similar compatibility break with all Kafka processors: 1.8.0 
was OK, and I'm testing 1.14. Using a static property to choose the 
authentication method (Kerberos, PLAIN, etc.) plus username and password properties 
breaks/hides the global Kafka JAAS config.)
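For reference, a minimal sketch of the kind of JAAS definition that gets bypassed, using the standard Kafka PlainLoginModule (the credentials here are placeholders), normally activated via -Djava.security.auth.login.config=/path/to/kafka-jaas.conf:

```java
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="user"
  password="secret";
};
```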





[jira] [Created] (NIFI-7840) ExecuteScript - Groovy 3.0

2020-09-23 Thread Firenz (Jira)
Firenz created NIFI-7840:


 Summary: ExecuteScript - Groovy 3.0
 Key: NIFI-7840
 URL: https://issues.apache.org/jira/browse/NIFI-7840
 Project: Apache NiFi
  Issue Type: Wish
  Components: Extensions
Reporter: Firenz


Groovy 4.0 is in alpha, and Groovy 3.0 has been out for a long time, with features like 
lambdas.

(ExecuteScript is using Groovy 2.5.4)

New features: [https://groovy-lang.org/releasenotes/groovy-3.0.html]

 





[jira] [Commented] (NIFI-7749) listSFTP - Failed "Should not reach here"

2020-08-19 Thread Firenz (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17180595#comment-17180595
 ] 

Firenz commented on NIFI-7749:
--

After investigation, it appears to be a side effect of an HTTP proxy access denial:
{code:java}
2020-08-18 12:01:29,453 ERROR [Timer-Driven Process Thread-8] o.a.nifi.processors.standard.ListSFTP ListSFTP[id=9e4976a8-7457-3ded-7ddd-28cbf73202c8] ListSFTP[id=9e4976a8-7457-3ded-7ddd-28cbf73202c8] failed to process session due to java.lang.InternalError: Should not reach here; Processor Administratively Yielded for 1 sec: java.lang.InternalError: Should not reach here
2020-08-18 12:01:29,453 ERROR [Timer-Driven Process Thread-8] o.a.nifi.processors.standard.ListSFTP ListSFTP[id=9e4976a8-7457-3ded-7ddd-28cbf73202c8] ListSFTP[id=9e4976a8-7457-3ded-7ddd-28cbf73202c8] failed to process session due to java.lang.InternalError: Should not reach here; Processor Administratively Yielded for 1 sec: java.lang.InternalError: Should not reach here
java.lang.InternalError: Should not reach here
    at java.net.HttpConnectSocketImpl.doTunneling(Unknown Source)
    at java.net.HttpConnectSocketImpl.doTunnel(Unknown Source)
    at java.net.HttpConnectSocketImpl.access$200(Unknown Source)
    at java.net.HttpConnectSocketImpl$2.run(Unknown Source)
    at java.net.HttpConnectSocketImpl$2.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.HttpConnectSocketImpl.privilegedDoTunnel(Unknown Source)
    at java.net.HttpConnectSocketImpl.connect(Unknown Source)
    at java.net.Socket.connect(Unknown Source)
    at net.schmizz.sshj.SocketClient.connect(SocketClient.java:126)
    at org.apache.nifi.processors.standard.util.SFTPTransfer.getSFTPClient(SFTPTransfer.java:595)
    at org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:238)
    at org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:201)
    at org.apache.nifi.processors.standard.ListFileTransfer.performListing(ListFileTransfer.java:106)
    at org.apache.nifi.processors.standard.ListSFTP.performListing(ListSFTP.java:146)
    at org.apache.nifi.processor.util.list.AbstractListProcessor.listByTrackingTimestamps(AbstractListProcessor.java:472)
    at org.apache.nifi.processor.util.list.AbstractListProcessor.onTrigger(AbstractListProcessor.java:414)
    at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
    at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176)
    at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
    at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
    at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
    at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
    at java.util.concurrent.FutureTask.runAndReset(Unknown Source)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown Source)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.reflect.InvocationTargetException: null
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    ... 29 common frames omitted
Caused by: java.io.IOException: Unable to tunnel through proxy. Proxy returns "HTTP/1.1 403 Forbidden"
    at sun.net.www.protocol.http.HttpURLConnection.doTunneling(Unknown Source)
    ... 33 common frames omitted{code}
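The tail of the trace can be reduced to a tiny sketch (the proxy host/port below are hypothetical, and the assumption is that this mirrors how SSHJ's SocketClient ends up inside java.net.HttpConnectSocketImpl): the CONNECT tunnel is only attempted when connect() is called, and a proxy 403 then surfaces as the "Unable to tunnel through proxy" IOException above.

```java
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.Socket;

public class ProxyTunnelSketch {
    // Returns an unconnected socket that will tunnel through an HTTP proxy on connect().
    public static Socket proxiedSocket(String proxyHost, int proxyPort) {
        Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyHost, proxyPort));
        return new Socket(proxy);
    }

    public static void main(String[] args) {
        Socket s = proxiedSocket("proxy.example.com", 8080); // hypothetical proxy
        // Nothing is sent to the proxy until connect() is called.
        System.out.println(s.isConnected()); // prints "false"
    }
}
```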
 

I have to double-check, but this did not happen in NiFi 1.8.0 with the previous 
JSch library (I upgraded due to an algorithm negotiation issue).

 

I'm using the Oracle JRE:
{code:java}
java version "1.8.0_211"
Java(TM) SE Runtime Environment (build 1.8.0_211-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.211-b12, mixed mode){code}
 

> listSFTP - Failed "Should not reach here"
> -
>
> Key: NIFI-7749
> URL: https://issues.apache.org/jira/browse/NIFI-7749
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
> Environment: Windows 10 + Java8 + One standalone NIFI
>Reporter: Firenz
>Priority: Major
> Attachments: listSFTP.xml
>
>
> Connecting to a B2B SFTP server through an HTTPS proxy: 
>  
> ListSFTP[id=9e4976a8-7457-3ded-7ddd-28cbf73202c8] 
> ListSFTP[id=9e4976a8-7457-3ded-7ddd-28cbf73202c8] failed to process session 
> due to {color:#FF}Should not reach here{color}; Processor 
> 

[jira] [Created] (NIFI-7749) listSFTP - Failed "Should not reach here"

2020-08-18 Thread Firenz (Jira)
Firenz created NIFI-7749:


 Summary: listSFTP - Failed "Should not reach here"
 Key: NIFI-7749
 URL: https://issues.apache.org/jira/browse/NIFI-7749
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 1.11.4
 Environment: Windows 10 + Java8 + One standalone NIFI
Reporter: Firenz
 Attachments: listSFTP.xml

Connecting to a B2B SFTP server through an HTTPS proxy:

 

ListSFTP[id=9e4976a8-7457-3ded-7ddd-28cbf73202c8] ListSFTP[id=9e4976a8-7457-3ded-7ddd-28cbf73202c8] failed to process session due to {color:#FF}Should not reach here{color}; Processor Administratively Yielded for 1 sec: java.lang.InternalError: Should not reach here

 

I can't find a clue on GitHub for the keyword "Should not reach here". The attached 
template has "masked" URLs/credentials.





[jira] [Updated] (NIFI-7596) InvokeHTTP - Get Request Body From Attribute

2020-07-02 Thread Firenz (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Firenz updated NIFI-7596:
-
Description: 
*Use case:* the flow file contains one large payload (e.g. a PDF or picture), but you 
want to look up metadata with InvokeHTTP using a POST (or PUT). POST is often 
used in HTTP APIs for "search" with many parameters or lists of parameters.

Another example is the 3-step batch upload of products in Nuxeo or the Salesforce 
Bulk API: you need to "open" a batch with a POST InvokeHTTP.

 

*Proposition:* in InvokeHTTP, add a "Get Request Body From Attribute" property 
(like Put Response Body In Attribute). If this field is not empty, the chosen 
attribute will be read as the request body instead of the flow file content.

  was:
*Use case:* the flow file contains one large payload (e.g. a PDF or picture), but you 
want to look up metadata with InvokeHTTP using a POST (or PUT). POST is often 
used in HTTP APIs for "search" with many parameters or lists of parameters.

 

*Proposition:* in InvokeHTTP, add a "Get Request Body From Attribute" property 
(like Put Response Body In Attribute). If this field is not empty, the chosen 
attribute will be read as the request body instead of the flow file content.


> InvokeHTTP - Get Request Body From Attribute
> 
>
> Key: NIFI-7596
> URL: https://issues.apache.org/jira/browse/NIFI-7596
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Firenz
>Priority: Minor
>
> *Use case:* the flow file contains one large payload (e.g. a PDF or picture), but 
> you want to look up metadata with InvokeHTTP using a POST (or PUT). POST is 
> often used in HTTP APIs for "search" with many parameters or lists of parameters.
> Another example is the 3-step batch upload of products in Nuxeo or the 
> Salesforce Bulk API: you need to "open" a batch with a POST InvokeHTTP.
>  
> *Proposition:* in InvokeHTTP, add a "Get Request Body From Attribute" property 
> (like Put Response Body In Attribute). If this field is not empty, the chosen 
> attribute will be read as the request body instead of the flow file content.





[jira] [Created] (NIFI-7596) InvokeHTTP - Get Request Body From Attribute

2020-07-02 Thread Firenz (Jira)
Firenz created NIFI-7596:


 Summary: InvokeHTTP - Get Request Body From Attribute
 Key: NIFI-7596
 URL: https://issues.apache.org/jira/browse/NIFI-7596
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Firenz


*Use case:* the flow file contains one large payload (e.g. a PDF or picture), but you 
want to look up metadata with InvokeHTTP using a POST (or PUT). POST is often 
used in HTTP APIs for "search" with many parameters or lists of parameters.

 

*Proposition:* in InvokeHTTP, add a "Get Request Body From Attribute" property 
(like Put Response Body In Attribute). If this field is not empty, the chosen 
attribute will be read as the request body instead of the flow file content.
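A rough sketch of what the proposed property could look like inside the processor. This is hypothetical: the property does not exist in InvokeHTTP today, and the name and description are taken from the proposal; only the builder calls follow the usual nifi-api PropertyDescriptor pattern.

```java
// Hypothetical descriptor for the proposed property (sketch, not existing NiFi code):
static final PropertyDescriptor REQUEST_BODY_FROM_ATTRIBUTE = new PropertyDescriptor.Builder()
        .name("Get Request Body From Attribute")
        .description("If set, the named FlowFile attribute is sent as the request body "
                + "instead of the FlowFile content.")
        .required(false)
        .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
        .build();
```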





[jira] [Commented] (NIFI-6975) Processor for SAP Java Connector (SAP JCo)

2020-06-24 Thread Firenz (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144018#comment-17144018
 ] 

Firenz commented on NIFI-6975:
--

It seems difficult, because JCo cannot be redistributed with the NiFi 
distribution. SAP says: "The redistribution of any connector is not 
allowed." ([https://support.sap.com/en/product/connectors/jco.html])

 

It may be possible for someone to host the code in an external Git repo, so that 
each user can build the NAR and copy it into NiFi.

 

 

> Processor for SAP Java Connector (SAP JCo)
> --
>
> Key: NIFI-6975
> URL: https://issues.apache.org/jira/browse/NIFI-6975
> Project: Apache NiFi
>  Issue Type: Wish
>Reporter: Martin
>Priority: Major
>  Labels: JCo, Processor, RFC, SAP
>
> A jco processor would allow us to communicate with on-premise SAP systems via 
> SAP's RFC protocol.
> https://support.sap.com/en/product/connectors/jco.html
>  





[jira] [Updated] (NIFI-7522) Processor Kafka 2.5 / Static Membership

2020-06-17 Thread Firenz (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Firenz updated NIFI-7522:
-
Attachment: (was: nifi-kafka-2-5-nar-1.11.5-SNAPSHOT.nar)

> Processor Kafka 2.5 / Static Membership
> ---
>
> Key: NIFI-7522
> URL: https://issues.apache.org/jira/browse/NIFI-7522
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Firenz
>Priority: Minor
>  Labels: kafka, processor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Kafka 2.4 introduced static membership 
> ([KIP-345|https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances]).
>  
> So I propose to add Publish/Consume 2_5 processors with the latest Kafka client, 2.5.
>  
> I don't know the versioning strategy for Kafka processors. We have 2 choices:
>  * Thanks to dynamic properties, upgrade the 2.0 processors set with the 2.5 
> Kafka libs (confusing, but a lazy update)
>  * Create a new 2.5 processors set
>  
> I've created a PR with the second choice: a new 2.5 processors set.
>  





[jira] [Updated] (NIFI-7522) Processor Kafka 2.5 / Static Membership

2020-06-17 Thread Firenz (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Firenz updated NIFI-7522:
-
Description: 
Kafka 2.4 introduced static membership 
([KIP-345|https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances]).

 

So I propose to add Publish/Consume 2_5 processors with the latest Kafka client, 2.5.

 

I don't know the versioning strategy for Kafka processors. We have 2 choices:
 * Thanks to dynamic properties, upgrade the 2.0 processors set with the 2.5 
Kafka libs (confusing, but a lazy update)
 * Create a new 2.5 processors set

 

I've created a PR with the second choice: a new 2.5 processors set.

 

  was:
Kafka 2.4 introduced static membership 
([KIP-345|https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances]).

 

So I propose to add Publish/Consume 2_5 processors with the latest Kafka client, 2.5.

 

I don't know the versioning strategy for Kafka processors. We have 2 choices:
 * Thanks to dynamic properties, upgrade the 2.0 processors set with the 2.5 
Kafka libs (confusing, but a lazy update)
 * Create a new 2.5 processors set

 

So I created a new set of 2.5 processors from 1.11.5-SNAPSHOT. It builds and the 
tests pass, but I still need to learn the full process to create the branch/pull/merge 
request, etc. For people in a hurry, or just curious, my last build is attached.

 

 Labels: kafka processor  (was: )

> Processor Kafka 2.5 / Static Membership
> ---
>
> Key: NIFI-7522
> URL: https://issues.apache.org/jira/browse/NIFI-7522
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Firenz
>Priority: Minor
>  Labels: kafka, processor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Kafka 2.4 introduced static membership 
> ([KIP-345|https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances]).
>  
> So I propose to add Publish/Consume 2_5 processors with the latest Kafka client, 2.5.
>  
> I don't know the versioning strategy for Kafka processors. We have 2 choices:
>  * Thanks to dynamic properties, upgrade the 2.0 processors set with the 2.5 
> Kafka libs (confusing, but a lazy update)
>  * Create a new 2.5 processors set
>  
> I've created a PR with the second choice: a new 2.5 processors set.
>  





[jira] [Commented] (NIFI-7533) Urgent requirement for cdc for Postgres

2020-06-17 Thread Firenz (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17138184#comment-17138184
 ] 

Firenz commented on NIFI-7533:
--

CDC for MySQL and PostgreSQL are two distinct "technologies" and require 
different efforts. So there is no Postgres CDC "out of the NiFi box".

 
 * A simple CDC for Postgres: use Postgres logical replication with 
publish/subscribe + JSON output format written to local/NFS storage, then GetFile with 
NiFi and parse. 
 * Alternative: use Postgres logical replication + Kafka Connect + Kafka.

Of course it depends on a lot of things (on-prem Postgres, OS access, network 
topology, performance...).

 

> Urgent requirement for cdc for Postgres
> ---
>
> Key: NIFI-7533
> URL: https://issues.apache.org/jira/browse/NIFI-7533
> Project: Apache NiFi
>  Issue Type: Wish
>Reporter: Vivek
>Priority: Major
>
> Hi Team, I can see the cdc for mysql in Nifi & was successfully able to use 
> it but now I need to use cdc with Postgres. Can someone suggest if this is 
> possible in Nifi.





[jira] [Updated] (NIFI-7522) Processor Kafka 2.5 / Static Membership

2020-06-11 Thread Firenz (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Firenz updated NIFI-7522:
-
Description: 
Kafka 2.4 introduced static membership 
([KIP-345|https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances]).

 

So I propose to add Publish/Consume 2_5 processors with the latest Kafka client, 2.5.

 

I don't know the versioning strategy for Kafka processors. We have 2 choices:
 * Thanks to dynamic properties, upgrade the 2.0 processor with the 2.5 libs 
(confusing, but a lazy update)
 * Create a new 2.5 processor set

 

So I created a new set of 2.5 processors from 1.11.5-SNAPSHOT. It builds and the 
tests pass, but I still need to learn the full process to create the branch/pull/merge 
request, etc. For people in a hurry, or just curious, my last build is attached.

 

  was:
Kafka 2.4 introduced static membership 
([KIP-345|https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances]).

 

So I propose to add Publish/Consume 2_5 processors with the latest Kafka client, 2.5.

 

I don't know the versioning strategy for Kafka processors. There is no 
difference between the 2.0 and 2.5 processors (no new fields; 
group.instance.id is added with dynamic properties).

 

So I created a new set of 2.5 processors from 1.11.5-SNAPSHOT. It builds and the 
tests pass, but I still need to learn the full process to create the branch/pull/merge 
request, etc. For people in a hurry, or just curious, my last build is attached.

 


> Processor Kafka 2.5 / Static Membership
> ---
>
> Key: NIFI-7522
> URL: https://issues.apache.org/jira/browse/NIFI-7522
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Firenz
>Priority: Minor
> Attachments: nifi-kafka-2-5-nar-1.11.5-SNAPSHOT.nar
>
>
> Kafka 2.4 introduced static membership 
> ([KIP-345|https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances]).
>  
> So I propose to add Publish/Consume 2_5 processors with the latest Kafka client, 2.5.
>  
> I don't know the versioning strategy for Kafka processors. We have 2 choices:
>  * Thanks to dynamic properties, upgrade the 2.0 processor with the 2.5 libs 
> (confusing, but a lazy update)
>  * Create a new 2.5 processor set
>  
> So I created a new set of 2.5 processors from 1.11.5-SNAPSHOT. It builds and 
> the tests pass, but I still need to learn the full process to create the branch/pull/merge 
> request, etc. For people in a hurry, or just curious, my last build is 
> attached.
>  





[jira] [Updated] (NIFI-7522) Processor Kafka 2.5 / Static Membership

2020-06-11 Thread Firenz (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Firenz updated NIFI-7522:
-
Description: 
Kafka 2.4 introduced static membership 
([KIP-345|https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances]).

 

So I propose to add Publish/Consume 2_5 processors with the latest Kafka client, 2.5.

 

I don't know the versioning strategy for Kafka processors. We have 2 choices:
 * Thanks to dynamic properties, upgrade the 2.0 processors set with the 2.5 
Kafka libs (confusing, but a lazy update)
 * Create a new 2.5 processors set

 

So I created a new set of 2.5 processors from 1.11.5-SNAPSHOT. It builds and the 
tests pass, but I still need to learn the full process to create the branch/pull/merge 
request, etc. For people in a hurry, or just curious, my last build is attached.

 

  was:
Kafka 2.4 introduced static membership 
([KIP-345|https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances]).

 

So I propose to add Publish/Consume 2_5 processors with the latest Kafka client, 2.5.

 

I don't know the versioning strategy for Kafka processors. We have 2 choices:
 * Thanks to dynamic properties, upgrade the 2.0 processor with the 2.5 libs 
(confusing, but a lazy update)
 * Create a new 2.5 processor set

 

So I created a new set of 2.5 processors from 1.11.5-SNAPSHOT. It builds and the 
tests pass, but I still need to learn the full process to create the branch/pull/merge 
request, etc. For people in a hurry, or just curious, my last build is attached.

 


> Processor Kafka 2.5 / Static Membership
> ---
>
> Key: NIFI-7522
> URL: https://issues.apache.org/jira/browse/NIFI-7522
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Firenz
>Priority: Minor
> Attachments: nifi-kafka-2-5-nar-1.11.5-SNAPSHOT.nar
>
>
> Kafka 2.4 introduced static membership 
> ([KIP-345|https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances]).
>  
> So I propose to add Publish/Consume 2_5 processors with the latest Kafka client, 2.5.
>  
> I don't know the versioning strategy for Kafka processors. We have 2 choices:
>  * Thanks to dynamic properties, upgrade the 2.0 processors set with the 2.5 
> Kafka libs (confusing, but a lazy update)
>  * Create a new 2.5 processors set
>  
> So I created a new set of 2.5 processors from 1.11.5-SNAPSHOT. It builds and 
> the tests pass, but I still need to learn the full process to create the branch/pull/merge 
> request, etc. For people in a hurry, or just curious, my last build is 
> attached.
>  





[jira] [Created] (NIFI-7522) Processor Kafka 2.5 / Static Membership

2020-06-11 Thread Firenz (Jira)
Firenz created NIFI-7522:


 Summary: Processor Kafka 2.5 / Static Membership
 Key: NIFI-7522
 URL: https://issues.apache.org/jira/browse/NIFI-7522
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Firenz
 Attachments: nifi-kafka-2-5-nar-1.11.5-SNAPSHOT.nar

Kafka 2.4 introduced static membership 
([KIP-345|https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances]).

 

So I propose to add Publish/Consume 2_5 processors with the latest Kafka client, 2.5.

 

I don't know the versioning strategy for Kafka processors. There is no 
difference between the 2.0 and 2.5 processors (no new fields; 
group.instance.id is added with dynamic properties).

 

So I created a new set of 2.5 processors from 1.11.5-SNAPSHOT. It builds and the 
tests pass, but I still need to learn the full process to create the branch/pull/merge 
request, etc. For people in a hurry, or just curious, my last build is attached.
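The dynamic-property approach above can be sketched in plain Kafka client terms: "group.instance.id" is the standard client key introduced by KIP-345, while the broker address and ids below are placeholders.

```java
import java.util.Properties;

public class StaticMembershipSketch {
    // Builds consumer properties with a static membership id (KIP-345 sketch).
    public static Properties consumerProps(String instanceId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9092"); // placeholder broker
        props.put("group.id", "nifi-consumers");
        // Static membership: a stable id avoids a full rebalance when this
        // consumer instance restarts within the session timeout.
        props.put("group.instance.id", instanceId);
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps("nifi-node-1").getProperty("group.instance.id")); // prints "nifi-node-1"
    }
}
```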

 





[jira] [Commented] (NIFI-6777) Data Provenance SIGSEGV : Search Lucene Index-1

2019-12-06 Thread Firenz (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16989925#comment-16989925
 ] 

Firenz commented on NIFI-6777:
--

Disabling G1GC did not solve our issue. We've just had a "double 
crash": nodes 1 & 3 of our 3-node cluster.
 * Would using OpenJDK change something?
 * Would upgrading NiFi 1.8.0 to OpenJDK 9/11/13 be possible?

> Data Provenance SIGSEGV : Search Lucene Index-1
> ---
>
> Key: NIFI-6777
> URL: https://issues.apache.org/jira/browse/NIFI-6777
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0
> Environment: Linux CentOS Linux release 7.5.1804 (Core)
> Linux 3.10.0-862.11.6.el7.x86_64
> jre1.8.0_181-amd64 and jre1.8.0_221-amd64
>Reporter: Firenz
>Priority: Major
>
> NiFi crashes with a core dump when using data provenance (5 times in 1 
> month, on 4 different VMs).
>  
> It happens randomly when one of my data analysts is using data provenance, but 
> the thread is always "Search Lucene Index-1". My monitoring does not show any 
> memory or thread issues (overload...).
>  
> Any advice? (I cannot attach the hs_err_pid54305.log; token issue with Jira 
> ?)
>  
> Beginning of core dump : 
> {code:java}
> # JRE version: Java(TM) SE Runtime Environment (8.0_181-b13) (build 
> 1.8.0_181-b13)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.181-b13 mixed mode 
> linux-amd64 compressed oops)
> # Problematic frame:
> # J 33233 C2 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock()V (448 
> bytes) @ 0x7f9c52baabf9 [0x7f9
> c52baaa40+0x1b9]
> #
> # Core dump written. Default location: /appli/nifi-1.8.0/core or core.54305
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.java.com/bugreport/crash.jsp
> #---  T H R E A D  ---Current thread 
> (0x7f9c3c02b000):  JavaThread "Search Lucene Index-1" daemon 
> [_thread_in_Java, id=64766, stack(0x7f9b9a
> c4d000,0x7f9b9ad4e000)]siginfo: si_signo: 11 (SIGSEGV), si_code: 1 
> (SEGV_MAPERR), si_addr: 0x7f9b82019d0c
> {code}
> My Dataprovenance config :
> {code:java}
> # Persistent Provenance Repository Properties
> nifi.provenance.repository.directory.default=/data_nifi_provenance_repo/1.8.0
> nifi.provenance.repository.max.storage.time=480 hours
> nifi.provenance.repository.max.storage.size=9 GB
> nifi.provenance.repository.rollover.time=30 secs
> nifi.provenance.repository.rollover.size=100 MB
> nifi.provenance.repository.query.threads=2
> nifi.provenance.repository.index.threads=2
> nifi.provenance.repository.compress.on.rollover=true
> nifi.provenance.repository.always.sync=false
> nifi.provenance.repository.journal.count=16
> # Comma-separated list of fields. Fields that are not indexed will not be 
> searchable. Valid fields are:
> # EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, 
> AlternateIdentifierURI, Relationship, Details
> nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, 
> ProcessorID, Relationship
> # FlowFile Attributes that should be indexed and made searchable.  Some 
> examples to consider are filename, uuid, mime.type
> nifi.provenance.repository.indexed.attributes=external_id, batch_id, 
> property_id, property_code, http.headers.x-request-id
> # Large values for the shard size will result in more Java heap usage when 
> searching the Provenance Repository
> # but should provide better performance
> nifi.provenance.repository.index.shard.size=500 MB
> # Indicates the maximum length that a FlowFile attribute can be when 
> retrieving a Provenance Event from
> # the repository. If the length of any attribute exceeds this value, it will 
> be truncated when the event is retrieved.
> nifi.provenance.repository.max.attribute.length=65536
> nifi.provenance.repository.concurrent.merge.threads=2
> nifi.provenance.repository.warm.cache.frequency=1 hour
> {code}





[jira] [Commented] (NIFI-6777) Data Provenance SIGSEGV : Search Lucene Index-1

2019-10-16 Thread Firenz (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953028#comment-16953028
 ] 

Firenz commented on NIFI-6777:
--

Thanks for your answer. We are using G1GC, so we've just disabled it on our 
staging environment, and we'll roll it out to production next week. I'll let you know the 
result in 1 month, or at the next crash if any.

> Data Provenance SIGSEGV : Search Lucene Index-1
> ---
>
> Key: NIFI-6777
> URL: https://issues.apache.org/jira/browse/NIFI-6777
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0
> Environment: Linux CentOS Linux release 7.5.1804 (Core)
> Linux 3.10.0-862.11.6.el7.x86_64
jre1.8.0_181-amd64 and jre1.8.0_221-amd64
>Reporter: Firenz
>Priority: Major
>
> NiFi crashes with a core dump when using data provenance (5 times in 1 
> month, on 4 different VMs).
>  
> It happens randomly when one of my data analysts is using data provenance, but 
> the crashing thread is always "Search Lucene Index-1". My monitoring does not 
> show any memory or thread issues (overload, etc.).
>  
> Any advice? (I cannot attach hs_err_pid54305.log; a token issue with Jira?)
>  
> Beginning of core dump : 
> {code:java}
> # JRE version: Java(TM) SE Runtime Environment (8.0_181-b13) (build 
> 1.8.0_181-b13)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.181-b13 mixed mode 
> linux-amd64 compressed oops)
> # Problematic frame:
> # J 33233 C2 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock()V (448 
> bytes) @ 0x7f9c52baabf9 [0x7f9
> c52baaa40+0x1b9]
> #
> # Core dump written. Default location: /appli/nifi-1.8.0/core or core.54305
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.java.com/bugreport/crash.jsp
> #---  T H R E A D  ---Current thread 
> (0x7f9c3c02b000):  JavaThread "Search Lucene Index-1" daemon 
> [_thread_in_Java, id=64766, stack(0x7f9b9a
> c4d000,0x7f9b9ad4e000)]siginfo: si_signo: 11 (SIGSEGV), si_code: 1 
> (SEGV_MAPERR), si_addr: 0x7f9b82019d0c
> {code}
> My Dataprovenance config :
> {code:java}
> # Persistent Provenance Repository Properties
> nifi.provenance.repository.directory.default=/data_nifi_provenance_repo/1.8.0
> nifi.provenance.repository.max.storage.time=480 hours
> nifi.provenance.repository.max.storage.size=9 GB
> nifi.provenance.repository.rollover.time=30 secs
> nifi.provenance.repository.rollover.size=100 MB
> nifi.provenance.repository.query.threads=2
> nifi.provenance.repository.index.threads=2
> nifi.provenance.repository.compress.on.rollover=true
> nifi.provenance.repository.always.sync=false
> nifi.provenance.repository.journal.count=16
> # Comma-separated list of fields. Fields that are not indexed will not be 
> searchable. Valid fields are:
> # EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, 
> AlternateIdentifierURI, Relationship, Details
> nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, 
> ProcessorID, Relationship
> # FlowFile Attributes that should be indexed and made searchable.  Some 
> examples to consider are filename, uuid, mime.type
> nifi.provenance.repository.indexed.attributes=external_id, batch_id, 
> property_id, property_code, http.headers.x-request-id
> # Large values for the shard size will result in more Java heap usage when 
> searching the Provenance Repository
> # but should provide better performance
> nifi.provenance.repository.index.shard.size=500 MB
> # Indicates the maximum length that a FlowFile attribute can be when 
> retrieving a Provenance Event from
> # the repository. If the length of any attribute exceeds this value, it will 
> be truncated when the event is retrieved.
> nifi.provenance.repository.max.attribute.length=65536
> nifi.provenance.repository.concurrent.merge.threads=2
> nifi.provenance.repository.warm.cache.frequency=1 hour
> {code}





[jira] [Updated] (NIFI-6777) Data Provenance SIGSEGV : Search Lucene Index-1

2019-10-15 Thread Firenz (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Firenz updated NIFI-6777:
-
Description: 
NiFi crashes with a core dump when using data provenance (5 times in 1 month, 
on 4 different VMs).

 

It happens randomly when one of my data analysts is using data provenance, but 
the crashing thread is always "Search Lucene Index-1". My monitoring does not 
show any memory or thread issues (overload, etc.).

 

Any advice? (I cannot attach hs_err_pid54305.log; a token issue with Jira?)

 

Beginning of core dump : 
{code:java}
# JRE version: Java(TM) SE Runtime Environment (8.0_181-b13) (build 
1.8.0_181-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.181-b13 mixed mode linux-amd64 
compressed oops)
# Problematic frame:
# J 33233 C2 
org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock()V (448 
bytes) @ 0x7f9c52baabf9 [0x7f9
c52baaa40+0x1b9]
#
# Core dump written. Default location: /appli/nifi-1.8.0/core or core.54305
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
#---  T H R E A D  ---Current thread 
(0x7f9c3c02b000):  JavaThread "Search Lucene Index-1" daemon 
[_thread_in_Java, id=64766, stack(0x7f9b9a
c4d000,0x7f9b9ad4e000)]siginfo: si_signo: 11 (SIGSEGV), si_code: 1 
(SEGV_MAPERR), si_addr: 0x7f9b82019d0c
{code}
My Dataprovenance config :
{code:java}
# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=/data_nifi_provenance_repo/1.8.0
nifi.provenance.repository.max.storage.time=480 hours
nifi.provenance.repository.max.storage.size=9 GB
nifi.provenance.repository.rollover.time=30 secs
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=2
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false
nifi.provenance.repository.journal.count=16
# Comma-separated list of fields. Fields that are not indexed will not be 
searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, 
AlternateIdentifierURI, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, 
ProcessorID, Relationship
# FlowFile Attributes that should be indexed and made searchable.  Some 
examples to consider are filename, uuid, mime.type
nifi.provenance.repository.indexed.attributes=external_id, batch_id, 
property_id, property_code, http.headers.x-request-id
# Large values for the shard size will result in more Java heap usage when 
searching the Provenance Repository
# but should provide better performance
nifi.provenance.repository.index.shard.size=500 MB
# Indicates the maximum length that a FlowFile attribute can be when retrieving 
a Provenance Event from
# the repository. If the length of any attribute exceeds this value, it will be 
truncated when the event is retrieved.
nifi.provenance.repository.max.attribute.length=65536
nifi.provenance.repository.concurrent.merge.threads=2
nifi.provenance.repository.warm.cache.frequency=1 hour

{code}

  was:
NIFI crashes with a core dump when using data provenance (5 times in 1 months, 
4 differents VMs).

 

It happens randomly, but the thread is always "Search Lucene Index-1". My 
monitoring does not show any memory or thread issues (overload...)

 

Any advice ? (I cannot attach the hs_err_pid54305.log, token issue with Jira ?)

 

Beginning of core dump : 
{code:java}
# JRE version: Java(TM) SE Runtime Environment (8.0_181-b13) (build 
1.8.0_181-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.181-b13 mixed mode linux-amd64 
compressed oops)
# Problematic frame:
# J 33233 C2 
org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock()V (448 
bytes) @ 0x7f9c52baabf9 [0x7f9
c52baaa40+0x1b9]
#
# Core dump written. Default location: /appli/nifi-1.8.0/core or core.54305
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
#---  T H R E A D  ---Current thread 
(0x7f9c3c02b000):  JavaThread "Search Lucene Index-1" daemon 
[_thread_in_Java, id=64766, stack(0x7f9b9a
c4d000,0x7f9b9ad4e000)]siginfo: si_signo: 11 (SIGSEGV), si_code: 1 
(SEGV_MAPERR), si_addr: 0x7f9b82019d0c
{code}
My Dataprovenance config :
{code:java}
# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=/data_nifi_provenance_repo/1.8.0
nifi.provenance.repository.max.storage.time=480 hours
nifi.provenance.repository.max.storage.size=9 GB
nifi.provenance.repository.rollover.time=30 secs
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=2
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false

[jira] [Created] (NIFI-6777) Data Provenance SIGSEGV : Search Lucene Index-1

2019-10-15 Thread Firenz (Jira)
Firenz created NIFI-6777:


 Summary: Data Provenance SIGSEGV : Search Lucene Index-1
 Key: NIFI-6777
 URL: https://issues.apache.org/jira/browse/NIFI-6777
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.8.0
 Environment: Linux CentOS Linux release 7.5.1804 (Core)
Linux 3.10.0-862.11.6.el7.x86_64
jre1.8.0_181-amd64 et jre1.8.0_221-amd64
Reporter: Firenz


NiFi crashes with a core dump when using data provenance (5 times in 1 month, 
on 4 different VMs).

 

It happens randomly, but the thread is always "Search Lucene Index-1". My 
monitoring does not show any memory or thread issues (overload...)

 

Any advice? (I cannot attach hs_err_pid54305.log; a token issue with Jira?)

 

Beginning of core dump : 
{code:java}
# JRE version: Java(TM) SE Runtime Environment (8.0_181-b13) (build 
1.8.0_181-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.181-b13 mixed mode linux-amd64 
compressed oops)
# Problematic frame:
# J 33233 C2 
org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock()V (448 
bytes) @ 0x7f9c52baabf9 [0x7f9
c52baaa40+0x1b9]
#
# Core dump written. Default location: /appli/nifi-1.8.0/core or core.54305
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
#---  T H R E A D  ---Current thread 
(0x7f9c3c02b000):  JavaThread "Search Lucene Index-1" daemon 
[_thread_in_Java, id=64766, stack(0x7f9b9a
c4d000,0x7f9b9ad4e000)]siginfo: si_signo: 11 (SIGSEGV), si_code: 1 
(SEGV_MAPERR), si_addr: 0x7f9b82019d0c
{code}
My Dataprovenance config :
{code:java}
# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=/data_nifi_provenance_repo/1.8.0
nifi.provenance.repository.max.storage.time=480 hours
nifi.provenance.repository.max.storage.size=9 GB
nifi.provenance.repository.rollover.time=30 secs
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=2
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false
nifi.provenance.repository.journal.count=16
# Comma-separated list of fields. Fields that are not indexed will not be 
searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, 
AlternateIdentifierURI, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, 
ProcessorID, Relationship
# FlowFile Attributes that should be indexed and made searchable.  Some 
examples to consider are filename, uuid, mime.type
nifi.provenance.repository.indexed.attributes=external_id, batch_id, 
property_id, property_code, http.headers.x-request-id
# Large values for the shard size will result in more Java heap usage when 
searching the Provenance Repository
# but should provide better performance
nifi.provenance.repository.index.shard.size=500 MB
# Indicates the maximum length that a FlowFile attribute can be when retrieving 
a Provenance Event from
# the repository. If the length of any attribute exceeds this value, it will be 
truncated when the event is retrieved.
nifi.provenance.repository.max.attribute.length=65536
nifi.provenance.repository.concurrent.merge.threads=2
nifi.provenance.repository.warm.cache.frequency=1 hour

{code}





[jira] [Resolved] (NIFI-5964) InvokeHTTP, Dynamic Headers not sent (Cluster)

2019-09-20 Thread Firenz (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-5964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Firenz resolved NIFI-5964.
--
Resolution: Cannot Reproduce

Not reproduced since.

> InvokeHTTP, Dynamic Headers not sent (Cluster)
> --
>
> Key: NIFI-5964
> URL: https://issues.apache.org/jira/browse/NIFI-5964
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.8.0
> Environment: Oracle Jre1.8.0_181
> CentOS Linux release 7.5.1804 (Core)
> 3.10.0-862.11.6.el7.x86_64
> Zookeeper 3.4.12 (same VM, dedicated to NIFI, not the standalone of NIFI)
>Reporter: Firenz
>Priority: Critical
> Attachments: Step 1 - Bulletin.png, Step 1 - InvokeHTTP conf.png, 
> Step 2 - Bulletin.png, Step 2 - InvokeHTTP.png, Step 3 - Bulletin.png
>
>
> InvokeHTTP does not always send HTTP Headers specified with dynamic 
> Properties.
>  
> This issue has only been observed with a 3 nodes cluster (3 NIFI, 3 
> zookeeper, 3 VMs). This works fine on standalone NIFI (our others stagings). 
> Our flow is not connection-load-balanced. The first processor is "PRIMARY" 
> tagged.
>  
> Step 0 
> Everything works. We do some STOP, START. The InvokeHTTP is not modified by 
> anyone.
> Step 1
> We observe the HTTP Headers are not sent anymore. The flow files on the 3 
> servers are identicals unzipped (cksum). The flow run on our PRIMARY node 
> "014"
> Step 2
> We add manually a new dynamic header in the InvokeHTTP (x-test). (With our 
> node "019", our https UI is loadbalanced) 
> The new dymanic header is sent, not the initial ones.
> Step 3
> We offload the PRIMARY node. The traffic goes to a new PRIMARY node ("005"). 
> It works : the HTTP headers are sent.
>  
> Some pictures provided for each step.
>  
> Note : the headers "masked in the snapshots"  are string of 36 chars max 
> (UUID/secrets)
> So, depending on the node, the HTTP Header are not sent.
>  
>  
>  
>  





[jira] [Commented] (NIFI-6697) HandleHttpRequest - 500, stopping does not works

2019-09-20 Thread Firenz (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934661#comment-16934661
 ] 

Firenz commented on NIFI-6697:
--

Thanks for the hint. I may try to create a quick fix for NiFi 1.8.0 and test 
it next week.

> HandleHttpRequest - 500, stopping does not works
> 
>
> Key: NIFI-6697
> URL: https://issues.apache.org/jira/browse/NIFI-6697
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0
> Environment: CentOS Linux release 7.5.1804 (Core)
> Oracle jre1.8.0_181-amd64 
>Reporter: Firenz
>Priority: Critical
>
> My HandleHTTPRequest does not accept anymore POST messages, after some time, 
> returning 500 (port 10010).
> There is lot of file availaible on the machine (ulimit etc).
> Settings :
>  * 1 thread
>  * timerdriver
>  * 200ms
>  * 50 containerqueue (i'm pretty sure i never fill it due to low volumes of 
> calls).
> "Stop Processor" does not work : it starts the stop thread, never ending.
> "Terminate Thread" seems to work (the #2 on the processor disappears), but a 
> thread dump shows that *not* :  
> {code:java}
> /"qtp920046370-777080-acceptor-0@367cdcb6-ServerConnector@6962170f{HTTP/1.1,[http/1.1]}{0.0.0.0:10010}"
>  Id=777080 RUNNABLE
>  at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
>  at sun.nio.ch.ServerSocketChannelImpl.accept(Unknown Source)
> waiting on java.lang.Object@3b97aac8 at 
> sun.nio.ch.ServerSocketChannelImpl.accept(Unknown Source)
> waiting on java.lang.Object@3b97aac8 at 
> org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:369) at 
> org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:639)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680)
>  at java.lang.Thread.run(Unknown Source)
> {code}
>  
> The listener is still there : 
>  
> {code:java}
> netstat -an | grep 10010
>  tcp 0 0 0.0.0.0:10010 0.0.0.0:* LISTEN {code}
>  
> Restarting the processor (after the "terminate thread") does not work since 
> the port is already listening.
>  
> Current workaround : stop/start nifi.
>  





[jira] [Updated] (NIFI-6697) HandleHttpRequest - 500, stopping does not works

2019-09-20 Thread Firenz (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Firenz updated NIFI-6697:
-
Description: 
After some time, my HandleHttpRequest processor no longer accepts POST 
messages and returns 500 (port 10010).

There are plenty of file descriptors available on the machine (ulimit, etc.).

Settings:
 * 1 thread
 * timer-driven
 * 200 ms
 * container queue size 50 (I'm pretty sure it never fills up due to the low 
volume of calls)

"Stop Processor" does not work: it starts the stop thread, which never ends.

"Terminate Thread" seems to work (the #2 on the processor disappears), but a 
thread dump shows it does *not*:
{code:java}
/"qtp920046370-777080-acceptor-0@367cdcb6-ServerConnector@6962170f{HTTP/1.1,[http/1.1]}{0.0.0.0:10010}"
 Id=777080 RUNNABLE
 at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
 at sun.nio.ch.ServerSocketChannelImpl.accept(Unknown Source)

waiting on java.lang.Object@3b97aac8 at 
sun.nio.ch.ServerSocketChannelImpl.accept(Unknown Source)
waiting on java.lang.Object@3b97aac8 at 
org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:369) at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:639)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680) 
at java.lang.Thread.run(Unknown Source)

{code}
 

The listener is still there : 

 
{code:java}
netstat -an | grep 10010
 tcp 0 0 0.0.0.0:10010 0.0.0.0:* LISTEN {code}
 

Restarting the processor (after the "Terminate Thread") does not work since 
the port is still being listened on.

 

Current workaround: stop/start NiFi.
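The surviving acceptor is consistent with how blocking I/O works on the JVM: interrupting a thread parked in a classic ServerSocket.accept() does not unblock it, so the port stays held until the socket itself is closed. A self-contained sketch of that behavior (illustrative only, not Jetty or NiFi code):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class AcceptorDemo {
    // Returns true when the listener is still open after the acceptor thread
    // was interrupted, i.e. interrupting alone does not free the port.
    static boolean stillListeningAfterInterrupt() {
        try {
            ServerSocket server = new ServerSocket(0); // bind any free port
            Thread acceptor = new Thread(() -> {
                try {
                    server.accept(); // blocks; classic socket I/O ignores interrupt
                } catch (IOException ignored) {
                    // thrown once server.close() aborts the accept
                }
            });
            acceptor.start();
            acceptor.interrupt();
            Thread.sleep(200); // give the interrupt time to (not) take effect
            boolean stillOpen = !server.isClosed() && acceptor.isAlive();
            server.close();    // closing the socket is what releases the port
            acceptor.join(2000);
            return stillOpen;
        } catch (IOException | InterruptedException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(stillListeningAfterInterrupt()); // true
    }
}
```

This matches the symptom above: "Terminate Thread" marks the NiFi task as done, but the Jetty acceptor keeps the socket bound until the connector is actually stopped.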

 

  was:
My HandleHTTPRequest does not accept anymore POST messages, after some time, 
returning 500.

 

Settings : 1thread, timerdriver, 200ms, 50 containerqueue.

 

Stop does not work. Terminate thread seems to work, but a thread dump show that 
not : 

"qtp920046370-777080-acceptor-0@367cdcb6-ServerConnector@6962170f\{HTTP/1.1,[http/1.1]}{0.0.0.0:10010}"
 Id=777080 RUNNABLE
 at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
 at sun.nio.ch.ServerSocketChannelImpl.accept(Unknown Source)
 - waiting on java.lang.Object@3b97aac8
 at sun.nio.ch.ServerSocketChannelImpl.accept(Unknown Source)
 - waiting on java.lang.Object@3b97aac8
 at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:369)
 at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:639)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680)
 at java.lang.Thread.run(Unknown Source)

 

The listener is still there : 

netstat -an | grep 10010
tcp 0 0 0.0.0.0:10010 0.0.0.0:* LISTEN

 

Restart the processor (after a terminatethread) does not work since the port is 
already listening.

 

Current workaround : stop/start

 


> HandleHttpRequest - 500, stopping does not works
> 
>
> Key: NIFI-6697
> URL: https://issues.apache.org/jira/browse/NIFI-6697
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0
> Environment: CentOS Linux release 7.5.1804 (Core)
> Oracle jre1.8.0_181-amd64 
>Reporter: Firenz
>Priority: Critical
>
> My HandleHTTPRequest does not accept anymore POST messages, after some time, 
> returning 500 (port 10010).
> There is lot of file availaible on the machine (ulimit etc).
> Settings :
>  * 1 thread
>  * timerdriver
>  * 200ms
>  * 50 containerqueue (i'm pretty sure i never fill it due to low volumes of 
> calls).
> "Stop Processor" does not work : it starts the stop thread, never ending.
> "Terminate Thread" seems to work (the #2 on the processor disappears), but a 
> thread dump shows that *not* :  
> {code:java}
> /"qtp920046370-777080-acceptor-0@367cdcb6-ServerConnector@6962170f{HTTP/1.1,[http/1.1]}{0.0.0.0:10010}"
>  Id=777080 RUNNABLE
>  at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
>  at sun.nio.ch.ServerSocketChannelImpl.accept(Unknown Source)
> waiting on java.lang.Object@3b97aac8 at 
> sun.nio.ch.ServerSocketChannelImpl.accept(Unknown Source)
> waiting on java.lang.Object@3b97aac8 at 
> org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:369) at 
> org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:639)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680)
>  at java.lang.Thread.run(Unknown Source)
> {code}
>  
> The listener is still there : 
>  
> {code:java}
> netstat -an | grep 10010
>  tcp 0 0 0.0.0.0:10010 0.0.0.0:* LISTEN {code}
>  
> Restarting the processor (after 

[jira] [Created] (NIFI-6697) HandleHttpRequest - 500, stopping does not works

2019-09-20 Thread Firenz (Jira)
Firenz created NIFI-6697:


 Summary: HandleHttpRequest - 500, stopping does not works
 Key: NIFI-6697
 URL: https://issues.apache.org/jira/browse/NIFI-6697
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.8.0
 Environment: CentOS Linux release 7.5.1804 (Core)
Oracle jre1.8.0_181-amd64 
Reporter: Firenz


After some time, my HandleHttpRequest processor no longer accepts POST 
messages and returns 500.

 

Settings: 1 thread, timer-driven, 200 ms, container queue size 50.

 

Stop does not work. Terminate thread seems to work, but a thread dump shows it 
does not:

"qtp920046370-777080-acceptor-0@367cdcb6-ServerConnector@6962170f\{HTTP/1.1,[http/1.1]}{0.0.0.0:10010}"
 Id=777080 RUNNABLE
 at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
 at sun.nio.ch.ServerSocketChannelImpl.accept(Unknown Source)
 - waiting on java.lang.Object@3b97aac8
 at sun.nio.ch.ServerSocketChannelImpl.accept(Unknown Source)
 - waiting on java.lang.Object@3b97aac8
 at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:369)
 at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:639)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680)
 at java.lang.Thread.run(Unknown Source)

 

The listener is still there : 

netstat -an | grep 10010
tcp 0 0 0.0.0.0:10010 0.0.0.0:* LISTEN

 

Restarting the processor (after a terminate-thread) does not work since the 
port is still being listened on.

 

Current workaround: stop/start NiFi.

 





[jira] [Created] (NIFI-6233) HandleHttpRequest -> File Leak if Port already Bind

2019-04-23 Thread Firenz (JIRA)
Firenz created NIFI-6233:


 Summary: HandleHttpRequest -> File Leak if Port already Bind
 Key: NIFI-6233
 URL: https://issues.apache.org/jira/browse/NIFI-6233
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.8.0
 Environment: Linux Centos7 3.10.0-862.11.6.el7.x86_64
jre1.8.0_181-amd64
Reporter: Firenz


When you start a HandleHttpRequest processor and the port is already bound, 
you get errors with bulletins, as expected.

 

However, this case causes a file descriptor leak (TCP sockets). You can check 
it with netstat -an | grep TCP or lsof.

 

(In our case, the whole 3-node cluster became unavailable; the 65000 file 
descriptor limit was reached in less than 1 hour.)
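A defensive bind check would report the conflict without leaking descriptors. A minimal sketch in plain Java (an assumption for illustration, not the actual HandleHttpRequest/Jetty code):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortCheck {
    // Probes whether a TCP port can be bound right now. The
    // try-with-resources block guarantees the probe socket is closed,
    // so the check itself cannot leak a file descriptor.
    static boolean isPortFree(int port) {
        try (ServerSocket probe = new ServerSocket(port)) {
            return true;
        } catch (IOException alreadyBound) {
            return false;
        }
    }

    public static void main(String[] args) {
        // port 0 asks the OS for any free ephemeral port, so this prints true
        System.out.println(isPortFree(0));
    }
}
```

Running such a probe before scheduling the processor would turn the retry loop described above into a single clear validation error.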



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5964) InvokeHTTP, Dynamic Headers not sent (Cluster)

2019-01-18 Thread Firenz (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16746454#comment-16746454
 ] 

Firenz commented on NIFI-5964:
--

Additional info: it happened with two different InvokeHTTP flows, requesting 
different URLs. What the URLs have in common is read timeout issues (slow APIs).

No issues when using NiFi 1.4.0 (but we had no cluster).

> InvokeHTTP, Dynamic Headers not sent (Cluster)
> --
>
> Key: NIFI-5964
> URL: https://issues.apache.org/jira/browse/NIFI-5964
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.8.0
> Environment: Oracle Jre1.8.0_181
> CentOS Linux release 7.5.1804 (Core)
> 3.10.0-862.11.6.el7.x86_64
> Zookeeper 3.4.12 (same VM, dedicated to NIFI, not the standalone of NIFI)
>Reporter: Firenz
>Priority: Critical
> Attachments: Step 1 - Bulletin.png, Step 1 - InvokeHTTP conf.png, 
> Step 2 - Bulletin.png, Step 2 - InvokeHTTP.png, Step 3 - Bulletin.png
>
>
> InvokeHTTP does not always send HTTP Headers specified with dynamic 
> Properties.
>  
> This issue has only been observed with a 3 nodes cluster (3 NIFI, 3 
> zookeeper, 3 VMs). This works fine on standalone NIFI (our others stagings). 
> Our flow is not connection-load-balanced. The first processor is "PRIMARY" 
> tagged.
>  
> Step 0 
> Everything works. We do some STOP, START. The InvokeHTTP is not modified by 
> anyone.
> Step 1
> We observe the HTTP Headers are not sent anymore. The flow files on the 3 
> servers are identicals unzipped (cksum). The flow run on our PRIMARY node 
> "014"
> Step 2
> We add manually a new dynamic header in the InvokeHTTP (x-test). (With our 
> node "019", our https UI is loadbalanced) 
> The new dymanic header is sent, not the initial ones.
> Step 3
> We offload the PRIMARY node. The traffic goes to a new PRIMARY node ("005"). 
> It works : the HTTP headers are sent.
>  
> Some pictures provided for each step.
>  
> Note : the headers "masked in the snapshots"  are string of 36 chars max 
> (UUID/secrets)
> So, depending on the node, the HTTP Header are not sent.
>  
>  
>  
>  





[jira] [Created] (NIFI-5964) InvokeHTTP, Dynamic Headers not sent (Cluster)

2019-01-18 Thread Firenz (JIRA)
Firenz created NIFI-5964:


 Summary: InvokeHTTP, Dynamic Headers not sent (Cluster)
 Key: NIFI-5964
 URL: https://issues.apache.org/jira/browse/NIFI-5964
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.8.0
 Environment: Oracle Jre1.8.0_181
CentOS Linux release 7.5.1804 (Core)
3.10.0-862.11.6.el7.x86_64
Zookeeper 3.4.12 (same VM, dedicated to NIFI, not the standalone of NIFI)

Reporter: Firenz
 Attachments: Step 1 - Bulletin.png, Step 1 - InvokeHTTP conf.png, Step 
2 - Bulletin.png, Step 2 - InvokeHTTP.png, Step 3 - Bulletin.png

InvokeHTTP does not always send HTTP Headers specified with dynamic Properties.

 

This issue has only been observed with a 3-node cluster (3 NiFi, 3 ZooKeeper, 
3 VMs). It works fine on standalone NiFi (our other staging environments). Our 
flow is not connection-load-balanced. The first processor is tagged "PRIMARY".

 

Step 0 

Everything works. We do some STOP, START. The InvokeHTTP is not modified by 
anyone.

Step 1

We observe the HTTP headers are no longer sent. The flow files on the 3 
servers are identical once unzipped (cksum). The flow runs on our PRIMARY node 
"014".

Step 2

We manually add a new dynamic header in the InvokeHTTP (x-test). (Via our node 
"019"; our HTTPS UI is load-balanced.)

The new dynamic header is sent, but not the initial ones.

Step 3

We offload the PRIMARY node. The traffic goes to a new PRIMARY node ("005"). 
It works: the HTTP headers are sent.

 

Pictures are provided for each step.

 

Note: the headers masked in the snapshots are strings of 36 chars max 
(UUIDs/secrets).

So, depending on the node, the HTTP headers are not sent.

 

 

 

 





[jira] [Updated] (NIFI-5964) InvokeHTTP, Dynamic Headers not sent (Cluster)

2019-01-18 Thread Firenz (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Firenz updated NIFI-5964:
-
Attachment: Step 1 - Bulletin.png
Step 1 - InvokeHTTP conf.png
Step 2 - Bulletin.png
Step 2 - InvokeHTTP.png
Step 3 - Bulletin.png

> InvokeHTTP, Dynamic Headers not sent (Cluster)
> --
>
> Key: NIFI-5964
> URL: https://issues.apache.org/jira/browse/NIFI-5964
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.8.0
> Environment: Oracle Jre1.8.0_181
> CentOS Linux release 7.5.1804 (Core)
> 3.10.0-862.11.6.el7.x86_64
> Zookeeper 3.4.12 (same VM, dedicated to NIFI, not the standalone of NIFI)
>Reporter: Firenz
>Priority: Critical
> Attachments: Step 1 - Bulletin.png, Step 1 - InvokeHTTP conf.png, 
> Step 2 - Bulletin.png, Step 2 - InvokeHTTP.png, Step 3 - Bulletin.png
>
>
> InvokeHTTP does not always send HTTP Headers specified with dynamic 
> Properties.
>  
> This issue has only been observed with a 3 nodes cluster (3 NIFI, 3 
> zookeeper, 3 VMs). This works fine on standalone NIFI (our others stagings). 
> Our flow is not connection-load-balanced. The first processor is "PRIMARY" 
> tagged.
>  
> Step 0 
> Everything works. We do some STOP, START. The InvokeHTTP is not modified by 
> anyone.
> Step 1
> We observe the HTTP Headers are not sent anymore. The flow files on the 3 
> servers are identicals unzipped (cksum). The flow run on our PRIMARY node 
> "014"
> Step 2
> We add manually a new dynamic header in the InvokeHTTP (x-test). (With our 
> node "019", our https UI is loadbalanced) 
> The new dymanic header is sent, not the initial ones.
> Step 3
> We offload the PRIMARY node. The traffic goes to a new PRIMARY node ("005"). 
> It works : the HTTP headers are sent.
>  
> Some pictures provided for each step.
>  
> Note : the headers "masked in the snapshots"  are string of 36 chars max 
> (UUID/secrets)
> So, depending on the node, the HTTP Header are not sent.
>  
>  
>  
>  





[jira] [Updated] (NIFI-5825) ConsumeJMS - Not using Clientid / durable does not works

2018-11-15 Thread Firenz (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Firenz updated NIFI-5825:
-
Description: 
I cannot make the ConsumeJMS 1.8.0 processor work as a "durable consumer". I 
set the client id, but it is misused. I'm using ActiveMQ (AmazonMQ).

 

I set (see pictures for full details):
 * "sub" in Connection Client Id
 * "true" in durable subscription

Result in ActiveMQ:
 * NiFi creates the subscriber in "Durable Topic Subscribers" as "sub-" plus a 
counter (like "sub-4" instead of "sub").

 

 

I think the problem may be in AbstractJMSProcessor.java/buildTargetResource, 
and we should have something like this:
{code:java}
String clientId = 
context.getProperty(CLIENT_ID).evaluateAttributeExpressions().getValue();

if (clientId != null) {
    if (!durable) {
        // we have to generate a unique, dynamic, temporary client id
        clientId = clientId + "-" + clientIdCounter.getAndIncrement();
    }
    cachingFactory.setClientId(clientId);
}
{code}
 
 But I don't know how to access this "durable" property in the abstract class 
during the onTrigger phase...

 

 

  was:
I cannot make the ConsumeJMS 1.8.0 processor working as a durable. I set the 
client id, but it half-used. I'm using ActiveMQ (AmazonMQ).

 

I set :
 * "sub" in Connection Client Id
 * "true" in durable subscription

Result in ActiveMQ :
 * NIFI creates in "Durable Topic Subscribers" "sub-" (like "sub-4" 
instead of "sub").

!image-2018-11-15-23-21-51-988.png!!image-2018-11-15-23-19-49-175.png!

I think the problem may be in AbstractJMSProcessor.java/buildTargetResource and 
we should have something like that : 
{code:java}
String clientId = 
context.getProperty(CLIENT_ID).evaluateAttributeExpressions().getValue();

if (clientId != null) {
if (!durable){ 
// we have to generate a unique, dynamique, temp, client id
clientId = clientId + "-" + clientIdCounter.getAndIncrement();
}
cachingFactory.setClientId(clientId);
}
{code}
 
 But dunno how to get this "durable" prop in the Abstract during the 
"OnTrigger" stuff...

 

 


> ConsumeJMS - Not using Clientid / durable does not works
> 
>
> Key: NIFI-5825
> URL: https://issues.apache.org/jira/browse/NIFI-5825
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0
>Reporter: Firenz
>Priority: Major
> Attachments: amq_consume.JPG, nifi_consumejms.JPG
>
>
> I cannot make the ConsumeJMS 1.8.0 processor work as a "durable consumer". I 
> set the client id, but it mis-used. I'm using ActiveMQ (AmazonMQ).
>  
> I set (check pictures for full details) : 
>  * "sub" in Connection Client Id
>  * "true" in durable subscription
> Result in ActiveMQ :
>  * NIFI creates in "Durable Topic Subscribers" "sub-" (like 
> "sub-4" instead of "sub").
>  
>  
> I think the problem may be in AbstractJMSProcessor.java/buildTargetResource 
> and we should have something like that : 
> {code:java}
> String clientId = 
> context.getProperty(CLIENT_ID).evaluateAttributeExpressions().getValue();
> if (clientId != null) {
> if (!durable){ 
> // we have to generate a unique, dynamique, temp, client id
> clientId = clientId + "-" + clientIdCounter.getAndIncrement();
> }
> cachingFactory.setClientId(clientId);
> }
> {code}
>  
>  But dunno how to get this "durable" prop in the Abstract during the 
> "OnTrigger" stuff...
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-5825) ConsumeJMS - Not using Clientid / durable does not works

2018-11-15 Thread Firenz (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Firenz updated NIFI-5825:
-
Attachment: amq_consume.JPG

> ConsumeJMS - Not using Clientid / durable does not works
> 
>
> Key: NIFI-5825
> URL: https://issues.apache.org/jira/browse/NIFI-5825
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0
>Reporter: Firenz
>Priority: Major
> Attachments: amq_consume.JPG, nifi_consumejms.JPG
>
>
> I cannot make the ConsumeJMS 1.8.0 processor working as a durable. I set the 
> client id, but it half-used. I'm using ActiveMQ (AmazonMQ).
>  
> I set :
>  * "sub" in Connection Client Id
>  * "true" in durable subscription
> Result in ActiveMQ :
>  * NIFI creates in "Durable Topic Subscribers" "sub-" (like 
> "sub-4" instead of "sub").
> !image-2018-11-15-23-21-51-988.png!!image-2018-11-15-23-19-49-175.png!
> I think the problem may be in AbstractJMSProcessor.java/buildTargetResource 
> and we should have something like that : 
> {code:java}
> String clientId = 
> context.getProperty(CLIENT_ID).evaluateAttributeExpressions().getValue();
> if (clientId != null) {
> if (!durable){ 
> // we have to generate a unique, dynamique, temp, client id
> clientId = clientId + "-" + clientIdCounter.getAndIncrement();
> }
> cachingFactory.setClientId(clientId);
> }
> {code}
>  
>  But dunno how to get this "durable" prop in the Abstract during the 
> "OnTrigger" stuff...
>  
>  





[jira] [Updated] (NIFI-5825) ConsumeJMS - Not using Clientid / durable does not works

2018-11-15 Thread Firenz (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-5825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Firenz updated NIFI-5825:
-
Attachment: nifi_consumejms.JPG

> ConsumeJMS - Not using Clientid / durable does not works
> 
>
> Key: NIFI-5825
> URL: https://issues.apache.org/jira/browse/NIFI-5825
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0
>Reporter: Firenz
>Priority: Major
> Attachments: amq_consume.JPG, nifi_consumejms.JPG
>
>
> I cannot make the ConsumeJMS 1.8.0 processor working as a durable. I set the 
> client id, but it half-used. I'm using ActiveMQ (AmazonMQ).
>  
> I set :
>  * "sub" in Connection Client Id
>  * "true" in durable subscription
> Result in ActiveMQ :
>  * NIFI creates in "Durable Topic Subscribers" "sub-" (like 
> "sub-4" instead of "sub").
> !image-2018-11-15-23-21-51-988.png!!image-2018-11-15-23-19-49-175.png!
> I think the problem may be in AbstractJMSProcessor.java/buildTargetResource 
> and we should have something like that : 
> {code:java}
> String clientId = 
> context.getProperty(CLIENT_ID).evaluateAttributeExpressions().getValue();
> if (clientId != null) {
> if (!durable){ 
> // we have to generate a unique, dynamique, temp, client id
> clientId = clientId + "-" + clientIdCounter.getAndIncrement();
> }
> cachingFactory.setClientId(clientId);
> }
> {code}
>  
>  But dunno how to get this "durable" prop in the Abstract during the 
> "OnTrigger" stuff...
>  
>  





[jira] [Created] (NIFI-5825) ConsumeJMS - Not using Clientid / durable does not works

2018-11-15 Thread Firenz (JIRA)
Firenz created NIFI-5825:


 Summary: ConsumeJMS - Not using Clientid / durable does not works
 Key: NIFI-5825
 URL: https://issues.apache.org/jira/browse/NIFI-5825
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.8.0
Reporter: Firenz


I cannot make the ConsumeJMS 1.8.0 processor work as a durable subscriber. I set the 
client id, but it is only half-used. I'm using ActiveMQ (AmazonMQ).

 

I set :
 * "sub" in Connection Client Id
 * "true" in durable subscription

Result in ActiveMQ :
 * NiFi creates entries in "Durable Topic Subscribers" named "sub-" (e.g. "sub-4" 
instead of "sub").

!image-2018-11-15-23-21-51-988.png!!image-2018-11-15-23-19-49-175.png!

I think the problem may be in AbstractJMSProcessor.java/buildTargetResource and 
we should have something like that : 
{code:java}
String clientId = 
context.getProperty(CLIENT_ID).evaluateAttributeExpressions().getValue();

if (clientId != null) {
    if (!durable) {
        // we have to generate a unique, dynamic, temporary client id
        clientId = clientId + "-" + clientIdCounter.getAndIncrement();
    }
    cachingFactory.setClientId(clientId);
}
{code}
 
 But I don't know how to access this "durable" property in the abstract class 
during the "OnTrigger" processing...

 

 





[jira] [Commented] (NIFI-5536) Add EL support for password on AMQP processors

2018-08-21 Thread Firenz (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16586984#comment-16586984
 ] 

Firenz commented on NIFI-5536:
--

Shouldn't a first iteration be to enable EL on the password property, like the 
FTP/HTTP processor families already do, for consistency?

> Add EL support for password on AMQP processors
> --
>
> Key: NIFI-5536
> URL: https://issues.apache.org/jira/browse/NIFI-5536
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Corey Fritz
>Priority: Major
>
> NIFI-5489 added EL support to the host, port, virtual host, and user 
> properties of AMQP processors. Not sure why password was not included. We 
> have a use case where sensitive values (passwords) are set as environment 
> variables on our Docker containers and then those variables are referenced by 
> name using EL expressions in our processors and controller services. Flow 
> authors then have no need or means to know what those sensitive values are.





[jira] [Resolved] (NIFI-5046) Nifi toolkit - "server" - Bad check on listening port

2018-04-06 Thread Firenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-5046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Firenz resolved NIFI-5046.
--
Resolution: Not A Bug

The port specified with "--port" is overridden by the one specified in the 
"--configJsonIn" file.

> Nifi toolkit - "server" - Bad check on listening port
> -
>
> Key: NIFI-5046
> URL: https://issues.apache.org/jira/browse/NIFI-5046
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.5.0
> Environment: Linux Centos7.4 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu 
> Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Firenz
>Priority: Major
>
> In "server" mode , it seems the Listening Port check is done on the default 
> port (8443) and not the specified port (with the -p option).
>  
> {code:java}
> 2018/04/06 12:11:42 INFO [main] org.eclipse.jetty.server.Server: 
> jetty-9.4.3.v20170317
> Service server error: Adresse déjà utilisée
> {code}
>  





[jira] [Commented] (NIFI-5046) Nifi toolkit - "server" - Bad check on listening port

2018-04-06 Thread Firenz (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-5046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428238#comment-16428238
 ] 

Firenz commented on NIFI-5046:
--

My mistake: the port specified with "--port" is just overridden by the one 
specified in "--configJsonIn".

> Nifi toolkit - "server" - Bad check on listening port
> -
>
> Key: NIFI-5046
> URL: https://issues.apache.org/jira/browse/NIFI-5046
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.5.0
> Environment: Linux Centos7.4 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu 
> Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Firenz
>Priority: Major
>
> In "server" mode , it seems the Listening Port check is done on the default 
> port (8443) and not the specified port (with the -p option).
>  
> {code:java}
> 2018/04/06 12:11:42 INFO [main] org.eclipse.jetty.server.Server: 
> jetty-9.4.3.v20170317
> Service server error: Adresse déjà utilisée
> {code}
>  





[jira] [Created] (NIFI-5046) Nifi toolkit - "server" - Bad check on listening port

2018-04-06 Thread Firenz (JIRA)
Firenz created NIFI-5046:


 Summary: Nifi toolkit - "server" - Bad check on listening port
 Key: NIFI-5046
 URL: https://issues.apache.org/jira/browse/NIFI-5046
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.5.0
 Environment: Linux Centos7.4 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 
25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Reporter: Firenz


In "server" mode , it seems the Listening Port check is done on the default 
port (8443) and not the specified port (with the -p option).

 
{code:java}
2018/04/06 12:11:42 INFO [main] org.eclipse.jetty.server.Server: 
jetty-9.4.3.v20170317
Service server error: Adresse déjà utilisée
{code}
(The French error message means "Address already in use".)
 



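The resolved behaviour, with the port from the "--configJsonIn" file winning over 
the "--port" command-line flag, can be sketched as follows. This only mirrors the 
precedence reported in the issue; the function name and JSON key are illustrative 
assumptions, not the tls-toolkit's actual code.

```python
import json

# Sketch of the reported precedence: config-file port > CLI flag > default 8443.
# Illustrative only; not the Apache NiFi tls-toolkit implementation.
def effective_port(cli_port, config_json, default=8443):
    config = json.loads(config_json) if config_json else {}
    if "port" in config:       # value from --configJsonIn takes precedence
        return config["port"]
    if cli_port is not None:   # then the explicit --port / -p flag
        return cli_port
    return default             # finally the built-in default

print(effective_port(9443, '{"port": 8443}'))  # config wins -> 8443
print(effective_port(9443, None))              # no config -> 9443
```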


[jira] [Created] (NIFI-4500) Alternative UI - Table Way

2017-10-18 Thread Firenz (JIRA)
Firenz created NIFI-4500:


 Summary: Alternative UI - Table Way
 Key: NIFI-4500
 URL: https://issues.apache.org/jira/browse/NIFI-4500
 Project: Apache NiFi
  Issue Type: Wish
  Components: Core UI
Affects Versions: 1.4.0
Reporter: Firenz
Priority: Minor


First of all, I appreciate the fact that Dev and Ops share the same UI.

But if we use a lot of NiFi Process Groups at the root level, the standard UI provides 
a bad user experience, mainly for the ops teams. Using sub-Process Groups just 
to organize the UI does not seem like a good practice to me.

It would be great to have an alternate web UI in the form of a table with these 
columns: 

 * Process[Group] bulletin icon
 * Process[Group] name
 * Process[Group] count of connections (?)
 * Process[Group] state (START/RUNNING/DISABLE), with options to 
start/stop/disable...
 * Process[Group] comment
 * Process[Group] version (there is an issue about this)
 * Process[Group] queue state (with an option to fully clear it; there is an 
issue about this)
 * Process[Group] stats
 * Link to data provenance
 * Link to related Controllers?
 * Link to Variables?




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)