[jira] [Comment Edited] (NIFI-13108) Dependency hygiene - commons codec 1.17

2024-04-27 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841526#comment-17841526
 ] 

Joe Witt edited comment on NIFI-13108 at 4/27/24 7:05 PM:
--

{noformat}
** [INFO]   ch.qos.logback:*  1.5.5 -> 1.5.6
** [INFO]   com.amazonaws:* ... 1.12.686 -> 1.12.710
** [INFO]   com.github.luben:zstd-jni . 1.5.6-1 -> 1.5.6-3

not direct
[INFO]   com.github.spotbugs:spotbugs-annotations .. 4.8.3 -> 4.8.4

** [INFO]   commons-cli:commons-cli ... 1.6.0 -> 1.7.0
** [INFO]   commons-codec:commons-codec . 1.16.1 -> 1.17.0
** [INFO]   io.fabric8:* ... 6.10.0 -> 6.12.1
** [INFO]   io.swagger.core.v3:swagger-annotations .. 2.2.20 -> 2.2.21
** [INFO]   jakarta.xml.bind:jakarta.xml.bind-api . 4.0.1 -> 4.0.2
** [INFO]   org.apache.commons:commons-text . 1.11.0 -> 1.12.0
** [INFO]   org.apache.logging.log4j:* . 2.23.0 -> 2.23.1

not direct
[INFO]   org.apache.maven.plugin-tools:maven-plugin-annotations ...  3.10.2 -> 3.12.0

** [INFO]   org.glassfish.jaxb:jaxb-runtime ... 4.0.4 -> 4.0.5
** [INFO]   org.glassfish.jersey.*:* . 3.1.4 -> 3.1.6

not changing for now
[INFO]   org.hamcrest:hamcrest-core  1.3 -> 2.2

** [INFO]   org.jsoup:jsoup . 1.17.1 -> 1.17.2
** [INFO]   org.junit.platform:junit-platform-commons  1.10.0 -> 1.10.2
** [INFO]   org.mockito:* .. 5.8.0 -> 5.11.0
** [INFO]   org.testcontainers:* . 1.19.4 -> 1.19.7
** [INFO]   software.amazon.awssdk:* . 2.25.16 -> 2.25.40
** [INFO]   com.puppycrawl.tools:checkstyle ... 9.3 -> 10.15.0
** [INFO]   org.ow2.asm:asm ... 9.6 -> 9.7
{noformat}



was (Author: joewitt):
{noformat}
** [INFO]   ch.qos.logback:*  1.5.5 -> 1.5.6
** [INFO]   com.amazonaws:* ... 1.12.686 -> 1.12.710
** [INFO]   com.github.luben:zstd-jni . 1.5.6-1 -> 1.5.6-3

not direct
[INFO]   com.github.spotbugs:spotbugs-annotations .. 4.8.3 -> 4.8.4

** [INFO]   commons-cli:commons-cli ... 1.6.0 -> 1.7.0
** [INFO]   commons-codec:commons-codec . 1.16.1 -> 1.17.0
** [INFO]   io.fabric8:* ... 6.10.0 -> 6.12.1
** [INFO]   io.swagger.core.v3:swagger-annotations .. 2.2.20 -> 2.2.21
** [INFO]   jakarta.xml.bind:jakarta.xml.bind-api . 4.0.1 -> 4.0.2
** [INFO]   org.apache.commons:commons-text . 1.11.0 -> 1.12.0
** [INFO]   org.apache.logging.log4j:* . 2.23.0 -> 2.23.1

not direct
[INFO]   org.apache.maven.plugin-tools:maven-plugin-annotations ...  3.10.2 -> 3.12.0

** [INFO]   org.glassfish.jaxb:jaxb-runtime ... 4.0.4 -> 4.0.5
** [INFO]   org.glassfish.jersey.*:* . 3.1.4 -> 3.1.6

not changing for now
[INFO]   org.hamcrest:hamcrest-core  1.3 -> 2.2

** [INFO]   org.jsoup:jsoup . 1.17.1 -> 1.17.2
** [INFO]   org.junit.platform:junit-platform-commons  1.10.0 -> 1.10.2

not changing for now
[INFO]   org.mockito:* .. 5.8.0 -> 5.11.0

** [INFO]   org.testcontainers:* . 1.19.4 -> 1.19.7
** [INFO]   software.amazon.awssdk:* . 2.25.16 -> 2.25.40
** [INFO]   com.puppycrawl.tools:checkstyle ... 9.3 -> 10.15.0
** [INFO]   org.ow2.asm:asm ... 9.6 -> 9.7
{noformat}


> Dependency hygiene  - commons codec 1.17
> 
>
> Key: NIFI-13108
> URL: https://issues.apache.org/jira/browse/NIFI-13108
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
> Fix For: 2.0.0-M3
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (NIFI-13108) Dependency hygiene - commons codec 1.17

2024-04-27 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841526#comment-17841526
 ] 

Joe Witt edited comment on NIFI-13108 at 4/27/24 7:04 PM:
--

{noformat}
** [INFO]   ch.qos.logback:*  1.5.5 -> 1.5.6
** [INFO]   com.amazonaws:* ... 1.12.686 -> 1.12.710
** [INFO]   com.github.luben:zstd-jni . 1.5.6-1 -> 1.5.6-3

not direct
[INFO]   com.github.spotbugs:spotbugs-annotations .. 4.8.3 -> 4.8.4

** [INFO]   commons-cli:commons-cli ... 1.6.0 -> 1.7.0
** [INFO]   commons-codec:commons-codec . 1.16.1 -> 1.17.0
** [INFO]   io.fabric8:* ... 6.10.0 -> 6.12.1
** [INFO]   io.swagger.core.v3:swagger-annotations .. 2.2.20 -> 2.2.21
** [INFO]   jakarta.xml.bind:jakarta.xml.bind-api . 4.0.1 -> 4.0.2
** [INFO]   org.apache.commons:commons-text . 1.11.0 -> 1.12.0
** [INFO]   org.apache.logging.log4j:* . 2.23.0 -> 2.23.1

not direct
[INFO]   org.apache.maven.plugin-tools:maven-plugin-annotations ...  3.10.2 -> 3.12.0

** [INFO]   org.glassfish.jaxb:jaxb-runtime ... 4.0.4 -> 4.0.5
** [INFO]   org.glassfish.jersey.*:* . 3.1.4 -> 3.1.6

not changing for now
[INFO]   org.hamcrest:hamcrest-core  1.3 -> 2.2

** [INFO]   org.jsoup:jsoup . 1.17.1 -> 1.17.2
** [INFO]   org.junit.platform:junit-platform-commons  1.10.0 -> 1.10.2

not changing for now
[INFO]   org.mockito:* .. 5.8.0 -> 5.11.0

** [INFO]   org.testcontainers:* . 1.19.4 -> 1.19.7
** [INFO]   software.amazon.awssdk:* . 2.25.16 -> 2.25.40
** [INFO]   com.puppycrawl.tools:checkstyle ... 9.3 -> 10.15.0
** [INFO]   org.ow2.asm:asm ... 9.6 -> 9.7
{noformat}



was (Author: joewitt):

{noformat}
[INFO]   ch.qos.logback:*  1.5.5 -> 1.5.6

[INFO]   com.amazonaws:* ... 1.12.686 -> 1.12.710

[INFO]   com.github.luben:zstd-jni . 1.5.6-1 -> 1.5.6-3

[INFO]   com.github.spotbugs:spotbugs-annotations .. 4.8.3 -> 4.8.4

[INFO]   commons-cli:commons-cli ... 1.6.0 -> 1.7.0

[INFO]   commons-codec:commons-codec . 1.16.1 -> 1.17.0

[INFO]   io.fabric8:* ... 6.10.0 -> 6.12.1

[INFO]   io.swagger.core.v3:swagger-annotations .. 2.2.20 -> 2.2.21

[INFO]   jakarta.xml.bind:jakarta.xml.bind-api . 4.0.1 -> 4.0.2

[INFO]   org.apache.commons:commons-text . 1.11.0 -> 1.12.0

[INFO]   org.apache.logging.log4j:* . 2.23.0 -> 2.23.1

[INFO]   org.apache.maven.plugin-tools:maven-plugin-annotations ...  3.10.2 -> 3.12.0

[INFO]   org.glassfish.jaxb:jaxb-runtime ... 4.0.4 -> 4.0.5
[INFO]   org.glassfish.jersey.bundles:jaxrs-ri . 3.1.4 -> 3.1.6
[INFO]   org.glassfish.jersey.connectors:* ... 3.1.4 -> 3.1.6
[INFO]   org.glassfish.jersey.containers:* ... 3.1.4 -> 3.1.6
[INFO]   org.glassfish.jersey.containers.glassfish:jersey-gf-ejb ... 3.1.4 -> 3.1.6
[INFO]   org.glassfish.jersey.core:*  3.1.4 -> 3.1.6
[INFO]   org.glassfish.jersey.ext:*  3.1.4 -> 3.1.6
[INFO]   org.glassfish.jersey.ext.cdi:* ...  3.1.4 -> 3.1.6
[INFO]   org.glassfish.jersey.ext.microprofile:* ...  3.1.4 -> 3.1.6

[INFO]   org.hamcrest:hamcrest-core  1.3 -> 2.2

[INFO]   org.jsoup:jsoup . 1.17.1 -> 1.17.2


[INFO]   org.junit.platform:junit-platform-commons  1.10.0 -> 1.10.2

[INFO]   org.mockito:* .. 5.8.0 -> 5.11.0

[INFO]   org.testcontainers:* . 1.19.4 -> 1.19.7

[INFO]   software.amazon.awssdk:* . 2.25.16 -> 2.25.40

[INFO]   com.puppycrawl.tools:checkstyle ... 9.3 -> 10.15.0

{noformat}


> Dependency hygiene  - commons codec 1.17
> 
>
> Key: NIFI-13108
> URL: https://issues.apache.org/jira/browse/NIFI-13108
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
> Fix For: 2.0.0-M3
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13108) Dependency hygiene - commons codec 1.17

2024-04-27 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841526#comment-17841526
 ] 

Joe Witt commented on NIFI-13108:
-


{noformat}
[INFO]   ch.qos.logback:*  1.5.5 -> 1.5.6

[INFO]   com.amazonaws:* ... 1.12.686 -> 1.12.710

[INFO]   com.github.luben:zstd-jni . 1.5.6-1 -> 1.5.6-3

[INFO]   com.github.spotbugs:spotbugs-annotations .. 4.8.3 -> 4.8.4

[INFO]   commons-cli:commons-cli ... 1.6.0 -> 1.7.0

[INFO]   commons-codec:commons-codec . 1.16.1 -> 1.17.0

[INFO]   io.fabric8:* ... 6.10.0 -> 6.12.1

[INFO]   io.swagger.core.v3:swagger-annotations .. 2.2.20 -> 2.2.21

[INFO]   jakarta.xml.bind:jakarta.xml.bind-api . 4.0.1 -> 4.0.2

[INFO]   org.apache.commons:commons-text . 1.11.0 -> 1.12.0

[INFO]   org.apache.logging.log4j:* . 2.23.0 -> 2.23.1

[INFO]   org.apache.maven.plugin-tools:maven-plugin-annotations ...  3.10.2 -> 3.12.0

[INFO]   org.glassfish.jaxb:jaxb-runtime ... 4.0.4 -> 4.0.5
[INFO]   org.glassfish.jersey.bundles:jaxrs-ri . 3.1.4 -> 3.1.6
[INFO]   org.glassfish.jersey.connectors:* ... 3.1.4 -> 3.1.6
[INFO]   org.glassfish.jersey.containers:* ... 3.1.4 -> 3.1.6
[INFO]   org.glassfish.jersey.containers.glassfish:jersey-gf-ejb ... 3.1.4 -> 3.1.6
[INFO]   org.glassfish.jersey.core:*  3.1.4 -> 3.1.6
[INFO]   org.glassfish.jersey.ext:*  3.1.4 -> 3.1.6
[INFO]   org.glassfish.jersey.ext.cdi:* ...  3.1.4 -> 3.1.6
[INFO]   org.glassfish.jersey.ext.microprofile:* ...  3.1.4 -> 3.1.6

[INFO]   org.hamcrest:hamcrest-core  1.3 -> 2.2

[INFO]   org.jsoup:jsoup . 1.17.1 -> 1.17.2


[INFO]   org.junit.platform:junit-platform-commons  1.10.0 -> 1.10.2

[INFO]   org.mockito:* .. 5.8.0 -> 5.11.0

[INFO]   org.testcontainers:* . 1.19.4 -> 1.19.7

[INFO]   software.amazon.awssdk:* . 2.25.16 -> 2.25.40

[INFO]   com.puppycrawl.tools:checkstyle ... 9.3 -> 10.15.0

{noformat}


> Dependency hygiene  - commons codec 1.17
> 
>
> Key: NIFI-13108
> URL: https://issues.apache.org/jira/browse/NIFI-13108
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
> Fix For: 2.0.0-M3
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13108) Dependency hygiene - commons codec 1.17

2024-04-27 Thread Joe Witt (Jira)
Joe Witt created NIFI-13108:
---

 Summary: Dependency hygiene  - commons codec 1.17
 Key: NIFI-13108
 URL: https://issues.apache.org/jira/browse/NIFI-13108
 Project: Apache NiFi
  Issue Type: Task
Reporter: Joe Witt
Assignee: Joe Witt
 Fix For: 2.0.0-M3






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13007) Remove Solr Components in 2x line

2024-04-26 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-13007:

Status: Patch Available  (was: Open)

> Remove Solr Components in 2x line
> -
>
> Key: NIFI-13007
> URL: https://issues.apache.org/jira/browse/NIFI-13007
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
> Fix For: 2.0.0-M3
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In NIFI-12998 it was found that with basic dependency hygiene our Solr 
> components can't even build now.  And it turns out they already don't work at 
> runtime.
> In NIFI-13006 we deprecate and in this JIRA we remove.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13006) Deprecate Solr Components in NiFi 1.x

2024-04-26 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-13006:

Status: Patch Available  (was: Open)

> Deprecate Solr Components in NiFi 1.x
> -
>
> Key: NIFI-13006
> URL: https://issues.apache.org/jira/browse/NIFI-13006
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
> Fix For: 1.26.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There is not a clear go-forward strategy to support Solr components as they 
> currently rely on Jetty 10 and we've moved on in terms of Java version and 
> Jetty version for maintenance, function, and security reasons.  So we'll need 
> to deprecate them.
> There will be an associated removal in NiFi 2.x.
> We can restore them if the Solr community updates their dependencies.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-12986) Tidy up JavaDoc of ProcessSession

2024-04-22 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt resolved NIFI-12986.
-
Resolution: Fixed

> Tidy up JavaDoc of ProcessSession
> -
>
> Key: NIFI-12986
> URL: https://issues.apache.org/jira/browse/NIFI-12986
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: endzeit
>Assignee: endzeit
>Priority: Major
> Fix For: 2.0.0-M3
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> While working on NIFI-12982 I noticed that the JavaDoc of {{ProcessSession}} 
> has some minor typos and documentation drifts between method overloads.
> The goal of this ticket is to make the JavaDoc for the current 
> {{ProcessSession}} specification more consistent. The specified contract must 
> not be altered. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12986) Tidy up JavaDoc of ProcessSession

2024-04-22 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839790#comment-17839790
 ] 

Joe Witt commented on NIFI-12986:
-

[~EndzeitBegins][~exceptionfactory] It is important to undo the heart of the 
change, which is declaring this thing deprecated.  Subsequent 
discussion/follow-up can happen in another JIRA.  But we need to keep main 
shippable, and right now we're declaring something deprecated that we don't intend to deprecate.

> Tidy up JavaDoc of ProcessSession
> -
>
> Key: NIFI-12986
> URL: https://issues.apache.org/jira/browse/NIFI-12986
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: endzeit
>Assignee: endzeit
>Priority: Major
> Fix For: 2.0.0-M3
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> While working on NIFI-12982 I noticed that the JavaDoc of {{ProcessSession}} 
> has some minor typos and documentation drifts between method overloads.
> The goal of this ticket is to make the JavaDoc for the current 
> {{ProcessSession}} specification more consistent. The specified contract must 
> not be altered. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13080) nifi-properties and nifi-property-utils should not be part of the public api of nifi extensions such as processors, controller services and reporting tasks

2024-04-22 Thread Joe Witt (Jira)
Joe Witt created NIFI-13080:
---

 Summary: nifi-properties and nifi-property-utils should not be 
part of the public api of nifi extensions such as processors, controller 
services and reporting tasks
 Key: NIFI-13080
 URL: https://issues.apache.org/jira/browse/NIFI-13080
 Project: Apache NiFi
  Issue Type: Task
Reporter: Joe Witt
Assignee: Joe Witt


After the great refactor of NIFI-12998 a few scenarios emerged that we should 
resolve.

The bom/parents for NiFi extensions such as nifi-extension-bundles should 
enforce that a pom should not have a declared dependency on nifi-properties and 
nifi-property-utils.

nifi-property-utils has extensive usage today due to a StringUtils class.  The 
purpose of that StringUtils class seems to be to provide a very lightweight copy 
of things often found in commons-lang3 for when we don't want to pull in that 
dependency to every module.  It isn't clear how valuable that is at this point 
and it is worth review.  But either way, the StringUtils of nifi-property-utils 
does not seem like it should be a thing, or it should not be where it is.

The handful of remaining 'nifi-extension-bundle' modules that do use 
NiFiProperties are extensions which also don't really belong in the 
nifi-extension-bundle module but instead should be in a 
'nifi-framework-extensions' bundle module, as these are more like framework 
extensions than the public/intentional components of processors, controller 
services, and reporting tasks.  These framework-focused extensions have 
different obligations/accesses, such as NiFiProperties being totally fair game.  
But for a processor to depend on NiFiProperties is a 'config/user experience 
smell'.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12998) nifi-nar-bundle has improper dependencies - the full source tree needs dependency cleanup and management

2024-04-22 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839500#comment-17839500
 ] 

Joe Witt commented on NIFI-12998:
-

All checks are green.

Manual tests all looking good.

Ran on Docker with Python components; all good.

> nifi-nar-bundle has improper dependencies - the full source tree needs 
> dependency cleanup and management
> 
>
> Key: NIFI-12998
> URL: https://issues.apache.org/jira/browse/NIFI-12998
> Project: Apache NiFi
>  Issue Type: Task
>Affects Versions: 2.0.0-M2
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> found in nifi-nar-bundles pom
> <dependency>
>     <groupId>com.maxmind.geoip2</groupId>
>     <artifactId>geoip2</artifactId>
>     <version>4.2.0</version>
> </dependency>
> This should not be here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-9927) Simplify redundant version declarations in NiFi

2024-04-21 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt resolved NIFI-9927.

Fix Version/s: 1.17.0
   Resolution: Fixed

> Simplify redundant version declarations in NiFi
> ---
>
> Key: NIFI-9927
> URL: https://issues.apache.org/jira/browse/NIFI-9927
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Minor
> Fix For: 1.17.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-9195) Java 8 FR build appears to allow style check failures through

2024-04-21 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt resolved NIFI-9195.

Resolution: Abandoned

> Java 8 FR build appears to allow style check failures through
> -
>
> Key: NIFI-9195
> URL: https://issues.apache.org/jira/browse/NIFI-9195
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mike Thomsen
>Assignee: Joe Witt
>Priority: Major
>
> I've noticed that when a style check fails on the macOS Java 8 and Ubuntu 
> Java 11 builds, it will often pass on the Java 8 FR build for Windows. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-8611) GCP BigQuery processors support using designate project resource for ingestion

2024-04-21 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt resolved NIFI-8611.

Resolution: Abandoned

> GCP BigQuery processors support using designate project resource for ingestion
> --
>
> Key: NIFI-8611
> URL: https://issues.apache.org/jira/browse/NIFI-8611
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: Chih Han Yu
>Assignee: Joe Witt
>Priority: Major
>  Labels: GCP, bigquery
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> For now, the *PutBigQueryBatch* and *PutBigQueryStreaming* processors 
> can only assign a single project id for consuming resources and doing ingestion. 
> But in some business cases, the project providing resources and the project 
> being inserted into are not always the same. 
> src/main/java/org/apache/nifi/processors/gcp/AbstractGCPProcessor.java
>  
> {code:java}
> ..
> public static final PropertyDescriptor PROJECT_ID = new PropertyDescriptor
> .Builder().name("gcp-project-id")
> .displayName("Project ID")
> .description("Google Cloud Project ID")
> .required(false)
> 
> .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
> .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
> .build();
> ..{code}
>  
> We've tested a workable solution: adding another property 
> *DESIGNATE_PROJECT_ID* in *AbstractBigQueryProcessor*. It only impacts the 
> *PutBigQueryBatch* and *PutBigQueryStreaming* processors.
> If the user provides a designate project id:
>  * Use *PROJECT_ID* (defined in AbstractGCPProcessor) as the resource-consuming 
> project. 
>  * Put data into *DESIGNATE_PROJECT_ID* (defined in 
> AbstractBigQueryProcessor). 
> If the user does *not* provide a designate project id:
>  * Use *PROJECT_ID* (defined in AbstractGCPProcessor) as the resource-consuming 
> project. 
>  * Put data into *PROJECT_ID* (defined in AbstractGCPProcessor). 
> Since we already implemented this solution in a production environment, I'll 
> submit a PR later for this improvement. 
> 
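A minimal sketch of the fallback described above, assuming hypothetical names (the descriptor key and helper below are illustrative, not taken from the actual patch):

{code:java}
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

// Hypothetical sketch only; names are illustrative, not the merged change.
class DesignateProjectSketch {

    static final PropertyDescriptor DESIGNATE_PROJECT_ID = new PropertyDescriptor
            .Builder().name("bq-designate-project-id")
            .displayName("Designate Project ID")
            .description("Google Cloud Project ID that records are ingested into")
            .required(false)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();

    // PROJECT_ID always remains the resource-consuming project; data lands in the
    // designate project only when one is supplied, otherwise it falls back to PROJECT_ID.
    static String ingestionProject(final String projectId, final String designateProjectId) {
        return (designateProjectId == null || designateProjectId.isEmpty())
                ? projectId
                : designateProjectId;
    }
}
{code}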



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-11799) Change default setting to have content archive disable

2024-04-21 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt resolved NIFI-11799.
-
Resolution: Won't Do

> Change default setting to have content archive disable
> --
>
> Key: NIFI-11799
> URL: https://issues.apache.org/jira/browse/NIFI-11799
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
>
> Users today download NiFi and just get up and running.  And we want them to 
> be able to do so.  However, this also means they have a single disk for 
> content, provenance, and flowfiles.  The default is 50% archive max and to 
> have archive on.  This means many users end up hitting a choke point with 
> their flow at that 50% threshold where it needs to archive but can't because 
> the 'archive' is already full.
> Instead we should disable archive so this max doesn't matter.  Users that 
> want to take advantage of the awesomeness of content archival with provenance 
> can choose to do so but nobody will be otherwise slowed down by default.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-12342) Remove extraneous 'relativePath' entry in pom.xml for Apache parent

2024-04-21 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt resolved NIFI-12342.
-
Resolution: Won't Fix

> Remove extraneous 'relativePath' entry in pom.xml for Apache parent
> ---
>
> Key: NIFI-12342
> URL: https://issues.apache.org/jira/browse/NIFI-12342
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
>
> This relativePath entry dates back to 2015.  Don't recall why we needed it but 
> it is problematic now.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12998) nifi-nar-bundle has improper dependencies - the full source tree needs dependency cleanup and management

2024-04-20 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839261#comment-17839261
 ] 

Joe Witt commented on NIFI-12998:
-

integration-tests-ci seems unstable on CI servers.  Runs fine locally.  This 
test is an example:
{noformat}
---
Test set: org.apache.nifi.jms.processors.JMSPublisherConsumerIT
---
Tests run: 10, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 3.949 s <<< 
FAILURE! -- in org.apache.nifi.jms.processors.JMSPublisherConsumerIT
org.apache.nifi.jms.processors.JMSPublisherConsumerIT.validateNIFI6721 -- Time 
elapsed: 0.024 s <<< FAILURE!
org.opentest4j.AssertionFailedError: expected: <1713587661028> but was: 
<1713587661029>
at 
org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
at 
org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
at 
org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
at 
org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:166)
at 
org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:161)
at org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:632)
at 
org.apache.nifi.jms.processors.JMSPublisherConsumerIT.validateNIFI6721(JMSPublisherConsumerIT.java:283)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
{noformat}


> nifi-nar-bundle has improper dependencies - the full source tree needs 
> dependency cleanup and management
> 
>
> Key: NIFI-12998
> URL: https://issues.apache.org/jira/browse/NIFI-12998
> Project: Apache NiFi
>  Issue Type: Task
>Affects Versions: 2.0.0-M2
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> found in nifi-nar-bundles pom
> <dependency>
>     <groupId>com.maxmind.geoip2</groupId>
>     <artifactId>geoip2</artifactId>
>     <version>4.2.0</version>
> </dependency>
> This should not be here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-12923) PutHDFS to support appending avro data

2024-04-19 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt resolved NIFI-12923.
-
Resolution: Fixed

> PutHDFS to support appending avro data
> --
>
> Key: NIFI-12923
> URL: https://issues.apache.org/jira/browse/NIFI-12923
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Balázs Gerner
>Assignee: Balázs Gerner
>Priority: Major
> Fix For: 2.0.0-M3, 1.26.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> The goal of this ticket is to extend the PutHDFS processor with the ability 
> to append avro records. The processor already provides an option to set 
> 'append' as conflict resolution strategy, but that does not work correctly in 
> case of avro files, because the serialized avro file cannot be deserialized 
> again (because the binary content is invalid).
> Some notes about the implementation:
>  * The user needs to explicitly select avro as file format and append as 
> conflict resolution mode to enable 'avro append' mode, otherwise regular 
> append mode will work just as before. There is no auto detection of mimetype 
> for the incoming flowfile.
>  * The records of the incoming flowfile and the ones in the existing avro 
> file need to conform to the same avro schema, otherwise the append operation 
> fails with incompatible schema.
>  * The 'avro append' mode should only work when compression type is set to 
> 'none', if any other compression type is selected in 'avro append' mode the 
> user should get a validation error.
> The changes will have to be added to *support/nifi-1.x* branch also.
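A minimal sketch of the compression constraint described above, using hypothetical property names and values (the real PutHDFS descriptors and allowable values may differ):

{code:java}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.apache.nifi.components.ValidationResult;

// Hypothetical names; illustrates only the rule that 'avro append' requires compression 'none'.
class AvroAppendValidationSketch {

    static Collection<ValidationResult> validateAvroAppend(final String fileFormat,
            final String conflictResolution, final String compressionType) {
        final List<ValidationResult> results = new ArrayList<>();
        final boolean avroAppend = "AVRO".equals(fileFormat) && "append".equals(conflictResolution);
        if (avroAppend && !"NONE".equals(compressionType)) {
            results.add(new ValidationResult.Builder()
                    .subject("Compression codec")
                    .valid(false)
                    .explanation("appending Avro records requires the compression type to be 'none'")
                    .build());
        }
        return results;
    }
}
{code}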



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (NIFI-12923) PutHDFS to support appending avro data

2024-04-19 Thread Joe Witt (Jira)


[ https://issues.apache.org/jira/browse/NIFI-12923 ]


Joe Witt deleted comment on NIFI-12923:
-

was (Author: joewitt):
[~balazsgerner][~mattyb149] I am reverting the commit for this test change as 
the two tests are unstable and need review/resolution.


{noformat}
[ERROR] Tests run: 20, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.478 
s <<< FAILURE! -- in org.apache.nifi.processors.hadoop.PutHDFSTest
[ERROR] 
org.apache.nifi.processors.hadoop.PutHDFSTest.testPutFileWithAppendAvroModeNewFileCreated
 -- Time elapsed: 0.005 s <<< ERROR!
java.io.FileNotFoundException: src/test/resources/testdata-avro/input.avro (No 
such file or directory)
at java.base/java.io.FileInputStream.open0(Native Method)
at java.base/java.io.FileInputStream.open(FileInputStream.java:213)
at java.base/java.io.FileInputStream.<init>(FileInputStream.java:152)
at java.base/java.io.FileInputStream.<init>(FileInputStream.java:106)
at 
org.apache.nifi.processors.hadoop.PutHDFSTest.testPutFileWithAppendAvroModeNewFileCreated(PutHDFSTest.java:281)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)

[ERROR] 
org.apache.nifi.processors.hadoop.PutHDFSTest.testPutFileWithAppendAvroModeWhenTargetFileAlreadyExists
 -- Time elapsed: 0.009 s <<< ERROR!
java.io.FileNotFoundException: src/test/resources/testdata-avro/input.avro (No 
such file or directory)
at java.base/java.io.FileInputStream.open0(Native Method)
at java.base/java.io.FileInputStream.open(FileInputStream.java:213)
at java.base/java.io.FileInputStream.<init>(FileInputStream.java:152)
at java.base/java.io.FileInputStream.<init>(FileInputStream.java:106)
at 
org.apache.nifi.processors.hadoop.PutHDFSTest.testPutFileWithAppendAvroModeWhenTargetFileAlreadyExists(PutHDFSTest.java:308)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)

{noformat}


> PutHDFS to support appending avro data
> --
>
> Key: NIFI-12923
> URL: https://issues.apache.org/jira/browse/NIFI-12923
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Balázs Gerner
>Assignee: Balázs Gerner
>Priority: Major
> Fix For: 2.0.0-M3, 1.26.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> The goal of this ticket is to extend the PutHDFS processor with the ability 
> to append avro records. The processor already provides an option to set 
> 'append' as conflict resolution strategy, but that does not work correctly in 
> case of avro files, because the serialized avro file cannot be deserialized 
> again (because the binary content is invalid).
> Some notes about the implementation:
>  * The user needs to explicitly select avro as file format and append as 
> conflict resolution mode to enable 'avro append' mode, otherwise regular 
> append mode will work just as before. There is no auto detection of mimetype 
> for the incoming flowfile.
>  * The records of the incoming flowfile and the ones in the existing avro 
> file need to conform to the same avro schema, otherwise the append operation 
> fails with incompatible schema.
>  * The 'avro append' mode should only work when compression type is set to 
> 'none', if any other compression type is selected in 'avro append' mode the 
> user should get a validation error.
> The changes will have to be added to *support/nifi-1.x* branch also.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12923) PutHDFS to support appending avro data

2024-04-19 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839172#comment-17839172
 ] 

Joe Witt commented on NIFI-12923:
-

Disregard - error was in rebase on my end

> PutHDFS to support appending avro data
> --
>
> Key: NIFI-12923
> URL: https://issues.apache.org/jira/browse/NIFI-12923
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Balázs Gerner
>Assignee: Balázs Gerner
>Priority: Major
> Fix For: 2.0.0-M3, 1.26.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> The goal of this ticket is to extend the PutHDFS processor with the ability 
> to append avro records. The processor already provides an option to set 
> 'append' as conflict resolution strategy, but that does not work correctly in 
> case of avro files, because the serialized avro file cannot be deserialized 
> again (because the binary content is invalid).
> Some notes about the implementation:
>  * The user needs to explicitly select avro as file format and append as 
> conflict resolution mode to enable 'avro append' mode, otherwise regular 
> append mode will work just as before. There is no auto detection of mimetype 
> for the incoming flowfile.
>  * The records of the incoming flowfile and the ones in the existing avro 
> file need to conform to the same avro schema, otherwise the append operation 
> fails with incompatible schema.
>  * The 'avro append' mode should only work when compression type is set to 
> 'none', if any other compression type is selected in 'avro append' mode the 
> user should get a validation error.
> The changes will have to be added to *support/nifi-1.x* branch also.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (NIFI-12923) PutHDFS to support appending avro data

2024-04-19 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt reopened NIFI-12923:
-

> PutHDFS to support appending avro data
> --
>
> Key: NIFI-12923
> URL: https://issues.apache.org/jira/browse/NIFI-12923
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Balázs Gerner
>Assignee: Balázs Gerner
>Priority: Major
> Fix For: 2.0.0-M3, 1.26.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> The goal of this ticket is to extend the PutHDFS processor with the ability 
> to append avro records. The processor already provides an option to set 
> 'append' as conflict resolution strategy, but that does not work correctly in 
> case of avro files, because the serialized avro file cannot be deserialized 
> again (because the binary content is invalid).
> Some notes about the implementation:
>  * The user needs to explicitly select avro as file format and append as 
> conflict resolution mode to enable 'avro append' mode, otherwise regular 
> append mode will work just as before. There is no auto detection of mimetype 
> for the incoming flowfile.
>  * The records of the incoming flowfile and the ones in the existing avro 
> file need to conform to the same avro schema, otherwise the append operation 
> fails with incompatible schema.
>  * The 'avro append' mode should only work when compression type is set to 
> 'none', if any other compression type is selected in 'avro append' mode the 
> user should get a validation error.
> The changes will have to be added to *support/nifi-1.x* branch also.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12923) PutHDFS to support appending avro data

2024-04-19 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839171#comment-17839171
 ] 

Joe Witt commented on NIFI-12923:
-

[~balazsgerner][~mattyb149] I am reverting the commit for this test change as 
the two tests are unstable and need review/resolution.


{noformat}
[ERROR] Tests run: 20, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.478 
s <<< FAILURE! -- in org.apache.nifi.processors.hadoop.PutHDFSTest
[ERROR] 
org.apache.nifi.processors.hadoop.PutHDFSTest.testPutFileWithAppendAvroModeNewFileCreated
 -- Time elapsed: 0.005 s <<< ERROR!
java.io.FileNotFoundException: src/test/resources/testdata-avro/input.avro (No 
such file or directory)
at java.base/java.io.FileInputStream.open0(Native Method)
at java.base/java.io.FileInputStream.open(FileInputStream.java:213)
at java.base/java.io.FileInputStream.<init>(FileInputStream.java:152)
at java.base/java.io.FileInputStream.<init>(FileInputStream.java:106)
at 
org.apache.nifi.processors.hadoop.PutHDFSTest.testPutFileWithAppendAvroModeNewFileCreated(PutHDFSTest.java:281)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)

[ERROR] 
org.apache.nifi.processors.hadoop.PutHDFSTest.testPutFileWithAppendAvroModeWhenTargetFileAlreadyExists
 -- Time elapsed: 0.009 s <<< ERROR!
java.io.FileNotFoundException: src/test/resources/testdata-avro/input.avro (No 
such file or directory)
at java.base/java.io.FileInputStream.open0(Native Method)
at java.base/java.io.FileInputStream.open(FileInputStream.java:213)
at java.base/java.io.FileInputStream.<init>(FileInputStream.java:152)
at java.base/java.io.FileInputStream.<init>(FileInputStream.java:106)
at 
org.apache.nifi.processors.hadoop.PutHDFSTest.testPutFileWithAppendAvroModeWhenTargetFileAlreadyExists(PutHDFSTest.java:308)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)

{noformat}


> PutHDFS to support appending avro data
> --
>
> Key: NIFI-12923
> URL: https://issues.apache.org/jira/browse/NIFI-12923
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Balázs Gerner
>Assignee: Balázs Gerner
>Priority: Major
> Fix For: 2.0.0-M3, 1.26.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> The goal of this ticket is to extend the PutHDFS processor with the ability 
> to append avro records. The processor already provides an option to set 
> 'append' as conflict resolution strategy, but that does not work correctly in 
> case of avro files, because the serialized avro file cannot be deserialized 
> again (because the binary content is invalid).
> Some notes about the implementation:
>  * The user needs to explicitly select avro as file format and append as 
> conflict resolution mode to enable 'avro append' mode, otherwise regular 
> append mode will work just as before. There is no auto detection of mimetype 
> for the incoming flowfile.
>  * The records of the incoming flowfile and the ones in the existing avro 
> file need to conform to the same avro schema, otherwise the append operation 
> fails with incompatible schema.
>  * The 'avro append' mode should only work when compression type is set to 
> 'none', if any other compression type is selected in 'avro append' mode the 
> user should get a validation error.
> The changes will have to be added to *support/nifi-1.x* branch also.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13075) AzureLogAnalyticsReportingTask does not re-initialize properly

2024-04-19 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-13075:

Description: 
apache nifi slack user reports:

The reporting task runs initially but when stopped and started again, it no 
longer processes provenance events until nifi is restarted. I figured by 
comparing the implementation to the SiteToSiteProvenance reporting tasks that 
it's because the event consumer isn't properly re-initialized [(this if 
statement is the 
problem)|https://github.com/apache/nifi/blob/6ecc398d3f92425447e43242af4992757e25b3c5/nifi-nar-bundles/nifi-azure-bundle/nifi-azure-reporting-task/src/main/java/org/apache/nifi/reporting/azure/loganalytics/AzureLogAnalyticsProvenanceReportingTask.java#L201].
The SiteToSite task always re-initializes the event consumer in 
onScheduled.
Additionally, not a bug but an enhancement for the 
AzureLogAnalyticsProvenanceReportingTask is to keep state of the last processed 
event id, like SiteToSite does.

AzureLogAnalyticsProvenanceReportingTask.java
if (consumer != null)



A bit more insight: the reporting task calls consumer.setScheduled(false) in 
onUnscheduled, but in onTrigger, due to the aforementioned if statement, it never 
calls consumer.setScheduled(true), so this might be the root cause.








  was:
apache nifi slack user reports:

The reporting task runs initially but when stopped and started again, it no 
longer processes provenance events until nifi is restarted. I figured by 
comparing the implementation to the SiteToSiteProvenance reporting tasks that 
it's because the event consumer isn't properly re-initialized [(this if 
statement is the 
problem)|https://github.com/apache/nifi/blob/6ecc398d3f92425447e43242af4992757e25b3c5/nifi-nar-bundles/nifi-azure-bundle/nifi-azure-reporting-task/src/main/java/org/apache/nifi/reporting/azure/loganalytics/AzureLogAnalyticsProvenanceReportingTask.java#L201].
The SiteToSite task always re-initializes the event consumer in 
onScheduled.
Additionally, not a bug but an enhancement for the 
AzureLogAnalyticsProvenanceReportingTask is to keep state of the last processed 
event id, like SiteToSite does.

AzureLogAnalyticsProvenanceReportingTask.java
if (consumer != null)


> AzureLogAnalyticsReportingTask does not re-initialize properly
> --
>
> Key: NIFI-13075
> URL: https://issues.apache.org/jira/browse/NIFI-13075
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Joe Witt
>Priority: Major
>
> apache nifi slack user reports:
> The reporting task runs initially but when stopped and started again, it no 
> longer processes provenance events until nifi is restarted. I figured by 
> comparing the implementation to the SiteToSiteProvenance reporting tasks that 
> it's because the event consumer isn't properly re-initialized [(this if 
> statement is the 
> problem)|https://github.com/apache/nifi/blob/6ecc398d3f92425447e43242af4992757e25b3c5/nifi-nar-bundles/nifi-azure-bundle/nifi-azure-reporting-task/src/main/java/org/apache/nifi/reporting/azure/loganalytics/AzureLogAnalyticsProvenanceReportingTask.java#L201].
> The SiteToSite task always re-initializes the event consumer in 
> onScheduled.
> Additionally, not a bug but an enhancement for the 
> AzureLogAnalyticsProvenanceReportingTask is to keep state of the last 
> processed event id, like SiteToSite does.
> AzureLogAnalyticsProvenanceReportingTask.java
> if (consumer != null)
> A bit more insight: the reporting task calls consumer.setScheduled(false) in 
> onUnscheduled, but in onTrigger, due to the aforementioned if statement, it never 
> calls consumer.setScheduled(true), so this might be the root cause.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13075) AzureLogAnalyticsReportingTask does not re-initialize properly

2024-04-19 Thread Joe Witt (Jira)
Joe Witt created NIFI-13075:
---

 Summary: AzureLogAnalyticsReportingTask does not re-initialize 
properly
 Key: NIFI-13075
 URL: https://issues.apache.org/jira/browse/NIFI-13075
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Joe Witt


apache nifi slack user reports:

The reporting task runs initially but when stopped and started again, it no 
longer processes provenance events until nifi is restarted. I figured by 
comparing the implementation to the SiteToSiteProvenance reporting tasks that 
it's because the event consumer isn't properly re-initialized [(this if 
statement is the 
problem)|https://github.com/apache/nifi/blob/6ecc398d3f92425447e43242af4992757e25b3c5/nifi-nar-bundles/nifi-azure-bundle/nifi-azure-reporting-task/src/main/java/org/apache/nifi/reporting/azure/loganalytics/AzureLogAnalyticsProvenanceReportingTask.java#L201].
The SiteToSite task always re-initializes the event consumer in 
onScheduled.
Additionally, not a bug but an enhancement for the 
AzureLogAnalyticsProvenanceReportingTask is to keep state of the last processed 
event id, like SiteToSite does.

AzureLogAnalyticsProvenanceReportingTask.java
if (consumer != null)
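A minimal sketch of the lifecycle pattern described above, using hypothetical names rather than the actual reporting task code:

{code:java}
// Hypothetical sketch; not the actual AzureLogAnalyticsProvenanceReportingTask code.
class ConsumerLifecycleSketch {

    interface EventConsumer {
        void setScheduled(boolean scheduled);
    }

    private volatile EventConsumer consumer;

    // Invoked when the reporting task starts (e.g. from an @OnScheduled method).
    void onScheduled() {
        if (consumer == null) {
            consumer = createConsumer();
        }
        // Without re-enabling here, the consumer stays disabled after a stop/start
        // cycle, which matches the reported symptom.
        consumer.setScheduled(true);
    }

    // Invoked when the reporting task stops (e.g. from an @OnUnscheduled method).
    void onUnscheduled() {
        if (consumer != null) {
            consumer.setScheduled(false);
        }
    }

    private EventConsumer createConsumer() {
        return scheduled -> { /* placeholder for the real provenance event consumer */ };
    }
}
{code}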



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13057) When a property is both required and sensitive, setting it blank does not remove the "Sensitive value set" message.

2024-04-16 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837848#comment-17837848
 ] 

Joe Witt commented on NIFI-13057:
-

I'm not sure, but I think that is the crux of the confusion, and I'd be lying if 
I said it didn't trip me up sometimes too :)

Best UX way to help the user there - open for suggestions!

> When a property is both required and sensitive, setting it blank does not 
> remove the "Sensitive value set" message.
> ---
>
> Key: NIFI-13057
> URL: https://issues.apache.org/jira/browse/NIFI-13057
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Daniel Stieglitz
>Priority: Major
> Attachments: image-2024-04-16-13-13-24-650.png
>
>
> When testing [NIIFI-12960|https://issues.apache.org/jira/browse/NIFI-12960], 
> I noticed when a property had the combination
> {code:java}
> .required(true)
> .sensitive(true)
> {code}
> and after a user entered a value and then attempts to remove the value it 
> does not work and the message "Sensitive value set" still remains in the text 
> box. This is even when the user checks the "Set empty string" checkbox. 
> Attached is a screenshot of a DecryptContent processor after an attempt to 
> remove the set password.
>  !image-2024-04-16-13-13-24-650.png! 
> The combination 
> {code:java}
> .required(true)
> .sensitive(true)
> {code}
> is prevalent in the code base. I found it ~40 times in the following files.
> # QueryAirtableTable
> # StandardAsanaClientProviderService
> # ClientSideEncryptionSupport
> # AzureStorageUtils
> # StandardKustoIngestService
> # AbstractAzureLogAnalyticsReportingTask
> # DecryptContent
> # DecryptContentAge
> # DecryptContentCompatibility
> # EncryptContentAge
> # VerifyContentMAC
> # StandardDropboxCredentialService
> # AbstractEmailProcessor
> # GhostFlowRegistryClient
> # GhostControllerService
> # GhostFlowAnalysisRule
> # GhostParameterProvider
> # GhostProcessor
> # GhostReportingTask
> # Neo4JCypherClientService
> # GetHubSpot
> # AbstractIoTDB
> # StandardPGPPrivateKeyService
> # GetShopify
> # ListenSlack
> # ConsumeSlack
> # PublishSlack
> # SlackRecordSink
> # BasicProperties
> # V3SecurityProperties
> # ConsumeTwitter
> # OnePasswordParameterProvider
> # KerberosPasswordUserService
> # StandardOauth2AccessTokenProvider
> # GetWorkdayReport
> # ZendeskProperties



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13057) When a property is both required and sensitive, setting it blank does not remove the "Sensitive value set" message.

2024-04-16 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837844#comment-17837844
 ] 

Joe Witt commented on NIFI-13057:
-

Ok so let's separate a few things to get to the key details and next steps.

Terminology-wise:
- A property is either set or not set.
- A property which has an 'empty string' is set (set to simply be a string of 
no text, but this is different from not set).*
- Properties which are required are always required to be set.  They can never 
be unset.  Once a value is set there will always be a value.  They can be 
changed but never unset.
- Properties which are sensitive can be set or unset.  Once set they will 
always show 'sensitive value..' but not the actual values because of their 
stated sensitivity.

In your example you have a required property which was set.  And you wanted to 
backspace and unset it.  But since it is required it can never be unset.  You 
need to change it to another value which should work.  You should be able to 
select 'empty string' as well which should work.


* why would a user 'set' a value of 'empty string'?  It happens all the time.  
Imagine wanting to supply a replace value in a find/replace logic.  

Hopefully this helps clarify.  I think the confusion stems from the fact that, 
as a user, when you delete a value and apply, having it show right back up is 
confusing.  We should perhaps signal that you cannot unset a required field but 
can only change it.
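For reference, a minimal sketch of the required + sensitive combination being discussed (illustrative names only, not taken from any of the listed components); note that a non-empty validator would additionally reject the empty string, whereas without one an applied empty string is still a 'set' value:

{code:java}
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

// Illustrative only.
class RequiredSensitivePropertySketch {

    static final PropertyDescriptor PASSWORD = new PropertyDescriptor.Builder()
            .name("password")
            .displayName("Password")
            .description("Secret used by the component")
            .required(true)   // once a value is applied it can be changed but never unset
            .sensitive(true)  // the UI shows "Sensitive value set" instead of the value
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR) // would also reject an empty string
            .build();
}
{code}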

> When a property is both required and sensitive, setting it blank does not 
> remove the "Sensitive value set" message.
> ---
>
> Key: NIFI-13057
> URL: https://issues.apache.org/jira/browse/NIFI-13057
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Daniel Stieglitz
>Priority: Major
> Attachments: image-2024-04-16-13-13-24-650.png
>
>
> When testing [NIIFI-12960|https://issues.apache.org/jira/browse/NIFI-12960], 
> I noticed when a property had the combination
> {code:java}
> .required(true)
> .sensitive(true)
> {code}
> and after a user entered a value and then attempts to remove the value it 
> does not work and the message "Sensitive value set" still remains in the text 
> box. This is even when the user checks the "Set empty string" checkbox. 
> Attached is a screenshot of a DecryptContent processor after an attempt to 
> remove the set password.
>  !image-2024-04-16-13-13-24-650.png! 
> The combination 
> {code:java}
> .required(true)
> .sensitive(true)
> {code}
> is prevalent in the code base. I found it ~40 times in the following files.
> # QueryAirtableTable
> # StandardAsanaClientProviderService
> # ClientSideEncryptionSupport
> # AzureStorageUtils
> # StandardKustoIngestService
> # AbstractAzureLogAnalyticsReportingTask
> # DecryptContent
> # DecryptContentAge
> # DecryptContentCompatibility
> # EncryptContentAge
> # VerifyContentMAC
> # StandardDropboxCredentialService
> # AbstractEmailProcessor
> # GhostFlowRegistryClient
> # GhostControllerService
> # GhostFlowAnalysisRule
> # GhostParameterProvider
> # GhostProcessor
> # GhostReportingTask
> # Neo4JCypherClientService
> # GetHubSpot
> # AbstractIoTDB
> # StandardPGPPrivateKeyService
> # GetShopify
> # ListenSlack
> # ConsumeSlack
> # PublishSlack
> # SlackRecordSink
> # BasicProperties
> # V3SecurityProperties
> # ConsumeTwitter
> # OnePasswordParameterProvider
> # KerberosPasswordUserService
> # StandardOauth2AccessTokenProvider
> # GetWorkdayReport
> # ZendeskProperties



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13057) When a property is both required and sensitive, setting it blank does not remove the "Sensitive value set" message.

2024-04-16 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837801#comment-17837801
 ] 

Joe Witt commented on NIFI-13057:
-

[~dstiegli1] Reading the description, this sounds like it is working as it 
should.  The 'empty string' is in general a valid input 
(obviously a non-empty-string validator would change that).  But when it comes 
to sensitive values, once they're entered (even if that is an empty string) this 
is still an entry, and we show that it is set but, as always with sensitive 
values, we do not show what it is set to.  Given that it is required you can never 
unset the value.  You can only change it.

With that in mind what are you proposing?

> When a property is both required and sensitive, setting it blank does not 
> remove the "Sensitive value set" message.
> ---
>
> Key: NIFI-13057
> URL: https://issues.apache.org/jira/browse/NIFI-13057
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Daniel Stieglitz
>Priority: Major
> Attachments: image-2024-04-16-13-13-24-650.png
>
>
> When testing [NIIFI-12960|https://issues.apache.org/jira/browse/NIFI-12960], 
> I noticed when a property had the combination
> {code:java}
> .required(true)
> .sensitive(true)
> {code}
> and after a user entered a value and then attempts to remove the value it 
> does not work and the message "Sensitive value set" still remains in the text 
> box. This is even when the user checks the "Set empty string" checkbox. 
> Attached is a screenshot of a DecryptContent processor after an attempt to 
> remove the set password.
>  !image-2024-04-16-13-13-24-650.png! 
> The combination 
> {code:java}
> .required(true)
> .sensitive(true)
> {code}
> is prevalent in the code base. I found it ~40 times in the following files.
> # QueryAirtableTable
> # StandardAsanaClientProviderService
> # ClientSideEncryptionSupport
> # AzureStorageUtils
> # StandardKustoIngestService
> # AbstractAzureLogAnalyticsReportingTask
> # DecryptContent
> # DecryptContentAge
> # DecryptContentCompatibility
> # EncryptContentAge
> # VerifyContentMAC
> # StandardDropboxCredentialService
> # AbstractEmailProcessor
> # GhostFlowRegistryClient
> # GhostControllerService
> # GhostFlowAnalysisRule
> # GhostParameterProvider
> # GhostProcessor
> # GhostReportingTask
> # Neo4JCypherClientService
> # GetHubSpot
> # AbstractIoTDB
> # StandardPGPPrivateKeyService
> # GetShopify
> # ListenSlack
> # ConsumeSlack
> # PublishSlack
> # SlackRecordSink
> # BasicProperties
> # V3SecurityProperties
> # ConsumeTwitter
> # OnePasswordParameterProvider
> # KerberosPasswordUserService
> # StandardOauth2AccessTokenProvider
> # GetWorkdayReport
> # ZendeskProperties



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13042) Support Python 3.12 for Python Processors

2024-04-14 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-13042:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Support Python 3.12 for Python Processors
> -
>
> Key: NIFI-13042
> URL: https://issues.apache.org/jira/browse/NIFI-13042
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0-M3
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> NiFi 2.0.0-M1 and M2 support Python Processors with Python 3.11, but 
> attempting to start NiFi with Python 3.12 results in the following error:
> {noformat}
> ERROR [python-log-29812] py4j.java_gateway Error while waiting for a 
> connection.
> Traceback (most recent call last):
>   File "nifi-2.0.0-SNAPSHOT/python/framework/py4j/java_gateway.py", line 
> 2304, in run
> connection.start()
>   File "python3.12/threading.py", line 992, in start
> _start_new_thread(self._bootstrap, ())
> RuntimeError: can't create new thread at interpreter shutdown
> {noformat}
> This issue follows the pattern of [CPython Issue 
> 115533|https://github.com/python/cpython/issues/115533], which describes 
> changes in Python 3.12 to prevent spawning new threads when the main Python 
> process has finalized.
> In the context of the NiFi Python Framework, this applies to the 
> Controller.py main method, which is responsible for starting the Py4J gateway 
> to communicate between Java and Python processes.
> Following the workaround proposed in the CPython issue discussion, the main 
> function in Controller.py can be adjusted to join non-daemon threads. This 
> maintains the association between the Python main function and threads spawned 
> for Py4J socket communication.
> In addition to this thread issue, the Python FileFinder 
> [find_module|https://docs.python.org/3.10/library/importlib.html#importlib.abc.PathEntryFinder.find_module]
>  has been deprecated since Python 3.4 and was removed in Python 3.12. Usage 
> of {{find_module}} in ExtensionsManager.py should be replaced with 
> {{find_spec}}, which provides equivalent functionality.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12998) nifi-nar-bundle has improper dependencies - the full source tree needs dependency cleanup and management

2024-04-12 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836665#comment-17836665
 ] 

Joe Witt commented on NIFI-12998:
-

This started in an effort to remove a couple clearly extraneous entries in the 
nifi-nar-bundle pom.  What I found as I pulled the thread was that our repo 
holistically has grown very messy and tangled in ways that make adding 
capabilities and maintenance much harder.  Some members of the community have 
done tremendous work to improve the situation by moving more things to the top 
level pom dependency management section and introducing BOMs where available 
for things like Jetty and various others.  These all help and are part of the 
strategy but several challenges remain due to years of technical debt accrual.

1. We have made the 'nifi-nar-bundles' module into a Frankenstein of awkward 
interdependencies between the framework, actual extensions, etc.  You really 
can't build components outside of that with the way it works right now without 
a lot of manual work.
2. We have no clear delineation of what is actually provided in the application 
of NiFi (or MiNiFi Java or stateless for that matter) when building extension nars 
on top.
3. Related to 2 it isn't clear or obvious to users actually how any of our 
classloading magic truly works.  Which artifacts are you actually having to 
contend with being in a parent classloader and which ones are you free to take 
a different version/approach with?
4.  If you want to build a new nar or modify one which exists, you really don't 
know and frankly don't have many options for what you can use.  The parent modules 
have forced dependencies (not managed dependencies but hard dependencies) which 
complicate things.  You also effectively have to build them within the 
nifi-nar-bundles module hierarchy to have it work for now.  Going forward there 
should be in effect three patterns we enable.  (a) You want to build a nar and 
you only need base/root dependencies which you cannot override.  (b) You want 
to build a nar and you're going to leverage one or more standard service apis.  
(c) You want to build a nar and you're going to leverage standard shared 
libraries and service apis.  Nars should be free to choose whichever level of 
integration/tie-in they want, and they should be able to live easily wherever 
they need in the NiFi codebase or in anyone else's repo.  It was always supposed 
to be dead simple to build extensions.
5. If you analyze the dependency trees, POMs are riddled with absurd inclusions that 
have absolutely nothing to do with the component involved.  Lots of copy and 
paste is clearly a factor, but so is inclusion of dependencies because you think 
some downstream consumer would need them, dependencies which got left around after 
various refactorings, dependencies which are marked as provided but not 
actually provided, etc.  I envision various devs playing dependency bingo until 
the build works.  We are fighting Maven and its great powers instead of letting 
it do the hard work for us.

The PR associated with the above will be very large as it unwinds these various 
complexities.  The result should be far more maintainable and easier to reason 
about.  It heavily leverages the BOM pattern both for external dependencies and 
within NiFi.  It moves modules to more logical/independent spots.  It goes 
module by module and does a full dependency analysis to eliminate incorrect 
dependencies, whether things listed as dependencies which aren't actually used 
or things which override parent-managed versions when they should not.


> nifi-nar-bundle has improper dependencies - the full source tree needs 
> dependency cleanup and management
> 
>
> Key: NIFI-12998
> URL: https://issues.apache.org/jira/browse/NIFI-12998
> Project: Apache NiFi
>  Issue Type: Task
>Affects Versions: 2.0.0-M2
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> found in nifi-nar-bundles pom
> <dependency>
>     <groupId>com.maxmind.geoip2</groupId>
>     <artifactId>geoip2</artifactId>
>     <version>4.2.0</version>
> </dependency>
> This should not be here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13007) Remove Solr Components in 2x line

2024-04-06 Thread Joe Witt (Jira)
Joe Witt created NIFI-13007:
---

 Summary: Remove Solr Components in 2x line
 Key: NIFI-13007
 URL: https://issues.apache.org/jira/browse/NIFI-13007
 Project: Apache NiFi
  Issue Type: Task
Reporter: Joe Witt
Assignee: Joe Witt
 Fix For: 2.0.0-M3


In NIFI-12998 it was found that with basic dependency hygiene our Solr 
components can't even build now.  And it turns out they already don't work at 
runtime.

In NIFI-13006 we deprecate them, and in this JIRA we remove them.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13006) Deprecate Solr Components in NiFi 1.x

2024-04-06 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-13006:

Fix Version/s: 1.26.0

> Deprecate Solr Components in NiFi 1.x
> -
>
> Key: NIFI-13006
> URL: https://issues.apache.org/jira/browse/NIFI-13006
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
> Fix For: 1.26.0
>
>
> There is not a clear go-forward strategy to support the Solr components, as they 
> currently rely on Jetty 10 and we've moved on in terms of Java version and 
> Jetty version for maintenance, function, and security reasons.  So we'll need 
> to deprecate them.
> There will be an associated removal in NiFi 2.x
> We can restore them if the Solr community updates their dependencies.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13006) Deprecate Solr Components in NiFi 1.x

2024-04-06 Thread Joe Witt (Jira)
Joe Witt created NIFI-13006:
---

 Summary: Deprecate Solr Components in NiFi 1.x
 Key: NIFI-13006
 URL: https://issues.apache.org/jira/browse/NIFI-13006
 Project: Apache NiFi
  Issue Type: Task
Reporter: Joe Witt
Assignee: Joe Witt


There is not a clear go-forward strategy to support the Solr components, as they 
currently rely on Jetty 10 and we've moved on in terms of Java version and Jetty 
version for maintenance, function, and security reasons.  So we'll need to 
deprecate them.

There will be an associated removal in NiFi 2.x

We can restore them if the Solr community updates their dependencies.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12998) nifi-nar-bundle has improper dependencies - the full source tree needs dependency cleanup and management

2024-04-05 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834174#comment-17834174
 ] 

Joe Witt commented on NIFI-12998:
-

Some TODOs:
1. Remove the TODO entry by eliminating the nifi-nar-bom and just having the 
nifi-nar-bundle pom import the nifi-bom instead.
2. Do the entire build with all optional profiles to ensure everything is 
tidied up.
3. Once the full build is totally good and confirmed to have the right bits, THEN 
review ways to reduce version declarations without creating new needless 
coupling.
4. Evaluate whether certain poms could include/import the 
nifi-standard-services-api-bom to simplify version references or not.
5. Review all the remaining things in the new build that were not in the old 
build.  Some look like references we need to clean; API jars, for example, 
should not be in the nars, etc.

> nifi-nar-bundle has improper dependencies - the full source tree needs 
> dependency cleanup and management
> 
>
> Key: NIFI-12998
> URL: https://issues.apache.org/jira/browse/NIFI-12998
> Project: Apache NiFi
>  Issue Type: Task
>Affects Versions: 2.0.0-M2
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> found in nifi-nar-bundles pom
> <dependency>
>     <groupId>com.maxmind.geoip2</groupId>
>     <artifactId>geoip2</artifactId>
>     <version>4.2.0</version>
> </dependency>
> This should not be here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12998) nifi-nar-bundle has improper dependencies - the full source tree needs dependency cleanup and management

2024-04-05 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834173#comment-17834173
 ] 

Joe Witt commented on NIFI-12998:
-

As for the nifi-python-framework-api, it is properly ONLY in the resulting lib 
directory and not the other places.  So again, in the new build the output is 
more correct.

> nifi-nar-bundle has improper dependencies - the full source tree needs 
> dependency cleanup and management
> 
>
> Key: NIFI-12998
> URL: https://issues.apache.org/jira/browse/NIFI-12998
> Project: Apache NiFi
>  Issue Type: Task
>Affects Versions: 2.0.0-M2
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> found in nifi-nar-bundles pom
> <dependency>
>     <groupId>com.maxmind.geoip2</groupId>
>     <artifactId>geoip2</artifactId>
>     <version>4.2.0</version>
> </dependency>
> This should not be here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12998) nifi-nar-bundle has improper dependencies - the full source tree needs dependency cleanup and management

2024-04-05 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834171#comment-17834171
 ] 

Joe Witt commented on NIFI-12998:
-

In the new build the nifi-record lib is in the following places only.  The main 
one that is important is nifi-standard-services-api-nar, which all these 
things extend from in a nar sense.  So the new one looks a lot more correct.  
We need to review the ones that include it beyond the standard services api nar 
and see if they can go away.


{noformat}
~/Downloads/new-nifi.txt:569: 
./work/nar/extensions/nifi-couchbase-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
~/Downloads/new-nifi.txt:610: 
./work/nar/extensions/nifi-db-schema-registry-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
~/Downloads/new-nifi.txt:627: 
./work/nar/extensions/nifi-dbcp-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
~/Downloads/new-nifi.txt:1260: 
./work/nar/extensions/nifi-prometheus-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
~/Downloads/new-nifi.txt:1693: 
./work/nar/extensions/nifi-standard-services-api-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
~/Downloads/new-nifi.txt:1787: 
./work/nar/extensions/nifi-workday-processors-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar

{noformat}


> nifi-nar-bundle has improper dependencies - the full source tree needs 
> dependency cleanup and management
> 
>
> Key: NIFI-12998
> URL: https://issues.apache.org/jira/browse/NIFI-12998
> Project: Apache NiFi
>  Issue Type: Task
>Affects Versions: 2.0.0-M2
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> found in nifi-nar-bundles pom
> <dependency>
>     <groupId>com.maxmind.geoip2</groupId>
>     <artifactId>geoip2</artifactId>
>     <version>4.2.0</version>
> </dependency>
> This should not be here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12998) nifi-nar-bundle has improper dependencies - the full source tree needs dependency cleanup and management

2024-04-05 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834167#comment-17834167
 ] 

Joe Witt commented on NIFI-12998:
-

In Old (not new)

{noformat}

./work/nar/extensions/nifi-apicurio-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-aws-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-azure-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-box-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-confluent-platform-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-dropbox-processors-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-elasticsearch-client-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-elasticsearch-restapi-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-enrich-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-gcp-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-geohash-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-jms-processors-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-jolt-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-kafka-2-6-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-lookup-services-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-mongodb-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-mongodb-services-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-mqtt-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-poi-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-py4j-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-record-serialization-services-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-record-sink-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-redis-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-registry-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-salesforce-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-scripting-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-site-to-site-reporting-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-slack-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-smb-client-api-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-smb-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-smb-smbj-client-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-standard-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-zendesk-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-zendesk-services-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar

./work/nar/extensions/nifi-py4j-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-python-framework-api-2.0.0-SNAPSHOT.jar
./work/nar/framework/nifi-framework-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-python-framework-api-2.0.0-SNAPSHOT.jar

{noformat}



[jira] [Commented] (NIFI-12998) nifi-nar-bundle has improper dependencies - the full source tree needs dependency cleanup and management

2024-04-05 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834166#comment-17834166
 ] 

Joe Witt commented on NIFI-12998:
-

In New (not old)

{noformat}

./work/nar/extensions/nifi-amqp-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-security-utils-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-distributed-cache-services-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-security-utils-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-email-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-security-utils-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-prometheus-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-security-utils-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-ssl-context-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-security-utils-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-websocket-processors-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-security-utils-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-websocket-services-api-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-security-utils-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-websocket-services-jetty-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-security-utils-api-2.0.0-SNAPSHOT.jar

./work/nar/extensions/nifi-amqp-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-asana-processors-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-asana-services-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-compress-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-couchbase-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-db-schema-registry-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-dbcp-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-distributed-cache-services-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-email-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-evtx-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-file-resource-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-framework-kubernetes-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-hazelcast-services-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-hl7-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-http-context-map-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-kerberos-credentials-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-kerberos-user-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-prometheus-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-proxy-configuration-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-server-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-single-user-iaa-providers-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-ssl-context-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-stateful-analysis-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-websocket-processors-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-websocket-services-jetty-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-windows-event-log-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar

[jira] [Commented] (NIFI-12998) nifi-nar-bundle has improper dependencies - the full source tree needs dependency cleanup and management

2024-04-05 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834160#comment-17834160
 ] 

Joe Witt commented on NIFI-12998:
-

To generate comparable output, after creating a build start NiFi.  Then in the 
NiFi bin root directory run:


{noformat}
find . -type f | grep jar | sort -u
{noformat}

Do that on both the old and new builds and you can compare the diffs; a small 
script for diffing the two listings is sketched below.
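
As a hedged illustration of the comparison step, the snippet below prints the one-sided differences between two such listings captured to text files; the names old-nifi.txt and new-nifi.txt are assumptions for the captured output, not files the build itself produces.

{code:python}
# Print the one-sided differences between two `find ... | sort -u` listings,
# mirroring the "In New (not old)" / "In Old (not new)" breakdowns in this thread.
import sys

def load_paths(file_name: str) -> set[str]:
    with open(file_name) as handle:
        return {line.strip() for line in handle if line.strip()}

def main(old_file: str, new_file: str) -> None:
    old_paths = load_paths(old_file)
    new_paths = load_paths(new_file)
    print("In New (not old)")
    for path in sorted(new_paths - old_paths):
        print(path)
    print()
    print("In Old (not new)")
    for path in sorted(old_paths - new_paths):
        print(path)

if __name__ == "__main__":
    # e.g. python compare_jar_listings.py old-nifi.txt new-nifi.txt
    main(sys.argv[1], sys.argv[2])
{code}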


> nifi-nar-bundle has improper dependencies - the full source tree needs 
> dependency cleanup and management
> 
>
> Key: NIFI-12998
> URL: https://issues.apache.org/jira/browse/NIFI-12998
> Project: Apache NiFi
>  Issue Type: Task
>Affects Versions: 2.0.0-M2
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> found in nifi-nar-bundles pom
> <dependency>
>     <groupId>com.maxmind.geoip2</groupId>
>     <artifactId>geoip2</artifactId>
>     <version>4.2.0</version>
> </dependency>
> This should not be here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12998) nifi-nar-bundle has improper dependencies - the full source tree needs dependency cleanup and management

2024-04-05 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834159#comment-17834159
 ] 

Joe Witt commented on NIFI-12998:
-

After the latest commit added to the WIP pull request, the build compiles and 
runs.  But there is some remaining discrepancy between the former build and the 
new build.  The size is roughly the same, but we need to resolve these 
differences.  Some look like gaps to solve and some look like improvements.


{noformat}
In New (not old)

./work/nar/extensions/nifi-amqp-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-security-utils-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-amqp-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-ssl-context-service-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-amqp-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-asana-processors-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-distributed-cache-client-service-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-asana-processors-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-asana-services-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-compress-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-couchbase-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-distributed-cache-client-service-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-couchbase-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-lookup-service-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-couchbase-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-serialization-service-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-couchbase-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-db-schema-registry-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-dbcp-service-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-db-schema-registry-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-json-schema-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-db-schema-registry-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-db-schema-registry-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-serialization-service-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-db-schema-registry-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-schema-registry-service-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-db-schema-registry-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-dbcp-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-dbcp-service-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-dbcp-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-kerberos-credentials-service-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-dbcp-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-kerberos-user-service-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-dbcp-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-serialization-service-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-dbcp-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-record-sink-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-dbcp-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-security-kerberos-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-dbcp-service-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-distributed-cache-services-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-distributed-cache-client-service-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-distributed-cache-services-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-security-utils-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-distributed-cache-services-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-ssl-context-service-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-distributed-cache-services-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-utils-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-email-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-oauth2-provider-api-2.0.0-SNAPSHOT.jar
./work/nar/extensions/nifi-email-nar-2.0.0-SNAPSHOT.nar-unpacked/NAR-INF/bundled-dependencies/nifi-security-utils-api-2.0.0-SNAPSHOT.jar

[jira] [Updated] (NIFI-12998) nifi-nar-bundle has improper dependencies - the full source tree needs dependency cleanup and management

2024-04-05 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12998:

Summary: nifi-nar-bundle has improper dependencies - the full source tree 
needs dependency cleanup and management  (was: maxmind geoip2 does not belong 
in nifi-nar-bundle pom)

> nifi-nar-bundle has improper dependencies - the full source tree needs 
> dependency cleanup and management
> 
>
> Key: NIFI-12998
> URL: https://issues.apache.org/jira/browse/NIFI-12998
> Project: Apache NiFi
>  Issue Type: Task
>Affects Versions: 2.0.0-M2
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> found in nifi-nar-bundles pom
> <dependency>
>     <groupId>com.maxmind.geoip2</groupId>
>     <artifactId>geoip2</artifactId>
>     <version>4.2.0</version>
> </dependency>
> This should not be here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13002) Restore compression level mistakenly removed in NIFI-12996

2024-04-04 Thread Joe Witt (Jira)
Joe Witt created NIFI-13002:
---

 Summary: Restore compression level mistakenly removed in NIFI-12996
 Key: NIFI-13002
 URL: https://issues.apache.org/jira/browse/NIFI-13002
 Project: Apache NiFi
  Issue Type: Task
Reporter: Joe Witt
Assignee: Joe Witt
 Fix For: 2.0.0-M3, 1.26.0


I removed the compression level because I misread the underlying code.  It needs 
to remain, so this JIRA puts it back in.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12998) maxmind geoip2 does not belong in nifi-nar-bundle pom

2024-04-03 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12998:

Fix Version/s: (was: 2.0.0-M3)

> maxmind geoip2 does not belong in nifi-nar-bundle pom
> -
>
> Key: NIFI-12998
> URL: https://issues.apache.org/jira/browse/NIFI-12998
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> found in nifi-nar-bundles pom
> <dependency>
>     <groupId>com.maxmind.geoip2</groupId>
>     <artifactId>geoip2</artifactId>
>     <version>4.2.0</version>
> </dependency>
> This should not be here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12998) maxmind geoip2 does not belong in nifi-nar-bundle pom

2024-04-03 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12998:

Affects Version/s: 2.0.0-M2

> maxmind geoip2 does not belong in nifi-nar-bundle pom
> -
>
> Key: NIFI-12998
> URL: https://issues.apache.org/jira/browse/NIFI-12998
> Project: Apache NiFi
>  Issue Type: Task
>Affects Versions: 2.0.0-M2
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> found in nifi-nar-bundles pom
> <dependency>
>     <groupId>com.maxmind.geoip2</groupId>
>     <artifactId>geoip2</artifactId>
>     <version>4.2.0</version>
> </dependency>
> This should not be here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12998) maxmind geoip2 does not belong in nifi-nar-bundle pom

2024-04-03 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12998:

Status: Patch Available  (was: Open)

> maxmind geoip2 does not belong in nifi-nar-bundle pom
> -
>
> Key: NIFI-12998
> URL: https://issues.apache.org/jira/browse/NIFI-12998
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
> Fix For: 2.0.0-M3
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> found in nifi-nar-bundles pom
> <dependency>
>     <groupId>com.maxmind.geoip2</groupId>
>     <artifactId>geoip2</artifactId>
>     <version>4.2.0</version>
> </dependency>
> This should not be here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12969) Under heavy load, nifi node unable to rejoin cluster, graph modified with temp funnel

2024-04-03 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12969:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Under heavy load, nifi node unable to rejoin cluster, graph modified with 
> temp funnel
> -
>
> Key: NIFI-12969
> URL: https://issues.apache.org/jira/browse/NIFI-12969
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.24.0, 2.0.0-M2
>Reporter: Nissim Shiman
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 2.0.0-M3, 1.26.0
>
> Attachments: nifi-app.log, simple_flow.png, 
> simple_flow_with_temp-funnel.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Under heavy load, if a node leaves the cluster (due to heartbeat time out), 
> many times it is unable to rejoin the cluster.
> The nodes' graph will have been modified with a temp-funnel as well.
> Appears to be some sort of [timing 
> issue|https://github.com/apache/nifi/blob/rel/nifi-2.0.0-M2/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/connectable/StandardConnection.java#L298]
>  # To reproduce, on a nifi cluster of three nodes, set up:
> 2 GenerateFlowFile processors -> PG
> Inside PG:
> inputPort -> UpdateAttribute
>  # Keep all defaults except for the following:
> For UpdateAttribute terminate the success relationship
> One of the GenerateFlowFile processors can be disabled,
> the other one should have Run Schedule to be 0 min (this will allow for the 
> heavy load)
>  # In nifi.properties (on all 3 nodes) to allow for nodes to fall out of the 
> cluster, set: nifi.cluster.protocol.heartbeat.interval=2 sec  (default is 5) 
> nifi.cluster.protocol.heartbeat.missable.max=1   (default is 8)
> Restart nifi. Start flow. The nodes will quickly fall out and rejoin cluster. 
> After a few minutes one will likely not be able to rejoin.  The graph for 
> that node will have the disabled GenerateFlowFile now pointing to a funnel (a 
> temp-funnel) instead of the PG
> Stack trace on that nodes nifi-app.log will look like this: (this is from 
> 2.0.0-M2):
> {code:java}
> 2024-03-28 13:55:19,395 INFO [Reconnect to Cluster] 
> o.a.nifi.controller.StandardFlowService Node disconnected due to Failed to 
> properly handle Reconnection request due to org.apache.nifi.control
> ler.serialization.FlowSynchronizationException: Failed to connect node to 
> cluster because local flow controller partially updated. Administrator should 
> disconnect node and review flow for corrup
> tion.
> 2024-03-28 13:55:19,395 ERROR [Reconnect to Cluster] 
> o.a.nifi.controller.StandardFlowService Handling reconnection request failed 
> due to: org.apache.nifi.controller.serialization.FlowSynchroniza
> tionException: Failed to connect node to cluster because local flow 
> controller partially updated. Administrator should disconnect node and review 
> flow for corruption.
> org.apache.nifi.controller.serialization.FlowSynchronizationException: Failed 
> to connect node to cluster because local flow controller partially updated. 
> Administrator should disconnect node and
>  review flow for corruption.
> at 
> org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:985)
> at 
> org.apache.nifi.controller.StandardFlowService.handleReconnectionRequest(StandardFlowService.java:655)
> at 
> org.apache.nifi.controller.StandardFlowService$1.run(StandardFlowService.java:384)
> at java.base/java.lang.Thread.run(Thread.java:1583)
> Caused by: 
> org.apache.nifi.controller.serialization.FlowSynchronizationException: 
> java.lang.IllegalStateException: Cannot change destination of Connection 
> because FlowFiles from this Connection
> are currently held by LocalPort[id=99213c00-78ca-4848-112f-5454cc20656b, 
> type=INPUT_PORT, name=inputPort, group=innerPG]
> at 
> org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.synchronizeFlow(VersionedFlowSynchronizer.java:472)
> at 
> org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.sync(VersionedFlowSynchronizer.java:223)
> at 
> org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1740)
> at 
> org.apache.nifi.persistence.StandardFlowConfigurationDAO.load(StandardFlowConfigurationDAO.java:91)
> at 
> org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:805)
> at 
> org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:954)
> ... 3 common frames omitted
> Caused by: java.lang.IllegalStateException: Cannot change destination of 
> Connection because FlowFiles from 

[jira] [Created] (NIFI-12998) maxmind geoip2 does not belong in nifi-nar-bundle pom

2024-04-03 Thread Joe Witt (Jira)
Joe Witt created NIFI-12998:
---

 Summary: maxmind geoip2 does not belong in nifi-nar-bundle pom
 Key: NIFI-12998
 URL: https://issues.apache.org/jira/browse/NIFI-12998
 Project: Apache NiFi
  Issue Type: Task
Reporter: Joe Witt
Assignee: Joe Witt
 Fix For: 2.0.0-M3


found in nifi-nar-bundles pom

  
<dependency>
    <groupId>com.maxmind.geoip2</groupId>
    <artifactId>geoip2</artifactId>
    <version>4.2.0</version>
</dependency>

This should not be here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12969) Under heavy load, nifi node unable to rejoin cluster, graph modified with temp funnel

2024-04-03 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17833713#comment-17833713
 ] 

Joe Witt commented on NIFI-12969:
-

Thanks [~pgyori].  Will flag this as a key issue to look into for 1.26 and 
2.0.0-M3.  If triaged and found to be manageable we can relax it, but otherwise 
it will get some attention before we release.

> Under heavy load, nifi node unable to rejoin cluster, graph modified with 
> temp funnel
> -
>
> Key: NIFI-12969
> URL: https://issues.apache.org/jira/browse/NIFI-12969
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.24.0, 2.0.0-M2
>Reporter: Nissim Shiman
>Assignee: Nissim Shiman
>Priority: Critical
> Fix For: 2.0.0-M3, 1.26.0
>
> Attachments: nifi-app.log, simple_flow.png, 
> simple_flow_with_temp-funnel.png
>
>
> Under heavy load, if a node leaves the cluster (due to heartbeat time out), 
> many times it is unable to rejoin the cluster.
> The nodes' graph will have been modified with a temp-funnel as well.
> Appears to be some sort of [timing 
> issue|https://github.com/apache/nifi/blob/rel/nifi-2.0.0-M2/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/connectable/StandardConnection.java#L298]
>  # To reproduce, on a nifi cluster of three nodes, set up:
> 2 GenerateFlowFile processors -> PG
> Inside PG:
> inputPort -> UpdateAttribute
>  # Keep all defaults except for the following:
> For UpdateAttribute terminate the success relationship
> One of the GenerateFlowFile processors can be disabled,
> the other one should have Run Schedule to be 0 min (this will allow for the 
> heavy load)
>  # In nifi.properties (on all 3 nodes) to allow for nodes to fall out of the 
> cluster, set: nifi.cluster.protocol.heartbeat.interval=2 sec  (default is 5) 
> nifi.cluster.protocol.heartbeat.missable.max=1   (default is 8)
> Restart nifi. Start flow. The nodes will quickly fall out and rejoin cluster. 
> After a few minutes one will likely not be able to rejoin.  The graph for 
> that node will have the disabled GenerateFlowFile now pointing to a funnel (a 
> temp-funnel) instead of the PG
> Stack trace on that nodes nifi-app.log will look like this: (this is from 
> 2.0.0-M2):
> {code:java}
> 2024-03-28 13:55:19,395 INFO [Reconnect to Cluster] 
> o.a.nifi.controller.StandardFlowService Node disconnected due to Failed to 
> properly handle Reconnection request due to org.apache.nifi.control
> ler.serialization.FlowSynchronizationException: Failed to connect node to 
> cluster because local flow controller partially updated. Administrator should 
> disconnect node and review flow for corrup
> tion.
> 2024-03-28 13:55:19,395 ERROR [Reconnect to Cluster] 
> o.a.nifi.controller.StandardFlowService Handling reconnection request failed 
> due to: org.apache.nifi.controller.serialization.FlowSynchroniza
> tionException: Failed to connect node to cluster because local flow 
> controller partially updated. Administrator should disconnect node and review 
> flow for corruption.
> org.apache.nifi.controller.serialization.FlowSynchronizationException: Failed 
> to connect node to cluster because local flow controller partially updated. 
> Administrator should disconnect node and
>  review flow for corruption.
> at 
> org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:985)
> at 
> org.apache.nifi.controller.StandardFlowService.handleReconnectionRequest(StandardFlowService.java:655)
> at 
> org.apache.nifi.controller.StandardFlowService$1.run(StandardFlowService.java:384)
> at java.base/java.lang.Thread.run(Thread.java:1583)
> Caused by: 
> org.apache.nifi.controller.serialization.FlowSynchronizationException: 
> java.lang.IllegalStateException: Cannot change destination of Connection 
> because FlowFiles from this Connection
> are currently held by LocalPort[id=99213c00-78ca-4848-112f-5454cc20656b, 
> type=INPUT_PORT, name=inputPort, group=innerPG]
> at 
> org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.synchronizeFlow(VersionedFlowSynchronizer.java:472)
> at 
> org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.sync(VersionedFlowSynchronizer.java:223)
> at 
> org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1740)
> at 
> org.apache.nifi.persistence.StandardFlowConfigurationDAO.load(StandardFlowConfigurationDAO.java:91)
> at 
> org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:805)
> at 
> org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:954)
> ... 3 common frames 

[jira] [Updated] (NIFI-12969) Under heavy load, nifi node unable to rejoin cluster, graph modified with temp funnel

2024-04-03 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12969:

Fix Version/s: 2.0.0-M3
   1.26.0

> Under heavy load, nifi node unable to rejoin cluster, graph modified with 
> temp funnel
> -
>
> Key: NIFI-12969
> URL: https://issues.apache.org/jira/browse/NIFI-12969
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.24.0, 2.0.0-M2
>Reporter: Nissim Shiman
>Assignee: Nissim Shiman
>Priority: Major
> Fix For: 2.0.0-M3, 1.26.0
>
> Attachments: nifi-app.log, simple_flow.png, 
> simple_flow_with_temp-funnel.png
>
>
> Under heavy load, if a node leaves the cluster (due to heartbeat time out), 
> many times it is unable to rejoin the cluster.
> The nodes' graph will have been modified with a temp-funnel as well.
> Appears to be some sort of [timing 
> issue|https://github.com/apache/nifi/blob/rel/nifi-2.0.0-M2/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/connectable/StandardConnection.java#L298]
>  # To reproduce, on a nifi cluster of three nodes, set up:
> 2 GenerateFlowFile processors -> PG
> Inside PG:
> inputPort -> UpdateAttribute
>  # Keep all defaults except for the following:
> For UpdateAttribute terminate the success relationship
> One of the GenerateFlowFile processors can be disabled,
> the other one should have Run Schedule to be 0 min (this will allow for the 
> heavy load)
>  # In nifi.properties (on all 3 nodes) to allow for nodes to fall out of the 
> cluster, set: nifi.cluster.protocol.heartbeat.interval=2 sec  (default is 5) 
> nifi.cluster.protocol.heartbeat.missable.max=1   (default is 8)
> Restart nifi. Start flow. The nodes will quickly fall out and rejoin cluster. 
> After a few minutes one will likely not be able to rejoin.  The graph for 
> that node will have the disabled GenerateFlowFile now pointing to a funnel (a 
> temp-funnel) instead of the PG
> Stack trace on that nodes nifi-app.log will look like this: (this is from 
> 2.0.0-M2):
> {code:java}
> 2024-03-28 13:55:19,395 INFO [Reconnect to Cluster] 
> o.a.nifi.controller.StandardFlowService Node disconnected due to Failed to 
> properly handle Reconnection request due to org.apache.nifi.control
> ler.serialization.FlowSynchronizationException: Failed to connect node to 
> cluster because local flow controller partially updated. Administrator should 
> disconnect node and review flow for corrup
> tion.
> 2024-03-28 13:55:19,395 ERROR [Reconnect to Cluster] 
> o.a.nifi.controller.StandardFlowService Handling reconnection request failed 
> due to: org.apache.nifi.controller.serialization.FlowSynchroniza
> tionException: Failed to connect node to cluster because local flow 
> controller partially updated. Administrator should disconnect node and review 
> flow for corruption.
> org.apache.nifi.controller.serialization.FlowSynchronizationException: Failed 
> to connect node to cluster because local flow controller partially updated. 
> Administrator should disconnect node and
>  review flow for corruption.
> at 
> org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:985)
> at 
> org.apache.nifi.controller.StandardFlowService.handleReconnectionRequest(StandardFlowService.java:655)
> at 
> org.apache.nifi.controller.StandardFlowService$1.run(StandardFlowService.java:384)
> at java.base/java.lang.Thread.run(Thread.java:1583)
> Caused by: 
> org.apache.nifi.controller.serialization.FlowSynchronizationException: 
> java.lang.IllegalStateException: Cannot change destination of Connection 
> because FlowFiles from this Connection
> are currently held by LocalPort[id=99213c00-78ca-4848-112f-5454cc20656b, 
> type=INPUT_PORT, name=inputPort, group=innerPG]
> at 
> org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.synchronizeFlow(VersionedFlowSynchronizer.java:472)
> at 
> org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.sync(VersionedFlowSynchronizer.java:223)
> at 
> org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1740)
> at 
> org.apache.nifi.persistence.StandardFlowConfigurationDAO.load(StandardFlowConfigurationDAO.java:91)
> at 
> org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:805)
> at 
> org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:954)
> ... 3 common frames omitted
> Caused by: java.lang.IllegalStateException: Cannot change destination of 
> Connection because FlowFiles from this Connection are currently held by 
> 

[jira] [Updated] (NIFI-12969) Under heavy load, nifi node unable to rejoin cluster, graph modified with temp funnel

2024-04-03 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12969:

Priority: Critical  (was: Major)

> Under heavy load, nifi node unable to rejoin cluster, graph modified with 
> temp funnel
> -
>
> Key: NIFI-12969
> URL: https://issues.apache.org/jira/browse/NIFI-12969
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.24.0, 2.0.0-M2
>Reporter: Nissim Shiman
>Assignee: Nissim Shiman
>Priority: Critical
> Fix For: 2.0.0-M3, 1.26.0
>
> Attachments: nifi-app.log, simple_flow.png, 
> simple_flow_with_temp-funnel.png
>
>
> Under heavy load, if a node leaves the cluster (due to heartbeat time out), 
> many times it is unable to rejoin the cluster.
> The nodes' graph will have been modified with a temp-funnel as well.
> Appears to be some sort of [timing 
> issue|https://github.com/apache/nifi/blob/rel/nifi-2.0.0-M2/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/connectable/StandardConnection.java#L298]
>  # To reproduce, on a nifi cluster of three nodes, set up:
> 2 GenerateFlowFile processors -> PG
> Inside PG:
> inputPort -> UpdateAttribute
>  # Keep all defaults except for the following:
> For UpdateAttribute terminate the success relationship
> One of the GenerateFlowFile processors can be disabled,
> the other one should have Run Schedule to be 0 min (this will allow for the 
> heavy load)
>  # In nifi.properties (on all 3 nodes) to allow for nodes to fall out of the 
> cluster, set: nifi.cluster.protocol.heartbeat.interval=2 sec  (default is 5) 
> nifi.cluster.protocol.heartbeat.missable.max=1   (default is 8)
> Restart nifi. Start flow. The nodes will quickly fall out and rejoin cluster. 
> After a few minutes one will likely not be able to rejoin.  The graph for 
> that node will have the disabled GenerateFlowFile now pointing to a funnel (a 
> temp-funnel) instead of the PG
> Stack trace on that nodes nifi-app.log will look like this: (this is from 
> 2.0.0-M2):
> {code:java}
> 2024-03-28 13:55:19,395 INFO [Reconnect to Cluster] 
> o.a.nifi.controller.StandardFlowService Node disconnected due to Failed to 
> properly handle Reconnection request due to org.apache.nifi.control
> ler.serialization.FlowSynchronizationException: Failed to connect node to 
> cluster because local flow controller partially updated. Administrator should 
> disconnect node and review flow for corrup
> tion.
> 2024-03-28 13:55:19,395 ERROR [Reconnect to Cluster] 
> o.a.nifi.controller.StandardFlowService Handling reconnection request failed 
> due to: org.apache.nifi.controller.serialization.FlowSynchroniza
> tionException: Failed to connect node to cluster because local flow 
> controller partially updated. Administrator should disconnect node and review 
> flow for corruption.
> org.apache.nifi.controller.serialization.FlowSynchronizationException: Failed 
> to connect node to cluster because local flow controller partially updated. 
> Administrator should disconnect node and
>  review flow for corruption.
> at 
> org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:985)
> at 
> org.apache.nifi.controller.StandardFlowService.handleReconnectionRequest(StandardFlowService.java:655)
> at 
> org.apache.nifi.controller.StandardFlowService$1.run(StandardFlowService.java:384)
> at java.base/java.lang.Thread.run(Thread.java:1583)
> Caused by: 
> org.apache.nifi.controller.serialization.FlowSynchronizationException: 
> java.lang.IllegalStateException: Cannot change destination of Connection 
> because FlowFiles from this Connection
> are currently held by LocalPort[id=99213c00-78ca-4848-112f-5454cc20656b, 
> type=INPUT_PORT, name=inputPort, group=innerPG]
> at 
> org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.synchronizeFlow(VersionedFlowSynchronizer.java:472)
> at 
> org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.sync(VersionedFlowSynchronizer.java:223)
> at 
> org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1740)
> at 
> org.apache.nifi.persistence.StandardFlowConfigurationDAO.load(StandardFlowConfigurationDAO.java:91)
> at 
> org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:805)
> at 
> org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:954)
> ... 3 common frames omitted
> Caused by: java.lang.IllegalStateException: Cannot change destination of 
> Connection because FlowFiles from this Connection are currently held by 
> 

[jira] [Commented] (NIFI-12996) nifi-shared-bom construct broke classloading for optional dependencies of common-compress including zstd

2024-04-02 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1788#comment-1788
 ] 

Joe Witt commented on NIFI-12996:
-

[~exceptionfactory] mentions that the same problem exists with the Netty 
references; for that one the best answer might simply be removal from the 
shared-services nar/bom, letting the nars pull what they need for now.

> nifi-shared-bom construct broke classloading for optional dependencies of 
> common-compress including zstd
> 
>
> Key: NIFI-12996
> URL: https://issues.apache.org/jira/browse/NIFI-12996
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.24.0, 1.25.0, 2.0.0-M2
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Critical
> Fix For: 2.0.0-M3, 1.26.0
>
>
> In Apache Slack a user reported this was broken for them in NiFi 1.25.0 but 
> used to work in 1.19.1.
> On apache 2.0 m2+latest I can reproduce this by simply trying to use it at 
> all.  First question is do we not have unit tests but anyway this needs to be 
> evaluated as to why it isn't working.
> {noformat}
> 2024-04-02 13:30:12,514 ERROR [Timer-Driven Process Thread-3] 
> o.a.n.p.standard.CompressContent 
> CompressContent[id=a07f7ae5-018e-1000-c82c-2e2143f9c320] Processing halted: 
> yielding [1 sec]
> java.lang.NoClassDefFoundError: com/github/luben/zstd/ZstdOutputStream
> at 
> org.apache.commons.compress.compressors.zstandard.ZstdCompressorOutputStream.<init>(ZstdCompressorOutputStream.java:57)
> at 
> org.apache.nifi.processors.standard.CompressContent$1.process(CompressContent.java:353)
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:3426)
> at 
> org.apache.nifi.processors.standard.CompressContent.onTrigger(CompressContent.java:300)
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1274)
> at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:244)
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:102)
> at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
> at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
> at 
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:358)
> at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
> at java.base/java.lang.Thread.run(Thread.java:1583)
> Caused by: java.lang.ClassNotFoundException: 
> com.github.luben.zstd.ZstdOutputStream
> at 
> java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:445)
> at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:593)
> at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:526)
> ... 15 common frames omitted
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12996) nifi-shared-bom construct broke classloading for optional dependencies of common-compress including zstd

2024-04-02 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1785#comment-1785
 ] 

Joe Witt commented on NIFI-12996:
-

First noted that we pull in zstd-jni 1.5.5-6 whereas commons-compress expects 
1.5.6-1.  Though fixing that did not fix the problem.

Attaching a debugger to a running NiFi where this problem can be reproduced 
showed that the 'CompressContent' class is tied to the proper nifi-standard-nar, 
which is correct and is indeed where zstd-jni lives.  However, looking at the 
ZstdCompressorOutputStream class shows it is tied to the parent nar, 
'nifi-shared-services-nar', which does NOT have zstd-jni, and we are parent-first 
loading.  This is why we're seeing the issue.

Solution:
(A) Remove commons-compress from the shared-services-nar (OR)
(B) Ensure optional dependencies of commons-compress are ALSO promoted to the 
shared-services-nar along with commons-compress

Also need to ensure these optional dependencies are well maintained.

Also need to review for any other such cases of promoted dependencies which have 
optional deps like this, as the same problem would occur.  A rough scan for this 
pattern is sketched below.
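
A hedged illustration of that review, assuming a scan of the unpacked build output is a reasonable first pass: it flags NARs whose bundled dependencies contain commons-compress but not zstd-jni, using the work/nar layout shown in the listings earlier in this thread.  The function name and default paths are illustrative, not existing project tooling, and a real review would still need the actual list of optional dependencies per promoted library.

{code:python}
# Flag unpacked NARs that bundle a library without one of its optional
# dependencies (here commons-compress without zstd-jni).
from pathlib import Path

def nars_missing_optional(nar_root: str, library: str, optional_dep: str) -> list[str]:
    flagged = []
    for deps_dir in Path(nar_root).glob("*/NAR-INF/bundled-dependencies"):
        jar_names = [jar.name for jar in deps_dir.glob("*.jar")]
        has_library = any(library in name for name in jar_names)
        has_optional = any(optional_dep in name for name in jar_names)
        if has_library and not has_optional:
            # deps_dir is <nar>-unpacked/NAR-INF/bundled-dependencies
            flagged.append(deps_dir.parent.parent.name)
    return flagged

if __name__ == "__main__":
    for nar in nars_missing_optional("./work/nar/extensions", "commons-compress", "zstd-jni"):
        print(nar)
{code}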

> nifi-shared-bom construct broke classloading for optional dependencies of 
> common-compress including zstd
> 
>
> Key: NIFI-12996
> URL: https://issues.apache.org/jira/browse/NIFI-12996
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.24.0, 1.25.0, 2.0.0-M2
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Critical
> Fix For: 2.0.0-M3, 1.26.0
>
>
> In Apache Slack a user reported this was broken for them in NiFi 1.25.0 but 
> used to work in 1.19.1.
> On apache 2.0 m2+latest I can reproduce this by simply trying to use it at 
> all.  First question is do we not have unit tests but anyway this needs to be 
> evaluated as to why it isn't working.
> {noformat}
> 2024-04-02 13:30:12,514 ERROR [Timer-Driven Process Thread-3] 
> o.a.n.p.standard.CompressContent 
> CompressContent[id=a07f7ae5-018e-1000-c82c-2e2143f9c320] Processing halted: 
> yielding [1 sec]
> java.lang.NoClassDefFoundError: com/github/luben/zstd/ZstdOutputStream
> at 
> org.apache.commons.compress.compressors.zstandard.ZstdCompressorOutputStream.<init>(ZstdCompressorOutputStream.java:57)
> at 
> org.apache.nifi.processors.standard.CompressContent$1.process(CompressContent.java:353)
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:3426)
> at 
> org.apache.nifi.processors.standard.CompressContent.onTrigger(CompressContent.java:300)
> at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1274)
> at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:244)
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:102)
> at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
> at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
> at 
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:358)
> at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
> at java.base/java.lang.Thread.run(Thread.java:1583)
> Caused by: java.lang.ClassNotFoundException: 
> com.github.luben.zstd.ZstdOutputStream
> at 
> java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:445)
> at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:593)
> at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:526)
> ... 15 common frames omitted
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12996) nifi-shared-bom construct broke classloading for optional dependencies of common-compress including zstd

2024-04-02 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12996:

Priority: Critical  (was: Major)

> nifi-shared-bom construct broke classloading for optional dependencies of 
> common-compress including zstd
> 
>
> Key: NIFI-12996
> URL: https://issues.apache.org/jira/browse/NIFI-12996
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.24.0, 1.25.0, 2.0.0-M2
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Critical
> Fix For: 2.0.0-M3, 1.26.0
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12996) nifi-shared-bom construct broke classloading for optional dependencies of common-compress including zstd

2024-04-02 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12996:

Fix Version/s: 2.0.0-M3
   1.26.0

> nifi-shared-bom construct broke classloading for optional dependencies of 
> common-compress including zstd
> 
>
> Key: NIFI-12996
> URL: https://issues.apache.org/jira/browse/NIFI-12996
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.24.0, 1.25.0, 2.0.0-M2
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
> Fix For: 2.0.0-M3, 1.26.0
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12996) nifi-shared-bom construct broke classloading for optional dependencies of common-compress including zstd

2024-04-02 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12996:

Affects Version/s: 1.24.0

> nifi-shared-bom construct broke classloading for optional dependencies of 
> common-compress including zstd
> 
>
> Key: NIFI-12996
> URL: https://issues.apache.org/jira/browse/NIFI-12996
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.24.0, 1.25.0, 2.0.0-M2
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12996) nifi-shared-bom construct broke classloading for optional dependencies of common-compress including zstd

2024-04-02 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12996:

Issue Type: Bug  (was: Improvement)

> nifi-shared-bom construct broke classloading for optional dependencies of 
> common-compress including zstd
> 
>
> Key: NIFI-12996
> URL: https://issues.apache.org/jira/browse/NIFI-12996
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.24.0, 1.25.0, 2.0.0-M2
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
> Fix For: 2.0.0-M3, 1.26.0
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12996) nifi-shared-bom construct broke classloading for optional dependencies of common-compress including zstd

2024-04-02 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12996:

Summary: nifi-shared-bom construct broke classloading for optional 
dependencies of common-compress including zstd  (was: CompressContent Zstd 
compression and decompression appears broken with missing classes)

> nifi-shared-bom construct broke classloading for optional dependencies of 
> common-compress including zstd
> 
>
> Key: NIFI-12996
> URL: https://issues.apache.org/jira/browse/NIFI-12996
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.25.0, 2.0.0-M2
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-12996) CompressContent Zstd compression and decompression appears broken with missing classes

2024-04-02 Thread Joe Witt (Jira)
Joe Witt created NIFI-12996:
---

 Summary: CompressContent Zstd compression and decompression 
appears broken with missing classes
 Key: NIFI-12996
 URL: https://issues.apache.org/jira/browse/NIFI-12996
 Project: Apache NiFi
  Issue Type: Improvement
Affects Versions: 2.0.0-M2, 1.25.0
Reporter: Joe Witt
Assignee: Joe Witt


In Apache Slack a user reported this was broken for them in NiFi 1.25.0 but 
used to work in 1.19.1.

On Apache NiFi 2.0 M2 plus latest I can reproduce this simply by trying to use it at 
all.  The first question is whether we lack unit tests here, but either way this needs 
to be evaluated to determine why it isn't working.


{noformat}
2024-04-02 13:30:12,514 ERROR [Timer-Driven Process Thread-3] 
o.a.n.p.standard.CompressContent 
CompressContent[id=a07f7ae5-018e-1000-c82c-2e2143f9c320] Processing halted: 
yielding [1 sec]
java.lang.NoClassDefFoundError: com/github/luben/zstd/ZstdOutputStream
at 
org.apache.commons.compress.compressors.zstandard.ZstdCompressorOutputStream.(ZstdCompressorOutputStream.java:57)
at 
org.apache.nifi.processors.standard.CompressContent$1.process(CompressContent.java:353)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:3426)
at 
org.apache.nifi.processors.standard.CompressContent.onTrigger(CompressContent.java:300)
at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1274)
at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:244)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:102)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
at 
java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:358)
at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
at java.base/java.lang.Thread.run(Thread.java:1583)
Caused by: java.lang.ClassNotFoundException: 
com.github.luben.zstd.ZstdOutputStream
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:445)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:593)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:526)
... 15 common frames omitted
{noformat}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-5894) FTPTransfer NullPointerException - FTPFile.getTimestamp() returns null

2024-04-02 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-5894:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

+1 merged to main

> FTPTransfer NullPointerException - FTPFile.getTimestamp() returns null
> --
>
> Key: NIFI-5894
> URL: https://issues.apache.org/jira/browse/NIFI-5894
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: henning ottesen
>Assignee: Eric Ulicny
>Priority: Major
> Fix For: 2.0.0-M3
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> 2018-12-13 10:34:07,405 WARN [Timer-Driven Process Thread-2] 
> o.a.n.controller.tasks.ConnectableTask Administratively Yielding 
> ListFTP[id=a6cee9c8-0167-1000-2bc2-5e69d2e33af5] due to uncaught Exception: 
> java.lang.NullPointerException java.lang.NullPointerException: null
>   at 
> org.apache.nifi.processors.standard.util.FTPTransfer.newFileInfo(FTPTransfer.java:305)
>   at 
> org.apache.nifi.processors.standard.util.FTPTransfer.getListing(FTPTransfer.java:270)
>   at 
> org.apache.nifi.processors.standard.util.FTPTransfer.getListing(FTPTransfer.java:260)
>   at 
> org.apache.nifi.processors.standard.util.FTPTransfer.getListing(FTPTransfer.java:191)
>   at 
> org.apache.nifi.processors.standard.ListFileTransfer.performListing(ListFileTransfer.java:106)
>   at 
> org.apache.nifi.processor.util.list.AbstractListProcessor.listByTrackingTimestamps(AbstractListProcessor.java:471)
>   at 
> org.apache.nifi.processor.util.list.AbstractListProcessor.onTrigger(AbstractListProcessor.java:413)
>   at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>   at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
>   at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
>   at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-5894) FTPTransfer NullPointerException - FTPFile.getTimestamp() returns null

2024-04-02 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-5894:
---
Fix Version/s: 2.0.0-M3

> FTPTransfer NullPointerException - FTPFile.getTimestamp() returns null
> --
>
> Key: NIFI-5894
> URL: https://issues.apache.org/jira/browse/NIFI-5894
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: henning ottesen
>Assignee: Eric Ulicny
>Priority: Major
> Fix For: 2.0.0-M3
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-5894) FTPTransfer NullPointerException - FTPFile.getTimestamp() returns null

2024-04-02 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17833302#comment-17833302
 ] 

Joe Witt commented on NIFI-5894:


[~Eulicny] We feel having waited nearly 6 years for this is enough :).  After a 
full clean build I will merge now.

In all seriousness, sorry for not seeing/engaging with this earlier.  It seems like 
a very reasonable improvement for what would be a super annoying case to hit, 
where the FTP server or client library can't parse the modification timestamp.
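
For context, here is a minimal null-safe sketch of the kind of handling such a change 
introduces; it is not the exact merged code.  It assumes Commons Net's FTPFile, whose 
getTimestamp() may return null when the listing's modification time can't be parsed, 
and the helper class name and the 0L fallback are illustrative choices only.

{code:java}
import java.util.Calendar;
import org.apache.commons.net.ftp.FTPFile;

final class FtpTimestampSupport {

    private FtpTimestampSupport() {
    }

    // Returns the file modification time in epoch milliseconds, falling back to 0L
    // when the server or client library could not parse a timestamp and
    // FTPFile.getTimestamp() therefore returns null. The real change may pick a
    // different fallback; the point is to avoid dereferencing a null Calendar.
    static long toLastModified(final FTPFile file) {
        final Calendar timestamp = file.getTimestamp();
        return timestamp == null ? 0L : timestamp.getTimeInMillis();
    }
}
{code}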



> FTPTransfer NullPointerException - FTPFile.getTimestamp() returns null
> --
>
> Key: NIFI-5894
> URL: https://issues.apache.org/jira/browse/NIFI-5894
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.8.0
>Reporter: henning ottesen
>Assignee: Eric Ulicny
>Priority: Major
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12971) Provide a utility to detect leaked ProcessSession objects in unit tests or the UI

2024-03-29 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17832292#comment-17832292
 ] 

Joe Witt commented on NIFI-12971:
-

Updated the ticket from 'bug' to 'improvement' as this is currently functioning as 
designed.  The idea noted here is good to consider, provided a very efficient 
implementation is used.  I also updated the description of the JIRA to reflect what 
this aims to do so it doesn't create FUD for others.
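
For reference, a hedged sketch of the explicit session handling the ticket below says 
is missing.  It reuses the hypothetical LeakFlowFile example and its REL_SUCCESS 
relationship from the issue description (the class name here is made up) and is not 
framework code.

{code:java}
import java.util.Set;

import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.AbstractSessionFactoryProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.ProcessSessionFactory;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;

public class NonLeakingFlowFile extends AbstractSessionFactoryProcessor {

    public static final Relationship REL_SUCCESS = new Relationship.Builder()
            .name("success")
            .description("All FlowFiles are routed to this relationship.")
            .build();

    @Override
    public Set<Relationship> getRelationships() {
        return Set.of(REL_SUCCESS);
    }

    @Override
    public void onTrigger(ProcessContext context, ProcessSessionFactory sessionFactory)
            throws ProcessException {
        final ProcessSession session = sessionFactory.createSession();
        try {
            final FlowFile flowFile = session.get();
            if (flowFile == null) {
                // Nothing was pulled, so there is nothing to commit.
                return;
            }
            session.transfer(flowFile, REL_SUCCESS);
            // Committing (or rolling back) is exactly what the leaky example omits.
            session.commitAsync();
        } catch (final Throwable t) {
            session.rollback();
            throw t;
        }
    }
}
{code}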

> Provide a utility to detect leaked ProcessSession objects in unit tests or 
> the UI
> -
>
> Key: NIFI-12971
> URL: https://issues.apache.org/jira/browse/NIFI-12971
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.25.0, 2.0.0-M2
>Reporter: endzeit
>Priority: Major
>
> When developing processors for NiFi, developers need to implement 
> [Processor|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/processor/Processor.html].
> Most often this is done by extending 
> [AbstractProcessor|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/processor/AbstractProcessor.html]
>  which ensures that the 
> [ProcessSession|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/processor/ProcessSession.html]
>  used is either committed or, if that's not possible, rolled back.
> In cases where the developer needs more control over session management, they 
> might extend from 
> [AbstractSessionFactoryProcessor|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/processor/AbstractSessionFactoryProcessor.html]
>  instead, which allows to create and handle {{ProcessSessions}} on their own 
> terms.
> When using the latter, developers need to ensure they handle all sessions 
> created gracefully, that is, to commit or roll back all sessions they create, 
> like {{AbstractProcessor}} ensures.
> However, failing to do so may lead to unnoticed leakage / loss of 
> [FlowFile|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/flowfile/FlowFile.html]s
>  and their associated data. 
> While data might be recovered from provenance, users are most likely not even 
> aware of the data loss, as 
> there won't be a bulletin visible in the UI indicating data loss due to no 
> Exception occurring or an error being logged.
> The following is a minimal example, which reproduces the problem. All 
> {{FlowFiles}} that enter the processor leak and eventually get lost when the 
> processor is shut down.
> {code:java}
> @InputRequirement(INPUT_REQUIRED)
> public class LeakFlowFile extends AbstractSessionFactoryProcessor {
> public static final Relationship REL_SUCCESS = new Relationship.Builder()
> .name("success")
> .description("All FlowFiles are routed to this relationship.")
> .build();
> private static final Set<Relationship> RELATIONSHIPS = 
> Set.of(REL_SUCCESS);
> @Override
> public Set<Relationship> getRelationships() {
> return RELATIONSHIPS;
> }
> @Override
> public void onTrigger(ProcessContext context, ProcessSessionFactory 
> sessionFactory) throws ProcessException {
> ProcessSession session = sessionFactory.createSession();
> FlowFile flowFile = session.get();
> if (flowFile == null) {
> return;
> }
> session.transfer(flowFile, REL_SUCCESS);
> // whoops, no commit or rollback
> }
> } {code}
> While the issue is quite obvious in this example, it might not be for more 
> complex processors, e.g. when based on 
> [BinFiles|https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-extension-utils/nifi-bin-manager/src/main/java/org/apache/nifi/processor/util/bin/BinFiles.java].
>  In case a developer fails to commit / roll back the session in 
> {{{}processBin{}}}, the same behaviour can be observed.
> The behavior also is not made visible by tests. The following test passes, 
> even though the session has not been committed (or rolled back).
> {code:java}
> class LeakFlowFileTest {
> private final TestRunner testRunner = 
> TestRunners.newTestRunner(LeakFlowFile.class);
> @Test
> void doesNotDetectLeak() {
> testRunner.enqueue("some data");
> testRunner.run();
> testRunner.assertAllFlowFilesTransferred(LeakFlowFile.REL_SUCCESS, 1);
> }
> } {code}
> 
> I would like to propose enhancements to NiFi in order to ease detection of 
> such implementation faults or even confine the harm they might incur.
> One approach is to extend the capabilities of TestRunner such that on 
> shutdown of a tested processor, it checks whether all sessions that were 
> created during the test and had a change associated with them, e.g. pulling a 
> FlowFile or adjusting state, do not have 

[jira] [Updated] (NIFI-12971) Provide a utility to detect leaked ProcessSession objects in unit tests or the UI

2024-03-29 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12971:

Summary: Provide a utility to detect leaked ProcessSession objects in unit 
tests or the UI  (was: Provide a utility to detect leaked ProcessSession 
objects in unit tests)

> Provide a utility to detect leaked ProcessSession objects in unit tests or 
> the UI
> -
>
> Key: NIFI-12971
> URL: https://issues.apache.org/jira/browse/NIFI-12971
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.25.0, 2.0.0-M2
>Reporter: endzeit
>Priority: Major
>
> When developing processors for NiFi, developers need to implement 
> [Processor|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/processor/Processor.html].
> Most often this is done by extending 
> [AbstractProcessor|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/processor/AbstractProcessor.html]
>  which ensures that the 
> [ProcessSession|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/processor/ProcessSession.html]
>  used is either commited or, if that's not possible, rolled back.
> In cases where the developer needs more control over session management, they 
> might extend from 
> [AbstractSessionFactoryProcessor|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/processor/AbstractSessionFactoryProcessor.html]
>  instead, which allows to create and handle {{ProcessSessions}} on their own 
> terms.
> When using the latter, developers need to ensure they handle all sessions 
> created gracefully, that is, to commit or roll back all sessions they create, 
> like {{AbstractProcessor}} ensures.
> However, failing to do so may lead to unnoticed leakage / lost of 
> [FlowFile|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/flowfile/FlowFile.html]s
>  and their associated data. 
> While data might be recovered from provenance, users are most likely not even 
> aware of the data loss, as 
> there won't be a bulletin visible in the UI indicating data loss due to no 
> Exception occuring or am error being logged.
> The following is a minimal example, which reproduces the problem. All 
> {{FlowFiles}} that enter the processor leak and eventually get lost when the 
> processor is shut down.
> {code:java}
> @InputRequirement(INPUT_REQUIRED)
> public class LeakFlowFile extends AbstractSessionFactoryProcessor {
> public static final Relationship REL_SUCCESS = new Relationship.Builder()
> .name("success")
> .description("All FlowFiles are routed to this relationship.")
> .build();
> private static final Set RELATIONSHIPS = 
> Set.of(REL_SUCCESS);
> @Override
> public Set getRelationships() {
> return RELATIONSHIPS;
> }
> @Override
> public void onTrigger(ProcessContext context, ProcessSessionFactory 
> sessionFactory) throws ProcessException {
> ProcessSession session = sessionFactory.createSession();
> FlowFile flowFile = session.get();
> if (flowFile == null) {
> return;
> }
> session.transfer(flowFile, REL_SUCCESS);
> // whoops, no commit or rollback
> }
> } {code}
> While the issue is quite obvious in this example, it might not be for more 
> complex processors, e.g. when based on 
> [BinFiles|https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-extension-utils/nifi-bin-manager/src/main/java/org/apache/nifi/processor/util/bin/BinFiles.java].
>  In case a developer misses to commit / rollback the session in 
> {{{}processBin{}}}, the same behaviour can be observed.
> The behavior also is not made visible by tests. The following test passes, 
> even though the session has not been committed (or rolled back).
> {code:java}
> class LeakFlowFileTest {
> private final TestRunner testRunner = 
> TestRunners.newTestRunner(LeakFlowFile.class);
> @Test
> void doesNotDetectLeak() {
> testRunner.enqueue("some data");
> testRunner.run();
> testRunner.assertAllFlowFilesTransferred(LeakFlowFile.REL_SUCCESS, 1);
> }
> } {code}
> 
> I would like to propose enhancements to NiFi in order to ease detection of 
> such implementation faults or even confine the harm they might incur.
> One approach is to extend the capabilities of TestRunner such that on 
> shutdown of a tested processor, it checks whether all sessions that were 
> created during the test and had a change associated with them, e.g. pulling a 
> FlowFile or adjusting state, do not have pending changes left but were 
> properly handled, e.g. by committing the session. In case that's not the 
> case, the test may fail, similar to trying to commit a 

[jira] [Updated] (NIFI-12971) Provide a utility to detect leaked ProcessSession objects in unit tests

2024-03-29 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12971:

Summary: Provide a utility to detect leaked ProcessSession objects in unit 
tests  (was: Processor may leak / lose ProcessSession with FlowFile)

> Provide a utility to detect leaked ProcessSession objects in unit tests
> ---
>
> Key: NIFI-12971
> URL: https://issues.apache.org/jira/browse/NIFI-12971
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.25.0, 2.0.0-M2
>Reporter: endzeit
>Priority: Major
>
> When developing processors for NiFi, developers need to implement 
> [Processor|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/processor/Processor.html].
> Most often this is done by extending 
> [AbstractProcessor|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/processor/AbstractProcessor.html]
>  which ensures that the 
> [ProcessSession|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/processor/ProcessSession.html]
>  used is either commited or, if that's not possible, rolled back.
> In cases where the developer needs more control over session management, they 
> might extend from 
> [AbstractSessionFactoryProcessor|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/processor/AbstractSessionFactoryProcessor.html]
>  instead, which allows to create and handle {{ProcessSessions}} on their own 
> terms.
> When using the latter, developers need to ensure they handle all sessions 
> created gracefully, that is, to commit or roll back all sessions they create, 
> like {{AbstractProcessor}} ensures.
> However, failing to do so may lead to unnoticed leakage / lost of 
> [FlowFile|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/flowfile/FlowFile.html]s
>  and their associated data. 
> While data might be recovered from provenance, users are most likely not even 
> aware of the data loss, as 
> there won't be a bulletin visible in the UI indicating data loss due to no 
> Exception occuring or am error being logged.
> The following is a minimal example, which reproduces the problem. All 
> {{FlowFiles}} that enter the processor leak and eventually get lost when the 
> processor is shut down.
> {code:java}
> @InputRequirement(INPUT_REQUIRED)
> public class LeakFlowFile extends AbstractSessionFactoryProcessor {
> public static final Relationship REL_SUCCESS = new Relationship.Builder()
> .name("success")
> .description("All FlowFiles are routed to this relationship.")
> .build();
> private static final Set RELATIONSHIPS = 
> Set.of(REL_SUCCESS);
> @Override
> public Set getRelationships() {
> return RELATIONSHIPS;
> }
> @Override
> public void onTrigger(ProcessContext context, ProcessSessionFactory 
> sessionFactory) throws ProcessException {
> ProcessSession session = sessionFactory.createSession();
> FlowFile flowFile = session.get();
> if (flowFile == null) {
> return;
> }
> session.transfer(flowFile, REL_SUCCESS);
> // whoops, no commit or rollback
> }
> } {code}
> While the issue is quite obvious in this example, it might not be for more 
> complex processors, e.g. when based on 
> [BinFiles|https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-extension-utils/nifi-bin-manager/src/main/java/org/apache/nifi/processor/util/bin/BinFiles.java].
>  In case a developer misses to commit / rollback the session in 
> {{{}processBin{}}}, the same behaviour can be observed.
> The behavior also is not made visible by tests. The following test passes, 
> even though the session has not been committed (or rolled back).
> {code:java}
> class LeakFlowFileTest {
> private final TestRunner testRunner = 
> TestRunners.newTestRunner(LeakFlowFile.class);
> @Test
> void doesNotDetectLeak() {
> testRunner.enqueue("some data");
> testRunner.run();
> testRunner.assertAllFlowFilesTransferred(LeakFlowFile.REL_SUCCESS, 1);
> }
> } {code}
> 
> I would like to propose enhancements to NiFi in order to ease detection of 
> such implementation faults or even confine the harm they might incur.
> One approach is to extend the capabilities of TestRunner such that on 
> shutdown of a tested processor, it checks whether all sessions that were 
> created during the test and had a change associated with them, e.g. pulling a 
> FlowFile or adjusting state, do not have pending changes left but were 
> properly handled, e.g. by committing the session. In case that's not the 
> case, the test may fail, similar to trying to commit a session where 
> FlowFiles haven't been transferred / 

[jira] [Updated] (NIFI-12971) Processor may leak / lose ProcessSession with FlowFile

2024-03-29 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12971:

Issue Type: Improvement  (was: Bug)

> Processor may leak / lose ProcessSession with FlowFile
> --
>
> Key: NIFI-12971
> URL: https://issues.apache.org/jira/browse/NIFI-12971
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.25.0, 2.0.0-M2
>Reporter: endzeit
>Priority: Major
>
> When developing processors for NiFi, developers need to implement 
> [Processor|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/processor/Processor.html].
> Most often this is done by extending 
> [AbstractProcessor|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/processor/AbstractProcessor.html]
>  which ensures that the 
> [ProcessSession|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/processor/ProcessSession.html]
>  used is either commited or, if that's not possible, rolled back.
> In cases where the developer needs more control over session management, they 
> might extend from 
> [AbstractSessionFactoryProcessor|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/processor/AbstractSessionFactoryProcessor.html]
>  instead, which allows to create and handle {{ProcessSessions}} on their own 
> terms.
> When using the latter, developers need to ensure they handle all sessions 
> created gracefully, that is, to commit or roll back all sessions they create, 
> like {{AbstractProcessor}} ensures.
> However, failing to do so may lead to unnoticed leakage / lost of 
> [FlowFile|https://www.javadoc.io/doc/org.apache.nifi/nifi-api/latest/org/apache/nifi/flowfile/FlowFile.html]s
>  and their associated data. 
> While data might be recovered from provenance, users are most likely not even 
> aware of the data loss, as 
> there won't be a bulletin visible in the UI indicating data loss due to no 
> Exception occuring or am error being logged.
> The following is a minimal example, which reproduces the problem. All 
> {{FlowFiles}} that enter the processor leak and eventually get lost when the 
> processor is shut down.
> {code:java}
> @InputRequirement(INPUT_REQUIRED)
> public class LeakFlowFile extends AbstractSessionFactoryProcessor {
> public static final Relationship REL_SUCCESS = new Relationship.Builder()
> .name("success")
> .description("All FlowFiles are routed to this relationship.")
> .build();
> private static final Set RELATIONSHIPS = 
> Set.of(REL_SUCCESS);
> @Override
> public Set getRelationships() {
> return RELATIONSHIPS;
> }
> @Override
> public void onTrigger(ProcessContext context, ProcessSessionFactory 
> sessionFactory) throws ProcessException {
> ProcessSession session = sessionFactory.createSession();
> FlowFile flowFile = session.get();
> if (flowFile == null) {
> return;
> }
> session.transfer(flowFile, REL_SUCCESS);
> // whoops, no commit or rollback
> }
> } {code}
> While the issue is quite obvious in this example, it might not be for more 
> complex processors, e.g. when based on 
> [BinFiles|https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-extension-utils/nifi-bin-manager/src/main/java/org/apache/nifi/processor/util/bin/BinFiles.java].
>  In case a developer misses to commit / rollback the session in 
> {{{}processBin{}}}, the same behaviour can be observed.
> The behavior also is not made visible by tests. The following test passes, 
> even though the session has not been committed (or rolled back).
> {code:java}
> class LeakFlowFileTest {
> private final TestRunner testRunner = 
> TestRunners.newTestRunner(LeakFlowFile.class);
> @Test
> void doesNotDetectLeak() {
> testRunner.enqueue("some data");
> testRunner.run();
> testRunner.assertAllFlowFilesTransferred(LeakFlowFile.REL_SUCCESS, 1);
> }
> } {code}
> 
> I would like to propose enhancements to NiFi in order to ease detection of 
> such implementation faults or even confine the harm they might incur.
> One approach is to extend the capabilities of TestRunner such that on 
> shutdown of a tested processor, it checks whether all sessions that were 
> created during the test and had a change associated with them, e.g. pulling a 
> FlowFile or adjusting state, do not have pending changes left but were 
> properly handled, e.g. by committing the session. In case that's not the 
> case, the test may fail, similar to trying to commit a session where 
> FlowFiles haven't been transferred / removed. 
> This way, developers that test their processors thoroughly might catch such 
> implementation mistakes early on even before they get 

[jira] [Commented] (NIFI-12917) Content repository claim is being zero'd out

2024-03-29 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17832244#comment-17832244
 ] 

Joe Witt commented on NIFI-12917:
-

[~jasintguru] Thanks for reporting.  Can you try reproducing this on a recent 
release?

This is not a familiar issue but we did make some important fixes in/around 
that time.

> Content repository claim is being zero'd out
> 
>
> Key: NIFI-12917
> URL: https://issues.apache.org/jira/browse/NIFI-12917
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.19.1
> Environment: Red Hat 7 running Apache NiFi 1.19.1 with no custom 
> NARs. 
>Reporter: Evan F.
>Priority: Critical
> Attachments: example_MergeContent.txt
>
>
> I've had rare but consistent instances of MergeContent processors merging 
> files and losing the content claim/payload while merging. I've pasted an 
> example log entry below where there's a size to the file pre-merge and the 
> post-merge content claim becomes zero. I've attached a text file with the 
> configuration of my MergeContent processor as well. This happens a fraction 
> of a percent of the time but it happens daily, somewhere around 10 times a 
> day. I scoured forums and previous tickets but I wasn't able to find a 
> relevant issue. I'm hoping someone can tell me if this has already been 
> addressed. 
>  
> INFO [Timer-Driven Process Thread-7] o.a.n.processors.standard.MergeContent 
> Merge Content[id=e6f460da-018d-1000-1c97-a6bd946d4f61] Merged 
> [StandardFlowFileRecord[uuid=0e5b7c30-021d-4bd9-9edb-1681c9c14d4e,claim=StandardContentClaim
> [resourceClaim=StandardResourceClaim[id=1710769372188-3700202, 
> container=default, section=490], offset=0, 
> length=52430102],offset=0,name=testFile,size=52430102]] into 
> StandardFlowFileRecord[uuid=536bf8e4-915b-4dad-9e16-063b35e834c1,claim=,offset=0,
> name=testFile.pkg,size=0]. Reason for merging: Maximum number of 
> entries reached



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-8930) Add ability for ScanAttribute processor to match on attribute values containing a delimited list of values.

2024-03-27 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831439#comment-17831439
 ] 

Joe Witt commented on NIFI-8930:


[~tlsmith] Hey, sorry this one must have gotten lost in the shuffle.  The 
processor works on the basis of an extremely fast [search 
algorithm|https://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_algorithm] which 
is a very different animal from regex, so that might be part of the delta.  What 
you want to do makes sense for sure; I'm just not sure whether 'this' is the right 
component for it.  
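
To make the request below concrete, here is a hedged, stand-alone sketch of 
delimited matching against a dictionary, using plain Java collections rather than 
the processor's actual dictionary loading; the class and method names are 
hypothetical.

{code:java}
import java.util.Arrays;
import java.util.Set;
import java.util.regex.Pattern;

final class DelimitedDictionaryMatch {

    private DelimitedDictionaryMatch() {
    }

    // allMustMatch = true  -> 'All Must Match'
    // allMustMatch = false -> 'At Least 1 Must Match'
    static boolean matches(final String attributeValue, final String delimiter,
            final Set<String> dictionary, final boolean allMustMatch) {
        // split() takes a regex, so quote the literal delimiter
        final String[] tokens = attributeValue.split(Pattern.quote(delimiter));
        return allMustMatch
                ? Arrays.stream(tokens).allMatch(dictionary::contains)
                : Arrays.stream(tokens).anyMatch(dictionary::contains);
    }

    public static void main(final String[] args) {
        final Set<String> dictionary = Set.of("alpha", "bravo");
        System.out.println(matches("alpha,charlie", ",", dictionary, false)); // true
        System.out.println(matches("alpha,charlie", ",", dictionary, true));  // false
    }
}
{code}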

> Add ability for ScanAttribute processor to match on attribute values 
> containing a delimited list of values.
> ---
>
> Key: NIFI-8930
> URL: https://issues.apache.org/jira/browse/NIFI-8930
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.14.0
>Reporter: Tim Smith
>Assignee: Tim Smith
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The ScanAttribute processor is limited in that an exact match is required 
> when applied for attribute value. There are many times an attribute contains 
> a delimited list of values. It would be useful to specify a delimiter and  
> have the match criteria applied to each delimited value in the attribute. Two 
> additional property descriptors need created, 'Delimiter' and 'Delimited 
> Match Criteria'. 'Delimiter' sets the delimiter to apply to the attribute(s). 
> 'Delimited Match Criteria' specifies how the delimited attributes should 
> match the dictionary. If 'All Must Match' is selected, all delimited values 
> must match a dictionary term/pattern for attribute to be matched. For 'At 
> Least 1 Must Match' , if any one delimited value matches,then the attribute 
> matches.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12930) FetchFile on failure during StandardProcessSession.importFrom should route to failure instead of rollback

2024-03-25 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17830716#comment-17830716
 ] 

Joe Witt commented on NIFI-12930:
-

[~jrsteinebrey] I have it assigned to me as there is already a PR that addresses 
it.  Thanks

> FetchFile on failure during StandardProcessSession.importFrom should route to 
> failure instead of rollback
> -
>
> Key: NIFI-12930
> URL: https://issues.apache.org/jira/browse/NIFI-12930
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Consider a scenario involving corrupt files on a disk.
> FetchFile during the importFrom call can fail in such cases as would any 
> process.  But the current handling calls rollback instead of routing to 
> failure.   As a result the flow could be stuck in an endless loop trying the 
> same objects over and over and not giving the flow designer a chance to 
> reasonably handle such cases.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-12930) FetchFile on failure during StandardProcessSession.importFrom should route to failure instead of rollback

2024-03-25 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt reassigned NIFI-12930:
---

Assignee: Joe Witt  (was: Jim Steinebrey)

> FetchFile on failure during StandardProcessSession.importFrom should route to 
> failure instead of rollback
> -
>
> Key: NIFI-12930
> URL: https://issues.apache.org/jira/browse/NIFI-12930
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Consider a scenario involving corrupt files on a disk.
> FetchFile during the importFrom call can fail in such cases as would any 
> process.  But the current handling calls rollback instead of routing to 
> failure.   As a result the flow could be stuck in an endless loop trying the 
> same objects over and over and not giving the flow designer a chance to 
> reasonably handle such cases.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12943) Upgrade Hadoop to 3.4.0

2024-03-25 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12943:

Fix Version/s: 2.0.0-M3
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Upgrade Hadoop to 3.4.0
> ---
>
> Key: NIFI-12943
> URL: https://issues.apache.org/jira/browse/NIFI-12943
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0-M3
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Apache Hadoop dependencies should be upgraded to 
> [3.4.0|https://hadoop.apache.org/docs/r3.4.0/index.html] to incorporate bug 
> fixes and improvements, including a number of transitive dependency upgrades.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-12930) FetchFile on failure during StandardProcessSession.importFrom should route to failure instead of rollback

2024-03-21 Thread Joe Witt (Jira)
Joe Witt created NIFI-12930:
---

 Summary: FetchFile on failure during 
StandardProcessSession.importFrom should route to failure instead of rollback
 Key: NIFI-12930
 URL: https://issues.apache.org/jira/browse/NIFI-12930
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Joe Witt


Consider a scenario involving corrupt files on a disk.

FetchFile can fail during the importFrom call in such cases, as any process would.  
But the current handling calls rollback instead of routing to failure.  As a result 
the flow can get stuck in an endless loop, retrying the same objects over and over 
and never giving the flow designer a chance to handle such cases reasonably.
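
A minimal sketch of the failure-routing pattern described above; this is not the 
actual FetchFile change, and the helper name plus the success/failure relationships 
passed in are assumptions for illustration only.

{code:java}
import java.io.File;

import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;

final class FetchFailureRouting {

    private FetchFailureRouting() {
    }

    // Hypothetical helper: import 'file' into 'flowFile', routing to 'failure' on any
    // read error instead of letting the exception roll the session back and retry the
    // same corrupt file forever.
    static void fetchOrRouteToFailure(final ProcessSession session, FlowFile flowFile,
            final File file, final Relationship success, final Relationship failure) {
        try {
            // Stream the local file's bytes into the FlowFile's content claim.
            flowFile = session.importFrom(file.toPath(), true, flowFile);
        } catch (final Exception e) {
            // Give the flow designer a failure relationship to handle instead of looping.
            session.transfer(session.penalize(flowFile), failure);
            return;
        }
        session.transfer(flowFile, success);
    }
}
{code}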




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12911) Upgrade Jagged to 0.3.2

2024-03-15 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12911:

Fix Version/s: 2.0.0
   1.26.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Upgrade Jagged to 0.3.2
> ---
>
> Key: NIFI-12911
> URL: https://issues.apache.org/jira/browse/NIFI-12911
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
>  Labels: backport-needed
> Fix For: 2.0.0, 1.26.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Jagged [0.3.2|https://github.com/exceptionfactory/jagged/releases/tag/0.3.2] 
> for the EncryptContentAge and DecryptContentAge Processors includes minor bug 
> fixes related to stream closure handling.
> This upgrade should be applied to both the main and support branches.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12910) Upgrade Spring Framework to 6.0.18

2024-03-15 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12910:

Fix Version/s: 2.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Upgrade Spring Framework to 6.0.18
> --
>
> Key: NIFI-12910
> URL: https://issues.apache.org/jira/browse/NIFI-12910
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, NiFi Registry
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 2.0.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Spring Framework dependencies should be upgraded to 
> [6.0.18|https://github.com/spring-projects/spring-framework/releases/tag/v6.0.18]
>  for NiFi.
> Spring Framework dependencies for NiFi Registry should be upgraded to 
> [6.1.5.|https://github.com/spring-projects/spring-framework/releases/tag/v6.1.5]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-12907) Update to Groovy 4.0.20

2024-03-14 Thread Joe Witt (Jira)
Joe Witt created NIFI-12907:
---

 Summary: Update to Groovy 4.0.20
 Key: NIFI-12907
 URL: https://issues.apache.org/jira/browse/NIFI-12907
 Project: Apache NiFi
  Issue Type: Task
Reporter: Joe Witt
Assignee: Joe Witt
 Fix For: 2.0.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12906) Upgrade zookeeper to move past server side issues in CVE-2024-23944

2024-03-14 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12906:

Fix Version/s: 2.0.0
   1.26.0

> Upgrade zookeeper to move past server side issues in CVE-2024-23944
> ---
>
> Key: NIFI-12906
> URL: https://issues.apache.org/jira/browse/NIFI-12906
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
> Fix For: 2.0.0, 1.26.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-12906) Upgrade zookeeper to move past server side issues in CVE-2024-23944

2024-03-14 Thread Joe Witt (Jira)
Joe Witt created NIFI-12906:
---

 Summary: Upgrade zookeeper to move past server side issues in 
CVE-2024-23944
 Key: NIFI-12906
 URL: https://issues.apache.org/jira/browse/NIFI-12906
 Project: Apache NiFi
  Issue Type: Task
Reporter: Joe Witt
Assignee: Joe Witt






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12826) Unstable test TestFTP testListFtpHostPortVariablesFileFound

2024-02-20 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12826:

Summary: Unstable test TestFTP testListFtpHostPortVariablesFileFound  (was: 
Unstable test)

> Unstable test TestFTP testListFtpHostPortVariablesFileFound
> ---
>
> Key: NIFI-12826
> URL: https://issues.apache.org/jira/browse/NIFI-12826
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
>
> {noformat}
> Error:  
> org.apache.nifi.processors.standard.TestFTP.testListFtpHostPortVariablesFileFound
>  -- Time elapsed: 0.232 s <<< FAILURE!
> org.opentest4j.AssertionFailedError: expected: <1> but was: <0>
>   at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>   at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>   at 
> org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
>   at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:150)
>   at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:145)
>   at org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:527)
>   at 
> org.apache.nifi.util.StandardProcessorTestRunner.assertTransferCount(StandardProcessorTestRunner.java:404)
>   at 
> org.apache.nifi.processors.standard.TestFTP.testListFtpHostPortVariablesFileFound(TestFTP.java:319)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:580)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
> {noformat}
> Failed in a random build 
> https://github.com/apache/nifi/actions/runs/7981892840/job/2179960?pr=8438



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-12826) Unstable test

2024-02-20 Thread Joe Witt (Jira)
Joe Witt created NIFI-12826:
---

 Summary: Unstable test
 Key: NIFI-12826
 URL: https://issues.apache.org/jira/browse/NIFI-12826
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Joe Witt
Assignee: Joe Witt



{noformat}
Error:  
org.apache.nifi.processors.standard.TestFTP.testListFtpHostPortVariablesFileFound
 -- Time elapsed: 0.232 s <<< FAILURE!
org.opentest4j.AssertionFailedError: expected: <1> but was: <0>
at 
org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
at 
org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
at 
org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
at 
org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:150)
at 
org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:145)
at org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:527)
at 
org.apache.nifi.util.StandardProcessorTestRunner.assertTransferCount(StandardProcessorTestRunner.java:404)
at 
org.apache.nifi.processors.standard.TestFTP.testListFtpHostPortVariablesFileFound(TestFTP.java:319)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
{noformat}

Failed in a random build 
https://github.com/apache/nifi/actions/runs/7981892840/job/2179960?pr=8438




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12768) Intermittent Failures in TestListFile.testFilterAge

2024-02-20 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819019#comment-17819019
 ] 

Joe Witt commented on NIFI-12768:
-

A new PR will remove the same pattern of checks in the same test, located just 
before the change already made in this JIRA.  That pattern presumes an ordering 
that isn't guaranteed, which makes these already timing-dependent tests worse.
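
For illustration only, an order-independent assertion with the NiFi test runner 
could look like the sketch below; the relationship name, attribute, and helper 
class are assumptions rather than the actual TestListFile code.

{code:java}
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

import org.apache.nifi.util.MockFlowFile;
import org.apache.nifi.util.TestRunner;

import static org.junit.jupiter.api.Assertions.assertEquals;

class OrderIndependentAssertionSketch {

    // Assert on the set of listed filenames rather than on positions in the transfer
    // list, since the listing order is not guaranteed.
    static void assertListedFiles(final TestRunner runner, final String relationship,
                                  final Set<String> expectedFilenames) {
        final List<MockFlowFile> flowFiles = runner.getFlowFilesForRelationship(relationship);
        final Set<String> actualFilenames = flowFiles.stream()
                .map(flowFile -> flowFile.getAttribute("filename"))
                .collect(Collectors.toSet());
        assertEquals(expectedFilenames, actualFilenames);
    }
}
{code}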

> Intermittent Failures in TestListFile.testFilterAge
> ---
>
> Key: NIFI-12768
> URL: https://issues.apache.org/jira/browse/NIFI-12768
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 2.0.0-M2
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The TestListFile class has not changed substantively in quite some time, but 
> it has begun to fail more recently across multiple platforms on GitHub Action 
> runners.
> The {{testFilterAge}} method often fails with the same stack trace:
> {noformat}
> Error:  org.apache.nifi.processors.standard.TestListFile.testFilterAge -- 
> Time elapsed: 6.436 s <<< FAILURE!
> org.opentest4j.AssertionFailedError: expected:  but was: 
>   at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>   at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>   at 
> org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
>   at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:182)
>   at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:177)
>   at org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:1141)
>   at 
> org.apache.nifi.processors.standard.TestListFile.testFilterAge(TestListFile.java:331)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:580)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
> {noformat}
> The test method uses recalculated timestamps to set file modification time, so 
> the problem appears to be related to these timing calculations.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-12824) Remove ExecuteStateless Processor

2024-02-20 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt resolved NIFI-12824.
-
Resolution: Fixed

> Remove ExecuteStateless Processor
> -
>
> Key: NIFI-12824
> URL: https://issues.apache.org/jira/browse/NIFI-12824
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The {{ExecuteStateless}} Processor and the {{nifi-stateless-processor-nar}} 
> should be removed from the main branch as part of preparation for NiFi 2.0.0. 
> The {{ExecuteStateless}} Processor provided a transitional solution prior to 
> Stateless Execution of Process Groups in standard NiFi deployments. Now that 
> Stateless Execution is implemented, the Processor should be removed. As the 
> NiFi 2.0.0-M1 and 2.0.0-M2 releases include both the {{ExecuteStateless}} 
> Processor and Stateless Execution options, those releases can provide a 
> transitional step for deployments that need to have both options available 
> for an interim migration approach.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12768) Intermittent Failures in TestListFile.testFilterAge

2024-02-20 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819016#comment-17819016
 ] 

Joe Witt commented on NIFI-12768:
-

Reopened since this JIRA/fix is still on this unreleased line.  But a very 
similar problem remains in a similar test.
{code:java}
Error:  org.apache.nifi.processors.standard.TestListFile.testFilterAge -- Time 
elapsed: 6.302 s <<< FAILURE!
org.opentest4j.AssertionFailedError: expected: <2> but was: <3>
at 
org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
at 
org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
at 
org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
at 
org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:150)
at 
org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:145)
at org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:527)
at 
org.apache.nifi.processors.standard.TestListFile.testFilterAge(TestListFile.java:319)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
{code}


> Intermittent Failures in TestListFile.testFilterAge
> ---
>
> Key: NIFI-12768
> URL: https://issues.apache.org/jira/browse/NIFI-12768
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 2.0.0-M2
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The TestListFile class has not changed substantively in quite some time, but 
> it has begun to fail more recently across multiple platforms on GitHub Action 
> runners.
> The {{testFilterAge}} method often fails with the same stack trace:
> {noformat}
> Error:  org.apache.nifi.processors.standard.TestListFile.testFilterAge -- 
> Time elapsed: 6.436 s <<< FAILURE!
> org.opentest4j.AssertionFailedError: expected:  but was: 
>   at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>   at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>   at 
> org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
>   at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:182)
>   at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:177)
>   at org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:1141)
>   at 
> org.apache.nifi.processors.standard.TestListFile.testFilterAge(TestListFile.java:331)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:580)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
> {noformat}
> The test method uses recalculated timestamps to set file modification time, so 
> the problem appears to be related to these timing calculations.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (NIFI-12768) Intermittent Failures in TestListFile.testFilterAge

2024-02-20 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt reopened NIFI-12768:
-

> Intermittent Failures in TestListFile.testFilterAge
> ---
>
> Key: NIFI-12768
> URL: https://issues.apache.org/jira/browse/NIFI-12768
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 2.0.0-M2
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The TestListFile class has not changed substantively in quite some time, but 
> it has begun to fail more recently across multiple platforms on GitHub Action 
> runners.
> The {{testFilterAge}} method often fails with the same stack trace:
> {noformat}
> Error:  org.apache.nifi.processors.standard.TestListFile.testFilterAge -- 
> Time elapsed: 6.436 s <<< FAILURE!
> org.opentest4j.AssertionFailedError: expected:  but was: 
>   at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>   at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>   at 
> org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
>   at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:182)
>   at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:177)
>   at org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:1141)
>   at 
> org.apache.nifi.processors.standard.TestListFile.testFilterAge(TestListFile.java:331)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:580)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
> {noformat}
> The test method uses recalculated timestamps to set file modification time, so 
> the problem appears to be related to these timing calculations.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-12772) Remote poll batch size not exposed for ListSFTP

2024-02-20 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt resolved NIFI-12772.
-
Fix Version/s: 2.0.0
   Resolution: Fixed

> Remote poll batch size not exposed for ListSFTP
> ---
>
> Key: NIFI-12772
> URL: https://issues.apache.org/jira/browse/NIFI-12772
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.25.0
>Reporter: Tom Brisland
>Priority: Minor
> Fix For: 2.0.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> I was planning on adding support for a batch size in ListSFTP due to some 
> issues we're seeing with a large number of files on an SFTP server.
> Thankfully, it seems that REMOTE_POLL_BATCH_SIZE is already supported in all 
> the FTP processor variations, just not exposed in ListSFTP.
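
A minimal sketch of what exposing the property could look like follows. It assumes 
the shared descriptor referenced above lives on the FileTransfer utility class; the 
subclass is purely illustrative, since the real change would add the descriptor 
directly in ListSFTP.

{code:java}
import java.util.ArrayList;
import java.util.List;

import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processors.standard.ListSFTP;
import org.apache.nifi.processors.standard.util.FileTransfer;

// Illustrative subclass only; assumes FileTransfer.REMOTE_POLL_BATCH_SIZE is the shared descriptor.
public class BatchSizeAwareListSFTP extends ListSFTP {

    @Override
    protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
        // Expose the already-supported remote poll batch size alongside the existing properties.
        final List<PropertyDescriptor> descriptors =
                new ArrayList<>(super.getSupportedPropertyDescriptors());
        descriptors.add(FileTransfer.REMOTE_POLL_BATCH_SIZE);
        return descriptors;
    }
}
{code}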



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12796) PutDatabaseRecord operation should support u/c/d for Debezium

2024-02-20 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12796:

Fix Version/s: 2.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> PutDatabaseRecord operation should support u/c/d for Debezium
> -
>
> Key: NIFI-12796
> URL: https://issues.apache.org/jira/browse/NIFI-12796
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Similar to NIFI-12344, PutDatabaseRecord operation path property should 
> support the values u/c/d for out of the box integration with Debezium events.
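
For illustration, the improvement implies a mapping from Debezium's single-letter 
op codes to the statement types PutDatabaseRecord already understands; the names 
below are assumptions rather than the merged NiFi code.

{code:java}
// Illustrative mapping only; constant and method names are assumptions, not PutDatabaseRecord internals.
final class DebeziumOperationMappingSketch {

    static final String INSERT = "INSERT";
    static final String UPDATE = "UPDATE";
    static final String DELETE = "DELETE";

    // Debezium change events carry an 'op' field: c = create, u = update, d = delete.
    static String toStatementType(final String debeziumOp) {
        switch (debeziumOp) {
            case "c":
                return INSERT;
            case "u":
                return UPDATE;
            case "d":
                return DELETE;
            default:
                throw new IllegalArgumentException("Unsupported Debezium operation: " + debeziumOp);
        }
    }

    private DebeziumOperationMappingSketch() {
    }
}
{code}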



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12818) Deprecate Apache Atlas Reporting Task for Removal

2024-02-20 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12818:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Deprecate Apache Atlas Reporting Task for Removal
> -
>
> Key: NIFI-12818
> URL: https://issues.apache.org/jira/browse/NIFI-12818
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 1.26.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{ReportLineageToAtlas}} Reporting Task should be deprecated on the 
> support branch for removal from the main branch.
> The Apache Atlas Reporting Task includes a number of transitive dependencies 
> that do not align with current Apache NiFi project versions, including 
> Servlet API, Java XML Binding, and Jersey. These differences are in addition 
> existing differences for outdated version references such as Guava, Gson, 
> Jackson, and Netty. The current Reporting Task implementation also includes a 
> number of classes that are specific to certain NiFi components, which is not 
> a maintainable implementation. The current {{ReportLineageToAtlas}} Task 
> should be removed from the main branch for NiFi 2.0 to provide clear baseline 
> for any potential future implementation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12817) Move Hadoop DBCP NAR to Hadoop Build Profile

2024-02-20 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12817:

Fix Version/s: 2.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Move Hadoop DBCP NAR to Hadoop Build Profile
> 
>
> Key: NIFI-12817
> URL: https://issues.apache.org/jira/browse/NIFI-12817
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 2.0.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Recent updates moved Hadoop components to an optional build profile named 
> {{{}include-hadoop{}}}. The {{nifi-hadoop-dbcp-service-nar}} should be moved 
> to the same optional profile for common grouping of related components.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12821) Set docker-maven-plugin Version

2024-02-20 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12821:

Fix Version/s: 2.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Set docker-maven-plugin Version
> ---
>
> Key: NIFI-12821
> URL: https://issues.apache.org/jira/browse/NIFI-12821
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Tools and Build
>Affects Versions: 2.0.0-M2
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The docker-maven-plugin supports building Docker images for multiple modules. 
> The parent Maven configuration should have an explicit version listed to 
> avoid unexpected upgrades resulting in build failures.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12820) Upgrade Email Processors to Jakarta Mail 2

2024-02-20 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12820:

Fix Version/s: 2.0.0

> Upgrade Email Processors to Jakarta Mail 2
> --
>
> Key: NIFI-12820
> URL: https://issues.apache.org/jira/browse/NIFI-12820
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The {{nifi-email-bundle}} includes multiple Processors for sending and 
> receiving email using standard protocols including SMTP, POP3, and IMAP. 
> Processors for POP3 and IMAP use Spring Integration Mail libraries while 
> ListenSMTP uses the SubEtha SMTP library. These libraries have shared 
> dependencies on Java Mail 1, which has been superseded by Jakarta Mail 2. All 
> libraries need to be upgraded together to avoid runtime conflicts.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12820) Upgrade Email Processors to Jakarta Mail 2

2024-02-20 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12820:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Upgrade Email Processors to Jakarta Mail 2
> --
>
> Key: NIFI-12820
> URL: https://issues.apache.org/jira/browse/NIFI-12820
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The {{nifi-email-bundle}} includes multiple Processors for sending and 
> receiving email using standard protocols including SMTP, POP3, and IMAP. 
> Processors for POP3 and IMAP use Spring Integration Mail libraries while 
> ListenSMTP uses the SubEtha SMTP library. These libraries have shared 
> dependencies on Java Mail 1, which has been superseded by Jakarta Mail 2. All 
> libraries need to be upgraded together to avoid runtime conflicts.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12819) Remove Apache Atlas Reporting Task

2024-02-20 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12819:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Remove Apache Atlas Reporting Task
> --
>
> Key: NIFI-12819
> URL: https://issues.apache.org/jira/browse/NIFI-12819
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 2.0.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The {{ReportLineageToAtlas}} Task and associated {{nifi-atlas-bundle}} should 
> be removed from the main branch for NiFi 2.0 based on legacy dependencies as 
> described in NIFI-12818.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12232) Frequent "failed to connect node to cluster because local flow controller partially updated. Administrator should disconnect node and review flow for corruption"

2024-02-16 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818091#comment-17818091
 ] 

Joe Witt commented on NIFI-12232:
-

Also hit by

https://apachenifi.slack.com/archives/C0L9VCD47/p1708113098305609

Roman Wesołowski
  29 minutes ago
Hi all,
I have a 3-node NiFi cluster on version 2.0.0-M1. Until today everything was 
working correctly; during my development something strange happened. For some 
reason 2 nodes disconnected from the cluster, and I am not able to reconnect them 
to the cluster. I have restarted the nodes but without success. All machines are 
up but cannot connect to each other.  Any help would be appreciated.
2024-02-16 15:14:36,663 ERROR [Reconnect to Cluster] 
o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.120.8.252:8080 -- 
Node disconnected from cluster due to 
org.apache.nifi.controller.serialization.FlowSynchronizationException: Failed 
to connect node to cluster because local flow controller partially updated. 
Administrator should disconnect node and review flow for corruption.

> Frequent "failed to connect node to cluster because local flow controller 
> partially updated. Administrator should disconnect node and review flow for 
> corruption"
> -
>
> Key: NIFI-12232
> URL: https://issues.apache.org/jira/browse/NIFI-12232
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration Management
>Affects Versions: 1.23.2
>Reporter: John Joseph
>Assignee: Mark Payne
>Priority: Major
> Attachments: image-2023-10-16-16-12-31-027.png, 
> image-2024-02-14-13-33-44-354.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is an issue that we have been observing in the 1.23.2 version of NiFi 
> when we try to upgrade.
> Since Rolling upgrade is not supported in NiFi, we scale out the revision 
> that is running and {_}run a helm upgrade{_}.
> We have NiFi running in k8s cluster mode; there is a post job that calls the 
> Tenants and Policies APIs. On a successful run it would look like this:
> {code:java}
> set_policies() Action: 'read' Resource: '/flow' entity_id: 
> 'ad2d3ad6-5d69-3e0f-95e9-c7feb36e2de5' entity_name: 'CN=nifi-api-admin' 
> entity_type: 'USER'
> set_policies() status: '200'
> 'read' '/flow' policy already exists. It will be updated...
> set_policies() fetching policy inside -eq 200 status: '200'
> set_policies() after update PUT: '200'
> set_policies() Action: 'read' Resource: '/tenants' entity_id: 
> 'ad2d3ad6-5d69-3e0f-95e9-c7feb36e2de5' entity_name: 'CN=nifi-api-admin' 
> entity_type: 'USER'
> set_policies() status: '200'{code}
> *_This job was running fine in 1.23.0, 1.22 and other previous versions._* In 
> {*}{{1.23.2}}{*}, we are noticing that the job is failing very frequently 
> with the error logs;
> {code:java}
> set_policies() Action: 'read' Resource: '/flow' entity_id: 
> 'ad2d3ad6-5d69-3e0f-95e9-c7feb36e2de5' entity_name: 'CN=nifi-api-admin' 
> entity_type: 'USER'
> set_policies() status: '200'
> 'read' '/flow' policy already exists. It will be updated...
> set_policies() fetching policy inside -eq 200 status: '200'
> set_policies() after update PUT: '400'
> An error occurred getting 'read' '/flow' policy: 'This node is disconnected 
> from its configured cluster. The requested change will only be allowed if the 
> flag to acknowledge the disconnected node is set.'{code}
> {{_*'This node is disconnected from its configured cluster. The requested 
> change will only be allowed if the flag to acknowledge the disconnected node 
> is set.'*_}}
> The job is configured to run only after all the pods are up and running. 
> Though the pods are up, we see the following exception inside the pods:
> {code:java}
> org.apache.nifi.controller.serialization.FlowSynchronizationException: Failed 
> to connect node to cluster because local flow controller partially updated. 
> Administrator should disconnect node and review flow for corruption.
> at 
> org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:1059)
> at 
> org.apache.nifi.controller.StandardFlowService.handleReconnectionRequest(StandardFlowService.java:667)
> at 
> org.apache.nifi.controller.StandardFlowService.access$200(StandardFlowService.java:107)
> at 
> org.apache.nifi.controller.StandardFlowService$1.run(StandardFlowService.java:396)
> at java.base/java.lang.Thread.run(Thread.java:833)
> Caused by: 
> org.apache.nifi.controller.serialization.FlowSynchronizationException: 
> java.lang.IllegalStateException: Cannot change destination of Connection 
> because the current destination is running
> at 
> 

[jira] [Comment Edited] (NIFI-12809) PublishKafkaRecord_2_6 - RoundRobin partitioner skipping every other partition

2024-02-16 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818086#comment-17818086
 ] 

Joe Witt edited comment on NIFI-12809 at 2/16/24 7:56 PM:
--

[~slyouts] Did you review any Kafka bugs for the noted scenario?  I see 
https://issues.apache.org/jira/browse/KAFKA-13180 that sounds very closely 
related.

If there is a reasonable workaround for this issue that we can control on the 
NiFi side we should ensure we remove any such change in versions it is resolved 
in (such as Kafka 3 components/etc..) 



was (Author: joewitt):
[~slyouts] Did you review any Kafka bugs for the noted scenario?  I see 
https://issues.apache.org/jira/browse/KAFKA-13180 that sounds very closely 
related.


> PublishKafkaRecord_2_6 - RoundRobin partitioner skipping every other partition
> --
>
> Key: NIFI-12809
> URL: https://issues.apache.org/jira/browse/NIFI-12809
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.25.0
>Reporter: Steven Youtsey
>Priority: Major
>  Labels: kafka, partitioners, publish
>
> When configured to use the NiFi RoundRobin partitioner, the processor 
> publishes to every other partition. If the number of partitions in the topic 
> and the number of records being published are the right combination, this 
> problem is masked. We see this issue when we set the partitions to 26, but 
> not when set to 25. 
> I took a code-dive into the o.a.k.c.producer.KafkaProducer and discovered 
> that it is invoking the Partitioner twice when a "new batch" is created. 
> Thus, the RoundRobin partitioner bumps the index by 2. If the RoundRobin 
> partitioner overrode the onNewBatch method, this problem could be solved.
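
To illustrate the suggested direction, here is a hedged sketch of a round-robin 
Partitioner that overrides onNewBatch so the producer's second partition() call 
for the same record does not advance the counter again. Class and field names are 
assumptions, not the NiFi RoundRobinPartitioner.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

// Sketch only: compensates for KafkaProducer invoking partition() a second time after onNewBatch().
public class BatchAwareRoundRobinPartitioner implements Partitioner {

    private final Map<String, AtomicInteger> counters = new ConcurrentHashMap<>();
    private final Map<String, Integer> lastPartition = new ConcurrentHashMap<>();
    private final Map<String, Boolean> repeatNext = new ConcurrentHashMap<>();

    @Override
    public int partition(final String topic, final Object key, final byte[] keyBytes,
                         final Object value, final byte[] valueBytes, final Cluster cluster) {
        // If onNewBatch() was just called, the producer is re-invoking partition() for the
        // same record: return the previous partition instead of advancing the counter again.
        if (Boolean.TRUE.equals(repeatNext.remove(topic))) {
            final Integer previous = lastPartition.get(topic);
            if (previous != null) {
                return previous;
            }
        }

        final int partitionCount = cluster.partitionsForTopic(topic).size();
        final int next = counters.computeIfAbsent(topic, t -> new AtomicInteger(0))
                .getAndIncrement() % partitionCount;
        lastPartition.put(topic, next);
        return next;
    }

    @Override
    public void onNewBatch(final String topic, final Cluster cluster, final int prevPartition) {
        // Signal that the next partition() call for this topic is a repeat invocation.
        repeatNext.put(topic, Boolean.TRUE);
    }

    @Override
    public void close() {
    }

    @Override
    public void configure(final Map<String, ?> configs) {
    }
}
{code}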



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12809) PublishKafkaRecord_2_6 - RoundRobin partitioner skipping every other partition

2024-02-16 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818086#comment-17818086
 ] 

Joe Witt commented on NIFI-12809:
-

[~slyouts] Did you review any Kafka bugs for the noted scenario?  I see 
https://issues.apache.org/jira/browse/KAFKA-13180 that sounds very closely 
related.


> PublishKafkaRecord_2_6 - RoundRobin partitioner skipping every other partition
> --
>
> Key: NIFI-12809
> URL: https://issues.apache.org/jira/browse/NIFI-12809
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.25.0
>Reporter: Steven Youtsey
>Priority: Major
>  Labels: kafka, partitioners, publish
>
> When configured to use the NiFi RoundRobin partitioner, the processor 
> publishes to every other partition. If the number of partitions in the topic 
> and the number of records being published are the right combination, this 
> problem is masked. We see this issue when we set the partitions to 26, but 
> not when set to 25. 
> I took a code-dive into the o.a.k.c.producer.KafkaProducer and discovered 
> that it is invoking the Partitioner twice when a "new batch" is created. 
> Thus, the RoundRobin partitioner bumps the index by 2. If the RoundRobin 
> partitioner overrode the onNewBatch method, this problem could be solved.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12792) Deprecate Spark Livy Components for Removal

2024-02-14 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12792:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Deprecate Spark Livy Components for Removal
> ---
>
> Key: NIFI-12792
> URL: https://issues.apache.org/jira/browse/NIFI-12792
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 1.26.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{nifi-spark-bundle}} contains several components for interacting with 
> [Apache Livy|https://livy.apache.org/], which supports executing Spark jobs 
> over HTTP. The Apache Livy project has graduated from incubation after 
> several years and the NiFi Spark Livy components are not published as part of 
> standard binary builds. For these reasons, the components should be 
> deprecated on the support branch for subsequent removal from the main branch.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12770) Deprecate Apache Ranger Integration for Removal

2024-02-14 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12770:

Fix Version/s: 1.26.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Deprecate Apache Ranger Integration for Removal
> ---
>
> Key: NIFI-12770
> URL: https://issues.apache.org/jira/browse/NIFI-12770
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 1.26.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Apache Ranger integration should be deprecated on the support branch for 
> subsequent removal on the main branch due to incompatibilities with Jetty 12 
> and related libraries. The Ranger plugins require Jetty 9, which was marked 
> as [End of Community 
> Support|https://github.com/jetty/jetty.project/issues/7958] in June 2022.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12793) Remove Spark Livy Components

2024-02-14 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12793:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Remove Spark Livy Components
> 
>
> Key: NIFI-12793
> URL: https://issues.apache.org/jira/browse/NIFI-12793
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The {{nifi-spark-bundle}} with associated components for interacting with 
> Spark using Apache Livy should be removed from the main branch as part of 
> maintenance efforts for NiFi 2.0.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-12765) Nifi and nifi registry ranger audit is broken

2024-02-14 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt resolved NIFI-12765.
-
Fix Version/s: 2.0.0
   Resolution: Fixed

> Nifi and nifi registry ranger audit is broken
> -
>
> Key: NIFI-12765
> URL: https://issues.apache.org/jira/browse/NIFI-12765
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 2.0.0-M2, 2.0.0
>Reporter: Zoltán Kornél Török
>Assignee: Zoltán Kornél Török
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> h3. Bug description
> Currently ranger plugins are not reporting audit events into ranger.
> h2. Investigation
> In the nifi log I found the following ("classic") NoClassDefFoundError:
> {code:java}
> ERROR org.apache.ranger.audit.destination.SolrAuditDestination: Can't connect 
> to Solr server. 
> ZooKeepers=cfm-oudjal-dd-master0.cfm-5pax.svbr-nqvp.int.cldr.work:2181/solr-infra
> java.lang.NoClassDefFoundError: org/eclipse/jetty/client/util/SPNEGOAuthentication
>   at 
> org.apache.ranger.audit.destination.SolrAuditDestination.connect(SolrAuditDestination.java:168)
>   at 
> org.apache.ranger.audit.destination.SolrAuditDestination.log(SolrAuditDestination.java:227)
>   at 
> org.apache.ranger.audit.queue.AuditBatchQueue.runLogAudit(AuditBatchQueue.java:309)
>   at 
> org.apache.ranger.audit.queue.AuditBatchQueue.run(AuditBatchQueue.java:215)
>   at java.base/java.lang.Thread.run(Thread.java:1583)
> Caused by: java.lang.ClassNotFoundException: 
> org.eclipse.jetty.client.util.SPNEGOAuthentication
>   at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:445)
>   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:593)
>   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:526)
>   ... 5 common frames omitted {code}
> As you can see, ranger-audit depends on the Solr client, which depends on the 
> Jetty client.
> The problem is that the Solr client class uses 
> org.eclipse.jetty.client.util.SPNEGOAuthentication - 
> [https://github.infra.cloudera.com/CDH/solr/blob/solr9-master/solr/solrj/src/java/org/apache/solr/client/solrj/impl/Krb5HttpClientBuilder.java#L46]
> However, in the Jetty 12.x line this class has moved to another package: 
> [https://github.com/jetty/jetty.project/commit/a1c5cefd0d5657df04e5364cca9315aa4e2a1aef]
>  
> So the problem has existed since the Jetty version was upgraded to 12.
> h2. Proposed solution
> Sadly there is no available Solr client (or Ranger client) without this 
> dependency. The only solution I found (and propose in my PR) is to override the 
> Jetty version for the Ranger plugins to the Jetty 11 line, where this class has 
> not moved. I tested it in my environment and audit logging to Ranger worked 
> well with that version.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12770) Deprecate Apache Ranger Integration for Removal

2024-02-12 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816719#comment-17816719
 ] 

Joe Witt commented on NIFI-12770:
-

[~dstiegli1] We generally set fix versions on things when we're merging them.

> Deprecate Apache Ranger Integration for Removal
> ---
>
> Key: NIFI-12770
> URL: https://issues.apache.org/jira/browse/NIFI-12770
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Apache Ranger integration should be deprecated on the support branch for 
> subsequent removal on the main branch due to incompatibilities with Jetty 12 
> and related libraries. The Ranger plugins require Jetty 9, which was marked 
> as [End of Community 
> Support|https://github.com/jetty/jetty.project/issues/7958] in June 2022.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12781) Remove file size from UPLOAD provenance event

2024-02-12 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816717#comment-17816717
 ] 

Joe Witt commented on NIFI-12781:
-

Lehel can you elaborate more on what the purpose of this JIRA is?

The subject says you want to remove the file size.  The body says the file size 
is the primary concern.

The point of provenance data is to give an accurate accounting of what was done 
to/for data and where data came from or went to. What is the proposed outcome 
of this change?

Thanks

> Remove file size from UPLOAD provenance event
> -
>
> Key: NIFI-12781
> URL: https://issues.apache.org/jira/browse/NIFI-12781
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Lehel Boér
>Assignee: Lehel Boér
>Priority: Major
>
> Introducing a file location in the UPLOAD provenance event adds complexity, 
> particularly in regard to modifications in the FileResourceService. Given 
> that the primary concern is displaying file size in the Status History, for 
> now it's advisable to remove the ProvenanceFileResource class. Since the 
> UPLOAD event hasn't been utilized, there are no backward compatibility issues.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12772) Remote poll batch size not exposed for ListSFTP

2024-02-09 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816245#comment-17816245
 ] 

Joe Witt commented on NIFI-12772:
-

The challenge here is whether the underlying library makes it possible to 
control that.  If it does then great.  If not then...not sure how to solve.  A 
similar difficulty existed back in the day for ListFile or equivalent cases.  
Java itself didn't expose a way to control this.  You could bring down a system 
just by making a listing call in a large directory.  New IO mechanisms resolved 
that scenario but SFTP is a different animal.

Love the idea just need to make sure the library exposes that power.

> Remote poll batch size not exposed for ListSFTP
> ---
>
> Key: NIFI-12772
> URL: https://issues.apache.org/jira/browse/NIFI-12772
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.25.0
>Reporter: Tom Brisland
>Priority: Minor
>
> I was planning on adding support for a batch size in ListSFTP due to some 
> issues we're seeing with a large number of files on an SFTP server.
> Thankfully, it seems that REMOTE_POLL_BATCH_SIZE is already supported in all 
> the FTP processor variations, just not exposed in ListSFTP.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-6344) Add Failure Relationship to UpdateAttribute

2024-02-09 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816244#comment-17816244
 ] 

Joe Witt commented on NIFI-6344:


Given the new capabilities for migrating configs in NiFi 2.0 we can fix this.

Add a property to UpdateAttribute called 'Failure Strategy' with the options 
'rollback' or 'route to failure'.  If that property is set to 'rollback' it 
behaves like it does now, and I recommend that remain the default.  If that 
property is set to 'route to failure' then we add a relationship, which is of 
course called 'failure'.  For flows being migrated from a version before this 
behavior was available to a version that has this capability, we just set the 
value of this property to our default.

This lets existing flows migrate over just fine.  It gives users a failure path 
for the cases where they want one.  It keeps the vast majority of flows and uses 
of this, where failure is not relevant, clean.  And it handles migration.

The processor needs to be updated to catch the exceptions and then follow this 
logic.  Today it just lets them fly to the framework, which causes the processor 
to yield and penalizes the FlowFile for the default time.  When catching the 
problem ourselves, we should avoid yielding and instead penalize the specific 
offending FlowFile, which lets everything else operate super fast.

Thanks to Mark Payne for the chat on this.
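
A minimal sketch of what the property, relationship, and failure handling could 
look like in the processor API follows; the names, defaults, and wiring are 
illustrative placeholders, not the eventual implementation.

{code:java}
import org.apache.nifi.components.AllowableValue;
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;

// Names, defaults, and wiring here are illustrative only.
public class UpdateAttributeFailureStrategySketch {

    static final AllowableValue ROLLBACK = new AllowableValue("rollback", "Rollback",
            "Penalize the FlowFile and leave it on the inbound queue (current behavior).");
    static final AllowableValue ROUTE_TO_FAILURE = new AllowableValue("route-to-failure", "Route to Failure",
            "Transfer the offending FlowFile to the 'failure' relationship.");

    static final PropertyDescriptor FAILURE_STRATEGY = new PropertyDescriptor.Builder()
            .name("Failure Strategy")
            .description("What to do when evaluating Expression Language against a FlowFile fails.")
            .allowableValues(ROLLBACK, ROUTE_TO_FAILURE)
            .defaultValue(ROLLBACK.getValue())
            .required(true)
            .build();

    static final Relationship REL_FAILURE = new Relationship.Builder()
            .name("failure")
            .description("FlowFiles whose attributes could not be updated when the strategy is 'Route to Failure'.")
            .build();

    // Called when an attribute update throws: penalize only the offending FlowFile so the rest
    // of the queue keeps moving, rather than yielding the whole processor.
    static void handleFailure(final ProcessSession session, final FlowFile flowFile, final String strategy) {
        if (ROUTE_TO_FAILURE.getValue().equals(strategy)) {
            session.transfer(session.penalize(flowFile), REL_FAILURE);
        } else {
            // Default: preserve the existing rollback behavior, penalizing the FlowFile.
            session.rollback(true);
        }
    }
}
{code}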

> Add Failure Relationship to UpdateAttribute
> ---
>
> Key: NIFI-6344
> URL: https://issues.apache.org/jira/browse/NIFI-6344
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.9.2
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Minor
>  Labels: attribute, backwards-compatibility, expression-language, 
> routing
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> EL makes it possible for an UpdateAttribute processor to fail. When this 
> happens the FlowFile is rolled back, and there is no way to route it to 
> handle the failure automatically.
> Considerations:
> UpdateAttribute is used in probably all but the simplest of flows, thus any 
> change made to support a failure relationship must be handled delicately. The 
> goal of this change is for users to have no change in functionality unless 
> they specifically configure it.
> Proposal: 
> It was proposed on the Slack channel to create the failure relationship, but 
> default it to auto-terminate. This is a good start, but without further work it 
> would result in a change in functionality. I propose that we default to 
> auto-terminate, but also detect this behavior in the code. If the Failure 
> relationship is set to auto-terminate then we will roll back the transaction.
> The only downside I see with this is you can't actually auto-terminate 
> Failures without the addition of another property, such as Failure Behavior: 
> Route to Failure and Rollback options.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-5448) Failed EL date parsing live-locks processors without a failure relationship

2024-02-09 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-5448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt resolved NIFI-5448.

Resolution: Duplicate

Will address the fundamental concern in this other jira

> Failed EL date parsing live-locks processors without a failure relationship
> ---
>
> Key: NIFI-5448
> URL: https://issues.apache.org/jira/browse/NIFI-5448
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: David Koster
>Assignee: Mike Thomsen
>Priority: Major
>
> Processors that utilize the Expression Language need to always present a 
> failure relationship.
> If a processor with only a success relationship, for example UpdateAttribute, 
> utilizes the expression language to perform type coercion to a date and 
> fails, the processor will be unable to dispose of the FlowFile and will remain 
> blocked indefinitely.
> Recreation flow:
> GenerateFlowFile -> Update Attribute #1 -> Update Attribute #2 -> Anything
> Update Attribute #1 - test = "Hello World"
> Update Attribute #2 - test = ${test:toDate('-MM-dd')}
>  
> Generates an IllegalAttributeException on UpdateAttribute.
>  
> The behavior should match numerical type coercion and silently skip the 
> processing, or offer failure relationships on processors supporting EL.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

