[jira] [Created] (NIFI-12613) Align type-safe access to allowableValue with its declaration

2024-01-15 Thread endzeit (Jira)
endzeit created NIFI-12613:
--

 Summary: Align type-safe access to allowableValue with its 
declaration 
 Key: NIFI-12613
 URL: https://issues.apache.org/jira/browse/NIFI-12613
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: endzeit
Assignee: endzeit


NIFI-12452 introduced a new method on {{PropertyValue}} to type-safely access a 
property with allowableValues constrained by an Enum. 
{code:java}
<E extends Enum<E> & DescribedValue> E asDescribedValue(Class<E> enumType) 
throws IllegalArgumentException
{code}
I think it makes sense to align the access site in {{PropertyValue}} with the 
declaration site in {{PropertyDescriptor.Builder}}.

This would involve renaming the method to {{asAllowableValue}} for improved 
symmetry.
This is a breaking change; however, the method was never part of a stable 
release.

Additionally, NIFI-12573 unified the behaviour of specifying Enums (not) 
implementing {{DescribedValue}} as allowableValues. With this change in place, 
I think it's reasonable to open the method to accept any Enum as well. 
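As a self-contained illustration of the proposed shape (simplified stand-ins, not the actual nifi-api types; the matching logic below is an assumption, not the real implementation):

```java
// Simplified stand-in for org.apache.nifi.components.DescribedValue.
interface DescribedValue {
    String getValue();
    String getDisplayName();
    String getDescription();
}

// Hypothetical PropertyValue fragment sketching the proposed rename of
// asDescribedValue to asAllowableValue, widened to accept any Enum.
class PropertyValueSketch {
    private final String rawValue;

    PropertyValueSketch(String rawValue) {
        this.rawValue = rawValue;
    }

    // When a constant also implements DescribedValue, match on getValue();
    // otherwise fall back to the constant name.
    <E extends Enum<E>> E asAllowableValue(Class<E> enumType) {
        for (E constant : enumType.getEnumConstants()) {
            String value = constant instanceof DescribedValue
                    ? ((DescribedValue) constant).getValue()
                    : constant.name();
            if (value.equals(rawValue)) {
                return constant;
            }
        }
        throw new IllegalArgumentException("No enum constant for value: " + rawValue);
    }
}

enum Compression implements DescribedValue {
    NONE, GZIP;

    @Override public String getValue() { return name(); }
    @Override public String getDisplayName() { return name(); }
    @Override public String getDescription() { return "Compression codec " + name(); }
}
```

A call site would then read `value.asAllowableValue(Compression.class)`, mirroring `allowableValues(Compression.class)` on the declaration side.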



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] NIFI-12593 - ValidateCSV - get all constraint violations for an invalid line [nifi]

2024-01-15 Thread via GitHub


pvillard31 commented on PR #8229:
URL: https://github.com/apache/nifi/pull/8229#issuecomment-1893191565

   I've pushed changes to address the review comments. Thanks @dan-s1 
@exceptionfactory.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (NIFI-12612) Fix JASN1Reader to handle OBJECT IDENTIFIER

2024-01-15 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-12612.
---
Fix Version/s: 1.25.0
   2.0.0-M2
   Resolution: Fixed

> Fix JASN1Reader to handle OBJECT IDENTIFIER
> ---
>
> Key: NIFI-12612
> URL: https://issues.apache.org/jira/browse/NIFI-12612
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Tamas Palfy
>Assignee: Tamas Palfy
>Priority: Major
> Fix For: 1.25.0, 2.0.0-M2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> OBJECT IDENTIFIER is a special ASN.1 type that is not recognized by 
> JASN1Reader. Instead, it falls back to handling it as a Record.
> The fix is to simply treat (really, convert) OBJECT IDENTIFIER types as 
> strings.
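As a rough sketch of that string treatment (hedged: jASN1's actual types are not shown; an OBJECT IDENTIFIER is modeled here simply as its sequence of integer arcs):

```java
import java.util.Arrays;
import java.util.stream.Collectors;

// An OBJECT IDENTIFIER is a sequence of integer arcs; rendering it as a
// dotted string is the conversion the fix describes, instead of falling
// back to Record handling.
class OidToString {
    static String toDottedString(int[] arcs) {
        return Arrays.stream(arcs)
                .mapToObj(String::valueOf)
                .collect(Collectors.joining("."));
    }
}
```

For example, the arcs `{1, 3, 6, 1}` render as `"1.3.6.1"`.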





[jira] [Commented] (NIFI-12612) Fix JASN1Reader to handle OBJECT IDENTIFIER

2024-01-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807071#comment-17807071
 ] 

ASF subversion and git services commented on NIFI-12612:


Commit da7c9bcddb46a5862da542bde48a73805a00d878 in nifi's branch 
refs/heads/main from Tamas Palfy
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=da7c9bcddb ]

NIFI-12612 In asn1 bundle handle OBJECT IDENTIFIER type as string.

Signed-off-by: Pierre Villard 

This closes #8247.


> Fix JASN1Reader to handle OBJECT IDENTIFIER
> ---
>
> Key: NIFI-12612
> URL: https://issues.apache.org/jira/browse/NIFI-12612
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Tamas Palfy
>Assignee: Tamas Palfy
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> OBJECT IDENTIFIER is a special ASN.1 type that is not recognized by 
> JASN1Reader. Instead, it falls back to handling it as a Record.
> The fix is to simply treat (really, convert) OBJECT IDENTIFIER types as 
> strings.





[jira] [Commented] (NIFI-12612) Fix JASN1Reader to handle OBJECT IDENTIFIER

2024-01-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807072#comment-17807072
 ] 

ASF subversion and git services commented on NIFI-12612:


Commit d446c0c90b5eba3ff813f711c5a69d88e07bc988 in nifi's branch 
refs/heads/support/nifi-1.x from Tamas Palfy
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=d446c0c90b ]

NIFI-12612 In asn1 bundle handle OBJECT IDENTIFIER type as string.

Signed-off-by: Pierre Villard 

This closes #8247.


> Fix JASN1Reader to handle OBJECT IDENTIFIER
> ---
>
> Key: NIFI-12612
> URL: https://issues.apache.org/jira/browse/NIFI-12612
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Tamas Palfy
>Assignee: Tamas Palfy
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> OBJECT IDENTIFIER is a special ASN.1 type that is not recognized by 
> JASN1Reader. Instead, it falls back to handling it as a Record.
> The fix is to simply treat (really, convert) OBJECT IDENTIFIER types as 
> strings.





Re: [PR] NIFI-12612 In asn1 bundle handle OBJECT IDENTIFIER type as string. [nifi]

2024-01-15 Thread via GitHub


asfgit closed pull request #8247: NIFI-12612 In asn1 bundle handle OBJECT 
IDENTIFIER type as string.
URL: https://github.com/apache/nifi/pull/8247





Re: [PR] NIFI-12554 Placed all Jolt related code under a new nifi-jolt-bundle and refactored to reduce duplicate code. [nifi]

2024-01-15 Thread via GitHub


EndzeitBegins commented on PR #8249:
URL: https://github.com/apache/nifi/pull/8249#issuecomment-1893105650

   Thank you for working on this @dan-s1.  
   
   I haven't done a full code review so I might've just overlooked something.
   But I noticed that the package names of the processors change due to the 
move from the standard-nar to the new jolt-nar. 
   From my understanding this will result in the processors needing to be 
replaced manually on the canvas between version updates, or is there a 
migration in place for this sort of change?
   If not, and the user needs to replace those processors themselves, I don't 
think we need to migrate properties as there won't be any instance of the 
processor existing anyway.





Re: [PR] NIFI-12441 Added No Tracking listing strategy to ListS3 [nifi]

2024-01-15 Thread via GitHub


juldrixx commented on PR #8088:
URL: https://github.com/apache/nifi/pull/8088#issuecomment-1892841189

   > @juldrixx There appears to be a test failure in the `ci-workflow` for the 
new listing strategy:
   > 
   > ```
   > org.apache.nifi.processors.aws.s3.TestListS3.testNoTrackingList -- Time 
elapsed: 0.020 s <<< FAILURE
   > ```
   
   Sorry, I didn't see that the unit test results had changed when I rebased. 
It should be fine now.





Re: [PR] NIFI-12554 Placed all Jolt related code under a new nifi-jolt-bundle and refactored to reduce duplicate code. [nifi]

2024-01-15 Thread via GitHub


dan-s1 commented on code in PR #8249:
URL: https://github.com/apache/nifi/pull/8249#discussion_r1452774911


##
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-nar/src/main/resources/META-INF/NOTICE:
##
@@ -10,7 +10,7 @@ This product includes the following work from the Apache 
Hadoop project under Ap
 This includes derived works from the Apache Software License V2 library Jolt 
(https://github.com/bazaarvoice/jolt)
   Copyright 2013-2014 Bazaarvoice, Inc
   The derived work is adapted from 
com.bazaarvoice.jolt.chainr.ChainrBuilder.java, 
com.bazaarvoice.jolt.chainr.spec.ChainrSpec.java,
-  com.bazaarvoice.jolt.chainr.spec.ChainrEntry.java and can be found in the 
org.apache.nifi.processors.standard.util.jolt.TransformFactory.java class.
+  com.bazaarvoice.jolt.chainr.spec.ChainrEntry.java and can be found in the 
util.org.apache.nifi.processors.jolt.TransformFactory.java class.

Review Comment:
   Yeah I noticed that also after I pushed.  You beat me to it :)






Re: [PR] NIFI-12554 Placed all Jolt related code under a new nifi-jolt-bundle and refactored to reduce duplicate code. [nifi]

2024-01-15 Thread via GitHub


dan-s1 commented on code in PR #8249:
URL: https://github.com/apache/nifi/pull/8249#discussion_r1452773110


##
nifi-nar-bundles/nifi-standard-bundle/pom.xml:
##
@@ -265,7 +290,7 @@
 
 com.networknt
 json-schema-validator
-1.1.0
+1.0.87

Review Comment:
   Kind of weird, as I updated main and then rebased my branch onto main; I am 
not sure why all these changes were not picked up.






Re: [PR] NIFI-12554 Placed all Jolt related code under a new nifi-jolt-bundle and refactored to reduce duplicate code. [nifi]

2024-01-15 Thread via GitHub


exceptionfactory commented on code in PR #8249:
URL: https://github.com/apache/nifi/pull/8249#discussion_r1452770561


##
nifi-nar-bundles/nifi-jolt-bundle/pom.xml:
##
@@ -57,6 +45,22 @@
 json-utils
 ${jolt.version}
 
+
+javax.servlet.jsp
+javax.servlet.jsp-api
+2.3.3
+
+
+javax.ws.rs
+javax.ws.rs-api
+2.1.1
+

Review Comment:
   These dependencies are the old versions that are no longer current following 
the upgrade to Spring 6 and Jetty 12. These should be replaced with the new 
Jakarta equivalent versions.



##
nifi-nar-bundles/nifi-standard-bundle/pom.xml:
##
@@ -101,7 +99,7 @@
 
 com.hierynomus
 sshj
-0.38.0
+0.37.0

Review Comment:
   This change should be reverted.



##
nifi-nar-bundles/nifi-standard-bundle/pom.xml:
##
@@ -210,6 +208,33 @@
 json-flattener
 0.16.6
 
+
+org.apache.bval
+bval-jsr
+1.1.2
+
+
+org.apache.tomcat
+tomcat-el-api
+
+
+org.apache.geronimo.specs
+geronimo-annotation_1.2_spec
+
+
+org.apache.geronimo.specs
+geronimo-jcdi_1.1.spec
+
+
+org.apache.geronimo.specs
+geronimo-jpa_2.0.spec
+
+
+org.apache.bval
+bval-xstream
+
+
+

Review Comment:
   This dependency should be removed as it is not used.



##
nifi-nar-bundles/nifi-standard-bundle/pom.xml:
##
@@ -80,18 +77,19 @@
 2.0.0-SNAPSHOT
 
 
-com.bazaarvoice.jolt
-jolt-core
-${jolt.version}
+javax.servlet.jsp
+javax.servlet.jsp-api
+2.3.3
 
 
-com.bazaarvoice.jolt
-json-utils
-${jolt.version}
+javax.servlet
+javax.servlet-api
+3.1.0
 
 
-jakarta.ws.rs
-jakarta.ws.rs-api
+javax.ws.rs
+javax.ws.rs-api
+2.1.1

Review Comment:
   These javax dependencies are no longer current and should be removed.



##
nifi-nar-bundles/nifi-standard-bundle/pom.xml:
##
@@ -265,7 +290,7 @@
 
 com.networknt
 json-schema-validator
-1.1.0
+1.0.87

Review Comment:
   This change should be reverted.



##
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-nar/src/main/resources/META-INF/NOTICE:
##
@@ -10,7 +10,7 @@ This product includes the following work from the Apache 
Hadoop project under Ap
 This includes derived works from the Apache Software License V2 library Jolt 
(https://github.com/bazaarvoice/jolt)
   Copyright 2013-2014 Bazaarvoice, Inc
   The derived work is adapted from 
com.bazaarvoice.jolt.chainr.ChainrBuilder.java, 
com.bazaarvoice.jolt.chainr.spec.ChainrSpec.java,
-  com.bazaarvoice.jolt.chainr.spec.ChainrEntry.java and can be found in the 
org.apache.nifi.processors.standard.util.jolt.TransformFactory.java class.
+  com.bazaarvoice.jolt.chainr.spec.ChainrEntry.java and can be found in the 
util.org.apache.nifi.processors.jolt.TransformFactory.java class.

Review Comment:
   This appears to be incorrect and should be reverted.






[PR] NIFI-12554 Placed all Jolt related code under a new nifi-jolt-bundle and refactored to reduce duplicate code. [nifi]

2024-01-15 Thread via GitHub


dan-s1 opened a new pull request, #8249:
URL: https://github.com/apache/nifi/pull/8249

   …
   # Summary
   
   [NIFI-12554](https://issues.apache.org/jira/browse/NIFI-12554)
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [ ] Pull Request based on current revision of the `main` branch
   - [ ] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [ ] Build completed using `mvn clean install -P contrib-check`
 - [ ] JDK 21
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   





[jira] [Commented] (NIFI-4491) Add a CaptureChangeMySQLRecord processor

2024-01-15 Thread Matt Burgess (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-4491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17806977#comment-17806977
 ] 

Matt Burgess commented on NIFI-4491:


That would be great!

> Add a CaptureChangeMySQLRecord processor
> 
>
> Key: NIFI-4491
> URL: https://issues.apache.org/jira/browse/NIFI-4491
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Matt Burgess
>Priority: Major
>
> The main reason CaptureChangeMySQL doesn't leverage the RecordSetWriter API 
> is that those capabilities were being developed in parallel with that 
> processor. Whether a new record-aware processor is better than an improvement 
> to the original is up for discussion; however, it would be a good idea to 
> support the RecordSetWriter API for any CDC (CaptureChangeXYZ) processor.





Re: [PR] NIFI-12441 Added No Tracking listing strategy to ListS3 [nifi]

2024-01-15 Thread via GitHub


exceptionfactory commented on PR #8088:
URL: https://github.com/apache/nifi/pull/8088#issuecomment-1892798911

   @juldrixx There appears to be a test failure in the `ci-workflow` for the 
new listing strategy:
   
   ```
   org.apache.nifi.processors.aws.s3.TestListS3.testNoTrackingList -- Time 
elapsed: 0.020 s <<< FAILURE
   ```





[jira] [Updated] (NIFI-12573) Improve support for Enum values in PropertyDescriptor.Builder

2024-01-15 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-12573:

Fix Version/s: 2.0.0-M2
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Improve support for Enum values in PropertyDescriptor.Builder
> -
>
> Key: NIFI-12573
> URL: https://issues.apache.org/jira/browse/NIFI-12573
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: endzeit
>Assignee: endzeit
>Priority: Minor
> Fix For: 2.0.0-M2
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The {{PropertyDescriptor.Builder}} provides several methods accepting Enum 
> values. 
> However, some of those are restricted to Enums that implement 
> {{DescribedValue}} while others are not.
> This limitation should be lifted to provide support both for Enums that 
> implement {{DescribedValue}} and for those that do not.
> Additionally, parameters should not be restricted to {{AllowableValue}} but 
> rather to the underlying interface {{DescribedValue}}. 
> This affects the following methods:
> {code:java}
> public <E extends Enum<E> & DescribedValue> Builder allowableValues(final Class<E> enumClass)
> -> public <E extends Enum<E>> Builder allowableValues(final Class<E> enumClass)
> public <E extends Enum<E> & DescribedValue> Builder allowableValues(final EnumSet<E> enumValues)
> -> public <E extends Enum<E>> Builder allowableValues(final EnumSet<E> enumValues)
> public Builder allowableValues(final AllowableValue... values)
> -> public Builder allowableValues(final DescribedValue... values)
> {code}
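A self-contained sketch of the loosened bound (stand-in types only; the real builder records AllowableValue entries, which is omitted here):

```java
import java.util.EnumSet;
import java.util.LinkedHashSet;
import java.util.Set;

// Stand-in builder showing the effect of dropping the DescribedValue bound:
// plain enums are accepted alongside enums that implement DescribedValue.
class BuilderSketch {
    private final Set<String> allowable = new LinkedHashSet<>();

    <E extends Enum<E>> BuilderSketch allowableValues(Class<E> enumClass) {
        return allowableValues(EnumSet.allOf(enumClass));
    }

    <E extends Enum<E>> BuilderSketch allowableValues(EnumSet<E> enumValues) {
        for (E value : enumValues) {
            // The real implementation can still honor DescribedValue.getValue()
            // via a runtime check; the constant name is the fallback.
            allowable.add(value.name());
        }
        return this;
    }

    Set<String> allowableNames() {
        return allowable;
    }
}

// A plain enum, previously rejected by the DescribedValue-bounded overloads.
enum Mode { STRICT, LENIENT }
```

With this shape, `new BuilderSketch().allowableValues(Mode.class)` compiles even though `Mode` does not implement `DescribedValue`.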





[jira] [Commented] (NIFI-12573) Improve support for Enum values in PropertyDescriptor.Builder

2024-01-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17806968#comment-17806968
 ] 

ASF subversion and git services commented on NIFI-12573:


Commit 4588c6c37e023c78870dbd6ceb1e6b9738e76471 in nifi's branch 
refs/heads/main from EndzeitBegins
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=4588c6c37e ]

NIFI-12573 Improved support for Enums in PropertyDescriptor.Builder

NIFI-12574 Add clearDefaultValue to PropertyDescriptor.Builder

This closes #8211

Signed-off-by: David Handermann 


> Improve support for Enum values in PropertyDescriptor.Builder
> -
>
> Key: NIFI-12573
> URL: https://issues.apache.org/jira/browse/NIFI-12573
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: endzeit
>Assignee: endzeit
>Priority: Minor
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The {{PropertyDescriptor.Builder}} provides several methods accepting Enum 
> values. 
> However, some of those are restricted to Enums that implement 
> {{DescribedValue}} while others are not.
> This limitation should be lifted to provide support both for Enums that 
> implement {{DescribedValue}} and for those that do not.
> Additionally, parameters should not be restricted to {{AllowableValue}} but 
> rather to the underlying interface {{DescribedValue}}. 
> This affects the following methods:
> {code:java}
> public <E extends Enum<E> & DescribedValue> Builder allowableValues(final Class<E> enumClass)
> -> public <E extends Enum<E>> Builder allowableValues(final Class<E> enumClass)
> public <E extends Enum<E> & DescribedValue> Builder allowableValues(final EnumSet<E> enumValues)
> -> public <E extends Enum<E>> Builder allowableValues(final EnumSet<E> enumValues)
> public Builder allowableValues(final AllowableValue... values)
> -> public Builder allowableValues(final DescribedValue... values)
> {code}





[jira] [Commented] (NIFI-12574) Add clearDefaultValue to PropertyDescriptor.Builder

2024-01-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17806969#comment-17806969
 ] 

ASF subversion and git services commented on NIFI-12574:


Commit 4588c6c37e023c78870dbd6ceb1e6b9738e76471 in nifi's branch 
refs/heads/main from EndzeitBegins
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=4588c6c37e ]

NIFI-12573 Improved support for Enums in PropertyDescriptor.Builder

NIFI-12574 Add clearDefaultValue to PropertyDescriptor.Builder

This closes #8211

Signed-off-by: David Handermann 


> Add clearDefaultValue to PropertyDescriptor.Builder
> ---
>
> Key: NIFI-12574
> URL: https://issues.apache.org/jira/browse/NIFI-12574
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: endzeit
>Assignee: endzeit
>Priority: Minor
>
> At the moment there is no way to remove a {{defaultValue}} from a 
> {{PropertyDescriptor}}. 
> There is a method to effectively copy an existing {{PropertyDescriptor}} to 
> create a new one. However, once a {{PropertyDescriptor}} has a defaultValue, 
> it cannot be removed.
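A minimal sketch of the problem and the proposed method (stand-in types; `fromPropertyDescriptor` mirrors the existing copy mechanism, and `clearDefaultValue` is the addition):

```java
// Stand-in for PropertyDescriptor with a copy-style builder.
class DescriptorSketch {
    final String name;
    final String defaultValue;

    DescriptorSketch(String name, String defaultValue) {
        this.name = name;
        this.defaultValue = defaultValue;
    }

    static class Builder {
        private String name;
        private String defaultValue;

        // Mirrors the existing copy mechanism: all fields are carried over,
        // including the default value.
        Builder fromPropertyDescriptor(DescriptorSketch other) {
            this.name = other.name;
            this.defaultValue = other.defaultValue;
            return this;
        }

        Builder defaultValue(String value) {
            this.defaultValue = value;
            return this;
        }

        // The proposed addition: explicitly drop an inherited default value.
        Builder clearDefaultValue() {
            this.defaultValue = null;
            return this;
        }

        DescriptorSketch build() {
            return new DescriptorSketch(name, defaultValue);
        }
    }
}
```

Without `clearDefaultValue()`, a copied descriptor can only overwrite the default with another non-null value, never remove it.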





[jira] [Resolved] (NIFI-12574) Add clearDefaultValue to PropertyDescriptor.Builder

2024-01-15 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann resolved NIFI-12574.
-
Fix Version/s: 2.0.0-M2
   Resolution: Fixed

> Add clearDefaultValue to PropertyDescriptor.Builder
> ---
>
> Key: NIFI-12574
> URL: https://issues.apache.org/jira/browse/NIFI-12574
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: endzeit
>Assignee: endzeit
>Priority: Minor
> Fix For: 2.0.0-M2
>
>
> At the moment there is no way to remove a {{defaultValue}} from a 
> {{PropertyDescriptor}}. 
> There is a method to effectively copy an existing {{PropertyDescriptor}} to 
> create a new one. However, once a {{PropertyDescriptor}} has a defaultValue, 
> it cannot be removed.





Re: [PR] NIFI-12573 Improve support for Enum values in PropertyDescriptor.Builder [nifi]

2024-01-15 Thread via GitHub


exceptionfactory closed pull request #8211: NIFI-12573 Improve support for Enum 
values in PropertyDescriptor.Builder
URL: https://github.com/apache/nifi/pull/8211





Re: [PR] NIFI-12573 Improve support for Enum values in PropertyDescriptor.Builder [nifi]

2024-01-15 Thread via GitHub


EndzeitBegins commented on PR #8211:
URL: https://github.com/apache/nifi/pull/8211#issuecomment-1892753997

   As always, thank you for the input and for taking on the merge process, 
@exceptionfactory. 





Re: [PR] NIFI-12573 Improve support for Enum values in PropertyDescriptor.Builder [nifi]

2024-01-15 Thread via GitHub


exceptionfactory commented on PR #8211:
URL: https://github.com/apache/nifi/pull/8211#issuecomment-1892750763

   Thanks for the quick reply @EndzeitBegins. That's a very good point about 
the second scenario, and the first one is also reasonable. With that 
background, I agree with the changes and will proceed with merging. +1





Re: [PR] NIFI-12573 Improve support for Enum values in PropertyDescriptor.Builder [nifi]

2024-01-15 Thread via GitHub


EndzeitBegins commented on PR #8211:
URL: https://github.com/apache/nifi/pull/8211#issuecomment-1892742398

   Thanks for the questions @exceptionfactory.
   First of all, I agree that it should be preferred to let the enums implement 
`DescribedValue` whenever feasible.
   Maybe that's something worth highlighting better in the code documentation. 
   
   In particular, I thought about two situations where the more adaptive 
approach introduced in this PR would be beneficial.
   1. There might be cases where an existing enum (e.g. from a library) is used 
that cannot be altered. Even though I'd personally prefer creating one's own 
enum (implementing DescribedValue) that maps to the library one (similar to the 
azure bundle), I can see how someone might not want to go "the extra mile".
   2. And, by far more important to me, I noticed that when using the existing 
`public <E extends Enum<E>> Builder allowableValues(final E[] values)` with an 
enum that actually implements DescribedValue, those values will be ignored, 
which might come as a surprise. 





[jira] [Updated] (NIFI-12596) PutIceberg is missing case-insensitive Record type handling in List and Map types

2024-01-15 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-12596:

Status: Patch Available  (was: Open)

> PutIceberg is missing case-insensitive Record type handling in List and Map 
> types
> -
>
> Key: NIFI-12596
> URL: https://issues.apache.org/jira/browse/NIFI-12596
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Mark Bathori
>Assignee: Mark Bathori
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> With NIFI-11263, case-insensitive and order-independent field handling was 
> added to Record types, but it is missing for List and Map types containing 
> Records.





[jira] [Updated] (NIFI-12573) Improve support for Enum values in PropertyDescriptor.Builder

2024-01-15 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-12573:

Status: Patch Available  (was: Open)

> Improve support for Enum values in PropertyDescriptor.Builder
> -
>
> Key: NIFI-12573
> URL: https://issues.apache.org/jira/browse/NIFI-12573
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: endzeit
>Assignee: endzeit
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{PropertyDescriptor.Builder}} provides several methods accepting Enum 
> values. 
> However, some of those are restricted to Enums that implement 
> {{DescribedValue}} while others are not.
> This limitation should be lifted to provide support both for Enums that 
> implement {{DescribedValue}} and for those that do not.
> Additionally, parameters should not be restricted to {{AllowableValue}} but 
> rather to the underlying interface {{DescribedValue}}. 
> This affects the following methods:
> {code:java}
> public <E extends Enum<E> & DescribedValue> Builder allowableValues(final Class<E> enumClass)
> -> public <E extends Enum<E>> Builder allowableValues(final Class<E> enumClass)
> public <E extends Enum<E> & DescribedValue> Builder allowableValues(final EnumSet<E> enumValues)
> -> public <E extends Enum<E>> Builder allowableValues(final EnumSet<E> enumValues)
> public Builder allowableValues(final AllowableValue... values)
> -> public Builder allowableValues(final DescribedValue... values)
> {code}





Re: [PR] NIFI-12402 add option to user can avoid an inactive indicator is crea… [nifi]

2024-01-15 Thread via GitHub


exceptionfactory commented on code in PR #8063:
URL: https://github.com/apache/nifi/pull/8063#discussion_r1452722401


##
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/MonitorActivity.java:
##
@@ -105,6 +105,14 @@ public class MonitorActivity extends AbstractProcessor {
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .defaultValue("Activity restored at time: 
${now():format('yyyy/MM/dd HH:mm:ss')} after being inactive for 
${inactivityDurationMillis:toNumber():divide(60000)} minutes")
 .build();
+public static final PropertyDescriptor WAIT_FOR_ACTIVITY = new 
PropertyDescriptor.Builder()
+.name("Wait For Activity")

Review Comment:
   Following the general Title Case convention for property names, recommend 
adjusting `For` to `for`.
   ```suggestion
   .name("Wait for Activity")
   ```



##
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/MonitorActivity.java:
##
@@ -105,6 +105,14 @@ public class MonitorActivity extends AbstractProcessor {
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .defaultValue("Activity restored at time: 
${now():format('yyyy/MM/dd HH:mm:ss')} after being inactive for 
${inactivityDurationMillis:toNumber():divide(60000)} minutes")
 .build();
+public static final PropertyDescriptor WAIT_FOR_ACTIVITY = new 
PropertyDescriptor.Builder()
+.name("Wait For Activity")
+.description("When the processor gets started or restarted, if set 
to true, only send an inactive indicator if there had been activity beforehand. 
"
++ "Otherwise send an inactive indicator even if there had 
not been activity beforehand.")
+.required(false)

Review Comment:
   Since this property has a default value that preserves the existing 
behavior, this new property can be marked as required.
   ```suggestion
   .required(true)
   ```






Re: [PR] NIFI-12313 - Update PutDatabaseRecord.java add property use database table column datatype [nifi]

2024-01-15 Thread via GitHub


exceptionfactory commented on PR #7981:
URL: https://github.com/apache/nifi/pull/7981#issuecomment-1892729012

   @avasquezkudaw I submitted pull request #8248 for NIFI-9458 implementing 
framework support for microsecond and nanosecond precision in schema field 
definitions. There may be other changes required for this particular use case. 
However, in light of those changes and previous discussion, I am closing this 
pull request for now. Feel free to revisit this issue in light of those date 
formatting changes, and we can also evaluate other options as needed on this 
Jira issue.





Re: [PR] NIFI-12313 - Update PutDatabaseRecord.java add property use database table column datatype [nifi]

2024-01-15 Thread via GitHub


exceptionfactory closed pull request #7981: NIFI-12313 - Update 
PutDatabaseRecord.java add property use database table column datatype
URL: https://github.com/apache/nifi/pull/7981





[jira] [Commented] (NIFI-11288) When using AssumeRoleWithWebIdentity method, the STS dependencies is missing.

2024-01-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17806961#comment-17806961
 ] 

ASF subversion and git services commented on NIFI-11288:


Commit 8401ffc1ca9cded74b3ddb8a5a4384003193b6bf in nifi's branch 
refs/heads/support/nifi-1.x from Juldrixx
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=8401ffc1ca ]

NIFI-11288 Add AWS STS dependency for AssumeRoleWithWebIdentity method

This closes #7974

Signed-off-by: David Handermann 
(cherry picked from commit 281a28c5d4ce1f4c70fc5990eecb07a5c1badca5)


> When using AssumeRoleWithWebIdentity method, the STS dependencies is missing.
> -
>
> Key: NIFI-11288
> URL: https://issues.apache.org/jira/browse/NIFI-11288
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions, NiFi Registry
>Reporter: Julian Zhang
>Priority: Minor
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> When NiFi Processors or NiFi Registry accesses AWS services using the 
> AssumeRoleWithWebIdentity method in the AWSCredentialsProviderChain class, 
> the correct credentials are not available due to the lack of STS dependencies.
> {code:java}
> 2023-03-14 15:56:44,700 DEBUG [Timer-Driven Process Thread-2] 
> c.a.auth.AWSCredentialsProviderChain Unable to load credentials from 
> WebIdentityTokenCredentialsProvider: To use assume role profiles the 
> aws-java-sdk-sts module must be on the class path. {code}
> ADDITIONAL INFORMATION: The STS lib needs to be on the class path for the 
> WebIdentityTokenCredentialsProvider to work properly, otherwise it will be 
> skipped in DefaultAWSCredentialsProviderChain.
>  https://github.com/aws/aws-sdk-java/issues/2136
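For reference, the fix amounts to putting the STS module on the class path as a Maven dependency. A sketch of the coordinates (the version shown is illustrative only and should be aligned with the AWS SDK version already in use):

```xml
<!-- Illustrative only: align the version with your existing aws-java-sdk artifacts -->
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-sts</artifactId>
    <version>1.12.565</version>
</dependency>
```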



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-11288) When using AssumeRoleWithWebIdentity method, the STS dependencies is missing.

2024-01-15 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-11288:

Fix Version/s: 1.25.0
   2.0.0-M2
 Assignee: Julien G.
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> When using AssumeRoleWithWebIdentity method, the STS dependencies is missing.
> -
>
> Key: NIFI-11288
> URL: https://issues.apache.org/jira/browse/NIFI-11288
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions, NiFi Registry
>Reporter: Julian Zhang
>Assignee: Julien G.
>Priority: Minor
> Fix For: 1.25.0, 2.0.0-M2
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> When NiFi Processors or NiFi Registry accesses AWS services using the 
> AssumeRoleWithWebIdentity method in the AWSCredentialsProviderChain class, 
> the correct credentials are not available due to the lack of STS dependencies.
> {code:java}
> 2023-03-14 15:56:44,700 DEBUG [Timer-Driven Process Thread-2] 
> c.a.auth.AWSCredentialsProviderChain Unable to load credentials from 
> WebIdentityTokenCredentialsProvider: To use assume role profiles the 
> aws-java-sdk-sts module must be on the class path. {code}
> ADDITIONAL INFORMATION: The STS lib needs to be on the class path for the 
> WebIdentityTokenCredentialsProvider to work properly, otherwise it will be 
> skipped in DefaultAWSCredentialsProviderChain.
>  https://github.com/aws/aws-sdk-java/issues/2136





Re: [PR] NIFI-11288 Add missing dependencies required by AWS AssumeRoleWithWeb… [nifi]

2024-01-15 Thread via GitHub


exceptionfactory closed pull request #7974: NIFI-11288 Add missing dependencies 
required by AWS AssumeRoleWithWeb…
URL: https://github.com/apache/nifi/pull/7974





[jira] [Updated] (NIFI-12593) ValidateCSV - get all constraint violations for an invalid line

2024-01-15 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-12593:

Labels: backport-needed  (was: )

> ValidateCSV - get all constraint violations for an invalid line
> ---
>
> Key: NIFI-12593
> URL: https://issues.apache.org/jira/browse/NIFI-12593
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>  Labels: backport-needed
> Fix For: 1.25.0, 2.0.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Right now the ValidateCSV processor will invalidate a line as soon as a column 
> violates the specified constraint and will move on to the next one. It means that 
> for an invalid line, we'll only know about the first violation, but the line 
> may be invalid for many columns.
> It'd be nice to have the option to get all of the violations for a given 
> line. This should be optional, and false by default, as this would impact the 
> performance of the processing.





[jira] [Updated] (NIFI-12467) HadoopDBCPConnectionPool doesn't use KerberosUserService

2024-01-15 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-12467:

Labels: backport-needed  (was: )

> HadoopDBCPConnectionPool doesn't use KerberosUserService
> 
>
> Key: NIFI-12467
> URL: https://issues.apache.org/jira/browse/NIFI-12467
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>  Labels: backport-needed
>
> NIFI-8978 added some KerberosUserService integration with the 
> DBCPConnectionPool implementations, but for HadoopDBCPConnectionPool its 
> values are only used during customValidate(). They are not read in 
> onEnabled() and will fail when authenticating.
> The workaround is to use the Principal and Keytab/Password properties or a 
> Keytab Credentials Service instead of a KerberosUserService.





[jira] [Updated] (NIFI-12467) HadoopDBCPConnectionPool doesn't use KerberosUserService

2024-01-15 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-12467:

Fix Version/s: (was: 1.25.0)
   (was: 2.0.0)

> HadoopDBCPConnectionPool doesn't use KerberosUserService
> 
>
> Key: NIFI-12467
> URL: https://issues.apache.org/jira/browse/NIFI-12467
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> NIFI-8978 added some KerberosUserService integration with the 
> DBCPConnectionPool implementations, but for HadoopDBCPConnectionPool its 
> values are only used during customValidate(). They are not read in 
> onEnabled() and will fail when authenticating.
> The workaround is to use the Principal and Keytab/Password properties or a 
> Keytab Credentials Service instead of a KerberosUserService.





[jira] [Updated] (NIFI-8932) Add feature to CSVReader to skip N lines at top of the file

2024-01-15 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-8932:
---
Labels: backport-needed  (was: )

> Add feature to CSVReader to skip N lines at top of the file
> ---
>
> Key: NIFI-8932
> URL: https://issues.apache.org/jira/browse/NIFI-8932
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Philipp Korniets
>Assignee: Matt Burgess
>Priority: Minor
>  Labels: backport-needed
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> We have a lot of CSV files where providers add custom headers/footers to valid 
> CSV content.
>  The CSV header is actually the second row. 
> To remove unnecessary data we can use
>  * ReplaceText 
>  * SplitText -> RouteOnAttribute -> MergeContent
> It would be great to have an option in the CSVReader controller to skip N rows 
> from the top/bottom in order to get clean data.
>  * skip N from the top
>  * skip M from the bottom
>  Similar request was developed in FLINK 
> https://issues.apache.org/jira/browse/FLINK-1002
>  
> Data Example:
> {code}
> 7/20/21 2:48:47 AM GMT-04:00  ABB: Blended Rate Calc (X),,,
> distribution_id,Distribution 
> Id,settle_date,group_code,company_name,currency_code,common_account_name,business_date,prod_code,security,class,asset_type
> -1,all,20210719,Repo 21025226,qwerty                                    
> ,EUR,TPSL_21025226   ,19-Jul-21,BRM96ST7   ,ABC 
> 14/09/24,NR,BOND  
> -1,all,20210719,Repo 21025226,qwerty                                    
> ,GBP,RPSS_21025226   ,19-Jul-21,,Total @ -0.11,,
> {code}
> |7/20/21 2:48:47 AM GMT-04:00  ABB: Blended Rate Calc (X)|  |  |  |  |  |  |  
> |  |  |  |  |  
> |distribution_id|Distribution 
> Id|settle_date|group_code|company_name|currency_code|common_account_name|business_date|prod_code|security|class|asset_type|
> |-1|all|20210719|Repo 21025226|qwerty                                    
> |EUR|TPSL_21025226   |19-Jul-21|BRM96ST7   |ABC 
> 14/09/24|NR|BOND  |
> |-1|all|20210719|Repo 21025226|qwerty                                    
> |GBP|RPSS_21025226   |19-Jul-21| |Total @ -0.11| | |





[jira] [Updated] (NIFI-12441) Add No Tracking Strategy to ListS3

2024-01-15 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-12441:

Fix Version/s: (was: 2.0.0)

> Add No Tracking Strategy to ListS3
> --
>
> Key: NIFI-12441
> URL: https://issues.apache.org/jira/browse/NIFI-12441
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Julien G.
>Assignee: Julien G.
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As was done for ListGCS in 
> [NIFI-11891|https://issues.apache.org/jira/browse/NIFI-11891], I want to make 
> it possible to have no state in ListS3 so that I can periodically retrieve 
> all my objects in S3.





[jira] [Updated] (NIFI-12430) Update unboundid-ldapsdk to 6.0.10

2024-01-15 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-12430:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Update unboundid-ldapsdk to 6.0.10
> --
>
> Key: NIFI-12430
> URL: https://issues.apache.org/jira/browse/NIFI-12430
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 2.0.0-M1, 1.24.0
>Reporter: Mike R
>Assignee: Mike R
>Priority: Minor
> Fix For: 1.25.0, 2.0.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Update unboundid-ldapsdk to 6.0.10





[jira] [Updated] (NIFI-8932) Add feature to CSVReader to skip N lines at top of the file

2024-01-15 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-8932:
---
Fix Version/s: (was: 1.25.0)
   (was: 2.0.0)

> Add feature to CSVReader to skip N lines at top of the file
> ---
>
> Key: NIFI-8932
> URL: https://issues.apache.org/jira/browse/NIFI-8932
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Philipp Korniets
>Assignee: Matt Burgess
>Priority: Minor
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> We have a lot of CSV files where providers add custom headers/footers to valid 
> CSV content.
>  The CSV header is actually the second row. 
> To remove unnecessary data we can use
>  * ReplaceText 
>  * SplitText -> RouteOnAttribute -> MergeContent
> It would be great to have an option in the CSVReader controller to skip N rows 
> from the top/bottom in order to get clean data.
>  * skip N from the top
>  * skip M from the bottom
>  Similar request was developed in FLINK 
> https://issues.apache.org/jira/browse/FLINK-1002
>  
> Data Example:
> {code}
> 7/20/21 2:48:47 AM GMT-04:00  ABB: Blended Rate Calc (X),,,
> distribution_id,Distribution 
> Id,settle_date,group_code,company_name,currency_code,common_account_name,business_date,prod_code,security,class,asset_type
> -1,all,20210719,Repo 21025226,qwerty                                    
> ,EUR,TPSL_21025226   ,19-Jul-21,BRM96ST7   ,ABC 
> 14/09/24,NR,BOND  
> -1,all,20210719,Repo 21025226,qwerty                                    
> ,GBP,RPSS_21025226   ,19-Jul-21,,Total @ -0.11,,
> {code}
> |7/20/21 2:48:47 AM GMT-04:00  ABB: Blended Rate Calc (X)|  |  |  |  |  |  |  
> |  |  |  |  |  
> |distribution_id|Distribution 
> Id|settle_date|group_code|company_name|currency_code|common_account_name|business_date|prod_code|security|class|asset_type|
> |-1|all|20210719|Repo 21025226|qwerty                                    
> |EUR|TPSL_21025226   |19-Jul-21|BRM96ST7   |ABC 
> 14/09/24|NR|BOND  |
> |-1|all|20210719|Repo 21025226|qwerty                                    
> |GBP|RPSS_21025226   |19-Jul-21| |Total @ -0.11| | |





[jira] [Updated] (NIFI-12420) Add Sawmill transformation processors

2024-01-15 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-12420:

Fix Version/s: (was: 1.25.0)
   (was: 2.0.0)

> Add Sawmill transformation processors
> -
>
> Key: NIFI-12420
> URL: https://issues.apache.org/jira/browse/NIFI-12420
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Similarly to 
> JoltTransformJSON
> JoltTransformRecord
> JSLTTransformJSON
> It would be nice to have  SawmillTransformJSON and SawmillTransformRecord 
> processors that rely on the Sawmill transformation DSL
> https://github.com/logzio/sawmill
> https://github.com/logzio/sawmill/wiki
> to transform input data.





[jira] [Updated] (NIFI-9458) Refactor NiFi to use Java 8 Time classes

2024-01-15 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-9458:
---
Fix Version/s: 2.0.0

> Refactor NiFi to use Java 8 Time classes
> 
>
> Key: NIFI-9458
> URL: https://issues.apache.org/jira/browse/NIFI-9458
> Project: Apache NiFi
>  Issue Type: Epic
>  Components: Core Framework, Extensions
>Reporter: Pierre Villard
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In order to support microseconds or even nanoseconds in NiFi, we should 
> consider refactoring NiFi code to leverage Java 8 Time classes. This is going 
> to be a significant amount of work, so this epic was created to break down the 
> work into smaller pieces.





[jira] [Updated] (NIFI-9458) Refactor NiFi to use Java 8 Time classes

2024-01-15 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-9458:
---
Status: Patch Available  (was: Open)

> Refactor NiFi to use Java 8 Time classes
> 
>
> Key: NIFI-9458
> URL: https://issues.apache.org/jira/browse/NIFI-9458
> Project: Apache NiFi
>  Issue Type: Epic
>  Components: Core Framework, Extensions
>Reporter: Pierre Villard
>Assignee: David Handermann
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In order to support microseconds or even nanoseconds in NiFi, we should 
> consider refactoring NiFi code to leverage Java 8 Time classes. This is going 
> to be a significant amount of work, so this epic was created to break down the 
> work into smaller pieces.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] NIFI-9458 Replace SimpleDateFormat with DateTimeFormatter [nifi]

2024-01-15 Thread via GitHub


exceptionfactory opened a new pull request, #8248:
URL: https://github.com/apache/nifi/pull/8248

   # Summary
   
   [NIFI-9458](https://issues.apache.org/jira/browse/NIFI-9458) Replaces direct 
use of 
[java.text.SimpleDateFormat](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html)
 with 
[java.time.format.DateTimeFormatter](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html)
 across component utilities and test classes.
   
   `SimpleDateFormat` handles parsing and formatting instances of 
[java.util.Date](https://docs.oracle.com/javase/8/docs/api/java/util/Date.html) 
but it lacks features such as nanosecond precision, and can be subject to 
subtle conversion issues with time zone offsets.
   
   `DateTimeFormatter` provides parsing and formatting for newer `java.time` 
classes, which provide greater precision than other types like `java.sql.Time` 
and `java.sql.Timestamp`.
   
   Pattern characters for `DateTimeFormatter` are largely similar to 
`SimpleDateFormat`, with several additional characters supporting nanoseconds 
and optional boundaries. Some formatting patterns are more strict, which could 
lead to parsing failures on some existing custom formats. This is a primary 
reason for including these changes as part of the NiFi 2.0 release version.
   
   Functional changes include removing date and time handling methods from the 
`DataTypeUtils` class and adding new `FieldConverter` implementations in the 
same `nifi-record` module. The new `StandardFieldConverterRegistry` provides an 
abstraction for retrieving registered instances of `FieldConverter` based on 
the expected output field type. The `FieldConverter` interface approach 
provides a more maintainable and focused approach to object conversion while 
the `FieldConverterRegistry` provides a similar level of reference convenience. 
Instances of `DateTimeFormatter` are thread-safe, unlike `SimpleDateFormat`, 
which enables instance caching inside the `StandardFieldConverterRegistry`.
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [X] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [X] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [X] Pull Request based on current revision of the `main` branch
   - [X] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [X] Build completed using `mvn clean install -P contrib-check`
 - [X] JDK 21
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
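
   The precision difference described above can be seen with a small, 
self-contained sketch (plain JDK classes only, not code taken from the pull 
request itself): `DateTimeFormatter` renders and parses nanosecond fractions 
that `SimpleDateFormat`, bound to millisecond-based `java.util.Date`, cannot 
express.

   ```java
   import java.time.LocalDateTime;
   import java.time.format.DateTimeFormatter;

   public class NanoFormatDemo {
       public static void main(String[] args) {
           // Nine 'S' pattern letters request a nine-digit (nanosecond) fraction,
           // which SimpleDateFormat cannot produce or parse.
           DateTimeFormatter formatter =
                   DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSSSSSSSS");

           LocalDateTime dateTime = LocalDateTime.of(2024, 1, 15, 12, 30, 45, 123456789);
           String formatted = dateTime.format(formatter);
           System.out.println(formatted); // 2024-01-15 12:30:45.123456789

           // Parsing is symmetric; DateTimeFormatter instances are immutable and
           // thread-safe, so a single instance can be cached and shared, unlike
           // SimpleDateFormat, which must not be shared across threads.
           LocalDateTime parsed = LocalDateTime.parse(formatted, formatter);
           System.out.println(parsed.getNano()); // 123456789
       }
   }
   ```

   That thread-safety is what makes formatter instance caching possible, as noted 
for the `StandardFieldConverterRegistry` above.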
   





Re: [PR] NIFI-12593 - ValidateCSV - get all constraint violations for an invalid line [nifi]

2024-01-15 Thread via GitHub


dan-s1 commented on code in PR #8229:
URL: https://github.com/apache/nifi/pull/8229#discussion_r1452651455


##
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateCsv.java:
##
@@ -174,6 +175,19 @@ public class ValidateCsv extends AbstractProcessor {
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor GET_ALL_VIOLATIONS = new 
PropertyDescriptor.Builder()
+.name("validate-csv-violations")
+.displayName("Get all violations")
+.description("If true, the validation.error.message attribute 
would contain the list of all the violations"
++ " for the first invalid line. Note that setting this 
property to true would slightly decrease"
++ " the performances as all columns would be validated. If 
false, a line is invalid as soon as a"
++ " column is found violating the specified constraint and 
only this violation for the first invalid"
++ " line will be indicated in the validation.error.message 
attribute.")
+.required(true)
+.allowableValues("true", "false")
+.defaultValue("false")
+.build();

Review Comment:
   @pvillard31 I am fine with include also. 






Re: [PR] NIFI-12593 - ValidateCSV - get all constraint violations for an invalid line [nifi]

2024-01-15 Thread via GitHub


exceptionfactory commented on code in PR #8229:
URL: https://github.com/apache/nifi/pull/8229#discussion_r1452650933


##
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateCsv.java:
##
@@ -174,6 +175,19 @@ public class ValidateCsv extends AbstractProcessor {
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor GET_ALL_VIOLATIONS = new 
PropertyDescriptor.Builder()
+.name("validate-csv-violations")
+.displayName("Get all violations")
+.description("If true, the validation.error.message attribute 
would contain the list of all the violations"
++ " for the first invalid line. Note that setting this 
property to true would slightly decrease"
++ " the performances as all columns would be validated. If 
false, a line is invalid as soon as a"
++ " column is found violating the specified constraint and 
only this violation for the first invalid"
++ " line will be indicated in the validation.error.message 
attribute.")
+.required(true)
+.allowableValues("true", "false")
+.defaultValue("false")
+.build();

Review Comment:
   `Include All Violations` sounds sufficient, or `Report All Errors` if you 
prefer.







Re: [PR] NIFI-12593 - ValidateCSV - get all constraint violations for an invalid line [nifi]

2024-01-15 Thread via GitHub


dan-s1 commented on code in PR #8229:
URL: https://github.com/apache/nifi/pull/8229#discussion_r1452648947


##
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateCsv.java:
##
@@ -619,18 +632,83 @@ public NifiCsvListReader(Reader reader, CsvPreference 
preferences) {
 super(reader, preferences);
 }
 
-@Override
-public List<Object> read(CellProcessor... processors) throws IOException {
+public List<Object> read(boolean getAllViolations, CellProcessor... processors) throws IOException {
 if( processors == null ) {
 throw new NullPointerException("Processors should not be null");
 }
 if( readRow() ) {
-super.executeProcessors(new ArrayList<Object>(getColumns().size()), processors);
+executeProcessors(new ArrayList<Object>(getColumns().size()), processors, getAllViolations);
 return new ArrayList<Object>(getColumns());
 }
 return null; // EOF
 }
 
+protected List<Object> executeProcessors(List<Object> processedColumns, CellProcessor[] processors, boolean getAllViolations) {
+this.executeCellProcessors(processedColumns, getColumns(), processors, getLineNumber(), getRowNumber(), getAllViolations);
+return processedColumns;
+}
+
+private void executeCellProcessors(final List<Object> destination, final List<?> source,
+final CellProcessor[] processors, final int lineNo, final int rowNo, boolean getAllViolations) {
+
+if( destination == null ) {
+throw new NullPointerException("destination should not be null");
+} else if( source == null ) {
+throw new NullPointerException("source should not be null");
+} else if( processors == null ) {
+throw new NullPointerException("processors should not be null");
+}

Review Comment:
   I understand, but it seems redundant as I pointed out. Even if it were 
necessary to throw an exception, I believe `IllegalArgumentException` would be 
more appropriate.









Re: [PR] NIFI-12593 - ValidateCSV - get all constraint violations for an invalid line [nifi]

2024-01-15 Thread via GitHub


pvillard31 commented on code in PR #8229:
URL: https://github.com/apache/nifi/pull/8229#discussion_r1452639503


##
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateCsv.java:
##
@@ -174,6 +175,19 @@ public class ValidateCsv extends AbstractProcessor {
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor GET_ALL_VIOLATIONS = new 
PropertyDescriptor.Builder()
+.name("validate-csv-violations")
+.displayName("Get all violations")
+.description("If true, the validation.error.message attribute 
would contain the list of all the violations"
++ " for the first invalid line. Note that setting this 
property to true would slightly decrease"
++ " the performances as all columns would be validated. If 
false, a line is invalid as soon as a"
++ " column is found violating the specified constraint and 
only this violation for the first invalid"
++ " line will be indicated in the validation.error.message 
attribute.")
+.required(true)
+.allowableValues("true", "false")
+.defaultValue("false")
+.build();

Review Comment:
   I like the ``include`` suggestion. Using a strategy seems a bit overkill to 
me. If there is a consensus in favor of using a strategy, I'll make the change.
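The two reporting modes being discussed can be sketched as below. This is a simplified illustration of the fail-fast versus collect-all behavior, not the actual ValidateCsv implementation (the names and the single shared predicate are assumptions for brevity):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class ViolationModes {
    // Validate each column; stop at the first violation unless includeAll is set.
    static List<String> validate(List<String> columns, Predicate<String> check, boolean includeAll) {
        List<String> violations = new ArrayList<>();
        for (int i = 0; i < columns.size(); i++) {
            if (!check.test(columns.get(i))) {
                violations.add("column " + (i + 1) + " is invalid");
                if (!includeAll) {
                    break; // fail-fast: report only the first violation
                }
            }
        }
        return violations;
    }

    public static void main(String[] args) {
        List<String> row = List.of("", "ok", "");
        Predicate<String> nonEmpty = s -> !s.isEmpty();
        System.out.println(validate(row, nonEmpty, false).size()); // prints 1
        System.out.println(validate(row, nonEmpty, true).size());  // prints 2
    }
}
```

This also shows why the collect-all mode is slightly slower: every column is checked even after the row is already known to be invalid.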






Re: [PR] NIFI-12593 - ValidateCSV - get all constraint violations for an invalid line [nifi]

2024-01-15 Thread via GitHub


exceptionfactory commented on code in PR #8229:
URL: https://github.com/apache/nifi/pull/8229#discussion_r1452630053


##
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateCsv.java:
##
@@ -174,6 +175,19 @@ public class ValidateCsv extends AbstractProcessor {
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor GET_ALL_VIOLATIONS = new 
PropertyDescriptor.Builder()
+.name("validate-csv-violations")
+.displayName("Get all violations")
+.description("If true, the validation.error.message attribute 
would contain the list of all the violations"
++ " for the first invalid line. Note that setting this 
property to true would slightly decrease"
++ " the performances as all columns would be validated. If 
false, a line is invalid as soon as a"
++ " column is found violating the specified constraint and 
only this violation for the first invalid"
++ " line will be indicated in the validation.error.message 
attribute.")
+.required(true)
+.allowableValues("true", "false")
+.defaultValue("false")
+.build();

Review Comment:
   I agree that `Get` is not optimal, but `All Violations` seems unclear. What 
about `Include All Violations`? Perhaps for better clarity, even though there 
are only two options, what about `Error Reporting Strategy` that is either 
`ALL` or `FIRST`?






Re: [PR] NIFI-12593 - ValidateCSV - get all constraint violations for an invalid line [nifi]

2024-01-15 Thread via GitHub


pvillard31 commented on code in PR #8229:
URL: https://github.com/apache/nifi/pull/8229#discussion_r1452629738


##
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateCsv.java:
##
@@ -174,6 +175,19 @@ public class ValidateCsv extends AbstractProcessor {
 .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor GET_ALL_VIOLATIONS = new 
PropertyDescriptor.Builder()
+.name("validate-csv-violations")
+.displayName("Get all violations")
+.description("If true, the validation.error.message attribute 
would contain the list of all the violations"
++ " for the first invalid line. Note that setting this 
property to true would slightly decrease"
++ " the performances as all columns would be validated. If 
false, a line is invalid as soon as a"
++ " column is found violating the specified constraint and 
only this violation for the first invalid"
++ " line will be indicated in the validation.error.message 
attribute.")
+.required(true)
+.allowableValues("true", "false")
+.defaultValue("false")
+.build();

Review Comment:
   I've not looked at all of the codebase, but a True/False property's description usually starts with a verb. I'm fine using another word like "check", but I do think a verb should be used.






Re: [PR] NIFI-12593 - ValidateCSV - get all constraint violations for an invalid line [nifi]

2024-01-15 Thread via GitHub


exceptionfactory commented on code in PR #8229:
URL: https://github.com/apache/nifi/pull/8229#discussion_r1452629026


##
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateCsv.java:
##
@@ -619,18 +632,83 @@ public NifiCsvListReader(Reader reader, CsvPreference 
preferences) {
 super(reader, preferences);
 }
 
-@Override
-public List<Object> read(CellProcessor... processors) throws IOException {
+public List<Object> read(boolean getAllViolations, CellProcessor... processors) throws IOException {
 if( processors == null ) {
 throw new NullPointerException("Processors should not be null");
 }
 if( readRow() ) {
-super.executeProcessors(new ArrayList<Object>(getColumns().size()), processors);
+executeProcessors(new ArrayList<Object>(getColumns().size()), processors, getAllViolations);
 return new ArrayList<Object>(getColumns());

Review Comment:
   IntelliJ flags this as unnecessary, so it seems like a worthwhile cleanup opportunity.






Re: [PR] NIFI-12593 - ValidateCSV - get all constraint violations for an invalid line [nifi]

2024-01-15 Thread via GitHub


pvillard31 commented on code in PR #8229:
URL: https://github.com/apache/nifi/pull/8229#discussion_r1452628163


##
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateCsv.java:
##
@@ -619,18 +632,83 @@ public NifiCsvListReader(Reader reader, CsvPreference 
preferences) {
 super(reader, preferences);
 }
 
-@Override
-public List<Object> read(CellProcessor... processors) throws IOException {
+public List<Object> read(boolean getAllViolations, CellProcessor... processors) throws IOException {
 if( processors == null ) {
 throw new NullPointerException("Processors should not be null");
 }
 if( readRow() ) {
-super.executeProcessors(new ArrayList<Object>(getColumns().size()), processors);
+executeProcessors(new ArrayList<Object>(getColumns().size()), processors, getAllViolations);
 return new ArrayList<Object>(getColumns());
 }
 return null; // EOF
 }
 
+protected List<Object> executeProcessors(List<Object> processedColumns, CellProcessor[] processors, boolean getAllViolations) {
+this.executeCellProcessors(processedColumns, getColumns(), processors, getLineNumber(), getRowNumber(), getAllViolations);
+return processedColumns;
+}
+
+private void executeCellProcessors(final List<Object> destination, final List<?> source,
+final CellProcessor[] processors, final int lineNo, final int rowNo, boolean getAllViolations) {
+
+if( destination == null ) {
+throw new NullPointerException("destination should not be null");
+} else if( source == null ) {
+throw new NullPointerException("source should not be null");
+} else if( processors == null ) {
+throw new NullPointerException("processors should not be null");
+}

Review Comment:
   This code is copied from the Utils class of the underlying library we're 
using and I didn't want to change its code. That's the reason it is this way. I 
don't mind changing it though.



##
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateCsv.java:
##
@@ -619,18 +632,83 @@ public NifiCsvListReader(Reader reader, CsvPreference 
preferences) {
 super(reader, preferences);
 }
 
-@Override
-public List<Object> read(CellProcessor... processors) throws IOException {
+public List<Object> read(boolean getAllViolations, CellProcessor... processors) throws IOException {
 if( processors == null ) {
 throw new NullPointerException("Processors should not be null");
 }
 if( readRow() ) {
-super.executeProcessors(new ArrayList<Object>(getColumns().size()), processors);
+executeProcessors(new ArrayList<Object>(getColumns().size()), processors, getAllViolations);
 return new ArrayList<Object>(getColumns());
 }
 return null; // EOF
 }
 
+protected List<Object> executeProcessors(List<Object> processedColumns, CellProcessor[] processors, boolean getAllViolations) {
+this.executeCellProcessors(processedColumns, getColumns(), processors, getLineNumber(), getRowNumber(), getAllViolations);
+return processedColumns;
+}
+
+private void executeCellProcessors(final List<Object> destination, final List<?> source,
+final CellProcessor[] processors, final int lineNo, final int rowNo, boolean getAllViolations) {
+
+if( destination == null ) {
+throw new NullPointerException("destination should not be null");
+} else if( source == null ) {
+throw new NullPointerException("source should not be null");
+} else if( processors == null ) {
+throw new NullPointerException("processors should not be null");
+}
+
+// the context used when cell processors report exceptions
+final CsvContext context = new CsvContext(lineNo, rowNo, 1);
+context.setRowSource(new ArrayList<Object>(source));

Review Comment:
   While I agree specifying the type is not required, doesn't it make the code more readable?






Re: [PR] NIFI-12593 - ValidateCSV - get all constraint violations for an invalid line [nifi]

2024-01-15 Thread via GitHub


pvillard31 commented on code in PR #8229:
URL: https://github.com/apache/nifi/pull/8229#discussion_r1452627532


##
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateCsv.java:
##
@@ -619,18 +632,83 @@ public NifiCsvListReader(Reader reader, CsvPreference 
preferences) {
 super(reader, preferences);
 }
 
-@Override
-public List<Object> read(CellProcessor... processors) throws IOException {
+public List<Object> read(boolean getAllViolations, CellProcessor... processors) throws IOException {
 if( processors == null ) {
 throw new NullPointerException("Processors should not be null");
 }
 if( readRow() ) {
-super.executeProcessors(new ArrayList<Object>(getColumns().size()), processors);
+executeProcessors(new ArrayList<Object>(getColumns().size()), processors, getAllViolations);
 return new ArrayList<Object>(getColumns());

Review Comment:
   While I agree specifying the type is not required, doesn't it make the code more readable?



##
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateCsv.java:
##
@@ -619,18 +632,83 @@ public NifiCsvListReader(Reader reader, CsvPreference 
preferences) {
 super(reader, preferences);
 }
 
-@Override
-public List<Object> read(CellProcessor... processors) throws IOException {
+public List<Object> read(boolean getAllViolations, CellProcessor... processors) throws IOException {
 if( processors == null ) {
 throw new NullPointerException("Processors should not be null");
 }
 if( readRow() ) {
-super.executeProcessors(new ArrayList<Object>(getColumns().size()), processors);
+executeProcessors(new ArrayList<Object>(getColumns().size()), processors, getAllViolations);
 return new ArrayList<Object>(getColumns());
 }
 return null; // EOF
 }
 
+protected List<Object> executeProcessors(List<Object> processedColumns, CellProcessor[] processors, boolean getAllViolations) {
+this.executeCellProcessors(processedColumns, getColumns(), processors, getLineNumber(), getRowNumber(), getAllViolations);
+return processedColumns;
+}
+
+private void executeCellProcessors(final List<Object> destination, final List<?> source,
+final CellProcessor[] processors, final int lineNo, final int rowNo, boolean getAllViolations) {
+
+if( destination == null ) {
+throw new NullPointerException("destination should not be null");
+} else if( source == null ) {
+throw new NullPointerException("source should not be null");
+} else if( processors == null ) {
+throw new NullPointerException("processors should not be null");
+}
+
+// the context used when cell processors report exceptions
+final CsvContext context = new CsvContext(lineNo, rowNo, 1);
+context.setRowSource(new ArrayList<Object>(source));
+
+if( source.size() != processors.length ) {
+throw new SuperCsvException(String.format(
+"The number of columns to be processed (%d) must match the number of CellProcessors (%d): check that the number"
++ " of CellProcessors you have defined matches the expected number of columns being read/written",
+source.size(), processors.length), context);
+}
+
+destination.clear();
+
+List<String> errors = new ArrayList<String>();

Review Comment:
   While I agree specifying the type is not required, doesn't it make the code more readable?
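For reference, the two forms being debated are equivalent at runtime. A minimal sketch (plain Java, unrelated to the NiFi code itself) of the explicit type argument versus the diamond operator that IDEs suggest:

```java
import java.util.ArrayList;
import java.util.List;

public class DiamondDemo {
    public static void main(String[] args) {
        // Since Java 7 the compiler infers the type argument from the
        // declared type, which is why the explicit form is flagged as redundant.
        List<String> explicit = new ArrayList<String>(); // explicit type argument
        List<String> inferred = new ArrayList<>();       // diamond: type inferred
        explicit.add("a");
        inferred.add("a");
        System.out.println(explicit.equals(inferred)); // prints true
    }
}
```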






[jira] [Updated] (NIFI-8278) Add Credentials Type property to ADLSCredentialsControllerService

2024-01-15 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-8278:
---
Fix Version/s: 2.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Add Credentials Type property to ADLSCredentialsControllerService
> -
>
> Key: NIFI-8278
> URL: https://issues.apache.org/jira/browse/NIFI-8278
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Peter Turcsanyi
>Assignee: Peter Turcsanyi
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {{ADLSCredentialsControllerService}} supports different authentication modes: 
> Account Key, SAS Token, Managed Identity, Service Principal.
> All these modes have their own properties and the configuration is not really 
> straightforward.
> Add new {{Authentication Type}} property with the enumerated values of 
> authentication types. Use {{dependsOn()}} for the other properties in order 
> to show only the relevant ones. Also support the current logic with default 
> type AUTO for backward compatibility.
> Update:
> Add the feature only in NiFi 2.0. In that case, no AUTO option is needed for 
> supporting the legacy configuration. The old-style config can be migrated 
> using the new property migration feature (NIFI-12139).
> Use {{Credentials Type}} for the property name (common property with the 
> existing {{AzureStorageCredentialsControllerService_v12}}).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] NIFI-8278 Added Credentials Type property to ADLSCredentialsControlle… [nifi]

2024-01-15 Thread via GitHub


exceptionfactory closed pull request #8205: NIFI-8278 Added Credentials Type 
property to ADLSCredentialsControlle…
URL: https://github.com/apache/nifi/pull/8205





[jira] [Commented] (NIFI-8278) Add Credentials Type property to ADLSCredentialsControllerService

2024-01-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17806922#comment-17806922
 ] 

ASF subversion and git services commented on NIFI-8278:
---

Commit e8783f33253b3897636d10b6d68af65bd90271c9 in nifi's branch 
refs/heads/main from Peter Turcsanyi
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=e8783f3325 ]

NIFI-8278 Added Credentials Type to ADLSCredentialsControllerService

Used migrateProperties() for migrating old flows to the new property structure.
Moved common properties to AzureStorageUtils and also updated/consolidated some 
property descriptions

This closes #8205

Signed-off-by: David Handermann 


> Add Credentials Type property to ADLSCredentialsControllerService
> -
>
> Key: NIFI-8278
> URL: https://issues.apache.org/jira/browse/NIFI-8278
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Peter Turcsanyi
>Assignee: Peter Turcsanyi
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {{ADLSCredentialsControllerService}} supports different authentication modes: 
> Account Key, SAS Token, Managed Identity, Service Principal.
> All these modes have their own properties and the configuration is not really 
> straightforward.
> Add new {{Authentication Type}} property with the enumerated values of 
> authentication types. Use {{dependsOn()}} for the other properties in order 
> to show only the relevant ones. Also support the current logic with default 
> type AUTO for backward compatibility.
> Update:
> Add the feature only in NiFi 2.0. In that case, no AUTO option is needed for 
> supporting the legacy configuration. The old-style config can be migrated 
> using the new property migration feature (NIFI-12139).
> Use {{Credentials Type}} for the property name (common property with the 
> existing {{AzureStorageCredentialsControllerService_v12}}).





[PR] NIFI-12612 In asn1 bundle handle OBJECT IDENTIFIER type as string. [nifi]

2024-01-15 Thread via GitHub


tpalfy opened a new pull request, #8247:
URL: https://github.com/apache/nifi/pull/8247

   # Summary
   
   [NIFI-12612](https://issues.apache.org/jira/browse/NIFI-12612)
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [ ] Pull Request based on current revision of the `main` branch
   - [ ] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [ ] Build completed using `mvn clean install -P contrib-check`
 - [ ] JDK 21
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   





[jira] [Updated] (NIFI-12610) Typo for 'default_value' in Python developer documentation

2024-01-15 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-12610:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Typo for 'default_value' in Python developer documentation
> --
>
> Key: NIFI-12610
> URL: https://issues.apache.org/jira/browse/NIFI-12610
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation  Website
>Affects Versions: 2.0.0-M1
>Reporter: Joe Gresock
>Assignee: Joe Gresock
>Priority: Trivial
> Fix For: 2.0.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In the Python developer guide, the section on creating a property descriptor 
> has a typo: defaultValue should actually be default_value.  The result of 
> pasting this example in a Python processor is that the property descriptors 
> will not load.
> {code:java}
> numspaces = PropertyDescriptor(name="Number of Spaces",
> description="Number of spaces to use for pretty-printing",
> validators=[StandardValidators.POSITIVE_INTEGER_VALIDATOR],
> defaultValue="4",
> required=True)
> {code}





[jira] [Resolved] (NIFI-12608) Processor Bundle Maven Archetype broken build

2024-01-15 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann resolved NIFI-12608.
-
Fix Version/s: 2.0.0
 Assignee: Shane O'Neill
   Resolution: Fixed

> Processor Bundle Maven Archetype broken build
> -
>
> Key: NIFI-12608
> URL: https://issues.apache.org/jira/browse/NIFI-12608
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 2.0.0-M1
>Reporter: Shane O'Neill
>Assignee: Shane O'Neill
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I am seeing...
> {noformat}
> mvn archetype:generate -DarchetypeGroupId=org.apache.nifi 
> -DarchetypeArtifactId=nifi-processor-bundle-archetype{noformat}
> fail to produce a buildable project for the 
> {{{}nifi-processor-bundle-archetype{}}}. The error is:
> {code:java}
> java.lang.ClassNotFoundException: 
> org.apache.nifi.processor.util.StandardValidators
> at java.net.URLClassLoader.findClass (URLClassLoader.java:445)
> at java.lang.ClassLoader.loadClass (ClassLoader.java:593)
> at java.lang.ClassLoader.loadClass (ClassLoader.java:526)
> at org.example.processors.example.MyProcessor.<clinit> (MyProcessor.java:53)
> at jdk.internal.misc.Unsafe.ensureClassInitialized0 (Native Method)
> at jdk.internal.misc.Unsafe.ensureClassInitialized (Unsafe.java:1160)
> at jdk.internal.reflect.MethodHandleAccessorFactory.ensureClassInitialized 
> (MethodHandleAccessorFactory.java:300)
> at jdk.internal.reflect.MethodHandleAccessorFactory.newConstructorAccessor 
> (MethodHandleAccessorFactory.java:103)
> at jdk.internal.reflect.ReflectionFactory.newConstructorAccessor 
> (ReflectionFactory.java:200)
> at java.lang.reflect.Constructor.acquireConstructorAccessor 
> (Constructor.java:549)
> at java.lang.reflect.Constructor.newInstanceWithCaller (Constructor.java:499)
> at java.lang.reflect.Constructor.newInstance (Constructor.java:486) {code}
> Adding {{nifi-standard-services-api-nar}} to the NAR packaging module fixes 
> it.
> To recreate the issue, generate a project and run mvn package.
> {code:java}
> mvn archetype:generate \
> -DarchetypeGroupId=org.apache.nifi \
> -DarchetypeArtifactId=nifi-processor-bundle-archetype \
> -DnifiVersion=2.0.0-M1 \
> -DartifactBaseName=example \
> -DgroupId=org.example \
> -DartifactId=myproject \
> -Dversion=0.0.1 \
> -Dpackage=org.example.processors.example {code}
> {code:java}
> cd myproject
> mvn package {code}





Re: [PR] NIFI-12610: Correcting default_value example in Python Developer guide [nifi]

2024-01-15 Thread via GitHub


exceptionfactory closed pull request #8245: NIFI-12610: Correcting 
default_value example in Python Developer guide
URL: https://github.com/apache/nifi/pull/8245





[jira] [Commented] (NIFI-12608) Processor Bundle Maven Archetype broken build

2024-01-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17806920#comment-17806920
 ] 

ASF subversion and git services commented on NIFI-12608:


Commit 9d947741d2a852e42be58ac7bb951ad38399d1e3 in nifi's branch 
refs/heads/main from zeevo
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=9d947741d2 ]

NIFI-12608 Add nifi-standard-services-api-nar to Processor Archetype

This closes #8239

Signed-off-by: David Handermann 


> Processor Bundle Maven Archetype broken build
> -
>
> Key: NIFI-12608
> URL: https://issues.apache.org/jira/browse/NIFI-12608
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 2.0.0-M1
>Reporter: Shane O'Neill
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I am seeing...
> {noformat}
> mvn archetype:generate -DarchetypeGroupId=org.apache.nifi 
> -DarchetypeArtifactId=nifi-processor-bundle-archetype{noformat}
> fail to produce a buildable project for the 
> {{{}nifi-processor-bundle-archetype{}}}. The error is:
> {code:java}
> java.lang.ClassNotFoundException: 
> org.apache.nifi.processor.util.StandardValidators
> at java.net.URLClassLoader.findClass (URLClassLoader.java:445)
> at java.lang.ClassLoader.loadClass (ClassLoader.java:593)
> at java.lang.ClassLoader.loadClass (ClassLoader.java:526)
> at org.example.processors.example.MyProcessor.<clinit> (MyProcessor.java:53)
> at jdk.internal.misc.Unsafe.ensureClassInitialized0 (Native Method)
> at jdk.internal.misc.Unsafe.ensureClassInitialized (Unsafe.java:1160)
> at jdk.internal.reflect.MethodHandleAccessorFactory.ensureClassInitialized 
> (MethodHandleAccessorFactory.java:300)
> at jdk.internal.reflect.MethodHandleAccessorFactory.newConstructorAccessor 
> (MethodHandleAccessorFactory.java:103)
> at jdk.internal.reflect.ReflectionFactory.newConstructorAccessor 
> (ReflectionFactory.java:200)
> at java.lang.reflect.Constructor.acquireConstructorAccessor 
> (Constructor.java:549)
> at java.lang.reflect.Constructor.newInstanceWithCaller (Constructor.java:499)
> at java.lang.reflect.Constructor.newInstance (Constructor.java:486) {code}
> Adding {{nifi-standard-services-api-nar}} to the NAR packaging module fixes 
> it.
> To recreate the issue, generate a project and run mvn package.
> {code:java}
> mvn archetype:generate \
> -DarchetypeGroupId=org.apache.nifi \
> -DarchetypeArtifactId=nifi-processor-bundle-archetype \
> -DnifiVersion=2.0.0-M1 \
> -DartifactBaseName=example \
> -DgroupId=org.example \
> -DartifactId=myproject \
> -Dversion=0.0.1 \
> -Dpackage=org.example.processors.example {code}
> {code:java}
> cd myproject
> mvn package {code}





[jira] [Commented] (NIFI-12610) Typo for 'default_value' in Python developer documentation

2024-01-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17806919#comment-17806919
 ] 

ASF subversion and git services commented on NIFI-12610:


Commit 1e27cb907ae24125f74d6d6855063ba1f5377c43 in nifi's branch 
refs/heads/main from Joe Gresock
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=1e27cb907a ]

NIFI-12610: Corrected default_value example in Python Developer guide

This closes #8245

Signed-off-by: David Handermann 


> Typo for 'default_value' in Python developer documentation
> --
>
> Key: NIFI-12610
> URL: https://issues.apache.org/jira/browse/NIFI-12610
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation  Website
>Affects Versions: 2.0.0-M1
>Reporter: Joe Gresock
>Assignee: Joe Gresock
>Priority: Trivial
> Fix For: 2.0.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In the Python developer guide, the section on creating a property descriptor 
> has a typo: defaultValue should actually be default_value.  The result of 
> pasting this example in a Python processor is that the property descriptors 
> will not load.
> {code:java}
> numspaces = PropertyDescriptor(name="Number of Spaces",
> description="Number of spaces to use for pretty-printing",
> validators=[StandardValidators.POSITIVE_INTEGER_VALIDATOR],
> defaultValue="4",
> required=True)
> {code}





Re: [PR] NIFI-12608 Add nifi-standard-services-api-nar to mvn archetype [nifi]

2024-01-15 Thread via GitHub


exceptionfactory closed pull request #8239: NIFI-12608 Add 
nifi-standard-services-api-nar to mvn archetype
URL: https://github.com/apache/nifi/pull/8239





[jira] [Assigned] (NIFI-12612) Fix JASN1Reader to handle OBJECT IDENTIFIER

2024-01-15 Thread Tamas Palfy (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Palfy reassigned NIFI-12612:
--

Assignee: Tamas Palfy

> Fix JASN1Reader to handle OBJECT IDENTIFIER
> ---
>
> Key: NIFI-12612
> URL: https://issues.apache.org/jira/browse/NIFI-12612
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Tamas Palfy
>Assignee: Tamas Palfy
>Priority: Major
>
> OBJECT IDENTIFIER is a special asn1 type that is not recognized by 
> JASN1Reader. Instead it tries to handle it as a Record as a fallback.
> The fix would be to simply treat (convert really) OBJECT IDENTIFIER types as 
> strings.





[jira] [Created] (NIFI-12612) Fix JASN1Reader to handle OBJECT IDENTIFIER

2024-01-15 Thread Tamas Palfy (Jira)
Tamas Palfy created NIFI-12612:
--

 Summary: Fix JASN1Reader to handle OBJECT IDENTIFIER
 Key: NIFI-12612
 URL: https://issues.apache.org/jira/browse/NIFI-12612
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Tamas Palfy


OBJECT IDENTIFIER is a special asn1 type that is not recognized by JASN1Reader. 
Instead it tries to handle it as a Record as a fallback.
The fix would be to simply treat (convert really) OBJECT IDENTIFIER types as 
strings.
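For reference, converting an OBJECT IDENTIFIER to a string usually means rendering its BER content octets in the familiar dotted notation. A minimal sketch of that generic BER logic follows; this is illustrative only, not the JASN1Reader change itself (the jASN1 library has its own representation), and the first-octet split assumes a well-formed encoding where the first two arcs are packed as 40 * X + Y:

```java
import java.util.StringJoiner;

public class OidDecoder {
    static String decode(byte[] content) {
        StringJoiner dotted = new StringJoiner(".");
        long value = 0;
        boolean first = true;
        for (byte raw : content) {
            int b = raw & 0xFF;
            // sub-identifiers are base-128, high bit set on all but the last octet
            value = (value << 7) | (b & 0x7F);
            if ((b & 0x80) == 0) { // high bit clear: sub-identifier complete
                if (first) {
                    // first two arcs are packed into one value as 40 * X + Y,
                    // where X is capped at 2 by the ASN.1 rules
                    long x = Math.min(value / 40, 2);
                    dotted.add(Long.toString(x));
                    dotted.add(Long.toString(value - 40 * x));
                    first = false;
                } else {
                    dotted.add(Long.toString(value));
                }
                value = 0;
            }
        }
        return dotted.toString();
    }

    public static void main(String[] args) {
        // content octets of 1.2.840.113549 (the RSA arc)
        byte[] content = {0x2A, (byte) 0x86, 0x48, (byte) 0x86, (byte) 0xF7, 0x0D};
        System.out.println(decode(content)); // prints 1.2.840.113549
    }
}
```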





Re: [PR] MINIFICPP-2262 Add processor to push logs to Grafana Loki through gRPC [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


lordgamez commented on code in PR #1698:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1698#discussion_r1452556325


##
extensions/grafana-loki/protos/grafana-loki-push.proto:
##


Review Comment:
   Added reference in 407abbc65a8900dc9937dc0587c52d68693335ff






Re: [PR] MINIFICPP-2262 Add processor to push logs to Grafana Loki through gRPC [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


lordgamez commented on code in PR #1698:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1698#discussion_r1452556096


##
.dockerignore:
##
@@ -57,6 +57,11 @@ extensions/expression-language/Scanner.cpp
 extensions/expression-language/location.hh
 extensions/expression-language/position.hh
 extensions/expression-language/stack.h
+extensions/grafana-loki/protobuf-generated/grafana-loki-push.grpc.pb.h
+extensions/grafana-loki/protobuf-generated/grafana-loki-push.grpc.pb.cc
+extensions/grafana-loki/protobuf-generated/grafana-loki-push.pb.h
+extensions/grafana-loki/protobuf-generated/grafana-loki-push.pb.cc

Review Comment:
   Good idea, moved them in 407abbc65a8900dc9937dc0587c52d68693335ff






Re: [PR] MINIFICPP-2262 Add processor to push logs to Grafana Loki through gRPC [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


lordgamez commented on code in PR #1698:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1698#discussion_r1452555861


##
extensions/grafana-loki/CMakeLists.txt:
##
@@ -16,20 +16,46 @@
 # specific language governing permissions and limitations
 # under the License.
 #
-
 if (NOT (ENABLE_ALL OR ENABLE_GRAFANA_LOKI))
 return()
 endif()
 
 include(${CMAKE_SOURCE_DIR}/extensions/ExtensionHeader.txt)
 
-file(GLOB SOURCES  "*.cpp")
+if (ENABLE_GRPC)
+include(Grpc)
 
-add_library(minifi-grafana-loki SHARED ${SOURCES})
+file(MAKE_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated)
+
+add_custom_command(
+OUTPUT  
${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.grpc.pb.cc 
${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.grpc.pb.h 
${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.pb.h 
${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.pb.cc
+COMMAND ${PROTOBUF_COMPILER} 
--plugin=protoc-gen-grpc=${GRPC_CPP_PLUGIN} 
-I=${CMAKE_CURRENT_SOURCE_DIR}/protos/ -I=${protobuf_SOURCE_DIR}/src 
--grpc_out=${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/ 
--cpp_out=${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/ 
${CMAKE_CURRENT_SOURCE_DIR}/protos/grafana-loki-push.proto
+DEPENDS protobuf::protoc grpc_cpp_plugin)
 
+add_custom_target(grafana-loki-protos ALL DEPENDS 
${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.grpc.pb.cc 
${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.grpc.pb.h 
${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.pb.h 
${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.pb.cc)
+
+file(GLOB SOURCES "*.cpp")
+list(APPEND SOURCES
+
${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.grpc.pb.cc
+${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.pb.cc
+)
+else()
+set(SOURCES
+${CMAKE_CURRENT_SOURCE_DIR}/PushGrafanaLoki.cpp
+${CMAKE_CURRENT_SOURCE_DIR}/PushGrafanaLokiREST.cpp
+)
+endif()
+
+add_library(minifi-grafana-loki SHARED ${SOURCES})
 target_include_directories(minifi-grafana-loki PRIVATE BEFORE 
"${CMAKE_SOURCE_DIR}/extensions/http-curl")
-target_link_libraries(minifi-grafana-loki ${LIBMINIFI})
-target_link_libraries(minifi-grafana-loki minifi-http-curl)
+target_link_libraries(minifi-grafana-loki ${LIBMINIFI} minifi-http-curl)
+add_dependencies(minifi-grafana-loki minifi-http-curl)
+
+if (ENABLE_GRPC)
+target_include_directories(minifi-grafana-loki PRIVATE BEFORE 
"${GRPC_INCLUDE_DIR}" "${PROTOBUF_INCLUDE_DIR}")
+target_link_libraries(minifi-grafana-loki grpc++ protobuf::libprotobuf)
+add_dependencies(minifi-grafana-loki grpc grpc++ protobuf::libprotobuf 
grafana-loki-protos)

Review Comment:
   The grafana-loki-protos target generates the protobuf C++ files, so the 
minifi-grafana-loki target should depend on it. You are right, though, that the 
other targets should not be included in the dependencies; updated in 
407abbc65a8900dc9937dc0587c52d68693335ff






Re: [PR] MINIFICPP-2262 Add processor to push logs to Grafana Loki through gRPC [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


lordgamez commented on code in PR #1698:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1698#discussion_r1452554318


##
cmake/MiNiFiOptions.cmake:
##
@@ -122,6 +122,7 @@ add_minifi_option(ENABLE_KUBERNETES "Enables the Kubernetes 
extensions." ON)
 add_minifi_option(ENABLE_TEST_PROCESSORS "Enables test processors" OFF)
 add_minifi_option(ENABLE_PROMETHEUS "Enables Prometheus support." ON)
 add_minifi_option(ENABLE_GRAFANA_LOKI "Enable Grafana Loki support" OFF)
+add_minifi_option(ENABLE_GRPC "Enable gRPC for Grafana Loki extension" ON)

Review Comment:
   Good point, renamed to `ENABLE_GRPC_FOR_LOKI` in 
407abbc65a8900dc9937dc0587c52d68693335ff






Re: [PR] MINIFICPP-2262 Add processor to push logs to Grafana Loki through gRPC [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


lordgamez commented on code in PR #1698:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1698#discussion_r1452482580


##
LICENSE:
##
@@ -2442,52 +2443,6 @@ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 
LIABILITY, OR TORT
 OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 
-This product bundles 'protobuf' within 'OpenCV' under a 3-Clause BSD license:

Review Comment:
   Yes, I disabled the build of OpenCV's dnn module in 
`cmake/BundledOpenCV.cmake`, which was the part that used the protobuf library; 
we don't use it in our OpenCV processors.






Re: [PR] MINIFICPP-2262 Add processor to push logs to Grafana Loki through gRPC [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


lordgamez commented on code in PR #1698:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1698#discussion_r1452480781


##
extensions/grafana-loki/protos/grafana-loki-push.proto:
##


Review Comment:
   This file defines the grpc service and the protobuf messages structures 
according to grafana's protocol definition in this file: 
https://github.com/grafana/loki/blob/main/pkg/push/push.proto
   
   Only the fields and attributes that are used are kept, removing the 
Go-specific attributes and imports. In the Grafana repository the containing 
directory holds an Apache license 
(https://github.com/grafana/loki/blob/main/pkg/push/LICENSE), so I think we 
should be okay with the license header. I can add a reference comment to the 
original proto file if that's okay.
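   
   For context, a trimmed sketch of roughly what such a stripped-down 
definition looks like; the service and message names follow Grafana's upstream 
push.proto as I recall them, so verify them against the linked file:
   
```proto
syntax = "proto3";

package logproto;

import "google/protobuf/timestamp.proto";

// Push service as in Grafana Loki's upstream push.proto, with the
// Go-specific options and unused fields removed.
service Pusher {
  rpc Push(PushRequest) returns (PushResponse) {}
}

message PushRequest {
  repeated StreamAdapter streams = 1;
}

message PushResponse {}

message StreamAdapter {
  string labels = 1;                 // e.g. {job="minifi"}
  repeated EntryAdapter entries = 2;
}

message EntryAdapter {
  google.protobuf.Timestamp timestamp = 1;
  string line = 2;
}
```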






[jira] [Updated] (MINIFICPP-2287) Site-to-site with large files: "Site2Site transaction xxx peer unknown respond code 14"

2024-01-15 Thread Marton Szasz (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-2287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Szasz updated MINIFICPP-2287:

Description: 
It looks like nifi may have extended the protocol, and minifi c++ didn't follow 
the development.

 

From Thomas on the nifi slack: 
[https://apachenifi.slack.com/archives/CDF1VC1UZ/p1705327811015419]

 
{quote}Running minifi c++ v0.15, I am getting errors when transferring large 
files (10gb) via site to site to a Nifi (v1.20) cluster. Per the logs,the 
transfer is on going for a while (warning logs, inputPortName has been running 
for x ms in \{connection ID}
then it looks like the transfer completes (info log, Site to Site transaction 
... set flow 1 flow records with total size xxx-yyy-zzz ) ALSO, the large file 
appears on the remote Nifi cluster
then it looks like the transfer failed (warning log, Site2Site transaction xxx 
peer unknown respond code 14)
then another error, (warning log , ProcessSession rollback for inputPortName 
executed )
the finally, (warning protocol transmission failed, yielding ( xxx-yyy-zzz )

This results in endless copies of the large files as presumably minifi retries 
the file despite successfully transferring the file.

The logs show that other smaller files continue to be transferred while the 
large files yield. (edited) 
{quote}

  was:
It looks like nifi may have extended the protocol, and minifi c++ didn't follow 
the development.

 

From Thomas on the nifi slack: 
[https://apachenifi.slack.com/archives/CDF1VC1UZ/p1705327811015419]

 
{quote}Running minifi c++ v0.15, I am getting errors when transferring large 
files (10gb) via site to site to a Nifi (v1.20) cluster. Per the logs,the 
transfer is on going for a while (warning logs, inputPortName has been running 
for x ms in \{connection ID}
then it looks like the transfer completes (info log, Site to Site transaction 
... set flow 1 flow records with total size xxx-yyy-zzz ) ALSO, the large file 
appears on the remote Nifi cluster
then it looks like the transfer failed (warning log, Site2Site transaction xxx 
peer unknown respond code 14)
then another error, (warning log , ProcessSession rollback for inputPortName 
executed )
the finally, (warning protocol transmission failed, yielding ( xxx-yyy-zzz 
)This results in endless copies of the large files as presumably minifi retries 
the file despite successfully transferring the file.The logs show that other 
smaller files continue to be transferred while the large files yield. (edited) 
{quote}


> Site-to-site with large files: "Site2Site transaction xxx peer unknown 
> respond code 14"
> ---
>
> Key: MINIFICPP-2287
> URL: https://issues.apache.org/jira/browse/MINIFICPP-2287
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Marton Szasz
>Priority: Major
>
> It looks like nifi may have extended the protocol, and minifi c++ didn't 
> follow the development.
>  
> From Thomas on the nifi slack: 
> [https://apachenifi.slack.com/archives/CDF1VC1UZ/p1705327811015419]
>  
> {quote}Running minifi c++ v0.15, I am getting errors when transferring large 
> files (10gb) via site to site to a Nifi (v1.20) cluster. Per the logs,the 
> transfer is on going for a while (warning logs, inputPortName has been 
> running for x ms in \{connection ID}
> then it looks like the transfer completes (info log, Site to Site transaction 
> ... set flow 1 flow records with total size xxx-yyy-zzz ) ALSO, the large 
> file appears on the remote Nifi cluster
> then it looks like the transfer failed (warning log, Site2Site transaction 
> xxx peer unknown respond code 14)
> then another error, (warning log , ProcessSession rollback for inputPortName 
> executed )
> the finally, (warning protocol transmission failed, yielding ( xxx-yyy-zzz )
> This results in endless copies of the large files as presumably minifi 
> retries the file despite successfully transferring the file.
> The logs show that other smaller files continue to be transferred while the 
> large files yield. (edited) 
> {quote}





[jira] [Updated] (NIFI-12400) Remaining items to migrate UI to currently supported/active framework

2024-01-15 Thread Matt Gilman (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-12400:
---
Description: 
The purpose of this Jira is to track all remaining items following the initial 
commit [1] for NIFI-11481. The description will be kept up to date with 
remaining features, tasks, and improvements. As each item is worked, a new 
sub-task Jira will be created and referenced in this description.
 * Support Parameters in Properties with Allowable Values (NIFI-12401)
 * Summary (NIFI-12437)
 ** Remaining work not addressed in initial Jira:
 *** input ports (NIFI-12504)
 *** output ports (NIFI-12504)
 *** remote process groups (NIFI-12504)
 *** process groups (NIFI-12504)
 *** connections (NIFI-12504)
 *** System Diagnostics (NIFI-12505)
 *** support for cluster-specific ui elements (NIFI-12537)
 *** Add pagination (NIFI-12552)
 * Counters (NIFI-12415)
 * Bulletin Board (NIFI-12560)
 * Provenance (NIFI-12445)
 ** Event Listing (NIFI-12445)
 ** Search (NIFI-12445)
 ** Event Dialog (NIFI-12445)
 ** Lineage (NIFI-12485)
 ** Replay from context menu (NIFI-12445)

 * Configure Reporting Task (NIFI-12563)
 * Flow Analysis Rules (NIFI-12588)
 * Registry Clients (NIFI-12486)
 * Import from Registry
 * Parameter Providers
 * Cluster
 * Flow Configuration History
 * Node Status History (NIFI-12553)
 * Status history for components from canvas context menu (NIFI-12553)
 * Users (NIFI-12543)
 * Policies (NIFI-12548)
 * Help
 * About
 * Show Upstream/Downstream
 * Align
 * List Queue (NIFI-12589)
 * Empty [all] Queue (NIFI-12604)
 * View Content (NIFI-12589 and NIFI-12445)
 * View State (NIFI-12611)
 * Change Version
 * PG Version
 ** Start
 ** Commit
 ** Force Commit
 ** Show changes
 ** Revert changes
 ** Change Flow version
 ** Stop

 * Configure PG (NIFI-12417)
 * Process Group Services (NIFI-12425)
 ** Listing (NIFI-12425)
 ** Create (NIFI-12425)
 ** Configure (NIFI-12425)
 ** Delete (NIFI-12425)
 ** Enable (NIFI-12529)
 ** Disable (NIFI-12529)
 ** Improve layout and breadcrumbs
 * Configure Processor
 ** Service Link (NIFI-12425)
 ** Create inline Service (NIFI-12425)
 ** Parameter Link (NIFI-12502)
 ** Convert to Parameter (NIFI-12502)
 ** Fix issue with Property Editor width (NIFI-12547)
 ** Stop and Configure
 ** Open Custom UI
 ** Property History
 ** Unable to re-add any removed Property
 * Property Verification
 * More Details (Processor, Controller Service, Reporting Task)

 * Download Flow
 * Create RPG
 * Configure RPG
 * RPG Remote Ports
 * RPG Go To
 * Color
 * Move to Front
 * Copy/Paste
 * Add/Update Info Icons in dialogs throughout the application
 * Better theme support
 * Run unit tests as part of standard build
 * Update all API calls to consider disconnect node confirmation
 * Update API calls to use uiOnly flag
 * Routing error handling
 * Introduce header in new pages to unify with canvas and offer better 
navigation. (NIFI-12597)
 * Prompt user to save Parameter Context when Edit form is dirty
 * Start/Stop processors, process groups, ... (NIFI-12568)
 * Dialog vertical resizing on smaller screens do not allow users to access all 
fields (NIFI-12603)

[1] [https://github.com/apache/nifi/pull/8053]

  was:
The purpose of this Jira is to track all remaining items following the initial 
commit [1] for NIFI-11481. The description will be kept up to date with 
remaining features, tasks, and improvements. As each items is worked, a new sub 
task Jira will be created and referenced in this description.
 * Support Parameters in Properties with Allowable Values (NIFI-12401)
 * Summary (NIFI-12437)
 ** Remaining work not addressed in initial Jira:
 *** input ports (NIFI-12504)
 *** output ports (NIFI-12504)
 *** remote process groups (NIFI-12504)
 *** process groups (NIFI-12504)
 *** connections (NIFI-12504)
 *** System Diagnostics (NIFI-12505)
 *** support for cluster-specific ui elements (NIFI-12537)
 *** Add pagination (NIFI-12552)
 * Counters (NIFI-12415)
 * Bulletin Board (NIFI-12560)
 * Provenance (NIFI-12445)
 ** Event Listing (NIFI-12445)
 ** Search (NIFI-12445)
 ** Event Dialog (NIFI-12445)
 ** Lineage (NIFI-12485)
 ** Replay from context menu (NIFI-12445)

 * Configure Reporting Task (NIFI-12563)
 * Flow Analysis Rules (NIFI-12588)
 * Registry Clients (NIFI-12486)
 * Import from Registry
 * Parameter Providers
 * Cluster
 * Flow Configuration History
 * Node Status History (NIFI-12553)
 * Status history for components from canvas context menu (NIFI-12553)
 * Users (NIFI-12543)
 * Policies (NIFI-12548)
 * Help
 * About
 * Show Upstream/Downstream
 * Align
 * List Queue (NIFI-12589)
 * Empty [all] Queue (NIFI-12604)
 * View Content (NIFI-12589 and NIFI-12445)
 * View State
 * Change Version
 * PG Version
 ** Start
 ** Commit
 ** Force Commit
 ** Show changes
 ** Revert changes
 ** Change Flow version
 ** Stop

 * Configure PG (NIFI-12417)
 * Process Group Services (NIFI-12425)
 ** Listing 

[jira] [Created] (NIFI-12611) View State

2024-01-15 Thread Matt Gilman (Jira)
Matt Gilman created NIFI-12611:
--

 Summary: View State
 Key: NIFI-12611
 URL: https://issues.apache.org/jira/browse/NIFI-12611
 Project: Apache NiFi
  Issue Type: Sub-task
  Components: Core UI
Reporter: Matt Gilman
Assignee: Matt Gilman


Introduce the ability to view and clear state for extension types that support 
state.





[jira] [Updated] (NIFI-12400) Remaining items to migrate UI to currently supported/active framework

2024-01-15 Thread Matt Gilman (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-12400:
---
Description: 
The purpose of this Jira is to track all remaining items following the initial 
commit [1] for NIFI-11481. The description will be kept up to date with 
remaining features, tasks, and improvements. As each item is worked, a new 
sub-task Jira will be created and referenced in this description.
 * Support Parameters in Properties with Allowable Values (NIFI-12401)
 * Summary (NIFI-12437)
 ** Remaining work not addressed in initial Jira:
 *** input ports (NIFI-12504)
 *** output ports (NIFI-12504)
 *** remote process groups (NIFI-12504)
 *** process groups (NIFI-12504)
 *** connections (NIFI-12504)
 *** System Diagnostics (NIFI-12505)
 *** support for cluster-specific ui elements (NIFI-12537)
 *** Add pagination (NIFI-12552)
 * Counters (NIFI-12415)
 * Bulletin Board (NIFI-12560)
 * Provenance (NIFI-12445)
 ** Event Listing (NIFI-12445)
 ** Search (NIFI-12445)
 ** Event Dialog (NIFI-12445)
 ** Lineage (NIFI-12485)
 ** Replay from context menu (NIFI-12445)

 * Configure Reporting Task (NIFI-12563)
 * Flow Analysis Rules (NIFI-12588)
 * Registry Clients (NIFI-12486)
 * Import from Registry
 * Parameter Providers
 * Cluster
 * Flow Configuration History
 * Node Status History (NIFI-12553)
 * Status history for components from canvas context menu (NIFI-12553)
 * Users (NIFI-12543)
 * Policies (NIFI-12548)
 * Help
 * About
 * Show Upstream/Downstream
 * Align
 * List Queue (NIFI-12589)
 * Empty [all] Queue (NIFI-12604)
 * View Content (NIFI-12589 and NIFI-12445)
 * View State
 * Change Version
 * PG Version
 ** Start
 ** Commit
 ** Force Commit
 ** Show changes
 ** Revert changes
 ** Change Flow version
 ** Stop

 * Configure PG (NIFI-12417)
 * Process Group Services (NIFI-12425)
 ** Listing (NIFI-12425)
 ** Create (NIFI-12425)
 ** Configure (NIFI-12425)
 ** Delete (NIFI-12425)
 ** Enable (NIFI-12529)
 ** Disable (NIFI-12529)
 ** Improve layout and breadcrumbs
 * Configure Processor
 ** Service Link (NIFI-12425)
 ** Create inline Service (NIFI-12425)
 ** Parameter Link (NIFI-12502)
 ** Convert to Parameter (NIFI-12502)
 ** Fix issue with Property Editor width (NIFI-12547)
 ** Stop and Configure
 ** Open Custom UI
 ** Property History
 ** Unable to re-add any removed Property
 * Property Verification
 * More Details (Processor, Controller Service, Reporting Task)

 * Download Flow
 * Create RPG
 * Configure RPG
 * RPG Remote Ports
 * RPG Go To
 * Color
 * Move to Front
 * Copy/Paste
 * Add/Update Info Icons in dialogs throughout the application
 * Better theme support
 * Run unit tests as part of standard build
 * Update all API calls to consider disconnect node confirmation
 * Update API calls to use uiOnly flag
 * Routing error handling
 * Introduce header in new pages to unify with canvas and offer better 
navigation. (NIFI-12597)
 * Prompt user to save Parameter Context when Edit form is dirty
 * Start/Stop processors, process groups, ... (NIFI-12568)
 * Dialog vertical resizing on smaller screens do not allow users to access all 
fields (NIFI-12603)

[1] [https://github.com/apache/nifi/pull/8053]

  was:
The purpose of this Jira is to track all remaining items following the initial 
commit [1] for NIFI-11481. The description will be kept up to date with 
remaining features, tasks, and improvements. As each items is worked, a new sub 
task Jira will be created and referenced in this description.
 * Support Parameters in Properties with Allowable Values (NIFI-12401)
 * Summary (NIFI-12437)
 ** Remaining work not addressed in initial Jira:
 *** input ports (NIFI-12504)
 *** output ports (NIFI-12504)
 *** remote process groups (NIFI-12504)
 *** process groups (NIFI-12504)
 *** connections (NIFI-12504)
 *** System Diagnostics (NIFI-12505)
 *** support for cluster-specific ui elements (NIFI-12537)
 *** Add pagination (NIFI-12552)
 * Counters (NIFI-12415)
 * Bulletin Board (NIFI-12560)
 * Provenance (NIFI-12445)
 ** Event Listing (NIFI-12445)
 ** Search (NIFI-12445)
 ** Event Dialog (NIFI-12445)
 ** Lineage (NIFI-12485)
 ** Replay from context menu (NIFI-12445)

 * Configure Reporting Task (NIFI-12563)
 * Flow Analysis Rules (NIFI-12588)
 * Registry Clients (NIFI-12486)
 * Import from Registry
 * Parameter Providers
 * Cluster
 * Flow Configuration History
 * Node Status History (NIFI-12553)
 * Status history for components from canvas context menu (NIFI-12553)
 * Users (NIFI-12543)
 * Policies (NIFI-12548)
 * Help
 * About
 * Show Upstream/Downstream
 * Align
 * List Queue (NIFI-12589)
 * Empty [all] Queue (NIFI-12604)
 * View Content
 * View State
 * Change Version
 * PG Version
 ** Start
 ** Commit
 ** Force Commit
 ** Show changes
 ** Revert changes
 ** Change Flow version
 ** Stop

 * Configure PG (NIFI-12417)
 * Process Group Services (NIFI-12425)
 ** Listing (NIFI-12425)
 ** Create (NIFI-12425)
 ** 

[jira] [Updated] (NIFI-12400) Remaining items to migrate UI to currently supported/active framework

2024-01-15 Thread Matt Gilman (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-12400:
---
Description: 
The purpose of this Jira is to track all remaining items following the initial 
commit [1] for NIFI-11481. The description will be kept up to date with 
remaining features, tasks, and improvements. As each item is worked, a new 
sub-task Jira will be created and referenced in this description.
 * Support Parameters in Properties with Allowable Values (NIFI-12401)
 * Summary (NIFI-12437)
 ** Remaining work not addressed in initial Jira:
 *** input ports (NIFI-12504)
 *** output ports (NIFI-12504)
 *** remote process groups (NIFI-12504)
 *** process groups (NIFI-12504)
 *** connections (NIFI-12504)
 *** System Diagnostics (NIFI-12505)
 *** support for cluster-specific ui elements (NIFI-12537)
 *** Add pagination (NIFI-12552)
 * Counters (NIFI-12415)
 * Bulletin Board (NIFI-12560)
 * Provenance (NIFI-12445)
 ** Event Listing (NIFI-12445)
 ** Search (NIFI-12445)
 ** Event Dialog (NIFI-12445)
 ** Lineage (NIFI-12485)
 ** Replay from context menu (NIFI-12445)

 * Configure Reporting Task (NIFI-12563)
 * Flow Analysis Rules (NIFI-12588)
 * Registry Clients (NIFI-12486)
 * Import from Registry
 * Parameter Providers
 * Cluster
 * Flow Configuration History
 * Node Status History (NIFI-12553)
 * Status history for components from canvas context menu (NIFI-12553)
 * Users (NIFI-12543)
 * Policies (NIFI-12548)
 * Help
 * About
 * Show Upstream/Downstream
 * Align
 * List Queue (NIFI-12589)
 * Empty [all] Queue (NIFI-12604)
 * View Content
 * View State
 * Change Version
 * PG Version
 ** Start
 ** Commit
 ** Force Commit
 ** Show changes
 ** Revert changes
 ** Change Flow version
 ** Stop

 * Configure PG (NIFI-12417)
 * Process Group Services (NIFI-12425)
 ** Listing (NIFI-12425)
 ** Create (NIFI-12425)
 ** Configure (NIFI-12425)
 ** Delete (NIFI-12425)
 ** Enable (NIFI-12529)
 ** Disable (NIFI-12529)
 ** Improve layout and breadcrumbs
 * Configure Processor
 ** Service Link (NIFI-12425)
 ** Create inline Service (NIFI-12425)
 ** Parameter Link (NIFI-12502)
 ** Convert to Parameter (NIFI-12502)
 ** Fix issue with Property Editor width (NIFI-12547)
 ** Stop and Configure
 ** Open Custom UI
 ** Property History
 ** Unable to re-add any removed Property
 * Property Verification
 * More Details (Processor, Controller Service, Reporting Task)

 * Download Flow
 * Create RPG
 * Configure RPG
 * RPG Remote Ports
 * RPG Go To
 * Color
 * Move to Front
 * Copy/Paste
 * Add/Update Info Icons in dialogs throughout the application
 * Better theme support
 * Run unit tests as part of standard build
 * Update all API calls to consider disconnect node confirmation
 * Update API calls to use uiOnly flag
 * Routing error handling
 * Introduce header in new pages to unify with canvas and offer better 
navigation. (NIFI-12597)
 * Prompt user to save Parameter Context when Edit form is dirty
 * Start/Stop processors, process groups, ... (NIFI-12568)
 * Dialog vertical resizing on smaller screens do not allow users to access all 
fields (NIFI-12603)

[1] [https://github.com/apache/nifi/pull/8053]

  was:
The purpose of this Jira is to track all remaining items following the initial 
commit [1] for NIFI-11481. The description will be kept up to date with 
remaining features, tasks, and improvements. As each items is worked, a new sub 
task Jira will be created and referenced in this description.
 * Support Parameters in Properties with Allowable Values (NIFI-12401)
 * Summary (NIFI-12437)
 ** Remaining work not addressed in initial Jira:
 *** input ports (NIFI-12504)
 *** output ports (NIFI-12504)
 *** remote process groups (NIFI-12504)
 *** process groups (NIFI-12504)
 *** connections (NIFI-12504)
 *** System Diagnostics (NIFI-12505)
 *** support for cluster-specific ui elements (NIFI-12537)
 *** Add pagination (NIFI-12552)
 * Counters (NIFI-12415)
 * Bulletin Board (NIFI-12560)
 * Provenance (NIFI-12445)
 ** Event Listing (NIFI-12445)
 ** Search (NIFI-12445)
 ** Event Dialog (NIFI-12445)
 ** Lineage (NIFI-12485)
 ** Replay from context menu (NIFI-12445)

 * Configure Reporting Task (NIFI-12563)
 * Flow Analysis Rules (NIFI-12588)
 * Registry Clients (NIFI-12486)
 * Import from Registry
 * Parameter Providers
 * Cluster
 * Flow Configuration History
 * Node Status History (NIFI-12553)
 * Status history for components from canvas context menu (NIFI-12553)
 * Users (NIFI-12543)
 * Policies (NIFI-12548)
 * Help
 * About
 * Show Upstream/Downstream
 * Align
 * List Queue (NIFI-12589)
 * Empty [all] Queue
 * View Content
 * View State
 * Change Version
 * PG Version
 ** Start
 ** Commit
 ** Force Commit
 ** Show changes
 ** Revert changes
 ** Change Flow version
 ** Stop

 * Configure PG (NIFI-12417)
 * Process Group Services (NIFI-12425)
 ** Listing (NIFI-12425)
 ** Create (NIFI-12425)
 ** Configure (NIFI-12425)
 ** Delete 

[jira] [Updated] (NIFI-12604) Add support to empty queues

2024-01-15 Thread Matt Gilman (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-12604:
---
Status: Patch Available  (was: In Progress)

> Add support to empty queues
> ---
>
> Key: NIFI-12604
> URL: https://issues.apache.org/jira/browse/NIFI-12604
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core UI
>Reporter: Matt Gilman
>Assignee: Matt Gilman
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Introduce actions for emptying a queue or all queues within a Process Group.





[jira] [Created] (MINIFICPP-2287) Site-to-site with large files: "Site2Site transaction xxx peer unknown respond code 14"

2024-01-15 Thread Marton Szasz (Jira)
Marton Szasz created MINIFICPP-2287:
---

 Summary: Site-to-site with large files: "Site2Site transaction xxx 
peer unknown respond code 14"
 Key: MINIFICPP-2287
 URL: https://issues.apache.org/jira/browse/MINIFICPP-2287
 Project: Apache NiFi MiNiFi C++
  Issue Type: Bug
Reporter: Marton Szasz


It looks like nifi may have extended the protocol, and minifi c++ didn't follow 
the development.

 

From the nifi slack: 
[https://apachenifi.slack.com/archives/CDF1VC1UZ/p1705327811015419]

 
{quote}Running minifi c++ v0.15, I am getting errors when transferring large 
files (10gb) via site to site to a Nifi (v1.20) cluster. Per the logs,the 
transfer is on going for a while (warning logs, inputPortName has been running 
for x ms in \{connection ID}
then it looks like the transfer completes (info log, Site to Site transaction 
... set flow 1 flow records with total size xxx-yyy-zzz ) ALSO, the large file 
appears on the remote Nifi cluster
then it looks like the transfer failed (warning log, Site2Site transaction xxx 
peer unknown respond code 14)
then another error, (warning log , ProcessSession rollback for inputPortName 
executed )
the finally, (warning protocol transmission failed, yielding ( xxx-yyy-zzz 
)This results in endless copies of the large files as presumably minifi retries 
the file despite successfully transferring the file.The logs show that other 
smaller files continue to be transferred while the large files yield. (edited) 
{quote}





[jira] [Updated] (MINIFICPP-2287) Site-to-site with large files: "Site2Site transaction xxx peer unknown respond code 14"

2024-01-15 Thread Marton Szasz (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-2287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Szasz updated MINIFICPP-2287:

Description: 
It looks like nifi may have extended the protocol, and minifi c++ didn't follow 
the development.

 

From Thomas on the nifi slack: 
[https://apachenifi.slack.com/archives/CDF1VC1UZ/p1705327811015419]

 
{quote}Running minifi c++ v0.15, I am getting errors when transferring large 
files (10gb) via site to site to a Nifi (v1.20) cluster. Per the logs,the 
transfer is on going for a while (warning logs, inputPortName has been running 
for x ms in \{connection ID}
then it looks like the transfer completes (info log, Site to Site transaction 
... set flow 1 flow records with total size xxx-yyy-zzz ) ALSO, the large file 
appears on the remote Nifi cluster
then it looks like the transfer failed (warning log, Site2Site transaction xxx 
peer unknown respond code 14)
then another error, (warning log , ProcessSession rollback for inputPortName 
executed )
the finally, (warning protocol transmission failed, yielding ( xxx-yyy-zzz 
)This results in endless copies of the large files as presumably minifi retries 
the file despite successfully transferring the file.The logs show that other 
smaller files continue to be transferred while the large files yield. (edited) 
{quote}

  was:
It looks like nifi may have extended the protocol, and minifi c++ didn't follow 
the development.

 

From the nifi slack: 
[https://apachenifi.slack.com/archives/CDF1VC1UZ/p1705327811015419]

 
{quote}Running minifi c++ v0.15, I am getting errors when transferring large 
files (10gb) via site to site to a Nifi (v1.20) cluster. Per the logs,the 
transfer is on going for a while (warning logs, inputPortName has been running 
for x ms in \{connection ID}
then it looks like the transfer completes (info log, Site to Site transaction 
... set flow 1 flow records with total size xxx-yyy-zzz ) ALSO, the large file 
appears on the remote Nifi cluster
then it looks like the transfer failed (warning log, Site2Site transaction xxx 
peer unknown respond code 14)
then another error, (warning log , ProcessSession rollback for inputPortName 
executed )
the finally, (warning protocol transmission failed, yielding ( xxx-yyy-zzz 
)This results in endless copies of the large files as presumably minifi retries 
the file despite successfully transferring the file.The logs show that other 
smaller files continue to be transferred while the large files yield. (edited) 
{quote}


> Site-to-site with large files: "Site2Site transaction xxx peer unknown 
> respond code 14"
> ---
>
> Key: MINIFICPP-2287
> URL: https://issues.apache.org/jira/browse/MINIFICPP-2287
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Marton Szasz
>Priority: Major
>
> It looks like nifi may have extended the protocol, and minifi c++ didn't 
> follow the development.
>  
> From Thomas on the nifi slack: 
> [https://apachenifi.slack.com/archives/CDF1VC1UZ/p1705327811015419]
>  
> {quote}Running minifi c++ v0.15, I am getting errors when transferring large 
> files (10gb) via site to site to a Nifi (v1.20) cluster. Per the logs, the 
> transfer is ongoing for a while (warning logs, inputPortName has been 
> running for x ms in \{connection ID}
> then it looks like the transfer completes (info log, Site to Site transaction 
> ... set flow 1 flow records with total size xxx-yyy-zzz ) ALSO, the large 
> file appears on the remote Nifi cluster
> then it looks like the transfer failed (warning log, Site2Site transaction 
> xxx peer unknown respond code 14)
> then another error, (warning log, ProcessSession rollback for inputPortName 
> executed )
> then finally, (warning protocol transmission failed, yielding ( xxx-yyy-zzz )) 
> This results in endless copies of the large files as presumably minifi 
> retries the file despite successfully transferring the file. The logs show 
> that other smaller files continue to be transferred while the large files 
> yield. (edited) 
> {quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINIFICPP-2264 GenerateFlowFile: 'Custom Text' should be reevaluated … [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


adamdebreceni commented on code in PR #1706:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1706#discussion_r1452390024


##
extensions/standard-processors/processors/GenerateFlowFile.h:
##
@@ -70,14 +66,15 @@ class GenerateFlowFile : public core::Processor {
   .withDefaultValue("Binary")
   .build();
   EXTENSIONAPI static constexpr auto UniqueFlowFiles = 
core::PropertyDefinitionBuilder<>::createProperty("Unique FlowFiles")
-  .withDescription("If true, each FlowFile that is generated will be 
unique. If false, a random value will be generated and all FlowFiles")
+  .withDescription("If true, each FlowFile that is generated will be 
unique. "
+  "If false, a random value will be generated and all FlowFiles will 
get the same content but this offers much higher throughput")

Review Comment:
   as discussed the description of CustomText is clear enough on how to use 
that and what to expect



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINIFICPP-2261 Add processor for pushing logs to Grafana Loki through REST API [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


lordgamez commented on code in PR #1695:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1695#discussion_r1452385790


##
extensions/grafana-loki/PushGrafanaLokiREST.h:
##
@@ -0,0 +1,178 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#pragma once
+
+#include 
+#include 
+#include 
+#include 
+
+#include "controllers/SSLContextService.h"
+#include "core/Processor.h"
+#include "core/PropertyDefinition.h"
+#include "core/PropertyDefinitionBuilder.h"
+#include "core/PropertyType.h"
+#include "core/RelationshipDefinition.h"
+#include "client/HTTPClient.h"
+#include "core/StateManager.h"
+
+namespace org::apache::nifi::minifi::extensions::grafana::loki {
+
+class PushGrafanaLokiREST : public core::Processor {
+ public:
+  EXTENSIONAPI static constexpr const char* Description = "A Grafana Loki push 
processor that uses the Grafana Loki REST API. The processor expects each flow 
file to contain a single log line to be "
+  "pushed to Grafana 
Loki, therefore it is usually used together with the TailFile processor.";
+
+  explicit PushGrafanaLokiREST(const std::string& name, const 
utils::Identifier& uuid = {})
+  : Processor(name, uuid),
+log_batch_(logger_) {
+  }
+  ~PushGrafanaLokiREST() override = default;
+
+  EXTENSIONAPI static constexpr auto Url = 
core::PropertyDefinitionBuilder<>::createProperty("Url")
+.withDescription("Url of the Grafana Loki server. For example 
http://localhost:3100/.")
+.isRequired(true)
+.build();
+  EXTENSIONAPI static constexpr auto StreamLabels = 
core::PropertyDefinitionBuilder<>::createProperty("Stream Labels")
+.withDescription("Comma separated list of <key>=<value> labels to be sent 
as stream labels.")
+.isRequired(true)
+.build();
+  EXTENSIONAPI static constexpr auto LogLineMetadataAttributes = 
core::PropertyDefinitionBuilder<>::createProperty("Log Line Metadata 
Attributes")
+.withDescription("Comma separated list of attributes to be sent as log 
line metadata for a log line.")
+.build();
+  EXTENSIONAPI static constexpr auto TenantID = 
core::PropertyDefinitionBuilder<>::createProperty("Tenant ID")
+.withDescription("The tenant ID used by default to push logs to Grafana 
Loki. If omitted or empty it assumes Grafana Loki is running in single-tenant 
mode and no X-Scope-OrgID header is sent.")
+.build();
+  EXTENSIONAPI static constexpr auto MaxBatchSize = 
core::PropertyDefinitionBuilder<>::createProperty("Max Batch Size")
+.withDescription("The maximum number of flow files to process at a time. 
If not set, or set to 0, all FlowFiles will be processed at once.")
+.withPropertyType(core::StandardPropertyTypes::UNSIGNED_LONG_TYPE)
+.withDefaultValue("100")
+.build();
+  EXTENSIONAPI static constexpr auto LogLineBatchWait = 
core::PropertyDefinitionBuilder<>::createProperty("Log Line Batch Wait")
+.withDescription("Time to wait before sending a log line batch to Grafana 
Loki, full or not. If this property and Log Line Batch Size are both unset, "
+ "the log batch of the current trigger will be sent 
immediately.")
+.withPropertyType(core::StandardPropertyTypes::TIME_PERIOD_TYPE)
+.build();
+  EXTENSIONAPI static constexpr auto LogLineBatchSize = 
core::PropertyDefinitionBuilder<>::createProperty("Log Line Batch Size")
+.withDescription("Number of log lines to send in a batch to Loki. If this 
property and Log Line Batch Wait are both unset, "
+ "the log batch of the current trigger will be sent 
immediately.")
+.withPropertyType(core::StandardPropertyTypes::UNSIGNED_INT_TYPE)
+.build();
+  EXTENSIONAPI static constexpr auto ConnectTimeout = 
core::PropertyDefinitionBuilder<>::createProperty("Connection Timeout")
+.withDescription("Max wait time for connection to the Grafana Loki 
service.")
+.withPropertyType(core::StandardPropertyTypes::TIME_PERIOD_TYPE)
+.withDefaultValue("5 s")
+.isRequired(true)
+.build();
+  EXTENSIONAPI static constexpr auto ReadTimeout = 
core::PropertyDefinitionBuilder<>::createProperty("Read Timeout")
+.withDescription("Max wait time for response from remote 

[jira] [Updated] (NIFI-12610) Typo for 'default_value' in Python developer documentation

2024-01-15 Thread Joe Gresock (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Gresock updated NIFI-12610:
---
Status: Patch Available  (was: Open)

> Typo for 'default_value' in Python developer documentation
> --
>
> Key: NIFI-12610
> URL: https://issues.apache.org/jira/browse/NIFI-12610
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 2.0.0-M1
>Reporter: Joe Gresock
>Assignee: Joe Gresock
>Priority: Trivial
> Fix For: 2.0.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the Python developer guide, the section on creating a property descriptor 
> has a typo: defaultValue should actually be default_value.  The result of 
> pasting this example in a Python processor is that the property descriptors 
> will not load.
> {code:java}
> numspaces = PropertyDescriptor(name="Number of Spaces",
> description="Number of spaces to use for pretty-printing",
> validators=[StandardValidators.POSITIVE_INTEGER_VALIDATOR],
> defaultValue="4",
> required=True)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] NIFI-12610: Correcting default_value example in Python Developer guide [nifi]

2024-01-15 Thread via GitHub


gresockj opened a new pull request, #8245:
URL: https://github.com/apache/nifi/pull/8245

   
   # Summary
   
   [NIFI-12610](https://issues.apache.org/jira/browse/NIFI-12610)
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [ ] Pull Request based on current revision of the `main` branch
   - [ ] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [ ] Build completed using `mvn clean install -P contrib-check`
 - [ ] JDK 21
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (NIFI-12610) Typo for 'default_value' in Python developer documentation

2024-01-15 Thread Joe Gresock (Jira)
Joe Gresock created NIFI-12610:
--

 Summary: Typo for 'default_value' in Python developer documentation
 Key: NIFI-12610
 URL: https://issues.apache.org/jira/browse/NIFI-12610
 Project: Apache NiFi
  Issue Type: Bug
  Components: Documentation & Website
Affects Versions: 2.0.0-M1
Reporter: Joe Gresock
Assignee: Joe Gresock
 Fix For: 2.0.0


In the Python developer guide, the section on creating a property descriptor 
has a typo: defaultValue should actually be default_value.  The result of 
pasting this example in a Python processor is that the property descriptors 
will not load.


{code:java}
numspaces = PropertyDescriptor(name="Number of Spaces",
description="Number of spaces to use for pretty-printing",
validators=[StandardValidators.POSITIVE_INTEGER_VALIDATOR],
defaultValue="4",
required=True)
{code}
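
The corrected call uses the {{default_value}} keyword. The stand-in class
below only mimics the keyword signature of nifiapi's PropertyDescriptor so
the fix can be shown self-contained; the real class and StandardValidators
come from the nifiapi package:

```python
class PropertyDescriptor:
    """Minimal stand-in for nifiapi.properties.PropertyDescriptor,
    only to demonstrate the keyword name; not the real class."""
    def __init__(self, name, description, validators=(),
                 default_value=None, required=False):
        self.name = name
        self.description = description
        self.validators = list(validators)
        self.default_value = default_value
        self.required = required


numspaces = PropertyDescriptor(
    name="Number of Spaces",
    description="Number of spaces to use for pretty-printing",
    validators=["POSITIVE_INTEGER_VALIDATOR"],  # placeholder for StandardValidators
    default_value="4",  # correct keyword; defaultValue="4" raises TypeError here
    required=True,
)
print(numspaces.default_value)  # prints 4
```

With the misspelled `defaultValue` keyword the constructor call fails, which
is why the property descriptors in the documented example do not load.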




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINIFICPP-2261 Add processor for pushing logs to Grafana Loki through REST API [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


adamdebreceni commented on code in PR #1695:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1695#discussion_r1452371311


##
extensions/grafana-loki/PushGrafanaLokiREST.h:
##
@@ -0,0 +1,178 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#pragma once
+
+#include 
+#include 
+#include 
+#include 
+
+#include "controllers/SSLContextService.h"
+#include "core/Processor.h"
+#include "core/PropertyDefinition.h"
+#include "core/PropertyDefinitionBuilder.h"
+#include "core/PropertyType.h"
+#include "core/RelationshipDefinition.h"
+#include "client/HTTPClient.h"
+#include "core/StateManager.h"
+
+namespace org::apache::nifi::minifi::extensions::grafana::loki {
+
+class PushGrafanaLokiREST : public core::Processor {
+ public:
+  EXTENSIONAPI static constexpr const char* Description = "A Grafana Loki push 
processor that uses the Grafana Loki REST API. The processor expects each flow 
file to contain a single log line to be "
+  "pushed to Grafana 
Loki, therefore it is usually used together with the TailFile processor.";
+
+  explicit PushGrafanaLokiREST(const std::string& name, const 
utils::Identifier& uuid = {})
+  : Processor(name, uuid),
+log_batch_(logger_) {
+  }
+  ~PushGrafanaLokiREST() override = default;
+
+  EXTENSIONAPI static constexpr auto Url = 
core::PropertyDefinitionBuilder<>::createProperty("Url")
+.withDescription("Url of the Grafana Loki server. For example 
http://localhost:3100/.")
+.isRequired(true)
+.build();
+  EXTENSIONAPI static constexpr auto StreamLabels = 
core::PropertyDefinitionBuilder<>::createProperty("Stream Labels")
+.withDescription("Comma separated list of <key>=<value> labels to be sent 
as stream labels.")
+.isRequired(true)
+.build();
+  EXTENSIONAPI static constexpr auto LogLineMetadataAttributes = 
core::PropertyDefinitionBuilder<>::createProperty("Log Line Metadata 
Attributes")
+.withDescription("Comma separated list of attributes to be sent as log 
line metadata for a log line.")
+.build();
+  EXTENSIONAPI static constexpr auto TenantID = 
core::PropertyDefinitionBuilder<>::createProperty("Tenant ID")
+.withDescription("The tenant ID used by default to push logs to Grafana 
Loki. If omitted or empty it assumes Grafana Loki is running in single-tenant 
mode and no X-Scope-OrgID header is sent.")
+.build();
+  EXTENSIONAPI static constexpr auto MaxBatchSize = 
core::PropertyDefinitionBuilder<>::createProperty("Max Batch Size")
+.withDescription("The maximum number of flow files to process at a time. 
If not set, or set to 0, all FlowFiles will be processed at once.")
+.withPropertyType(core::StandardPropertyTypes::UNSIGNED_LONG_TYPE)
+.withDefaultValue("100")
+.build();
+  EXTENSIONAPI static constexpr auto LogLineBatchWait = 
core::PropertyDefinitionBuilder<>::createProperty("Log Line Batch Wait")
+.withDescription("Time to wait before sending a log line batch to Grafana 
Loki, full or not. If this property and Log Line Batch Size are both unset, "
+ "the log batch of the current trigger will be sent 
immediately.")
+.withPropertyType(core::StandardPropertyTypes::TIME_PERIOD_TYPE)
+.build();
+  EXTENSIONAPI static constexpr auto LogLineBatchSize = 
core::PropertyDefinitionBuilder<>::createProperty("Log Line Batch Size")
+.withDescription("Number of log lines to send in a batch to Loki. If this 
property and Log Line Batch Wait are both unset, "
+ "the log batch of the current trigger will be sent 
immediately.")
+.withPropertyType(core::StandardPropertyTypes::UNSIGNED_INT_TYPE)
+.build();
+  EXTENSIONAPI static constexpr auto ConnectTimeout = 
core::PropertyDefinitionBuilder<>::createProperty("Connection Timeout")
+.withDescription("Max wait time for connection to the Grafana Loki 
service.")
+.withPropertyType(core::StandardPropertyTypes::TIME_PERIOD_TYPE)
+.withDefaultValue("5 s")
+.isRequired(true)
+.build();
+  EXTENSIONAPI static constexpr auto ReadTimeout = 
core::PropertyDefinitionBuilder<>::createProperty("Read Timeout")
+.withDescription("Max wait time for response from remote 

Re: [PR] MINIFICPP-2269 upgrade xz/liblzma [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


szaszm closed pull request #1701: MINIFICPP-2269 upgrade xz/liblzma
URL: https://github.com/apache/nifi-minifi-cpp/pull/1701


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINIFICPP-2261 Add processor for pushing logs to Grafana Loki through REST API [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


lordgamez commented on code in PR #1695:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1695#discussion_r1452345696


##
PROCESSORS.md:
##
@@ -2196,6 +2197,40 @@ In the list below, the names of required properties 
appear in bold. Any other pr
 | failure | FlowFiles that failed to be sent to the destination are 
transferred to this relationship |
 
 
+## PushGrafanaLokiREST
+
+### Description
+
+A Grafana Loki push processor that uses the Grafana Loki REST API. The 
processor expects each flow file to contain a single log line to be pushed to 
Grafana Loki, therefore it is usually used together with the TailFile processor.
+
+### Properties
+
+In the list below, the names of required properties appear in bold. Any other 
properties (not in bold) are considered optional. The table also indicates any 
default values, and whether a property supports the NiFi Expression Language.
+
+| Name                         | Default Value | Allowable Values | Description |
+|------------------------------|---------------|------------------|-------------|
+| **Url**                      |               |                  | Url of the Grafana Loki server. For example http://localhost:3100/. |
+| **Stream Labels**            |               |                  | Comma separated list of <key>=<value> labels to be sent as stream labels. |
+| Log Line Metadata Attributes |               |                  | Comma separated list of attributes to be sent as log line metadata for a log line. |
+| Tenant ID                    |               |                  | The tenant ID used by default to push logs to Grafana Loki. If omitted or empty it assumes Grafana Loki is running in single-tenant mode and no X-Scope-OrgID header is sent. |
+| Max Batch Size               | 100           |                  | The maximum number of flow files to process at a time. If not set, or set to 0, all FlowFiles will be processed at once. |

Review Comment:
   Good catch, fixed in 0f1a614f803abf8ea538df7f95330a953ae2afd7



##
extensions/grafana-loki/PushGrafanaLokiREST.cpp:
##
@@ -0,0 +1,390 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "PushGrafanaLokiREST.h"
+
+#include 
+#include 
+#include 
+
+#include "core/ProcessContext.h"
+#include "core/ProcessSession.h"
+#include "core/Resource.h"
+#include "utils/ProcessorConfigUtils.h"
+#include "utils/StringUtils.h"
+#include "rapidjson/document.h"
+#include "rapidjson/stream.h"
+#include "rapidjson/writer.h"
+
+namespace org::apache::nifi::minifi::extensions::grafana::loki {
+
+void PushGrafanaLokiREST::LogBatch::add(const std::shared_ptr& 
flowfile) {
+  gsl_Expects(state_manager_);
+  if (log_line_batch_wait_ && batched_flowfiles_.empty()) {
+start_push_time_ = std::chrono::system_clock::now();
+std::unordered_map state;
+state["start_push_time"] = 
std::to_string(std::chrono::duration_cast(start_push_time_.time_since_epoch()).count());
+logger_->log_debug("Saved start push time to state: {}", 
state["start_push_time"]);
+state_manager_->set(state);
+  }
+  batched_flowfiles_.push_back(flowfile);
+}
+
+void PushGrafanaLokiREST::LogBatch::restore(const 
std::shared_ptr& flowfile) {
+  batched_flowfiles_.push_back(flowfile);
+}
+
+std::vector> 
PushGrafanaLokiREST::LogBatch::flush() {
+  gsl_Expects(state_manager_);
+  start_push_time_ = {};
+  auto result 

Re: [PR] MINIFICPP-62 Add SplitText processor [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


szaszm commented on code in PR #1682:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1682#discussion_r1452325647


##
extensions/standard-processors/processors/SplitText.cpp:
##
@@ -0,0 +1,378 @@
+/**
+ * @file SplitText.cpp
+ * SplitText class implementation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "SplitText.h"
+#include "core/ProcessContext.h"
+#include "core/ProcessSession.h"
+#include "core/Resource.h"
+#include "core/FlowFile.h"
+#include "utils/gsl.h"
+#include "utils/ProcessorConfigUtils.h"
+
+namespace org::apache::nifi::minifi::processors {
+
+namespace detail {
+
+LineReader::LineReader(const std::shared_ptr& stream)
+: stream_(stream) {
+  if (!stream_ || stream_->size() == 0) {
+state_ = StreamReadState::EndOfStream;
+  }
+}
+
+uint8_t LineReader::getEndLineSize(size_t newline_index) {
+  gsl_Expects(buffer_.size() > newline_index);
+  if (buffer_[newline_index] != '\n') {
+return 0;
+  }
+  if (newline_index == 0 || buffer_[newline_index - 1] != '\r') {
+return 1;
+  }
+  return 2;
+}
+
+void LineReader::setLastLineInfoAttributes(uint8_t endline_size, const 
std::optional& starts_with) {
+  const uint64_t size_from_beginning_of_stream = (current_buffer_count_ - 1) * 
SPLIT_TEXT_BUFFER_SIZE + buffer_offset_;
+  if (last_line_info_) {
+LineInfo previous_line_info = *last_line_info_;
+last_line_info_->offset = previous_line_info.offset + 
previous_line_info.size;
+last_line_info_->size = size_from_beginning_of_stream - 
previous_line_info.offset - previous_line_info.size;
+last_line_info_->endline_size = endline_size;
+last_line_info_->matches_starts_with = true;
+  } else {
+last_line_info_ = LineInfo{.offset = 0, .size = read_size_ - 
last_read_size_ + buffer_offset_, .endline_size = endline_size, 
.matches_starts_with = true};
+  }
+
+  if (starts_with) {
+last_line_info_->matches_starts_with = last_line_info_->size >= 
starts_with->size() &&
+  std::equal(starts_with->begin(), starts_with->end(), buffer_.begin() + 
last_line_info_->offset, buffer_.begin() + last_line_info_->offset + 
starts_with->size());
+  }
+}
+
+bool LineReader::readNextBuffer() {
+  buffer_offset_ = 0;
+  last_read_size_ = (std::min)(gsl::narrow(stream_->size() - 
read_size_), SPLIT_TEXT_BUFFER_SIZE);
+  const auto read_ret = 
stream_->read(as_writable_bytes(std::span(buffer_).subspan(0, 
last_read_size_)));
+  read_size_ += read_ret;
+  if (io::isError(read_ret)) {
+state_ = StreamReadState::StreamReadError;
+return false;
+  }
+  ++current_buffer_count_;
+  return true;
+}
+
+std::optional LineReader::finalizeLineInfo(uint8_t 
endline_size, const std::optional& starts_with) {
+  setLastLineInfoAttributes(endline_size, starts_with);
+  if (last_line_info_->size == 0) {
+return std::nullopt;
+  }
+  return last_line_info_;
+}
+
+std::optional LineReader::readNextLine(const 
std::optional& starts_with) {
+  if (state_ != StreamReadState::Ok) {
+return std::nullopt;
+  }
+
+  const auto isLastReadProcessed = [this]() { return last_read_size_ <= 
buffer_offset_; };
+  while (read_size_ < stream_->size() || !isLastReadProcessed()) {
+if (isLastReadProcessed() && !readNextBuffer()) {
+  return std::nullopt;
+}
+
+for (auto i = buffer_offset_; i < last_read_size_; ++i) {
+  if (buffer_[i] == '\n') {
+buffer_offset_ = i + 1;
+return finalizeLineInfo(getEndLineSize(i), starts_with);
+  }
+}
+buffer_offset_ = last_read_size_;
+  }
+
+  state_ = StreamReadState::EndOfStream;
+  return finalizeLineInfo(0, starts_with);
+}
+
+SplitTextFragmentGenerator::SplitTextFragmentGenerator(const 
std::shared_ptr& stream, const SplitTextConfiguration& 
split_text_config)
+: line_reader_(stream),
+  split_text_config_(split_text_config) {
+}
+
+void SplitTextFragmentGenerator::finalizeFragmentOffset(Fragment& 
current_fragment) {
+  current_fragment.fragment_offset = flow_file_offset_;
+  flow_file_offset_ += current_fragment.fragment_size;
+}
+
+void SplitTextFragmentGenerator::addLineToFragment(Fragment& current_fragment, 
const LineReader::LineInfo& line) {
+  if (line.endline_size == line.size) {  // if line consists only of 
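
The end-of-line sizing rule in `LineReader::getEndLineSize` quoted above
(0 for no newline at the index, 1 for a bare LF, 2 for CRLF) can be sketched
as follows; this is an illustrative translation, not code from the PR:

```python
def end_line_size(buf: bytes, newline_index: int) -> int:
    """Mirror of LineReader::getEndLineSize: how many bytes of the
    line ending are present at (and just before) newline_index."""
    assert newline_index < len(buf)
    if buf[newline_index] != ord('\n'):
        return 0  # not a newline position
    if newline_index == 0 or buf[newline_index - 1] != ord('\r'):
        return 1  # bare LF
    return 2      # CRLF


print(end_line_size(b"abc\n", 3))   # prints 1
print(end_line_size(b"ab\r\n", 3))  # prints 2
print(end_line_size(b"abcd", 3))    # prints 0
print(end_line_size(b"\n", 0))      # prints 1 (LF at index 0 cannot be CRLF)
```

The index-0 guard matters: a lone `\n` at the start of a buffer must count as
a one-byte ending rather than reading before the buffer.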

Re: [PR] MINIFICPP-62 Add SplitText processor [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


lordgamez commented on code in PR #1682:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1682#discussion_r1452321684


##
extensions/standard-processors/processors/SplitText.cpp:
##
@@ -0,0 +1,378 @@
+/**
+ * @file SplitText.cpp
+ * SplitText class implementation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "SplitText.h"
+#include "core/ProcessContext.h"
+#include "core/ProcessSession.h"
+#include "core/Resource.h"
+#include "core/FlowFile.h"
+#include "utils/gsl.h"
+#include "utils/ProcessorConfigUtils.h"
+
+namespace org::apache::nifi::minifi::processors {
+
+namespace detail {
+
+LineReader::LineReader(const std::shared_ptr& stream)
+: stream_(stream) {
+  if (!stream_ || stream_->size() == 0) {
+state_ = StreamReadState::EndOfStream;
+  }
+}
+
+uint8_t LineReader::getEndLineSize(size_t newline_index) {
+  gsl_Expects(buffer_.size() > newline_index);
+  if (buffer_[newline_index] != '\n') {
+return 0;
+  }
+  if (newline_index == 0 || buffer_[newline_index - 1] != '\r') {
+return 1;
+  }
+  return 2;
+}
+
+void LineReader::setLastLineInfoAttributes(uint8_t endline_size, const 
std::optional& starts_with) {
+  const uint64_t size_from_beginning_of_stream = (current_buffer_count_ - 1) * 
SPLIT_TEXT_BUFFER_SIZE + buffer_offset_;
+  if (last_line_info_) {
+LineInfo previous_line_info = *last_line_info_;
+last_line_info_->offset = previous_line_info.offset + 
previous_line_info.size;
+last_line_info_->size = size_from_beginning_of_stream - 
previous_line_info.offset - previous_line_info.size;
+last_line_info_->endline_size = endline_size;
+last_line_info_->matches_starts_with = true;
+  } else {
+last_line_info_ = LineInfo{.offset = 0, .size = read_size_ - 
last_read_size_ + buffer_offset_, .endline_size = endline_size, 
.matches_starts_with = true};
+  }
+
+  if (starts_with) {
+last_line_info_->matches_starts_with = last_line_info_->size >= 
starts_with->size() &&
+  std::equal(starts_with->begin(), starts_with->end(), buffer_.begin() + 
last_line_info_->offset, buffer_.begin() + last_line_info_->offset + 
starts_with->size());
+  }
+}
+
+bool LineReader::readNextBuffer() {
+  buffer_offset_ = 0;
+  last_read_size_ = (std::min)(gsl::narrow(stream_->size() - 
read_size_), SPLIT_TEXT_BUFFER_SIZE);
+  const auto read_ret = 
stream_->read(as_writable_bytes(std::span(buffer_).subspan(0, 
last_read_size_)));
+  read_size_ += read_ret;
+  if (io::isError(read_ret)) {
+state_ = StreamReadState::StreamReadError;
+return false;
+  }
+  ++current_buffer_count_;
+  return true;
+}
+
+std::optional LineReader::finalizeLineInfo(uint8_t 
endline_size, const std::optional& starts_with) {
+  setLastLineInfoAttributes(endline_size, starts_with);
+  if (last_line_info_->size == 0) {
+return std::nullopt;
+  }
+  return last_line_info_;
+}
+
+std::optional LineReader::readNextLine(const 
std::optional& starts_with) {
+  if (state_ != StreamReadState::Ok) {
+return std::nullopt;
+  }
+
+  const auto isLastReadProcessed = [this]() { return last_read_size_ <= 
buffer_offset_; };
+  while (read_size_ < stream_->size() || !isLastReadProcessed()) {
+if (isLastReadProcessed() && !readNextBuffer()) {
+  return std::nullopt;
+}
+
+for (auto i = buffer_offset_; i < last_read_size_; ++i) {
+  if (buffer_[i] == '\n') {
+buffer_offset_ = i + 1;
+return finalizeLineInfo(getEndLineSize(i), starts_with);
+  }
+}
+buffer_offset_ = last_read_size_;
+  }
+
+  state_ = StreamReadState::EndOfStream;
+  return finalizeLineInfo(0, starts_with);
+}
+
+SplitTextFragmentGenerator::SplitTextFragmentGenerator(const 
std::shared_ptr& stream, const SplitTextConfiguration& 
split_text_config)
+: line_reader_(stream),
+  split_text_config_(split_text_config) {
+}
+
+void SplitTextFragmentGenerator::finalizeFragmentOffset(Fragment& 
current_fragment) {
+  current_fragment.fragment_offset = flow_file_offset_;
+  flow_file_offset_ += current_fragment.fragment_size;
+}
+
+void SplitTextFragmentGenerator::addLineToFragment(Fragment& current_fragment, 
const LineReader::LineInfo& line) {
+  if (line.endline_size == line.size) {  // if line consists only 
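The newline-scanning loop in `readNextLine` above can be sketched in isolation. This is a simplified illustration, not the processor code: the `LineInfo` struct and `splitLines` helper below are made-up names for the example, and only the `\n` / `\r\n` handling is reproduced.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Illustrative sketch only: split a buffer into lines the way SplitText's
// LineReader does, recording each line's length and the size of its line
// ending (0 for a final line with no newline, 1 for '\n', 2 for "\r\n").
struct LineInfo {
  std::size_t size = 0;           // line length excluding the line ending
  std::uint8_t endline_size = 0;  // 0, 1 ('\n') or 2 ("\r\n")
};

std::vector<LineInfo> splitLines(const std::string& text) {
  std::vector<LineInfo> lines;
  std::size_t line_start = 0;
  for (std::size_t i = 0; i < text.size(); ++i) {
    if (text[i] == '\n') {
      const bool crlf = i > line_start && text[i - 1] == '\r';
      lines.push_back({i - line_start - (crlf ? 1 : 0), static_cast<std::uint8_t>(crlf ? 2 : 1)});
      line_start = i + 1;
    }
  }
  if (line_start < text.size()) {
    lines.push_back({text.size() - line_start, 0});  // trailing line without a newline
  }
  return lines;
}
```

For `"ab\r\ncd\nef"` this yields three lines of length 2 with endline sizes 2, 1 and 0 respectively.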

Re: [PR] MINIFICPP-62 Add SplitText processor [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


lordgamez commented on code in PR #1682:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1682#discussion_r1452321449


##
extensions/standard-processors/processors/SplitText.h:
##
@@ -0,0 +1,234 @@
+/**
+ * @file SplitText.h
+ * SplitText class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#pragma once
+
+#include 
+#include 
+#include 
+#include 
+
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/PropertyDefinitionBuilder.h"
+#include "core/PropertyDefinition.h"
+#include "core/PropertyType.h"
+#include "core/RelationshipDefinition.h"
+#include "FlowFileRecord.h"
+#include "utils/Export.h"
+#include "utils/expected.h"
+
+namespace org::apache::nifi::minifi::processors {
+
+struct SplitTextConfiguration {
+  uint64_t line_split_count = 0;
+  std::optional<uint64_t> maximum_fragment_size;
+  uint64_t header_line_count = 0;
+  std::optional<std::string> header_line_marker_characters;
+  bool remove_trailing_new_lines = true;
+};
+
+namespace detail {

Review Comment:
   Moved those classes to the .cpp file in 
bb46bb709ddf6529c5b9a47603cc238384c66a13



##
extensions/standard-processors/processors/SplitText.cpp:
##
@@ -0,0 +1,378 @@
+/**
+ * @file SplitText.cpp
+ * SplitText class implementation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "SplitText.h"
+#include "core/ProcessContext.h"
+#include "core/ProcessSession.h"
+#include "core/Resource.h"
+#include "core/FlowFile.h"
+#include "utils/gsl.h"
+#include "utils/ProcessorConfigUtils.h"
+
+namespace org::apache::nifi::minifi::processors {
+
+namespace detail {
+
+LineReader::LineReader(const std::shared_ptr<io::InputStream>& stream)
+: stream_(stream) {
+  if (!stream_ || stream_->size() == 0) {
+state_ = StreamReadState::EndOfStream;
+  }
+}
+
+uint8_t LineReader::getEndLineSize(size_t newline_index) {

Review Comment:
   Renamed to `newline_position` in bb46bb709ddf6529c5b9a47603cc238384c66a13



##
extensions/standard-processors/processors/SplitText.h:
##
@@ -0,0 +1,234 @@
+/**
+ * @file SplitText.h
+ * SplitText class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#pragma once
+
+#include 
+#include 
+#include 
+#include 
+
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/PropertyDefinitionBuilder.h"
+#include "core/PropertyDefinition.h"
+#include "core/PropertyType.h"
+#include "core/RelationshipDefinition.h"
+#include "FlowFileRecord.h"
+#include "utils/Export.h"
+#include "utils/expected.h"
+
+namespace org::apache::nifi::minifi::processors {
+
+struct SplitTextConfiguration {
+  uint64_t line_split_count = 0;
+  std::optional<uint64_t> maximum_fragment_size;
+  uint64_t 

[jira] [Resolved] (NIFI-12594) ListS3 minimum object age filter not observed when entity state tracking is used

2024-01-15 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi resolved NIFI-12594.

Fix Version/s: 1.25.0
   2.0.0
 Assignee: Peter Kimberley
   Resolution: Fixed

> ListS3 minimum object age filter not observed when entity state tracking is 
> used
> 
>
> Key: NIFI-12594
> URL: https://issues.apache.org/jira/browse/NIFI-12594
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.24.0, 2.0.0
> Environment: Docker, on-prem S3
>Reporter: Peter Kimberley
>Assignee: Peter Kimberley
>Priority: Major
> Fix For: 1.25.0, 2.0.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When ListS3 is configured to use the {{Tracking Entities}} listing strategy, 
> the following is observed:
>  # Configure ListS3 with a Minimum Object Age of {{1 hour.}} Ensure processor 
> is stopped.
>  # Create a new FlowFile with GenerateFlowFile and run once
>  # Put the FlowFile to an S3 bucket with PutS3
>  # Open ListS3 configuration
>  # Click Verify. UI reports: ??Successfully listed contents of bucket <bucket name>, finding 0 objects matching the filter.??
>  # Run ListS3 once. Flowfile is retrieved, even though the 1 hour interval 
> has not yet elapsed.
> The issue is the ListS3 {{Minimum Object Age}} property is not being observed 
> when using the Tracking Entities listing strategy. When using {{{}Tracking 
> Timestamps{}}}, the processor behaves as expected.
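The behaviour described above boils down to a minimum-age predicate that the timestamp-tracking path applies but the entity-tracking path skipped. As a hedged illustration (this is not the NiFi code; `passes_min_age` is a made-up name for the example), the check is simply:

```cpp
#include <chrono>

// Sketch of a minimum-object-age filter: an object is only listed once it has
// been unmodified for at least `min_age`. The NIFI-12594 fix makes the
// entity-tracking listing strategy apply this same check.
using Clock = std::chrono::system_clock;

bool passes_min_age(Clock::time_point last_modified, Clock::time_point now,
                    std::chrono::seconds min_age) {
  return now - last_modified >= min_age;
}
```

An object uploaded ten minutes ago fails a one-hour minimum age; one uploaded two hours ago passes.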



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-12594) ListS3 minimum object age filter not observed when entity state tracking is used

2024-01-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17806772#comment-17806772
 ] 

ASF subversion and git services commented on NIFI-12594:


Commit b4487a0bf0c67530c19824adbe7d0e002dc255b5 in nifi's branch 
refs/heads/support/nifi-1.x from p-kimberley
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=b4487a0bf0 ]

NIFI-12594: ListS3 - observe min/max object age when entity state tracking is 
used

This closes #8231.

Signed-off-by: Peter Turcsanyi 

(cherry picked from commit 3ebad40fae458db3fe664ddd6738b770a26289c8)


> ListS3 minimum object age filter not observed when entity state tracking is 
> used
> 
>
> Key: NIFI-12594
> URL: https://issues.apache.org/jira/browse/NIFI-12594
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.24.0, 2.0.0
> Environment: Docker, on-prem S3
>Reporter: Peter Kimberley
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When ListS3 is configured to use the {{Tracking Entities}} listing strategy, 
> the following is observed:
>  # Configure ListS3 with a Minimum Object Age of {{1 hour.}} Ensure processor 
> is stopped.
>  # Create a new FlowFile with GenerateFlowFile and run once
>  # Put the FlowFile to an S3 bucket with PutS3
>  # Open ListS3 configuration
>  # Click Verify. UI reports: ??Successfully listed contents of bucket <bucket name>, finding 0 objects matching the filter.??
>  # Run ListS3 once. Flowfile is retrieved, even though the 1 hour interval 
> has not yet elapsed.
> The issue is the ListS3 {{Minimum Object Age}} property is not being observed 
> when using the Tracking Entities listing strategy. When using {{{}Tracking 
> Timestamps{}}}, the processor behaves as expected.





Re: [PR] MINIFICPP-2262 Add processor to push logs to Grafana Loki through gRPC [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


szaszm commented on code in PR #1698:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1698#discussion_r1452235982


##
extensions/grafana-loki/protos/grafana-loki-push.proto:
##


Review Comment:
   Is this file coming from somewhere else? We should include the source, and 
use the appropriate license.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINIFICPP-2262 Add processor to push logs to Grafana Loki through gRPC [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


szaszm commented on code in PR #1698:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1698#discussion_r1452228611


##
extensions/grafana-loki/CMakeLists.txt:
##
@@ -16,20 +16,46 @@
 # specific language governing permissions and limitations
 # under the License.
 #
-
 if (NOT (ENABLE_ALL OR ENABLE_GRAFANA_LOKI))
 return()
 endif()
 
 include(${CMAKE_SOURCE_DIR}/extensions/ExtensionHeader.txt)
 
-file(GLOB SOURCES  "*.cpp")
+if (ENABLE_GRPC)
+include(Grpc)
 
-add_library(minifi-grafana-loki SHARED ${SOURCES})
+file(MAKE_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated)
+
+add_custom_command(
+OUTPUT  
${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.grpc.pb.cc 
${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.grpc.pb.h 
${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.pb.h 
${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.pb.cc
+COMMAND ${PROTOBUF_COMPILER} 
--plugin=protoc-gen-grpc=${GRPC_CPP_PLUGIN} 
-I=${CMAKE_CURRENT_SOURCE_DIR}/protos/ -I=${protobuf_SOURCE_DIR}/src 
--grpc_out=${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/ 
--cpp_out=${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/ 
${CMAKE_CURRENT_SOURCE_DIR}/protos/grafana-loki-push.proto
+DEPENDS protobuf::protoc grpc_cpp_plugin)
 
+add_custom_target(grafana-loki-protos ALL DEPENDS 
${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.grpc.pb.cc 
${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.grpc.pb.h 
${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.pb.h 
${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.pb.cc)
+
+file(GLOB SOURCES "*.cpp")
+list(APPEND SOURCES
+
${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.grpc.pb.cc
+${CMAKE_CURRENT_SOURCE_DIR}/protobuf-generated/grafana-loki-push.pb.cc
+)
+else()
+set(SOURCES
+${CMAKE_CURRENT_SOURCE_DIR}/PushGrafanaLoki.cpp
+${CMAKE_CURRENT_SOURCE_DIR}/PushGrafanaLokiREST.cpp
+)
+endif()
+
+add_library(minifi-grafana-loki SHARED ${SOURCES})
 target_include_directories(minifi-grafana-loki PRIVATE BEFORE 
"${CMAKE_SOURCE_DIR}/extensions/http-curl")
-target_link_libraries(minifi-grafana-loki ${LIBMINIFI})
-target_link_libraries(minifi-grafana-loki minifi-http-curl)
+target_link_libraries(minifi-grafana-loki ${LIBMINIFI} minifi-http-curl)
+add_dependencies(minifi-grafana-loki minifi-http-curl)
+
+if (ENABLE_GRPC)
+target_include_directories(minifi-grafana-loki PRIVATE BEFORE 
"${GRPC_INCLUDE_DIR}" "${PROTOBUF_INCLUDE_DIR}")
+target_link_libraries(minifi-grafana-loki grpc++ protobuf::libprotobuf)
+add_dependencies(minifi-grafana-loki grpc grpc++ protobuf::libprotobuf 
grafana-loki-protos)

Review Comment:
   `target_link_libraries`: they should be linked together
   `add_dependencies`: these targets should be compiled before the first in the 
list, not allowing concurrent compilation.
   
   We should avoid `add_dependencies`, unless we're running the binaries of the 
result of a previous build step, or similar. In this case, is that the 
generation of sources with protobuf?






Re: [PR] MINIFICPP-2262 Add processor to push logs to Grafana Loki through gRPC [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


szaszm commented on code in PR #1698:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1698#discussion_r1446385712


##
LICENSE:
##
@@ -2442,52 +2443,6 @@ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 
LIABILITY, OR TORT
 OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 
-This product bundles 'protobuf' within 'OpenCV' under a 3-Clause BSD license:

Review Comment:
   Is it no longer enabled within OpenCV?



##
.dockerignore:
##
@@ -57,6 +57,11 @@ extensions/expression-language/Scanner.cpp
 extensions/expression-language/location.hh
 extensions/expression-language/position.hh
 extensions/expression-language/stack.h
+extensions/grafana-loki/protobuf-generated/grafana-loki-push.grpc.pb.h
+extensions/grafana-loki/protobuf-generated/grafana-loki-push.grpc.pb.cc
+extensions/grafana-loki/protobuf-generated/grafana-loki-push.pb.h
+extensions/grafana-loki/protobuf-generated/grafana-loki-push.pb.cc

Review Comment:
   Could we generate these sources in the build directory instead, and add them 
to the build from there?






Re: [PR] MINIFICPP-2261 Add processor for pushing logs to Grafana Loki through REST API [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


szaszm commented on code in PR #1695:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1695#discussion_r1446396780


##
PROCESSORS.md:
##
@@ -2196,6 +2197,40 @@ In the list below, the names of required properties 
appear in bold. Any other pr
 | failure | FlowFiles that failed to be sent to the destination are 
transferred to this relationship |
 
 
+## PushGrafanaLokiREST
+
+### Description
+
+A Grafana Loki push processor that uses the Grafana Loki REST API. The 
processor expects each flow file to contain a single log line to be pushed to 
Grafana Loki, therefore it is usually used together with the TailFile processor.
+
+### Properties
+
+In the list below, the names of required properties appear in bold. Any other 
properties (not in bold) are considered optional. The table also indicates any 
default values, and whether a property supports the NiFi Expression Language.
+
+| Name | Default Value | Allowable Values | Description |
+|------|---------------|------------------|-------------|
+| **Url** | | | Url of the Grafana Loki server. For example http://localhost:3100/. |
+| **Stream Labels** | | | Comma separated list of <key>=<value> labels to be sent as stream labels. |
+| Log Line Metadata Attributes | | | Comma separated list of attributes to be sent as log line metadata for a log line. |
+| Tenant ID | | | The tenant ID used by default to push logs to Grafana Loki. If omitted or empty it assumes Grafana Loki is running in single-tenant mode and no X-Scope-OrgID header is sent. |
+| Max Batch Size | 100 | | The maximum number of flow files to process at a time. If not set, or set to 0, all FlowFiles will be processed at once. |

Review Comment:
   If not set, isn't it just using the default 100?
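For context on what PushGrafanaLokiREST sends: Grafana Loki's documented push endpoint is `POST /loki/api/v1/push`, whose JSON body has the shape `{"streams":[{"stream":{<labels>},"values":[["<ns-timestamp>","<line>"], ...]}]}`. A hedged sketch of assembling that payload follows; the `buildLokiPushPayload` helper is illustrative, not the processor's actual API, and it does no JSON escaping.

```cpp
#include <string>
#include <utility>
#include <vector>

// Illustrative only: build the Loki push payload from stream labels and
// (nanosecond-timestamp, log line) pairs. Values are assumed not to need
// JSON escaping in this sketch.
std::string buildLokiPushPayload(
    const std::vector<std::pair<std::string, std::string>>& labels,
    const std::vector<std::pair<std::string, std::string>>& timestamped_lines) {
  std::string json = R"({"streams":[{"stream":{)";
  for (std::size_t i = 0; i < labels.size(); ++i) {
    if (i) json += ",";
    json += "\"" + labels[i].first + "\":\"" + labels[i].second + "\"";
  }
  json += R"(},"values":[)";
  for (std::size_t i = 0; i < timestamped_lines.size(); ++i) {
    if (i) json += ",";
    json += "[\"" + timestamped_lines[i].first + "\",\"" + timestamped_lines[i].second + "\"]";
  }
  json += "]}]}";
  return json;
}
```

The resulting string would be POSTed with `Content-Type: application/json` (plus an `X-Scope-OrgID` header when a tenant ID is set, as the property table notes).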



##
extensions/grafana-loki/PushGrafanaLokiREST.cpp:
##
@@ -0,0 +1,390 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "PushGrafanaLokiREST.h"
+
+#include 
+#include 
+#include 
+
+#include "core/ProcessContext.h"
+#include "core/ProcessSession.h"
+#include "core/Resource.h"
+#include "utils/ProcessorConfigUtils.h"
+#include "utils/StringUtils.h"
+#include "rapidjson/document.h"
+#include "rapidjson/stream.h"
+#include "rapidjson/writer.h"
+
+namespace org::apache::nifi::minifi::extensions::grafana::loki {
+
+void PushGrafanaLokiREST::LogBatch::add(const std::shared_ptr<core::FlowFile>& flowfile) {
+  gsl_Expects(state_manager_);
+  if (log_line_batch_wait_ && batched_flowfiles_.empty()) {
+start_push_time_ = std::chrono::system_clock::now();
+std::unordered_map<std::string, std::string> state;
+state["start_push_time"] = 
std::to_string(std::chrono::duration_cast(start_push_time_.time_since_epoch()).count());
+logger_->log_debug("Saved start push time to state: {}", 
state["start_push_time"]);
+state_manager_->set(state);
+  }
+  batched_flowfiles_.push_back(flowfile);
+}
+
+void PushGrafanaLokiREST::LogBatch::restore(const std::shared_ptr<core::FlowFile>& flowfile) {
+  batched_flowfiles_.push_back(flowfile);
+}
+
+std::vector<std::shared_ptr<core::FlowFile>> PushGrafanaLokiREST::LogBatch::flush() {
+  gsl_Expects(state_manager_);
+  start_push_time_ = {};
+  auto result = 

Re: [PR] MINIFICPP-2261 Add processor for pushing logs to Grafana Loki through REST API [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


lordgamez commented on code in PR #1695:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1695#discussion_r1452221843


##
extensions/grafana-loki/PushGrafanaLokiREST.h:
##
@@ -0,0 +1,178 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#pragma once
+
+#include 
+#include 
+#include 
+#include 
+
+#include "controllers/SSLContextService.h"
+#include "core/Processor.h"
+#include "core/PropertyDefinition.h"
+#include "core/PropertyDefinitionBuilder.h"
+#include "core/PropertyType.h"
+#include "core/RelationshipDefinition.h"
+#include "client/HTTPClient.h"
+#include "core/StateManager.h"
+
+namespace org::apache::nifi::minifi::extensions::grafana::loki {
+
+class PushGrafanaLokiREST : public core::Processor {
+ public:
+  EXTENSIONAPI static constexpr const char* Description = "A Grafana Loki push 
processor that uses the Grafana Loki REST API. The processor expects each flow 
file to contain a single log line to be "
+  "pushed to Grafana 
Loki, therefore it is usually used together with the TailFile processor.";
+
+  explicit PushGrafanaLokiREST(const std::string& name, const 
utils::Identifier& uuid = {})
+  : Processor(name, uuid),
+log_batch_(logger_) {
+  }
+  ~PushGrafanaLokiREST() override = default;
+
+  EXTENSIONAPI static constexpr auto Url = 
core::PropertyDefinitionBuilder<>::createProperty("Url")
+.withDescription("Url of the Grafana Loki server. For example 
http://localhost:3100/.")
+.isRequired(true)
+.build();
+  EXTENSIONAPI static constexpr auto StreamLabels = 
core::PropertyDefinitionBuilder<>::createProperty("Stream Labels")
+.withDescription("Comma separated list of <key>=<value> labels to be sent 
as stream labels.")
+.isRequired(true)
+.build();
+  EXTENSIONAPI static constexpr auto LogLineMetadataAttributes = 
core::PropertyDefinitionBuilder<>::createProperty("Log Line Metadata 
Attributes")
+.withDescription("Comma separated list of attributes to be sent as log 
line metadata for a log line.")
+.build();
+  EXTENSIONAPI static constexpr auto TenantID = 
core::PropertyDefinitionBuilder<>::createProperty("Tenant ID")
+.withDescription("The tenant ID used by default to push logs to Grafana 
Loki. If omitted or empty it assumes Grafana Loki is running in single-tenant 
mode and no X-Scope-OrgID header is sent.")
+.build();
+  EXTENSIONAPI static constexpr auto MaxBatchSize = 
core::PropertyDefinitionBuilder<>::createProperty("Max Batch Size")
+.withDescription("The maximum number of flow files to process at a time. 
If not set, or set to 0, all FlowFiles will be processed at once.")
+.withPropertyType(core::StandardPropertyTypes::UNSIGNED_LONG_TYPE)
+.withDefaultValue("100")
+.build();
+  EXTENSIONAPI static constexpr auto LogLineBatchWait = 
core::PropertyDefinitionBuilder<>::createProperty("Log Line Batch Wait")
+.withDescription("Time to wait before sending a log line batch to Grafana 
Loki, full or not. If this property and Log Line Batch Size are both unset, "
+ "the log batch of the current trigger will be sent 
immediately.")
+.withPropertyType(core::StandardPropertyTypes::TIME_PERIOD_TYPE)
+.build();
+  EXTENSIONAPI static constexpr auto LogLineBatchSize = 
core::PropertyDefinitionBuilder<>::createProperty("Log Line Batch Size")
+.withDescription("Number of log lines to send in a batch to Loki. If this 
property and Log Line Batch Wait are both unset, "
+ "the log batch of the current trigger will be sent 
immediately.")
+.withPropertyType(core::StandardPropertyTypes::UNSIGNED_INT_TYPE)
+.build();
+  EXTENSIONAPI static constexpr auto ConnectTimeout = 
core::PropertyDefinitionBuilder<>::createProperty("Connection Timeout")
+.withDescription("Max wait time for connection to the Grafana Loki 
service.")
+.withPropertyType(core::StandardPropertyTypes::TIME_PERIOD_TYPE)
+.withDefaultValue("5 s")
+.isRequired(true)
+.build();
+  EXTENSIONAPI static constexpr auto ReadTimeout = 
core::PropertyDefinitionBuilder<>::createProperty("Read Timeout")
+.withDescription("Max wait time for response from remote 

Re: [PR] MINIFICPP-2217 - Implement jolt processor shift operation [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


szaszm commented on code in PR #1692:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1692#discussion_r1452196886


##
extensions/standard-processors/utils/JoltUtils.h:
##
@@ -0,0 +1,205 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "logging/Logger.h"
+#include "utils/gsl.h"
+#include "rapidjson/document.h"
+#include "utils/expected.h"
+#include "utils/StringUtils.h"
+
+namespace org::apache::nifi::minifi::utils::jolt {
+
+class Spec {
+ public:
+  using It = std::string_view::const_iterator;
+
+  struct Context {
+   public:
+const Context* parent{nullptr};
+
+std::string path() const {
+  std::string res;
+  if (parent) {
+res = parent->path();
+  }
+  res.append("/").append(matches.at(0));
+  return res;
+}
+
+const Context* find(size_t idx) const {
+  if (idx == 0) return this;
+  if (parent) return parent->find(idx - 1);
+  return nullptr;
+}
+
+::gsl::final_action> 
log(std::function)> on_enter, 
std::function)> on_exit) const {

Review Comment:
   Since this is all in the header anyway, this could be done without 
`std::function`, just using the function objects themselves directly, by making 
this a template.
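A minimal sketch of the suggestion: take the callables as deduced template parameters so no `std::function` type erasure is involved. The `final_action` below is a tiny stand-in for `gsl::final_action`, and `scoped_log` is an illustrative name, not the Spec::Context API.

```cpp
#include <utility>

// Stand-in for gsl::final_action: runs the stored callable on destruction.
template <typename F>
class final_action {
 public:
  explicit final_action(F f) : f_(std::move(f)) {}
  final_action(const final_action&) = delete;
  final_action& operator=(const final_action&) = delete;
  ~final_action() { f_(); }
 private:
  F f_;
};

// The enter/exit callables are template parameters, so lambdas are stored
// directly with no std::function allocation or indirection.
template <typename OnEnter, typename OnExit>
auto scoped_log(OnEnter on_enter, OnExit on_exit) {
  on_enter();                                       // runs immediately
  return final_action<OnExit>(std::move(on_exit));  // runs at scope exit
}
```

The exit callback fires when the returned guard leaves scope, mirroring the enter/exit logging pair in the reviewed code.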



##
extensions/standard-processors/processors/JoltTransformJSON.h:
##
@@ -0,0 +1,91 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/PropertyDefinition.h"
+#include "core/PropertyDefinitionBuilder.h"
+#include "utils/Enum.h"
+#include "utils/Searcher.h"
+#include "../utils/JoltUtils.h"
+
+namespace org::apache::nifi::minifi::processors::jolt_transform_json {
+enum class JoltTransform {
+  Shift
+};
+}  // namespace org::apache::nifi::minifi::processors::jolt_transform_json
+
+namespace org::apache::nifi::minifi::processors {
+
+class JoltTransformJSON : public core::Processor {
+ public:
+  explicit JoltTransformJSON(std::string_view name, const utils::Identifier& 
uuid = {})
+  : Processor(name, uuid) {}
+
+
+  EXTENSIONAPI static constexpr const char* Description = "Applies a list of 
Jolt specifications to the flowfile JSON payload. A new FlowFile is created "
+  "with transformed content and is routed to the 'success' relationship. 
If the JSON transform "
+  "fails, the original FlowFile is routed to the 'failure' relationship.";
+
+  EXTENSIONAPI static constexpr auto JoltTransform = 
core::PropertyDefinitionBuilder<magic_enum::enum_count<jolt_transform_json::JoltTransform>()>::createProperty("Jolt Transformation DSL")
+  .withDescription("Specifies the Jolt Transformation that should be used 
with the provided specification.")
+  
.withDefaultValue(magic_enum::enum_name(jolt_transform_json::JoltTransform::Shift))
+  
.withAllowedValues(magic_enum::enum_names<jolt_transform_json::JoltTransform>())
+  .isRequired(true)
+  .build();
+
+  EXTENSIONAPI static constexpr auto JoltSpecification = 
core::PropertyDefinitionBuilder<>::createProperty("Jolt Specification")
+  .withDescription("Jolt Specification for transformation of JSON data. 
The value for this property may be the text of a Jolt specification "
+  "or the path to a file containing a Jolt specification. 'Jolt 
Specification' must be set, or "
+  "the value is ignored if the Jolt Sort Transformation is selected.")
+  

Re: [PR] MINIFICPP-62 Add SplitText processor [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


szaszm commented on code in PR #1682:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1682#discussion_r1452130248


##
extensions/standard-processors/processors/SplitText.cpp:
##
@@ -0,0 +1,378 @@
+/**
+ * @file SplitText.cpp
+ * SplitText class implementation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "SplitText.h"
+#include "core/ProcessContext.h"
+#include "core/ProcessSession.h"
+#include "core/Resource.h"
+#include "core/FlowFile.h"
+#include "utils/gsl.h"
+#include "utils/ProcessorConfigUtils.h"
+
+namespace org::apache::nifi::minifi::processors {
+
+namespace detail {
+
+LineReader::LineReader(const std::shared_ptr<io::InputStream>& stream)
+: stream_(stream) {
+  if (!stream_ || stream_->size() == 0) {
+state_ = StreamReadState::EndOfStream;
+  }
+}
+
+uint8_t LineReader::getEndLineSize(size_t newline_index) {

Review Comment:
   Minor, but I'd call the argument `newline_offset` or `newline_position`. My 
first thought when reading the name was that this refers to whether this is the 
first, second, etc. line ending in the buffer, not its position in the buffer.



##
extensions/standard-processors/processors/SplitText.h:
##
@@ -0,0 +1,234 @@
+/**
+ * @file SplitText.h
+ * SplitText class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#pragma once
+
+#include 
+#include 
+#include 
+#include 
+
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/PropertyDefinitionBuilder.h"
+#include "core/PropertyDefinition.h"
+#include "core/PropertyType.h"
+#include "core/RelationshipDefinition.h"
+#include "FlowFileRecord.h"
+#include "utils/Export.h"
+#include "utils/expected.h"
+
+namespace org::apache::nifi::minifi::processors {
+
+struct SplitTextConfiguration {
+  uint64_t line_split_count = 0;
+  std::optional<uint64_t> maximum_fragment_size;
+  uint64_t header_line_count = 0;
+  std::optional<std::string> header_line_marker_characters;
+  bool remove_trailing_new_lines = true;
+};
+
+namespace detail {

Review Comment:
   Most classes in this namespace are not referred to anywhere other than the 
SplitText implementation. I think they should be moved to the .cpp file.



##
extensions/standard-processors/processors/SplitText.h:
##
@@ -0,0 +1,234 @@
+/**
+ * @file SplitText.h
+ * SplitText class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#pragma once
+
+#include 
+#include 
+#include 
+#include 
+
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/PropertyDefinitionBuilder.h"
+#include "core/PropertyDefinition.h"
+#include "core/PropertyType.h"
+#include "core/RelationshipDefinition.h"
+#include "FlowFileRecord.h"

Re: [PR] MINIFICPP-2261 Add processor for pushing logs to Grafana Loki through REST API [nifi-minifi-cpp]

2024-01-15 Thread via GitHub


adamdebreceni commented on code in PR #1695:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1695#discussion_r1452092124


##
extensions/grafana-loki/PushGrafanaLokiREST.h:
##
@@ -0,0 +1,178 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#pragma once
+
+#include 
+#include 
+#include 
+#include 
+
+#include "controllers/SSLContextService.h"
+#include "core/Processor.h"
+#include "core/PropertyDefinition.h"
+#include "core/PropertyDefinitionBuilder.h"
+#include "core/PropertyType.h"
+#include "core/RelationshipDefinition.h"
+#include "client/HTTPClient.h"
+#include "core/StateManager.h"
+
+namespace org::apache::nifi::minifi::extensions::grafana::loki {
+
+class PushGrafanaLokiREST : public core::Processor {
+ public:
+  EXTENSIONAPI static constexpr const char* Description = "A Grafana Loki push 
processor that uses the Grafana Loki REST API. The processor expects each flow 
file to contain a single log line to be "
+  "pushed to Grafana 
Loki, therefore it is usually used together with the TailFile processor.";
+
+  explicit PushGrafanaLokiREST(const std::string& name, const 
utils::Identifier& uuid = {})
+  : Processor(name, uuid),
+log_batch_(logger_) {
+  }
+  ~PushGrafanaLokiREST() override = default;
+
+  EXTENSIONAPI static constexpr auto Url = 
core::PropertyDefinitionBuilder<>::createProperty("Url")
+.withDescription("Url of the Grafana Loki server. For example http://localhost:3100/.")
+.isRequired(true)
+.build();
+  EXTENSIONAPI static constexpr auto StreamLabels = 
core::PropertyDefinitionBuilder<>::createProperty("Stream Labels")
+.withDescription("Comma separated list of = labels to be sent 
as stream labels.")
+.isRequired(true)
+.build();
+  EXTENSIONAPI static constexpr auto LogLineMetadataAttributes = 
core::PropertyDefinitionBuilder<>::createProperty("Log Line Metadata 
Attributes")
+.withDescription("Comma separated list of attributes to be sent as log 
line metadata for a log line.")
+.build();
+  EXTENSIONAPI static constexpr auto TenantID = 
core::PropertyDefinitionBuilder<>::createProperty("Tenant ID")
+.withDescription("The tenant ID used by default to push logs to Grafana 
Loki. If omitted or empty it assumes Grafana Loki is running in single-tenant 
mode and no X-Scope-OrgID header is sent.")
+.build();
+  EXTENSIONAPI static constexpr auto MaxBatchSize = 
core::PropertyDefinitionBuilder<>::createProperty("Max Batch Size")
+.withDescription("The maximum number of flow files to process at a time. 
If not set, or set to 0, all FlowFiles will be processed at once.")
+.withPropertyType(core::StandardPropertyTypes::UNSIGNED_LONG_TYPE)
+.withDefaultValue("100")
+.build();
+  EXTENSIONAPI static constexpr auto LogLineBatchWait = 
core::PropertyDefinitionBuilder<>::createProperty("Log Line Batch Wait")
+.withDescription("Time to wait before sending a log line batch to Grafana 
Loki, full or not. If this property and Log Line Batch Size are both unset, "
+ "the log batch of the current trigger will be sent 
immediately.")
+.withPropertyType(core::StandardPropertyTypes::TIME_PERIOD_TYPE)
+.build();
+  EXTENSIONAPI static constexpr auto LogLineBatchSize = 
core::PropertyDefinitionBuilder<>::createProperty("Log Line Batch Size")
+.withDescription("Number of log lines to send in a batch to Loki. If this 
property and Log Line Batch Wait are both unset, "
+ "the log batch of the current trigger will be sent 
immediately.")
+.withPropertyType(core::StandardPropertyTypes::UNSIGNED_INT_TYPE)
+.build();
+  EXTENSIONAPI static constexpr auto ConnectTimeout = 
core::PropertyDefinitionBuilder<>::createProperty("Connection Timeout")
+.withDescription("Max wait time for connection to the Grafana Loki 
service.")
+.withPropertyType(core::StandardPropertyTypes::TIME_PERIOD_TYPE)
+.withDefaultValue("5 s")
+.isRequired(true)
+.build();
+  EXTENSIONAPI static constexpr auto ReadTimeout = 
core::PropertyDefinitionBuilder<>::createProperty("Read Timeout")
+.withDescription("Max wait time for response from remote 

[jira] [Commented] (NIFI-12594) ListS3 minimum object age filter not observed when entity state tracking is used

2024-01-15 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17806696#comment-17806696
 ] 

ASF subversion and git services commented on NIFI-12594:


Commit 3ebad40fae458db3fe664ddd6738b770a26289c8 in nifi's branch 
refs/heads/main from p-kimberley
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=3ebad40fae ]

NIFI-12594: ListS3 - observe min/max object age when entity state tracking is 
used

This closes #8231.

Signed-off-by: Peter Turcsanyi 


> ListS3 minimum object age filter not observed when entity state tracking is 
> used
> 
>
> Key: NIFI-12594
> URL: https://issues.apache.org/jira/browse/NIFI-12594
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.24.0, 2.0.0
> Environment: Docker, on-prem S3
>Reporter: Peter Kimberley
>Priority: Major
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When ListS3 is configured to use the {{Tracking Entities}} listing strategy, 
> the following is observed:
>  # Configure ListS3 with a Minimum Object Age of {{1 hour.}} Ensure processor 
> is stopped.
>  # Create a new FlowFile with GenerateFlowFile and run once
>  # Put the FlowFile to an S3 bucket with PutS3
>  # Open ListS3 configuration
>  # Click Verify. UI reports: ??Successfully listed contents of bucket <bucket name>, finding 0 objects matching the filter.??
>  # Run ListS3 once. Flowfile is retrieved, even though the 1 hour interval 
> has not yet elapsed.
> The issue is the ListS3 {{Minimum Object Age}} property is not being observed 
> when using the Tracking Entities listing strategy. When using {{Tracking 
> Timestamps}}, the processor behaves as expected.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

