[jira] [Created] (NIFI-12223) Process group reports aggregate metrics for components within it

2023-10-12 Thread Eric Secules (Jira)
Eric Secules created NIFI-12223:
---

 Summary: Process group reports aggregate metrics for components 
within it
 Key: NIFI-12223
 URL: https://issues.apache.org/jira/browse/NIFI-12223
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Core Framework
Reporter: Eric Secules


The metrics reported by process groups should contain aggregated metrics from 
all the processors and connections within them, so that one can reduce the 
number of metrics sent to Prometheus while still getting a high-level view of 
the information from processor and connection metrics. For example: the 
average, min, and max time a flow file spends queued in the process group, or 
the total amount of CPU time spent within processors in the process group over 
the last 5 minutes.
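A rough sketch of the kind of roll-up this would enable. Everything below is illustrative only: the record and method names are hypothetical, not NiFi's actual API. The idea is that the group exposes one aggregated series (min/max overall, count-weighted average) instead of one series per connection.

```java
import java.util.List;

public class GroupMetricsSketch {
    // Simple holder for one connection's queued-time stats (illustrative only).
    record QueuedTimeStats(long minMillis, long maxMillis, double avgMillis, long flowFileCount) {}

    // Aggregate min/max overall, plus a count-weighted average across connections.
    static QueuedTimeStats aggregate(List<QueuedTimeStats> perConnection) {
        long min = Long.MAX_VALUE;
        long max = 0;
        long count = 0;
        double weightedSum = 0;
        for (QueuedTimeStats s : perConnection) {
            min = Math.min(min, s.minMillis());
            max = Math.max(max, s.maxMillis());
            weightedSum += s.avgMillis() * s.flowFileCount();
            count += s.flowFileCount();
        }
        double avg = count == 0 ? 0 : weightedSum / count;
        return new QueuedTimeStats(count == 0 ? 0 : min, max, avg, count);
    }

    public static void main(String[] args) {
        QueuedTimeStats agg = aggregate(List.of(
                new QueuedTimeStats(10, 100, 50.0, 4),
                new QueuedTimeStats(5, 200, 20.0, 6)));
        // One series per process group instead of one per connection.
        System.out.println(agg);
    }
}
```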



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12222) NPE in StandardVersionedComponentSynchronizer if PG references missing param context

2023-10-12 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-12222:
---
Status: Patch Available  (was: Open)

> NPE in StandardVersionedComponentSynchronizer if PG references missing param 
> context
> 
>
> Key: NIFI-12222
> URL: https://issues.apache.org/jira/browse/NIFI-12222
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.23.2
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> java.lang.NullPointerException: null
>   at 
> org.apache.nifi.flow.synchronization.StandardVersionedComponentSynchronizer.updateParameterContext(StandardVersionedComponentSynchronizer.java:1963)
>   at 
> org.apache.nifi.flow.synchronization.StandardVersionedComponentSynchronizer.synchronize(StandardVersionedComponentSynchronizer.java:301)
>   at 
> org.apache.nifi.flow.synchronization.StandardVersionedComponentSynchronizer.addProcessGroup(StandardVersionedComponentSynchronizer.java:1203)
>   at 
> org.apache.nifi.flow.synchronization.StandardVersionedComponentSynchronizer.synchronizeChildGroups(StandardVersionedComponentSynchronizer.java:545)
>   at 
> org.apache.nifi.flow.synchronization.StandardVersionedComponentSynchronizer.synchronize(StandardVersionedComponentSynchronizer.java:431)
>   at 
> org.apache.nifi.flow.synchronization.StandardVersionedComponentSynchronizer.lambda$synchronize$0(StandardVersionedComponentSynchronizer.java:262)
>   at 
> org.apache.nifi.controller.flow.AbstractFlowManager.withParameterContextResolution(AbstractFlowManager.java:550)
>   at 
> org.apache.nifi.flow.synchronization.StandardVersionedComponentSynchronizer.synchronize(StandardVersionedComponentSynchronizer.java:257)
>   at 
> org.apache.nifi.groups.StandardProcessGroup.synchronizeFlow(StandardProcessGroup.java:3986)
>   at 
> org.apache.nifi.groups.StandardProcessGroup.updateFlow(StandardProcessGroup.java:3966)
>   at 
> org.apache.nifi.web.dao.impl.StandardProcessGroupDAO.updateProcessGroupFlow(StandardProcessGroupDAO.java:435)
>   at 
> org.apache.nifi.web.dao.impl.StandardProcessGroupDAO$$FastClassBySpringCGLIB$$10a99b47.invoke()
>  {code}





[PR] NIFI-12222 Protect against missing parameter context when syncing a P… [nifi]

2023-10-12 Thread via GitHub


bbende opened a new pull request, #7877:
URL: https://github.com/apache/nifi/pull/7877

   …G in component synchronizer
   
   # Summary
   
   [NIFI-12222](https://issues.apache.org/jira/browse/NIFI-12222)
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [X] Pull Request based on current revision of the `main` branch
   - [X] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Run unit tests for the synchronizer.
   
   ### Build
   
   - [X] Build completed using `mvn clean install -P contrib-check`
 - [X] JDK 21
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (NIFI-12222) NPE in StandardVersionedComponentSynchronizer if PG references missing param context

2023-10-12 Thread Bryan Bende (Jira)
Bryan Bende created NIFI-12222:
--

 Summary: NPE in StandardVersionedComponentSynchronizer if PG 
references missing param context
 Key: NIFI-12222
 URL: https://issues.apache.org/jira/browse/NIFI-12222
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.23.2
Reporter: Bryan Bende
Assignee: Bryan Bende


{code:java}
java.lang.NullPointerException: null
  at 
org.apache.nifi.flow.synchronization.StandardVersionedComponentSynchronizer.updateParameterContext(StandardVersionedComponentSynchronizer.java:1963)
  at 
org.apache.nifi.flow.synchronization.StandardVersionedComponentSynchronizer.synchronize(StandardVersionedComponentSynchronizer.java:301)
  at 
org.apache.nifi.flow.synchronization.StandardVersionedComponentSynchronizer.addProcessGroup(StandardVersionedComponentSynchronizer.java:1203)
  at 
org.apache.nifi.flow.synchronization.StandardVersionedComponentSynchronizer.synchronizeChildGroups(StandardVersionedComponentSynchronizer.java:545)
  at 
org.apache.nifi.flow.synchronization.StandardVersionedComponentSynchronizer.synchronize(StandardVersionedComponentSynchronizer.java:431)
  at 
org.apache.nifi.flow.synchronization.StandardVersionedComponentSynchronizer.lambda$synchronize$0(StandardVersionedComponentSynchronizer.java:262)
  at 
org.apache.nifi.controller.flow.AbstractFlowManager.withParameterContextResolution(AbstractFlowManager.java:550)
  at 
org.apache.nifi.flow.synchronization.StandardVersionedComponentSynchronizer.synchronize(StandardVersionedComponentSynchronizer.java:257)
  at 
org.apache.nifi.groups.StandardProcessGroup.synchronizeFlow(StandardProcessGroup.java:3986)
  at 
org.apache.nifi.groups.StandardProcessGroup.updateFlow(StandardProcessGroup.java:3966)
  at 
org.apache.nifi.web.dao.impl.StandardProcessGroupDAO.updateProcessGroupFlow(StandardProcessGroupDAO.java:435)
  at 
org.apache.nifi.web.dao.impl.StandardProcessGroupDAO$$FastClassBySpringCGLIB$$10a99b47.invoke()
 {code}
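A minimal sketch of the kind of guard the fix implies: when a versioned process group references a parameter context that cannot be resolved, skip the update instead of dereferencing null. The class, method, and map shapes below are hypothetical stand-ins, not NiFi's actual synchronizer API.

```java
import java.util.Map;

public class ParamContextGuardSketch {
    // Returns the context name actually applied, or null when nothing was done.
    static String updateParameterContext(String referencedName, Map<String, String> knownContexts) {
        if (referencedName == null) {
            return null; // the group references no parameter context at all
        }
        String contextId = knownContexts.get(referencedName);
        if (contextId == null) {
            // Previously this fell through and threw an NPE; now we log and skip.
            System.out.println("Parameter context '" + referencedName + "' not found; skipping");
            return null;
        }
        return referencedName;
    }

    public static void main(String[] args) {
        Map<String, String> contexts = Map.of("prod-params", "ctx-1");
        System.out.println(updateParameterContext("prod-params", contexts)); // applied
        System.out.println(updateParameterContext("missing", contexts));     // skipped, no NPE
    }
}
```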





[jira] [Updated] (NIFI-12221) Make heartbeat responses more lenient in some cases

2023-10-12 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-12221:
--
Status: Patch Available  (was: Open)

> Make heartbeat responses more lenient in some cases
> ---
>
> Key: NIFI-12221
> URL: https://issues.apache.org/jira/browse/NIFI-12221
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 2.latest
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When a heartbeat is received by the Cluster Coordinator, it responds based on 
> the node's current connection state. In the case of a disconnected node, it 
> either notifies the node that it is disconnected so that it will stop 
> heartbeating, or it requests the node to reconnect to the cluster.
> Due to changes that were made in 1.16, as well as a few additional changes 
> that have been made since, we can be much more lenient about when we ask the 
> node to reconnect vs. disconnect. For example, if a node was disconnected due 
> to not handling an update request, we previously needed to request that the 
> node disconnect again. However, now we can ask the node to reconnect, as it 
> may well be able to reconcile any differences and rejoin.
> We even currently request that a node disconnect if receiving a heartbeat 
> from a node whose last state was "Disconnected because Node was Shutdown". We 
> should definitely be more lenient in this case, as it's occasionally causing 
> System Test failures (e.g., 
> https://github.com/apache/nifi/actions/runs/6498488206).





[PR] NIFI-12221: Be more lenient about which Disconnection Codes we allow … [nifi]

2023-10-12 Thread via GitHub


markap14 opened a new pull request, #7876:
URL: https://github.com/apache/nifi/pull/7876

   …a node to be reconnected to a cluster vs. when we notify the node to 
disconnect again. Also updated the timeout for OffloadIT because it 
occasionally times out while running properly.
   
   # Summary
   
   [NIFI-0](https://issues.apache.org/jira/browse/NIFI-0)
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [ ] Pull Request based on current revision of the `main` branch
   - [ ] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [ ] Build completed using `mvn clean install -P contrib-check`
 - [ ] JDK 21
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   





[jira] [Created] (NIFI-12221) Make heartbeat responses more lenient in some cases

2023-10-12 Thread Mark Payne (Jira)
Mark Payne created NIFI-12221:
-

 Summary: Make heartbeat responses more lenient in some cases
 Key: NIFI-12221
 URL: https://issues.apache.org/jira/browse/NIFI-12221
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 2.latest


When a heartbeat is received by the Cluster Coordinator, it responds based on 
the node's current connection state. In the case of a disconnected node, it 
either notifies the node that it is disconnected so that it will stop heartbeating, 
or it requests the node to reconnect to the cluster.

Due to changes that were made in 1.16, as well as a few additional changes that 
have been made since, we can be much more lenient about when we ask the node to 
reconnect vs. disconnect. For example, if a node was disconnected due to not 
handling an update request, we previously needed to request that the node 
disconnect again. However, now we can ask the node to reconnect, as it may well 
be able to reconcile any differences and rejoin.

We even currently request that a node disconnect if receiving a heartbeat from 
a node whose last state was "Disconnected because Node was Shutdown". We should 
definitely be more lenient in this case, as it's occasionally causing System 
Test failures (e.g., 
https://github.com/apache/nifi/actions/runs/6498488206).
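The leniency change can be sketched roughly as a mapping from the node's last disconnection reason to the coordinator's heartbeat response. The enum values and the reconnect/disconnect split below are illustrative assumptions, not NiFi's actual DisconnectionCode handling.

```java
public class HeartbeatResponseSketch {
    // Hypothetical subset of disconnection reasons (not NiFi's real enum).
    enum DisconnectionCode { NODE_SHUTDOWN, FAILED_TO_SERVICE_REQUEST, BLOCKED_BY_FIREWALL, UNKNOWN }
    enum Response { REQUEST_RECONNECT, REQUEST_DISCONNECT }

    // Before the change, most codes led to REQUEST_DISCONNECT; afterwards only
    // situations the node genuinely cannot reconcile (illustrative set) do.
    static Response respondToHeartbeat(DisconnectionCode lastCode) {
        switch (lastCode) {
            case NODE_SHUTDOWN:
            case FAILED_TO_SERVICE_REQUEST:
                return Response.REQUEST_RECONNECT; // node may reconcile differences and rejoin
            case BLOCKED_BY_FIREWALL:
                return Response.REQUEST_DISCONNECT; // cannot rejoin regardless of retries
            default:
                return Response.REQUEST_RECONNECT; // lenient by default
        }
    }

    public static void main(String[] args) {
        System.out.println(respondToHeartbeat(DisconnectionCode.NODE_SHUTDOWN));
    }
}
```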





Re: [PR] NIFI-12122 Fixed bug where parameter context descriptions were not loaded on NiFi startup and overwritten as empty [nifi]

2023-10-12 Thread via GitHub


Hexoplon commented on PR #7787:
URL: https://github.com/apache/nifi/pull/7787#issuecomment-1760333692

   @exceptionfactory Great! It's been rebased now, and the test modified to be 
similar to the other modified tests





[jira] [Commented] (NIFI-12191) MacOS GitHub Runner does not correctly run Docker (Colima)

2023-10-12 Thread Chris Sampson (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17774687#comment-17774687
 ] 

Chris Sampson commented on NIFI-12191:
--

It might simply be a docker/colima config problem; see 
https://golang.testcontainers.org/system_requirements/using_colima/ for some 
suggestions.

Other things to consider are whether colima needs to be configured to use more 
resources within the runner 
(https://github.com/abiosoft/colima#customizing-the-vm), or to use the 
containerd runtime (https://github.com/abiosoft/colima#containerd)

> MacOS GitHub Runner does not correctly run Docker (Colima)
> --
>
> Key: NIFI-12191
> URL: https://issues.apache.org/jira/browse/NIFI-12191
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Chris Sampson
>Priority: Major
> Fix For: 2.latest
>
>
> NIFI-12177 introduced the {{integration-tests}} and {{docker-tests}} GitHub 
> Action Workflows, which run fine on {{Ubuntu}} but cannot run as 
> successfully on {{MacOS}}.
> This is because the MacOS runner **does not** contain Docker by default, and 
> installing it via {{brew install docker}} actually installs 
> [Colima|https://github.com/abiosoft/colima]. While this allows Docker Images 
> to be built and (to a degree) run as Containers, it has been noted that 
> Testcontainers **do not** work correctly with such a setup, and some of the 
> Docker Image Tests (e.g. for NiFi - see NIFI-12177) fail because containers 
> don't run in a consistent manner compared to Ubuntu with `docker` installed.
> This ticket is to investigate and fix the use of Docker within the MacOS 
> based GitHub runner for these workflows.





[PR] NIFI-12198 Add API and CLI commands to import reporting task snapshots [nifi]

2023-10-12 Thread via GitHub


bbende opened a new pull request, #7875:
URL: https://github.com/apache/nifi/pull/7875

   # Summary
   
   [NIFI-12198](https://issues.apache.org/jira/browse/NIFI-12198)
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [X] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [X] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [X] Pull Request based on current revision of the `main` branch
   - [X] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Use CLI commands to export reporting tasks and then use that as input to the 
new command:
   ```
   nifi export-reporting-tasks -o /tmp/reporting-tasks.json
   nifi import-reporting-tasks -i /tmp/reporting-tasks.json
   ```
   
   ### Build
   
   - [X] Build completed using `mvn clean install -P contrib-check`
 - [X] JDK 21
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   





[jira] [Updated] (NIFI-12198) Add API to import a VersionedReportingTaskSnapshot

2023-10-12 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-12198:
---
Status: Patch Available  (was: Open)

> Add API to import a VersionedReportingTaskSnapshot
> --
>
> Key: NIFI-12198
> URL: https://issues.apache.org/jira/browse/NIFI-12198
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is the corresponding import API to go along with the export API that was 
> added in NIFI-12186. We should be able to POST a previously exported 
> VersionedReportingTaskSnapshot to a given NiFi instance to create any 
> reporting tasks and controller services provided in the snapshot. We should 
> also support uploading a file. The import process should be additive; we are 
> not trying to sync existing components, just create whatever is in the 
> snapshot.
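The additive semantics described above amount to creating everything in the snapshot while leaving existing components untouched (no sync, no deletes). A minimal sketch, with task names as plain strings and all types hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

public class AdditiveImportSketch {
    // Additive import: keep existing reporting tasks, create every task in the snapshot.
    static List<String> importSnapshot(List<String> existingTasks, List<String> snapshotTasks) {
        List<String> result = new ArrayList<>(existingTasks); // existing components are untouched
        result.addAll(snapshotTasks);                         // everything in the snapshot is created
        return result;
    }

    public static void main(String[] args) {
        List<String> after = importSnapshot(
                List.of("ExistingTask"),
                List.of("SiteToSiteReportingTask"));
        System.out.println(after);
    }
}
```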





[jira] [Updated] (NIFI-12177) Docker builds (dockermaven) should include testing of the built images

2023-10-12 Thread Chris Sampson (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Sampson updated NIFI-12177:
-
Description: 
The NiFi, NiFi Toolkit & NiFi Registry {{dockermaven}} builds currently run an 
{{integration_test}} script once the image has been built in order to confirm a 
container can be started and the component can be contacted.

The same (or similar) approaches should be considered for:
* MiNiFi
* MiNiFi C2

This would help avoid situations such as NIFI-12175

Also, the NiFi `dockermaven` test script fails if run with `macos-latest` in 
GitHub Actions (NIFI-12178), likely due to an incompatibility with the use of 
Colima to get Docker working within the runner; an alternative should be found 
(to the test assertions or the GitHub workflow - see NIFI-12191, as this is 
possibly an issue with the use of Colima)

  was:
The NiFi & NiFi Registry {{dockermaven}} builds currently run an 
{{integration_test}} script once the image has been built in order to confirm a 
container can be started and the component can be contacted.

The same (or similar) approaches should be considered for:
* NiFi Toolkit
* MiNiFi
* MiNiFi C2

This would help avoid situations such as NIFI-12175

Also, the NiFi `dockermaven` test script fails if run with `macos-latest` in 
GitHub Actions (NIFI-12178), likely due to an incompatibility with the use of 
Colima to get Docker working within the runner; an alternative should be found 
(to the test assertions or the GitHub workflow - see NIFI-12191, as this is 
possibly an issue with the use of Colima)


> Docker builds (dockermaven) should include testing of the built images
> --
>
> Key: NIFI-12177
> URL: https://issues.apache.org/jira/browse/NIFI-12177
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Chris Sampson
>Priority: Major
> Fix For: 1.latest, 2.latest
>
>
> The NiFi, NiFi Toolkit & NiFi Registry {{dockermaven}} builds currently run 
> an {{integration_test}} script once the image has been built in order to 
> confirm a container can be started and the component can be contacted.
> The same (or similar) approaches should be considered for:
> * MiNiFi
> * MiNiFi C2
> This would help avoid situations such as NIFI-12175
> Also, the NiFi `dockermaven` test script fails if run with `macos-latest` in 
> GitHub Actions (NIFI-12178), likely due to an incompatibility with the use of 
> Colima to get Docker working within the runner; an alternative should be found 
> (to the test assertions or the GitHub workflow - see NIFI-12191, as this is 
> possibly an issue with the use of Colima)





[jira] [Commented] (NIFI-10841) Enable dockermaven builds for arm64 architectures

2023-10-12 Thread Chris Sampson (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17774679#comment-17774679
 ] 

Chris Sampson commented on NIFI-10841:
--

[~kdoran] I think this is fixed now, do you agree? I've recently been 
successfully building images on arm64 & amd64 based machines as part of 
NIFI-12175 & NIFI-12178.

The only issue I've seen has been in running the {{apache/nifi}} image as part 
of a {{macos}} runner in GitHub, but the build itself seems to work fine (NIFI-12191)

> Enable dockermaven builds for arm64 architectures
> -
>
> Key: NIFI-10841
> URL: https://issues.apache.org/jira/browse/NIFI-10841
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.18.0
>Reporter: Chris Sampson
>Priority: Major
>
> The dockermaven builds (e.g. nifi, nifi-toolkit, nifi-registry, minifi, 
> minifi-c2) currently do not work on arm64 architecture machines.
> This is due to a known limitation of the Spotify docker build plugin used in 
> the maven profiles.
> An alternative (e.g. fabric8) should allow builds on both amd64 and arm64 
> architectures (and possibly others).
> NIFI-9177 allowed the docker image (e.g. produced using the dockerhub 
> modules) to run on arm64 machines, so that's not a problem to fix, just the 
> actual dockermaven module builds (and tests).





[jira] [Updated] (NIFI-12177) Docker builds (dockermaven) should include testing of the built images

2023-10-12 Thread Chris Sampson (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Sampson updated NIFI-12177:
-
Description: 
The NiFi & NiFi Registry {{dockermaven}} builds currently run an 
{{integration_test}} script once the image has been built in order to confirm a 
container can be started and the component can be contacted.

The same (or similar) approaches should be considered for:
* NiFi Toolkit
* MiNiFi
* MiNiFi C2

This would help avoid situations such as NIFI-12175

Also, the NiFi `dockermaven` test script fails if run with `macos-latest` in 
GitHub Actions (NIFI-12178), likely due to an incompatibility with the use of 
Colima to get Docker working within the runner; an alternative should be found 
(to the test assertions or the GitHub workflow - see NIFI-12191, as this is 
possibly an issue with the use of Colima)

  was:
The NiFi & NiFi Registry {{dockermaven}} builds currently run an 
{{integration_test}} script once the image has been built in order to confirm a 
container can be started and the component can be contacted.

The same (or similar) approaches should be considered for:
* NiFi Toolkit
* MiNiFi
* MiNiFi C2

This would help avoid situations such as NIFI-12175

Also, the NiFi `dockermaven` test script fails if run with `macos-latest` in 
GitHub Actions (NIFI-12178), likely due to an incompatibility with the use of 
Colima to get Docker working within the runner; an alternative should be found 
(to the test assertions or the GitHub workflow).


> Docker builds (dockermaven) should include testing of the built images
> --
>
> Key: NIFI-12177
> URL: https://issues.apache.org/jira/browse/NIFI-12177
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Chris Sampson
>Priority: Major
> Fix For: 1.latest, 2.latest
>
>
> The NiFi & NiFi Registry {{dockermaven}} builds currently run an 
> {{integration_test}} script once the image has been built in order to confirm 
> a container can be started and the component can be contacted.
> The same (or similar) approaches should be considered for:
> * NiFi Toolkit
> * MiNiFi
> * MiNiFi C2
> This would help avoid situations such as NIFI-12175
> Also, the NiFi `dockermaven` test script fails if run with `macos-latest` in 
> GitHub Actions (NIFI-12178), likely due to an incompatibility with the use of 
> Colima to get Docker working within the runner; an alternative should be found 
> (to the test assertions or the GitHub workflow - see NIFI-12191, as this is 
> possibly an issue with the use of Colima)





Re: [PR] NIFI-12178 add integration-tests and docker-tests github actions [nifi]

2023-10-12 Thread via GitHub


ChrisSamo632 commented on PR #7858:
URL: https://github.com/apache/nifi/pull/7858#issuecomment-1760278886

   @exceptionfactory I've rebased from latest `main` as there have been quite a 
few changes over the past few days.
   
   I also noticed I'd missed the `nifi-toolkit` tests from the new 
`docker-tests` workflow, which are failing on `main` due to the `file-manager` 
command no longer being present in the CLI - something I spotted and corrected 
also in [NIFI-12175](https://github.com/apache/nifi/pull/7864)





[jira] [Updated] (NIFI-12175) Update all component Docker Image builds to java 21

2023-10-12 Thread Chris Sampson (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Sampson updated NIFI-12175:
-
Status: Patch Available  (was: In Progress)

> Update all component Docker Image builds to java 21
> ---
>
> Key: NIFI-12175
> URL: https://issues.apache.org/jira/browse/NIFI-12175
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Chris Sampson
>Assignee: Chris Sampson
>Priority: Major
> Fix For: 2.latest
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> All nifi components (nifi, nifi-registry, nifi-toolkit, minifi, minifi-c2) 
> come with dockerhub and dockermaven modules to build Docker Images. The 
> former builds an image using the source code released to the Apache servers, 
> the latter uses the binary files from the component's assembly module once it 
> has been built locally.
> Nifi has already been updated. Nifi Registry's dockermaven has been updated. 
> Other image builds are outstanding:
> * Nifi registry (dockerhub, found in nifi-registry-core/nifi-registry-docker)
> * Nifi toolkit (both)
> * minifi (both)
> * minifi-c2 (both)
> Note that the {{docker}} maven profile must be enabled to trigger the build 
> of the Docker Images during {{install}} (although some builds only need 
> {{package}} instead, confusingly).





[jira] [Assigned] (NIFI-12175) Update all component Docker Image builds to java 21

2023-10-12 Thread Chris Sampson (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Sampson reassigned NIFI-12175:


Assignee: Chris Sampson

> Update all component Docker Image builds to java 21
> ---
>
> Key: NIFI-12175
> URL: https://issues.apache.org/jira/browse/NIFI-12175
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Chris Sampson
>Assignee: Chris Sampson
>Priority: Major
> Fix For: 2.latest
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> All nifi components (nifi, nifi-registry, nifi-toolkit, minifi, minifi-c2) 
> come with dockerhub and dockermaven modules to build Docker Images. The 
> former builds an image using the source code released to the Apache servers, 
> the latter uses the binary files from the component's assembly module once it 
> has been built locally.
> Nifi has already been updated. Nifi Registry's dockermaven has been updated. 
> Other image builds are outstanding:
> * Nifi registry (dockerhub, found in nifi-registry-core/nifi-registry-docker)
> * Nifi toolkit (both)
> * minifi (both)
> * minifi-c2 (both)
> Note that the {{docker}} maven profile must be enabled to trigger the build 
> of the Docker Images during {{install}} (although some builds only need 
> {{package}} instead, confusingly).





[PR] NIFI-12220: Added ability to create Controller Services from migrateProperties. [nifi]

2023-10-12 Thread via GitHub


markap14 opened a new pull request, #7874:
URL: https://github.com/apache/nifi/pull/7874

   Added the ability to get raw property values from PropertyConfiguration instead 
of just effective values. Updated TestRunner to allow for testing these 
migration methods. Eliminated authentication properties from AWS processors and 
migrated all processors to using the Controller Service for authentication. 
Eliminated Proxy properties in all AWS processors and instead made use of the 
Proxy Configuration controller service
   
   # Summary
   
   [NIFI-0](https://issues.apache.org/jira/browse/NIFI-0)
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [ ] Pull Request based on current revision of the `main` branch
   - [ ] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [ ] Build completed using `mvn clean install -P contrib-check`
 - [ ] JDK 21
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   





[jira] [Created] (NIFI-12220) Allow migrating properties to Controller Services

2023-10-12 Thread Mark Payne (Jira)
Mark Payne created NIFI-12220:
-

 Summary: Allow migrating properties to Controller Services
 Key: NIFI-12220
 URL: https://issues.apache.org/jira/browse/NIFI-12220
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework, Extensions
Reporter: Mark Payne
Assignee: Mark Payne
 Fix For: 2.latest


In NIFI-12139, we allowed processors, controller services, and reporting tasks 
to evolve over time by allowing properties to be renamed/removed and allowing 
Relationships to be renamed, split, etc.

However, there's a common pattern that we see that NIFI-12139 doesn't account 
for: the ability to move properties to Controller Services. This happens 
commonly, especially with authentication types of properties. For example, in 
the AWS processors, we started with a "Secret Key" and an "Access Key ID." But 
over time, we needed other authentication methods. Eventually a Controller 
Service was created for performing authentication, and it allows for many 
different authentication options.

But we cannot simply remove the Access Key ID and Secret Key properties. Their 
values need to be transferred to a Controller Service - one that likely doesn't 
exist yet. We need the ability to have the processor create that controller 
service and configure it in its migrateProperties method. And then remove the 
properties from the processor.

The exact same thing has happened with the AWS Processors' proxy configuration. 
And something similar with Kerberos authentication services. And others.

We need to allow for a seamless migration that supports creating Controller 
Services and configuring them, such that we can remove things like Access Key 
ID / Secret Key from the processor configuration and have the processors 
continue to function just as they did before.
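The issue above calls for creating and configuring a Controller Service from within a processor's migrateProperties method. The sketch below is a small, self-contained model of that pattern, not NiFi's actual migration API: the PropertyConfig class, its createControllerService method, and the property and service names ("Access Key ID", "Secret Key", "AWSCredentialsProviderService") are hypothetical stand-ins chosen to mirror the AWS example in the description.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for NiFi's property-migration context (not the real API).
class PropertyConfig {
    final Map<String, String> properties = new HashMap<>();
    final Map<String, Map<String, String>> createdServices = new HashMap<>();
    private int serviceCounter = 0;

    boolean hasProperty(String name) { return properties.containsKey(name); }
    String removeProperty(String name) { return properties.remove(name); }
    void setProperty(String name, String value) { properties.put(name, value); }

    // Models creating a new controller service during migration and returning its id.
    String createControllerService(String type, Map<String, String> serviceProps) {
        String id = "service-" + (++serviceCounter);
        createdServices.put(id, new HashMap<>(serviceProps));
        return id;
    }
}

public class AwsCredentialsMigrationDemo {
    // Sketch of the migration NIFI-12220 describes: move Access Key ID / Secret Key
    // into a newly created credentials service, then reference it from the processor.
    static void migrateProperties(PropertyConfig config) {
        if (config.hasProperty("Access Key ID") || config.hasProperty("Secret Key")) {
            Map<String, String> serviceProps = new HashMap<>();
            serviceProps.put("Access Key ID", config.removeProperty("Access Key ID"));
            serviceProps.put("Secret Key", config.removeProperty("Secret Key"));
            String serviceId =
                config.createControllerService("AWSCredentialsProviderService", serviceProps);
            config.setProperty("AWS Credentials Provider Service", serviceId);
        }
    }

    public static void main(String[] args) {
        PropertyConfig config = new PropertyConfig();
        config.setProperty("Access Key ID", "AKIA-EXAMPLE");
        config.setProperty("Secret Key", "example-secret");
        migrateProperties(config);
        System.out.println(config.properties.containsKey("Access Key ID")); // false
        System.out.println(config.properties.get("AWS Credentials Provider Service")); // service-1
    }
}
```

The key point of the pattern is that the credential values are transferred, not dropped, so the processor keeps functioning exactly as before the upgrade.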



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-12219) Implement Migration from H2 to Xodus for Flow Configuration History

2023-10-12 Thread David Handermann (Jira)
David Handermann created NIFI-12219:
---

 Summary: Implement Migration from H2 to Xodus for Flow 
Configuration History
 Key: NIFI-12219
 URL: https://issues.apache.org/jira/browse/NIFI-12219
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Reporter: David Handermann
Assignee: David Handermann
 Fix For: 1.latest


With the refactoring of Flow Configuration History to use JetBrains Xodus 
instead of H2, migration capabilities should be implemented on the support 
branch for subsequent version 1 releases.

The existing {{nifi-h2}} modules have supported migrating between H2 versions, 
so some of these components will enable automated export from H2 and import 
into Xodus when upgrading.

Implementing migration support will also streamline upgrades from NiFi 1 to 
NiFi 2.





Re: [PR] NIFI-12205: Moved loading of Python dependencies into background thre… [nifi]

2023-10-12 Thread via GitHub


markap14 commented on PR #7863:
URL: https://github.com/apache/nifi/pull/7863#issuecomment-1760203161

   Thanks @exceptionfactory, I rebased.





[jira] [Updated] (NIFI-12206) Refactor Flow Configuration History from H2 to JetBrains Xodus

2023-10-12 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-12206:
--
Fix Version/s: 2.0.0
   (was: 2.latest)
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Refactor Flow Configuration History from H2 to JetBrains Xodus
> --
>
> Key: NIFI-12206
> URL: https://issues.apache.org/jira/browse/NIFI-12206
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The [H2 Database Engine|https://h2database.com/html/main.html] has provided 
> local persistent storage for several types of information across Apache NiFi 
> versions. With multiple refactoring efforts over several versions, H2 no 
> longer stores user session information or identity provider group 
> information, leaving the Flow Configuration History as the only remaining 
> reference to H2.
> H2 version 2.1 introduced incompatible changes in the binary storage format 
> from H2 version 1.4, and H2 version 2.2 was also unable to read files from 
> earlier H2 versions. These binary changes required custom migration modules 
> and shaded distribution of H2 libraries to support upgrading between Apache 
> NiFi versions.
> With the scope of H2 usage narrowed to Flow Configuration History in Apache 
> NiFi 1.23.0 and following, the storage strategy should be changed. Apache 
> Derby, SQLite, and HSQLDB are other potential options supporting file-based 
> relational storage, but maintenance level and platform-specific limitations 
> present concerns with these libraries.
> The [JetBrains Xodus|https://github.com/JetBrains/xodus] library provides 
> persistent and scalable storage that avoids several issues present in other 
> alternatives. The framework is licensed under Apache Software License Version 
> 2.0 and has a narrow set of dependencies aside from the Kotlin standard 
> libraries. Xodus is now in version 2.0 and has maintained format 
> compatibility when upgrading between major versions.
> Based on the H2 database migration modules on the Apache NiFi support branch, 
> a subsequent issue can implement automated migration from H2 to Xodus, 
> providing an upgrade path from NiFi 1 to 2.





[jira] [Commented] (NIFI-12206) Refactor Flow Configuration History from H2 to JetBrains Xodus

2023-10-12 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17774657#comment-17774657
 ] 

ASF subversion and git services commented on NIFI-12206:


Commit 22ad7d542d627e767f962b13236da90a0d6410f5 in nifi's branch 
refs/heads/main from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=22ad7d542d ]

NIFI-12206 Refactor Flow History using JetBrains Xodus (#7870)

* NIFI-12206 Refactored Flow History using JetBrains Xodus

- Replaced H2 Database Engine with JetBrains Xodus for persistent storage of 
FlowConfigurationHistory
- Added EntityStoreAuditService implementation using Xodus PersistentEntityStore
- Removed nifi.h2.url.append from properties


> Refactor Flow Configuration History from H2 to JetBrains Xodus
> --
>
> Key: NIFI-12206
> URL: https://issues.apache.org/jira/browse/NIFI-12206
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Major
> Fix For: 2.latest
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The [H2 Database Engine|https://h2database.com/html/main.html] has provided 
> local persistent storage for several types of information across Apache NiFi 
> versions. With multiple refactoring efforts over several versions, H2 no 
> longer stores user session information or identity provider group 
> information, leaving the Flow Configuration History as the only remaining 
> reference to H2.
> H2 version 2.1 introduced incompatible changes in the binary storage format 
> from H2 version 1.4, and H2 version 2.2 was also unable to read files from 
> earlier H2 versions. These binary changes required custom migration modules 
> and shaded distribution of H2 libraries to support upgrading between Apache 
> NiFi versions.
> With the scope of H2 usage narrowed to Flow Configuration History in Apache 
> NiFi 1.23.0 and following, the storage strategy should be changed. Apache 
> Derby, SQLite, and HSQLDB are other potential options supporting file-based 
> relational storage, but maintenance level and platform-specific limitations 
> present concerns with these libraries.
> The [JetBrains Xodus|https://github.com/JetBrains/xodus] library provides 
> persistent and scalable storage that avoids several issues present in other 
> alternatives. The framework is licensed under Apache Software License Version 
> 2.0 and has a narrow set of dependencies aside from the Kotlin standard 
> libraries. Xodus is now in version 2.0 and has maintained format 
> compatibility when upgrading between major versions.
> Based on the H2 database migration modules on the Apache NiFi support branch, 
> a subsequent issue can implement automated migration from H2 to Xodus, 
> providing an upgrade path from NiFi 1 to 2.





Re: [PR] NIFI-12206 Refactor Flow History using JetBrains Xodus [nifi]

2023-10-12 Thread via GitHub


markap14 merged PR #7870:
URL: https://github.com/apache/nifi/pull/7870





Re: [PR] NIFI-12206 Refactor Flow History using JetBrains Xodus [nifi]

2023-10-12 Thread via GitHub


markap14 commented on PR #7870:
URL: https://github.com/apache/nifi/pull/7870#issuecomment-1760198304

   Great work here, @exceptionfactory! I did some testing and tried some 
interesting cases, purging part of the history but not all, changing multiple 
properties at the same time, moving components, creating several types of 
components, configuring, deleting. Parameter contexts, processors, controller 
services, etc. But it worked flawlessly each time! +1 will merge to main.
   





[jira] [Commented] (NIFI-12216) If there is an update in Keycloak user ID, Nifi needs to be re-deployed/upgraded

2023-10-12 Thread Gourish (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17774622#comment-17774622
 ] 

Gourish commented on NIFI-12216:


This issue seems to happen when a realm update or restore from backup happens 
in Keycloak.

Can data/conf_directory/authorizations.xml store only the ID and not the name 
of the group?

> If there is an update in Keycloak user ID, Nifi needs to be 
> re-deployed/upgraded
> 
>
> Key: NIFI-12216
> URL: https://issues.apache.org/jira/browse/NIFI-12216
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration Management
>Affects Versions: 1.23.2
>Reporter: Jeetendra G Vasisht
>Priority: Major
>
> If there is an update in Keycloak user ID, Nifi needs to be 
> re-deployed/upgraded. Though NiFi uses username/groupname along with Keycloak 
> integration, the user will see a login failure.





[jira] [Updated] (NIFI-12139) Allow for cleaner migration of extensions' configuration

2023-10-12 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-12139:

Fix Version/s: 2.0.0
   (was: 2.latest)
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Allow for cleaner migration of extensions' configuration
> 
>
> Key: NIFI-12139
> URL: https://issues.apache.org/jira/browse/NIFI-12139
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework, Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Over time, Processors, Controller Services, and Reporting Tasks need to 
> evolve. New capabilities become available in the library that it uses or the 
> endpoint it interacts with. The developer overlooked some configuration that 
> is important for some use cases, etc. Or we just have a typo or best 
> practices for naming conventions evolve.
> Today, we have some ways to handle these scenarios:
>  * We can add a new property descriptor with a default value.
>  * We can add a displayName for Property Descriptors
> But these mechanisms are lacking. Some of the problems that we have:
>  * If we add a new Property Descriptor, we generally have to set the default 
> value such that we don't change the behavior of existing components. This 
> sometimes means that we have to use a default value that's not really the 
> best default, just to maintain backward compatibility.
>  * We have to maintain both a 'name' and a 'displayName' for property 
> descriptors. This makes the UI and the API confusing. If the UI shows a 
> property is named 'My Property', setting the value of 'My Property' should 
> work. Instead, we have to set the value of 'my-property' because that's 
> the property's 'name' - even though it's not made clear in the UI.
>  * If we add a new PropertyDescriptor that doesn't have a default value, all 
> existing instances are made invalid upon upgrade.
>  * If we want to add a new Relationship, unless it is auto-terminated, all 
> existing instances are made invalid upon upgrade - and making the new 
> relationship auto-terminated is rarely OK because it could result in data 
> loss.
>  * There is no way to remove a Relationship.
>  * There is no way to remove a Property Descriptor. Once added, it must stay 
> there, even though it is ignored.
> We need to introduce a new ability to migrate old configuration to a new 
> configuration. Something along the lines of:
> {code:java}
> public void migrateProperties(PropertyConfiguration existingConfig) {
> }{code}
> A default implementation would mean that nothing happens. But an 
> implementation might decide to implement this as such:
> {code:java}
> public void migrateProperties(PropertyConfiguration config) {
> config.renameProperty("old-name", "New Name");
> } {code}
> Or, if a property is no longer necessary, instead of leaving it to be 
> ignored, we could simply use:
> {code:java}
> public void migrateProperties(PropertyConfiguration config) {
> config.removeProperty("deprecated property name");
> }{code}
> This would mean we can actually eliminate the use of displayName and instead 
> just use clean, clear names for properties. This would lead to much less 
> confusion.
> It gives us MUCH more freedom to evolve processors, as well. For example, 
> let's say that we have a Processor that processes a file and then deletes it 
> from a directory. We now decide that it should be configurable - and by 
> default we don't want to delete the file. But for existing processors, we 
> don't want to change their behavior. We can handle this by introducing the 
> new Property Descriptor with a default but having the migration ensure that 
> we don't change existing behavior:
> {code:java}
> static PropertyDescriptor DELETE_FILE_ON_COMPLETION = new 
> PropertyDescriptor.Builder()
>   .name("Delete File on Completion")
>   .description("Whether or not to delete the file after processing 
> completes.")
>   .defaultValue("false")
>   .allowableValues("true", "false")
>   .build();
> ...
> public void migrateProperties(PropertyConfiguration config) {
> // Maintain existing behavior for processors that were created before
> // the option was added to delete files or not.
> if (!config.hasProperty(DELETE_FILE_ON_COMPLETION)) {
> config.setProperty(DELETE_FILE_ON_COMPLETION, "true");
> }
> }{code}
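The migration sketched in the quoted code above can be exercised with a small, self-contained model. Note that MigrationConfig below is a hypothetical stand-in with just enough surface to run the example, not NiFi's real PropertyConfiguration class.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for NiFi's PropertyConfiguration (not the real API).
class MigrationConfig {
    final Map<String, String> values = new HashMap<>();

    boolean hasProperty(String name) { return values.containsKey(name); }
    void setProperty(String name, String value) { values.put(name, value); }
}

public class DeleteOnCompletionMigrationDemo {
    // Mirrors the migration in the issue: instances created before
    // "Delete File on Completion" existed keep the old delete-always behavior
    // ("true"), while new instances get the safer default of "false".
    static void migrateProperties(MigrationConfig config) {
        if (!config.hasProperty("Delete File on Completion")) {
            config.setProperty("Delete File on Completion", "true");
        }
    }

    public static void main(String[] args) {
        // Pre-upgrade instance: the property does not exist yet.
        MigrationConfig existing = new MigrationConfig();
        migrateProperties(existing);
        System.out.println(existing.values.get("Delete File on Completion")); // true

        // Instance where the user already chose a value: migration leaves it alone.
        MigrationConfig alreadySet = new MigrationConfig();
        alreadySet.setProperty("Delete File on Completion", "false");
        migrateProperties(alreadySet);
        System.out.println(alreadySet.values.get("Delete File on Completion")); // false
    }
}
```

This illustrates why the new descriptor can carry the best default for new instances while migration preserves the observed behavior of old ones.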
> Importantly, we also want the ability to handle evolution of Relationships:
> {code:java}
> public void migrateRelationships(RelationshipConfiguration config) {
> } {code}
> If we decide that we now want to rename the "comms.failure" relationship to 
> "Communications 

Re: [PR] NIFI-12160: Kafka Connect: Check if all the necessary nars have been … [nifi]

2023-10-12 Thread via GitHub


pgyori commented on PR #7832:
URL: https://github.com/apache/nifi/pull/7832#issuecomment-1759980543

   Thank you, @exceptionfactory! I made the changes you recommended. Can you 
please recheck when your time permits?





[jira] [Updated] (NIFI-12139) Allow for cleaner migration of extensions' configuration

2023-10-12 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-12139:

Issue Type: New Feature  (was: Task)

> Allow for cleaner migration of extensions' configuration
> 
>
> Key: NIFI-12139
> URL: https://issues.apache.org/jira/browse/NIFI-12139
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework, Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 2.latest
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Over time, Processors, Controller Services, and Reporting Tasks need to 
> evolve. New capabilities become available in the library that it uses or the 
> endpoint it interacts with. The developer overlooked some configuration that 
> is important for some use cases, etc. Or we just have a typo or best 
> practices for naming conventions evolve.
> Today, we have some ways to handle these scenarios:
>  * We can add a new property descriptor with a default value.
>  * We can add a displayName for Property Descriptors
> But these mechanisms are lacking. Some of the problems that we have:
>  * If we add a new Property Descriptor, we generally have to set the default 
> value such that we don't change the behavior of existing components. This 
> sometimes means that we have to use a default value that's not really the 
> best default, just to maintain backward compatibility.
>  * We have to maintain both a 'name' and a 'displayName' for property 
> descriptors. This makes the UI and the API confusing. If the UI shows a 
> property is named 'My Property', setting the value of 'My Property' should 
> work. Instead, we have to set the value of 'my-property' because that's 
> the property's 'name' - even though it's not made clear in the UI.
>  * If we add a new PropertyDescriptor that doesn't have a default value, all 
> existing instances are made invalid upon upgrade.
>  * If we want to add a new Relationship, unless it is auto-terminated, all 
> existing instances are made invalid upon upgrade - and making the new 
> relationship auto-terminated is rarely OK because it could result in data 
> loss.
>  * There is no way to remove a Relationship.
>  * There is no way to remove a Property Descriptor. Once added, it must stay 
> there, even though it is ignored.
> We need to introduce a new ability to migrate old configuration to a new 
> configuration. Something along the lines of:
> {code:java}
> public void migrateProperties(PropertyConfiguration existingConfig) {
> }{code}
> A default implementation would mean that nothing happens. But an 
> implementation might decide to implement this as such:
> {code:java}
> public void migrateProperties(PropertyConfiguration config) {
> config.renameProperty("old-name", "New Name");
> } {code}
> Or, if a property is no longer necessary, instead of leaving it to be 
> ignored, we could simply use:
> {code:java}
> public void migrateProperties(PropertyConfiguration config) {
> config.removeProperty("deprecated property name");
> }{code}
> This would mean we can actually eliminate the use of displayName and instead 
> just use clean, clear names for properties. This would lead to much less 
> confusion.
> It gives us MUCH more freedom to evolve processors, as well. For example, 
> let's say that we have a Processor that processes a file and then deletes it 
> from a directory. We now decide that it should be configurable - and by 
> default we don't want to delete the file. But for existing processors, we 
> don't want to change their behavior. We can handle this by introducing the 
> new Property Descriptor with a default but having the migration ensure that 
> we don't change existing behavior:
> {code:java}
> static PropertyDescriptor DELETE_FILE_ON_COMPLETION = new 
> PropertyDescriptor.Builder()
>   .name("Delete File on Completion")
>   .description("Whether or not to delete the file after processing 
> completes.")
>   .defaultValue("false")
>   .allowableValues("true", "false")
>   .build();
> ...
> public void migrateProperties(PropertyConfiguration config) {
> // Maintain existing behavior for processors that were created before
> // the option was added to delete files or not.
> if (!config.hasProperty(DELETE_FILE_ON_COMPLETION)) {
> config.setProperty(DELETE_FILE_ON_COMPLETION, "true");
> }
> }{code}
> Importantly, we also want the ability to handle evolution of Relationships:
> {code:java}
> public void migrateRelationships(RelationshipConfiguration config) {
> } {code}
> If we decide that we now want to rename the "comms.failure" relationship to 
> "Communications Failure" we can do so thusly:
> {code:java}
> public void 

[jira] [Commented] (NIFI-12139) Allow for cleaner migration of extensions' configuration

2023-10-12 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17774612#comment-17774612
 ] 

ASF subversion and git services commented on NIFI-12139:


Commit abfc49e21268ac75228ea4a08161b107bdf5b132 in nifi's branch 
refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=abfc49e212 ]

NIFI-12139: Implemented migrateProperties, migrateRelationships for processors, 
as well as migrateProperties for ReportingTasks and Controller Services. Added 
system tests to verify behavior.

- Ensure that after restoring nars in the lib/ directory we restart NiFi so 
that they take effect. This is important if this test is not the last one to run

This closes #7837

Signed-off-by: David Handermann 


> Allow for cleaner migration of extensions' configuration
> 
>
> Key: NIFI-12139
> URL: https://issues.apache.org/jira/browse/NIFI-12139
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Core Framework, Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 2.latest
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Over time, Processors, Controller Services, and Reporting Tasks need to 
> evolve. New capabilities become available in the library that it uses or the 
> endpoint it interacts with. The developer overlooked some configuration that 
> is important for some use cases, etc. Or we just have a typo or best 
> practices for naming conventions evolve.
> Today, we have some ways to handle these scenarios:
>  * We can add a new property descriptor with a default value.
>  * We can add a displayName for Property Descriptors
> But these mechanisms are lacking. Some of the problems that we have:
>  * If we add a new Property Descriptor, we generally have to set the default 
> value such that we don't change the behavior of existing components. This 
> sometimes means that we have to use a default value that's not really the 
> best default, just to maintain backward compatibility.
>  * We have to maintain both a 'name' and a 'displayName' for property 
> descriptors. This makes the UI and the API confusing. If the UI shows a 
> property is named 'My Property', setting the value of 'My Property' should 
> work. Instead, we have to set the value of 'my-property' because that's 
> the property's 'name' - even though it's not made clear in the UI.
>  * If we add a new PropertyDescriptor that doesn't have a default value, all 
> existing instances are made invalid upon upgrade.
>  * If we want to add a new Relationship, unless it is auto-terminated, all 
> existing instances are made invalid upon upgrade - and making the new 
> relationship auto-terminated is rarely OK because it could result in data 
> loss.
>  * There is no way to remove a Relationship.
>  * There is no way to remove a Property Descriptor. Once added, it must stay 
> there, even though it is ignored.
> We need to introduce a new ability to migrate old configuration to a new 
> configuration. Something along the lines of:
> {code:java}
> public void migrateProperties(PropertyConfiguration existingConfig) {
> }{code}
> A default implementation would mean that nothing happens. But an 
> implementation might decide to implement this as such:
> {code:java}
> public void migrateProperties(PropertyConfiguration config) {
> config.renameProperty("old-name", "New Name");
> } {code}
> Or, if a property is no longer necessary, instead of leaving it to be 
> ignored, we could simply use:
> {code:java}
> public void migrateProperties(PropertyConfiguration config) {
> config.removeProperty("deprecated property name");
> }{code}
> This would mean we can actually eliminate the use of displayName and instead 
> just use clean, clear names for properties. This would lead to much less 
> confusion.
> It gives us MUCH more freedom to evolve processors, as well. For example, 
> let's say that we have a Processor that processes a file and then deletes it 
> from a directory. We now decide that it should be configurable - and by 
> default we don't want to delete the file. But for existing processors, we 
> don't want to change their behavior. We can handle this by introducing the 
> new Property Descriptor with a default but having the migration ensure that 
> we don't change existing behavior:
> {code:java}
> static PropertyDescriptor DELETE_FILE_ON_COMPLETION = new 
> PropertyDescriptor.Builder()
>   .name("Delete File on Completion")
>   .description("Whether or not to delete the file after processing 
> completes.")
>   .defaultValue("false")
>   .allowableValues("true", "false")
>   .build();
> ...
> public void migrateProperties(PropertyConfiguration config) {
> // Maintain existing behavior for 

Re: [PR] NIFI-12139: Implemented migrateProperties, migrateRelationships for p… [nifi]

2023-10-12 Thread via GitHub


exceptionfactory closed pull request #7837: NIFI-12139: Implemented 
migrateProperties, migrateRelationships for p…
URL: https://github.com/apache/nifi/pull/7837





Re: [PR] NIFI-12139: Implemented migrateProperties, migrateRelationships for p… [nifi]

2023-10-12 Thread via GitHub


exceptionfactory commented on code in PR #7837:
URL: https://github.com/apache/nifi/pull/7837#discussion_r1357078379


##
nifi-system-tests/nifi-system-test-suite/src/test/resources/conf/default/bootstrap.conf:
##
@@ -27,7 +27,7 @@ java.arg.3=-Xmx512m
 
 java.arg.14=-Djava.awt.headless=true
 
-#java.arg.debug=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8002
+java.arg.debug=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8002

Review Comment:
   It looks like this change should not be included, so that remote debugging 
is disabled by default in the system tests.






Re: [PR] NIFI-12206 Refactor Flow History using JetBrains Xodus [nifi]

2023-10-12 Thread via GitHub


markap14 commented on code in PR #7870:
URL: https://github.com/apache/nifi/pull/7870#discussion_r1357030282


##
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-administration/src/main/java/org/apache/nifi/admin/service/EntityStoreAuditService.java:
##
@@ -0,0 +1,636 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.admin.service;
+
+import jetbrains.exodus.entitystore.Entity;
+import jetbrains.exodus.entitystore.EntityId;
+import jetbrains.exodus.entitystore.EntityIterable;
+import jetbrains.exodus.entitystore.PersistentEntityStore;
+import jetbrains.exodus.entitystore.PersistentEntityStores;
+import jetbrains.exodus.entitystore.StoreTransaction;
+import jetbrains.exodus.env.Environment;
+import jetbrains.exodus.env.EnvironmentConfig;
+import jetbrains.exodus.env.Environments;
+import org.apache.nifi.action.Action;
+import org.apache.nifi.action.Component;
+import org.apache.nifi.action.FlowChangeAction;
+import org.apache.nifi.action.Operation;
+import org.apache.nifi.action.component.details.ComponentDetails;
+import org.apache.nifi.action.component.details.ExtensionDetails;
+import org.apache.nifi.action.component.details.FlowChangeExtensionDetails;
+import org.apache.nifi.action.component.details.FlowChangeRemoteProcessGroupDetails;
+import org.apache.nifi.action.component.details.RemoteProcessGroupDetails;
+import org.apache.nifi.action.details.ActionDetails;
+import org.apache.nifi.action.details.ConfigureDetails;
+import org.apache.nifi.action.details.ConnectDetails;
+import org.apache.nifi.action.details.FlowChangeConfigureDetails;
+import org.apache.nifi.action.details.FlowChangeConnectDetails;
+import org.apache.nifi.action.details.FlowChangeMoveDetails;
+import org.apache.nifi.action.details.FlowChangePurgeDetails;
+import org.apache.nifi.action.details.MoveDetails;
+import org.apache.nifi.action.details.PurgeDetails;
+import org.apache.nifi.admin.service.entity.ActionEntity;
+import org.apache.nifi.admin.service.entity.ActionLink;
+import org.apache.nifi.admin.service.entity.ConfigureDetailsEntity;
+import org.apache.nifi.admin.service.entity.ConnectDetailsEntity;
+import org.apache.nifi.admin.service.entity.EntityProperty;
+import org.apache.nifi.admin.service.entity.EntityType;
+import org.apache.nifi.admin.service.entity.MoveDetailsEntity;
+import org.apache.nifi.admin.service.entity.PurgeDetailsEntity;
+import org.apache.nifi.history.History;
+import org.apache.nifi.history.HistoryQuery;
+import org.apache.nifi.history.PreviousValue;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.Closeable;
+import java.io.File;
+import java.io.IOException;
+import java.io.UncheckedIOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.StandardCopyOption;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Date;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.stream.Stream;
+
+/**
+ * Audit Service implementation based on JetBrains Xodus Entity Store
+ */
+public class EntityStoreAuditService implements AuditService, Closeable {
+    private static final long FIRST_START_TIME = 0;
+
+    private static final int PREVIOUS_VALUES_LIMIT = 5;
+
+    private static final String ASCENDING_SORT_ORDER = "ASC";
+
+    private static final int DEFAULT_COUNT = 100;
+
+    private static final String BACKUP_FILE_NAME_FORMAT = "%s.backup.%d";
+
+    private static final Logger logger = LoggerFactory.getLogger(EntityStoreAuditService.class);
+
+    private final PersistentEntityStore entityStore;
+
+    private final Environment environment;
+
+    /**
+     * Entity Store Audit Service constructor with required properties for persistent location
+     *
+     * @param directory Persistent Entity Store directory
+     */
+    public EntityStoreAuditService(final File directory) {
+        environment = loadEnvironment(directory);
+        entityStore = PersistentEntityStores.newInstance(environment);
+        logger.info("Environment configured with directory [{}]", directory);
+    }
+
+    /**
+     * Add Actions to Persistent Store

[jira] [Updated] (NIFI-12218) Remove SecureHasher and SensitiveValueEncoder

2023-10-12 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-12218:
--
Fix Version/s: 2.0.0
   (was: 2.latest)
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Remove SecureHasher and SensitiveValueEncoder
> -
>
> Key: NIFI-12218
> URL: https://issues.apache.org/jira/browse/NIFI-12218
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 2.0.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Support for flow configuration in XML required a fingerprinting strategy that 
> relied on hashing sensitive values for comparison. With the removal of 
> support for flow.xml.gz persistence, the SensitiveValueEncoder interface and 
> implementation are no longer required. The standard encoder implementation 
> used the SecureHasher interface and implementations in nifi-security-utils, 
> but these components have no other uses in the application. Both the encoder 
> and hasher interface and implementations should be removed from the main 
> branch.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] NIFI-12218 Remove SensitiveValueEncoder and SecureHasher [nifi]

2023-10-12 Thread via GitHub


asfgit closed pull request #7873: NIFI-12218 Remove SensitiveValueEncoder and 
SecureHasher
URL: https://github.com/apache/nifi/pull/7873


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (NIFI-12218) Remove SecureHasher and SensitiveValueEncoder

2023-10-12 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17774580#comment-17774580
 ] 

ASF subversion and git services commented on NIFI-12218:


Commit a849ca044c7f41f1f3c445abfe9164e248f62fe5 in nifi's branch 
refs/heads/main from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=a849ca044c ]

NIFI-12218 Removed SensitiveValueEncoder and SecureHasher

- SensitiveValueEncoder and SecureHasher are no longer required following the 
removal of support for flow.xml.gz

Signed-off-by: Pierre Villard 

This closes #7873.


> Remove SecureHasher and SensitiveValueEncoder
> -
>
> Key: NIFI-12218
> URL: https://issues.apache.org/jira/browse/NIFI-12218
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 2.latest
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Support for flow configuration in XML required a fingerprinting strategy that 
> relied on hashing sensitive values for comparison. With the removal of 
> support for flow.xml.gz persistence, the SensitiveValueEncoder interface and 
> implementation are no longer required. The standard encoder implementation 
> used the SecureHasher interface and implementations in nifi-security-utils, 
> but these components have no other uses in the application. Both the encoder 
> and hasher interface and implementations should be removed from the main 
> branch.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] NIFI-12206 Refactor Flow History using JetBrains Xodus [nifi]

2023-10-12 Thread via GitHub


markap14 commented on PR #7870:
URL: https://github.com/apache/nifi/pull/7870#issuecomment-1759631930

   Thanks @exceptionfactory will review


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINIFICPP-2220 Change core::logging::Logger to use fmt format [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


fgerlits commented on code in PR #1670:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1670#discussion_r1356813751


##
libminifi/include/core/logging/Logger.h:
##
@@ -112,19 +62,42 @@ enum LOG_LEVEL {
   off = 6
 };
 
+inline spdlog::level::level_enum mapToSpdLogLevel(LOG_LEVEL level) {
+  switch (level) {
+case trace: return spdlog::level::trace;
+case debug: return spdlog::level::debug;
+case info: return spdlog::level::info;
+case warn: return spdlog::level::warn;
+case err: return spdlog::level::err;
+case critical: return spdlog::level::critical;
+case off: return spdlog::level::off;
+  }
+  throw std::invalid_argument(fmt::format("Invalid LOG_LEVEL {}", magic_enum::enum_underlying(level)));
+}
+
+inline LOG_LEVEL mapFromSpdLogLevel(spdlog::level::level_enum level) {
+  switch (level) {
+case spdlog::level::trace: return LOG_LEVEL::trace;
+case spdlog::level::debug: return LOG_LEVEL::debug;
+case spdlog::level::info: return LOG_LEVEL::info;
+case spdlog::level::warn: return LOG_LEVEL::warn;
+case spdlog::level::err: return LOG_LEVEL::err;
+case spdlog::level::critical: return LOG_LEVEL::critical;
+case spdlog::level::off: return LOG_LEVEL::off;
+case spdlog::level::n_levels: {}  // fallthrough
+  }
+  throw std::invalid_argument(fmt::format("Invalid spdlog::level::level_enum {}", magic_enum::enum_underlying(level)));
+}
+

Review Comment:
   my fault, I suggested falling through -- `break` is fine, too, of course
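The pattern being debated above — an exhaustive switch with no default branch and the throw placed after the switch rather than inside it — can be sketched in a standalone form. `Level` and `to_spdlog_code` here are illustrative stand-ins, not the actual MiNiFi types:

```cpp
#include <stdexcept>
#include <string>

// Illustrative stand-in for LOG_LEVEL / spdlog::level::level_enum.
enum class Level { trace, debug, info, warn, err, critical, off };

// Exhaustive switch with no default branch: the compiler can warn when a new
// enumerator is added without a case, while the throw after the switch still
// catches values cast from out-of-range integers.
int to_spdlog_code(Level level) {
  switch (level) {
    case Level::trace: return 0;
    case Level::debug: return 1;
    case Level::info: return 2;
    case Level::warn: return 3;
    case Level::err: return 4;
    case Level::critical: return 5;
    case Level::off: return 6;
  }
  throw std::invalid_argument("Invalid Level " + std::to_string(static_cast<int>(level)));
}
```

Whether the last handled case "falls through" with an empty block or uses a `break` makes no observable difference here: control exits the switch either way and reaches the throw only for unhandled values.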



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINIFICPP-2220 Change core::logging::Logger to use fmt format [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


martinzink commented on code in PR #1670:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1670#discussion_r1356813775


##
libminifi/include/core/logging/Logger.h:
##
@@ -112,19 +62,42 @@ enum LOG_LEVEL {
   off = 6
 };
 
+inline spdlog::level::level_enum mapToSpdLogLevel(LOG_LEVEL level) {
+  switch (level) {
+case trace: return spdlog::level::trace;
+case debug: return spdlog::level::debug;
+case info: return spdlog::level::info;
+case warn: return spdlog::level::warn;
+case err: return spdlog::level::err;
+case critical: return spdlog::level::critical;
+case off: return spdlog::level::off;
+  }
+  throw std::invalid_argument(fmt::format("Invalid LOG_LEVEL {}", magic_enum::enum_underlying(level)));
+}
+
+inline LOG_LEVEL mapFromSpdLogLevel(spdlog::level::level_enum level) {
+  switch (level) {
+case spdlog::level::trace: return LOG_LEVEL::trace;
+case spdlog::level::debug: return LOG_LEVEL::debug;
+case spdlog::level::info: return LOG_LEVEL::info;
+case spdlog::level::warn: return LOG_LEVEL::warn;
+case spdlog::level::err: return LOG_LEVEL::err;
+case spdlog::level::critical: return LOG_LEVEL::critical;
+case spdlog::level::off: return LOG_LEVEL::off;
+case spdlog::level::n_levels: {}  // fallthrough
+  }
+  throw std::invalid_argument(fmt::format("Invalid spdlog::level::level_enum {}", magic_enum::enum_underlying(level)));
+}
+

Review Comment:
   
https://github.com/apache/nifi-minifi-cpp/pull/1670/commits/3c1466679cbc9c76bc7d702cf03916426c387a4a



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (NIFI-12218) Remove SecureHasher and SensitiveValueEncoder

2023-10-12 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-12218:

Status: Patch Available  (was: Open)

> Remove SecureHasher and SensitiveValueEncoder
> -
>
> Key: NIFI-12218
> URL: https://issues.apache.org/jira/browse/NIFI-12218
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 2.latest
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Support for flow configuration in XML required a fingerprinting strategy that 
> relied on hashing sensitive values for comparison. With the removal of 
> support for flow.xml.gz persistence, the SensitiveValueEncoder interface and 
> implementation are no longer required. The standard encoder implementation 
> used the SecureHasher interface and implementations in nifi-security-utils, 
> but these components have no other uses in the application. Both the encoder 
> and hasher interface and implementations should be removed from the main 
> branch.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] NIFI-12218 Remove SensitiveValueEncoder and SecureHasher [nifi]

2023-10-12 Thread via GitHub


exceptionfactory opened a new pull request, #7873:
URL: https://github.com/apache/nifi/pull/7873

   # Summary
   
   [NIFI-12218](https://issues.apache.org/jira/browse/NIFI-12218) Removes 
`SensitiveValueEncoder` from `nifi-framework-core` and `SecureHasher` from 
`nifi-security-utils`. These components supported hashing sensitive values as 
part of flow fingerprint comparison, which is no longer required following the 
removal of support for flow.xml.gz persistent configuration.
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [X] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [X] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [X] Pull Request based on current revision of the `main` branch
   - [X] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [X] Build completed using `mvn clean install -P contrib-check`
 - [X] JDK 21
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (NIFI-12218) Remove SecureHasher and SensitiveValueEncoder

2023-10-12 Thread David Handermann (Jira)
David Handermann created NIFI-12218:
---

 Summary: Remove SecureHasher and SensitiveValueEncoder
 Key: NIFI-12218
 URL: https://issues.apache.org/jira/browse/NIFI-12218
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: David Handermann
Assignee: David Handermann
 Fix For: 2.latest


Support for flow configuration in XML required a fingerprinting strategy that 
relied on hashing sensitive values for comparison. With the removal of support 
for flow.xml.gz persistence, the SensitiveValueEncoder interface and 
implementation are no longer required. The standard encoder implementation used 
the SecureHasher interface and implementations in nifi-security-utils, but 
these components have no other uses in the application. Both the encoder and 
hasher interface and implementations should be removed from the main branch.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINIFICPP-2220 Change core::logging::Logger to use fmt format [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


szaszm commented on code in PR #1670:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1670#discussion_r1356765726


##
libminifi/include/core/logging/Logger.h:
##
@@ -112,19 +62,42 @@ enum LOG_LEVEL {
   off = 6
 };
 
+inline spdlog::level::level_enum mapToSpdLogLevel(LOG_LEVEL level) {
+  switch (level) {
+case trace: return spdlog::level::trace;
+case debug: return spdlog::level::debug;
+case info: return spdlog::level::info;
+case warn: return spdlog::level::warn;
+case err: return spdlog::level::err;
+case critical: return spdlog::level::critical;
+case off: return spdlog::level::off;
+  }
+  throw std::invalid_argument(fmt::format("Invalid LOG_LEVEL {}", magic_enum::enum_underlying(level)));
+}
+
+inline LOG_LEVEL mapFromSpdLogLevel(spdlog::level::level_enum level) {
+  switch (level) {
+case spdlog::level::trace: return LOG_LEVEL::trace;
+case spdlog::level::debug: return LOG_LEVEL::debug;
+case spdlog::level::info: return LOG_LEVEL::info;
+case spdlog::level::warn: return LOG_LEVEL::warn;
+case spdlog::level::err: return LOG_LEVEL::err;
+case spdlog::level::critical: return LOG_LEVEL::critical;
+case spdlog::level::off: return LOG_LEVEL::off;
+case spdlog::level::n_levels: return LOG_LEVEL::off;
+  }
+  throw std::invalid_argument(fmt::format("Invalid spdlog::level::level_enum {}", magic_enum::enum_underlying(level)));
+}
+
 class BaseLogger {
  public:
   virtual ~BaseLogger();
 
   virtual void log_string(LOG_LEVEL level, std::string str) = 0;
-
-  virtual bool should_log(const LOG_LEVEL &level);
+  virtual bool should_log(LOG_LEVEL level) = 0;
+  [[nodiscard]] virtual LOG_LEVEL level() const = 0;
 };
 
-/**
- * LogBuilder is a class to facilitate using the LOG macros below and an 
associated put-to operator.
- *
- */
 class LogBuilder {

Review Comment:
   The LogBuilder was added back in f3675504efe178492a66dc96dc988be23925a086



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINIFICPP-2220 Change core::logging::Logger to use fmt format [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


szaszm commented on code in PR #1670:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1670#discussion_r1356765726


##
libminifi/include/core/logging/Logger.h:
##
@@ -112,19 +62,42 @@ enum LOG_LEVEL {
   off = 6
 };
 
+inline spdlog::level::level_enum mapToSpdLogLevel(LOG_LEVEL level) {
+  switch (level) {
+case trace: return spdlog::level::trace;
+case debug: return spdlog::level::debug;
+case info: return spdlog::level::info;
+case warn: return spdlog::level::warn;
+case err: return spdlog::level::err;
+case critical: return spdlog::level::critical;
+case off: return spdlog::level::off;
+  }
+  throw std::invalid_argument(fmt::format("Invalid LOG_LEVEL {}", magic_enum::enum_underlying(level)));
+}
+
+inline LOG_LEVEL mapFromSpdLogLevel(spdlog::level::level_enum level) {
+  switch (level) {
+case spdlog::level::trace: return LOG_LEVEL::trace;
+case spdlog::level::debug: return LOG_LEVEL::debug;
+case spdlog::level::info: return LOG_LEVEL::info;
+case spdlog::level::warn: return LOG_LEVEL::warn;
+case spdlog::level::err: return LOG_LEVEL::err;
+case spdlog::level::critical: return LOG_LEVEL::critical;
+case spdlog::level::off: return LOG_LEVEL::off;
+case spdlog::level::n_levels: return LOG_LEVEL::off;
+  }
+  throw std::invalid_argument(fmt::format("Invalid spdlog::level::level_enum {}", magic_enum::enum_underlying(level)));
+}
+
 class BaseLogger {
  public:
   virtual ~BaseLogger();
 
   virtual void log_string(LOG_LEVEL level, std::string str) = 0;
-
-  virtual bool should_log(const LOG_LEVEL &level);
+  virtual bool should_log(LOG_LEVEL level) = 0;
+  [[nodiscard]] virtual LOG_LEVEL level() const = 0;
 };
 
-/**
- * LogBuilder is a class to facilitate using the LOG macros below and an 
associated put-to operator.
- *
- */
 class LogBuilder {

Review Comment:
   This "simplifications" commit doesn't seem to be included in the diff. Was 
it accidentally reverted or removed?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINIFICPP-2220 Change core::logging::Logger to use fmt format [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


szaszm commented on code in PR #1670:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1670#discussion_r1356763815


##
libminifi/include/core/logging/Logger.h:
##
@@ -112,19 +62,42 @@ enum LOG_LEVEL {
   off = 6
 };
 
+inline spdlog::level::level_enum mapToSpdLogLevel(LOG_LEVEL level) {
+  switch (level) {
+case trace: return spdlog::level::trace;
+case debug: return spdlog::level::debug;
+case info: return spdlog::level::info;
+case warn: return spdlog::level::warn;
+case err: return spdlog::level::err;
+case critical: return spdlog::level::critical;
+case off: return spdlog::level::off;
+  }
+  throw std::invalid_argument(fmt::format("Invalid LOG_LEVEL {}", magic_enum::enum_underlying(level)));
+}
+
+inline LOG_LEVEL mapFromSpdLogLevel(spdlog::level::level_enum level) {
+  switch (level) {
+case spdlog::level::trace: return LOG_LEVEL::trace;
+case spdlog::level::debug: return LOG_LEVEL::debug;
+case spdlog::level::info: return LOG_LEVEL::info;
+case spdlog::level::warn: return LOG_LEVEL::warn;
+case spdlog::level::err: return LOG_LEVEL::err;
+case spdlog::level::critical: return LOG_LEVEL::critical;
+case spdlog::level::off: return LOG_LEVEL::off;
+case spdlog::level::n_levels: {}  // fallthrough
+  }
+  throw std::invalid_argument(fmt::format("Invalid spdlog::level::level_enum {}", magic_enum::enum_underlying(level)));
+}
+

Review Comment:
   Why does this fall through? A break would mean the same in this case.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINIFICPP-2220 Change core::logging::Logger to use fmt format [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


martinzink commented on code in PR #1670:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1670#discussion_r1356644621


##
extensions/windows-event-log/CollectorInitiatedSubscription.cpp:
##
@@ -481,7 +486,7 @@ bool CollectorInitiatedSubscription::subscribe(const 
std::shared_ptr 
pCollectorInitiatedSubscription->max_buffer_size_.getValue()) {
-logger->log_error("Dropping event %p because it couldn't be rendered within %llu bytes.", hEvent, pCollectorInitiatedSubscription->max_buffer_size_.getValue());
+logger->log_error("Dropping event {} because it couldn't be rendered within {} bytes.", hEvent, pCollectorInitiatedSubscription->max_buffer_size_.getValue());

Review Comment:
   Originally it was %x (hex integer) and %p (address in hex), I did the 
replacements by type not by file.
   On closer inspection it is due to the "ingenious" design from Microsoft: this is an event handle if everything goes according to plan, or an error number if it's not.
   
   I think it would be okay to print it with the default both times, but I am 
not that familiar with CollectorInitiatedSubscription. 
   
   
https://learn.microsoft.com/en-us/windows/win32/api/winevt/nc-winevt-evt_subscribe_callback
   
   
   > Event
   > 
   > A handle to the event. The event handle is only valid for the duration of 
the callback function. You can use this handle with any event log function that 
takes an event handle (for example, 
[EvtRender](https://learn.microsoft.com/en-us/windows/desktop/api/winevt/nf-winevt-evtrender)
 or 
[EvtFormatMessage](https://learn.microsoft.com/en-us/windows/desktop/api/winevt/nf-winevt-evtformatmessage)).
   > 
   > Do not call 
[EvtClose](https://learn.microsoft.com/en-us/windows/desktop/api/winevt/nf-winevt-evtclose)
 to close this handle; the service will close the handle when the callback 
returns.
   > 
   > If the Action parameter is EvtSubscribeActionError, cast Event to a DWORD 
to access the Win32 error code.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINIFICPP-2220 Change core::logging::Logger to use fmt format [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


szaszm commented on code in PR #1670:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1670#discussion_r1356704640


##
extensions/windows-event-log/CollectorInitiatedSubscription.cpp:
##
@@ -481,7 +486,7 @@ bool CollectorInitiatedSubscription::subscribe(const 
std::shared_ptr 
pCollectorInitiatedSubscription->max_buffer_size_.getValue()) {
-logger->log_error("Dropping event %p because it couldn't be rendered within %llu bytes.", hEvent, pCollectorInitiatedSubscription->max_buffer_size_.getValue());
+logger->log_error("Dropping event {} because it couldn't be rendered within {} bytes.", hEvent, pCollectorInitiatedSubscription->max_buffer_size_.getValue());

Review Comment:
   Makes sense, it's probably fine.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINIFICPP-2248 Refactor string::join_pack [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


szaszm commented on code in PR #1680:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1680#discussion_r1356699329


##
libminifi/include/utils/StringUtils.h:
##
@@ -187,48 +187,40 @@ constexpr size_t size(const 
std::basic_string_view& str) noexcept { retur
 
 namespace detail {
 // implementation detail of join_pack
-template
-struct str_detector {
-  template
-  using valid_string_t = 
decltype(std::declval>().append(std::declval()));
+template<typename Str, typename CharT>
+concept valid_string = requires(std::basic_string<CharT> left, Str appended) {
+  left.append(appended);
+};
 
-template
-using valid_string_pack_t = 
std::enable_if_t<(meta::is_detected_v::template 
valid_string_t, Strs> && ...), ResultT>;
+// normally CharT would be at the end, because we're checking that Strs are valid strings for CharT,
+// but the syntax needs the parameter pack at the end
+template<typename CharT, typename... Strs>
+concept valid_string_pack = (valid_string<Strs, CharT> && ...);
 
-template* = nullptr>
-static std::basic_string join_pack(const Strs&... strs) {
-  std::basic_string result;
-  result.reserve((size(strs) + ...));
-  (result.append(strs), ...);
-  return result;
-}
+template<typename T> struct char_type_of_impl {};
+template<typename C, typename Traits, typename Allocator> struct char_type_of_impl<std::basic_string<C, Traits, Allocator>> : std::type_identity<C> {};
+template<typename C> struct char_type_of_impl<C*> : std::type_identity<std::remove_const_t<C>> {};

Review Comment:
   `const char*` and `char*` are both considered strings of `char`.



##
libminifi/include/utils/StringUtils.h:
##
@@ -187,48 +187,40 @@ constexpr size_t size(const 
std::basic_string_view& str) noexcept { retur
 
 namespace detail {
 // implementation detail of join_pack
-template
-struct str_detector {
-  template
-  using valid_string_t = 
decltype(std::declval>().append(std::declval()));
+template<typename Str, typename CharT>
+concept valid_string = requires(std::basic_string<CharT> left, Str appended) {
+  left.append(appended);
+};
 
-template
-using valid_string_pack_t = 
std::enable_if_t<(meta::is_detected_v::template 
valid_string_t, Strs> && ...), ResultT>;
+// normally CharT would be at the end, because we're checking that Strs are valid strings for CharT,
+// but the syntax needs the parameter pack at the end
+template<typename CharT, typename... Strs>
+concept valid_string_pack = (valid_string<Strs, CharT> && ...);
 
-template* = nullptr>
-static std::basic_string join_pack(const Strs&... strs) {
-  std::basic_string result;
-  result.reserve((size(strs) + ...));
-  (result.append(strs), ...);
-  return result;
-}
+template<typename T> struct char_type_of_impl {};
+template<typename C, typename Traits, typename Allocator> struct char_type_of_impl<std::basic_string<C, Traits, Allocator>> : std::type_identity<C> {};
+template<typename C> struct char_type_of_impl<C*> : std::type_identity<std::remove_const_t<C>> {};
+template<typename C, typename Traits> struct char_type_of_impl<std::basic_string_view<C, Traits>> : std::type_identity<C> {};
+template<typename T> using char_type_of = typename char_type_of_impl<std::decay_t<T>>::type;

Review Comment:
   Decay here is intentional: not only do we want to ignore cv-qualifiers and 
references, but `char` arrays should be decayed to pointers, to match the 
second specialization above.



##
CPPLINT.cfg:
##
@@ -1,2 +1,2 @@
 set noparent
-filter=-runtime/reference,-runtime/string,-build/c++11,-build/include_subdir,-whitespace/forcolon,-build/namespaces_literals,-readability/check,-build/include_what_you_use,-readability/nolint
+filter=-runtime/reference,-runtime/string,-build/c++11,-build/include_subdir,-whitespace/forcolon,-build/namespaces_literals,-readability/check,-build/include_what_you_use,-readability/nolint,-readability/braces

Review Comment:
   Got this after both concept declarations:
   ```
   /home/szaszm/nifi-minifi-cpp-3/libminifi/include/utils/StringUtils.h:193:  
You don't need a ; after a }  [readability/braces] [4]
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] MINIFICPP-2248 Refactor string::join_pack [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


szaszm opened a new pull request, #1680:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1680

   + remove obsolete `-fconcepts` flag from `RangeV3.cmake`
   + disable `readability/braces` of cpplint, because it can't handle concepts
   
   ---
   
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [x] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically main)?
   
   - [x] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [x] If applicable, have you updated the LICENSE file?
   - [x] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [x] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI 
results for build issues and submit an update to your PR as soon as possible.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (MINIFICPP-2248) Refactor string::join_pack

2023-10-12 Thread Marton Szasz (Jira)
Marton Szasz created MINIFICPP-2248:
---

 Summary: Refactor string::join_pack
 Key: MINIFICPP-2248
 URL: https://issues.apache.org/jira/browse/MINIFICPP-2248
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Marton Szasz






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINIFICPP-2220 Change core::logging::Logger to use fmt format [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


martinzink commented on code in PR #1670:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1670#discussion_r1356662133


##
extensions/sftp/processors/ListSFTP.cpp:
##
@@ -539,7 +537,7 @@ void ListSFTP::listByTrackingTimestamps(
   }
   /* If the latest timestamp is not old enough, we wait another cycle */
   if (latest_listed_entry_timestamp_this_cycle && 
minimum_reliable_timestamp < latest_listed_entry_timestamp_this_cycle) {
-logger_->log_debug("Skipping files with latest timestamp because their modification date is not smaller than the minimum reliable timestamp: %lu ms >= %lu ms",
+logger_->log_debug("Skipping files with latest timestamp because their modification date is not smaller than the minimum reliable timestamp: {} ms >= {} ms",

toUnixTime(latest_listed_entry_timestamp_this_cycle),
toUnixTime(minimum_reliable_timestamp));

Review Comment:
   good idea, 
https://github.com/apache/nifi-minifi-cpp/commit/20f5d63d2aef618ca6e98fa298a06376e544f1bc#diff-2a26794f434d15c4c7911b91d7bf66a6e8069d49856da2b30c556439adb1b67cL541-L542






Re: [PR] MINIFICPP-2220 Change core::logging::Logger to use fmt format [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


martinzink commented on code in PR #1670:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1670#discussion_r1356659650


##
libminifi/src/c2/ControllerSocketProtocol.cpp:
##
@@ -401,7 +401,7 @@ asio::awaitable ControllerSocketProtocol::handleCommand(std::unique_ptr
-  logger_->log_error("Unhandled C2 operation: %s", std::to_string(head));
+  logger_->log_error("Unhandled C2 operation: {}", std::to_string(head));

Review Comment:
   :+1: 
https://github.com/apache/nifi-minifi-cpp/pull/1670/commits/92a5f25321f81e3b7bf7a5d12d611d450ea432dd#diff-a88b2fa8fb48f48447f432950baa01228e306398d287546060ec32594cdb1b70L404



##
libminifi/include/core/logging/Logger.h:
##
@@ -112,19 +62,42 @@ enum LOG_LEVEL {
   off = 6
 };
 
+inline spdlog::level::level_enum mapToSpdLogLevel(LOG_LEVEL level) {
+  switch (level) {
+case trace: return spdlog::level::trace;
+case debug: return spdlog::level::debug;
+case info: return spdlog::level::info;
+case warn: return spdlog::level::warn;
+case err: return spdlog::level::err;
+case critical: return spdlog::level::critical;
+case off: return spdlog::level::off;
+  }
+  throw std::invalid_argument(fmt::format("Invalid LOG_LEVEL {}", magic_enum::enum_underlying(level)));
+}
+
+inline LOG_LEVEL mapFromSpdLogLevel(spdlog::level::level_enum level) {
+  switch (level) {
+case spdlog::level::trace: return LOG_LEVEL::trace;
+case spdlog::level::debug: return LOG_LEVEL::debug;
+case spdlog::level::info: return LOG_LEVEL::info;
+case spdlog::level::warn: return LOG_LEVEL::warn;
+case spdlog::level::err: return LOG_LEVEL::err;
+case spdlog::level::critical: return LOG_LEVEL::critical;
+case spdlog::level::off: return LOG_LEVEL::off;
+case spdlog::level::n_levels: return LOG_LEVEL::off;

Review Comment:
   good idea, 
https://github.com/apache/nifi-minifi-cpp/pull/1670/commits/92a5f25321f81e3b7bf7a5d12d611d450ea432dd#diff-609cfbc9531d250927c98865424e809b582691daacc2d9540cfb857059eff348L87






Re: [PR] MINIFICPP-2220 Change core::logging::Logger to use fmt format [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


martinzink commented on code in PR #1670:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1670#discussion_r1356661024


##
libminifi/test/TestBase.cpp:
##
@@ -59,7 +59,7 @@ std::shared_ptr LogTestController::getInstance(const std::sha
 
 void LogTestController::setLevel(std::string_view name, spdlog::level::level_enum level) {
   const auto levelView(spdlog::level::to_string_view(level));
-  logger_->log_info("Setting log level for %s to %s", std::string(name), std::string(levelView.begin(), levelView.end()));
+  logger_->log_info("Setting log level for {} to {}", std::string(name), std::string(levelView.begin(), levelView.end()));

Review Comment:
   :+1: 
https://github.com/apache/nifi-minifi-cpp/pull/1670/commits/6a5aebd7790dbf83d5418c03b6168ae3a0b6bf38#diff-90730063bb5cd6c614c0a28440ffe3223775f73126b426a0059eff1957b5b1c8R62



##
libminifi/src/utils/net/AsioSocketUtils.cpp:
##
@@ -72,18 +72,18 @@ bool AsioSocketConnection::connectTcpSocketOverSsl() {
   asio::error_code err;
   asio::ip::tcp::resolver::results_type endpoints = resolver.resolve(socket_data_.host, std::to_string(socket_data_.port), err);
   if (err) {
-    logger_->log_error("Resolving host '%s' on port '%s' failed with the following message: '%s'", socket_data_.host, std::to_string(socket_data_.port), err.message());
+    logger_->log_error("Resolving host '{}' on port '{}' failed with the following message: '{}'", socket_data_.host, std::to_string(socket_data_.port), err.message());

Review Comment:
   :+1: 
https://github.com/apache/nifi-minifi-cpp/pull/1670/commits/6a5aebd7790dbf83d5418c03b6168ae3a0b6bf38#diff-1c49ef68c8086eafeeba05f851885876f763cbb327cc8103c6a3a5d8460246a8L75






Re: [PR] MINIFICPP-2220 Change core::logging::Logger to use fmt format [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


martinzink commented on code in PR #1670:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1670#discussion_r1356660563


##
libminifi/src/core/logging/LoggerConfiguration.cpp:
##
@@ -305,7 +305,7 @@ std::shared_ptr LoggerConfiguration::get_logger(const std::share
   }
   if (logger != nullptr) {
     const auto levelView(spdlog::level::to_string_view(level));
-    logger->log_debug("%s logger got sinks from namespace %s and level %s from namespace %s", name, sink_namespace_str, std::string(levelView.begin(), levelView.end()), level_namespace_str);
+    logger->log_debug("{} logger got sinks from namespace {} and level {} from namespace {}", name, sink_namespace_str, std::string(levelView.begin(), levelView.end()), level_namespace_str);

Review Comment:
   :+1: 
https://github.com/apache/nifi-minifi-cpp/pull/1670/commits/6a5aebd7790dbf83d5418c03b6168ae3a0b6bf38#diff-9a79d1f5586ffc516b0d49c31345d7938befc39eef2e8f584dddee618dc9d942R307






Re: [PR] MINIFICPP-2220 Change core::logging::Logger to use fmt format [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


martinzink commented on code in PR #1670:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1670#discussion_r1356660306


##
libminifi/include/core/logging/Logger.h:
##
@@ -112,19 +62,42 @@ enum LOG_LEVEL {
   off = 6
 };
 
+inline spdlog::level::level_enum mapToSpdLogLevel(LOG_LEVEL level) {
+  switch (level) {
+case trace: return spdlog::level::trace;
+case debug: return spdlog::level::debug;
+case info: return spdlog::level::info;
+case warn: return spdlog::level::warn;
+case err: return spdlog::level::err;
+case critical: return spdlog::level::critical;
+case off: return spdlog::level::off;
+  }
+  throw std::invalid_argument(fmt::format("Invalid LOG_LEVEL {}", magic_enum::enum_underlying(level)));
+}
+
+inline LOG_LEVEL mapFromSpdLogLevel(spdlog::level::level_enum level) {
+  switch (level) {
+case spdlog::level::trace: return LOG_LEVEL::trace;
+case spdlog::level::debug: return LOG_LEVEL::debug;
+case spdlog::level::info: return LOG_LEVEL::info;
+case spdlog::level::warn: return LOG_LEVEL::warn;
+case spdlog::level::err: return LOG_LEVEL::err;
+case spdlog::level::critical: return LOG_LEVEL::critical;
+case spdlog::level::off: return LOG_LEVEL::off;
+case spdlog::level::n_levels: return LOG_LEVEL::off;
+  }
+  throw std::invalid_argument(fmt::format("Invalid spdlog::level::level_enum {}", magic_enum::enum_underlying(level)));
+}
+
 class BaseLogger {
  public:
   virtual ~BaseLogger();
 
   virtual void log_string(LOG_LEVEL level, std::string str) = 0;
-
-  virtual bool should_log(const LOG_LEVEL &level);
+  virtual bool should_log(LOG_LEVEL level) = 0;
+  [[nodiscard]] virtual LOG_LEVEL level() const = 0;
 };
 
-/**
- * LogBuilder is a class to facilitate using the LOG macros below and an associated put-to operator.
- *
- */
 class LogBuilder {

Review Comment:
   doesn't seem to be the case, 
https://github.com/apache/nifi-minifi-cpp/pull/1670/commits/6a5aebd7790dbf83d5418c03b6168ae3a0b6bf38#diff-609cfbc9531d250927c98865424e809b582691daacc2d9540cfb857059eff348L101






Re: [PR] MINIFICPP-2220 Change core::logging::Logger to use fmt format [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


martinzink commented on code in PR #1670:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1670#discussion_r1356658920


##
libminifi/src/core/flow/StructuredConfiguration.cpp:
##
@@ -235,20 +235,20 @@ void StructuredConfiguration::parseProcessorNode(const Node& processors_node, co
 
     if (procCfg.schedulingStrategy == "TIMER_DRIVEN" || procCfg.schedulingStrategy == "EVENT_DRIVEN") {
       if (auto scheduling_period = utils::timeutils::StringToDuration(procCfg.schedulingPeriod)) {
-        logger_->log_debug("convert: parseProcessorNode: schedulingPeriod => [%" PRId64 "] ns", scheduling_period->count());
+        logger_->log_debug("convert: parseProcessorNode: schedulingPeriod => [{}] ns", scheduling_period->count());

Review Comment:
    I've simplified these remaining chrono duration logging in 
https://github.com/apache/nifi-minifi-cpp/commit/92a5f25321f81e3b7bf7a5d12d611d450ea432dd,
 but had to revert the MQTT ones due to some mysterious centos warnings in 
https://github.com/apache/nifi-minifi-cpp/commit/1704d2444a12b53ce27797c17ef4dc5bb2ff0399









Re: [PR] MINIFICPP-2220 Change core::logging::Logger to use fmt format [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


martinzink commented on code in PR #1670:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1670#discussion_r1356658748


##
libminifi/src/core/state/LogMetricsPublisher.cpp:
##
@@ -59,10 +59,10 @@ void LogMetricsPublisher::readLoggingInterval() {
   if (auto logging_interval_str = configuration_->get(Configure::nifi_metrics_publisher_log_metrics_logging_interval)) {
     if (auto logging_interval = minifi::core::TimePeriodValue::fromString(logging_interval_str.value())) {
       logging_interval_ = logging_interval->getMilliseconds();
-      logger_->log_info("Metric logging interval is set to %" PRId64 " milliseconds", int64_t{logging_interval_.count()});
+      logger_->log_info("Metric logging interval is set to {} milliseconds", int64_t{logging_interval_.count()});

Review Comment:
   :+1: I've simplified these remaining chrono duration logging in 
https://github.com/apache/nifi-minifi-cpp/pull/1670/commits/92a5f25321f81e3b7bf7a5d12d611d450ea432dd,
 but had to revert the MQTT ones due to some mysterious centos warnings in 
https://github.com/apache/nifi-minifi-cpp/pull/1670/commits/1704d2444a12b53ce27797c17ef4dc5bb2ff0399






Re: [PR] MINIFICPP-2220 Change core::logging::Logger to use fmt format [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


martinzink commented on code in PR #1670:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1670#discussion_r1356644621


##
extensions/windows-event-log/CollectorInitiatedSubscription.cpp:
##
@@ -481,7 +486,7 @@ bool CollectorInitiatedSubscription::subscribe(const std::shared_ptr
 pCollectorInitiatedSubscription->max_buffer_size_.getValue()) {
-logger->log_error("Dropping event %p because it couldn't be rendered within %llu bytes.", hEvent, pCollectorInitiatedSubscription->max_buffer_size_.getValue());
+logger->log_error("Dropping event {} because it couldn't be rendered within {} bytes.", hEvent, pCollectorInitiatedSubscription->max_buffer_size_.getValue());

Review Comment:
   Originally it was %x (hex integer) and %p (address in hex); I did the replacements by type, not by file.
   On closer inspection, it is due to the ingenious design from Microsoft: this is an event handle if everything goes according to plan, or an error number if it's not.
   
   I think it would be okay to print it with the default both times, but I am 
not that familiar with CollectorInitiatedSubscription. 
   
   
https://learn.microsoft.com/en-us/windows/win32/api/winevt/nc-winevt-evt_subscribe_callback
   
   
   > Event
   > 
   > A handle to the event. The event handle is only valid for the duration of 
the callback function. You can use this handle with any event log function that 
takes an event handle (for example, 
[EvtRender](https://learn.microsoft.com/en-us/windows/desktop/api/winevt/nf-winevt-evtrender)
 or 
[EvtFormatMessage](https://learn.microsoft.com/en-us/windows/desktop/api/winevt/nf-winevt-evtformatmessage)).
   > 
   > Do not call 
[EvtClose](https://learn.microsoft.com/en-us/windows/desktop/api/winevt/nf-winevt-evtclose)
 to close this handle; the service will close the handle when the callback 
returns.
   > 
   > If the Action parameter is EvtSubscribeActionError, cast Event to a DWORD 
to access the Win32 error code.






Re: [PR] MINIFICPP-2220 Change core::logging::Logger to use fmt format [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


szaszm commented on code in PR #1670:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1670#discussion_r1356620166


##
extensions/windows-event-log/CollectorInitiatedSubscription.cpp:
##
@@ -481,7 +486,7 @@ bool CollectorInitiatedSubscription::subscribe(const std::shared_ptr
 pCollectorInitiatedSubscription->max_buffer_size_.getValue()) {
-logger->log_error("Dropping event %p because it couldn't be rendered within %llu bytes.", hEvent, pCollectorInitiatedSubscription->max_buffer_size_.getValue());
+logger->log_error("Dropping event {} because it couldn't be rendered within {} bytes.", hEvent, pCollectorInitiatedSubscription->max_buffer_size_.getValue());

Review Comment:
   The other log uses the `{:#x}` format specifier, while this just uses the 
default `{}` for the same handle. Why?






[jira] [Commented] (NIFI-12217) In PutDatabaseRecord processor, when you try to insert a CLOB and a SQLException gets catched, the exception message gets lost

2023-10-12 Thread Alessandro Polselli (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17774402#comment-17774402
 ] 

Alessandro Polselli commented on NIFI-12217:


The
{code:java}
    } catch (SQLException e) {
    throw new IOException("Unable to parse data as CLOB/String 
" + value, e.getCause());
    }  {code}
was introduced by NIFI-6061

> In PutDatabaseRecord processor, when you try to insert a CLOB and a 
> SQLException gets catched, the exception message gets lost
> --
>
> Key: NIFI-12217
> URL: https://issues.apache.org/jira/browse/NIFI-12217
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.23.2
>Reporter: Alessandro Polselli
>Priority: Trivial
>  Labels: putdatabaserecord
>
> In PutDatabaseRecord processor, when you try to insert a CLOB and a 
> SQLException gets catched
> [https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java#L866C4-L866C4]
> the original Exception message (the most valuable part) gets completely lost, 
> because only its .getCause() is wrapped in a generic IOException that states 
> "Unable to parse data as CLOB/String", making it extremely difficult to 
> identify which is the real problem.
> In my case, the problem was something like "ORA-25153: Tablespace temporanea 
> vuota" but this valuable message wasn't logged at all.
>  
> I suggest to replace 
> {code:java}
>     } catch (SQLException e) {
>     throw new IOException("Unable to parse data as 
> CLOB/String " + value, e.getCause());
>     } {code}
> with
> {code:java}
>     } catch (SQLException e) {
>     throw new IOException("Unable to parse data as 
> CLOB/String " + value, e);
>     } {code}
>  
> Thank you





[jira] [Updated] (NIFI-12217) In PutDatabaseRecord processor, when you try to insert a CLOB and a SQLException gets catched, the exception message gets lost

2023-10-12 Thread Alessandro Polselli (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro Polselli updated NIFI-12217:
---
Labels: putdatabaserecord  (was: )

> In PutDatabaseRecord processor, when you try to insert a CLOB and a 
> SQLException gets catched, the exception message gets lost
> --
>
> Key: NIFI-12217
> URL: https://issues.apache.org/jira/browse/NIFI-12217
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.23.2
>Reporter: Alessandro Polselli
>Priority: Trivial
>  Labels: putdatabaserecord
>
> In PutDatabaseRecord processor, when you try to insert a CLOB and a 
> SQLException gets catched
> [https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java#L866C4-L866C4]
> the original Exception message (the most valuable part) gets completely lost, 
> because only its .getCause() is wrapped in a generic IOException that states 
> "Unable to parse data as CLOB/String", making it extremely difficult to 
> identify which is the real problem.
> In my case, the problem was something like "ORA-25153: Tablespace temporanea 
> vuota" but this valuable message wasn't logged at all.
>  
> I suggest to replace 
> {code:java}
>     } catch (SQLException e) {
>     throw new IOException("Unable to parse data as 
> CLOB/String " + value, e.getCause());
>     } {code}
> with
> {code:java}
>     } catch (SQLException e) {
>     throw new IOException("Unable to parse data as 
> CLOB/String " + value, e);
>     } {code}
>  
> Thank you





[jira] [Updated] (NIFI-12217) In PutDatabaseRecord processor, when you try to insert a CLOB and a SQLException gets catched, the exception message gets lost

2023-10-12 Thread Alessandro Polselli (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro Polselli updated NIFI-12217:
---
Description: 
In PutDatabaseRecord processor, when you try to insert a CLOB and a 
SQLException gets catched

[https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java#L866C4-L866C4]

the original Exception message (the most valuable part) gets completely lost, 
because only its .getCause() is wrapped in a generic IOException that states 
"Unable to parse data as CLOB/String", making it extremely difficult to 
identify which is the real problem.

In my case, the problem was something like "ORA-25153: Tablespace temporanea 
vuota" but this valuable message wasn't logged at all.

 

I suggest to replace 
{code:java}
    } catch (SQLException e) {
    throw new IOException("Unable to parse data as CLOB/String 
" + value, e.getCause());
    } {code}
with
{code:java}
    } catch (SQLException e) {
    throw new IOException("Unable to parse data as CLOB/String 
" + value, e);
    } {code}
 

Thank you

  was:
In PutDatabaseRecord processor, when you try to insert a CLOB and a 
SQLException gets catched

[https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java#L866C4-L866C4]

the original Exception message (the most valuable part) gets completely lost, 
because only its .getCause() is wrapped in a generic IOException that states 
"Unable to parse data as CLOB/String", making it extremely difficult to 
identify which is the real problem.

In my case, the problem was something like "ORA-25153: Tablespace temporanea 
vuota" but this valuable message wasn't logged at all.

 

I suggest to replace 
{code:java}
    throw new IOException("Unable to parse data as CLOB/String 
" + value, e.getCause());  {code}
with
{code:java}
    throw new IOException("Unable to parse data as CLOB/String 
" + value, e);  {code}
 

Thank you


> In PutDatabaseRecord processor, when you try to insert a CLOB and a 
> SQLException gets catched, the exception message gets lost
> --
>
> Key: NIFI-12217
> URL: https://issues.apache.org/jira/browse/NIFI-12217
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.23.2
>Reporter: Alessandro Polselli
>Priority: Trivial
>
> In PutDatabaseRecord processor, when you try to insert a CLOB and a 
> SQLException gets catched
> [https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java#L866C4-L866C4]
> the original Exception message (the most valuable part) gets completely lost, 
> because only its .getCause() is wrapped in a generic IOException that states 
> "Unable to parse data as CLOB/String", making it extremely difficult to 
> identify which is the real problem.
> In my case, the problem was something like "ORA-25153: Tablespace temporanea 
> vuota" but this valuable message wasn't logged at all.
>  
> I suggest to replace 
> {code:java}
>     } catch (SQLException e) {
>     throw new IOException("Unable to parse data as 
> CLOB/String " + value, e.getCause());
>     } {code}
> with
> {code:java}
>     } catch (SQLException e) {
>     throw new IOException("Unable to parse data as 
> CLOB/String " + value, e);
>     } {code}
>  
> Thank you





[jira] [Updated] (NIFI-12217) In PutDatabaseRecord processor, when you try to insert a CLOB and a SQLException gets catched, the exception message gets lost

2023-10-12 Thread Alessandro Polselli (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro Polselli updated NIFI-12217:
---
Priority: Trivial  (was: Major)

> In PutDatabaseRecord processor, when you try to insert a CLOB and a 
> SQLException gets catched, the exception message gets lost
> --
>
> Key: NIFI-12217
> URL: https://issues.apache.org/jira/browse/NIFI-12217
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.23.2
>Reporter: Alessandro Polselli
>Priority: Trivial
>
> In PutDatabaseRecord processor, when you try to insert a CLOB and a 
> SQLException gets catched
> [https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java#L866C4-L866C4]
> the original Exception message (the most valuable part) gets completely lost, 
> because only its .getCause() is wrapped in a generic IOException that states 
> "Unable to parse data as CLOB/String", making it extremely difficult to 
> identify which is the real problem.
> In my case, the problem was something like "ORA-25153: Tablespace temporanea 
> vuota" but this valuable message wasn't logged at all.
>  
> I suggest to replace 
> {code:java}
>     throw new IOException("Unable to parse data as 
> CLOB/String " + value, e.getCause());  {code}
> with
> {code:java}
>     throw new IOException("Unable to parse data as 
> CLOB/String " + value, e);  {code}
>  
> Thank you





[jira] [Created] (NIFI-12217) In PutDatabaseRecord processor, when you try to insert a CLOB and a SQLException gets catched, the exception message gets lost

2023-10-12 Thread Alessandro Polselli (Jira)
Alessandro Polselli created NIFI-12217:
--

 Summary: In PutDatabaseRecord processor, when you try to insert a 
CLOB and a SQLException gets catched, the exception message gets lost
 Key: NIFI-12217
 URL: https://issues.apache.org/jira/browse/NIFI-12217
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.23.2
Reporter: Alessandro Polselli


In the PutDatabaseRecord processor, when you try to insert a CLOB and a SQLException gets caught

[https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java#L866C4-L866C4]

the original Exception message (the most valuable part) gets completely lost, 
because only its .getCause() is wrapped in a generic IOException that states 
"Unable to parse data as CLOB/String", making it extremely difficult to 
identify which is the real problem.

In my case, the problem was something like "ORA-25153: Tablespace temporanea 
vuota" but this valuable message wasn't logged at all.

 

I suggest to replace 
{code:java}
    throw new IOException("Unable to parse data as CLOB/String 
" + value, e.getCause());  {code}
with
{code:java}
    throw new IOException("Unable to parse data as CLOB/String 
" + value, e);  {code}
 

Thank you





[jira] [Updated] (MINIFICPP-2247) Rewrite StringUtils::to_base64/from_base64 to satisfy clang-tidy

2023-10-12 Thread Marton Szasz (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Szasz updated MINIFICPP-2247:

Labels: Hacktoberfest beginner beginner-friendly firstbug  (was: 
Hacktoberfest)

> Rewrite StringUtils::to_base64/from_base64 to satisfy clang-tidy
> 
>
> Key: MINIFICPP-2247
> URL: https://issues.apache.org/jira/browse/MINIFICPP-2247
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Martin Zink
>Priority: Minor
>  Labels: Hacktoberfest, beginner, beginner-friendly, firstbug
>
> libminifi/src/utils/StringUtils.cpp contains a couple of functions that use c 
> style arrays which should be replaced by std::array, the changes are not 
> trivial so we probably want to create a separate PR instead of doing in 
> whichever PR that touches the file





[PR] MINIFICPP-2186 Remove workarounds no longer needed in Visual Studio 2022 [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


fgerlits opened a new pull request, #1679:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1679

   Also update the minimum compiler version to Visual Studio 2022, and remove the vs2019 CI job, since the project no longer compiles with Visual Studio 2019.
   
   https://issues.apache.org/jira/browse/MINIFICPP-2186
   
   ---
   
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [x] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically main)?
   
   - [x] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI 
results for build issues and submit an update to your PR as soon as possible.
   





[jira] [Updated] (MINIFICPP-2247) Rewrite StringUtils::to_base64/from_base64 to satisfy clang-tidy

2023-10-12 Thread Martin Zink (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martin Zink updated MINIFICPP-2247:
---
Labels: Hacktoberfest  (was: )

> Rewrite StringUtils::to_base64/from_base64 to satisfy clang-tidy
> 
>
> Key: MINIFICPP-2247
> URL: https://issues.apache.org/jira/browse/MINIFICPP-2247
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Martin Zink
>Priority: Minor
>  Labels: Hacktoberfest
>
> libminifi/src/utils/StringUtils.cpp contains a couple of functions that use c 
> style arrays which should be replaced by std::array, the changes are not 
> trivial so we probably want to create a separate PR instead of doing in 
> whichever PR that touches the file





[jira] [Created] (MINIFICPP-2247) Rewrite StringUtils::to_base64/from_base64 to satisfy clang-tidy

2023-10-12 Thread Martin Zink (Jira)
Martin Zink created MINIFICPP-2247:
--

 Summary: Rewrite StringUtils::to_base64/from_base64 to satisfy 
clang-tidy
 Key: MINIFICPP-2247
 URL: https://issues.apache.org/jira/browse/MINIFICPP-2247
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Martin Zink


libminifi/src/utils/StringUtils.cpp contains a couple of functions that use C-style arrays, which should be replaced by std::array. The changes are not trivial, so we probably want to create a separate PR instead of doing it in whichever PR touches the file.





Re: [PR] MINIFICPP-2239 Add bandwidth limit to InvokeHTTP processor [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


lordgamez commented on code in PR #1674:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1674#discussion_r1356403290


##
PROCESSORS.md:
##
@@ -1320,6 +1320,8 @@ In the list below, the names of required properties appear in bold. Any other pr
 | Always Output Response  | false | true<br/>false | Will force a response FlowFile to be generated and routed to the 'Response' relationship regardless of what the server status code received is |
 | Penalize on "No Retry"  | false | true<br/>false | Enabling this property will penalize FlowFiles that are routed to the "No Retry" relationship. |
 | **Invalid HTTP Header Field Handling Strategy** | transform | fail<br/>transform<br/>drop | Indicates what should happen when an attribute's name is not a valid HTTP header field name. Options: transform - invalid characters are replaced, fail - flow file is transferred to failure, drop - drops invalid attributes from HTTP message |
+| Upload Speed Limit   |  |  | Maximum data per second to send (e.g. '500 KB/s'). Leave this empty if you want no limit. |
+| Download Speed Limit |  |  | Maximum data per second to receive (e.g. '500 KB/s'). Leave this empty if you want no limit. |

Review Comment:
   Updated in d305cce439c0b60fac04b646b79f46f3a4e68d54
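The new Upload/Download Speed Limit properties accept values like "500 KB/s". The parser below is purely a hypothetical sketch (parseSpeedLimit is not MiNiFi's actual code, and it assumes decimal 1000-based multipliers, where the real implementation may use 1024-based data-size units); it shows how such a string could be turned into a bytes-per-second limit, with an empty value meaning "no limit":

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <sstream>
#include <string>

// Hypothetical parser for values such as "500 KB/s"; returns bytes per second,
// or std::nullopt for an empty value (no limit) or an unparseable one.
std::optional<uint64_t> parseSpeedLimit(const std::string& value) {
  if (value.empty()) return std::nullopt;  // empty means "no limit"
  std::istringstream in(value);
  double amount = 0;
  std::string unit;
  if (!(in >> amount >> unit)) return std::nullopt;
  uint64_t multiplier = 0;
  if (unit == "B/s")       multiplier = 1;
  else if (unit == "KB/s") multiplier = 1000ULL;
  else if (unit == "MB/s") multiplier = 1000ULL * 1000;
  else if (unit == "GB/s") multiplier = 1000ULL * 1000 * 1000;
  else return std::nullopt;  // unknown unit
  return static_cast<uint64_t>(amount * static_cast<double>(multiplier));
}
```

The optional return lets the caller distinguish "no limit configured" from an invalid value, which maps naturally onto an optional processor property.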






Re: [PR] MINIFICPP-2239 Add bandwidth limit to InvokeHTTP processor [nifi-minifi-cpp]

2023-10-12 Thread via GitHub


lordgamez commented on code in PR #1674:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1674#discussion_r1356398691


##
libminifi/include/core/PropertyType.h:
##
@@ -319,6 +319,24 @@ class TimePeriodPropertyType : public PropertyType {
   }
 };
 
+class DataTransferSpeedPropertyType : public PropertyType {
+ public:
+  constexpr ~DataTransferSpeedPropertyType() override {}  // NOLINT see 
comment at parent
+
+  [[nodiscard]] std::string_view getValidatorName() const override { return 
"DATA_SIZE_VALIDATOR"; }

Review Comment:
   It seems that some C2 servers do not ignore the new validator, but return an 
error on it, so I removed the validator for this type in 
2ec3285238625ebd7348f517d14cf6840f456351






[jira] [Created] (NIFI-12216) If there is an update in Keycloak user ID, Nifi needs to be re-deployed/upgraded

2023-10-12 Thread Jeetendra G Vasisht (Jira)
Jeetendra G Vasisht created NIFI-12216:
--

 Summary: If there is an update in Keycloak user ID, Nifi needs to 
be re-deployed/upgraded
 Key: NIFI-12216
 URL: https://issues.apache.org/jira/browse/NIFI-12216
 Project: Apache NiFi
  Issue Type: Bug
  Components: Configuration Management
Affects Versions: 1.23.2
Reporter: Jeetendra G Vasisht


If there is an update to a Keycloak user ID, NiFi needs to be 
re-deployed/upgraded. Although NiFi uses the username/group name together with 
the Keycloak integration, the user will see a login failure.


