[GitHub] [nifi] patalwell commented on a diff in pull request #5905: NiFi-9817 Add a Validator for the PutCloudWatchMetric Processor's Unit Field

2022-04-07 Thread GitBox


patalwell commented on code in PR #5905:
URL: https://github.com/apache/nifi/pull/5905#discussion_r845617286


##
nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/test/java/org/apache/nifi/processors/aws/cloudwatch/TestPutCloudWatchMetric.java:
##
@@ -264,27 +265,34 @@ public void testMetricExpressionInvalidRoutesToFailure() throws Exception {
 runner.enqueue(new byte[] {}, attributes);
 runner.run();
 
-assertEquals(0, mockPutCloudWatchMetric.putMetricDataCallCount);
+Assert.assertEquals(0, mockPutCloudWatchMetric.putMetricDataCallCount);
 runner.assertAllFlowFilesTransferred(PutCloudWatchMetric.REL_FAILURE, 1);
 }
 
-@Test
-public void testInvalidUnitRoutesToFailure() throws Exception {
+@ParameterizedTest
+@CsvSource({"nan","percent","count"})
+public void testInvalidUnit(String unit) throws Exception {
 MockPutCloudWatchMetric mockPutCloudWatchMetric = new MockPutCloudWatchMetric();
-mockPutCloudWatchMetric.throwException = new InvalidParameterValueException("Unit error message");
 final TestRunner runner = TestRunners.newTestRunner(mockPutCloudWatchMetric);
 
 runner.setProperty(PutCloudWatchMetric.NAMESPACE, "TestNamespace");
 runner.setProperty(PutCloudWatchMetric.METRIC_NAME, "TestMetric");
-runner.setProperty(PutCloudWatchMetric.UNIT, "BogusUnit");
-runner.setProperty(PutCloudWatchMetric.VALUE, "1");
-runner.assertValid();
+runner.setProperty(PutCloudWatchMetric.UNIT, unit);
+runner.setProperty(PutCloudWatchMetric.VALUE, "1.0");
+runner.assertNotValid();
+}
 
-runner.enqueue(new byte[] {});
-runner.run();
+@ParameterizedTest
+@CsvSource({"Count","Bytes","Percent"})

Review Comment:
   How would you like me to import the Unit set from PutCloudWatchMetric? I just 
made a getter for it.
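
   For reference, a minimal sketch (illustrative, not necessarily the PR's approach) of deriving the allowed unit set from the AWS SDK's `StandardUnit` enum instead of exposing a getter:

   ```java
   import java.util.Set;
   import java.util.stream.Collectors;
   import java.util.stream.Stream;

   import com.amazonaws.services.cloudwatch.model.StandardUnit;

   final class UnitValidatorSketch {
       // Valid unit names ("Count", "Bytes", "Percent", ...) from the SDK enum.
       private static final Set<String> VALID_UNITS = Stream.of(StandardUnit.values())
               .map(StandardUnit::toString)
               .collect(Collectors.toSet());

       static boolean isValidUnit(final String unit) {
           return VALID_UNITS.contains(unit);
       }
   }
   ```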



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi] greyp9 commented on a diff in pull request #5941: NIFI-9884 - JacksonCSVRecordReader ignores specified encoding

2022-04-07 Thread GitBox


greyp9 commented on code in PR #5941:
URL: https://github.com/apache/nifi/pull/5941#discussion_r845596500


##
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/test/java/org/apache/nifi/csv/TestJacksonCSVRecordReader.java:
##
@@ -69,7 +70,7 @@ public void testUTF8() throws IOException, MalformedRecordException {
 fields.add(new RecordField("name", RecordFieldType.STRING.getDataType()));
 final RecordSchema schema = new SimpleRecordSchema(fields);
 
-try (final InputStream bais = new ByteArrayInputStream(text.getBytes());
+try (final InputStream bais = new ByteArrayInputStream(text.getBytes(StandardCharsets.UTF_8));

Review Comment:
   > Does this fail without the fix above? Wondering if we should have another 
unit test that specifies a different charset
   
   It would fail if the test were run in a process with `-Dfile.encoding` other 
than UTF-8. I noticed it on the GitHub CI (Windows node):
   - https://github.com/greyp9/nifi/runs/5843639230?check_suite_focus=true
   
   Good call; another test might be useful.  Of the choices, ISO_8859_1 would 
probably work best:
   - 
https://docs.oracle.com/javase/7/docs/api/java/nio/charset/StandardCharsets.html
   
   I'll have a look.
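
   A rough sketch of what that additional test could look like (the `createReader` helper and exact constructor wiring are assumptions, not the actual test utilities):

   ```java
   @Test
   public void testISO8859() throws IOException, MalformedRecordException {
       // "é" is a single byte (0xE9) in ISO-8859-1 but two bytes in UTF-8, so this
       // fails if the reader ignores the configured charset.
       final String text = "id,name\n1,Ren\u00e9";
       final List<RecordField> fields = new ArrayList<>();
       fields.add(new RecordField("id", RecordFieldType.STRING.getDataType()));
       fields.add(new RecordField("name", RecordFieldType.STRING.getDataType()));
       final RecordSchema schema = new SimpleRecordSchema(fields);

       try (final InputStream bais = new ByteArrayInputStream(text.getBytes(StandardCharsets.ISO_8859_1))) {
           // createReader is a hypothetical helper wiring the schema and charset
           // into JacksonCSVRecordReader.
           final RecordReader reader = createReader(bais, schema, StandardCharsets.ISO_8859_1.name());
           final Record record = reader.nextRecord();
           assertEquals("Ren\u00e9", record.getValue("name"));
       }
   }
   ```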



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (NIFI-9893) Ensure orderly cluster node removal on node delete via UI

2022-04-07 Thread Paul Grey (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Grey updated NIFI-9893:

Status: Patch Available  (was: In Progress)

https://github.com/apache/nifi/pull/5946/


> Ensure orderly cluster node removal on node delete via UI
> -
>
> Key: NIFI-9893
> URL: https://issues.apache.org/jira/browse/NIFI-9893
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Paul Grey
>Assignee: Paul Grey
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When a NiFi cluster node is deleted (via the UI), a series of steps is 
> executed to update the cluster state.  If the node is being deleted due to 
> node process failure, then any attempt to communicate with that node will 
> fail, causing an exception. 
> In `NodeClusterCoordinator.removeNode()`, the sequence of steps could be 
> improved to perform the deletion before the node participants are notified.  
> This reduces the chance that node failure will prevent the UI operation from 
> completing.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [nifi] greyp9 opened a new pull request, #5946: NIFI-9893 - Ensure orderly cluster node removal on node delete via UI

2022-04-07 Thread GitBox


greyp9 opened a new pull request, #5946:
URL: https://github.com/apache/nifi/pull/5946

    Description of PR
   
   Alter the order of steps executed when UI cluster node deletion is requested. 
The node should be removed from the cluster state before any notifications are 
sent to the cluster participants. This ensures that communication failures do 
not cause the operation to fail.
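
   A hypothetical sketch of the reordering (method names are illustrative, not the actual `NodeClusterCoordinator` API):

   ```java
   class NodeRemovalSketch {
       void removeNode(final String nodeId) {
           // 1. Update the local cluster state first, so a communication failure
           //    with the departing node cannot abort the removal.
           removeNodeStatus(nodeId);

           // 2. Only then notify the remaining participants; a failure here no
           //    longer prevents the UI delete operation from completing.
           notifyOthersOfNodeRemoval(nodeId);
       }

       private void removeNodeStatus(final String nodeId) { /* hypothetical */ }

       private void notifyOthersOfNodeRemoval(final String nodeId) { /* hypothetical */ }
   }
   ```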
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [X] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [X] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [X] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [X] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
`.name` (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (NIFI-9893) Ensure orderly cluster node removal on node delete via UI

2022-04-07 Thread Paul Grey (Jira)
Paul Grey created NIFI-9893:
---

 Summary: Ensure orderly cluster node removal on node delete via UI
 Key: NIFI-9893
 URL: https://issues.apache.org/jira/browse/NIFI-9893
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Paul Grey
Assignee: Paul Grey


When a NiFi cluster node is deleted (via the UI), a series of steps is executed 
to update the cluster state.  If the node is being deleted due to node process 
failure, then any attempt to communicate with that node will fail, causing an 
exception. 

In `NodeClusterCoordinator.removeNode()`, the sequence of steps could be 
improved to perform the deletion before the node participants are notified.  
This reduces the chance that node failure will prevent the UI operation from 
completing.




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (NIFI-9871) Error Messages Repeat Stack Trace Causes

2022-04-07 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann updated NIFI-9871:
---
Status: Patch Available  (was: Open)

> Error Messages Repeat Stack Trace Causes
> 
>
> Key: NIFI-9871
> URL: https://issues.apache.org/jira/browse/NIFI-9871
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.16.0, 1.15.0, 1.14.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Application error log messages duplicate the exception cause and message as 
> shown in the following log and stack trace:
> {noformat}
> ERROR [Timer-Driven Process Thread-5] o.a.nifi.processors.standard.ListSFTP ListSFTP[id=fc08e081-ee32-3105-b09e-9b18a0b97dbb] Failed to process session due to org.apache.nifi.processors.standard.socket.ClientAuthenticationException: SSH Client authentication failed [127.0.0.1:22]: org.apache.nifi.processors.standard.socket.ClientAuthenticationException: SSH Client authentication failed [127.0.0.1:22]
> - Caused by: net.schmizz.sshj.userauth.UserAuthException: Exhausted available authentication methods
> org.apache.nifi.processors.standard.socket.ClientAuthenticationException: SSH Client authentication failed [127.0.0.1:22]
>   at org.apache.nifi.processors.standard.ssh.StandardSSHClientProvider.getClient(StandardSSHClientProvider.java:124)
>   at org.apache.nifi.processors.standard.util.SFTPTransfer.getSFTPClient(SFTPTransfer.java:598)
>   at org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:302)
>   at org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:264)
>   at org.apache.nifi.processors.standard.ListFileTransfer.performListing(ListFileTransfer.java:120)
>   at org.apache.nifi.processors.standard.ListSFTP.performListing(ListSFTP.java:151)
>   at org.apache.nifi.processors.standard.ListFileTransfer.performListing(ListFileTransfer.java:112)
>   at org.apache.nifi.processor.util.list.AbstractListProcessor.listByNoTracking(AbstractListProcessor.java:562)
>   at org.apache.nifi.processor.util.list.AbstractListProcessor.onTrigger(AbstractListProcessor.java:532)
>   at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>   at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1283)
>   at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)
>   at org.apache.nifi.controller.scheduling.AbstractTimeBasedSchedulingAgent.lambda$doScheduleOnce$0(AbstractTimeBasedSchedulingAgent.java:63)
>   at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:750)
> Caused by: net.schmizz.sshj.userauth.UserAuthException: Exhausted available authentication methods
>   at net.schmizz.sshj.SSHClient.auth(SSHClient.java:227)
>   at org.apache.nifi.processors.standard.ssh.StandardSSHClientProvider.getClient(StandardSSHClientProvider.java:121)
>   ... 20 common frames omitted
> {noformat}
> The log formatting should be corrected so that the message does not duplicate 
> the stack trace information. Bulletin messages should continue to include the 
> stack trace summary.
> The updated log message and stack trace should appear as follows:
> {noformat}
> ERROR [Timer-Driven Process Thread-5] o.a.nifi.processors.standard.ListSFTP ListSFTP[id=fc08e081-ee32-3105-b09e-9b18a0b97dbb] Failed to process session
> org.apache.nifi.processors.standard.socket.ClientAuthenticationException: SSH Client authentication failed [127.0.0.1:22]
>   at org.apache.nifi.processors.standard.ssh.StandardSSHClientProvider.getClient(StandardSSHClientProvider.java:124)
>   at org.apache.nifi.processors.standard.util.SFTPTransfer.getSFTPClient(SFTPTransfer.java:598)
>   at org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:302)
>   at 

[GitHub] [nifi] exceptionfactory opened a new pull request, #5945: NIFI-9871 Correct Component Stack Trace Logging

2022-04-07 Thread GitBox


exceptionfactory opened a new pull request, #5945:
URL: https://github.com/apache/nifi/pull/5945

    Description of PR
   
   NIFI-9871 Corrects stack trace logging for extension components to avoid 
sending summarized stack trace causes to the SLF4J Logger. This change 
preserves the stack trace summary in bulletin messages and avoids writing 
duplicate information to application log files.
   
   The `ConnectableTask` includes log wording adjustments to distinguish between 
termination, failure, and halting due to an uncaught exception.
   
   The updated `TestSimpleProcessLogger` includes several new test methods to 
improve coverage and ensure consistent formatting.
   
   The updated `SimpleProcessLogger` implementation of `ComponentLog` behaves 
as follows when logging an error:
   
   1. Bulletin Messages contain a simple message with a summary of causes:
   
   ```
   ListSFTP[id=d4ee5196-b059-3bab-4416-9f59a45cd7b0] Processing failed: org.apache.nifi.processors.standard.socket.ClientConnectException: SSH Client connection failed [192.168.1.1:22]
   - Caused by: com.exceptionfactory.socketbroker.BrokeredConnectException: Proxy Address [/192.168.1.100:] connection failed
   - Caused by: java.net.NoRouteToHostException: No route to host (Host unreachable)
   ```
   
   2. Application Log Messages contain a simple message and full stack trace 
without duplicating the summary of causes:
   
   ```
   2022-04-07 12:30:45,500 ERROR [Timer-Driven Process Thread-7] o.a.nifi.processors.standard.ListSFTP ListSFTP[id=d4ee5196-b059-3bab-4416-9f59a45cd7b0] Processing failed
   org.apache.nifi.processors.standard.socket.ClientConnectException: SSH Client connection failed [192.168.1.1:22]
       at org.apache.nifi.processors.standard.ssh.StandardSSHClientProvider.getClient(StandardSSHClientProvider.java:115)
       at org.apache.nifi.processors.standard.util.SFTPTransfer.getSFTPClient(SFTPTransfer.java:598)
       at org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:302)
       at org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:264)
       at org.apache.nifi.processors.standard.ListFileTransfer.performListing(ListFileTransfer.java:120)
       at org.apache.nifi.processors.standard.ListSFTP.performListing(ListSFTP.java:151)
       at org.apache.nifi.processors.standard.ListFileTransfer.performListing(ListFileTransfer.java:112)
       at org.apache.nifi.processor.util.list.AbstractListProcessor.listByNoTracking(AbstractListProcessor.java:562)
       at org.apache.nifi.processor.util.list.AbstractListProcessor.onTrigger(AbstractListProcessor.java:532)
       at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
       at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1283)
       at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)
       at org.apache.nifi.controller.scheduling.AbstractTimeBasedSchedulingAgent.lambda$doScheduleOnce$0(AbstractTimeBasedSchedulingAgent.java:63)
       at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
       at java.util.concurrent.FutureTask.run(FutureTask.java:266)
       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
       at java.lang.Thread.run(Thread.java:748)
   Caused by: com.exceptionfactory.socketbroker.BrokeredConnectException: Proxy Address [/192.168.1.100:] connection failed
       at com.exceptionfactory.socketbroker.BrokeredSocket.connect(BrokeredSocket.java:100)
       at net.schmizz.sshj.SocketClient.connect(SocketClient.java:138)
       at org.apache.nifi.processors.standard.ssh.StandardSSHClientProvider.getClient(StandardSSHClientProvider.java:112)
       ... 20 common frames omitted
   Caused by: java.net.NoRouteToHostException: No route to host (Host unreachable)
       at java.net.PlainSocketImpl.socketConnect(Native Method)
       at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
       at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
       at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
       at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
       at java.net.Socket.connect(Socket.java:607)
       at com.exceptionfactory.socketbroker.BrokeredSocket.connect(BrokeredSocket.java:97)
       ... 22 common frames omitted
   ```

[GitHub] [nifi] markap14 opened a new pull request, #5944: NIFI-9892: Updated Azure storage related processors to adhere to NiFi…

2022-04-07 Thread GitBox


markap14 opened a new pull request, #5944:
URL: https://github.com/apache/nifi/pull/5944

   … best practices and cleaned up code a bit. Fixed several integration tests.
   
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-XXXX._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
`.name` (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (NIFI-9892) Update Azure storage related processors to align with NiFi best practices

2022-04-07 Thread Mark Payne (Jira)
Mark Payne created NIFI-9892:


 Summary: Update Azure storage related processors to align with 
NiFi best practices
 Key: NIFI-9892
 URL: https://issues.apache.org/jira/browse/NIFI-9892
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Mark Payne
Assignee: Mark Payne


The Azure Storage related processors have been through many iterations and 
updates. At this point, they are not following best practices for NiFi 
processors and, as a result, make configuring the processors difficult for 
users. Specifically, I have found the following problems in using them:
 * Properties are ordered inconsistently. Some processors have Credentials 
Service as the first property, some as the 4th property. The ordering of 
'directory' vs 'blob name' vs 'file system' is random. Processors should take 
great care when ordering properties, exposing the most important properties 
first to make configuration as simple as possible for the user.
 * Default values are inconsistent. Some processors use a default value for 
"Blob", some don't. The same sort of inconsistency exists for many properties.
 * Default values are largely missing. It may not make sense to have a default 
value for the "Container" property of "ListAzureBlobStorage", but a default 
value absolutely should be set for "FetchAzureBlobStorage", as well as for 
move, delete, put, etc.
 * Poor default values. Some property values do have defaults. But they default 
to values like "${azure.blob}" for the filename when the NiFi Core Attribute of 
"filename" makes more sense.
 * Inconsistent property names. Some properties use the property name "Blob" 
while others use "Blob Name" to mean the same thing. There may be other, 
similar examples.
 * List processors do not populate core attributes. Listing processors should 
always populate attributes such as "filename" and "path", rather than just more 
specific attributes such as "azure.blob"
 * Abstract processors exist and implement `getSupportedPropertyDescriptors()`. 
This is an anti-pattern. While it makes sense in many cases to implement 
`Set<Relationship> getRelationships()`, an abstract class should NOT implement 
`List<PropertyDescriptor> getSupportedPropertyDescriptors()`. This is because 
the point of the abstract class is for others to extend it. Others that extend 
it will have more customized Lists of Property Descriptors. This often leads to 
a pattern of calling `super.getSupportedPropertyDescriptors()` and then adding 
specific properties to the end of the List. This is bad because, as noted 
above, properties should be ordered in a way that makes the most sense for that 
particular processor, adding the most important properties first and keeping 
like properties together.
 * Directory Name property is awkward and confusing. The Directory Name 
property is required for several processors. A value of "/" is invalid and no 
value can have a leading "/". To write to the root directory, the user must go 
in and click the checkbox for "Empty Value". The fact that this needed to be 
explicitly called out in the documentation is a key indicator that it violates 
user expectations. While the Azure client may not allow a leading `/`, the 
property should absolutely allow it. And the property should not be required, 
allowing an unset value to default to the root directory.
 * Code cleanup
 ** Processors use the old API for session read/write callbacks, then create an 
`AtomicReference<Exception>` to hold Exceptions so that they can be inspected 
later, etc. This can be cleaned up by using the newer API methods that return 
`InputStream` / `OutputStream` (see the sketch after this list).
 ** Code should mark variables `final` when possible.
 * Integration tests no longer work
 ** Some of the Integration tests do not work. Some work sometimes and fail 
intermittently. They need to either be fixed or deleted
 * FetchAzureDatalakeStorage emits both CONTENT_MODIFIED and FETCH provenance 
events. These two should not be emitted by the same processor, as FETCH implies 
that the content of the FlowFile was modified to match that of the data that 
was FETCHed.
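
A minimal sketch of the callback cleanup mentioned above (the surrounding class and the `consume` helper are hypothetical, not the actual Azure processor code):

{code:java}
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicReference;

import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.exception.ProcessException;

class SessionReadStyles {
    // Old style: the callback cannot throw checked exceptions, so errors are
    // smuggled out through an AtomicReference and inspected afterwards.
    void oldStyle(final ProcessSession session, final FlowFile flowFile) {
        final AtomicReference<Exception> failure = new AtomicReference<>();
        session.read(flowFile, in -> {
            try {
                consume(in);
            } catch (final Exception e) {
                failure.set(e);
            }
        });
        if (failure.get() != null) {
            throw new ProcessException(failure.get());
        }
    }

    // Newer style: ProcessSession.read(FlowFile) returns the InputStream
    // directly, so exceptions propagate naturally and the bookkeeping goes away.
    void newStyle(final ProcessSession session, final FlowFile flowFile) throws Exception {
        try (final InputStream in = session.read(flowFile)) {
            consume(in);
        }
    }

    // Hypothetical stand-in for the real per-FlowFile work.
    private void consume(final InputStream in) throws Exception {
        while (in.read() != -1) {
            // process bytes
        }
    }
}
{code}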



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (NIFI-9863) Controller Service for managing custom Grok patterns

2022-04-07 Thread Otto Fowler (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17519127#comment-17519127
 ] 

Otto Fowler commented on NIFI-9863:
---

A new reader may indeed be a better idea.
For the reader:

I'm not sure how you would do the selection in the UI; I would think a custom 
page/tab/view that lets you select the items.

The items should have a display name (editable in the configuration) and an 
actual id that doesn't change, etc.


> Controller Service for managing custom Grok patterns
> 
>
> Key: NIFI-9863
> URL: https://issues.apache.org/jira/browse/NIFI-9863
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Otto Fowler
>Priority: Major
>
> Managing custom Grok expressions in properties for the Grok processors or 
> Record readers is cumbersome and not ideal.
> Having a service that managed these expressions in a centralized and reusable 
> way would be a benefit to those using Grok patterns.
> This service would allow the configuration of some number of custom Grok 
> patterns as the service configuration.  The MVP would be manual entry, but 
> loading patterns from a file (upload to configuration?) or from some external 
> location could be allowed as well down the line.
> In use, it could be argued that the patterns should be loaded from something 
> like the schema registry.
> Consumers of the service should then be able to select the specific service 
> instance and then, using dependent properties, select which patterns provided 
> by the service to consume.
> To this end, it may be nice to have the service support pattern 'groups', 
> such that you can select all patterns for a group at once.  This would be the 
> easy button version of the linked multiple expressions to grok reader issue.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (NIFI-9862) Update JsonTreeReader to read Records from a Nested Array

2022-04-07 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17519103#comment-17519103
 ] 

David Handermann commented on NIFI-9862:


Thanks for the suggestions [~mattyb149]. The problem is that a nested array 
could contain a very large number of elements, which could consume large 
amounts of memory to read the initial record before splitting. For some use 
cases, the additional fields, such as {{total}} in the example, could be 
ignored.
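
For illustration, a minimal sketch (illustrative names, not the PR's actual classes) of how a Jackson streaming parser can be advanced to a named nested array so elements are read one at a time instead of materializing the wrapping object:

{code:java}
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.IOException;
import java.io.InputStream;

class NestedArrayRecordSource {
    private final JsonParser parser;

    NestedArrayRecordSource(final InputStream in, final String nestedFieldName) throws IOException {
        final JsonFactory factory = new JsonFactory();
        factory.setCodec(new ObjectMapper());
        parser = factory.createParser(in);
        // Advance token by token until the named field is found, then enter its array.
        while (!nestedFieldName.equals(parser.nextFieldName())) {
            if (!parser.hasCurrentToken()) {
                throw new IOException("Nested field [" + nestedFieldName + "] not found");
            }
        }
        if (parser.nextToken() != JsonToken.START_ARRAY) {
            throw new IOException("Nested field [" + nestedFieldName + "] is not an array");
        }
    }

    // Returns the next array element without ever holding the full array in
    // memory, or null when the array is exhausted.
    JsonNode nextElement() throws IOException {
        return parser.nextToken() == JsonToken.START_OBJECT ? (JsonNode) parser.readValueAsTree() : null;
    }
}
{code}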

> Update JsonTreeReader to read Records from a Nested Array
> -
>
> Key: NIFI-9862
> URL: https://issues.apache.org/jira/browse/NIFI-9862
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: David Handermann
>Assignee: Lehel Boér
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The {{JsonTreeReader}} leverages the Jackson JSON streaming API to read one 
> or more records from an {{InputStream}}. The supporting {{RecordReader}} 
> implementation expects input JSON to be formatted with an array as the 
> root element, or an object containing the entire Record. This approach 
> supports streamed reading of JSON objects contained within an array as the 
> root element, but does not support streaming of JSON objects contained within 
> an array nested inside a wrapping root object.
> Some services provide JSON responses that include multiple records in a 
> wrapping root object as follows:
> {noformat}
> {
>   "total": 2,
>   "records": [
> {
>   "id": 1
> },
> {
>   "id": 2
> }
>   ]
> }
> {noformat}
> In order to enable streamed processing of nested records, the 
> {{JsonTreeReader}} should be updated to support an optional property defining 
> the Property Name of a nested field containing records.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (NIFI-9862) Update JsonTreeReader to read Records from a Nested Array

2022-04-07 Thread Matt Burgess (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17519086#comment-17519086
 ] 

Matt Burgess commented on NIFI-9862:


Is there a workaround using ForkRecord, either with a "split" strategy (to get 
the elements) or "extract" (to keep the "total" field and have the element as a 
single object under the parent)?

> Update JsonTreeReader to read Records from a Nested Array
> -
>
> Key: NIFI-9862
> URL: https://issues.apache.org/jira/browse/NIFI-9862
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: David Handermann
>Assignee: Lehel Boér
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The {{JsonTreeReader}} leverages the Jackson JSON streaming API to read one 
> or more records from an {{InputStream}}. The supporting {{RecordReader}} 
> implementation expects input JSON to be formatted with an array as the 
> root element, or an object containing the entire Record. This approach 
> supports streamed reading of JSON objects contained within an array as the 
> root element, but does not support streaming of JSON objects contained within 
> an array nested inside a wrapping root object.
> Some services provide JSON responses that include multiple records in a 
> wrapping root object as follows:
> {noformat}
> {
>   "total": 2,
>   "records": [
> {
>   "id": 1
> },
> {
>   "id": 2
> }
>   ]
> }
> {noformat}
> In order to enable streamed processing of nested records, the 
> {{JsonTreeReader}} should be updated to support an optional property defining 
> the Property Name of a nested field containing records.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [nifi] mattyb149 commented on a diff in pull request #5941: NIFI-9884 - JacksonCSVRecordReader ignores specified encoding

2022-04-07 Thread GitBox


mattyb149 commented on code in PR #5941:
URL: https://github.com/apache/nifi/pull/5941#discussion_r845448314


##
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/test/java/org/apache/nifi/csv/TestJacksonCSVRecordReader.java:
##
@@ -69,7 +70,7 @@ public void testUTF8() throws IOException, MalformedRecordException {
 fields.add(new RecordField("name", RecordFieldType.STRING.getDataType()));
 final RecordSchema schema = new SimpleRecordSchema(fields);
 
-try (final InputStream bais = new ByteArrayInputStream(text.getBytes());
+try (final InputStream bais = new ByteArrayInputStream(text.getBytes(StandardCharsets.UTF_8));

Review Comment:
   Does this fail without the fix above? Wondering if we should have another 
unit test that specifies a different charset



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi] mattyb149 commented on a diff in pull request #5900: NIFI-7234 Standardized on Avro 1.11.0

2022-04-07 Thread GitBox


mattyb149 commented on code in PR #5900:
URL: https://github.com/apache/nifi/pull/5900#discussion_r845440339


##
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java:
##
@@ -326,7 +326,7 @@ private static Schema nullable(final Schema schema) {
 return Schema.createUnion(unionTypes);
 }
 
-return Schema.createUnion(Schema.create(Type.NULL), schema);
+return Schema.createUnion(schema, Schema.create(Type.NULL));

Review Comment:
   We should change inference to `[type, null]` ordering if it isn't in here already



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (NIFI-8677) Add endpoint suffix for ConsumeAzureEventHub Processor

2022-04-07 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-8677:
---
Fix Version/s: 1.16.1

> Add endpoint suffix for ConsumeAzureEventHub Processor
> --
>
> Key: NIFI-8677
> URL: https://issues.apache.org/jira/browse/NIFI-8677
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Timea Barna
>Assignee: Timea Barna
>Priority: Major
> Fix For: 1.17.0, 1.16.1
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> ConsumeAzureEventHub Processor does not work for any special regions 
> including US Government, China, and Germany. We need to find a way to add 
> endpoint suffix support.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (NIFI-9886) Upgrade MongoDB driver to 4.5.0

2022-04-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17519055#comment-17519055
 ] 

ASF subversion and git services commented on NIFI-9886:
---

Commit 340cefc3c997e73d8586c468ff0f1c3cc1c50071 in nifi's branch 
refs/heads/support/nifi-1.16 from Lance Kinley
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=340cefc3c9 ]

NIFI-9886 Upgrade MongoDB driver to version 4.5.0

Resolves performance issues that impact versions 4.4 and 4.3 of
the driver and adds support up through MongoDB 5.1
Add support for Java 17

This closes #5940

Signed-off-by: Mike Thomsen 


> Upgrade MongoDB driver to 4.5.0
> ---
>
> Key: NIFI-9886
> URL: https://issues.apache.org/jira/browse/NIFI-9886
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Lance Kinley
>Priority: Major
> Fix For: 1.17.0, 1.16.1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Update the MongoDB driver from 4.3.2 to 4.5.0
> Notable improvements include:
>  * Resolved performance issues that impacted versions 4.4 and 4.3 of the 
> driver. Performance in this version should be similar to performance in 4.2.
>  * Compatibility with MongoDB 5.1 and support for Java 17



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (NIFI-9887) Set Minimum Java Build Version to 1.8.0-251

2022-04-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17519056#comment-17519056
 ] 

ASF subversion and git services commented on NIFI-9887:
---

Commit 436bf943d92ca70290b1fe6508eaadbd4957 in nifi's branch 
refs/heads/support/nifi-1.16 from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=436bf943d9 ]

NIFI-9887 Set minimum Java build version to 1.8.0-251

- Set minimalJavaBuildVersion property for maven-enforcer-plugin configuration
- Updated README to mention Java 8 Update 251 in Minimum Requirements
- Disabled site-plugin from parent configuration

This closes #5942

Signed-off-by: Mike Thomsen 


> Set Minimum Java Build Version to 1.8.0-251
> ---
>
> Key: NIFI-9887
> URL: https://issues.apache.org/jira/browse/NIFI-9887
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.15.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 1.17.0, 1.16.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Improvements to JSON Web Token signing and verification required the use of 
> the RSASSA-PSS signing algorithm, necessitating a minimum Java version of JDK 
> 8 Update 251.
> The Maven build configuration should be updated to require a minimum Java 
> version using the {{maven-enforcer-plugin}}.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (NIFI-8677) Add endpoint suffix for ConsumeAzureEventHub Processor

2022-04-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17519054#comment-17519054
 ] 

ASF subversion and git services commented on NIFI-8677:
---

Commit ba4db79196e5b36fdece1fb91b572dfc96c6530f in nifi's branch 
refs/heads/support/nifi-1.16 from Timea Barna
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=ba4db79196 ]

NIFI-8677 Added endpoint suffix for Azure EventHub Processors

This closes #5303

Signed-off-by: Joey Frazee 


> Add endpoint suffix for ConsumeAzureEventHub Processor
> --
>
> Key: NIFI-8677
> URL: https://issues.apache.org/jira/browse/NIFI-8677
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Timea Barna
>Assignee: Timea Barna
>Priority: Major
> Fix For: 1.17.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> ConsumeAzureEventHub Processor does not work for any special regions 
> including US Government, China, and Germany. We need to find a way to add 
> endpoint suffix support.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (NIFI-9882) NullPointerException on startup from HtmlDocumentationWriter results in component documentation not being available

2022-04-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17519052#comment-17519052
 ] 

ASF subversion and git services commented on NIFI-9882:
---

Commit e6151762ce5e26465c0da5f0784ac5ed45d32443 in nifi's branch 
refs/heads/support/nifi-1.16 from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=e6151762ce ]

NIFI-9882 Updated HtmlDocumentationWriter to avoid writing null characters

This closes #5935

Signed-off-by: Mike Thomsen 


> NullPointerException on startup from HtmlDocumentationWriter results in 
> component documentation not being available
> ---
>
> Key: NIFI-9882
> URL: https://issues.apache.org/jira/browse/NIFI-9882
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: David Handermann
>Priority: Blocker
> Fix For: 1.17.0, 1.16.1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> On startup, we see this in the nifi-app.log:
> {code:java}
> 2022-04-06 11:32:42,656 WARN [main] o.apache.nifi.documentation.DocGenerator Unable to document: class org.apache.nifi.processors.mongodb.gridfs.PutGridFS
> java.lang.NullPointerException: null
>     at com.ctc.wstx.sw.BaseStreamWriter.writeCharacters(BaseStreamWriter.java:458)
>     at org.apache.nifi.documentation.html.HtmlDocumentationWriter.writeSimpleElement(HtmlDocumentationWriter.java:852)
>     at org.apache.nifi.documentation.html.HtmlDocumentationWriter.writeProperties(HtmlDocumentationWriter.java:518)
>     at org.apache.nifi.documentation.html.HtmlDocumentationWriter.writeBody(HtmlDocumentationWriter.java:160)
>     at org.apache.nifi.documentation.html.HtmlDocumentationWriter.write(HtmlDocumentationWriter.java:92)
>     at org.apache.nifi.documentation.DocGenerator.document(DocGenerator.java:142)
>     at org.apache.nifi.documentation.DocGenerator.documentConfigurableComponent(DocGenerator.java:104)
>     at org.apache.nifi.documentation.DocGenerator.generate(DocGenerator.java:65)
>     at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:1028)
>     at org.apache.nifi.NiFi.<init>(NiFi.java:170)
>     at org.apache.nifi.NiFi.<init>(NiFi.java:82)
>     at org.apache.nifi.NiFi.main(NiFi.java:330)
> 2022-04-06 11:32:42,657 WARN [main] o.apache.nifi.documentation.DocGenerator Unable to document: class org.apache.nifi.processors.standard.SegmentContent
> java.lang.NullPointerException: null
>     at com.ctc.wstx.sw.BaseStreamWriter.writeCharacters(BaseStreamWriter.java:458)
>     at org.apache.nifi.documentation.html.HtmlDocumentationWriter.writeSimpleElement(HtmlDocumentationWriter.java:852)
>     at org.apache.nifi.documentation.html.HtmlDocumentationWriter.writeProperties(HtmlDocumentationWriter.java:518)
>     at org.apache.nifi.documentation.html.HtmlDocumentationWriter.writeBody(HtmlDocumentationWriter.java:160)
>     at org.apache.nifi.documentation.html.HtmlDocumentationWriter.write(HtmlDocumentationWriter.java:92)
>     at org.apache.nifi.documentation.DocGenerator.document(DocGenerator.java:142)
>     at org.apache.nifi.documentation.DocGenerator.documentConfigurableComponent(DocGenerator.java:104)
>     at org.apache.nifi.documentation.DocGenerator.generate(DocGenerator.java:65)
>     at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:1028)
>     at org.apache.nifi.NiFi.<init>(NiFi.java:170)
>     at org.apache.nifi.NiFi.<init>(NiFi.java:82)
>     at org.apache.nifi.NiFi.main(NiFi.java:330) {code}
> For pretty much every component. As a result, the documentation is 
> unavailable when clicking "View Usage"
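
A hypothetical sketch of the guard implied by the fix ("avoid writing null characters"); the actual HtmlDocumentationWriter change may differ:

{code:java}
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamWriter;

final class NullSafeElementWriter {
    // Woodstox's writeCharacters(null) throws the NullPointerException seen in
    // the stack trace above, so substitute an empty string for null text.
    static void writeSimpleElement(final XMLStreamWriter writer, final String tag, final String text)
            throws XMLStreamException {
        writer.writeStartElement(tag);
        writer.writeCharacters(text == null ? "" : text);
        writer.writeEndElement();
    }
}
{code}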



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (NIFI-9881) Migrate JSON reader/writer services to Jackson 2.X

2022-04-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17519053#comment-17519053
 ] 

ASF subversion and git services commented on NIFI-9881:
---

Commit b97752954003451e2dedabf94c719273ddb038a6 in nifi's branch 
refs/heads/support/nifi-1.16 from Mike Thomsen
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=b977529540 ]

NIFI-9881 Refactored the JSON services to use Jackson 2

This closes #5934

Signed-off-by: David Handermann 


> Migrate JSON reader/writer services to Jackson 2.X
> --
>
> Key: NIFI-9881
> URL: https://issues.apache.org/jira/browse/NIFI-9881
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Major
> Fix For: 1.17.0, 1.16.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Update the JSON services in the standard services bundle to use Jackson 2.X 
> instead of 1.9



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [nifi] exceptionfactory commented on a diff in pull request #5900: NIFI-7234 Standardized on Avro 1.11.0

2022-04-07 Thread GitBox


exceptionfactory commented on code in PR #5900:
URL: https://github.com/apache/nifi/pull/5900#discussion_r844029996


##
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/pom.xml:
##
@@ -57,6 +56,11 @@
 <dependency>
     <groupId>org.apache.nifi</groupId>
     <artifactId>nifi-record</artifactId>
 </dependency>
+<dependency>
+    <groupId>com.fasterxml.jackson.core</groupId>
+    <artifactId>jackson-databind</artifactId>
+    <version>2.13.1</version>

Review Comment:
   This specific version can be removed so that it leverages the version that 
`jackson-bom` provides.
   ```suggestion
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi] mattyb149 commented on a diff in pull request #5900: NIFI-7234 Standardized on Avro 1.11.0

2022-04-07 Thread GitBox


mattyb149 commented on code in PR #5900:
URL: https://github.com/apache/nifi/pull/5900#discussion_r845388703


##
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/avro/AvroTypeUtil.java:
##
@@ -326,7 +326,7 @@ private static Schema nullable(final Schema schema) {
 return Schema.createUnion(unionTypes);
 }
 
-return Schema.createUnion(Schema.create(Type.NULL), schema);
+return Schema.createUnion(schema, Schema.create(Type.NULL));

Review Comment:
   I think we are OK upgrading to something that actually enforces the spec; 
our inference code will not generate default values, so the non-null type is 
supposed to come first. If the user has an invalid spec that got by previous 
versions' validation, it's something we can mention in Migration Guidance as 
something Avro is checking now.
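
   As a small illustration of the spec rule behind this (not NiFi code): Avro validates a field's default value against the first branch of a union, so `[type, null]` ordering is needed for fields with non-null defaults, while `[null, type]` permits only a null default.

   ```java
   import java.util.Arrays;

   import org.apache.avro.Schema;
   import org.apache.avro.SchemaBuilder;

   public class UnionOrdering {
       public static void main(String[] args) {
           // Non-null type first: a default like "unknown" is legal because it
           // matches the first union branch.
           final Schema stringFirst = Schema.createUnion(Arrays.asList(
                   Schema.create(Schema.Type.STRING), Schema.create(Schema.Type.NULL)));

           // Null first: only a null default is legal for such a field.
           final Schema nullFirst = Schema.createUnion(Arrays.asList(
                   Schema.create(Schema.Type.NULL), Schema.create(Schema.Type.STRING)));

           final Schema record = SchemaBuilder.record("Example").fields()
                   .name("name").type(stringFirst).withDefault("unknown")
                   .name("alias").type(nullFirst).withDefault(null)
                   .endRecord();
           System.out.println(record.toString(true));
       }
   }
   ```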



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi] exceptionfactory commented on a diff in pull request #5937: NIFI-9862: Update JsonTreeReader to read Records from a Nested Array

2022-04-07 Thread GitBox


exceptionfactory commented on code in PR #5937:
URL: https://github.com/apache/nifi/pull/5937#discussion_r845222070


##
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/json/AbstractJsonRowRecordReader.java:
##
@@ -61,11 +60,11 @@
 
 private static final JsonFactory jsonFactory = new JsonFactory();
 private static final ObjectMapper codec = new ObjectMapper();
+private JsonParser jsonParser;
+private JsonNode firstJsonNode;

Review Comment:
   Is there a reason these are no longer marked as `final`?



##
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/json/AbstractJsonRowRecordReader.java:
##
@@ -95,6 +100,41 @@ public AbstractJsonRowRecordReader(final InputStream in, final ComponentLog logg
 }
 }
 
+protected AbstractJsonRowRecordReader(final InputStream in, final ComponentLog logger, final String dateFormat, final String timeFormat, final String timestampFormat,
+                                      final String skipToNestedJsonField) throws IOException, MalformedRecordException {

Review Comment:
   Minor naming suggestion, since the class context is already JSON, `Json` is 
not necessary in the variable name.
   ```suggestion
                                         final String nestedFieldName) throws IOException, MalformedRecordException {
   ```



##
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/json/JsonRecordSource.java:
##
@@ -19,37 +19,59 @@
 import com.fasterxml.jackson.core.JsonFactory;
 import com.fasterxml.jackson.core.JsonParser;
 import com.fasterxml.jackson.core.JsonToken;
+import com.fasterxml.jackson.core.io.SerializedString;
 import com.fasterxml.jackson.databind.JsonNode;
 import com.fasterxml.jackson.databind.ObjectMapper;
 import org.apache.nifi.schema.inference.RecordSource;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
 import java.io.InputStream;
 
 public class JsonRecordSource implements RecordSource<JsonNode> {
 private static final Logger logger = LoggerFactory.getLogger(JsonRecordSource.class);
 private static final JsonFactory jsonFactory;
 private final JsonParser jsonParser;
 private final String skipToNestedJsonField;
 
 static {
 jsonFactory = new JsonFactory();
 jsonFactory.setCodec(new ObjectMapper());
 }
 
-public JsonRecordSource(final InputStream in) throws IOException {
-jsonParser = jsonFactory.createJsonParser(in);
+public JsonRecordSource(final InputStream in, final String skipToNestedJsonField) throws IOException {
+jsonParser = jsonFactory.createParser(in);
+this.skipToNestedJsonField = skipToNestedJsonField;
+}
+
+@Override
+public void init() throws IOException {
+if (skipToNestedJsonField != null) {
+while (!jsonParser.nextFieldName(new SerializedString(skipToNestedJsonField))) {

Review Comment:
   As mentioned in the RecordReader, `SerializedString` should be declared and 
reused.



##
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/json/AbstractJsonRowRecordReader.java:
##
@@ -95,6 +100,41 @@ public AbstractJsonRowRecordReader(final InputStream in, final ComponentLog logg
 }
 }
 
+protected AbstractJsonRowRecordReader(final InputStream in, final ComponentLog logger, final String dateFormat, final String timeFormat, final String timestampFormat,
+                                      final String skipToNestedJsonField) throws IOException, MalformedRecordException {
+
+this(logger, dateFormat, timeFormat, timestampFormat);
+
+try {
+jsonParser = jsonFactory.createParser(in);
+jsonParser.setCodec(codec);
+
+if (skipToNestedJsonField != null) {
+while (!jsonParser.nextFieldName(new SerializedString(skipToNestedJsonField))) {
+// go to nested field if specified
+if (!jsonParser.hasCurrentToken()) {
+throw new IOException("The defined skipTo json field is not found when processing json as NiFi record.");
+}
+}
+logger.debug("Skipped to specified json field [{}] when processing json as NiFI record.", skipToNestedJsonField);
+}
+
+JsonToken token = jsonParser.nextToken();
+if (skipToNestedJsonField != null && !jsonParser.isExpectedStartArrayToken() && token != JsonToken.START_OBJECT) {
+logger.debug("Specified json field [{}] to skip to is not found. Schema 

[jira] [Created] (NIFI-9891) Add documentation of Parameter Context inheritance

2022-04-07 Thread Mark Bean (Jira)
Mark Bean created NIFI-9891:
---

 Summary: Add documentation of Parameter Context inheritance
 Key: NIFI-9891
 URL: https://issues.apache.org/jira/browse/NIFI-9891
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Documentation & Website
Affects Versions: 1.16.0
Reporter: Mark Bean


A new feature supporting inheritance of Parameter Contexts was added in version 
1.16.0. Details of its usage should be added to the User Guide.

 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [nifi] Lehel44 commented on pull request #5088: NIFI-3320: SendTrapSNMP and ListenTrapSNMP processors added.

2022-04-07 Thread GitBox


Lehel44 commented on PR #5088:
URL: https://github.com/apache/nifi/pull/5088#issuecomment-1091899299

   @esend7881 Thanks for the feedback. I can see that the user conf file is 
required with SNMPv3 even if the security level is changed to _noAuthNoPriv_, 
which might be unintentional. I'll have a look and get back to you soon.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (MINIFICPP-1782) Upgrade and verify interworking with latest NiFi version

2022-04-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gábor Gyimesi resolved MINIFICPP-1782.
--
Fix Version/s: 0.12.0
   Resolution: Fixed

> Upgrade and verify interworking with latest NiFi version
> 
>
> Key: MINIFICPP-1782
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1782
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Gábor Gyimesi
>Assignee: Gábor Gyimesi
>Priority: Minor
> Fix For: 0.12.0
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Site to site tests are using an old 1.7 version of NiFi. This should be 
> upgraded to the latest released version. We can also add a test for 
> MiNiFi-NiFi interworking through HTTP requests.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (MINIFICPP-1793) Fix GCP toggling in bootstrap.sh

2022-04-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gábor Gyimesi resolved MINIFICPP-1793.
--
Fix Version/s: 0.12.0
   Resolution: Fixed

> Fix GCP toggling in bootstrap.sh
> 
>
> Key: MINIFICPP-1793
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1793
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Gábor Gyimesi
>Assignee: Gábor Gyimesi
>Priority: Trivial
> Fix For: 0.12.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [nifi-minifi-cpp] fgerlits closed pull request #1286: MINIFICPP-1782 Upgrade and verify interworking with latest NiFi version

2022-04-07 Thread GitBox


fgerlits closed pull request #1286: MINIFICPP-1782 Upgrade and verify 
interworking with latest NiFi version
URL: https://github.com/apache/nifi-minifi-cpp/pull/1286


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi-minifi-cpp] fgerlits closed pull request #1295: MINIFICPP-1793 Fix GCP toggling in bootstrap.sh

2022-04-07 Thread GitBox


fgerlits closed pull request #1295: MINIFICPP-1793 Fix GCP toggling in 
bootstrap.sh
URL: https://github.com/apache/nifi-minifi-cpp/pull/1295


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (NIFI-9890) Record conversion fails in JoltTransformRecord

2022-04-07 Thread Martin Hynar (Jira)
Martin Hynar created NIFI-9890:
--

 Summary: Record conversion fails in JoltTransformRecord
 Key: NIFI-9890
 URL: https://issues.apache.org/jira/browse/NIFI-9890
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.15.3
Reporter: Martin Hynar


Hello, I am struggling with a JOLT transformation of a record flow file. The 
flow is quite simple:
 # Consume records from Kafka - ConsumeKafkaRecord, where the reader is Avro 
Reader (Confluent reference) and the writer is Json Record Set Writer - this 
step is OK and flowfiles are correctly sent to success.
 # Transform records - using JoltTransformRecord, with JsonTreeReader and Json 
Record Set Writer - the transformation is very simple: a new field is created 
from existing fields. On writing the result, the exception below is thrown. 
Worth mentioning, the problematic field is not used in the transformation at 
all.

The exception is this:

 
{code:java}
2022-04-07 03:32:40,170 ERROR [Timer-Driven Process Thread-1] o.a.n.p.jolt.record.JoltTransformRecord JoltTransformRecord[id=2ea633ad-c1f3-1909--686dd600] Unable to transform StandardFlowFileRecord[uuid=e32039eb-e7f9-46ea-a7a4-ad9cf3709b71,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1649063785609-18280, container=default, section=872], offset=381094, length=1143917],offset=0,name=e32039eb-e7f9-46ea-a7a4-ad9cf3709b71,size=1143917] due to org.apache.nifi.serialization.record.util.IllegalTypeConversionException: Cannot convert value [[Ljava.lang.Object;@d786ea4] of type CHOICE[ARRAY[STRING], ARRAY[INT]] to Map for field IgnoreIssues because the type is not supported: org.apache.nifi.serialization.record.util.IllegalTypeConversionException: Cannot convert value [[Ljava.lang.Object;@d786ea4] of type CHOICE[ARRAY[STRING], ARRAY[INT]] to Map for field IgnoreIssues because the type is not supported
org.apache.nifi.serialization.record.util.IllegalTypeConversionException: Cannot convert value [[Ljava.lang.Object;@d786ea4] of type CHOICE[ARRAY[STRING], ARRAY[INT]] to Map for field IgnoreIssues because the type is not supported
    at org.apache.nifi.serialization.record.util.DataTypeUtils.convertRecordFieldtoObject(DataTypeUtils.java:858)
    at org.apache.nifi.processors.jolt.record.JoltTransformRecord.transform(JoltTransformRecord.java:409)
    at org.apache.nifi.processors.jolt.record.JoltTransformRecord.onTrigger(JoltTransformRecord.java:334)
    at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
    at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1273)
    at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)
    at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:103)
    at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
    at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834) {code}
 

What I found about the field is that in the record, there are these values
{noformat}
  "IgnoreIssues" : [ ],
  "IgnoreIssues" : [ ],
  "IgnoreIssues" : [ 0 ],
  "IgnoreIssues" : [ 0 ],
  "IgnoreIssues" : [ ],
  "IgnoreIssues" : [ ],
{noformat}
so: single-value arrays mixed with empty arrays.

Also, I tried redirecting all failed flow files into a SplitRecord processor with max records = 1. When these single-record flow files were returned to the same transformation, they passed.

The array is very often empty, and I do have flow files that pass the transformation; I think those are the ones where all records have an empty array in this field.
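
For anyone trying to reproduce this outside a flow, a minimal sketch against the nifi-record API (the field's inferred type and the failing method are taken from the stack trace above; the wrapper class is hypothetical, and that this call alone raises the exception is an assumption based on that trace):
{code:java}
import java.util.Arrays;

import org.apache.nifi.serialization.record.DataType;
import org.apache.nifi.serialization.record.RecordFieldType;
import org.apache.nifi.serialization.record.util.DataTypeUtils;

public class IgnoreIssuesRepro {
    public static void main(String[] args) {
        // CHOICE[ARRAY[STRING], ARRAY[INT]], as inferred for "IgnoreIssues"
        // when empty arrays are mixed with values like [ 0 ]
        final DataType choice = RecordFieldType.CHOICE.getChoiceDataType(Arrays.asList(
                RecordFieldType.ARRAY.getArrayDataType(RecordFieldType.STRING.getDataType()),
                RecordFieldType.ARRAY.getArrayDataType(RecordFieldType.INT.getDataType())));

        // an empty array value, as in "IgnoreIssues" : [ ]
        final Object value = new Object[0];

        // in 1.15.3 this is where IllegalTypeConversionException is thrown
        // (DataTypeUtils.convertRecordFieldtoObject, line 858 in the trace)
        System.out.println(DataTypeUtils.convertRecordFieldtoObject(value, choice));
    }
}
{code}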



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [nifi] tpalfy commented on a diff in pull request #5931: NIFI-9875 Fix: StandardProcessGroupSynchronizer mishandles output ports

2022-04-07 Thread GitBox


tpalfy commented on code in PR #5931:
URL: https://github.com/apache/nifi/pull/5931#discussion_r845093021


##
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/test/java/org/apache/nifi/integration/versioned/ImportFlowIT.java:
##
@@ -511,45 +511,45 @@ public void testUpdateFlowWithModifyingConnectionDeletingAndMovingPort() {
 final ProcessGroup groupA = createProcessGroup("group-a-id", "Group A", getRootGroup());
 
 //Create Process Group B under Process Group A
-final ProcessGroup groupB = createProcessGroup("group-b-id", "Group B", groupA);
+final ProcessGroup groupBunderA = createProcessGroup("group-b-id", "Group B", groupA);
 
 //Add Input port under Process Group B
-final Port inputPort = getFlowController().getFlowManager().createLocalInputPort("input-port-id", "Input Port");
-groupB.addInputPort(inputPort);
+final Port inputPortBThenStayThenDelete = getFlowController().getFlowManager().createLocalInputPort("input-port-id", "Input Port");
+groupBunderA.addInputPort(inputPortBThenStayThenDelete);
 
 //Add Processor 1 under Process Group A
-final ProcessorNode processor1 = createProcessorNode(GenerateProcessor.class, groupA);
+final ProcessorNode processorA1 = createProcessorNode(GenerateProcessor.class, groupA);
 
 //Add Processor 2 under Process Group A
-final ProcessorNode processor2 = createProcessorNode(GenerateProcessor.class, groupA);
+final ProcessorNode processorA2 = createProcessorNode(GenerateProcessor.class, groupA);
 
 //Add Output Port under Process Group A
-final Port outputPort = getFlowController().getFlowManager().createLocalOutputPort("output-port-id", "Output Port");
-groupA.addOutputPort(outputPort);
+final Port outputPortAThenB = getFlowController().getFlowManager().createLocalOutputPort("output-port-id", "Output Port");
+groupA.addOutputPort(outputPortAThenB);
 
 //Connect Processor 1 and Output Port as Connection 1
-final Connection connection1 = connect(groupA, processor1, outputPort, processor1.getRelationships());
+final Connection connectionProcessorA1ToOutputPortAThenProcessorA2 = connect(groupA, processorA1, outputPortAThenB, processorA1.getRelationships());
 
 //Connect Processor 1 and Input Port as Connection 2
-final Connection connection2 = connect(groupA, processor1, inputPort, processor1.getRelationships());
+final Connection connectionProcessorA1ToInputPortBThenStayThenDelete = connect(groupA, processorA1, inputPortBThenStayThenDelete, processorA1.getRelationships());

Review Comment:
   I had a hard time piecing together what was actually going on in this test. I had to rename the components just to be able to keep track of the flow while trying to understand the test case.
   
   I think that, in general, by far the best name for a variable is one that describes its purpose.
   In tests the purpose is usually different than in production; especially in this test, the components' purpose is to play out a scenario. Hence the naming.
   
   I understand that at first glance the names look verbose.
   But when one wants to actually understand the multi-step test scenario, they can help tremendously.



[GitHub] [nifi] tpalfy commented on a diff in pull request #5931: NIFI-9875 Fix: StandardProcessGroupSynchronizer mishandles output ports

2022-04-07 Thread GitBox


tpalfy commented on code in PR #5931:
URL: https://github.com/apache/nifi/pull/5931#discussion_r845086854


##
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/test/java/org/apache/nifi/integration/versioned/ImportFlowIT.java:
##
@@ -431,18 +431,18 @@ public void testUpdateFlowWithInputPortMovedFromGroupAToGroupB() {
 assertTrue(groupA.getProcessors().isEmpty());
 assertTrue(groupA.getConnections().isEmpty());
 assertEquals(1, groupA.getInputPorts().size());
-assertEquals(port.getVersionedComponentId(), groupA.getInputPorts().stream().findFirst().get().getVersionedComponentId());
+assertEquals(port.getName(), groupA.getInputPorts().stream().findFirst().get().getName());
 
 //Change Process Group A version to Version 2
 groupA.updateFlow(version2, null, false, true, true);
 
 //Process Group A should have a Process Group, a Processor and a Connection and no Input Ports
 assertEquals(1, groupA.getProcessGroups().size());
-assertEquals(groupB.getVersionedComponentId(), groupA.getProcessGroups().stream().findFirst().get().getVersionedComponentId());
+assertEquals(groupB.getName(), groupA.getProcessGroups().stream().findFirst().get().getName());
 assertEquals(1, groupA.getProcessors().size());
-assertEquals(processor.getVersionedComponentId(), groupA.getProcessors().stream().findFirst().get().getVersionedComponentId());
+assertEquals(processor.getName(), groupA.getProcessors().stream().findFirst().get().getName());
 assertEquals(1, groupA.getConnections().size());
-assertEquals(connection.getVersionedComponentId(), groupA.getConnections().stream().findFirst().get().getVersionedComponentId());
+assertEquals(connection.getName(), groupA.getConnections().stream().findFirst().get().getName());

Review Comment:
   The problem is that the `versionedComponentId` was _not_ handled by the production code. That part is not in the scope of this test.
   Before this change, the test code itself copied those ids over. Basically the test simulated what the production code does somewhere, sometime, and checked _its own behaviour_.
   
   The test tries to correlate the mapped objects and their corresponding in-memory objects. The `name` property is perfect for this.
   






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi] pgyori commented on pull request #5896: NIFI-9832: Fix disappearing XML element content when the element has attribute

2022-04-07 Thread GitBox


pgyori commented on PR #5896:
URL: https://github.com/apache/nifi/pull/5896#issuecomment-1091686272

   @markap14 I rebased, resolved the conflict, and force-pushed my modifications.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi-minifi-cpp] fgerlits commented on a diff in pull request #1271: MINIFICPP-1763 - Move extension inclusion logic into the extensions

2022-04-07 Thread GitBox


fgerlits commented on code in PR #1271:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1271#discussion_r845055249


##
libminifi/include/core/ConfigurableComponent.h:
##
@@ -56,6 +56,9 @@ class ConfigurableComponent {
   ConfigurableComponent& operator=(const ConfigurableComponent ) = 
delete;
   ConfigurableComponent& operator=(ConfigurableComponent &) = delete;
 
+  template>>
+  std::optional getProperty(const Property& property) const;

Review Comment:
   What is it for?
   You could add a commit with an explanation of this change, so whoever merges the PR can include it in the squashed commit message.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi-minifi-cpp] fgerlits commented on a diff in pull request #1271: MINIFICPP-1763 - Move extension inclusion logic into the extensions

2022-04-07 Thread GitBox


fgerlits commented on code in PR #1271:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1271#discussion_r845053025


##
CONTRIB.md:
##
@@ -103,22 +103,29 @@ are contributing a custom Processor or Controller 
Service, the mechanism to regi
 
 To use this include REGISTER_RESOURCE(YourClassName); in your header file. The 
default class loader will make instances of YourClassName available for 
inclusion.  
 
-The extensions sub-directory allows you to contribute conditionally built 
extensions. An example of the GPS extension will provide an example. In this a 
conditional
-allows flags to specify that your extension is to be include or excluded by 
default. In this example -DENABLE_GPS=ON must be specified by the builder to  
include it.
-The function call will then create an extension that will automatically be 
while main is built. The first argument of createExtension will be the target
-reference that is automatically used for documentation and linking. The second 
and third arguments are used for printing information on what was built or 
linked in
-the consumer's build. The last two argument represent where the extension and 
tests exist. 
-
-   if (ENABLE_ALL OR ENABLE_GPS)
-   createExtension(GPS-EXTENSION "GPS EXTENSIONS" "Enables LibGPS 
Functionality and the GetGPS processor." "extensions/gps" 
"${TEST_DIR}/gps-tests")
-   endif()
-
-   
-Once the createExtension target is made in the root CMakeLists.txt , you may 
load your dependencies and build your targets. Once you are finished defining 
your build
-and link commands, you must set your target reference to a target within your 
build. In this case, the previously mentioned GPS-EXTENSION will be assigned to 
minifi-gps.
-The next call register_extension will ensure that minifi-gps is linked 
appropriately for inclusion into the final binary.  
-   
-   SET (GPS-EXTENSION minifi-gps PARENT_SCOPE)
-   register_extension(minifi-gps)
-   
-   
+The extensions sub-directory allows you to contribute conditionally built 
extensions. The system adds all subdirectories in `extensions/*` that contain
+a `CMakeLists.txt` file. It is up to the extension creator's discretion how 
they handle cmake flags.
+It is important that `register_extension` be called at the end of the setup, 
for the extension to be made available to other stages of the build process.
+
+```
+# extensions/gps/CMakeLists.txt
+
+# the author chooses to look for the explicit compilation request
+if (NOT ENABLE_GPS)
+  return()
+endif()
+
+#
+# extension definition goes here
+#
+
+# at the end we should announce our extension
+register_extension(minifi-gps "GPS EXTENSIONS" GPS-EXTENSION "Enables LibGPS 
Functionality and the GetGPS processor." "${TEST_DIR}/gps-tests")

Review Comment:
   yes



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (NIFI-9887) Set Minimum Java Build Version to 1.8.0-251

2022-04-07 Thread Mike Thomsen (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Thomsen updated NIFI-9887:
---
Fix Version/s: 1.17.0
   1.16.1
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Set Minimum Java Build Version to 1.8.0-251
> ---
>
> Key: NIFI-9887
> URL: https://issues.apache.org/jira/browse/NIFI-9887
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.15.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 1.17.0, 1.16.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Improvements to JSON Web Token signing and verification required the use of the 
> RSASSA-PSS signing algorithm, necessitating a minimum Java version of JDK 8 
> Update 251.
> The Maven build configuration should be updated to require a minimum Java 
> version using the {{maven-enforcer-plugin}}.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (NIFI-9887) Set Minimum Java Build Version to 1.8.0-251

2022-04-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17518789#comment-17518789
 ] 

ASF subversion and git services commented on NIFI-9887:
---

Commit af3375669c374ab0ca703aee42d4ae32cea10efc in nifi's branch 
refs/heads/main from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=af3375669c ]

NIFI-9887 Set minimum Java build version to 1.8.0-251

- Set minimalJavaBuildVersion property for maven-enforcer-plugin configuration
- Updated README to mention Java 8 Update 251 in Minimum Requirements
- Disabled site-plugin from parent configuration

This closes #5942

Signed-off-by: Mike Thomsen 


> Set Minimum Java Build Version to 1.8.0-251
> ---
>
> Key: NIFI-9887
> URL: https://issues.apache.org/jira/browse/NIFI-9887
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Affects Versions: 1.15.0
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Improvements to JSON Web Token signing and verification required the use of the 
> RSASSA-PSS signing algorithm, necessitating a minimum Java version of JDK 8 
> Update 251.
> The Maven build configuration should be updated to require a minimum Java 
> version using the {{maven-enforcer-plugin}}.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [nifi] asfgit closed pull request #5942: NIFI-9887 Set minimum Java build version to 1.8.0-251

2022-04-07 Thread GitBox


asfgit closed pull request #5942: NIFI-9887 Set minimum Java build version to 
1.8.0-251
URL: https://github.com/apache/nifi/pull/5942


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi-minifi-cpp] martinzink opened a new pull request, #1297: MINIFICPP-1744: Add FetchGCSObject

2022-04-07 Thread GitBox


martinzink opened a new pull request, #1297:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1297

   MINIFICPP-1745: Add DeleteGCSObject
   MINIFICPP-1746: Add ListGCSBucket processor
   
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically main)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI 
results for build issues and submit an update to your PR as soon as possible.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1271: MINIFICPP-1763 - Move extension inclusion logic into the extensions

2022-04-07 Thread GitBox


adamdebreceni commented on code in PR #1271:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1271#discussion_r844876255


##
libminifi/include/core/ConfigurableComponent.h:
##
@@ -56,6 +56,9 @@ class ConfigurableComponent {
   ConfigurableComponent& operator=(const ConfigurableComponent ) = 
delete;
   ConfigurableComponent& operator=(ConfigurableComponent &) = delete;
 
+  template>>
+  std::optional getProperty(const Property& property) const;

Review Comment:
   I tried to avoid opening a PR for these lines; should I remove this?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1271: MINIFICPP-1763 - Move extension inclusion logic into the extensions

2022-04-07 Thread GitBox


adamdebreceni commented on code in PR #1271:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1271#discussion_r844870498


##
extensions/systemd/CMakeLists.txt:
##
@@ -17,12 +17,19 @@
 # under the License.
 #
 
+if(CMAKE_SYSTEM_NAME STREQUAL "Linux")
+  option(ENABLE_SYSTEMD "Enables the systemd extension." ON)
+endif()

Review Comment:
   Agreed. At first I wanted to move (and did move) everything, including the `option(...)`s, but then moved them back; this one was duplicated here by accident, so I removed it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1271: MINIFICPP-1763 - Move extension inclusion logic into the extensions

2022-04-07 Thread GitBox


adamdebreceni commented on code in PR #1271:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1271#discussion_r844868073


##
CONTRIB.md:
##
@@ -103,22 +103,29 @@ are contributing a custom Processor or Controller 
Service, the mechanism to regi
 
 To use this include REGISTER_RESOURCE(YourClassName); in your header file. The 
default class loader will make instances of YourClassName available for 
inclusion.  
 
-The extensions sub-directory allows you to contribute conditionally built 
extensions. An example of the GPS extension will provide an example. In this a 
conditional
-allows flags to specify that your extension is to be include or excluded by 
default. In this example -DENABLE_GPS=ON must be specified by the builder to  
include it.
-The function call will then create an extension that will automatically be 
while main is built. The first argument of createExtension will be the target
-reference that is automatically used for documentation and linking. The second 
and third arguments are used for printing information on what was built or 
linked in
-the consumer's build. The last two argument represent where the extension and 
tests exist. 
-
-   if (ENABLE_ALL OR ENABLE_GPS)
-   createExtension(GPS-EXTENSION "GPS EXTENSIONS" "Enables LibGPS 
Functionality and the GetGPS processor." "extensions/gps" 
"${TEST_DIR}/gps-tests")
-   endif()
-
-   
-Once the createExtension target is made in the root CMakeLists.txt , you may 
load your dependencies and build your targets. Once you are finished defining 
your build
-and link commands, you must set your target reference to a target within your 
build. In this case, the previously mentioned GPS-EXTENSION will be assigned to 
minifi-gps.
-The next call register_extension will ensure that minifi-gps is linked 
appropriately for inclusion into the final binary.  
-   
-   SET (GPS-EXTENSION minifi-gps PARENT_SCOPE)
-   register_extension(minifi-gps)
-   
-   
+The extensions sub-directory allows you to contribute conditionally built 
extensions. The system adds all subdirectories in `extensions/*` that contain
+a `CMakeLists.txt` file. It is up to the extension creator's discretion how 
they handle cmake flags.
+It is important that `register_extension` be called at the end of the setup, 
for the extension to be made available to other stages of the build process.
+
+```
+# extensions/gps/CMakeLists.txt
+
+# the author chooses to look for the explicit compilation request
+if (NOT ENABLE_GPS)
+  return()
+endif()
+
+#
+# extension definition goes here
+#
+
+# at the end we should announce our extension
+register_extension(minifi-gps "GPS EXTENSIONS" GPS-EXTENSION "Enables LibGPS 
Functionality and the GetGPS processor." "${TEST_DIR}/gps-tests")

Review Comment:
   Do you mean `EXTENSION` vs. `EXTENSIONS`?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi-minifi-cpp] martinzink commented on a diff in pull request #1294: MINIFICPP-1771: Reworked ListenSyslog

2022-04-07 Thread GitBox


martinzink commented on code in PR #1294:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1294#discussion_r844845310


##
extensions/standard-processors/processors/ListenSyslog.cpp:
##
@@ -17,318 +14,283 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+
+
 #include "ListenSyslog.h"
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include "utils/TimeUtil.h"
-#include "utils/StringUtils.h"
 #include "core/ProcessContext.h"
 #include "core/ProcessSession.h"
-#include "core/TypedValues.h"
 #include "core/Resource.h"
 
-namespace org {
-namespace apache {
-namespace nifi {
-namespace minifi {
-namespace processors {
-#ifndef WIN32
-core::Property ListenSyslog::RecvBufSize(
-core::PropertyBuilder::createProperty("Receive Buffer 
Size")->withDescription("The size of each buffer used to receive Syslog 
messages.")->
-withDefaultValue("65507 B")->build());
-
-core::Property ListenSyslog::MaxSocketBufSize(
-core::PropertyBuilder::createProperty("Max Size of Socket 
Buffer")->withDescription("The maximum size of the socket buffer that should be 
used.")->withDefaultValue("1 MB")
-->build());
+namespace org::apache::nifi::minifi::processors {
 
-core::Property ListenSyslog::MaxConnections(
-core::PropertyBuilder::createProperty("Max Number of TCP 
Connections")->withDescription("The maximum number of concurrent connections to 
accept Syslog messages in TCP mode.")
-->withDefaultValue(2)->build());
+const core::Property ListenSyslog::Port(
+core::PropertyBuilder::createProperty("Listening Port")
+->withDescription("The port for Syslog communication.")
+->isRequired(true)
+->withDefaultValue(514, 
core::StandardValidators::get().LISTEN_PORT_VALIDATOR)->build());
 
-core::Property ListenSyslog::MaxBatchSize(
-core::PropertyBuilder::createProperty("Max Batch 
Size")->withDescription("The maximum number of Syslog events to add to a single 
FlowFile.")->withDefaultValue(1)->build());
+const core::Property ListenSyslog::ProtocolProperty(
+core::PropertyBuilder::createProperty("Protocol")
+->withDescription("The protocol for Syslog communication.")
+->isRequired(true)
+->withAllowableValues(Protocol::values())
+->withDefaultValue(toString(Protocol::UDP))
+->build());
 
-core::Property ListenSyslog::MessageDelimiter(
-core::PropertyBuilder::createProperty("Message 
Delimiter")->withDescription("Specifies the delimiter to place between Syslog 
messages when multiple "
-   
 "messages are bundled together (see  
core::Property).")->withDefaultValue("\n")->build());
+const core::Property ListenSyslog::MaxBatchSize(
+core::PropertyBuilder::createProperty("Max Batch Size")
+->withDescription("The maximum number of Syslog events to process at a 
time.")
+->withDefaultValue(500)
+->build());
 
-core::Property ListenSyslog::ParseMessages(
-core::PropertyBuilder::createProperty("Parse 
Messages")->withDescription("Indicates if the processor should parse the Syslog 
messages. If set to false, each outgoing FlowFile will only.")
+const core::Property ListenSyslog::ParseMessages(
+core::PropertyBuilder::createProperty("Parse Messages")
+->withDescription("Indicates if the processor should parse the Syslog 
messages. "
+  "If set to false, each outgoing FlowFile will only 
contain the sender, protocol, and port, and no additional attributes.")
 ->withDefaultValue(false)->build());
 
-core::Property ListenSyslog::Protocol(
-core::PropertyBuilder::createProperty("Protocol")->withDescription("The 
protocol for Syslog 
communication.")->withAllowableValue("UDP")->withAllowableValue("TCP")->withDefaultValue(
-"UDP")->build());
+const core::Property ListenSyslog::MaxQueueSize(
+core::PropertyBuilder::createProperty("Max Size of Message Queue")
+->withDescription("Maximum number of Syslog messages allowed to be 
buffered before processing them when the processor is triggered. "
+  "If the buffer full, the message is ignored. If set 
to zero the buffer is unlimited.")
+->withDefaultValue(0)->build());
 
-core::Property ListenSyslog::Port(
-core::PropertyBuilder::createProperty("Port")->withDescription("The port 
for Syslog communication")->withDefaultValue(514, 
core::StandardValidators::get().PORT_VALIDATOR)->build());
+const core::Relationship ListenSyslog::Success("success", "Incoming messages 
that match the expected format when parsing will be sent to this relationship. "
+  "When Parse Messages 
is set to false, all incoming message will be sent to this relationship.");
+const core::Relationship ListenSyslog::Invalid("invalid", "Incoming messages 
that do not 

[GitHub] [nifi-minifi-cpp] martinzink commented on a diff in pull request #1294: MINIFICPP-1771: Reworked ListenSyslog

2022-04-07 Thread GitBox


martinzink commented on code in PR #1294:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1294#discussion_r844844652


##
extensions/standard-processors/processors/ListenSyslog.cpp:
##
@@ -17,318 +14,283 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+
+
 #include "ListenSyslog.h"
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include "utils/TimeUtil.h"
-#include "utils/StringUtils.h"
 #include "core/ProcessContext.h"
 #include "core/ProcessSession.h"
-#include "core/TypedValues.h"
 #include "core/Resource.h"
 
-namespace org {
-namespace apache {
-namespace nifi {
-namespace minifi {
-namespace processors {
-#ifndef WIN32
-core::Property ListenSyslog::RecvBufSize(
-core::PropertyBuilder::createProperty("Receive Buffer 
Size")->withDescription("The size of each buffer used to receive Syslog 
messages.")->
-withDefaultValue("65507 B")->build());
-
-core::Property ListenSyslog::MaxSocketBufSize(
-core::PropertyBuilder::createProperty("Max Size of Socket 
Buffer")->withDescription("The maximum size of the socket buffer that should be 
used.")->withDefaultValue("1 MB")
-->build());
+namespace org::apache::nifi::minifi::processors {
 
-core::Property ListenSyslog::MaxConnections(
-core::PropertyBuilder::createProperty("Max Number of TCP 
Connections")->withDescription("The maximum number of concurrent connections to 
accept Syslog messages in TCP mode.")
-->withDefaultValue(2)->build());
+const core::Property ListenSyslog::Port(
+core::PropertyBuilder::createProperty("Listening Port")
+->withDescription("The port for Syslog communication.")
+->isRequired(true)
+->withDefaultValue(514, 
core::StandardValidators::get().LISTEN_PORT_VALIDATOR)->build());
 
-core::Property ListenSyslog::MaxBatchSize(
-core::PropertyBuilder::createProperty("Max Batch 
Size")->withDescription("The maximum number of Syslog events to add to a single 
FlowFile.")->withDefaultValue(1)->build());
+const core::Property ListenSyslog::ProtocolProperty(
+core::PropertyBuilder::createProperty("Protocol")
+->withDescription("The protocol for Syslog communication.")
+->isRequired(true)
+->withAllowableValues(Protocol::values())
+->withDefaultValue(toString(Protocol::UDP))
+->build());
 
-core::Property ListenSyslog::MessageDelimiter(
-core::PropertyBuilder::createProperty("Message 
Delimiter")->withDescription("Specifies the delimiter to place between Syslog 
messages when multiple "
-   
 "messages are bundled together (see  
core::Property).")->withDefaultValue("\n")->build());
+const core::Property ListenSyslog::MaxBatchSize(
+core::PropertyBuilder::createProperty("Max Batch Size")
+->withDescription("The maximum number of Syslog events to process at a 
time.")
+->withDefaultValue(500)
+->build());
 
-core::Property ListenSyslog::ParseMessages(
-core::PropertyBuilder::createProperty("Parse 
Messages")->withDescription("Indicates if the processor should parse the Syslog 
messages. If set to false, each outgoing FlowFile will only.")
+const core::Property ListenSyslog::ParseMessages(
+core::PropertyBuilder::createProperty("Parse Messages")
+->withDescription("Indicates if the processor should parse the Syslog 
messages. "
+  "If set to false, each outgoing FlowFile will only 
contain the sender, protocol, and port, and no additional attributes.")
 ->withDefaultValue(false)->build());
 
-core::Property ListenSyslog::Protocol(
-core::PropertyBuilder::createProperty("Protocol")->withDescription("The 
protocol for Syslog 
communication.")->withAllowableValue("UDP")->withAllowableValue("TCP")->withDefaultValue(
-"UDP")->build());
+const core::Property ListenSyslog::MaxQueueSize(
+core::PropertyBuilder::createProperty("Max Size of Message Queue")
+->withDescription("Maximum number of Syslog messages allowed to be 
buffered before processing them when the processor is triggered. "
+  "If the buffer full, the message is ignored. If set 
to zero the buffer is unlimited.")
+->withDefaultValue(0)->build());
 
-core::Property ListenSyslog::Port(
-core::PropertyBuilder::createProperty("Port")->withDescription("The port 
for Syslog communication")->withDefaultValue(514, 
core::StandardValidators::get().PORT_VALIDATOR)->build());
+const core::Relationship ListenSyslog::Success("success", "Incoming messages 
that match the expected format when parsing will be sent to this relationship. "
+  "When Parse Messages 
is set to false, all incoming message will be sent to this relationship.");
+const core::Relationship ListenSyslog::Invalid("invalid", "Incoming messages 
that do not 

[jira] [Created] (NIFI-9889) client.rack for kafka consumer processors

2022-04-07 Thread Denis Jakupovic (Jira)
Denis Jakupovic created NIFI-9889:
-

 Summary: client.rack for kafka consumer processors
 Key: NIFI-9889
 URL: https://issues.apache.org/jira/browse/NIFI-9889
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.15.3
Reporter: Denis Jakupovic


Hey,

it would be great if the Kafka consumer processors had a property for the rack.id/client.rack of a Kafka cluster/rack, to fetch follower partitions from one rack instead of reading from each leader across different racks.
Since Kafka 2.4 the RackAwareReplicaSelector is available; by setting client.rack on a consumer, the partitions of one rack can be read, e.g. for performance considerations.
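
For context, a plain Kafka consumer sketch of the requested setting (broker address, group id and rack id are placeholders):
{code:java}
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RackAwareConsumerSketch {
    public static void main(String[] args) {
        final Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-consumer-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // client.rack (Kafka >= 2.4): lets the broker-side RackAwareReplicaSelector
        // serve this consumer from a follower replica in the same rack
        props.put(ConsumerConfig.CLIENT_RACK_CONFIG, "rack-1");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe and poll as usual; fetches are now rack-aware
        }
    }
}
{code}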



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1038: MINIFICPP-1525 - Support flow file swapping in Connection

2022-04-07 Thread GitBox


adamdebreceni commented on code in PR #1038:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1038#discussion_r844804316


##
libminifi/include/utils/MinMaxHeap.h:
##
@@ -0,0 +1,320 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+#include 
+#include 
+#include 
+
+struct MinMaxHeapTestAccessor;
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+template>
+class MinMaxHeap {
+ public:
+  void clear() {
+data_.clear();
+  }
+
+  const T& min() const {
+return data_[0];
+  }
+
+  const T& max() const {
+// the element at index 0, 1 or 2
+if (data_.size() == 1) {
+  return data_[0];
+}
+if (data_.size() == 2) {
+  return data_[1];
+}
+if (less_(data_[2], data_[1])) {
+  return data_[1];
+}
+return data_[2];
+  }
+
+  size_t size() const {
+return data_.size();
+  }
+
+  bool empty() const {
+return data_.empty();
+  }
+
+  void push(T item) {
+data_.push_back(std::move(item));
+pushUp(data_.size() - 1);
+  }
+
+  T popMin() {
+std::swap(data_[0], data_[data_.size() - 1]);
+T item = std::move(data_.back());
+data_.pop_back();
+pushDown(0);
+return item;
+  }
+
+  T popMax() {
+if (data_.size() <= 2) {
+  T item = std::move(data_.back());
+  data_.pop_back();
+  return item;
+}
+if (less_(data_[2], data_[1])) {
+  std::swap(data_[1], data_[data_.size() - 1]);
+  T item = std::move(data_.back());
+  data_.pop_back();
+  pushDown(1);
+  return item;
+} else {
+  std::swap(data_[2], data_[data_.size() - 1]);
+  T item = std::move(data_.back());
+  data_.pop_back();
+  pushDown(2);
+  return item;
+}
+  }
+
+ private:
+  friend struct ::MinMaxHeapTestAccessor;
+
+  static size_t getLevel(size_t index) {
+// more performant solutions are possible
+// investigate if this turns out to be a bottleneck
+size_t level = 0;
+++index;
+while (index >>= 1) {
+  ++level;
+}
+return level;
+  }
+
+  static bool isOnMinLevel(size_t index) {
+return getLevel(index) % 2 == 0;
+  }
+
+  static size_t getParent(size_t index) {
+return (index - 1) / 2;
+  }
+
+  /**
+   * WARNING! must only be called when index is on a min-level.

Review Comment:
   added `gsl_ExpectsAudit` to this and `getLargestChildOrGrandchild` as well 



##
extensions/rocksdb-repos/FlowFileLoader.h:
##
@@ -0,0 +1,68 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+#include 
+
+#include "RocksDatabase.h"
+#include "FlowFile.h"
+#include "gsl.h"
+#include "core/ContentRepository.h"
+#include "SwapManager.h"
+#include "utils/ThreadPool.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+
+class FlowFileLoader {
+  using FlowFilePtr = std::shared_ptr;
+  using FlowFilePtrVec = std::vector;
+
+  static constexpr size_t thread_count_ = 2;
+
+ public:
+  FlowFileLoader();
+
+  ~FlowFileLoader();
+
+  void initialize(gsl::not_null db, 
std::shared_ptr content_repo);
+
+  void start();
+
+  void stop();
+
+  std::future load(std::vector flow_files);
+
+ private:
+  utils::TaskRescheduleInfo loadImpl(const std::vector& 
flow_files, std::shared_ptr>& output);
+
+  utils::ThreadPool thread_pool_{thread_count_, 
false, nullptr, "FlowFileLoaderThreadPool"};
+
+  

[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1038: MINIFICPP-1525 - Support flow file swapping in Connection

2022-04-07 Thread GitBox


adamdebreceni commented on code in PR #1038:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1038#discussion_r844803522


##
libminifi/include/utils/MinMaxHeap.h:
##
@@ -0,0 +1,320 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+#include 
+#include 
+#include 
+
+struct MinMaxHeapTestAccessor;
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace utils {
+
+template>
+class MinMaxHeap {
+ public:
+  void clear() {
+data_.clear();
+  }
+
+  const T& min() const {
+return data_[0];
+  }
+
+  const T& max() const {
+// the element at index 0, 1 or 2
+if (data_.size() == 1) {
+  return data_[0];
+}
+if (data_.size() == 2) {
+  return data_[1];
+}
+if (less_(data_[2], data_[1])) {
+  return data_[1];
+}
+return data_[2];
+  }
+
+  size_t size() const {
+return data_.size();
+  }
+
+  bool empty() const {
+return data_.empty();
+  }
+
+  void push(T item) {
+data_.push_back(std::move(item));
+pushUp(data_.size() - 1);
+  }
+
+  T popMin() {
+std::swap(data_[0], data_[data_.size() - 1]);
+T item = std::move(data_.back());
+data_.pop_back();
+pushDown(0);
+return item;
+  }
+
+  T popMax() {
+if (data_.size() <= 2) {
+  T item = std::move(data_.back());
+  data_.pop_back();
+  return item;
+}
+if (less_(data_[2], data_[1])) {
+  std::swap(data_[1], data_[data_.size() - 1]);

Review Comment:
   good idea, changed it to use `getMaxIndex`



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi] malthe opened a new pull request, #5943: NIFI-9888 Publishing metrics to Log Analytics occasionally fails

2022-04-07 Thread GitBox


malthe opened a new pull request, #5943:
URL: https://github.com/apache/nifi/pull/5943

    Description of PR
   
   This fixes the occasional "403 Forbidden" bug that we have seen: the signature ends up being invalid because an invalid datetime format was used.
   
   Reference: https://stackoverflow.com/a/51636763/647151.
   
   Also related to https://issues.apache.org/jira/browse/NIFI-9866.
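
   For reference, a minimal sketch of the kind of RFC 1123 timestamp the Log Analytics HMAC-SHA256 signature scheme expects (class and variable names here are illustrative, not the PR's actual diff):

   ```java
   import java.time.ZoneOffset;
   import java.time.ZonedDateTime;
   import java.time.format.DateTimeFormatter;
   import java.util.Locale;

   public class Rfc1123DateSketch {
       public static void main(String[] args) {
           // e.g. "Thu, 7 Apr 2022 12:00:00 GMT" -- an English-locale RFC 1123 date;
           // any other format or locale makes the computed signature invalid (403)
           final String xMsDate = DateTimeFormatter.RFC_1123_DATE_TIME
                   .withLocale(Locale.US)
                   .format(ZonedDateTime.now(ZoneOffset.UTC));
           System.out.println(xMsDate);
       }
   }
   ```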
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (NIFI-9888) Publishing metrics to Log Analytics occasionally fails

2022-04-07 Thread Malthe Borch (Jira)
Malthe Borch created NIFI-9888:
--

 Summary: Publishing metrics to Log Analytics occasionally fails
 Key: NIFI-9888
 URL: https://issues.apache.org/jira/browse/NIFI-9888
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Malthe Borch


We're experiencing that, occasionally, publishing metrics using the Log Analytics reporting task fails with a "403 Forbidden".



--
This message was sent by Atlassian Jira
(v8.20.1#820001)