[jira] [Commented] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-05-19 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17348073#comment-17348073
 ] 

Josef Zahner commented on NIFI-8435:


Thanks [~pvillard], did you run some tests to verify your change? How likely 
is it that the issue has been fixed by your change?

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Assignee: Peter Gyori
>Priority: Critical
>  Labels: kudu, nifi, oom
> Fix For: 1.14.0
>
> Attachments: Screenshot 2021-04-20 at 14.27.11.png, 
> grafana_heap_overview.png, kudu_inserts_per_sec.png, 
> putkudu_processor_config.png, visualvm_bytes_detail_view.png, 
> visualvm_total_bytes_used.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on 1.13.2 eats up all the heap memory, and garbage collection can no 
> longer free it. We allow Java to use 31 GB of memory, and as you can see, 
> with NiFi 1.11.4 it is used as expected with GC. With NiFi 1.13.2 under our 
> actual load, however, it fills up the memory relatively fast. Manual GC via 
> the VisualVM tool didn't help free up memory at all.
> !grafana_heap_overview.png!
>  
> VisualVM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The byte arrays contain millions of char entries which aren't cleaned up; in 
> fact, 14.9 GB of memory here (the heap dump was taken after a while at full 
> load). If we check the same on NiFi 1.11.4, the byte arrays are nearly 
> empty, at around a few hundred MBs.
> As you can imagine, we can't upload the heap dump, as we currently have only 
> production data on the system. But don't hesitate to ask questions about the 
> heap dump if you need more information.
> I haven't taken any screenshots of the processor config, but I can do that 
> if you wish (we are back on NiFi 1.11.4 at the moment).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-8619) Allow direct access to individual cluster nodes' UI behind proxy and ensure writing & loading of valid flow.xml.gz

2021-05-19 Thread Phu-Thien Tran (Jira)
Phu-Thien Tran created NIFI-8619:


 Summary: Allow direct access to individual cluster nodes' UI 
behind proxy and ensure writing & loading of valid flow.xml.gz 
 Key: NIFI-8619
 URL: https://issues.apache.org/jira/browse/NIFI-8619
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.13.3
Reporter: Phu-Thien Tran
 Fix For: 1.13.3


Enable direct access to individual cluster nodes' UI when they are behind a 
proxy. New property "nifi.web.context.root" is added to nifi.properties and is 
set to be the web context root to which all NiFi webapps should be deployed.

For example, a cluster with two nodes should have "nifi.web.context.root" set 
to "/node1" on node 1 and "/node2" on node 2. Consequently, the URLs to the UI 
of nodes 1 and 2 are http://<hostname>:<port>/node1/nifi and 
http://<hostname>:<port>/node2/nifi respectively, where hostname and port are 
those of a proxy and are the same for both nodes. The same applies to all NiFi 
framework webapps and extension UIs, e.g. /node1/nifi-api, /node1/nifi-docs.
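
Under this scheme, each node's nifi.properties would differ only in its 
context root. A minimal sketch of the two files (values illustrative, taken 
from the example above):

```properties
# nifi.properties on node 1
nifi.web.context.root=/node1

# nifi.properties on node 2
nifi.web.context.root=/node2
```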

This functionality requires reverse proxy rules in Apache's mod_proxy 
configuration file. Each node has a separate {{Location}} entry for each 
webapp, like this:

<Location "/node1/nifi">
 RequestHeader add X-ProxyScheme "http"
 RequestHeader add X-ProxyHost "proxy-host"
 RequestHeader add X-ProxyPort "proxy-port"
 ProxyPass http://node1-host[:port]/node1/nifi
 ProxyPassReverse http://node1-host[:port]/node1/nifi
</Location>

<Location "/node1/nifi-api">
 RequestHeader add X-ProxyScheme "http"
 RequestHeader add X-ProxyHost "proxy-host"
 RequestHeader add X-ProxyPort "proxy-port"
 ProxyPass http://node1-host[:port]/node1/nifi-api
 ProxyPassReverse http://node1-host[:port]/node1/nifi-api
</Location>

<Location "/node2/nifi">
 RequestHeader add X-ProxyScheme "http"
 RequestHeader add X-ProxyHost "proxy-host"
 RequestHeader add X-ProxyPort "proxy-port"
 ProxyPass http://node2-host[:port]/node2/nifi
 ProxyPassReverse http://node2-host[:port]/node2/nifi
</Location>

<Location "/node2/nifi-api">
 RequestHeader add X-ProxyScheme "http"
 RequestHeader add X-ProxyHost "proxy-host"
 RequestHeader add X-ProxyPort "proxy-port"
 ProxyPass http://node2-host[:port]/node2/nifi-api
 ProxyPassReverse http://node2-host[:port]/node2/nifi-api
</Location>
Where proxy-host and proxy-port are those of the proxy server.


In addition, this issue also covers a minor improvement to the writing and 
reading of flow.xml.gz, to prevent it from becoming corrupt or invalid when 
running in a cluster environment on version 1.11.4.





[GitHub] [nifi] github-actions[bot] closed pull request #4554: NIFI-7842 Return Lists when multiple records are returned to the restlookupservice

2021-05-19 Thread GitBox


github-actions[bot] closed pull request #4554:
URL: https://github.com/apache/nifi/pull/4554


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] github-actions[bot] closed pull request #4723: NIFI-8084: add keyboard functionality and aria labels to checkboxes

2021-05-19 Thread GitBox


github-actions[bot] closed pull request #4723:
URL: https://github.com/apache/nifi/pull/4723


   






[jira] [Resolved] (NIFI-8325) Rework SNMP processors

2021-05-19 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi resolved NIFI-8325.
---
Fix Version/s: 1.14.0
   Resolution: Fixed

> Rework SNMP processors
> --
>
> Key: NIFI-8325
> URL: https://issues.apache.org/jira/browse/NIFI-8325
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Lehel Boér
>Assignee: Lehel Boér
>Priority: Major
> Fix For: 1.14.0
>
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>






[jira] [Commented] (NIFI-8325) Rework SNMP processors

2021-05-19 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17347875#comment-17347875
 ] 

ASF subversion and git services commented on NIFI-8325:
---

Commit a3eaf0a37a939b42106bee2e24482e44f0d30763 in nifi's branch 
refs/heads/main from Lehel Boér
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=a3eaf0a ]

NIFI-8325: Complete SNMP refactor: SNMP GET and SET processors reworked, unit 
tests added

This closes #5028.

Signed-off-by: Peter Turcsanyi 


> Rework SNMP processors
> --
>
> Key: NIFI-8325
> URL: https://issues.apache.org/jira/browse/NIFI-8325
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Lehel Boér
>Assignee: Lehel Boér
>Priority: Major
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>






[GitHub] [nifi] asfgit closed pull request #5028: NIFI-8325: Complete SNMP refactor: SNMP GET and SET processors rework, unit tests added

2021-05-19 Thread GitBox


asfgit closed pull request #5028:
URL: https://github.com/apache/nifi/pull/5028


   






[jira] [Resolved] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-05-19 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-8435.
--
Fix Version/s: 1.14.0
   Resolution: Fixed

We believe there is no memory leak; however, there might be an issue with the 
newer versions of Netty being used (see the link on this JIRA). I just merged 
an improvement allowing better control over the number of threads used to 
send data into Kudu, which should be helpful on nodes where a lot of data is 
being processed (with many Kudu processor instances) and on "large" nodes (in 
terms of core count).

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Assignee: Peter Gyori
>Priority: Critical
>  Labels: kudu, nifi, oom
> Fix For: 1.14.0
>
> Attachments: Screenshot 2021-04-20 at 14.27.11.png, 
> grafana_heap_overview.png, kudu_inserts_per_sec.png, 
> putkudu_processor_config.png, visualvm_bytes_detail_view.png, 
> visualvm_total_bytes_used.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on 1.13.2 eats up all the heap memory, and garbage collection can no 
> longer free it. We allow Java to use 31 GB of memory, and as you can see, 
> with NiFi 1.11.4 it is used as expected with GC. With NiFi 1.13.2 under our 
> actual load, however, it fills up the memory relatively fast. Manual GC via 
> the VisualVM tool didn't help free up memory at all.
> !grafana_heap_overview.png!
>  
> VisualVM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The byte arrays contain millions of char entries which aren't cleaned up; in 
> fact, 14.9 GB of memory here (the heap dump was taken after a while at full 
> load). If we check the same on NiFi 1.11.4, the byte arrays are nearly 
> empty, at around a few hundred MBs.
> As you can imagine, we can't upload the heap dump, as we currently have only 
> production data on the system. But don't hesitate to ask questions about the 
> heap dump if you need more information.
> I haven't taken any screenshots of the processor config, but I can do that 
> if you wish (we are back on NiFi 1.11.4 at the moment).





[jira] [Commented] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-05-19 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17347839#comment-17347839
 ] 

ASF subversion and git services commented on NIFI-8435:
---

Commit 13e5dc4691c8d6f7fb8fafe6ed16caeb603cd315 in nifi's branch 
refs/heads/main from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=13e5dc4 ]

NIFI-8435 Added Kudu Client Worker Count property

- Implemented custom ThreadPoolExecutor with maximum pool size based on Worker 
Count property
- Refactored processing methods to ensure KuduSession is always closed
- Added SystemResourceConsideration for Memory
- Removed duplicate dependency on nifi-security-kerberos
- Adjusted method naming to clarify functionality
- Reverted addition of defaultAdminOperationTimeoutMs()
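
The bounded executor described in the commit notes can be sketched roughly as 
follows. This is a hedged illustration of the general technique (capping a 
ThreadPoolExecutor's pool size at a configured worker count), not NiFi's 
actual implementation; the class and method names are hypothetical.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of a thread pool whose size is capped by a
// user-configured worker count, in the spirit of the "Kudu Client
// Worker Count" property described above.
public class KuduWorkerPool {

    public static ThreadPoolExecutor boundedExecutor(int workerCount) {
        // Core and maximum pool size are both set to workerCount, so the
        // client can never spawn more threads than configured; idle
        // threads are reclaimed after 60 seconds.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                workerCount, workerCount,
                60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>());
        pool.allowCoreThreadTimeOut(true);
        return pool;
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = boundedExecutor(4);
        System.out.println(pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```

Capping both core and maximum pool size is what keeps per-node thread usage 
predictable on hosts running many Kudu processor instances.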

Signed-off-by: Pierre Villard 

This closes #5020.


> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Assignee: Peter Gyori
>Priority: Critical
>  Labels: kudu, nifi, oom
> Attachments: Screenshot 2021-04-20 at 14.27.11.png, 
> grafana_heap_overview.png, kudu_inserts_per_sec.png, 
> putkudu_processor_config.png, visualvm_bytes_detail_view.png, 
> visualvm_total_bytes_used.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on 1.13.2 eats up all the heap memory, and garbage collection can no 
> longer free it. We allow Java to use 31 GB of memory, and as you can see, 
> with NiFi 1.11.4 it is used as expected with GC. With NiFi 1.13.2 under our 
> actual load, however, it fills up the memory relatively fast. Manual GC via 
> the VisualVM tool didn't help free up memory at all.
> !grafana_heap_overview.png!
>  
> VisualVM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The byte arrays contain millions of char entries which aren't cleaned up; in 
> fact, 14.9 GB of memory here (the heap dump was taken after a while at full 
> load). If we check the same on NiFi 1.11.4, the byte arrays are nearly 
> empty, at around a few hundred MBs.
> As you can imagine, we can't upload the heap dump, as we currently have only 
> production data on the system. But don't hesitate to ask questions about the 
> heap dump if you need more information.
> I haven't taken any screenshots of the processor config, but I can do that 
> if you wish (we are back on NiFi 1.11.4 at the moment).





[GitHub] [nifi] asfgit closed pull request #5020: NIFI-8435 Added Kudu Client Worker Count property

2021-05-19 Thread GitBox


asfgit closed pull request #5020:
URL: https://github.com/apache/nifi/pull/5020


   






[GitHub] [nifi] markap14 commented on pull request #5080: NIFI-8609: Added unit test that is ignored so that it can be manually…

2021-05-19 Thread GitBox


markap14 commented on pull request #5080:
URL: https://github.com/apache/nifi/pull/5080#issuecomment-844405144


   Thanks for the review @gresockj. Not sure that we really need to encourage 
developers to use it, to be honest. I created the unit test to verify that the 
changes I was going to make would in fact result in better performance, and I 
figured it might be useful to me and others in the future, so I left it in. It 
is intended simply to be a tool to aid in that process when desirable.






[jira] [Updated] (NIFI-8618) Allow stateless NiFi to reference environment variables as parameters

2021-05-19 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-8618:
-
Assignee: Mark Payne
  Status: Patch Available  (was: Open)

> Allow stateless NiFi to reference environment variables as parameters
> -
>
> Key: NIFI-8618
> URL: https://issues.apache.org/jira/browse/NIFI-8618
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: NiFi Stateless
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>
> Stateless NiFi allows users to specify parameters in the configured 
> properties file but should also allow for parameters to be specified as 
> environment variables.
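
A minimal sketch of the requested behavior, assuming a simple lookup order 
(properties file first, then the process environment). The class, method, and 
parameter names here are illustrative, not NiFi's actual API.

```java
import java.util.Properties;

// Illustrative only: resolve a parameter from a properties file,
// falling back to an environment variable of the same name.
public class ParameterResolver {

    public static String resolve(Properties props, String name) {
        String value = props.getProperty(name);
        if (value == null) {
            // Not defined in the properties file: consult the environment.
            value = System.getenv(name);
        }
        return value;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("input.directory", "/data/in");
        System.out.println(resolve(props, "input.directory"));
    }
}
```

With this ordering, a value in the properties file always wins, and the 
environment only supplies parameters the file omits.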





[jira] [Updated] (NIFI-8617) Move nifi-stateless to top-level and create assembly

2021-05-19 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-8617:
-
Status: Patch Available  (was: Open)

> Move nifi-stateless to top-level and create assembly
> 
>
> Key: NIFI-8617
> URL: https://issues.apache.org/jira/browse/NIFI-8617
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: NiFi Stateless, Tools and Build
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, the nifi stateless api/bootstrap/nar/engine modules all live 
> within nifi-stateless-bundle under nar-bundles/nifi-framework-bundle.
> It makes more sense to pull this up to the top level and create a separate 
> assembly for it. This would also fall in line with the nifi-registry and 
> minifi work that's being done.
> Creating this separate assembly will make stateless applicable in more form 
> factors; it may also make sense to expose config options via environment 
> variables.





[GitHub] [nifi] markap14 opened a new pull request #5087: NIFI-8617, NIFI-8618: Refactored stateless nifi to be a top-level component, created stateless assembly

2021-05-19 Thread GitBox


markap14 opened a new pull request #5087:
URL: https://github.com/apache/nifi/pull/5087


   Also updated to make use of environment variables.
   
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   






[jira] [Created] (NIFI-8618) Allow stateless NiFi to reference environment variables as parameters

2021-05-19 Thread Mark Payne (Jira)
Mark Payne created NIFI-8618:


 Summary: Allow stateless NiFi to reference environment variables 
as parameters
 Key: NIFI-8618
 URL: https://issues.apache.org/jira/browse/NIFI-8618
 Project: Apache NiFi
  Issue Type: Improvement
  Components: NiFi Stateless
Reporter: Mark Payne


Stateless NiFi allows users to specify parameters in the configured properties 
file but should also allow for parameters to be specified as environment 
variables.





[jira] [Updated] (NIFI-8466) Offloading the Single Node load balancing target causes ignored backpressure

2021-05-19 Thread Nathan Gough (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Gough updated NIFI-8466:
---
Fix Version/s: 1.14.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Offloading the Single Node load balancing target causes ignored backpressure
> 
>
> Key: NIFI-8466
> URL: https://issues.apache.org/jira/browse/NIFI-8466
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.14.0
>Reporter: Joseph Gresock
>Assignee: Joseph Gresock
>Priority: Minor
> Fix For: 1.14.0
>
> Attachments: Single_Node_Queue.xml
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> In a 3-node cluster, it is possible to get a Single Node load balanced queue 
> into the following state by offloading the node that receives the load 
> balanced flow files:
> # The load balanced queue ignores backpressure
> # The processor following the load balanced queue acts like it has nothing to 
> process, even when running
> Steps to reproduce:
> # Add a Single Node load balanced queue on a multi-node cluster (see attached 
> Template)
> # Discover which node receives the flow files in the queue by running only 
> the first processor, then viewing the Connections tab of the Summary view, 
> and finally clicking the Cluster button for the load balanced queue.  The 
> node that has all the flow files is the one you will need to Offload in the 
> next step
> # Disconnect and then Offload the node that gets the flow files in Single 
> Node load balancing
> # Reconnect the node
> # Run both processors surrounding the load balanced queue





[jira] [Commented] (NIFI-8466) Offloading the Single Node load balancing target causes ignored backpressure

2021-05-19 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17347821#comment-17347821
 ] 

ASF subversion and git services commented on NIFI-8466:
---

Commit e19940ea7ecff9b88b1fc2ac1aa5c1c8b3c3a777 in nifi's branch 
refs/heads/main from Joe Gresock
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=e19940e ]

NIFI-8466: Resolving offload bug with Single Node load balanced queues

Signed-off-by: Nathan Gough 

This closes #5025.


> Offloading the Single Node load balancing target causes ignored backpressure
> 
>
> Key: NIFI-8466
> URL: https://issues.apache.org/jira/browse/NIFI-8466
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.14.0
>Reporter: Joseph Gresock
>Assignee: Joseph Gresock
>Priority: Minor
> Attachments: Single_Node_Queue.xml
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In a 3-node cluster, it is possible to get a Single Node load balanced queue 
> into the following state by offloading the node that receives the load 
> balanced flow files:
> # The load balanced queue ignores backpressure
> # The processor following the load balanced queue acts like it has nothing to 
> process, even when running
> Steps to reproduce:
> # Add a Single Node load balanced queue on a multi-node cluster (see attached 
> Template)
> # Discover which node receives the flow files in the queue by running only 
> the first processor, then viewing the Connections tab of the Summary view, 
> and finally clicking the Cluster button for the load balanced queue.  The 
> node that has all the flow files is the one you will need to Offload in the 
> next step
> # Disconnect and then Offload the node that gets the flow files in Single 
> Node load balancing
> # Reconnect the node
> # Run both processors surrounding the load balanced queue





[GitHub] [nifi] thenatog closed pull request #5025: NIFI-8466: Resolving offload bug with Single Node load balanced queues

2021-05-19 Thread GitBox


thenatog closed pull request #5025:
URL: https://github.com/apache/nifi/pull/5025


   






[GitHub] [nifi] thenatog commented on pull request #5025: NIFI-8466: Resolving offload bug with Single Node load balanced queues

2021-05-19 Thread GitBox


thenatog commented on pull request #5025:
URL: https://github.com/apache/nifi/pull/5025#issuecomment-844342861


   +1 will merge






[GitHub] [nifi] thenatog edited a comment on pull request #5025: NIFI-8466: Resolving offload bug with Single Node load balanced queues

2021-05-19 Thread GitBox


thenatog edited a comment on pull request #5025:
URL: https://github.com/apache/nifi/pull/5025#issuecomment-844332373


   Tested this out on a 3-node cluster without the fix and identified the 
issue as stated. Attached is a screenshot of what it looked like for me: files 
stuck in the queue, and the UpdateAttribute processor did not appear to see 
the files. A queue listing also showed the queue as having no files, as seen 
in the screenshot. 
   ![Screen Shot 2021-05-19 at 1 06 05 PM](https://user-images.githubusercontent.com/3042/118860037-e9b67f00-b8a8-11eb-891f-5f008c841bd2.png)
   I then built with this PR, retested, and found the issue was resolved.






[GitHub] [nifi-minifi-cpp] fgerlits closed pull request #1074: MINIFICPP-1560 - Change some c2 log levels

2021-05-19 Thread GitBox


fgerlits closed pull request #1074:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1074


   






[GitHub] [nifi] thenatog commented on pull request #5025: NIFI-8466: Resolving offload bug with Single Node load balanced queues

2021-05-19 Thread GitBox


thenatog commented on pull request #5025:
URL: https://github.com/apache/nifi/pull/5025#issuecomment-844332373


   Tested this out on a 3 node cluster without the fix, identified the issue as 
stated. Attached is a screenshot of what it looked like for me - files stuck in 
the queue, and the UpdateAttribute processor did appear to see the files. Also 
a queue listing says the queue has no files as shown in the screenshot. 
   ![Screen Shot 2021-05-19 at 1 06 05 
PM](https://user-images.githubusercontent.com/3042/118860037-e9b67f00-b8a8-11eb-891f-5f008c841bd2.png)
   I then built with this PR, retested and found the issue was resolved.






[GitHub] [nifi-minifi-cpp] fgerlits closed pull request #1072: MINIFICPP-1482 - Fix C2RequestClassTest transient failure

2021-05-19 Thread GitBox


fgerlits closed pull request #1072:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1072


   






[GitHub] [nifi-minifi-cpp] fgerlits closed pull request #1063: MINIFICPP-1203 - Linter reported redundant blank lines

2021-05-19 Thread GitBox


fgerlits closed pull request #1063:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1063


   






[GitHub] [nifi-minifi-cpp] fgerlits closed pull request #1055: MINIFICPP-1454 Reduce duplication of CMake parameters in docker build

2021-05-19 Thread GitBox


fgerlits closed pull request #1055:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1055


   






[jira] [Created] (NIFI-8617) Move nifi-stateless to top-level and create assembly

2021-05-19 Thread Mark Payne (Jira)
Mark Payne created NIFI-8617:


 Summary: Move nifi-stateless to top-level and create assembly
 Key: NIFI-8617
 URL: https://issues.apache.org/jira/browse/NIFI-8617
 Project: Apache NiFi
  Issue Type: Improvement
  Components: NiFi Stateless, Tools and Build
Reporter: Mark Payne
Assignee: Mark Payne


Currently, the nifi stateless api/bootstrap/nar/engine modules all live within 
nifi-stateless-bundle under nar-bundles/nifi-framework-bundle.

It makes more sense to pull this up to the top level and create a separate 
assembly for it. This would also fall in line with the nifi-registry and 
minifi work that's being done.

Creating this separate assembly will make stateless applicable in more form 
factors; it may also make sense to expose config options via environment 
variables.





[GitHub] [nifi] naddym commented on a change in pull request #5083: NIFI-8613: Improve FlattenJson Processor

2021-05-19 Thread GitBox


naddym commented on a change in pull request #5083:
URL: https://github.com/apache/nifi/pull/5083#discussion_r635419567



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/FlattenJson.java
##
@@ -157,25 +203,36 @@ public void onTrigger(final ProcessContext context, final 
ProcessSession session
 final String mode = context.getProperty(FLATTEN_MODE).getValue();
 final FlattenMode flattenMode = getFlattenMode(mode);
 
-String separator = 
context.getProperty(SEPARATOR).evaluateAttributeExpressions(flowFile).getValue();
-
+final Character separator = 
context.getProperty(SEPARATOR).evaluateAttributeExpressions(flowFile).getValue().charAt(0;

Review comment:
   Oh, somehow I missed it while copying after testing. Thanks again for 
commenting.








[GitHub] [nifi] exceptionfactory commented on a change in pull request #5083: NIFI-8613: Improve FlattenJson Processor

2021-05-19 Thread GitBox


exceptionfactory commented on a change in pull request #5083:
URL: https://github.com/apache/nifi/pull/5083#discussion_r635391629



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/FlattenJson.java
##
@@ -157,25 +203,36 @@ public void onTrigger(final ProcessContext context, final 
ProcessSession session
 final String mode = context.getProperty(FLATTEN_MODE).getValue();
 final FlattenMode flattenMode = getFlattenMode(mode);
 
-String separator = 
context.getProperty(SEPARATOR).evaluateAttributeExpressions(flowFile).getValue();
-
+final Character separator = 
context.getProperty(SEPARATOR).evaluateAttributeExpressions(flowFile).getValue().charAt(0;

Review comment:
   Thanks for making the updates @naddym, looks like the automated builds 
failed due to a missing trailing `)` on this line.
   ```suggestion
   final Character separator = 
context.getProperty(SEPARATOR).evaluateAttributeExpressions(flowFile).getValue().charAt(0);
   ```








[jira] [Assigned] (MINIFICPP-1518) Migrate integration tests (without security protocol settings) for ConsumeKafka to behave

2021-05-19 Thread Gabor Gyimesi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Gyimesi reassigned MINIFICPP-1518:


Assignee: Gabor Gyimesi  (was: Adam Hunyadi)

> Migrate integration tests (without security protocol settings) for 
> ConsumeKafka to behave
> -
>
> Key: MINIFICPP-1518
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1518
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Test
>Affects Versions: 0.9.0
>Reporter: Adam Hunyadi
>Assignee: Gabor Gyimesi
>Priority: Minor
> Fix For: 1.0.0
>
>
> *Background:*
> Currently, ConsumeKafka tests are implemented as manually run tests that 
> assume the presence of a running broker on the network.
> *Proposal:*
> We should move these tests to the integration test framework.





[jira] [Updated] (MINIFICPP-1369) Implement ConsumeKafka processor

2021-05-19 Thread Gabor Gyimesi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Gyimesi updated MINIFICPP-1369:
-
Description: 
Implement ConsumeKafka processor to support consuming Kafka topic(s).

Hint: 
[https://nifi.apache.org/docs/nifi-docs/components/nifi-docs/components/org.apache.nifi/nifi-kafka-2-0-nar/1.9.0/org.apache.nifi.processors.kafka.pubsub.ConsumeKafka_2_0/index.html]

  was:
Implenent ConsumeKafka processor to support consuming Kafka topic(s).

Hint: 
https://nifi.apache.org/docs/nifi-docs/components/nifi-docs/components/org.apache.nifi/nifi-kafka-2-0-nar/1.9.0/org.apache.nifi.processors.kafka.pubsub.ConsumeKafka_2_0/index.html


> Implement ConsumeKafka processor
> 
>
> Key: MINIFICPP-1369
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1369
> Project: Apache NiFi MiNiFi C++
>  Issue Type: New Feature
>Affects Versions: 0.9.0
>Reporter: Arpad Boda
>Assignee: Adam Hunyadi
>Priority: Major
>
> Implement ConsumeKafka processor to support consuming Kafka topic(s).
> Hint: 
> [https://nifi.apache.org/docs/nifi-docs/components/nifi-docs/components/org.apache.nifi/nifi-kafka-2-0-nar/1.9.0/org.apache.nifi.processors.kafka.pubsub.ConsumeKafka_2_0/index.html]





[jira] [Assigned] (MINIFICPP-1369) Implement ConsumeKafka processor

2021-05-19 Thread Gabor Gyimesi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Gyimesi reassigned MINIFICPP-1369:


Assignee: Gabor Gyimesi  (was: Adam Hunyadi)

> Implement ConsumeKafka processor
> 
>
> Key: MINIFICPP-1369
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1369
> Project: Apache NiFi MiNiFi C++
>  Issue Type: New Feature
>Affects Versions: 0.9.0
>Reporter: Arpad Boda
>Assignee: Gabor Gyimesi
>Priority: Major
>
> Implement ConsumeKafka processor to support consuming Kafka topic(s).
> Hint: 
> [https://nifi.apache.org/docs/nifi-docs/components/nifi-docs/components/org.apache.nifi/nifi-kafka-2-0-nar/1.9.0/org.apache.nifi.processors.kafka.pubsub.ConsumeKafka_2_0/index.html]





[jira] [Assigned] (MINIFICPP-1373) Implement and test a simplified ConsumeKafka processor without security protocols

2021-05-19 Thread Gabor Gyimesi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Gyimesi reassigned MINIFICPP-1373:


Assignee: Gabor Gyimesi  (was: Adam Hunyadi)

> Implement and test a simplified ConsumeKafka processor without security 
> protocols
> -
>
> Key: MINIFICPP-1373
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1373
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Sub-task
>Affects Versions: 0.7.0
>Reporter: Adam Hunyadi
>Assignee: Gabor Gyimesi
>Priority: Major
> Fix For: 1.0.0
>
> Attachments: ConsumeKafka_test_matrix.numbers, 
> ConsumeKafka_test_matrix.pdf
>
>  Time Spent: 33h 10m
>  Remaining Estimate: 0h
>
> *Acceptance Criteria:*
> *{color:#de350b}See attached test matrix plan.{color}*
> Additional tests (that require multiple Kafka consumers):
> {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with 
> {color:#0747a6}different group ids{color} subscribed to the same topic
>  {color:#505f79}*WHEN*{color} a message is published to the topic
>  {color:#505f79}*THEN*{color} both of the ConsumeKafka processors should 
> produce identical flowfiles
> {quote}
> {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with 
> {color:#0747a6}the same group id{color} subscribed to the same topic
>  {color:#505f79}*WHEN*{color} a message is published to the topic
>  {color:#505f79}*THEN*{color} only one of the ConsumeKafka processors should 
> produce a flowfile
> {quote}
> {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with 
> {color:#0747a6}the same group id{color} subscribed to the same topic with 
> exactly two partitions with {color:#0747a6}Offset Reset{color} set to 
> {color:#0747a6}earliest{color}.
>  {color:#505f79}*WHEN*{color} messages were already present on both 
> partitions and the second one crashes
>  {color:#505f79}*THEN*{color} the first one should process duplicates of the 
> messages that originally came to the second (at_least_once delivery)
> {quote}
> {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with 
> {color:#0747a6}the same group id{color} subscribed to the same topic with 
> exactly two partitions with {color:#0747a6}Offset Reset{color} set to 
> {color:#0747a6}latest{color}.
>  {color:#505f79}*WHEN*{color} messages were already present on both 
> partitions and the second one crashes
>  {color:#505f79}*THEN*{color} the first one should {color:#0747a6}not{color} 
> process duplicates of the messages that originally came to the second 
> (at_most_once delivery)
> {quote}
> {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with 
> {color:#0747a6}the same group id{color} subscribed to the same topic with 
> exactly two partitions with {color:#0747a6}Offset Reset{color} set to 
> {color:#0747a6}none{color}.
>  {color:#505f79}*WHEN*{color} messages were already present on both 
> partitions and the second one crashes
>  {color:#505f79}*THEN*{color} the first one should throw an exception
> {quote}
> *Background:*
> See parent task.
> *Proposal:*
> This should be the first part of the implementation, the second being adding 
> and testing multiple security protocols.





[jira] [Reopened] (MINIFICPP-1373) Implement and test a simplified ConsumeKafka processor without security protocols

2021-05-19 Thread Gabor Gyimesi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Gyimesi reopened MINIFICPP-1373:
--

> Implement and test a simplified ConsumeKafka processor without security 
> protocols
> -
>
> Key: MINIFICPP-1373
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1373
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Sub-task
>Affects Versions: 0.7.0
>Reporter: Adam Hunyadi
>Assignee: Gabor Gyimesi
>Priority: Major
> Fix For: 1.0.0
>
> Attachments: ConsumeKafka_test_matrix.numbers, 
> ConsumeKafka_test_matrix.pdf
>
>  Time Spent: 33h 10m
>  Remaining Estimate: 0h
>
> *Acceptance Criteria:*
> *{color:#de350b}See attached test matrix plan.{color}*
> Additional tests (that require multiple Kafka consumers):
> {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with 
> {color:#0747a6}different group ids{color} subscribed to the same topic
>  {color:#505f79}*WHEN*{color} a message is published to the topic
>  {color:#505f79}*THEN*{color} both of the ConsumeKafka processors should 
> produce identical flowfiles
> {quote}
> {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with 
> {color:#0747a6}the same group id{color} subscribed to the same topic
>  {color:#505f79}*WHEN*{color} a message is published to the topic
>  {color:#505f79}*THEN*{color} only one of the ConsumeKafka processors should 
> produce a flowfile
> {quote}
> {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with 
> {color:#0747a6}the same group id{color} subscribed to the same topic with 
> exactly two partitions with {color:#0747a6}Offset Reset{color} set to 
> {color:#0747a6}earliest{color}.
>  {color:#505f79}*WHEN*{color} messages were already present on both 
> partitions and the second one crashes
>  {color:#505f79}*THEN*{color} the first one should process duplicates of the 
> messages that originally came to the second (at_least_once delivery)
> {quote}
> {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with 
> {color:#0747a6}the same group id{color} subscribed to the same topic with 
> exactly two partitions with {color:#0747a6}Offset Reset{color} set to 
> {color:#0747a6}latest{color}.
>  {color:#505f79}*WHEN*{color} messages were already present on both 
> partitions and the second one crashes
>  {color:#505f79}*THEN*{color} the first one should {color:#0747a6}not{color} 
> process duplicates of the messages that originally came to the second 
> (at_most_once delivery)
> {quote}
> {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with 
> {color:#0747a6}the same group id{color} subscribed to the same topic with 
> exactly two partitions with {color:#0747a6}Offset Reset{color} set to 
> {color:#0747a6}none{color}.
>  {color:#505f79}*WHEN*{color} messages were already present on both 
> partitions and the second one crashes
>  {color:#505f79}*THEN*{color} the first one should throw an exception
> {quote}
> *Background:*
> See parent task.
> *Proposal:*
> This should be the first part of the implementation, the second being adding 
> and testing multiple security protocols.





[jira] [Assigned] (MINIFICPP-1374) Implement security protocol support for ConsumeKafka

2021-05-19 Thread Gabor Gyimesi (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Gyimesi reassigned MINIFICPP-1374:


Assignee: Gabor Gyimesi  (was: Adam Hunyadi)

> Implement security protocol support for ConsumeKafka
> 
>
> Key: MINIFICPP-1374
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1374
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Sub-task
>Affects Versions: 0.7.0
>Reporter: Adam Hunyadi
>Assignee: Gabor Gyimesi
>Priority: Major
> Fix For: 1.0.0
>
>
> *Acceptance Criteria:*
> SSL:
> {code:python|title=Example feature definition}
> Scenario Outline: ConsumeKafka receives data via SSL
>   Given a ConsumeKafka processor set up in a "kafka-consumer-flow" flow
>   And ssl certificates are placed in "/tmp/resources" with cert name 
> "cert.pem" and key name "key.pem"
>   And these processor properties are set:
> | processor name | property name | property value |
> | ConsumeKafka | Kafka Brokers | kafka-broker:9093 |
> | ConsumeKafka | Security Protocol | ssl |
> | ConsumeKafka | Security CA | ... |
> | ConsumeKafka | Security Cert | ... |
>   And a PutFile processor with the "Directory" property set to "/tmp/output" 
> in the "kafka-consumer-flow" flow
>   And the "success" relationship of the ConsumeKafka processor is connected 
> to the PutFile
>   And a kafka broker "broker" set up to communicate via SSL is set up in 
> correspondence with the third-party kafka publisher
>   When all instances start up
>   And a message with content "Alice's Adventures in Wonderland" is published 
> to the "ConsumeKafkaTest" topic using an ssl connection
>   And a message with content "Lewis Carroll" is published to the 
> "ConsumeKafkaTest" topic using an ssl connection
>   Then two flowfiles with the contents "Alice's Adventures in Wonderland" and 
> "Lewis Carroll" are placed in the monitored directory in less than 45 seconds
> {code}





[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1057: MINIFICPP-1537 - Log heartbeats on demand

2021-05-19 Thread GitBox


szaszm commented on a change in pull request #1057:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1057#discussion_r635316167



##
File path: libminifi/src/Configure.cpp
##
@@ -37,6 +37,10 @@ bool Configure::get(const std::string& key, std::string& 
value) const {
 
 bool Configure::get(const std::string& key, const std::string& alternate_key, 
std::string& value) const {
   if (get(key, value)) {
+if (get(alternate_key)) {
+  const auto logger = logging::LoggerFactory::getLogger();
+  logger->log_warn("Both the property '%s' and an alternate property '%s' 
are set. Using '%s'.", key, alternate_key, key);

Review comment:
   I checked that the function is only used for this purpose, so now I 
think it's good.








[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1057: MINIFICPP-1537 - Log heartbeats on demand

2021-05-19 Thread GitBox


adamdebreceni commented on a change in pull request #1057:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1057#discussion_r635310034



##
File path: libminifi/include/core/logging/Logger.h
##
@@ -68,13 +69,32 @@ inline T conditional_conversion(T t) {
 }
 
 template<typename... Args>
-inline std::string format_string(char const* format_str, Args&&... args) {
-  char buf[LOG_BUFFER_SIZE];
-  std::snprintf(buf, LOG_BUFFER_SIZE, format_str, 
conditional_conversion(std::forward<Args>(args))...);
-  return std::string(buf);
+inline std::string format_string(int max_size, char const* format_str, 
Args&&... args) {
+  // try to use static buffer
+  char buf[LOG_BUFFER_SIZE + 1];
+  int result = std::snprintf(buf, LOG_BUFFER_SIZE + 1, format_str, 
conditional_conversion(std::forward<Args>(args))...);
+  if (result < 0) {
+return std::string("Error while formatting log message");
+  }
+  if (result <= LOG_BUFFER_SIZE) {
+// static buffer was large enough
+return std::string(buf, result);
+  }
+  if (max_size >= 0 && max_size <= LOG_BUFFER_SIZE) {
+// static buffer was already larger than allowed, use the filled buffer
+return std::string(buf, LOG_BUFFER_SIZE);
+  }
+  // try to use dynamic buffer
+  size_t dynamic_buffer_size = max_size < 0 ? result : std::min(result, 
max_size);
+  std::vector<char> buffer(dynamic_buffer_size + 1);  // extra '\0' character
+  result = std::snprintf(buffer.data(), buffer.size(), format_str, 
conditional_conversion(std::forward<Args>(args))...);

Review comment:
   This thought crossed my mind, but this explicitly says that modifying 
through the const overload is undefined behavior; I would wait until C++17.








[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1057: MINIFICPP-1537 - Log heartbeats on demand

2021-05-19 Thread GitBox


adamdebreceni commented on a change in pull request #1057:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1057#discussion_r635306229



##
File path: libminifi/src/Configure.cpp
##
@@ -37,6 +37,10 @@ bool Configure::get(const std::string& key, std::string& 
value) const {
 
 bool Configure::get(const std::string& key, const std::string& alternate_key, 
std::string& value) const {
   if (get(key, value)) {
+if (get(alternate_key)) {
+  const auto logger = logging::LoggerFactory::getLogger();
+  logger->log_warn("Both the property '%s' and an alternate property '%s' 
are set. Using '%s'.", key, alternate_key, key);

Review comment:
   see `C2.md`:
   > Release 0.6.0: Please note that all c2 properties now exist as 
`nifi.c2.*`. If your configuration properties files contain the former naming 
convention of `c2.*`, we will continue to support that as an alternate key, but 
you are encouraged to switch your configuration options as soon as possible.
   
   although it does not say "deprecated" per se








[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1057: MINIFICPP-1537 - Log heartbeats on demand

2021-05-19 Thread GitBox


szaszm commented on a change in pull request #1057:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1057#discussion_r635299532



##
File path: libminifi/include/core/logging/Logger.h
##
@@ -68,13 +69,32 @@ inline T conditional_conversion(T t) {
 }
 
 template<typename... Args>
-inline std::string format_string(char const* format_str, Args&&... args) {
-  char buf[LOG_BUFFER_SIZE];
-  std::snprintf(buf, LOG_BUFFER_SIZE, format_str, 
conditional_conversion(std::forward<Args>(args))...);
-  return std::string(buf);
+inline std::string format_string(int max_size, char const* format_str, 
Args&&... args) {
+  // try to use static buffer
+  char buf[LOG_BUFFER_SIZE + 1];
+  int result = std::snprintf(buf, LOG_BUFFER_SIZE + 1, format_str, 
conditional_conversion(std::forward<Args>(args))...);
+  if (result < 0) {
+return std::string("Error while formatting log message");
+  }
+  if (result <= LOG_BUFFER_SIZE) {
+// static buffer was large enough
+return std::string(buf, result);
+  }
+  if (max_size >= 0 && max_size <= LOG_BUFFER_SIZE) {
+// static buffer was already larger than allowed, use the filled buffer
+return std::string(buf, LOG_BUFFER_SIZE);
+  }
+  // try to use dynamic buffer
+  size_t dynamic_buffer_size = max_size < 0 ? result : std::min(result, 
max_size);
+  std::vector<char> buffer(dynamic_buffer_size + 1);  // extra '\0' character
+  result = std::snprintf(buffer.data(), buffer.size(), format_str, 
conditional_conversion(std::forward<Args>(args))...);

Review comment:
   Since the size is already known at this time, we could allocate the 
memory in `std::string` directly. At least if you don't mind `const_cast`ing 
`buffer.data()` until C++17. 
https://en.cppreference.com/w/cpp/string/basic_string/data








[jira] [Updated] (MINIFICPP-304) Release MiNiFI C++ Version 0.3.0

2021-05-19 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated MINIFICPP-304:
-
Labels: release  (was: )

> Release MiNiFI C++ Version 0.3.0
> 
>
> Key: MINIFICPP-304
> URL: https://issues.apache.org/jira/browse/MINIFICPP-304
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Task
>Reporter: Marc Parisi
>Assignee: Marc Parisi
>Priority: Major
>  Labels: release
> Fix For: 0.3.0
>
>






[jira] [Updated] (MINIFICPP-379) Release 0.4.0

2021-05-19 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated MINIFICPP-379:
-
Labels: release  (was: )

> Release 0.4.0
> -
>
> Key: MINIFICPP-379
> URL: https://issues.apache.org/jira/browse/MINIFICPP-379
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Task
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
>Priority: Major
>  Labels: release
> Fix For: 0.4.0
>
>
> This ticket is to track the release process for 0.4.0.





[jira] [Updated] (MINIFICPP-172) Release MiNiFi C++ 0.2.0

2021-05-19 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated MINIFICPP-172:
-
Labels: release  (was: )

> Release MiNiFi C++ 0.2.0
> 
>
> Key: MINIFICPP-172
> URL: https://issues.apache.org/jira/browse/MINIFICPP-172
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Task
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
>Priority: Major
>  Labels: release
> Fix For: 0.2.0
>
>
> Ticket to track efforts for NiFi MiNiFi C++ 0.2.0 release.





[jira] [Updated] (MINIFICPP-230) Perform MiNiFi C++ 0.1.0 release

2021-05-19 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated MINIFICPP-230:
-
Labels: release  (was: )

> Perform MiNiFi C++ 0.1.0 release
> 
>
> Key: MINIFICPP-230
> URL: https://issues.apache.org/jira/browse/MINIFICPP-230
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Task
>Reporter: Aldrin Piri
>Assignee: Aldrin Piri
>Priority: Major
>  Labels: release
> Fix For: 0.1.0
>
>
> Perform release duties for the next release of MiNiFi C++ 0.1.0.





[jira] [Updated] (MINIFICPP-519) Release MiNiFi CPP 0.5.0

2021-05-19 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated MINIFICPP-519:
-
Labels: release  (was: )

> Release MiNiFi CPP 0.5.0
> 
>
> Key: MINIFICPP-519
> URL: https://issues.apache.org/jira/browse/MINIFICPP-519
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Task
>Reporter: Jeremy Dyer
>Assignee: Jeremy Dyer
>Priority: Major
>  Labels: release
> Fix For: 0.5.0
>
>
> This is a ticket to track the release of MiNiFi CPP 0.5.0





[jira] [Updated] (MINIFICPP-770) Release 0.6.0

2021-05-19 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated MINIFICPP-770:
-
Labels: release  (was: )

> Release 0.6.0
> -
>
> Key: MINIFICPP-770
> URL: https://issues.apache.org/jira/browse/MINIFICPP-770
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Wish
>Reporter: Marc Parisi
>Assignee: Marc Parisi
>Priority: Major
>  Labels: release
> Fix For: 0.6.0
>
>






[jira] [Updated] (MINIFICPP-1463) Release 0.9.0

2021-05-19 Thread Arpad Boda (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpad Boda updated MINIFICPP-1463:
--
Labels: release  (was: )

> Release 0.9.0
> -
>
> Key: MINIFICPP-1463
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1463
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Task
>Affects Versions: 0.9.0
>Reporter: Marton Szasz
>Assignee: Marton Szasz
>Priority: Major
>  Labels: release
> Fix For: 0.9.0
>
>






[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1057: MINIFICPP-1537 - Log heartbeats on demand

2021-05-19 Thread GitBox


szaszm commented on a change in pull request #1057:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1057#discussion_r635293708



##
File path: libminifi/src/Configure.cpp
##
@@ -37,6 +37,10 @@ bool Configure::get(const std::string& key, std::string& 
value) const {
 
 bool Configure::get(const std::string& key, const std::string& alternate_key, 
std::string& value) const {
   if (get(key, value)) {
+if (get(alternate_key)) {
+  const auto logger = logging::LoggerFactory::getLogger();
+  logger->log_warn("Both the property '%s' and an alternate property '%s' 
are set. Using '%s'.", key, alternate_key, key);

Review comment:
   Is this function only used in those cases? My impression was just a 
general function with a fallback key option, as the signature doesn't imply 
`alternate_key` being deprecated.








[GitHub] [nifi-minifi-cpp] adamdebreceni commented on pull request #1072: MINIFICPP-1482 - Fix C2RequestClassTest transient failure

2021-05-19 Thread GitBox


adamdebreceni commented on pull request #1072:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1072#issuecomment-844137446


   > What was the root cause here? Is the test still valid when the json 
parsing has failed, or is it an indication of another issue?
   
   The civet handler sometimes gets called with an empty body; it is unclear 
why. It probably has to do with the shutdown process, but I did not manage to 
verify this hunch.






[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1057: MINIFICPP-1537 - Log heartbeats on demand

2021-05-19 Thread GitBox


adamdebreceni commented on a change in pull request #1057:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1057#discussion_r635265649



##
File path: libminifi/include/utils/Enum.h
##
@@ -60,6 +63,22 @@ namespace utils {
 #define FOR_EACH_8(fn, delim, _1, _2, _3, _4, _5, _6, _7, _8) \
   fn(_1) delim() fn(_2) delim() fn(_3) delim() fn(_4) delim() \
   fn(_5) delim() fn(_6) delim() fn(_7) delim() fn(_8)
+#define FOR_EACH_9(fn, delim, _1, _2, _3, _4, _5, _6, _7, _8, _9) \
+  fn(_1) delim() fn(_2) delim() fn(_3) delim() fn(_4) delim() \
+  fn(_5) delim() fn(_6) delim() fn(_7) delim() fn(_8) delim() \
+  fn(_9)
+#define FOR_EACH_10(fn, delim, _1, _2, _3, _4, _5, _6, _7, _8, _9, _10) \
+  fn(_1) delim() fn(_2) delim() fn(_3) delim() fn(_4) delim() \
+  fn(_5) delim() fn(_6) delim() fn(_7) delim() fn(_8) delim() \
+  fn(_9) delim() fn(_10)
+#define FOR_EACH_11(fn, delim, _1, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11) \
+  fn(_1) delim() fn(_2) delim() fn(_3) delim() fn(_4) delim() \
+  fn(_5) delim() fn(_6) delim() fn(_7) delim() fn(_8) delim() \
+  fn(_9) delim() fn(_10) delim() fn(_11)
+#define FOR_EACH_12(fn, delim, _1, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11, 
_12) \
+  fn(_1) delim() fn(_2) delim() fn(_3) delim() fn(_4) delim() \
+  fn(_5) delim() fn(_6) delim() fn(_7) delim() fn(_8) delim() \
+  fn(_9) delim() fn(_10) delim() fn(_11) delim() fn(_12)

Review comment:
   I cannot recall the reasoning behind the current non-"recursive" macros; 
I have changed them.








[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1057: MINIFICPP-1537 - Log heartbeats on demand

2021-05-19 Thread GitBox


adamdebreceni commented on a change in pull request #1057:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1057#discussion_r635264523



##
File path: libminifi/src/c2/HeartbeatJsonSerializer.cpp
##
@@ -0,0 +1,333 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "c2/HeartbeatJsonSerializer.h"
+#include "rapidjson/document.h"
+#include "rapidjson/writer.h"
+#include "rapidjson/stringbuffer.h"
+#include "rapidjson/prettywriter.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace c2 {
+
+std::string HeartbeatJsonSerializer::getOperation(const C2Payload& payload) {
+  switch (payload.getOperation()) {
+case Operation::ACKNOWLEDGE:
+  return "acknowledge";
+case Operation::HEARTBEAT:
+  return "heartbeat";
+case Operation::RESTART:
+  return "restart";
+case Operation::DESCRIBE:
+  return "describe";
+case Operation::STOP:
+  return "stop";
+case Operation::START:
+  return "start";
+case Operation::UPDATE:
+  return "update";
+case Operation::PAUSE:
+  return "pause";
+case Operation::RESUME:
+  return "resume";
+default:
+  return "heartbeat";
+  }
+}
+
+static void serializeOperationInfo(rapidjson::Value& target, const C2Payload& 
payload, rapidjson::Document::AllocatorType& alloc) {
+  gsl_Expects(target.IsObject());
+
+  target.AddMember("operation", 
HeartbeatJsonSerializer::getOperation(payload), alloc);
+
+  std::string id = payload.getIdentifier();
+  if (id.empty()) {
+return;
+  }
+
+  target.AddMember("operationId", id, alloc);
+  std::string state_str;
+  switch (payload.getStatus().getState()) {
+case state::UpdateState::FULLY_APPLIED:
+  state_str = "FULLY_APPLIED";
+  break;
+case state::UpdateState::PARTIALLY_APPLIED:
+  state_str = "PARTIALLY_APPLIED";
+  break;
+case state::UpdateState::READ_ERROR:
+  state_str = "OPERATION_NOT_UNDERSTOOD";
+  break;
+case state::UpdateState::SET_ERROR:
+default:
+  state_str = "NOT_APPLIED";
+  }

Review comment:
   agree, done








[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1057: MINIFICPP-1537 - Log heartbeats on demand

2021-05-19 Thread GitBox


adamdebreceni commented on a change in pull request #1057:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1057#discussion_r635264297



##
File path: libminifi/src/Configure.cpp
##
@@ -37,6 +37,10 @@ bool Configure::get(const std::string& key, std::string& value) const {
 
 bool Configure::get(const std::string& key, const std::string& alternate_key, std::string& value) const {
   if (get(key, value)) {
+    if (get(alternate_key)) {
+      const auto logger = logging::LoggerFactory<Configure>::getLogger();
+      logger->log_warn("Both the property '%s' and an alternate property '%s' are set. Using '%s'.", key, alternate_key, key);

Review comment:
   I was under the impression that these "non-nifi-prefixed" property names 
are deprecated and will be removed in the future. Since the user can specify these 
"ignored" properties, I feel that the fact that we are discarding them is worthy 
of their attention.
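The precedence being discussed can be sketched in a standalone form. This is a hedged illustration only: `SimpleConfig`, its member names, and the example property keys are made up for the sketch and are not the real minifi `Configure` API. The point is the behavior: prefer the primary key, and surface a message when a value under the alternate (deprecated) key is silently ignored.

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>
#include <vector>

// Illustrative stand-in for a configuration store with primary/alternate keys.
struct SimpleConfig {
  std::map<std::string, std::string> props;
  std::vector<std::string> warnings;  // stands in for logger output

  // Look up `key` first; fall back to `alternate_key`. If both are set,
  // the primary wins and the discarded alternate is reported.
  std::optional<std::string> get(const std::string& key, const std::string& alternate_key) {
    const auto primary = props.find(key);
    const auto alternate = props.find(alternate_key);
    if (primary != props.end()) {
      if (alternate != props.end()) {
        warnings.push_back("Both '" + key + "' and '" + alternate_key + "' are set; using '" + key + "'");
      }
      return primary->second;
    }
    if (alternate != props.end()) {
      return alternate->second;
    }
    return std::nullopt;
  }
};
```

Whether the report belongs at warn or debug level is exactly the judgment call in this thread; the sketch only shows where the hook sits.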

##
File path: libminifi/src/c2/HeartbeatJsonSerializer.cpp
##
@@ -0,0 +1,333 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "c2/HeartbeatJsonSerializer.h"
+#include "rapidjson/document.h"
+#include "rapidjson/writer.h"
+#include "rapidjson/stringbuffer.h"
+#include "rapidjson/prettywriter.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace c2 {
+
+std::string HeartbeatJsonSerializer::getOperation(const C2Payload& payload) {
+  switch (payload.getOperation()) {
+    case Operation::ACKNOWLEDGE:
+      return "acknowledge";
+    case Operation::HEARTBEAT:
+      return "heartbeat";
+    case Operation::RESTART:
+      return "restart";
+    case Operation::DESCRIBE:
+      return "describe";
+    case Operation::STOP:
+      return "stop";
+    case Operation::START:
+      return "start";
+    case Operation::UPDATE:
+      return "update";
+    case Operation::PAUSE:
+      return "pause";
+    case Operation::RESUME:
+      return "resume";
+    default:
+      return "heartbeat";
+  }
+}
+
+static void serializeOperationInfo(rapidjson::Value& target, const C2Payload& payload, rapidjson::Document::AllocatorType& alloc) {
+  gsl_Expects(target.IsObject());
+
+  target.AddMember("operation", HeartbeatJsonSerializer::getOperation(payload), alloc);
+
+  std::string id = payload.getIdentifier();
+  if (id.empty()) {
+    return;
+  }
+
+  target.AddMember("operationId", id, alloc);
+  std::string state_str;
+  switch (payload.getStatus().getState()) {
+    case state::UpdateState::FULLY_APPLIED:
+      state_str = "FULLY_APPLIED";
+      break;
+    case state::UpdateState::PARTIALLY_APPLIED:
+      state_str = "PARTIALLY_APPLIED";
+      break;
+    case state::UpdateState::READ_ERROR:
+      state_str = "OPERATION_NOT_UNDERSTOOD";
+      break;
+    case state::UpdateState::SET_ERROR:
+    default:
+      state_str = "NOT_APPLIED";
+  }

Review comment:
   done








[GitHub] [nifi-minifi-cpp] szaszm commented on pull request #1072: MINIFICPP-1482 - Fix C2RequestClassTest transient failure

2021-05-19 Thread GitBox


szaszm commented on pull request #1072:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1072#issuecomment-844129936


   What was the root cause here? Is the test still valid when the json parsing 
has failed, or is it an indication of another issue?






[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1057: MINIFICPP-1537 - Log heartbeats on demand

2021-05-19 Thread GitBox


adamdebreceni commented on a change in pull request #1057:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1057#discussion_r635261808



##
File path: libminifi/include/core/logging/Logger.h
##
@@ -68,13 +69,32 @@ inline T conditional_conversion(T t) {
 }
 
 template<typename ...Args>
-inline std::string format_string(char const* format_str, Args&&... args) {
-  char buf[LOG_BUFFER_SIZE];
-  std::snprintf(buf, LOG_BUFFER_SIZE, format_str, conditional_conversion(std::forward<Args>(args))...);
-  return std::string(buf);
+inline std::string format_string(int max_size, char const* format_str, Args&&... args) {
+  if (0 <= max_size && max_size <= LOG_BUFFER_SIZE) {
+    // use static buffer
+    char buf[LOG_BUFFER_SIZE + 1];
+    std::snprintf(buf, max_size + 1, format_str, conditional_conversion(std::forward<Args>(args))...);
+    return std::string(buf);
+  }
+  // use dynamically allocated buffer
+  if (max_size < 0) {
+    // query what buffer size we need
+    int size_needed = std::snprintf(nullptr, 0, format_str, conditional_conversion(std::forward<Args>(args))...);
+    if (size_needed < 0) {
+      // error
+      return std::string("Error while formatting log message");
+    }
+    std::vector<char> buffer(size_needed + 1);  // extra '\0' character
+    std::snprintf(buffer.data(), buffer.size(), format_str, conditional_conversion(std::forward<Args>(args))...);
+    return std::string(buffer.data());
+  }
+  // use dynamic but fix-sized buffer
+  std::vector<char> buffer(max_size);
+  std::snprintf(buffer.data(), buffer.size(), format_str, conditional_conversion(std::forward<Args>(args))...);
+  return std::string(buffer.data());

Review comment:
   changed them to use either the `(const char*, size_t)` or the `(It first, It last)` constructors

##
File path: libminifi/include/core/logging/Logger.h
##
@@ -68,13 +69,32 @@ inline T conditional_conversion(T t) {
 }
 
 template<typename ...Args>
-inline std::string format_string(char const* format_str, Args&&... args) {
-  char buf[LOG_BUFFER_SIZE];
-  std::snprintf(buf, LOG_BUFFER_SIZE, format_str, conditional_conversion(std::forward<Args>(args))...);
-  return std::string(buf);
+inline std::string format_string(int max_size, char const* format_str, Args&&... args) {
+  if (0 <= max_size && max_size <= LOG_BUFFER_SIZE) {
+    // use static buffer
+    char buf[LOG_BUFFER_SIZE + 1];
+    std::snprintf(buf, max_size + 1, format_str, conditional_conversion(std::forward<Args>(args))...);
+    return std::string(buf);
+  }
+  // use dynamically allocated buffer
+  if (max_size < 0) {
+    // query what buffer size we need
+    int size_needed = std::snprintf(nullptr, 0, format_str, conditional_conversion(std::forward<Args>(args))...);

Review comment:
   done








[GitHub] [nifi-minifi-cpp] szaszm closed pull request #1040: MINIFICPP-1329- Fix implementation and usages of string to bool

2021-05-19 Thread GitBox


szaszm closed pull request #1040:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1040


   






[GitHub] [nifi] naddym commented on a change in pull request #5083: NIFI-8613: Improve FlattenJson Processor

2021-05-19 Thread GitBox


naddym commented on a change in pull request #5083:
URL: https://github.com/apache/nifi/pull/5083#discussion_r635235984



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/FlattenJson.java
##
@@ -157,25 +191,35 @@ public void onTrigger(final ProcessContext context, final ProcessSession session
         final String mode = context.getProperty(FLATTEN_MODE).getValue();
         final FlattenMode flattenMode = getFlattenMode(mode);
 
-        String separator = context.getProperty(SEPARATOR).evaluateAttributeExpressions(flowFile).getValue();
-
+        final String separator = context.getProperty(SEPARATOR).evaluateAttributeExpressions(flowFile).getValue();
+        final String returnType = context.getProperty(RETURN_TYPE).getValue();
+        final PrintMode printMode = context.getProperty(PRETTY_PRINT).asBoolean() ? PrintMode.PRETTY : PrintMode.MINIMAL;
 
         try {
-            ByteArrayOutputStream bos = new ByteArrayOutputStream();
-            session.exportTo(flowFile, bos);
-            bos.close();
-
-            String raw = new String(bos.toByteArray());
-            final String flattened = new JsonFlattener(raw)
-                    .withFlattenMode(flattenMode)
-                    .withSeparator(separator.charAt(0))
-                    .withStringEscapePolicy(() -> StringEscapeUtils.ESCAPE_JSON)
-                    .flatten();
-
-            flowFile = session.write(flowFile, os -> os.write(flattened.getBytes()));
+            final StringBuilder contents = new StringBuilder();
+            session.read(flowFile, in -> contents.append(IOUtils.toString(in, Charset.defaultCharset())));
+
+            final String resultedJson;
+            if (returnType.equals(RETURN_TYPE_FLATTEN)) {
+                resultedJson = new JsonFlattener(contents.toString())
+                        .withFlattenMode(flattenMode)
+                        .withSeparator(separator.charAt(0))
+                        .withStringEscapePolicy(() -> StringEscapeUtils.ESCAPE_JSON)
+                        .withPrintMode(printMode)
+                        .flatten();
+            } else {
+                resultedJson = new JsonUnflattener(contents.toString())
+                        .withFlattenMode(flattenMode)
+                        .withSeparator(separator.charAt(0))
+                        .withPrintMode(printMode)
+                        .unflatten();
+            }
+
+            flowFile = session.write(flowFile, out -> out.write(resultedJson.getBytes()));

Review comment:
   Sure, will do.








[GitHub] [nifi] naddym commented on a change in pull request #5083: NIFI-8613: Improve FlattenJson Processor

2021-05-19 Thread GitBox


naddym commented on a change in pull request #5083:
URL: https://github.com/apache/nifi/pull/5083#discussion_r635235720



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/FlattenJson.java
##
@@ -157,25 +191,35 @@ public void onTrigger(final ProcessContext context, final ProcessSession session
         final String mode = context.getProperty(FLATTEN_MODE).getValue();
         final FlattenMode flattenMode = getFlattenMode(mode);
 
-        String separator = context.getProperty(SEPARATOR).evaluateAttributeExpressions(flowFile).getValue();
-
+        final String separator = context.getProperty(SEPARATOR).evaluateAttributeExpressions(flowFile).getValue();
+        final String returnType = context.getProperty(RETURN_TYPE).getValue();
+        final PrintMode printMode = context.getProperty(PRETTY_PRINT).asBoolean() ? PrintMode.PRETTY : PrintMode.MINIMAL;
 
         try {
-            ByteArrayOutputStream bos = new ByteArrayOutputStream();
-            session.exportTo(flowFile, bos);
-            bos.close();
-
-            String raw = new String(bos.toByteArray());
-            final String flattened = new JsonFlattener(raw)
-                    .withFlattenMode(flattenMode)
-                    .withSeparator(separator.charAt(0))
-                    .withStringEscapePolicy(() -> StringEscapeUtils.ESCAPE_JSON)
-                    .flatten();
-
-            flowFile = session.write(flowFile, os -> os.write(flattened.getBytes()));
+            final StringBuilder contents = new StringBuilder();
+            session.read(flowFile, in -> contents.append(IOUtils.toString(in, Charset.defaultCharset())));

Review comment:
   Yeah, let me add new character set property to stay consistent with 
other processors. Thanks. 








[GitHub] [nifi] naddym commented on pull request #5083: NIFI-8613: Improve FlattenJson Processor

2021-05-19 Thread GitBox


naddym commented on pull request #5083:
URL: https://github.com/apache/nifi/pull/5083#issuecomment-844102150


   Thank you @exceptionfactory for the detailed review. All the suggestions 
pointed out look good; I will work on changing them. 






[GitHub] [nifi-minifi-cpp] fgerlits commented on pull request #851: MINIFICPP-1203 - Remove obsolate distinction between headers and sources when running linter checks

2021-05-19 Thread GitBox


fgerlits commented on pull request #851:
URL: https://github.com/apache/nifi-minifi-cpp/pull/851#issuecomment-844099438


   Closing this as https://github.com/apache/nifi-minifi-cpp/pull/1069 will be 
merged instead.






[GitHub] [nifi-minifi-cpp] fgerlits closed pull request #851: MINIFICPP-1203 - Remove obsolate distinction between headers and sources when running linter checks

2021-05-19 Thread GitBox


fgerlits closed pull request #851:
URL: https://github.com/apache/nifi-minifi-cpp/pull/851


   






[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1057: MINIFICPP-1537 - Log heartbeats on demand

2021-05-19 Thread GitBox


szaszm commented on a change in pull request #1057:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1057#discussion_r633704910



##
File path: libminifi/include/core/logging/Logger.h
##
@@ -68,13 +69,32 @@ inline T conditional_conversion(T t) {
 }
 
 template<typename ...Args>
-inline std::string format_string(char const* format_str, Args&&... args) {
-  char buf[LOG_BUFFER_SIZE];
-  std::snprintf(buf, LOG_BUFFER_SIZE, format_str, conditional_conversion(std::forward<Args>(args))...);
-  return std::string(buf);
+inline std::string format_string(int max_size, char const* format_str, Args&&... args) {
+  if (0 <= max_size && max_size <= LOG_BUFFER_SIZE) {
+    // use static buffer
+    char buf[LOG_BUFFER_SIZE + 1];
+    std::snprintf(buf, max_size + 1, format_str, conditional_conversion(std::forward<Args>(args))...);
+    return std::string(buf);
+  }
+  // use dynamically allocated buffer
+  if (max_size < 0) {
+    // query what buffer size we need
+    int size_needed = std::snprintf(nullptr, 0, format_str, conditional_conversion(std::forward<Args>(args))...);

Review comment:
   `snprintf` incurs the full cost of formatting even if the buffer is 
`nullptr` AFAIK. I would prefer a conservative sized stack buffer with fallback 
to allocation, so that at least sometimes it can be faster.
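The approach szaszm suggests can be sketched as follows. This is a hedged illustration, not minifi code: the function name `format_with_fallback` and the 128-byte buffer size are invented for the example. The idea is to format into a conservative stack buffer first and pay for a heap allocation only when `snprintf`'s return value reports the output was truncated.

```cpp
#include <cassert>
#include <cstdio>
#include <string>
#include <vector>

// Format into a fixed stack buffer; fall back to an exact-sized heap buffer
// only when the formatted output does not fit.
template<typename... Args>
std::string format_with_fallback(const char* fmt, Args... args) {
  char stack_buf[128];  // conservative stack buffer (size is illustrative)
  const int needed = std::snprintf(stack_buf, sizeof(stack_buf), fmt, args...);
  if (needed < 0) {
    return "Error while formatting log message";
  }
  if (static_cast<size_t>(needed) < sizeof(stack_buf)) {
    // common case: no allocation; bounds known from snprintf's return value
    return std::string(stack_buf, static_cast<size_t>(needed));
  }
  // rare case: output was truncated, so allocate exactly what is needed
  std::vector<char> heap_buf(static_cast<size_t>(needed) + 1);
  std::snprintf(heap_buf.data(), heap_buf.size(), fmt, args...);
  return std::string(heap_buf.data(), static_cast<size_t>(needed));
}
```

Note the trade-off: the truncated case formats twice, but the common short-message case never allocates and never pre-queries the length.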








[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1057: MINIFICPP-1537 - Log heartbeats on demand

2021-05-19 Thread GitBox


szaszm commented on a change in pull request #1057:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1057#discussion_r633702768



##
File path: libminifi/include/core/logging/Logger.h
##
@@ -68,13 +69,32 @@ inline T conditional_conversion(T t) {
 }
 
 template<typename ...Args>
-inline std::string format_string(char const* format_str, Args&&... args) {
-  char buf[LOG_BUFFER_SIZE];
-  std::snprintf(buf, LOG_BUFFER_SIZE, format_str, conditional_conversion(std::forward<Args>(args))...);
-  return std::string(buf);
+inline std::string format_string(int max_size, char const* format_str, Args&&... args) {
+  if (0 <= max_size && max_size <= LOG_BUFFER_SIZE) {
+    // use static buffer
+    char buf[LOG_BUFFER_SIZE + 1];
+    std::snprintf(buf, max_size + 1, format_str, conditional_conversion(std::forward<Args>(args))...);
+    return std::string(buf);
+  }
+  // use dynamically allocated buffer
+  if (max_size < 0) {
+    // query what buffer size we need
+    int size_needed = std::snprintf(nullptr, 0, format_str, conditional_conversion(std::forward<Args>(args))...);
+    if (size_needed < 0) {
+      // error
+      return std::string("Error while formatting log message");
+    }
+    std::vector<char> buffer(size_needed + 1);  // extra '\0' character
+    std::snprintf(buffer.data(), buffer.size(), format_str, conditional_conversion(std::forward<Args>(args))...);
+    return std::string(buffer.data());
+  }
+  // use dynamic but fix-sized buffer
+  std::vector<char> buffer(max_size);
+  std::snprintf(buffer.data(), buffer.size(), format_str, conditional_conversion(std::forward<Args>(args))...);
+  return std::string(buffer.data());

Review comment:
   `std::string` has an iterator pair constructor that can be optimized to 
a `memcpy`. I would prefer using that one if the bounds are known (returned by 
`snprintf` in this case).
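The reviewer's point can be shown in isolation (a minimal sketch, not minifi code; the helper name is invented): since `snprintf` returns the number of characters written, the `(const char*, size_t)` `std::string` constructor can copy exactly that many bytes in one step instead of re-scanning the buffer for the terminator.

```cpp
#include <cstdio>
#include <string>

// Build a std::string from a formatted buffer using the length that
// snprintf already reported, avoiding a second strlen-style scan.
std::string bounded_string_from_snprintf() {
  char buf[64];
  const int written = std::snprintf(buf, sizeof(buf), "pid=%d", 1234);
  if (written < 0) {
    return {};  // formatting error
  }
  // (const char*, size_t) constructor: bounds known, single copy
  return std::string(buf, static_cast<size_t>(written));
}
```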

##
File path: libminifi/src/Configure.cpp
##
@@ -37,6 +37,10 @@ bool Configure::get(const std::string& key, std::string& value) const {
 
 bool Configure::get(const std::string& key, const std::string& alternate_key, std::string& value) const {
   if (get(key, value)) {
+    if (get(alternate_key)) {
+      const auto logger = logging::LoggerFactory<Configure>::getLogger();
+      logger->log_warn("Both the property '%s' and an alternate property '%s' are set. Using '%s'.", key, alternate_key, key);

Review comment:
   I think this is expected behavior, not something that requires extra 
attention, so I feel like the `warn` level is too high. Maybe debug?

##
File path: libminifi/src/c2/HeartbeatJsonSerializer.cpp
##
@@ -0,0 +1,333 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "c2/HeartbeatJsonSerializer.h"
+#include "rapidjson/document.h"
+#include "rapidjson/writer.h"
+#include "rapidjson/stringbuffer.h"
+#include "rapidjson/prettywriter.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace c2 {
+
+std::string HeartbeatJsonSerializer::getOperation(const C2Payload& payload) {
+  switch (payload.getOperation()) {
+    case Operation::ACKNOWLEDGE:
+      return "acknowledge";
+    case Operation::HEARTBEAT:
+      return "heartbeat";
+    case Operation::RESTART:
+      return "restart";
+    case Operation::DESCRIBE:
+      return "describe";
+    case Operation::STOP:
+      return "stop";
+    case Operation::START:
+      return "start";
+    case Operation::UPDATE:
+      return "update";
+    case Operation::PAUSE:
+      return "pause";
+    case Operation::RESUME:
+      return "resume";
+    default:
+      return "heartbeat";
+  }
+}
+
+static void serializeOperationInfo(rapidjson::Value& target, const C2Payload& payload, rapidjson::Document::AllocatorType& alloc) {
+  gsl_Expects(target.IsObject());
+
+  target.AddMember("operation", HeartbeatJsonSerializer::getOperation(payload), alloc);
+
+  std::string id = payload.getIdentifier();
+  if (id.empty()) {
+    return;
+  }
+
+  target.AddMember("operationId", id, alloc);
+  std::string state_str;
+  switch (payload.getStatus().getState()) {
+    case state::UpdateState::FULLY_APPLIED:
+      state_str = "FULLY_APPLIED";
+      break;
+    case state::UpdateState::PARTIALLY_APPLIED:
+  state_str = 

[GitHub] [nifi] JonathanKessler commented on pull request #4780: NIFI-8126 Include Total Queued Duration in ConnectionStatus metrics

2021-05-19 Thread GitBox


JonathanKessler commented on pull request #4780:
URL: https://github.com/apache/nifi/pull/4780#issuecomment-844082215


   Happy to contribute, thank you for the guidance!






[GitHub] [nifi] timeabarna opened a new pull request #5086: NIFI-8522 NiFi can duplicate controller services when generating temp…

2021-05-19 Thread GitBox


timeabarna opened a new pull request #5086:
URL: https://github.com/apache/nifi/pull/5086


   Fix controller service duplication when generating templates
   https://issues.apache.org/jira/browse/NIFI-8522
   
    Description of PR
   
   Fixing controller service duplication
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   






[GitHub] [nifi] exceptionfactory commented on pull request #5085: Nifi 1.13.x fix corrupt flow.xml.gz and add web context root

2021-05-19 Thread GitBox


exceptionfactory commented on pull request #5085:
URL: https://github.com/apache/nifi/pull/5085#issuecomment-844076932


   Thanks for your contribution @phuthientran.  As noted in the checklist, all 
pull requests must meet several minimum criteria for consideration.  In 
particular, a [Jira issue](https://issues.apache.org/jira/browse/NIFI) must be 
created for tracking the changes, and the initial PR submission must have all 
commits squashed into a single commit.  Following those steps will help 
reviewers evaluate your proposed changes.
   
   As mentioned in the [Contributor 
Guide](https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide#ContributorGuide-FindingIssuesandExtendingtheProject),
 joining the developers mailing list and requesting Jira contributor access 
will also allow you to assign the issue to yourself and mark it as Patch 
Available when it is linked to your PR. 






[jira] [Commented] (NIFI-8469) Change ProcessSession so that commits are asynchronous

2021-05-19 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17347648#comment-17347648
 ] 

ASF subversion and git services commented on NIFI-8469:
---

Commit ecacfdaa4c663ff2831dd13a840aabd0ff3349e8 in nifi's branch 
refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=ecacfda ]

NIFI-8469: Introduced ProcessSession.commitAsync and updated processors to use 
it. Deprecated ProcessSession.commit()
- Updated Mock Framework to now fail tests that use ProcessSession.commit() 
unless they first call TestRunner.setAllowSynchronousSessionCommits(true)
- Updated stateless nifi in order to make use of async session commits
- Fixed bug that caused stateless to not properly handle Additional Classpath 
URLs and bug that caused warnings about validation to get generated when a flow 
that used controller services was initialized. While this is not really in 
scope of NIFI-8469, it was found when testing and blocked further progress so 
addresssed here.
- If Processor fails to progress when run from stateless, trigger from start of 
flow until that is no longer the case
- Introduced notion of TransactionThresholds that can limit the amount of data 
that a flow will bring in for a given invocation of stateless dataflow
- Several new system-level tests


> Change ProcessSession so that commits are asynchronous
> --
>
> Key: NIFI-8469
> URL: https://issues.apache.org/jira/browse/NIFI-8469
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework, Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Currently, ProcessSession.commit() guarantees that when the method call 
> returns, that all FlowFiles have been persisted to the repositories. As such, 
> it is safe to acknowledge/dispose of the data on the external system.
> For example, a processor that consumes from JMS will consume the message, 
> create a FlowFile from it, and then call ProcessSession.commit(). Only then 
> is it safe to acknowledge the message on the JMS broker. If it is 
> acknowledged prior to calling ProcessSession.commit(), and NiFi is restarted, 
> the data may be lost. But after calling ProcessSession.commit(), it is safe 
> because the content will be available to NiFi upon restart.
> This API has served us well. However, lately there has also been a great deal 
> of work and desire from the community to introduce other "runtimes." One of 
> those is MiNiFi. Another is Stateless NiFi.
> One of the distinguishing features of Stateless NiFi is that it stores 
> content in-memory. This means that fast (and large) disks are not necessary, 
> but it also means that upon restarting the application, all FlowFiles are 
> lost. In order to provide data reliability, Stateless NiFi requires that the 
> source of data be both reliable and replayable (like a JMS Broker or Apache 
> Kafka, for instance). In this way, we can keep content in-memory by avoiding 
> the message acknowledgment until after we've finished processing the message 
> completely. If the application is restarted in the middle, the message will 
> not have been acknowledged and as a result will be replayed, so we maintain 
> our strong at-least-once guarantees.
> Another distinguishing feature of Stateless NiFi is that it is 
> single-threaded. This allows us to be far more scalable and consume few 
> resources.
> This works by allowing ProcessSession.commit() to enqueue data for the next 
> Processor  in the chain and then invoke the next Processor (recursively) 
> before ProcessSession.commit() ever returns.
> Unfortunately, though, some dataflows do not work well with such a model. Any 
> flow that has MergeContent or MergeRecord in the middle will end up in a 
> situation where the Processor never progresses. Take for example, the 
> following dataflow:
> GetFile --> SplitText --> ReplaceText --> MergeContent --> PutS3Object
> In this case, assume that SplitText splits an incoming FlowFile into 10 
> smaller FlowFiles. ReplaceText performs some manipulation. MergeContent is 
> then expected to merge all 10 FlowFiles back into one.
> However, because of the nature of how this works, after SplitText, the queue 
> will have 10 FlowFiles. ReplaceText will then be called, which will consume 
> one FlowFIle, manipulate it, and call ProcessSession.commit(). This will then 
> enqueue the FlowFile for MergeContent. MergeContent will be triggered but 
> will be unable to make progress because it doesn't have enough FlowFiles. The 
> only choice that the framework has is to then call MergeContent again until 
> its entire queue is emptied, but the queue will never empty. As a result, the 
> dataflow 

[jira] [Updated] (NIFI-8469) Change ProcessSession so that commits are asynchronous

2021-05-19 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-8469:
-
Fix Version/s: 1.14.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Change ProcessSession so that commits are asynchronous
> --
>
> Key: NIFI-8469
> URL: https://issues.apache.org/jira/browse/NIFI-8469
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework, Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
> Fix For: 1.14.0
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Currently, ProcessSession.commit() guarantees that when the method call 
> returns, that all FlowFiles have been persisted to the repositories. As such, 
> it is safe to acknowledge/dispose of the data on the external system.
> For example, a processor that consumes from JMS will consume the message, 
> create a FlowFile from it, and then call ProcessSession.commit(). Only then 
> is it safe to acknowledge the message on the JMS broker. If it is 
> acknowledged prior to calling ProcessSession.commit(), and NiFi is restarted, 
> the data may be lost. But after calling ProcessSession.commit(), it is safe 
> because the content will be available to NiFi upon restart.
> This API has served us well. However, lately there has also been a great deal 
> of work and desire from the community to introduce other "runtimes." One of 
> those is MiNiFi. Another is Stateless NiFi.
> One of the distinguishing features of Stateless NiFi is that it stores 
> content in-memory. This means that fast (and large) disks are not necessary, 
> but it also means that upon restarting the application, all FlowFiles are 
> lost. In order to provide data reliability, Stateless NiFi requires that the 
> source of data be both reliable and replayable (like a JMS Broker or Apache 
> Kafka, for instance). In this way, we can keep content in-memory by avoiding 
> the message acknowledgment until after we've finished processing the message 
> completely. If the application is restarted in the middle, the message will 
> not have been acknowledged and as a result will be replayed, so we maintain 
> our strong at-least-once guarantees.
> Another distinguishing feature of Stateless NiFi is that it is 
> single-threaded. This allows us to be far more scalable and consume fewer 
> resources.
> This works by allowing ProcessSession.commit() to enqueue data for the next 
> Processor in the chain and then invoke the next Processor (recursively) 
> before ProcessSession.commit() ever returns.
> Unfortunately, though, some dataflows do not work well with such a model. Any 
> flow that has MergeContent or MergeRecord in the middle will end up in a 
> situation where the Processor never progresses. Take for example, the 
> following dataflow:
> GetFile --> SplitText --> ReplaceText --> MergeContent --> PutS3Object
> In this case, assume that SplitText splits an incoming FlowFile into 10 
> smaller FlowFiles. ReplaceText performs some manipulation. MergeContent is 
> then expected to merge all 10 FlowFiles back into one.
> However, because of the nature of how this works, after SplitText, the queue 
> will have 10 FlowFiles. ReplaceText will then be called, which will consume 
> one FlowFile, manipulate it, and call ProcessSession.commit(). This will then 
> enqueue the FlowFile for MergeContent. MergeContent will be triggered but 
> will be unable to make progress because it doesn't have enough FlowFiles. The 
> only choice that the framework has is to then call MergeContent again until 
> its entire queue is emptied, but the queue will never empty. As a result, the 
> dataflow will end up in an infinite loop, calling MergeContent, which will 
> make no progress.
> What we really want to do is to call ReplaceText repeatedly until its queue is 
> empty and only then move on to the next Processor (MergeContent). 
> Unfortunately, this can't really be accomplished with the current semantics, 
> though. If we tried to do so, when ReplaceText is triggered the first time, 
> and it calls ProcessSession.commit(), we would have two choices:
>  * Recursively call ReplaceText.onTrigger(). This very quickly results in a 
> StackOverflowError, so this approach doesn't work well.
>  * Have ProcessSession.commit() block while another thread is responsible for 
> calling ReplaceText.onTrigger(). This results in spawning a new thread for 
> each FlowFile in the queue, which can very quickly exhaust the number of 
> threads, leading to an OutOfMemoryError (or, even worse, depending on 
> system/jvm settings, causing the entire operating system to crash).
> So neither approach is viable.
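The stall described above can be reproduced with a toy scheduler. Everything below is a hypothetical sketch, not the NiFi framework: a MergeContent-like step that only emits a full bin of 10, triggered either depth-first after a single FlowFile passes ReplaceText, or only after the upstream queue has been drained.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy scheduler (not real NiFi) contrasting the two triggering strategies
// for GetFile -> SplitText -> ReplaceText -> MergeContent.
public class MergeStall {
    static final int BIN_SIZE = 10;

    // MergeContent stand-in: moves FlowFiles into a bin, emits only a full bin.
    static boolean triggerMerge(Queue<String> in, Queue<String> bin, Queue<String> out) {
        while (!in.isEmpty()) bin.add(in.poll());
        if (bin.size() >= BIN_SIZE) {
            out.add("merged:" + bin.size());
            bin.clear();
            return true;
        }
        return false; // no progress possible with a partial bin
    }

    // Depth-first: after one FlowFile leaves ReplaceText, the framework keeps
    // re-triggering MergeContent until its queue drains, which it can never do.
    static String depthFirst() {
        Queue<String> splitOut = new ArrayDeque<>();
        for (int i = 0; i < BIN_SIZE; i++) splitOut.add("ff" + i);
        Queue<String> mergeIn = new ArrayDeque<>(), bin = new ArrayDeque<>(), out = new ArrayDeque<>();
        String ff = splitOut.poll();  // ReplaceText handles ONE FlowFile
        mergeIn.add(ff);              // session.commit() enqueues it downstream
        for (int attempts = 0; attempts < 1000; attempts++) {
            if (triggerMerge(mergeIn, bin, out)) return out.poll();
        }
        return "stalled";             // 9 FlowFiles never reached the merge step
    }

    // Drain-first: run ReplaceText until its queue is empty, then MergeContent.
    static String drainFirst() {
        Queue<String> splitOut = new ArrayDeque<>();
        for (int i = 0; i < BIN_SIZE; i++) splitOut.add("ff" + i);
        Queue<String> mergeIn = new ArrayDeque<>(), bin = new ArrayDeque<>(), out = new ArrayDeque<>();
        while (!splitOut.isEmpty()) mergeIn.add(splitOut.poll()); // all 10 pass ReplaceText
        triggerMerge(mergeIn, bin, out);
        return out.poll();
    }

    public static void main(String[] args) {
        System.out.println(depthFirst()); // stalled
        System.out.println(drainFirst()); // merged:10
    }
}
```

The bounded loop in `depthFirst` is only there to make the simulation terminate; in the scenario described in the issue, the framework would spin on MergeContent indefinitely.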
> Additionally, any dataflow that has a self-loop such as a failure 

[jira] [Commented] (NIFI-8469) Change ProcessSession so that commits are asynchronous

2021-05-19 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17347647#comment-17347647
 ] 

ASF subversion and git services commented on NIFI-8469:
---

Commit ecacfdaa4c663ff2831dd13a840aabd0ff3349e8 in nifi's branch 
refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=ecacfda ]

NIFI-8469: Introduced ProcessSession.commitAsync and updated processors to use 
it. Deprecated ProcessSession.commit()
- Updated Mock Framework to now fail tests that use ProcessSession.commit() 
unless they first call TestRunner.setAllowSynchronousSessionCommits(true)
- Updated stateless nifi in order to make use of async session commits
- Fixed bug that caused stateless to not properly handle Additional Classpath 
URLs and bug that caused warnings about validation to get generated when a flow 
that used controller services was initialized. While this is not really in 
scope of NIFI-8469, it was found when testing and blocked further progress so 
addressed here.
- If a Processor fails to progress when run from stateless, trigger from the start 
of the flow until that is no longer the case
- Introduced notion of TransactionThresholds that can limit the amount of data 
that a flow will bring in for a given invocation of stateless dataflow
- Several new system-level tests
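As a sketch of the migration this commit describes, assuming only the callback-taking shape of commitAsync mentioned above; the `AsyncSession` interface and `InMemorySession` class below are hypothetical stand-ins, not the real ProcessSession:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the deprecated-vs-new commit pattern. `AsyncSession` mirrors the
// shape of a commitAsync(Runnable) method but is NOT the real NiFi interface.
public class CommitAsyncSketch {

    interface AsyncSession {
        @Deprecated void commit();            // old: blocks until persisted
        void commitAsync(Runnable onSuccess); // new: callback runs once persisted
    }

    // Trivial in-memory session: "persistence" completes immediately, so the
    // callback is invoked inline; a real runtime may defer it until the whole
    // flow has finished with the data.
    static class InMemorySession implements AsyncSession {
        final List<String> events = new ArrayList<>();
        public void commit() { events.add("commit"); }
        public void commitAsync(Runnable onSuccess) {
            events.add("commitAsync");
            onSuccess.run();
        }
    }

    public static void main(String[] args) {
        InMemorySession session = new InMemorySession();
        // Old style: acknowledge the source only after commit() returns.
        session.commit();
        session.events.add("ack");
        // New style: the acknowledgment moves into the completion callback,
        // so a runtime like Stateless NiFi can decide when it actually fires.
        session.commitAsync(() -> session.events.add("ack"));
        System.out.println(session.events);
    }
}
```

The key design point is that the caller no longer assumes the data is durable when the method returns; durability is signaled through the callback instead.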


> Change ProcessSession so that commits are asynchronous
> --
>
> Key: NIFI-8469
> URL: https://issues.apache.org/jira/browse/NIFI-8469
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework, Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Critical
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>

[GitHub] [nifi] markap14 closed pull request #5042: NIFI-8469: Updated ProcessSession to have commitAsync methods and deprecated commit; updated stateless to make use of this improvement

2021-05-19 Thread GitBox


markap14 closed pull request #5042:
URL: https://github.com/apache/nifi/pull/5042


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markap14 commented on pull request #5042: NIFI-8469: Updated ProcessSession to have commitAsync methods and deprecated commit; updated stateless to make use of this improvement

2021-05-19 Thread GitBox


markap14 commented on pull request #5042:
URL: https://github.com/apache/nifi/pull/5042#issuecomment-844071657


   Thanks @gresockj. I've seen no other reviews and no objections anywhere. 
After putting it away for a few weeks I came back and did a thorough code 
review myself as well. And the Github Actions failure was due to a failure 
pulling a dependency, which is unrelated; the other two passed. Based on all of 
that I'll go ahead and merge to `main`. Thanks for the thorough reviews 
@gresockj !






[GitHub] [nifi] exceptionfactory commented on a change in pull request #5083: NIFI-8613: Improve FlattenJson Processor

2021-05-19 Thread GitBox


exceptionfactory commented on a change in pull request #5083:
URL: https://github.com/apache/nifi/pull/5083#discussion_r635195935



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/FlattenJson.java
##
@@ -157,25 +191,35 @@ public void onTrigger(final ProcessContext context, final 
ProcessSession session
 final String mode = context.getProperty(FLATTEN_MODE).getValue();
 final FlattenMode flattenMode = getFlattenMode(mode);
 
-String separator = 
context.getProperty(SEPARATOR).evaluateAttributeExpressions(flowFile).getValue();
-
+final String separator = 
context.getProperty(SEPARATOR).evaluateAttributeExpressions(flowFile).getValue();
+final String returnType = context.getProperty(RETURN_TYPE).getValue();
+final PrintMode printMode = 
context.getProperty(PRETTY_PRINT).asBoolean() ? PrintMode.PRETTY : 
PrintMode.MINIMAL;
 
 try {
-ByteArrayOutputStream bos = new ByteArrayOutputStream();
-session.exportTo(flowFile, bos);
-bos.close();
-
-String raw = new String(bos.toByteArray());
-final String flattened = new JsonFlattener(raw)
-.withFlattenMode(flattenMode)
-.withSeparator(separator.charAt(0))
-.withStringEscapePolicy(() -> 
StringEscapeUtils.ESCAPE_JSON)
-.flatten();
-
-flowFile = session.write(flowFile, os -> 
os.write(flattened.getBytes()));
+final StringBuilder contents = new StringBuilder();
+session.read(flowFile, in -> contents.append(IOUtils.toString(in, 
Charset.defaultCharset())));
+
+final String resultedJson;
+if (returnType.equals(RETURN_TYPE_FLATTEN)) {
+resultedJson = new JsonFlattener(contents.toString())
+.withFlattenMode(flattenMode)
+.withSeparator(separator.charAt(0))
+.withStringEscapePolicy(() -> 
StringEscapeUtils.ESCAPE_JSON)
+.withPrintMode(printMode)
+.flatten();
+} else {
+resultedJson = new JsonUnflattener(contents.toString())
+.withFlattenMode(flattenMode)
+.withSeparator(separator.charAt(0))
+.withPrintMode(printMode)
+.unflatten();
+}
+
+flowFile = session.write(flowFile, out -> 
out.write(resultedJson.getBytes()));

Review comment:
   Related to the comment on handling input, specifying a character set on 
String.getBytes() would clarify the expected output as opposed to relying on 
the system defaults.
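A small illustration of the reviewer's point, using the standard `java.nio.charset` API (the JSON literal is made up):

```java
import java.nio.charset.StandardCharsets;

// Shows why an explicit charset on String.getBytes() makes the output
// deterministic across JVMs instead of depending on the platform default.
public class CharsetExample {
    public static void main(String[] args) {
        String json = "{\"name\":\"caf\u00e9\"}";
        byte[] implicit = json.getBytes();                       // platform-dependent
        byte[] explicit = json.getBytes(StandardCharsets.UTF_8); // always UTF-8
        // 'é' is 2 bytes in UTF-8 but 1 byte in ISO-8859-1, so the two arrays
        // differ on any JVM whose default charset is not UTF-8.
        System.out.println(explicit.length);
    }
}
```

The same reasoning applies to the read side: `IOUtils.toString(in, Charset.defaultCharset())` ties the parsed content to the host configuration, so an explicit charset there keeps input and output consistent.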

##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/FlattenJson.java
##
@@ -157,25 +191,35 @@ public void onTrigger(final ProcessContext context, final 
ProcessSession session
 final String mode = context.getProperty(FLATTEN_MODE).getValue();
 final FlattenMode flattenMode = getFlattenMode(mode);
 
-String separator = 
context.getProperty(SEPARATOR).evaluateAttributeExpressions(flowFile).getValue();
-
+final String separator = 
context.getProperty(SEPARATOR).evaluateAttributeExpressions(flowFile).getValue();
+final String returnType = context.getProperty(RETURN_TYPE).getValue();
+final PrintMode printMode = 
context.getProperty(PRETTY_PRINT).asBoolean() ? PrintMode.PRETTY : 
PrintMode.MINIMAL;
 
 try {
-ByteArrayOutputStream bos = new ByteArrayOutputStream();
-session.exportTo(flowFile, bos);
-bos.close();
-
-String raw = new String(bos.toByteArray());
-final String flattened = new JsonFlattener(raw)
-.withFlattenMode(flattenMode)
-.withSeparator(separator.charAt(0))
-.withStringEscapePolicy(() -> 
StringEscapeUtils.ESCAPE_JSON)
-.flatten();
-
-flowFile = session.write(flowFile, os -> 
os.write(flattened.getBytes()));
+final StringBuilder contents = new StringBuilder();
+session.read(flowFile, in -> contents.append(IOUtils.toString(in, 
Charset.defaultCharset())));
+
+final String resultedJson;
+if (returnType.equals(RETURN_TYPE_FLATTEN)) {
+resultedJson = new JsonFlattener(contents.toString())
+.withFlattenMode(flattenMode)
+.withSeparator(separator.charAt(0))

Review comment:
   The separator character could be declared once outside of the 
conditional and reused as opposed to calling `separator.charAt(0)` in both 
conditional blocks.

##
File path: 

[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1044: MINIFICPP-1526 Add ConsumeJournald to consume systemd journal messages

2021-05-19 Thread GitBox


szaszm commented on a change in pull request #1044:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1044#discussion_r635062484



##
File path: CMakeLists.txt
##
@@ -564,6 +560,14 @@ if (ENABLE_ALL OR ENABLE_AZURE)
createExtension(AZURE-EXTENSIONS "AZURE EXTENSIONS" "This enables Azure 
support" "extensions/azure" "${TEST_DIR}/azure-tests")
 endif()
 
+## Add the systemd extension
+if(CMAKE_SYSTEM_NAME STREQUAL "Linux")
+   option(DISABLE_SYSTEMD "Disables the systemd extension." OFF)

Review comment:
   changed in f1cd129 








[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1044: MINIFICPP-1526 Add ConsumeJournald to consume systemd journal messages

2021-05-19 Thread GitBox


szaszm commented on a change in pull request #1044:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1044#discussion_r635058808



##
File path: CMakeLists.txt
##
@@ -564,6 +560,14 @@ if (ENABLE_ALL OR ENABLE_AZURE)
createExtension(AZURE-EXTENSIONS "AZURE EXTENSIONS" "This enables Azure 
support" "extensions/azure" "${TEST_DIR}/azure-tests")
 endif()
 
+## Add the systemd extension
+if(CMAKE_SYSTEM_NAME STREQUAL "Linux")

Review comment:
   Awesome, thank you








[jira] [Resolved] (MINIFICPP-1562) Failing CI due to corrupted cache on Windows-2019

2021-05-19 Thread Martin Zink (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martin Zink resolved MINIFICPP-1562.

Resolution: Fixed

> Failing CI due to corrupted cache on Windows-2019
> -
>
> Key: MINIFICPP-1562
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1562
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Martin Zink
>Assignee: Martin Zink
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The windows-vs2019 CI is failing due to a corrupted clcache.
> Unfortunately, GitHub doesn't have a clean way to clear the cache.
> {code:java}
> GitHub will remove any cache entries that have not been accessed in over 7 
> days. There is no limit on the number of caches you can store, but the total 
> size of all caches in a repository is limited to 5 GB. If you exceed this 
> limit, GitHub will save your cache but will begin evicting caches until the 
> total size is less than 5 GB.
> {code}
> But since the cache is in use, I don't think GitHub will clear it.
> A quick and dirty fix: changing the key in the .github/ci.yml file 
> would fix the issue.
> [https://github.community/t/how-to-clear-cache-in-github-actions/129038/3]
> If this issue comes up again maybe we could implement a workflow that clears 
> the caches, with 
> [https://github.blog/changelog/2020-07-06-github-actions-manual-triggers-with-workflow_dispatch/]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] martinzink closed pull request #1075: MINIFICPP-1562: Failing CI due to corrupted cache on Windows-2019

2021-05-19 Thread GitBox


martinzink closed pull request #1075:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1075


   






[GitHub] [nifi-minifi-cpp] lordgamez commented on pull request #1075: MINIFICPP-1562: Failing CI due to corrupted cache on Windows-2019

2021-05-19 Thread GitBox


lordgamez commented on pull request #1075:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1075#issuecomment-843832937


   I'm not sure it is needed to clear the cache anymore. After merging a new 
commit to the `main` branch the cache was updated and after rebasing my change 
#1073 the windows-vs2019 CI job passed. 






[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #1057: MINIFICPP-1537 - Log heartbeats on demand

2021-05-19 Thread GitBox


fgerlits commented on a change in pull request #1057:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1057#discussion_r634979684



##
File path: libminifi/src/c2/protocols/RESTProtocol.cpp
##
@@ -318,133 +178,7 @@ bool RESTProtocol::containsPayload(const C2Payload &payload) {
   return false;
 }
 
-rapidjson::Value RESTProtocol::serializeConnectionQueues(const C2Payload 
&payload, std::string &label, rapidjson::Document::AllocatorType &alloc) {
-  rapidjson::Value json_payload(payload.isContainer() ? rapidjson::kArrayType 
: rapidjson::kObjectType);
-
-  C2Payload adjusted(payload.getOperation(), payload.getIdentifier(), 
payload.isRaw());
-
-  auto name = payload.getLabel();
-  std::string uuid;
-  C2ContentResponse updatedContent(payload.getOperation());
-  for (const C2ContentResponse &content : payload.getContent()) {
-for (const auto& op_arg : content.operation_arguments) {
-  if (op_arg.first == "uuid") {
-uuid = op_arg.second.to_string();
-  }
-  updatedContent.operation_arguments.insert(op_arg);
-}
-  }
-  updatedContent.name = uuid;
-  adjusted.setLabel(uuid);
-  adjusted.setIdentifier(uuid);
-  c2::AnnotatedValue nd;
-  // name should be what was previously the TLN ( top level node )
-  nd = name;
-  updatedContent.operation_arguments.insert(std::make_pair("name", nd));
-  // the rvalue reference is an unfortunate side effect of the underlying API 
decision.
-  adjusted.addContent(std::move(updatedContent), true);
-  mergePayloadContent(json_payload, adjusted, alloc);
-  label = uuid;
-  return json_payload;
-}
-
-rapidjson::Value RESTProtocol::serializeJsonPayload(const C2Payload &payload, 
rapidjson::Document::AllocatorType &alloc) {
-  // get the name from the content
-  rapidjson::Value json_payload(payload.isContainer() ? rapidjson::kArrayType 
: rapidjson::kObjectType);
-
-  std::vector<ValueObject> children;
-
-  bool isQueue = payload.getLabel() == "queues";
-
-  for (const auto &nested_payload : payload.getNestedPayloads()) {
-std::string label = nested_payload.getLabel();
-rapidjson::Value* child_payload = new rapidjson::Value(isQueue ? 
serializeConnectionQueues(nested_payload, label, alloc) : 
serializeJsonPayload(nested_payload, alloc));
-
-if (nested_payload.isCollapsible()) {
-  bool combine = false;
-  for (auto &subordinate : children) {
-if (subordinate.name == label) {
-  subordinate.values.push_back(child_payload);
-  combine = true;
-  break;
-}
-  }
-  if (!combine) {
-ValueObject obj;
-obj.name = label;
-obj.values.push_back(child_payload);
-children.push_back(obj);
-  }
-} else {
-  ValueObject obj;
-  obj.name = label;
-  obj.values.push_back(child_payload);
-  children.push_back(obj);
-}
-  }
-
-  for (auto child_vector : children) {
-rapidjson::Value children_json;
-rapidjson::Value newMemberKey = getStringValue(child_vector.name, alloc);
-if (child_vector.values.size() > 1) {
-  children_json.SetArray();
-  for (auto child : child_vector.values) {
-if (json_payload.IsArray())
-  json_payload.PushBack(child->Move(), alloc);
-else
-  children_json.PushBack(child->Move(), alloc);
-  }
-  if (!json_payload.IsArray())
-json_payload.AddMember(newMemberKey, children_json, alloc);
-} else if (child_vector.values.size() == 1) {
-  rapidjson::Value* first = child_vector.values.front();
-  if (first->IsObject() && first->HasMember(newMemberKey)) {
-if (json_payload.IsArray())
-  json_payload.PushBack((*first)[newMemberKey].Move(), alloc);
-else
-  json_payload.AddMember(newMemberKey, (*first)[newMemberKey].Move(), 
alloc);
-  } else {
-if (json_payload.IsArray()) {
-  json_payload.PushBack(first->Move(), alloc);
-} else {
-  json_payload.AddMember(newMemberKey, first->Move(), alloc);
-}
-  }
-}
-for (rapidjson::Value* child : child_vector.values)
-  delete child;
-  }
-
-  mergePayloadContent(json_payload, payload, alloc);
-  return json_payload;
-}
-
-std::string RESTProtocol::getOperation(const C2Payload &payload) {
-  switch (payload.getOperation()) {
-case Operation::ACKNOWLEDGE:
-  return "acknowledge";
-case Operation::HEARTBEAT:
-  return "heartbeat";
-case Operation::RESTART:
-  return "restart";
-case Operation::DESCRIBE:
-  return "describe";
-case Operation::STOP:
-  return "stop";
-case Operation::START:
-  return "start";
-case Operation::UPDATE:
-  return "update";
-case Operation::PAUSE:
-  return "pause";
-case Operation::RESUME:
-  return "resume";
-default:
-  return "heartbeat";
-  }
-}
-
-Operation RESTProtocol::stringToOperation(const std::string str) {
+Operation RESTProtocol::stringToOperation(const std::string& str) {

Review comment:
   It looks like changes are needed in other classes to use the new enum