[jira] [Updated] (MINIFICPP-1437) ExecutePythonProcessorTests fail transiently
[ https://issues.apache.org/jira/browse/MINIFICPP-1437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1437: Labels: MiNiFi-CPP-Hygiene (was: ) > ExecutePythonProcessorTests fail transiently > > > Key: MINIFICPP-1437 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1437 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Gabor Gyimesi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Attachments: ExecutePythonProcessorTests-mac.log > > > As a rare occurrence ExecutePythonProcessorTests fail with SegFault in Github > actions. See more information in attached logs. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1434) CSite2SiteTests transiently fail on Mac
[ https://issues.apache.org/jira/browse/MINIFICPP-1434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1434: Labels: MiNiFi-CPP-Hygiene (was: ) > CSite2SiteTests transiently fail on Mac > --- > > Key: MINIFICPP-1434 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1434 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Gabor Gyimesi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Attachments: csite2sitetests-mac.log > > > CSite2SiteTests sometimes fail with SIGPIPE***Exception in Mac environments. > Unfortunately no logs are generated. Needs to be reproduced and fixed. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1438) FlowControllerTests fail transiently
[ https://issues.apache.org/jira/browse/MINIFICPP-1438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1438: Labels: MiNiFi-CPP-Hygiene (was: ) > FlowControllerTests fail transiently > > > Key: MINIFICPP-1438 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1438 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Gabor Gyimesi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Attachments: FlowControllerTests-ubuntu1604.log, > FlowControllerTests-win.log > > > FlowControllerTests transiently fail in Github actions with multiple > problems. See attached logs for more information. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1436) VerifyInvokeHTTPTest and VerifyInvokeHTTPTestSecure fail transiently
[ https://issues.apache.org/jira/browse/MINIFICPP-1436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1436: Labels: MiNiFi-CPP-Hygiene (was: ) > VerifyInvokeHTTPTest and VerifyInvokeHTTPTestSecure fail transiently > > > Key: MINIFICPP-1436 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1436 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Gabor Gyimesi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Attachments: VerifyInvokeHTTPTest-mac.log, > VerifyInvokeHTTPTestSecure-win.log > > > VerifyInvokeHTTPTest and VerifyInvokeHTTPTestSecure sometimes fail in Github > actions. According to the logs the curl call becomes idle after 1 second and > reaches a timeout. See attached logs for more information. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1435) SFTP tests transiently fail
[ https://issues.apache.org/jira/browse/MINIFICPP-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1435: Labels: MiNiFi-CPP-Hygiene (was: ) > SFTP tests transiently fail > > > Key: MINIFICPP-1435 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1435 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Gabor Gyimesi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Attachments: SFTPTests-ubuntu1604.log > > > SFTP tests rarely fail in CI. According to the logs the port file of the SFTP > server does not get created, so the server probably does not start. See > attachment for logs. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (MINIFICPP-1437) ExecutePythonProcessorTests fail transiently
[ https://issues.apache.org/jira/browse/MINIFICPP-1437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi reassigned MINIFICPP-1437: --- Assignee: Adam Hunyadi > ExecutePythonProcessorTests fail transiently > > > Key: MINIFICPP-1437 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1437 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Gabor Gyimesi >Assignee: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Attachments: ExecutePythonProcessorTests-mac.log > > > As a rare occurrence ExecutePythonProcessorTests fail with SegFault in Github > actions. See more information in attached logs. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (MINIFICPP-1437) ExecutePythonProcessorTests fail transiently
[ https://issues.apache.org/jira/browse/MINIFICPP-1437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17249660#comment-17249660 ] Adam Hunyadi commented on MINIFICPP-1437: - It is surprising that the test run shows that it succeeded with all its assertions. I ran the same test locally, pruned the outputs and compared them word by word using {code:bash} git --no-pager diff --unified=0 --no-index --word-diff-regex=. --patience transient_execute_python_processor_fail.txt execute_python_success.txt > word_diff.log {code} The only differences it shows are in auto-generated paths and memory addresses, which is to be expected (see example). > ExecutePythonProcessorTests fail transiently > > > Key: MINIFICPP-1437 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1437 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Gabor Gyimesi >Assignee: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Attachments: ExecutePythonProcessorTests-mac.log > > > As a rare occurrence ExecutePythonProcessorTests fail with SegFault in Github > actions. See more information in attached logs. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (MINIFICPP-1437) ExecutePythonProcessorTests fail transiently
[ https://issues.apache.org/jira/browse/MINIFICPP-1437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17249660#comment-17249660 ] Adam Hunyadi edited comment on MINIFICPP-1437 at 12/15/20, 12:29 PM: - It is surprising that the test run shows that it succeeded with all its assertions. I ran the same test locally, pruned the outputs and compared them word by word using {code:bash} git --no-pager diff --unified=0 --no-index --word-diff-regex=. --patience transient_execute_python_processor_fail.txt execute_python_success.txt > word_diff.log {code} The only differences it shows are in auto-generated paths and memory addresses, which is to be expected (see attachment). was (Author: hunyadi): It is surprising that the test run shows that it succeeded with all its assertions. I ran the same test locally, pruned the outputs and compared them word by word using {code:bash} git --no-pager diff --unified=0 --no-index --word-diff-regex=. --patience transient_execute_python_processor_fail.txt execute_python_success.txt > word_diff.log {code} The only differences it shows are in auto-generated paths and memory addresses, which is to be expected (see example). > ExecutePythonProcessorTests fail transiently > > > Key: MINIFICPP-1437 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1437 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Gabor Gyimesi >Assignee: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Attachments: ExecutePythonProcessorTests-mac.log > > > As a rare occurrence ExecutePythonProcessorTests fail with SegFault in Github > actions. See more information in attached logs. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1437) ExecutePythonProcessorTests fail transiently
[ https://issues.apache.org/jira/browse/MINIFICPP-1437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1437: Attachment: word_diff.log > ExecutePythonProcessorTests fail transiently > > > Key: MINIFICPP-1437 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1437 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Gabor Gyimesi >Assignee: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Attachments: ExecutePythonProcessorTests-mac.log, word_diff.log > > > As a rare occurrence ExecutePythonProcessorTests fail with SegFault in Github > actions. See more information in attached logs. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (MINIFICPP-1437) ExecutePythonProcessorTests fail transiently
[ https://issues.apache.org/jira/browse/MINIFICPP-1437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi reassigned MINIFICPP-1437: --- Assignee: (was: Adam Hunyadi) > ExecutePythonProcessorTests fail transiently > > > Key: MINIFICPP-1437 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1437 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Gabor Gyimesi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Attachments: ExecutePythonProcessorTests-mac.log, word_diff.log > > > As a rare occurrence ExecutePythonProcessorTests fail with SegFault in Github > actions. See more information in attached logs. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1444) Remove the SentimentAnalyzer python dependence on the unpinned google cloud APIs
Adam Hunyadi created MINIFICPP-1444: --- Summary: Remove the SentimentAnalyzer python dependence on the unpinned google cloud APIs Key: MINIFICPP-1444 URL: https://issues.apache.org/jira/browse/MINIFICPP-1444 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: 0.7.0 Reporter: Adam Hunyadi Fix For: 1.0.0 *Background:* Building a clean MiNiFi agent with Python enabled can produce errors preventing startup. These originate from the imports of SentimentAnalyzer.py. > ImportError: cannot import name 'enums' from 'google.cloud.language' It seems this is part of a google cloud API whose version is not properly pinned. On looking up the google cloud source, our imports would need to be modified to match paths like these:
{code:bash|title=Example google cloud import paths}
➜ cloud egrep -R "import.*enums" .
./language_v1beta2/gapic/language_service_client.py:from google.cloud.language_v1beta2.gapic import enums
./language_v1/gapic/language_service_client.py:from google.cloud.language_v1.gapic import enums
➜ cloud egrep -R "import.*types" .
./translate.py:from google.cloud.translate_v3 import types
{code}
*Proposal:* One would need to investigate the use cases of the SentimentAnalyzer script and either remove it, or remove the possibility of the script preventing MiNiFi startup. -- This message was sent by Atlassian Jira (v8.3.4#803005)
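One way to stop the fragile import from blocking startup, sketched below as an illustration only (the `optional_import` helper, the `SENTIMENT_AVAILABLE` flag, and the `analyze_sentiment` stub are assumptions, not the project's actual code), is to treat `google.cloud.language` as an optional dependency:

```python
# Sketch: treat google.cloud.language as an optional dependency so a broken
# or version-mismatched install disables sentiment analysis instead of
# preventing MiNiFi startup. All names here are illustrative assumptions.
import importlib


def optional_import(module_name):
    """Return the imported module, or None when it cannot be imported."""
    try:
        return importlib.import_module(module_name)
    except ImportError:
        return None


language = optional_import("google.cloud.language_v1")
SENTIMENT_AVAILABLE = language is not None


def analyze_sentiment(text):
    if not SENTIMENT_AVAILABLE:
        raise RuntimeError("google.cloud.language is unavailable; sentiment analysis is disabled")
    # ... call the real API here ...
```

With this pattern, an environment without the (correctly versioned) google cloud package still imports the module cleanly; only the sentiment feature itself is disabled.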
[jira] [Created] (MINIFICPP-1445) Clean up docker integration test framework
Adam Hunyadi created MINIFICPP-1445: --- Summary: Clean up docker integration test framework Key: MINIFICPP-1445 URL: https://issues.apache.org/jira/browse/MINIFICPP-1445 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: 0.8.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 1.0.0 *Background:* Docker integration tests are currently living in a single, long __init__.py file. *Proposal:* This can be easily refactored by having the components separated into python modules based on their functionality. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1445) Clean up docker integration test framework
[ https://issues.apache.org/jira/browse/MINIFICPP-1445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1445: Description: *Background:* Docker integration tests are currently living in two long __init__.py files. *Proposal:* This can be easily refactored by having the components separated into python modules based on their functionality. was: *Background:* Docker integration tests are currently living in a single, long __init__.py file. *Proposal:* This can be easily refactored by having the components separated into python modules based on their functionality. > Clean up docker integration test framework > -- > > Key: MINIFICPP-1445 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1445 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.8.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 1.0.0 > > Time Spent: 10m > Remaining Estimate: 0h > > *Background:* > Docker integration tests are currently living in two long __init__.py > files. > *Proposal:* > This can be easily refactored by having the components separated into python > modules based on their functionality. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1458) Register and test hidden InvokeHTTP properties
Adam Hunyadi created MINIFICPP-1458: --- Summary: Register and test hidden InvokeHTTP properties Key: MINIFICPP-1458 URL: https://issues.apache.org/jira/browse/MINIFICPP-1458 Project: Apache NiFi MiNiFi C++ Issue Type: Bug Affects Versions: 0.7.0 Reporter: Adam Hunyadi Fix For: 1.0.0 *Background:* Some of the properties of InvokeHTTP (e.g. "Put Response Body in Attribute") are not registered and are therefore reported as incorrect when a user tries to set them. They are also missing from the documentation page. *Proposal:* We should register the new properties and ensure they work as expected. Testing requirements and acceptance criteria are to be refined. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (MINIFICPP-1320) Clean up connectionMap usages
[ https://issues.apache.org/jira/browse/MINIFICPP-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi reassigned MINIFICPP-1320: --- Assignee: (was: Adam Hunyadi) > Clean up connectionMap usages > - > > Key: MINIFICPP-1320 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1320 > Project: Apache NiFi MiNiFi C++ > Issue Type: Task >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Attachments: Screenshot 2020-08-05 at 11.13.50.png > > > *Background:* > Currently we propagate (eg. for serialization) connections via two methods: > # Using the obsolete {{{color:#403294}{{connectionMaps}}{color}}} > # Using the {{{color:#403294}{{Repository::containers}}{color}}} member field > Repository.h even has these declared next to each other: > !Screenshot 2020-08-05 at 11.13.50.png|width=447,height=54! > *Proposal:* > As 2. should be included in 1., we should investigate and see if we can > combine these two maps in a sensible way. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (MINIFICPP-1320) Clean up connectionMap usages
[ https://issues.apache.org/jira/browse/MINIFICPP-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi reassigned MINIFICPP-1320: --- Assignee: Adam Hunyadi > Clean up connectionMap usages > - > > Key: MINIFICPP-1320 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1320 > Project: Apache NiFi MiNiFi C++ > Issue Type: Task >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Attachments: Screenshot 2020-08-05 at 11.13.50.png > > > *Background:* > Currently we propagate (eg. for serialization) connections via two methods: > # Using the obsolete {{{color:#403294}{{connectionMaps}}{color}}} > # Using the {{{color:#403294}{{Repository::containers}}{color}}} member field > Repository.h even has these declared next to each other: > !Screenshot 2020-08-05 at 11.13.50.png|width=447,height=54! > *Proposal:* > As 2. should be included in 1., we should investigate and see if we can > combine these two maps in a sensible way. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (MINIFICPP-1320) Clean up connectionMap usages
[ https://issues.apache.org/jira/browse/MINIFICPP-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi reassigned MINIFICPP-1320: --- Assignee: (was: Adam Hunyadi) > Clean up connectionMap usages > - > > Key: MINIFICPP-1320 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1320 > Project: Apache NiFi MiNiFi C++ > Issue Type: Task >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Attachments: Screenshot 2020-08-05 at 11.13.50.png > > > *Background:* > Currently we propagate (eg. for serialization) connections via two methods: > # Using the obsolete {{{color:#403294}{{connectionMaps}}{color}}} > # Using the {{{color:#403294}{{Repository::containers}}{color}}} member field > Repository.h even has these declared next to each other: > !Screenshot 2020-08-05 at 11.13.50.png|width=447,height=54! > *Proposal:* > As 2. should be included in 1., we should investigate and see if we can > combine these two maps in a sensible way. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (MINIFICPP-1264) Add getter interface helper
[ https://issues.apache.org/jira/browse/MINIFICPP-1264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi resolved MINIFICPP-1264. - Resolution: Won't Fix Not fixed; we should extend the interfaces for the commonly used getters instead. > Add getter interface helper > --- > > Key: MINIFICPP-1264 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1264 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 0.9.0 > >
> *Background:*
> In our codebase we have quite a lot of getter functions that follow this signature:
> {code:c++|title=Current Interface}
> void getMemberField(Type& argToModify);
> bool getNullableMemberField(OtherType& argToModify);
> {code}
> Most developers would much rather use an interface that looks like this, though:
> {code:c++|title=Proposed Interface}
> Type& getMemberField();
> const Type& getMemberField() const;
> std::optional<OtherType> getNullableMemberField();
> {code}
> The problem with the former version is that in some cases it makes the code unnecessarily verbose. An example from our codebase:
> {code:c++|title=Example}
> // Version 1 (current interface)
> utils::Identifier sourceUuid;
> source->getUUID(sourceUuid);
> connection->setSourceUUID(sourceUuid);
> // Version 2 (desired usage)
> connection->setSourceUUID(source->getUUID());
> {code}
> Unfortunately, more often than not this is the exact pattern the getters appear in.
> *Proposal:*
> As we do not want to break the API, we cannot change the getters. However, we can resort to implementing a helper utility. 
A godbolt implementation for a > convenience function is available here: > [[Proof of > Concept]|https://godbolt.org/#z:OYLghAFBqd5QCxAYwPYBMCmBRdBLAF1QCcAaPECAM1QDsCBlZAQwBtMQBGAFlICsupVs1qhkAUgBMAISnTSAZ0ztkBPHUqZa6AMKpWAVwC2tLgDZSW9ABk8tTADljAI0zEuAZlIAHVAsLqtHqGJuY%2BfgF0tvZORq7unF5KKmp0DATMxATBxqacFsmYqoHpmQTRji5unooZWTmh%2BbVlFbHxngCUiqgGxMgcAORSHnbIhlgA1OIeOgoExHbA09jiAAwAgsOj45hTM%2BpzxJjMRstrm5IjtGMGk9M6BmqshACeZxtb1zt7OkaYRiQ3h4Vh9Lttbrt7qhvKlaGx3hcrjc7jMAG5FIjEBGfZGQmZUAzXWHw4HnHHfe4EF7eTAAfXmzEICmxG0OBlUEwA8qx0C8dMIFMyPgB2WQbCYSiao1B4dATYCYAjrCB2AhSMwTTLADpTUWa4jAPYAEU102kuqN50lE1VmuNE04kjNZOFlo8Ys2GwI/28wm9PzGzEFEwAYssJmyOesDQAVamYZ1en1%2BvE6Kk0uF/CYAJUVpAm6a0J12/IU%2BcLmcwADoaxNo8BmaTWfN2QQ67H4/dc22IKWQCAAFQdCD1hQ1qsdcMuj3W71GX3Mf33fwALzpbYcCOtkbbCoItK1upn1slBn8ogL8ftFeLEYI6H7BAMvrpyn%2BWjVMwc%2BbmD5AT5fe5R3HM5gUfTt3StSVxFdRNNlgyCPiTecUx%2BG8s05Zw%2BHLeNK1DQkER3HNFV6WgADV4RFY8JTPRZ22AXMqGvXDb3rOMaXuEMCLA/9k0XXY9wPA17lWZZwI4xD1mtWiL3rZiM1vX9%2ByOAF0VpI4qDcLR%2BiAg1GLE/8II9KCJW7UiKNYCAdWmE0sHYb04OtMziHItgIEwvh1QmVAsPzLjaAmAlaB1EBvKw2kIHVHy%2BC6QLCQioLrNFGDLXFaC0olaE3EXEh6KskyjwK605K1RyTwlJSQDsaUAGtMGoeL82i2l8y1SdJPKiUjifFz9SWDroNgjKJm8BZUX4kACo8gcwr4Wkyolfy4toebJJSuDzltIxGVoKzCuG7leX5IMFALTA5gWiZutI4ietc1gYPNb05nzdVDr5AUFH7PcpPWtahs9QGySQoGQeB0GIfBgYulYEABgAVgGUhTAGVYkdQOGdDkOQIx6PpIUuTgkYIOG0Y6LoapAbhuCrSRuAADjMeGPAATm4VZJHpjx6bZoQ4e4JGUbR0gMYGJGvtWUgSdR6HSDgWAYEQFBUHnPB2DICgIDQVX1ZQYRRE4VYjdIKg1e9YgvogZw4aJ0hnDsTIXhtpHtb%2BehOVoVgnZl0gsG20R2FJpH8COYp0S%2Bn3MAADyKR5BmF1VlGdoQ8GcYhHb0LAg6lhYjGdroaHoJg2A4Hh%2BEEfWxGxmQU%2BcL7YErEAG1YUh0XcZ5vXJkWYUCCOAFpfxsiQZDkThhQmPvOQ8cW32KDQICsBo8i8KxWiqdwLF8fxYSXmot8iWg17iaoCln2FSnqfRchqQo59oC/yjsSpj435pL5CZe38fmJ15AMwugUHjfoXAYZw0RsjbOoso6Mz7mYbgExAwXkNlWVYKCJgQFwIQXKwxOD5j0DrNwUxCY6ixiPGQxMg5d0pvDSWsMBgC1IHnOBKD6aOnhvDYUwpDarDgcKCBPtRbixAJLaWZM5aKwgEgHoBBvCPHIJQbW3g1bVFwZgfAmJBCF0YCwQO9NSAAHd07eHznzBGgtIFwxwRMfRhAEATGgWYWB8DEGGmQag4WojoZdAQMcLA7grKmIYXnDw8Mqws04PTYU1NJCcxZizEJvAhbozhkIkRlCKYgHYVWYU8NEjcDMKseGLMuFmFw
XQ6e/DhaCKluk0xkhzECJSTUmWXc27%2BA0NwIAA%3D%3D] > We can introduce this and simplify later usages of getters. > With this, the syntax would simplify to: > {code:c++|title=Using a helper} > connection->setSourceUUID(ReturnVal(source, &Processor::getUUID)); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1479) Rework integration tests for HashContent
Adam Hunyadi created MINIFICPP-1479: --- Summary: Rework integration tests for HashContent Key: MINIFICPP-1479 URL: https://issues.apache.org/jira/browse/MINIFICPP-1479 Project: Apache NiFi MiNiFi C++ Issue Type: Bug Affects Versions: 0.7.0 Reporter: Adam Hunyadi Fix For: 1.0.0 *Background:* HashContent is expected to add the hash value of the content of a flowfile to a flowfile attribute. Currently, none of the tests check for the presence of such an attribute: the unit tests only seem to test for logging, and the docker-based integration test only checks that the flowfile content is unmodified. *Proposal:* We should create proper integration tests for HashContent:
{code:python|title=Example feature definition}
Scenario: HashContent adds hash attribute to flowfiles
  Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
  And a file with the content "<content>" is present in "/tmp/input"
  And a HashContent processor with the "Hash Attribute" property set to "hash"
  And a HashContent processor with the "Hash Algorithm" property set to <hash_algorithm>
  And a PutFile processor with the "Directory" property set to "/tmp/output"
  And the "success" relationship of the GetFile processor is connected to the HashContent
  And the "success" relationship of the HashContent processor is connected to the PutFile
  When the MiNiFi instance starts up
  Then a flowfile with the content "<content>" is placed in the monitored directory in less than 10 seconds
  And the flowfile has an attribute called "hash" set to <hash_value>

  Examples:
  | content | hash_algorithm | hash_value |
  | test    | MD5            | 098f6bcd4621d373cade4e832627b4f6 |
  | test    | SHA1           | a94a8fe5ccb19ba61c4c0873d391e987982fbbd3 |
  | test    | SHA256         | 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 |
  | coffee  | MD5            | 24eb05d18318ac2db8b2b959315d10f2 |
  | coffee  | SHA1           | 44213f9f4d59b557314fadcd233232eebcac8012 |
  | coffee  | SHA256         | 37290d74ac4d186e3a8e5785d259d2ec04fac91ae28092e7620ec8bc99e830aa |
{code}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1479) Rework integration tests for HashContent
[ https://issues.apache.org/jira/browse/MINIFICPP-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1479: Labels: MiNiFi-CPP-Hygiene (was: ) > Rework integration tests for HashContent > > > Key: MINIFICPP-1479 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1479 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 1.0.0 > >
> *Background:*
> HashContent is expected to add the hash value of the content of a flowfile to a flowfile attribute. Currently, none of the tests check for the presence of such an attribute: the unit tests only seem to test for logging, and the docker-based integration test only checks that the flowfile content is unmodified.
> *Proposal:*
> We should create proper integration tests for HashContent:
> {code:python|title=Example feature definition}
> Scenario: HashContent adds hash attribute to flowfiles
>   Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
>   And a file with the content "<content>" is present in "/tmp/input"
>   And a HashContent processor with the "Hash Attribute" property set to "hash"
>   And a HashContent processor with the "Hash Algorithm" property set to <hash_algorithm>
>   And a PutFile processor with the "Directory" property set to "/tmp/output"
>   And the "success" relationship of the GetFile processor is connected to the HashContent
>   And the "success" relationship of the HashContent processor is connected to the PutFile
>   When the MiNiFi instance starts up
>   Then a flowfile with the content "<content>" is placed in the monitored directory in less than 10 seconds
>   And the flowfile has an attribute called "hash" set to <hash_value>
>
>   Examples:
>   | content | hash_algorithm | hash_value |
>   | test    | MD5            | 098f6bcd4621d373cade4e832627b4f6 |
>   | test    | SHA1           | a94a8fe5ccb19ba61c4c0873d391e987982fbbd3 |
>   | test    | SHA256         | 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 |
>   | coffee  | MD5            | 24eb05d18318ac2db8b2b959315d10f2 |
>   | coffee  | SHA1           | 44213f9f4d59b557314fadcd233232eebcac8012 |
>   | coffee  | SHA256         | 37290d74ac4d186e3a8e5785d259d2ec04fac91ae28092e7620ec8bc99e830aa |
> {code}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
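The expected values in the Examples table above can be cross-checked with Python's standard {{hashlib}}; the sketch below only illustrates how a step implementation might map the scenario's algorithm names onto hashlib constructors (the {{content_hash}} helper is hypothetical, not part of the test framework):

```python
import hashlib

def content_hash(content: str, algorithm: str) -> str:
    # Map the Gherkin algorithm names onto hashlib constructors.
    constructors = {"MD5": hashlib.md5, "SHA1": hashlib.sha1, "SHA256": hashlib.sha256}
    return constructors[algorithm](content.encode("utf-8")).hexdigest()

# Cross-check one row of the Examples table:
assert content_hash("test", "MD5") == "098f6bcd4621d373cade4e832627b4f6"
```

A step definition could then compare the flowfile's "hash" attribute against `content_hash(content, hash_algorithm)` instead of hard-coding digests.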
[jira] [Updated] (MINIFICPP-1479) Rework integration tests for HashContent
[ https://issues.apache.org/jira/browse/MINIFICPP-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1479: Description: *Background:* HashContent is expected to add the hash value of the content of a flowfile to a flowfile attribute. Currently, none of the tests check for the presence of such an attribute: the unit tests only seem to test for logging, and the docker-based integration test only checks that the flowfile content is unmodified. *Proposal:* We should create proper integration tests for HashContent:
{code:python|title=Example feature definition}
Scenario: HashContent adds hash attribute to flowfiles
  Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
  And a file with the content <content> is present in "/tmp/input"
  And a HashContent processor with the "Hash Attribute" property set to "hash"
  And a HashContent processor with the "Hash Algorithm" property set to <hash_algorithm>
  And a PutFile processor with the "Directory" property set to "/tmp/output"
  And the "success" relationship of the GetFile processor is connected to the HashContent
  And the "success" relationship of the HashContent processor is connected to the PutFile
  When the MiNiFi instance starts up
  Then a flowfile with the content <content> is placed in the monitored directory in less than 10 seconds
  And the flowfile has an attribute called "hash" set to <hash_value>

  Examples:
  | content  | hash_algorithm | hash_value |
  | "test"   | MD5            | 098f6bcd4621d373cade4e832627b4f6 |
  | "test"   | SHA1           | a94a8fe5ccb19ba61c4c0873d391e987982fbbd3 |
  | "test"   | SHA256         | 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 |
  | "coffee" | MD5            | 24eb05d18318ac2db8b2b959315d10f2 |
  | "coffee" | SHA1           | 44213f9f4d59b557314fadcd233232eebcac8012 |
  | "coffee" | SHA256         | 37290d74ac4d186e3a8e5785d259d2ec04fac91ae28092e7620ec8bc99e830aa |
{code}
was: *Background:* HashContent is expected to add the hash value of the content of a flowfile to a flowfile attribute. Currently, none of the tests check for the presence of such an attribute: the unit tests only seem to test for logging, and the docker-based integration test only checks that the flowfile content is unmodified. *Proposal:* We should create proper integration tests for HashContent:
{code:python|title=Example feature definition}
Scenario: HashContent adds hash attribute to flowfiles
  Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
  And a file with the content "<content>" is present in "/tmp/input"
  And a HashContent processor with the "Hash Attribute" property set to "hash"
  And a HashContent processor with the "Hash Algorithm" property set to <hash_algorithm>
  And a PutFile processor with the "Directory" property set to "/tmp/output"
  And the "success" relationship of the GetFile processor is connected to the HashContent
  And the "success" relationship of the HashContent processor is connected to the PutFile
  When the MiNiFi instance starts up
  Then a flowfile with the content "<content>" is placed in the monitored directory in less than 10 seconds
  And the flowfile has an attribute called "hash" set to <hash_value>

  Examples:
  | content | hash_algorithm | hash_value |
  | test    | MD5            | 098f6bcd4621d373cade4e832627b4f6 |
  | test    | SHA1           | a94a8fe5ccb19ba61c4c0873d391e987982fbbd3 |
  | test    | SHA256         | 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 |
  | coffee  | MD5            | 24eb05d18318ac2db8b2b959315d10f2 |
  | coffee  | SHA1           | 44213f9f4d59b557314fadcd233232eebcac8012 |
  | coffee  | SHA256         | 37290d74ac4d186e3a8e5785d259d2ec04fac91ae28092e7620ec8bc99e830aa |
{code}
> Rework integration tests for HashContent > > > Key: MINIFICPP-1479 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1479 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 1.0.0 > > > *Background:* > HashContent is expected to add the hash value of the content of a flowfile to > a flowfile attribute. Currently, none of the tests check for the > presence of such an attribute: the unit tests only seem to test for logging, and the > docker-based integration test only checks that the flowfile content is > unmodified.
[jira] [Updated] (MINIFICPP-1479) Rework integration tests for HashContent
[ https://issues.apache.org/jira/browse/MINIFICPP-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1479: Issue Type: Task (was: Bug) > Rework integration tests for HashContent > > > Key: MINIFICPP-1479 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1479 > Project: Apache NiFi MiNiFi C++ > Issue Type: Task >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 1.0.0 > >
> *Background:*
> HashContent is expected to add the hash value of the content of a flowfile to a flowfile attribute. Currently, none of the tests check for the presence of such an attribute: the unit tests only seem to test for logging, and the docker-based integration test only checks that the flowfile content is unmodified.
> *Proposal:*
> We should create proper integration tests for HashContent:
> {code:python|title=Example feature definition}
> Scenario: HashContent adds hash attribute to flowfiles
>   Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
>   And a file with the content <content> is present in "/tmp/input"
>   And a HashContent processor with the "Hash Attribute" property set to "hash"
>   And a HashContent processor with the "Hash Algorithm" property set to <hash_algorithm>
>   And a PutFile processor with the "Directory" property set to "/tmp/output"
>   And the "success" relationship of the GetFile processor is connected to the HashContent
>   And the "success" relationship of the HashContent processor is connected to the PutFile
>   When the MiNiFi instance starts up
>   Then a flowfile with the content <content> is placed in the monitored directory in less than 10 seconds
>   And the flowfile has an attribute called "hash" set to <hash_value>
>
>   Examples:
>   | content  | hash_algorithm | hash_value |
>   | "test"   | MD5            | 098f6bcd4621d373cade4e832627b4f6 |
>   | "test"   | SHA1           | a94a8fe5ccb19ba61c4c0873d391e987982fbbd3 |
>   | "test"   | SHA256         | 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 |
>   | "coffee" | MD5            | 24eb05d18318ac2db8b2b959315d10f2 |
>   | "coffee" | SHA1           | 44213f9f4d59b557314fadcd233232eebcac8012 |
>   | "coffee" | SHA256         | 37290d74ac4d186e3a8e5785d259d2ec04fac91ae28092e7620ec8bc99e830aa |
> {code}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1480) Investigate PutSFTP "Disable Directory Listing" dynamic property
Adam Hunyadi created MINIFICPP-1480: --- Summary: Investigate PutSFTP "Disable Directory Listing" dynamic property Key: MINIFICPP-1480 URL: https://issues.apache.org/jira/browse/MINIFICPP-1480 Project: Apache NiFi MiNiFi C++ Issue Type: Task Affects Versions: 0.7.0 Reporter: Adam Hunyadi Fix For: 1.0.0
*Background:*
PutSFTP "Disable Directory Listing" is expected to work both as a property and as a dynamic property. It is even listed the same way in the NiFi documentation. However, a boolean dynamic property under an arbitrary name makes little sense for setting this value; this is probably not how we are expected to handle the property.
*Proposal:*
Check how NiFi uses this property and update the usage accordingly. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (MINIFICPP-1421) Investigate and fix C2JstackTest
[ https://issues.apache.org/jira/browse/MINIFICPP-1421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi resolved MINIFICPP-1421. - Assignee: Adam Hunyadi Resolution: Fixed
> Investigate and fix C2JstackTest
>
> Key: MINIFICPP-1421
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1421
> Project: Apache NiFi MiNiFi C++
> Issue Type: Bug
> Affects Versions: 1.0.0
> Reporter: Adam Hunyadi
> Assignee: Adam Hunyadi
> Priority: Minor
> Labels: MiNiFi-CPP-Hygiene
> Fix For: 0.9.0
>
> Time Spent: 1h
> Remaining Estimate: 0h
>
> *Background:*
> C2JstackTest currently looks for a log line that contains the word "SchedulingAgent". This line, however, is only present in the LogTestController's logger configuration output; log lines from the SchedulingAgents themselves will not be shown.
> {code:bash|title=Filtered test output}
> ➜ ./extensions/http-curl/tests/C2JstackTest ../src/libminifi/test/resources/TestHTTPGet.yml ../src/libminifi/test/resources/ |& grep "SchedulingAgent"
> [2020-12-08 10:48:51.955] [org::apache::nifi::minifi::core::logging::LoggerConfiguration] [debug] org::apache::nifi::minifi::SchedulingAgent logger got sinks from namespace root and level error from namespace root
> [2020-12-08 10:48:51.955] [org::apache::nifi::minifi::core::logging::LoggerConfiguration] [debug] org::apache::nifi::minifi::ThreadedSchedulingAgent logger got sinks from namespace root and level error from namespace root
> [2020-12-08 10:48:51.955] [org::apache::nifi::minifi::core::logging::LoggerConfiguration] [debug] org::apache::nifi::minifi::TimerDrivenSchedulingAgent logger got sinks from namespace root and level error from namespace root
> {code}
> Enabling all sinks still did not produce any relevant log line that would be worth matching against:
> {code:c++}
> #include "../../extensions/standard-processors/controllers/UnorderedMapKeyValueStoreService.h"
> #include "../../extensions/standard-processors/controllers/UnorderedMapPersistableKeyValueStoreService.h"
> #include "../../extensions/http-curl/processors/InvokeHTTP.h"
> #include "../../extensions/standard-processors/processors/LogAttribute.h"
> #include "../../libminifi/include/c2/ControllerSocketProtocol.h"
> #include "../../extensions/standard-processors/processors/AppendHostInfo.h"
> #include "../../extensions/standard-processors/processors/ExecuteProcess.h"
> #include "../../extensions/standard-processors/processors/ExtractText.h"
> #include "../../extensions/standard-processors/processors/GenerateFlowFile.h"
> #include "../../extensions/standard-processors/processors/GetFile.h"
> #include "../../extensions/standard-processors/processors/GetTCP.h"
> #include "../../extensions/standard-processors/processors/HashContent.h"
> #include "../../extensions/standard-processors/processors/ListenSyslog.h"
> #include "../../extensions/standard-processors/processors/PutFile.h"
> #include "../../extensions/standard-processors/processors/RetryFlowFile.h"
> #include "../../extensions/standard-processors/processors/RouteOnAttribute.h"
> #include "../../extensions/standard-processors/processors/TailFile.h"
> #include "../../extensions/standard-processors/processors/UpdateAttribute.h"
> #include "../../extensions/http-curl/protocols/AgentPrinter.h"
> #include "../../extensions/http-curl/protocols/RESTReceiver.h"
> {code}
[jira] [Commented] (MINIFICPP-1305) Create integration tests for MQTT processors using a dockerized MQTT broker
[ https://issues.apache.org/jira/browse/MINIFICPP-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17281752#comment-17281752 ] Adam Hunyadi commented on MINIFICPP-1305: - This is probably related to erroneous behaviours.
> Create integration tests for MQTT processors using a dockerized MQTT broker
>
> Key: MINIFICPP-1305
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1305
> Project: Apache NiFi MiNiFi C++
> Issue Type: Task
> Affects Versions: 0.7.0
> Reporter: Adam Hunyadi
> Priority: Major
> Labels: MiNiFi-CPP-Hygiene
> Fix For: 1.0.0
>
> *Background:*
> The MQTT processors are untested and known to be unstable. We suspect that setting up secure connections is currently broken, and it is quite likely that there are other problems as well.
> As we do not know much about what potential use-cases there are for MQTT, my recommendation is that whoever starts with the implementation first spends a considerable amount of time on understanding this protocol and its use-cases, finding a containerized solution for a broker implementation that adheres to them, and plans the potential tests before even touching the code. Starting with compatibility tests and checks (platform for the docker frame/CI job) before writing the tests is also recommended.
> *Acceptance criteria:*
> The person picking up this task should investigate and propose tests and verify them with [~aboda]. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1479) Rework integration tests for HashContent
[ https://issues.apache.org/jira/browse/MINIFICPP-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1479:
Description:
*Background:*
HashContent is expected to add the hash value of the content of a flowfile to a flowfile attribute. Currently, none of the unit tests test for the presence of such an attribute. The unit tests seem to test for logging and the docker based integration tests only test for the flowfile content being unmodified. The actual expected behaviour of appending attributes currently seems untested.
*Proposal:*
We should create proper integration tests for HashContent:
{code:python|title=Example feature definition}
Scenario: HashContent adds hash attribute to flowfiles
  Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
  And a file with the content <content> is present in "/tmp/input"
  And a HashContent processor with the "Hash Attribute" property set to "hash"
  And a HashContent processor with the "Hash Algorithm" property set to <hash_algorithm>
  And a PutFile processor with the "Directory" property set to "/tmp/output"
  And the "success" relationship of the GetFile processor is connected to the HashContent
  And the "success" relationship of the HashContent processor is connected to the PutFile
  When the MiNiFi instance starts up
  Then a flowfile with the content <content> is placed in the monitored directory in less than 10 seconds
  And the flowfile has an attribute called "hash" set to <hash_value>

  Examples:
  | content  | hash_algorithm | hash_value                                                       |
  | "test"   | MD5            | 098f6bcd4621d373cade4e832627b4f6                                 |
  | "test"   | SHA1           | a94a8fe5ccb19ba61c4c0873d391e987982fbbd3                         |
  | "test"   | SHA256         | 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 |
  | "coffee" | MD5            | 24eb05d18318ac2db8b2b959315d10f2                                 |
  | "coffee" | SHA1           | 44213f9f4d59b557314fadcd233232eebcac8012                         |
  | "coffee" | SHA256         | 37290d74ac4d186e3a8e5785d259d2ec04fac91ae28092e7620ec8bc99e830aa |
{code}
was:
*Background:*
HashContent is expected to add the hash value of the content of a flowfile to a flowfile attribute. Currently, none of the unit tests test for the presence of such an attribute. While the unit tests seem to test for logging, the docker based integration tests only test for the flowfile content being unmodified.
*Proposal:*
We should create proper integration tests for HashContent:
{code:python|title=Example feature definition}
Scenario: HashContent adds hash attribute to flowfiles
  Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
  And a file with the content <content> is present in "/tmp/input"
  And a HashContent processor with the "Hash Attribute" property set to "hash"
  And a HashContent processor with the "Hash Algorithm" property set to <hash_algorithm>
  And a PutFile processor with the "Directory" property set to "/tmp/output"
  And the "success" relationship of the GetFile processor is connected to the HashContent
  And the "success" relationship of the HashContent processor is connected to the PutFile
  When the MiNiFi instance starts up
  Then a flowfile with the content <content> is placed in the monitored directory in less than 10 seconds
  And the flowfile has an attribute called "hash" set to <hash_value>

  Examples:
  | content  | hash_algorithm | hash_value                                                       |
  | "test"   | MD5            | 098f6bcd4621d373cade4e832627b4f6                                 |
  | "test"   | SHA1           | a94a8fe5ccb19ba61c4c0873d391e987982fbbd3                         |
  | "test"   | SHA256         | 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 |
  | "coffee" | MD5            | 24eb05d18318ac2db8b2b959315d10f2                                 |
  | "coffee" | SHA1           | 44213f9f4d59b557314fadcd233232eebcac8012                         |
  | "coffee" | SHA256         | 37290d74ac4d186e3a8e5785d259d2ec04fac91ae28092e7620ec8bc99e830aa |
{code}
> Rework integration tests for HashContent
>
> Key: MINIFICPP-1479
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1479
> Project: Apache NiFi MiNiFi C++
> Issue Type: Task
> Affects Versions: 0.7.0
> Reporter: Adam Hunyadi
> Priority: Minor
> Labels: MiNiFi-CPP-Hygiene
> Fix For: 1.0.0
>
> *Background:*
> HashContent is expected to add the hash value of the content of a flowfile to a flowfile attribute. Currently, none of the unit tests test for the presence of such an attribute. The unit tests seem to test for logging
[jira] [Updated] (MINIFICPP-1479) Rework integration tests for HashContent
[ https://issues.apache.org/jira/browse/MINIFICPP-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1479:
Description:
*Background:*
HashContent is expected to add the hash value of the content of a flowfile to a flowfile attribute. Currently, none of the unit tests test for the presence of such an attribute. The unit tests seem to test for logging and the docker based integration tests only test for the flowfile content being unmodified. The actual expected behaviour of appending attributes currently seems untested.
*Proposal:*
We should create proper integration tests for HashContent:
{code:python|title=Example feature definition}
Scenario: HashContent adds hash attribute to flowfiles
  Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
  And a file with the content <content> is present in "/tmp/input"
  And a HashContent processor with the "Hash Attribute" property set to "hash"
  And the "Hash Algorithm" of the HashContent processor is set to "<hash_algorithm>"
  And a PutFile processor with the "Directory" property set to "/tmp/output"
  And the "success" relationship of the GetFile processor is connected to the HashContent
  And the "success" relationship of the HashContent processor is connected to the PutFile
  When the MiNiFi instance starts up
  Then a flowfile with the content <content> is placed in the monitored directory in less than 10 seconds
  And the flowfile has an attribute called "hash" set to <hash_value>

  Examples:
  | content  | hash_algorithm | hash_value                                                       |
  | "test"   | MD5            | 098f6bcd4621d373cade4e832627b4f6                                 |
  | "test"   | SHA1           | a94a8fe5ccb19ba61c4c0873d391e987982fbbd3                         |
  | "test"   | SHA256         | 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 |
  | "coffee" | MD5            | 24eb05d18318ac2db8b2b959315d10f2                                 |
  | "coffee" | SHA1           | 44213f9f4d59b557314fadcd233232eebcac8012                         |
  | "coffee" | SHA256         | 37290d74ac4d186e3a8e5785d259d2ec04fac91ae28092e7620ec8bc99e830aa |
{code}
was:
*Background:*
HashContent is expected to add the hash value of the content of a flowfile to a flowfile attribute. Currently, none of the unit tests test for the presence of such an attribute. The unit tests seem to test for logging and the docker based integration tests only test for the flowfile content being unmodified. The actual expected behaviour of appending attributes currently seems untested.
*Proposal:*
We should create proper integration tests for HashContent:
{code:python|title=Example feature definition}
Scenario: HashContent adds hash attribute to flowfiles
  Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
  And a file with the content <content> is present in "/tmp/input"
  And a HashContent processor with the "Hash Attribute" property set to "hash"
  And a HashContent processor with the "Hash Algorithm" property set to <hash_algorithm>
  And a PutFile processor with the "Directory" property set to "/tmp/output"
  And the "success" relationship of the GetFile processor is connected to the HashContent
  And the "success" relationship of the HashContent processor is connected to the PutFile
  When the MiNiFi instance starts up
  Then a flowfile with the content <content> is placed in the monitored directory in less than 10 seconds
  And the flowfile has an attribute called "hash" set to <hash_value>

  Examples:
  | content  | hash_algorithm | hash_value                                                       |
  | "test"   | MD5            | 098f6bcd4621d373cade4e832627b4f6                                 |
  | "test"   | SHA1           | a94a8fe5ccb19ba61c4c0873d391e987982fbbd3                         |
  | "test"   | SHA256         | 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 |
  | "coffee" | MD5            | 24eb05d18318ac2db8b2b959315d10f2                                 |
  | "coffee" | SHA1           | 44213f9f4d59b557314fadcd233232eebcac8012                         |
  | "coffee" | SHA256         | 37290d74ac4d186e3a8e5785d259d2ec04fac91ae28092e7620ec8bc99e830aa |
{code}
> Rework integration tests for HashContent
>
> Key: MINIFICPP-1479
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1479
> Project: Apache NiFi MiNiFi C++
> Issue Type: Task
> Affects Versions: 0.7.0
> Reporter: Adam Hunyadi
> Priority: Minor
> Labels: MiNiFi-CPP-Hygiene
> Fix For: 1.0.0
>
> *Background:*
> HashContent is expected to add the hash value of the content of a flowfile to a flowfile attribute. Currently, none of the unit tests
[jira] [Assigned] (MINIFICPP-1305) Create integration tests for MQTT processors using a dockerized MQTT broker
[ https://issues.apache.org/jira/browse/MINIFICPP-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi reassigned MINIFICPP-1305: --- Assignee: Ádám Markovics
> Create integration tests for MQTT processors using a dockerized MQTT broker
>
> Key: MINIFICPP-1305
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1305
> Project: Apache NiFi MiNiFi C++
> Issue Type: Task
> Affects Versions: 0.7.0
> Reporter: Adam Hunyadi
> Assignee: Ádám Markovics
> Priority: Major
> Labels: MiNiFi-CPP-Hygiene
> Fix For: 1.0.0
>
> *Background:*
> The MQTT processors are untested and known to be unstable. We suspect that setting up secure connections is currently broken, and it is quite likely that there are other problems as well.
> As we do not know much about what potential use-cases there are for MQTT, my recommendation is that whoever starts with the implementation first spends a considerable amount of time on understanding this protocol and its use-cases, finding a containerized solution for a broker implementation that adheres to them, and plans the potential tests before even touching the code. Starting with compatibility tests and checks (platform for the docker frame/CI job) before writing the tests is also recommended.
> *Acceptance criteria:*
> The person picking up this task should investigate and propose tests and verify them with [~aboda]. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (MINIFICPP-1215) Document and test SQL extension
[ https://issues.apache.org/jira/browse/MINIFICPP-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi reassigned MINIFICPP-1215: --- Assignee: Adam Debreceni > Document and test SQL extension > --- > > Key: MINIFICPP-1215 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1215 > Project: Apache NiFi MiNiFi C++ > Issue Type: New Feature >Reporter: Marton Szasz >Assignee: Adam Debreceni >Priority: Major > Labels: MiNiFi-CPP-Hygiene > Fix For: 0.9.0 > > > The SQL extension lacks documentation and test coverage. The purpose of this > ticket is to fix that. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1488) Improve tests for features lacking coverage - Spring Internship Project
Adam Hunyadi created MINIFICPP-1488: --- Summary: Improve tests for features lacking coverage - Spring Internship Project Key: MINIFICPP-1488 URL: https://issues.apache.org/jira/browse/MINIFICPP-1488 Project: Apache NiFi MiNiFi C++ Issue Type: Epic Affects Versions: 0.7.0 Reporter: Adam Hunyadi Fix For: 1.0.0 *Background:* There are quite a few features in MiNiFi that are not properly tested. *Proposal:* The person going through the Jiras should: # Understand the features to be tested (one can refer to NiFi docs for some of the features already implemented there) # Identify testing requirements and add them to the Jiras as acceptance criteria and have team-members review them. # Implement the tests. # Correct any bugs found while testing. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1488) Improve tests for features lacking coverage - Spring Internship Project
[ https://issues.apache.org/jira/browse/MINIFICPP-1488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1488: Epic Name: 2021 Spring Internship Test Coverage Expansion (was: 2020 Spring Internship Test Coverage Expansion) > Improve tests for features lacking coverage - Spring Internship Project > --- > > Key: MINIFICPP-1488 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1488 > Project: Apache NiFi MiNiFi C++ > Issue Type: Epic >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 1.0.0 > > > *Background:* > There are quite a few features in MiNiFi that are not properly tested. > *Proposal:* > The person going through the Jiras should: > # Understand the features to be tested (one can refer to NiFi docs for some > of the features already implemented there) > # Identify testing requirements and add them to the Jiras as acceptance > criteria and have team-members review them. > # Implement the tests. > # Correct any bugs found while testing. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1488) Improve tests for features lacking coverage - Spring Internship Project
[ https://issues.apache.org/jira/browse/MINIFICPP-1488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1488: Epic Name: 2020 Spring Internship Test Coverage Expansion (was: Spring Internship Test Coverage Expansion) > Improve tests for features lacking coverage - Spring Internship Project > --- > > Key: MINIFICPP-1488 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1488 > Project: Apache NiFi MiNiFi C++ > Issue Type: Epic >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 1.0.0 > > > *Background:* > There are quite a few features in MiNiFi that are not properly tested. > *Proposal:* > The person going through the Jiras should: > # Understand the features to be tested (one can refer to NiFi docs for some > of the features already implemented there) > # Identify testing requirements and add them to the Jiras as acceptance > criteria and have team-members review them. > # Implement the tests. > # Correct any bugs found while testing. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (MINIFICPP-1389) Upgrade librdkafka version
[ https://issues.apache.org/jira/browse/MINIFICPP-1389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi resolved MINIFICPP-1389. - Resolution: Fixed > Upgrade librdkafka version > -- > > Key: MINIFICPP-1389 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1389 > Project: Apache NiFi MiNiFi C++ > Issue Type: Task >Affects Versions: 0.9.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Fix For: 1.0.0 > > Time Spent: 1h 40m > Remaining Estimate: 0h > > *Background:* > The current version of librdkafka we use has cherry-picked patches applied to > it and also does not support transactions required for the implementation of > ConsumeKafka processor. > *Proposal:* > We should update the librdkafka version to 1.5.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (MINIFICPP-1300) FlowController circularly referenced through ScheduleAgents
[ https://issues.apache.org/jira/browse/MINIFICPP-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi resolved MINIFICPP-1300. - Resolution: Fixed > FlowController circularly referenced through ScheduleAgents > --- > > Key: MINIFICPP-1300 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1300 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Adam Debreceni >Assignee: Adam Hunyadi >Priority: Major > Time Spent: 1h 50m > Remaining Estimate: 0h > > FlowController holds shared_ptr-s to the different schedulers, which in turn > hold shared_ptr-s to the FlowController instance, keeping it alive forever. > > ScheduleAgents should hold either a gsl::not_null or at most > a weak_ptr. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (MINIFICPP-1287) Make test configuration files build artifacts
[ https://issues.apache.org/jira/browse/MINIFICPP-1287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi resolved MINIFICPP-1287. - Resolution: Won't Fix
> Make test configuration files build artifacts
>
> Key: MINIFICPP-1287
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1287
> Project: Apache NiFi MiNiFi C++
> Issue Type: Improvement
> Affects Versions: 1.0.0
> Reporter: Adam Hunyadi
> Priority: Minor
> Labels: MiNiFi-CPP-Hygiene
> Fix For: 0.7.0
>
> *Background:*
> Currently, built tests refer back to configuration files placed in the source directory of the project. These configurations are passed by command line arguments.
> This causes multiple problems: combining integration tests with functionality provided by the catch2 testing framework is difficult, and finding failing tests by their name assumes understanding of where the test is defined in the cmake file.
> *Proposal:*
> Ideally, we should copy configuration files required for testing to the build directory as build artifacts, so that they could be referenced relative to the test files. This would make the configuration <-> test definition associations explicit. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1501) Enforce that Processor::onTrigger calls are always invoked with a valid session
Adam Hunyadi created MINIFICPP-1501: --- Summary: Enforce that Processor::onTrigger calls are always invoked with a valid session Key: MINIFICPP-1501 URL: https://issues.apache.org/jira/browse/MINIFICPP-1501 Project: Apache NiFi MiNiFi C++ Issue Type: Task Affects Versions: 0.7.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 1.0.0
*Background:*
Processor::onTrigger(..., core::ProcessSession*) calls take sessions by pointer even though they are never expected to be called with a nullptr argument. We could check whether this pointer is null in the onTrigger calls and only use it dereferenced, but this would produce a lot of redundant code. Changing the onTrigger signature would be a serious API break, so we should not change it.
Currently the session is not checked for validity before dereferencing:
{code:bash|title=Session dereferencing}
➜ git --no-pager grep 'session->' | grep -i 'processors/' | cut -d ":" -f1 | less | sort | uniq | xargs -n1 sh -c 'echo $1 && egrep "if (.*session.*)" $1' sh
extensions/aws/processors/DeleteS3Object.cpp
extensions/aws/processors/FetchS3Object.cpp
extensions/aws/processors/PutS3Object.cpp
extensions/civetweb/processors/ListenHTTP.cpp
extensions/http-curl/processors/InvokeHTTP.cpp
extensions/mqtt/processors/ConsumeMQTT.cpp
extensions/mqtt/processors/ConvertHeartBeat.cpp
extensions/mqtt/processors/ConvertJSONAck.cpp
extensions/mqtt/processors/PublishMQTT.cpp
extensions/openwsman/processors/SourceInitiatedSubscriptionListener.cpp
extensions/sftp/processors/FetchSFTP.cpp
extensions/sftp/processors/ListSFTP.cpp
if (createAndTransferFlowFileFromChild(session, hostname, port, username, file)) {
if (!createAndTransferFlowFileFromChild(session, hostname, port, username, updated_entity)) {
extensions/sftp/processors/PutSFTP.cpp
if (!this->processOne(context, session)) {
extensions/standard-processors/processors/AppendHostInfo.cpp
extensions/standard-processors/processors/ExecuteProcess.cpp
extensions/standard-processors/processors/ExtractText.cpp
extensions/standard-processors/processors/GenerateFlowFile.cpp
extensions/standard-processors/processors/GetFile.cpp
extensions/standard-processors/processors/GetTCP.cpp
extensions/standard-processors/processors/HashContent.cpp
extensions/standard-processors/processors/ListenSyslog.cpp
extensions/standard-processors/processors/LogAttribute.cpp
extensions/standard-processors/processors/PutFile.cpp
extensions/standard-processors/processors/RetryFlowFile.cpp
extensions/standard-processors/processors/RouteOnAttribute.cpp
extensions/standard-processors/processors/TailFile.cpp
if (!session->existsFlowFileInRelationship(Success)) {
extensions/standard-processors/processors/UpdateAttribute.cpp
extensions/standard-processors/tests/unit/GetTCPTests.cpp
extensions/standard-processors/tests/unit/ProcessorTests.cpp
{code}
*Proposal:*
If we want this added safety of checking pointers before dereferencing them, we can enforce adding a gsl_Expects(session) to all of our implementations that overload the mentioned signature, by adding a static code-check next to the linter check that enforces its presence at the start of every function using this overload.
We can also provide another overload that takes the session by reference - by default, the original implementation would do the checks and call into this one, so overloads of the reference version would not require additional checks. This would mean checking for nullptr in Processor::onTrigger(shared_ptr<>, shared_ptr<>) and Processor::onTrigger(*, *), forwarding the call to Processor::onTrigger(&, &), and moving overloads to the (&, &) version whenever possible. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1501) Enforce that Processor::onTrigger calls are always invoked with a valid session
[ https://issues.apache.org/jira/browse/MINIFICPP-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1501: Description: *Background:* The Processor::onTrigger(..., core::ProcessSession*) overloads take the session by pointer even though they are never expected to be called with a nullptr argument. We could check the pointer for null in each onTrigger implementation and only use it dereferenced, but this would produce a lot of redundant code. Changing the onTrigger signature would be a serious API break, so we should not change it. Currently the session is not checked for validity before dereferencing:
{code:bash|title=Session dereferencing}
➜ git --no-pager grep 'session->' | grep -i 'processors/' | cut -d ":" -f1 | less | sort | uniq | xargs -n1 sh -c 'echo $1 && egrep "if (.*session.*)" $1' sh
extensions/aws/processors/DeleteS3Object.cpp
extensions/aws/processors/FetchS3Object.cpp
extensions/aws/processors/PutS3Object.cpp
extensions/civetweb/processors/ListenHTTP.cpp
extensions/http-curl/processors/InvokeHTTP.cpp
extensions/mqtt/processors/ConsumeMQTT.cpp
extensions/mqtt/processors/ConvertHeartBeat.cpp
extensions/mqtt/processors/ConvertJSONAck.cpp
extensions/mqtt/processors/PublishMQTT.cpp
extensions/openwsman/processors/SourceInitiatedSubscriptionListener.cpp
extensions/sftp/processors/FetchSFTP.cpp
extensions/sftp/processors/ListSFTP.cpp
if (createAndTransferFlowFileFromChild(session, hostname, port, username, file)) {
if (!createAndTransferFlowFileFromChild(session, hostname, port, username, updated_entity)) {
extensions/sftp/processors/PutSFTP.cpp
if (!this->processOne(context, session)) {
extensions/standard-processors/processors/AppendHostInfo.cpp
extensions/standard-processors/processors/ExecuteProcess.cpp
extensions/standard-processors/processors/ExtractText.cpp
extensions/standard-processors/processors/GenerateFlowFile.cpp
extensions/standard-processors/processors/GetFile.cpp
extensions/standard-processors/processors/GetTCP.cpp
extensions/standard-processors/processors/HashContent.cpp
extensions/standard-processors/processors/ListenSyslog.cpp
extensions/standard-processors/processors/LogAttribute.cpp
extensions/standard-processors/processors/PutFile.cpp
extensions/standard-processors/processors/RetryFlowFile.cpp
extensions/standard-processors/processors/RouteOnAttribute.cpp
extensions/standard-processors/processors/TailFile.cpp
if (!session->existsFlowFileInRelationship(Success)) {
extensions/standard-processors/processors/UpdateAttribute.cpp
extensions/standard-processors/tests/unit/GetTCPTests.cpp
extensions/standard-processors/tests/unit/ProcessorTests.cpp
{code}
*Proposal:* If we want the added safety of checking pointers before dereferencing them, we can enforce adding a gsl_Expects(session) at the start of every implementation that overrides the mentioned signature, via a static code check next to the linter check. We can also provide another overload that takes the session by reference; by default the original implementation would do the checks and call into this one, so overrides of the reference overload would not require additional checks. This would mean checking for nullptr in Processor::onTrigger(shrd_ptr<>, shrd_ptr<>) and Processor::onTrigger(*, *), forwarding the call to Processor::onTrigger(&, &), and moving overrides to the (&, &) version whenever possible.
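The forwarding pattern proposed above can be sketched as follows. All class and member names here are illustrative stand-ins, not the actual minifi API: the pointer overload performs the null check once (gsl_Expects in the real codebase, assert here) and forwards to a reference overload, so implementations that override the reference version need no extra checks.

```cpp
#include <cassert>

// Hypothetical minimal stand-ins for the real context/session types.
struct ProcessContext {};
struct ProcessSession {};

class Processor {
 public:
  virtual ~Processor() = default;

  // Legacy entry point: signature unchanged, so no API break. Validates the
  // pointers once and delegates to the reference overload.
  virtual void onTrigger(ProcessContext* context, ProcessSession* session) {
    assert(context != nullptr && session != nullptr);  // stand-in for gsl_Expects
    onTrigger(*context, *session);
  }

  // New overload: implementations migrate here whenever possible and can
  // rely on the arguments being valid.
  virtual void onTrigger(ProcessContext& context, ProcessSession& session) = 0;
};

class ExampleProcessor : public Processor {
 public:
  using Processor::onTrigger;  // keep the pointer overload visible despite the override
  void onTrigger(ProcessContext&, ProcessSession&) override { triggered = true; }
  bool triggered = false;
};
```

Calling the legacy pointer overload on an `ExampleProcessor` routes through the single check and into the reference override, which is exactly the "check once, forward" shape the proposal describes.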
[jira] [Created] (MINIFICPP-1502) Update processors that take their name/id constructor arguments as values
Adam Hunyadi created MINIFICPP-1502: --- Summary: Update processors that take their name/id constructor arguments as values Key: MINIFICPP-1502 URL: https://issues.apache.org/jira/browse/MINIFICPP-1502 Project: Apache NiFi MiNiFi C++ Issue Type: Task Affects Versions: 0.7.0 Reporter: Adam Hunyadi Fix For: 1.0.0 *Background:* Currently, probably due to copy-pasting, many of the processors take their name and id constructor arguments by value as opposed to const reference.
{code:bash|title=Signature grep}
➜ git --no-pager grep '(std::string name, utils::Identifier uuid = utils::Identifier())' | wc -l
46
{code}
*Proposal:* It should be a trivial change to update these signatures. One should search and replace the mentioned signature and check that the change has no impact (e.g. that the values are not moved from). -- This message was sent by Atlassian Jira (v8.3.4#803005)
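The intended change can be sketched as below; `Identifier` stands in for utils::Identifier and `ExampleProcessor` is hypothetical, not an actual minifi class. Taking the arguments by const reference avoids a copy at every call site; as the proposal notes, this is only safe when the constructor merely reads the values (i.e. they are not moved from).

```cpp
#include <string>

// Illustrative stand-in for utils::Identifier.
struct Identifier {};

class ExampleProcessor {
 public:
  // Before: ExampleProcessor(std::string name, Identifier uuid = Identifier());
  // After: pass by const reference. Binding the defaulted temporary to a
  // const reference is valid; it lives for the duration of the call.
  explicit ExampleProcessor(const std::string& name, const Identifier& uuid = Identifier())
      : name_(name), uuid_(uuid) {}

  const std::string& getName() const { return name_; }

 private:
  std::string name_;  // still stored by value; only the parameter passing changes
  Identifier uuid_;
};
```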
[jira] [Assigned] (MINIFICPP-1352) Enable -Wall and -Wextra behind a CMake flag and resolve related warnings
[ https://issues.apache.org/jira/browse/MINIFICPP-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi reassigned MINIFICPP-1352: --- Assignee: Gabor Gyimesi (was: Adam Hunyadi) > Enable -Wall and -Wextra behind a CMake flag and resolve related warnings > - > > Key: MINIFICPP-1352 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1352 > Project: Apache NiFi MiNiFi C++ > Issue Type: Task >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Gabor Gyimesi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 1.0.0 > > Time Spent: 1.5h > Remaining Estimate: 0h > > *Background:* > The compiler flags -Wall and -Wextra can potentially show important issues > with our current codebase. > > *Proposal:* > As we do not want to depend on what exactly the compiler implementations of > -Wall and -Wextra include when building the project with a new compiler, we > should: > # Fix the warnings reported by the current major compilers. > # Allow the ones currently listed, one by one. > # Add a CMake flag to turn -Wall and -Wextra on. > # Have at least one CI job using clang that builds with -Wall and > -Wextra enabled. -- This message was sent by Atlassian Jira (v8.3.4#803005)
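As a hypothetical illustration (not taken from the minifi codebase) of the kind of issue these flags surface: comparing a signed `int` loop index against the unsigned value returned by `size()` triggers -Wsign-compare, which is enabled by -Wextra (and by -Wall in C++ mode). The version below compiles cleanly under `g++ -Wall -Wextra`.

```cpp
#include <cstddef>
#include <vector>

// Using std::size_t for the index avoids the signed/unsigned comparison
// warning that `for (int i = 0; i < values.size(); ++i)` would produce.
int sum(const std::vector<int>& values) {
  int total = 0;
  for (std::size_t i = 0; i < values.size(); ++i) {
    total += values[i];
  }
  return total;
}
```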
[jira] [Resolved] (MINIFICPP-1373) Implement and test a simplified ConsumeKafka processor without security protocols
[ https://issues.apache.org/jira/browse/MINIFICPP-1373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi resolved MINIFICPP-1373. - Resolution: Resolved > Implement and test a simplified ConsumeKafka processor without security > protocols > - > > Key: MINIFICPP-1373 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1373 > Project: Apache NiFi MiNiFi C++ > Issue Type: Sub-task >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Major > Fix For: 1.0.0 > > Attachments: ConsumeKafka_test_matrix.numbers, > ConsumeKafka_test_matrix.pdf > > Time Spent: 29h 40m > Remaining Estimate: 0h > > *Acceptance Criteria:* > *{color:#de350b}See attached test matrix plan.{color}* > Additional tests (that require multiple Kafka consumers): > {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with > {color:#0747a6}different group ids{color} subscribed to the same topic > {color:#505f79}*WHEN*{color} a message is published to the topic > {color:#505f79}*THEN*{color} both of the ConsumeKafka processors should > produce identical flowfiles > {quote} > {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with > {color:#0747a6}the same group id{color} subscribed to the same topic > {color:#505f79}*WHEN*{color} a message is published to the topic > {color:#505f79}*THEN*{color} only one of the ConsumeKafka processors should > produce a flowfile > {quote} > {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with > {color:#0747a6}the same group id{color} subscribed to the same topic with > exactly two partitions with {color:#0747a6}Offset Reset{color} set to > {color:#0747a6}earliest{color}. 
> {color:#505f79}*WHEN*{color} messages were already present on both > partitions and the second one crashes > {color:#505f79}*THEN*{color} the first one should process duplicates of the > messages that originally came to the second (at_least_once delivery) > {quote} > {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with > {color:#0747a6}the same group id{color} subscribed to the same topic with > exactly two partitions with {color:#0747a6}Offset Reset{color} set to > {color:#0747a6}latest{color}. > {color:#505f79}*WHEN*{color} messages were already present on both > partitions and the second one crashes > {color:#505f79}*THEN*{color} the first one should {color:#0747a6}not{color} > process duplicates of the messages that originally came to the second > (at_most_once delivery) > {quote} > {quote}{color:#505f79}*GIVEN*{color} two ConsumeKafkas with > {color:#0747a6}the same group id{color} subscribed to the same topic with > exactly two partitions with {color:#0747a6}Offset Reset{color} set to > {color:#0747a6}none{color}. > {color:#505f79}*WHEN*{color} messages were already present on both > partitions and the second one crashes > {color:#505f79}*THEN*{color} the first one should throw an exception > {quote} > *Background:* > See parent task. > *Proposal:* > This should be the first part of the implementation, the second being adding > and testing multiple security protocols. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1513) Make integration tests log using a common logger
Adam Hunyadi created MINIFICPP-1513: --- Summary: Make integration tests log using a common logger Key: MINIFICPP-1513 URL: https://issues.apache.org/jira/browse/MINIFICPP-1513 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: 1.0.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 1.0.0 *Background:* Currently the Python-based integration tests instantiate separate loggers. This makes it difficult to capture logging when multiple tests are run together (e.g. a single Scenario Outline with multiple examples). *Proposal:* We should extract the logging functionality into its own module and instantiate loggers exactly once. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (MINIFICPP-1445) Clean up docker integration test framework
[ https://issues.apache.org/jira/browse/MINIFICPP-1445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi resolved MINIFICPP-1445. - Resolution: Done > Clean up docker integration test framework > -- > > Key: MINIFICPP-1445 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1445 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.9.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 1.0.0 > > Time Spent: 5h 40m > Remaining Estimate: 0h > > *Background:* > The docker integration tests currently live in two long __init__.py files. > *Proposal:* > This can easily be refactored by separating the components into python > modules based on their functionality. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1514) Improve integration test coverage for MiNiFi
Adam Hunyadi created MINIFICPP-1514: --- Summary: Improve integration test coverage for MiNiFi Key: MINIFICPP-1514 URL: https://issues.apache.org/jira/browse/MINIFICPP-1514 Project: Apache NiFi MiNiFi C++ Issue Type: Epic Affects Versions: 0.9.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 1.0.0 *Background:* There are quite a lot of improvements to be made to the integration tests. *Proposal:* This epic is to be used to collect such tasks and monitor efforts done to extend integration test functionality. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1515) Add integration tests verifying that MiNiFi handles large flowfiles as expected
Adam Hunyadi created MINIFICPP-1515: --- Summary: Add integration tests verifying that MiNiFi handles large flowfiles as expected Key: MINIFICPP-1515 URL: https://issues.apache.org/jira/browse/MINIFICPP-1515 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: 1.0.0, 0.9.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 1.0.0 *Background:* Issues with big flowfiles were reported here: https://stackoverflow.com/questions/66330866/minifi-getfile-processor-fails-to-get-large-files/66334615?noredirect=1#comment117275399_66334615 It seems this is related to two issues: one is that a [narrowing exception happens|https://github.com/apache/nifi-minifi-cpp/blob/minifi-cpp-0.9.0-RC2/libminifi/src/core/ContentSession.cpp#L75] when trying to determine the length of the file to be written into the content repository, and the other is that we use _stat on Windows even though we should be using _stat64 to query files larger than 2GB. *Proposal:* MiNiFi should be tested with large files. *Acceptance criteria:*
{code:python|title=Feature definition proposal}
Scenario Outline: MiNiFi is capable of manipulating flowfiles of different sizes
  Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
  And a file with <file size> of content is present in "/tmp/input"
  And a PutFile processor with the "Directory" property set to "/tmp/output"
  And the "success" relationship of the GetFile processor is connected to the PutFile
  When the MiNiFi instance starts up
  Then a flowfile with matching content is placed in the monitored directory in less than 20 seconds

  Examples: File size
    | file size |
    | 10 B      |
    | 1.5 KiB   |
    | 10 MiB    |
    | 1.0 GiB   |
    | 2.1 GiB   |
{code}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1514) Improve integration test coverage for MiNiFi
[ https://issues.apache.org/jira/browse/MINIFICPP-1514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1514: Labels: (was: MiNiFi-CPP-Hygiene) > Improve integration test coverage for MiNiFi > > > Key: MINIFICPP-1514 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1514 > Project: Apache NiFi MiNiFi C++ > Issue Type: Epic >Affects Versions: 0.9.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Fix For: 1.0.0 > > > *Background:* > There are quite a lot of improvements to be made to the integration tests. > *Proposal:* > This epic is to be used to collect such tasks and monitor efforts done to > extend integration test functionality. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1515) Add integration tests verifying that MiNiFi handles large flowfiles as expected
[ https://issues.apache.org/jira/browse/MINIFICPP-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1515: Labels: MiNiFi-CPP-Hygiene (was: ) > Add integration tests verifying that MiNiFi handles large flowfiles as > expected > --- > > Key: MINIFICPP-1515 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1515 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 1.0.0, 0.9.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 1.0.0 > > > *Background:* > Issues with big flowfiles were reported here: > https://stackoverflow.com/questions/66330866/minifi-getfile-processor-fails-to-get-large-files/66334615?noredirect=1#comment117275399_66334615 > It seems this is related to two issues: one is that a [narrowing exception > happens|https://github.com/apache/nifi-minifi-cpp/blob/minifi-cpp-0.9.0-RC2/libminifi/src/core/ContentSession.cpp#L75] > when trying to determine the length of the file to be written into the > content repository, and the other is that we use _stat on Windows even though > we should be using _stat64 to query files larger than 2GB. > *Proposal:* > MiNiFi should be tested with large files. 
> *Acceptance criteria:* > {code:python|title=Feature definition proposal} > Scenario Outline: MiNiFi is capable of manipulating flowfiles of different > sizes > Given a GetFile processor with the "Input Directory" property set to > "/tmp/input" > And a file with <file size> of content is present in "/tmp/input" > And a PutFile processor with the "Directory" property set to "/tmp/output" > And the "success" relationship of the GetFile processor is connected to > the PutFile > When the MiNiFi instance starts up > Then a flowfile with matching content is placed in the monitored > directory in less than 20 seconds > Examples: File size > | file size | > | 10 B | > | 1.5 KiB | > | 10 MiB | > | 1.0 GiB | > | 2.1 GiB | > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1516) Fix transient issues related to docker container cleanup
Adam Hunyadi created MINIFICPP-1516: --- Summary: Fix transient issues related to docker container cleanup Key: MINIFICPP-1516 URL: https://issues.apache.org/jira/browse/MINIFICPP-1516 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: 0.9.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 1.0.0 *Background:* There are still transient issues caused by the python docker API occasionally failing to clean up containers properly. *Proposal:* We should add retry attempts when starting docker containers and potentially improve the cleanup process. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1515) Add integration tests verifying that MiNiFi handles large flowfiles as expected
[ https://issues.apache.org/jira/browse/MINIFICPP-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1515: Description: *Background:* Issues with big flowfiles were reported on Stack Overflow: [https://stackoverflow.com/questions/66330866/minifi-getfile-processor-fails-to-get-large-files/66334615?noredirect=1#comment117275399_66334615] It seems this is related to two issues: one is that a [narrowing exception happens|https://github.com/apache/nifi-minifi-cpp/blob/minifi-cpp-0.9.0-RC2/libminifi/src/core/ContentSession.cpp#L75] when trying to determine the length of the file to be written into the content repository, and the other is that we use _stat on Windows even though we should be using _stat64 to query files larger than 2GB. *Proposal:* MiNiFi should be tested with large files. *Acceptance criteria:*
{code:python|title=Feature definition proposal}
Scenario Outline: MiNiFi is capable of manipulating flowfiles of different sizes
  Given a GetFile processor with the "Input Directory" property set to "/tmp/input"
  And a file with <file size> of content is present in "/tmp/input"
  And a PutFile processor with the "Directory" property set to "/tmp/output"
  And the "success" relationship of the GetFile processor is connected to the PutFile
  When the MiNiFi instance starts up
  Then a flowfile with matching content is placed in the monitored directory in less than 20 seconds

  Examples: File size
    | file size |
    | 10 B      |
    | 1.5 KiB   |
    | 10 MiB    |
    | 1.0 GiB   |
    | 2.1 GiB   |
{code}
> Add integration tests verifying that MiNiFi handles large flowfiles as > expected > --- > > Key: MINIFICPP-1515 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1515 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 1.0.0, 0.9.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 1.0.0 > > Time Spent: 10m > Remaining Estimate: 0h > > *Background:* > Issues with big flowfiles were reported on Stack Overflow: > > [https://stackoverflow.com/questions/66330866/minifi-getfile-processor-fails-to-get-large-files/66334615?noredirect=1#comment117275399_66334615] > It seems this is related to two issues: one is that a [narrowing exception > happens|https://github.com/apache/nifi-minifi-cpp/blob/minifi-cpp-0.9.0-RC2/libminifi/src/core/ContentSession.cpp#L75] > when trying to determine the length of the file to be written into the > content repository, and the other is that we use _stat on Windows even though > we should be using _stat64 to query files larger than 2GB. > *Proposal:* > MiNiFi should be tested with large files. 
> *Acceptance criteria:* > {code:python|title=Feature definition proposal} > Scenario Outline: MiNiFi is capable of manipulating flowfiles of different > sizes > Given a GetFile processor with the "Input Directory" property set to > "/tmp/input" > And a file with <file size> of content is present in "/tmp/input" > And a PutFile processor with the "Directory" property set to "/tmp/output" > And the "success" relationship of the GetFile processor is connected to > the PutFile > When the MiNiFi instance starts up > Then a flowfile with matching content is placed in the monitored > directory in less than 20 seconds > Examples: File size > | file size | > | 10 B | > | 1.5 KiB | > | 10 MiB | > | 1.0 GiB | > | 2.1 GiB | > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1517) Remove script engine log cluttering from integration tests
Adam Hunyadi created MINIFICPP-1517: --- Summary: Remove script engine log cluttering from integration tests Key: MINIFICPP-1517 URL: https://issues.apache.org/jira/browse/MINIFICPP-1517 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: 0.9.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 1.0.0 *Background:* Currently the integration tests log errors like the following due to ScriptEngine issues:
{code:bash}
[warning] Cannot load SentimentAnalysis because of ModuleNotFoundError: No module named 'vaderSentiment'
{code}
*Proposal:* We should disable scripting while building the integration test docker image. This is to be re-enabled when the ScriptEngine issue is fixed or when there is an explicit need for it in the integration tests. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1518) Migrate integration tests (without security protocol settings) for ConsumeKafka to behave
[ https://issues.apache.org/jira/browse/MINIFICPP-1518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1518: Issue Type: Test (was: Improvement) > Migrate integration tests (without security protocol settings) for > ConsumeKafka to behave > - > > Key: MINIFICPP-1518 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1518 > Project: Apache NiFi MiNiFi C++ > Issue Type: Test >Affects Versions: 0.9.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Fix For: 1.0.0 > > > *Background:* > Currently the ConsumeKafka tests are implemented as manually run tests that > assume the presence of a running broker on the network. > *Proposal:* > We should move these tests to the integration test framework. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1518) Migrate integration tests (without security protocol settings) for ConsumeKafka to behave
Adam Hunyadi created MINIFICPP-1518: --- Summary: Migrate integration tests (without security protocol settings) for ConsumeKafka to behave Key: MINIFICPP-1518 URL: https://issues.apache.org/jira/browse/MINIFICPP-1518 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: 0.9.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 1.0.0 *Background:* Currently the ConsumeKafka tests are implemented as manually run tests that assume the presence of a running broker on the network. *Proposal:* We should move these tests to the integration test framework. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1515) Add integration tests verifying that MiNiFi handles large flowfiles as expected
[ https://issues.apache.org/jira/browse/MINIFICPP-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1515: Issue Type: Test (was: Improvement) > Add integration tests verifying that MiNiFi handles large flowfiles as > expected > --- > > Key: MINIFICPP-1515 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1515 > Project: Apache NiFi MiNiFi C++ > Issue Type: Test >Affects Versions: 1.0.0, 0.9.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 1.0.0 > > Time Spent: 10m > Remaining Estimate: 0h > > *Background:* > Issues with big flowfiles were reported on Stack Overflow: > > [https://stackoverflow.com/questions/66330866/minifi-getfile-processor-fails-to-get-large-files/66334615?noredirect=1#comment117275399_66334615] > It seems this is related to two issues: one is that a [narrowing exception > happens|https://github.com/apache/nifi-minifi-cpp/blob/minifi-cpp-0.9.0-RC2/libminifi/src/core/ContentSession.cpp#L75] > when trying to determine the length of the file to be written into the > content repository, and the other is that we use _stat on Windows even though > we should be using _stat64 to query files larger than 2GB. > *Proposal:* > MiNiFi should be tested with large files. 
> *Acceptance criteria:* > {code:python|title=Feature definition proposal} > Scenario Outline: MiNiFi is capable of manipulating flowfiles of different > sizes > Given a GetFile processor with the "Input Directory" property set to > "/tmp/input" > And a file with <file size> of content is present in "/tmp/input" > And a PutFile processor with the "Directory" property set to "/tmp/output" > And the "success" relationship of the GetFile processor is connected to > the PutFile > When the MiNiFi instance starts up > Then a flowfile with matching content is placed in the monitored > directory in less than 20 seconds > Examples: File size > | file size | > | 10 B | > | 1.5 KiB | > | 10 MiB | > | 1.0 GiB | > | 2.1 GiB | > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1250) Remove C2Callback.h
[ https://issues.apache.org/jira/browse/MINIFICPP-1250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1250: Description: *Background:* C2Callback.h is a piece of code never referenced (not even syntactically correct). *Proposal:* We should remove it from the codebase. was: *Background:* C2Callback.h is a piece of code never referenced (not even syntactically correct). *Proposal:* We should remove it from the codebase. > Remove C2Callback.h > --- > > Key: MINIFICPP-1250 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1250 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Trivial > Fix For: 0.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > > *Background:* > C2Callback.h is a piece of code never referenced (not even syntactically > correct). > *Proposal:* > We should remove it from the codebase. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1250) Remove C2Callback.h
[ https://issues.apache.org/jira/browse/MINIFICPP-1250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1250: Description: *Background:* C2Callback.h is a piece of code never referenced (not even syntactically correct). *Proposal:* We should remove it from the codebase. was: *Background:* C2Callback.h is a piece of code never referenced (not even syntactically correct. *Proposal:* We should remove it from the codebase. > Remove C2Callback.h > --- > > Key: MINIFICPP-1250 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1250 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Trivial > Fix For: 0.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > > *Background:* > C2Callback.h is a piece of code never referenced (not even syntactically > correct). > > *Proposal:* > We should remove it from the codebase. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (MINIFICPP-1250) Remove C2Callback.h
[ https://issues.apache.org/jira/browse/MINIFICPP-1250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi resolved MINIFICPP-1250. - Fix Version/s: (was: 0.10.0) 0.9.0 Resolution: Fixed > Remove C2Callback.h > --- > > Key: MINIFICPP-1250 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1250 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Trivial > Fix For: 0.9.0 > > Time Spent: 20m > Remaining Estimate: 0h > > *Background:* > C2Callback.h is a piece of code never referenced (not even syntactically > correct). > *Proposal:* > We should remove it from the codebase. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1522) Log the name of the corresponding faulty properties on validation failures
Adam Hunyadi created MINIFICPP-1522: --- Summary: Log the name of the corresponding faulty properties on validation failures Key: MINIFICPP-1522 URL: https://issues.apache.org/jira/browse/MINIFICPP-1522 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: 0.9.0 Reporter: Adam Hunyadi Fix For: 1.0.0 *Background:* Currently it is really hard to find out what prevents MiNiFi from starting up if the configuration contains a property that does not pass its validation. On a ConsumeKafka processor set up with "Max Poll Time": "6 sec", these are the only corresponding logs that were produced:
{code}
[2021-03-02 16:35:37.375] [org::apache::nifi::minifi::FlowController] [info] Instantiating new flow
[2021-03-02 16:35:37.377] [org::apache::nifi::minifi::core::YamlConfiguration] [error] Invalid yaml configuration file
[2021-03-02 16:35:37.377] [main] [error] Failed to load configuration due to exception: General Operation: Cannot convert invalid value
{code}
*Proposal:* We should at least try to log the name of the property whose validation failure prevents startup. *Acceptance Criteria:* *GIVEN* a MiNiFi instance with a flow configuration describing a ConsumeKafka processor *AND* in the configuration the "Max Poll Time" property is set to "6 sec" *WHEN* the MiNiFi instance starts up *THEN* it should fail due to property validation *AND* the logs should contain that the reason for this was a property validation failure on "Max Poll Time" -- This message was sent by Atlassian Jira (v8.3.4#803005)
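A hedged sketch of the proposed error reporting; the validator, function names, and error wording below are all hypothetical, not the actual minifi code. The point is simply that the thrown message carries the offending property name, so the resulting log line says more than a bare "Cannot convert invalid value".

```cpp
#include <stdexcept>
#include <string>

// Toy time-period validator (illustrative only): accepts values ending in
// " s" or " ms", so "6 sec" is rejected just like in the report above.
bool isValidTimePeriod(const std::string& value) {
  return value.size() > 2 &&
         (value.compare(value.size() - 2, 2, " s") == 0 ||
          value.compare(value.size() - 3, 3, " ms") == 0);
}

// Validation entry point that includes the property name in the error, so a
// top-level catch can log exactly which property failed.
std::string validateProperty(const std::string& name, const std::string& value) {
  if (!isValidTimePeriod(value)) {
    throw std::runtime_error("Property validation failed for \"" + name +
                             "\": cannot convert invalid value \"" + value + "\"");
  }
  return value;
}
```

With this shape, the "Max Poll Time" scenario in the acceptance criteria would surface the property name in the failure message instead of a generic conversion error.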
[jira] [Updated] (MINIFICPP-1522) Log the name of the corresponding faulty properties on validation failures
[ https://issues.apache.org/jira/browse/MINIFICPP-1522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1522: Description: *Acceptance Criteria:* *GIVEN* MiNiFi instance with a flow configuration describing a ConsumeKafka processor *AND* in the configuration the "Max Poll Time" property is set to "6 sec" *WHEN* the MiNiFi instance starts up *THEN* It should fail due to property validation *AND* The logs should contain that the reason for this was a property validation failure on "Max Poll Time" *Background:* Currently it is really hard to find out what issue prevents MiNiFi from starting up if the configuration contains a property that does not pass its validation. On a ConsumeKafka processor set up with "Max Poll Time": "6 sec", these are the only corresponding logs that were produced: {code} [2021-03-02 16:35:37.375] [org::apache::nifi::minifi::FlowController] [info] Instantiating new flow [2021-03-02 16:35:37.377] [org::apache::nifi::minifi::core::YamlConfiguration] [error] Invalid yaml configuration file [2021-03-02 16:35:37.377] [main] [error] Failed to load configuration due to exception: General Operation: Cannot convert invalid value {code} *Proposal:* We should at least try to log the name of the property that is causing a SPoF due to validation failure. was: *Background:* Currently it is really hard to find out what issue prevents MiNiFi from starting up if the configuration contains a property that does not pass its validation. 
On a ConsumeKafka processor set up with "Max Poll Time": "6 sec", these are the only corresponding logs that were produced: {code} [2021-03-02 16:35:37.375] [org::apache::nifi::minifi::FlowController] [info] Instantiating new flow [2021-03-02 16:35:37.377] [org::apache::nifi::minifi::core::YamlConfiguration] [error] Invalid yaml configuration file [2021-03-02 16:35:37.377] [main] [error] Failed to load configuration due to exception: General Operation: Cannot convert invalid value {code} *Proposal:* We should at least try to log the name of the property that is causing a SPoF due to validation failure. *Acceptance Criteria:* *GIVEN* MiNiFi instance with a flow configuration describing a ConsumeKafka processor *AND* in the configuration the "Max Poll Time" property is set to "6 sec" *WHEN* the MiNiFi instance starts up *THEN* It should fail due to property validation *AND* The logs should contain that the reason for this was a property validation failure on "Max Poll Time" > Log the name of the corresponding faulty properties on validation failures > -- > > Key: MINIFICPP-1522 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1522 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.9.0 >Reporter: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 1.0.0 > > > *Acceptance Criteria:* > *GIVEN* MiNiFi instance with a flow configuration describing a ConsumeKafka > processor > *AND* in the configuration the "Max Poll Time" property is set to "6 sec" > *WHEN* the MiNiFi instance starts up > *THEN* It should fail due to property validation > *AND* The logs should contain that the reason for this was a property > validation failure on "Max Poll Time" > *Background:* > Currently it is really hard to find out what issue prevents MiNiFi from > starting up if the configuration contains a property that does not pass its > validation. 
On a ConsumeKafka processor set up with "Max Poll Time": "6 sec", > these are the only corresponding logs that were produced: > {code} > [2021-03-02 16:35:37.375] [org::apache::nifi::minifi::FlowController] [info] > Instantiating new flow > [2021-03-02 16:35:37.377] > [org::apache::nifi::minifi::core::YamlConfiguration] [error] Invalid yaml > configuration file > [2021-03-02 16:35:37.377] [main] [error] Failed to load configuration due to > exception: General Operation: Cannot convert invalid value > {code} > *Proposal:* > We should at least try to log the name of the property that is causing a SPoF > due to validation failure. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1522) Log the name of the corresponding faulty properties on validation failures
[ https://issues.apache.org/jira/browse/MINIFICPP-1522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1522: Description: *Acceptance Criteria:* *GIVEN* a MiNiFi instance with a flow configuration describing a ConsumeKafka processor *AND* in the configuration the "Max Poll Time" property is set to "6 sec" *WHEN* the MiNiFi instance starts up *THEN* It should fail due to property validation *AND* The logs should contain that the reason for this was a property validation failure on "Max Poll Time" *Background:* Currently it is really hard to find out what issue prevents MiNiFi from starting up if the configuration contains a property that does not pass its validation. On a ConsumeKafka processor set up with "Max Poll Time": "6 sec", these are the only corresponding logs that were produced: {code} [2021-03-02 16:35:37.375] [org::apache::nifi::minifi::FlowController] [info] Instantiating new flow [2021-03-02 16:35:37.377] [org::apache::nifi::minifi::core::YamlConfiguration] [error] Invalid yaml configuration file [2021-03-02 16:35:37.377] [main] [error] Failed to load configuration due to exception: General Operation: Cannot convert invalid value {code} *Proposal:* We should at least try to log the name of the property that is causing a SPoF due to validation failure. was: *Acceptance Criteria:* *GIVEN* MiNiFi instance with a flow configuration describing a ConsumeKafka processor *AND* in the configuration the "Max Poll Time" property is set to "6 sec" *WHEN* the MiNiFi instance starts up *THEN* It should fail due to property validation *AND* The logs should contain that the reason for this was a property validation failure on "Max Poll Time" *Background:* Currently it is really hard to find out what issue prevents MiNiFi from starting up if the configuration contains a property that does not pass its validation. 
On a ConsumeKafka processor set up with "Max Poll Time": "6 sec", these are the only corresponding logs that were produced: {code} [2021-03-02 16:35:37.375] [org::apache::nifi::minifi::FlowController] [info] Instantiating new flow [2021-03-02 16:35:37.377] [org::apache::nifi::minifi::core::YamlConfiguration] [error] Invalid yaml configuration file [2021-03-02 16:35:37.377] [main] [error] Failed to load configuration due to exception: General Operation: Cannot convert invalid value {code} *Proposal:* We should at least try to log the name of the property that is causing a SPoF due to validation failure. > Log the name of the corresponding faulty properties on validation failures > -- > > Key: MINIFICPP-1522 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1522 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.9.0 >Reporter: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 1.0.0 > > > *Acceptance Criteria:* > *GIVEN* a MiNiFi instance with a flow configuration describing a ConsumeKafka > processor > *AND* in the configuration the "Max Poll Time" property is set to "6 sec" > *WHEN* the MiNiFi instance starts up > *THEN* It should fail due to property validation > *AND* The logs should contain that the reason for this was a property > validation failure on "Max Poll Time" > *Background:* > Currently it is really hard to find out what issue prevents MiNiFi from > starting up if the configuration contains a property that does not pass its > validation. 
On a ConsumeKafka processor set up with "Max Poll Time": "6 sec", > these are the only corresponding logs that were produced: > {code} > [2021-03-02 16:35:37.375] [org::apache::nifi::minifi::FlowController] [info] > Instantiating new flow > [2021-03-02 16:35:37.377] > [org::apache::nifi::minifi::core::YamlConfiguration] [error] Invalid yaml > configuration file > [2021-03-02 16:35:37.377] [main] [error] Failed to load configuration due to > exception: General Operation: Cannot convert invalid value > {code} > *Proposal:* > We should at least try to log the name of the property that is causing a SPoF > due to validation failure. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1522) Log the name of the corresponding faulty properties on validation failures
[ https://issues.apache.org/jira/browse/MINIFICPP-1522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1522: Description: *Acceptance Criteria:* *GIVEN* a MiNiFi instance with a flow configuration describing a ConsumeKafka processor *AND* in the configuration of ConsumeKafka the "Max Poll Time" property is set to "6 sec" *WHEN* the MiNiFi instance starts up *THEN* It should fail due to property validation *AND* The logs should mention that the reason for this was a property validation failure on "Max Poll Time" *Background:* Currently it is really hard to find out what issue prevents MiNiFi from starting up if the configuration contains a property that does not pass its validation. On a ConsumeKafka processor set up with "Max Poll Time": "6 sec", these are the only corresponding logs that were produced: {code:java} [2021-03-02 16:35:37.375] [org::apache::nifi::minifi::FlowController] [info] Instantiating new flow [2021-03-02 16:35:37.377] [org::apache::nifi::minifi::core::YamlConfiguration] [error] Invalid yaml configuration file [2021-03-02 16:35:37.377] [main] [error] Failed to load configuration due to exception: General Operation: Cannot convert invalid value {code} *Proposal:* We should at least try to log the name of the property that is causing a SPoF due to validation failure. was: *Acceptance Criteria:* *GIVEN* a MiNiFi instance with a flow configuration describing a ConsumeKafka processor *AND* in the configuration the "Max Poll Time" property is set to "6 sec" *WHEN* the MiNiFi instance starts up *THEN* It should fail due to property validation *AND* The logs should contain that the reason for this was a property validation failure on "Max Poll Time" *Background:* Currently it is really hard to find out what issue prevents MiNiFi from starting up if the configuration contains a property that does not pass its validation. 
On a ConsumeKafka processor set up with "Max Poll Time": "6 sec", these are the only corresponding logs that were produced: {code} [2021-03-02 16:35:37.375] [org::apache::nifi::minifi::FlowController] [info] Instantiating new flow [2021-03-02 16:35:37.377] [org::apache::nifi::minifi::core::YamlConfiguration] [error] Invalid yaml configuration file [2021-03-02 16:35:37.377] [main] [error] Failed to load configuration due to exception: General Operation: Cannot convert invalid value {code} *Proposal:* We should at least try to log the name of the property that is causing a SPoF due to validation failure. > Log the name of the corresponding faulty properties on validation failures > -- > > Key: MINIFICPP-1522 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1522 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.9.0 >Reporter: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 1.0.0 > > > *Acceptance Criteria:* > *GIVEN* a MiNiFi instance with a flow configuration describing a ConsumeKafka > processor > *AND* in the configuration of ConsumeKafka the "Max Poll Time" property is > set to "6 sec" > *WHEN* the MiNiFi instance starts up > *THEN* It should fail due to property validation > *AND* The logs should mention that the reason for this was a property > validation failure on "Max Poll Time" > *Background:* > Currently it is really hard to find out what issue prevents MiNiFi from > starting up if the configuration contains a property that does not pass its > validation. 
On a ConsumeKafka processor set up with "Max Poll Time": "6 sec", > these are the only corresponding logs that were produced: > {code:java} > [2021-03-02 16:35:37.375] [org::apache::nifi::minifi::FlowController] [info] > Instantiating new flow > [2021-03-02 16:35:37.377] > [org::apache::nifi::minifi::core::YamlConfiguration] [error] Invalid yaml > configuration file > [2021-03-02 16:35:37.377] [main] [error] Failed to load configuration due to > exception: General Operation: Cannot convert invalid value > {code} > *Proposal:* > We should at least try to log the name of the property that is causing a SPoF > due to validation failure. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1522) Log the name of the corresponding faulty properties on validation failures
[ https://issues.apache.org/jira/browse/MINIFICPP-1522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1522: Description: *Acceptance Criteria:* *GIVEN* a MiNiFi instance with a flow configuration describing a ConsumeKafka processor *AND* in the configuration of ConsumeKafka the "Max Poll Time" property is set to "6 sec" *WHEN* the MiNiFi instance starts up *THEN* It should fail due to property validation *AND* The logs should mention that the reason for this was a property validation failure on "Max Poll Time" *Background:* Currently it is really hard to find out what issue prevents MiNiFi from starting up if the configuration contains a property that does not pass its validation. Running the test described above, currently only these lines are logged to describe the issue: {code:java} [2021-03-02 16:35:37.375] [org::apache::nifi::minifi::FlowController] [info] Instantiating new flow [2021-03-02 16:35:37.377] [org::apache::nifi::minifi::core::YamlConfiguration] [error] Invalid yaml configuration file [2021-03-02 16:35:37.377] [main] [error] Failed to load configuration due to exception: General Operation: Cannot convert invalid value {code} *Proposal:* We should at least try to log the name of the property that is causing a SPoF due to validation failure. was: *Acceptance Criteria:* *GIVEN* a MiNiFi instance with a flow configuration describing a ConsumeKafka processor *AND* in the configuration of ConsumeKafka the "Max Poll Time" property is set to "6 sec" *WHEN* the MiNiFi instance starts up *THEN* It should fail due to property validation *AND* The logs should mention that the reason for this was a property validation failure on "Max Poll Time" *Background:* Currently it is really hard to find out what issue prevents MiNiFi from starting up if the configuration contains a property that does not pass its validation. 
On a ConsumeKafka processor set up with "Max Poll Time": "6 sec", these are the only corresponding logs that were produced: {code:java} [2021-03-02 16:35:37.375] [org::apache::nifi::minifi::FlowController] [info] Instantiating new flow [2021-03-02 16:35:37.377] [org::apache::nifi::minifi::core::YamlConfiguration] [error] Invalid yaml configuration file [2021-03-02 16:35:37.377] [main] [error] Failed to load configuration due to exception: General Operation: Cannot convert invalid value {code} *Proposal:* We should at least try to log the name of the property that is causing a SPoF due to validation failure. > Log the name of the corresponding faulty properties on validation failures > -- > > Key: MINIFICPP-1522 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1522 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.9.0 >Reporter: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 1.0.0 > > > *Acceptance Criteria:* > *GIVEN* a MiNiFi instance with a flow configuration describing a ConsumeKafka > processor > *AND* in the configuration of ConsumeKafka the "Max Poll Time" property is > set to "6 sec" > *WHEN* the MiNiFi instance starts up > *THEN* It should fail due to property validation > *AND* The logs should mention that the reason for this was a property > validation failure on "Max Poll Time" > *Background:* > Currently it is really hard to find out what issue prevents MiNiFi from > starting up if the configuration contains a property that does not pass its > validation. 
Running the test described above, currently only these lines are > logged to describe the issue: > {code:java} > [2021-03-02 16:35:37.375] [org::apache::nifi::minifi::FlowController] [info] > Instantiating new flow > [2021-03-02 16:35:37.377] > [org::apache::nifi::minifi::core::YamlConfiguration] [error] Invalid yaml > configuration file > [2021-03-02 16:35:37.377] [main] [error] Failed to load configuration due to > exception: General Operation: Cannot convert invalid value > {code} > *Proposal:* > We should at least try to log the name of the property that is causing a SPoF > due to validation failure. -- This message was sent by Atlassian Jira (v8.3.4#803005)
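The proposal above — naming the offending property when validation fails — could be sketched like this in Python. This is an illustrative sketch only, not the actual MiNiFi C++ code; the validator, exception type, and function names are all hypothetical:

```python
# Hypothetical sketch: wrap per-property validation so that a failure names
# the offending property instead of a bare "Cannot convert invalid value".

class ValidationError(Exception):
    pass

def validate_time_period(value: str) -> int:
    """Parse values like '6 s' or '100 ms' into milliseconds.
    'sec' is not a recognized unit here, mirroring the reported
    "Max Poll Time": "6 sec" failure."""
    units = {"ms": 1, "s": 1000, "min": 60000}
    try:
        amount, unit = value.split()
        return int(amount) * units[unit]
    except (ValueError, KeyError):
        raise ValidationError(f"Cannot convert invalid value {value!r}")

def validate_property(name: str, value: str) -> int:
    try:
        return validate_time_period(value)
    except ValidationError as err:
        # Re-raise with the property name so the log pinpoints the culprit.
        raise ValidationError(
            f"Validation failed for property '{name}': {err}") from err
```

With this shape, the startup log would carry "Validation failed for property 'Max Poll Time'" rather than only a generic conversion error.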
[jira] [Resolved] (MINIFICPP-1517) Remove script engine log cluttering from integration tests
[ https://issues.apache.org/jira/browse/MINIFICPP-1517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi resolved MINIFICPP-1517. - Fix Version/s: (was: 1.0.0) 0.10.0 Resolution: Fixed > Remove script engine log cluttering from integration tests > -- > > Key: MINIFICPP-1517 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1517 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.9.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 0.10.0 > > Time Spent: 40m > Remaining Estimate: 0h > > *Background:* > Currently the integration tests produce error logs like these due to > ScriptEngine issues: > {code:bash} > [warning] Cannot load SentimentAnalysis because of ModuleNotFoundError: No > module named 'vaderSentiment' > {code} > *Proposal:* > While building the integration test docker image, we should disable > scripting. It is to be re-enabled when the ScriptEngine issue is fixed or > when there is an explicit need for it in the integration tests. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (MINIFICPP-1513) Make integration tests log using a common logger
[ https://issues.apache.org/jira/browse/MINIFICPP-1513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi resolved MINIFICPP-1513. - Fix Version/s: (was: 1.0.0) 0.10.0 Resolution: Fixed > Make integration tests log using a common logger > > > Key: MINIFICPP-1513 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1513 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 1.0.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 0.10.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > *Background:* > Currently the Python-based integration tests instantiate separate loggers. > This makes it difficult to capture logging when multiple tests are run within > the integration tests (e.g. a single Scenario Outline with multiple examples). > *Proposal:* > We should extract the logging functionality into its own module and > instantiate loggers exactly once. -- This message was sent by Atlassian Jira (v8.3.4#803005)
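The common-logger proposal above can be sketched as a tiny shared Python module. The module layout and function name are hypothetical, not taken from the repository:

```python
# Hypothetical shared logging module for the integration tests:
# every test module calls get_test_logger() instead of creating its own logger,
# so the logger (and its handler) is instantiated exactly once.
import logging

_logger = None  # module-level singleton

def get_test_logger() -> logging.Logger:
    global _logger
    if _logger is None:
        _logger = logging.getLogger("minifi-integration-tests")
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(
            "[%(asctime)s] [%(name)s] [%(levelname)s] %(message)s"))
        _logger.addHandler(handler)
        _logger.setLevel(logging.DEBUG)
    return _logger
```

Because `logging.getLogger` already returns the same object for the same name, the guard mainly prevents duplicate handlers when multiple tests (e.g. a Scenario Outline with several examples) request the logger.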
[jira] [Created] (MINIFICPP-1203) Set up cpplint so that it can be configured per-directory
Adam Hunyadi created MINIFICPP-1203: --- Summary: Set up cpplint so that it can be configured per-directory Key: MINIFICPP-1203 URL: https://issues.apache.org/jira/browse/MINIFICPP-1203 Project: Apache NiFi MiNiFi C++ Issue Type: Bug Affects Versions: 0.7.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 0.8.0 *Background:* Currently the cpplint checks do not check all the files correctly and hide some errors that are meant to be displayed. Manually running the following cpplint check overreports the number of errors, but it gives a decent estimate of the number of errors ignored: {code:bash} # This command shows some errors that we otherwise suppress in the project cpplint --linelength=200 --filter=-runtime/reference,-runtime/string,-build/c++11,-build/include_order,-build/include_alpha `find libminifi/ -name \*.cpp -o -name \*.h` (...) Total errors found: 1730 {code} When running {{{color:#403294}{{make linter}}{color}}} these errors are suppressed. It runs the following command in {{{color:#403294}run_linter.sh{color}}}: {code:bash} python ${SCRIPT_DIR}/cpplint.py --linelength=200 --headers=${HEADERS} ${SOURCES} {code} For some reason, it seems like the files specified in the {{{color:#403294}{{--headers}}{color}}} flag are ignored altogether. For example: {code:bash} # Running w/ headers option set cpplint --filter="-runtime/reference,-runtime/string,-build/c++11" --linelength=200 --headers=`find . 
-name "*.h" | tr '\n' ','` libminifi/include/processors/ProcessorUtils.h 2>/dev/null Done processing libminifi/include/processors/ProcessorUtils.h # Running w/ unspecified headers cpplint --filter="-runtime/reference,-runtime/string,-build/c++11" --linelength=200 libminifi/include/processors/ProcessorUtils.h 2>/dev/null Done processing libminifi/include/processors/ProcessorUtils.h Total errors found: 6 {code} *Proposal:* We should remove the header specification from {{{color:#403294}{{make linter}}{color}}} and set up linter configuration files in the project directories that set all the rules to be applied on the specific directory contents recursively. There are two approaches for doing this: we can either specify files or rules to be ignored when doing the linter check. The latter is preferable, so that when we want to clear them up later, we can have separate commits/pull requests for each of the warnings fixed (and potentially automate fixes, e.g. by writing clang-tidy rules or applying linter fixes). (!) The commits on this Jira are not expected to fix any warnings reported by the linter, but to have all the failing checks disabled. -- This message was sent by Atlassian Jira (v8.3.4#803005)
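One way to realize the per-directory setup described above is through cpplint's `CPPLINT.cfg` files: cpplint reads such a file from a checked file's directory and its parents, `set noparent` stops the upward search, and `filter=`/`linelength=` override the command line. A hedged Python sketch for seeding such configs follows; the directory list and filter set are illustrative, not the project's actual configuration:

```python
# Sketch: seed project directories with a CPPLINT.cfg that disables the
# currently-failing checks, so they can be re-enabled one rule at a time.
from pathlib import Path

CFG = """set noparent
linelength=200
filter=-runtime/reference,-runtime/string,-build/c++11
"""

def seed_configs(root: Path, dirs: list) -> None:
    """Write an identical CPPLINT.cfg into each listed directory."""
    for d in dirs:
        target = root / d / "CPPLINT.cfg"
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(CFG)
```

With configs in place, `make linter` could drop the `--headers` flag entirely and simply invoke cpplint on the sources, letting each directory's config decide which rules apply.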
[jira] [Created] (MINIFICPP-1204) Correct property-relationship mix-ups in Processors.md
Adam Hunyadi created MINIFICPP-1204: --- Summary: Correct property-relationship mix-ups in Processors.md Key: MINIFICPP-1204 URL: https://issues.apache.org/jira/browse/MINIFICPP-1204 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: 0.7.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 0.8.0 Attachments: Screenshot 2020-04-23 at 10.22.34.png *Background:* Processors.md incorrectly shows two sets of properties for each of the processors, e.g.: !Screenshot 2020-04-23 at 10.22.34.png|width=564,height=316! *Proposal:* The second table, titled "Properties", is expected to have the title "Relationships" instead. Review the documentation and check whether the words "properties" and "relationships" are used correctly. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1205) Correct documentation for ExecutePythonProcessor
Adam Hunyadi created MINIFICPP-1205: --- Summary: Correct documentation for ExecutePythonProcessor Key: MINIFICPP-1205 URL: https://issues.apache.org/jira/browse/MINIFICPP-1205 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: 0.7.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 0.8.0 Attachments: Screenshot 2020-04-22 at 13.29.06.png, Screenshot 2020-04-22 at 13.38.06.png *Background:* Both the documentation and the property description mention the option of using "Script Body" to specify inline Python scripts for processors. However, this option does not seem to be supported. !Screenshot 2020-04-22 at 13.29.06.png|width=718,height=151! !Screenshot 2020-04-22 at 13.38.06.png|width=709,height=145! *Proposal:* Remove the mention of the "Script Body" option from the documentation and the property description. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1205) Correct documentation for ExecutePythonProcessor
[ https://issues.apache.org/jira/browse/MINIFICPP-1205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1205: Priority: Minor (was: Major) > Correct documentation for ExecutePythonProcessor > > > Key: MINIFICPP-1205 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1205 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Fix For: 0.8.0 > > Attachments: Screenshot 2020-04-22 at 13.29.06.png, Screenshot > 2020-04-22 at 13.38.06.png > > > *Background:* > Both the documentation and the property description mention the option of > using "Script Body" to specify inline Python scripts for processors. However, > this option does not seem to be supported. > !Screenshot 2020-04-22 at 13.29.06.png|width=718,height=151! > !Screenshot 2020-04-22 at 13.38.06.png|width=709,height=145! > > *Proposal:* > Remove the mention of the "Script Body" option from the documentation and the > property description. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1205) Correct documentation for ExecutePythonProcessor
[ https://issues.apache.org/jira/browse/MINIFICPP-1205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1205: Priority: Trivial (was: Minor) > Correct documentation for ExecutePythonProcessor > > > Key: MINIFICPP-1205 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1205 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Trivial > Fix For: 0.8.0 > > Attachments: Screenshot 2020-04-22 at 13.29.06.png, Screenshot > 2020-04-22 at 13.38.06.png > > > *Background:* > Both the documentation and the property description mention the option of > using "Script Body" to specify inline Python scripts for processors. However, > this option does not seem to be supported. > !Screenshot 2020-04-22 at 13.29.06.png|width=718,height=151! > !Screenshot 2020-04-22 at 13.38.06.png|width=709,height=145! > > *Proposal:* > Remove the mention of the "Script Body" option from the documentation and the > property description. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1204) Correct property-relationship mix-ups in Processors.md
[ https://issues.apache.org/jira/browse/MINIFICPP-1204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1204: Description: *Background:* Processors.md incorrectly shows two sets of properties for each of the processors, e.g.: !Screenshot 2020-04-23 at 10.22.34.png|width=678,height=380! *Proposal:* The second table, titled "Properties", is expected to have the title "Relationships" instead. Review the documentation and check whether the words "properties" and "relationships" are used correctly. was: *Background:* Processors.md incorrectly shows two sets of properties for each of the processors, e.g.: !Screenshot 2020-04-23 at 10.22.34.png|width=564,height=316! *Proposal:* The second table, titled "Properties", is expected to have the title "Relationships" instead. Review the documentation and check whether the words "properties" and "relationships" are used correctly. > Correct property-relationship mix-ups in Processors.md > -- > > Key: MINIFICPP-1204 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1204 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Trivial > Fix For: 0.8.0 > > Attachments: Screenshot 2020-04-23 at 10.22.34.png > > > *Background:* > Processors.md incorrectly shows two sets of properties for each of the > processors, e.g.: > !Screenshot 2020-04-23 at 10.22.34.png|width=678,height=380! > *Proposal:* > The second table, titled "Properties", is expected to have the title > "Relationships" instead. Review the documentation and check whether the words > "properties" and "relationships" are used correctly. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1205) Correct documentation for ExecutePythonProcessor
[ https://issues.apache.org/jira/browse/MINIFICPP-1205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1205: Description: *Background:* Both the documentation and the property description mention the option of using "Script Body" to specify inline Python scripts for processors. However, this option does not seem to be supported. !Screenshot 2020-04-22 at 13.29.06.png|width=718,height=151! !Screenshot 2020-04-22 at 13.38.06.png|width=709,height=145! *Proposal:* Remove the mention of the "Script Body" option from the documentation and the property description. *Update:* There is also some dead code clearly linked to an earlier attempt to implement this feature; this is also expected to be removed. was: *Background:* Both the documentation and the property description mention the option of using "Script Body" to specify inline Python scripts for processors. However, this option does not seem to be supported. !Screenshot 2020-04-22 at 13.29.06.png|width=718,height=151! !Screenshot 2020-04-22 at 13.38.06.png|width=709,height=145! *Proposal:* Remove the mention of the "Script Body" option from the documentation and the property description. > Correct documentation for ExecutePythonProcessor > > > Key: MINIFICPP-1205 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1205 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Trivial > Fix For: 0.8.0 > > Attachments: Screenshot 2020-04-22 at 13.29.06.png, Screenshot > 2020-04-22 at 13.38.06.png > > Time Spent: 10m > Remaining Estimate: 0h > > *Background:* > Both the documentation and the property description mention the option of > using "Script Body" to specify inline Python scripts for processors. However, > this option does not seem to be supported. > !Screenshot 2020-04-22 at 13.29.06.png|width=718,height=151! > !Screenshot 2020-04-22 at 13.38.06.png|width=709,height=145! 
> > *Proposal:* > Remove the mention of the "Script Body" option from the documentation and the > property description. > > *Update:* > There is also some dead code clearly linked to an earlier attempt to implement > this feature; this is also expected to be removed. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1205) Correct documentation and dead code for ExecutePythonProcessor
[ https://issues.apache.org/jira/browse/MINIFICPP-1205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1205: Summary: Correct documentation and dead code for ExecutePythonProcessor (was: Correct documentation for ExecutePythonProcessor) > Correct documentation and dead code for ExecutePythonProcessor > -- > > Key: MINIFICPP-1205 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1205 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Trivial > Fix For: 0.8.0 > > Attachments: Screenshot 2020-04-22 at 13.29.06.png, Screenshot > 2020-04-22 at 13.38.06.png > > Time Spent: 10m > Remaining Estimate: 0h > > *Background:* > Both the documentation and the property description mention the option of > using "Script Body" to specify inline Python scripts for processors. However, > this option does not seem to be supported. > !Screenshot 2020-04-22 at 13.29.06.png|width=718,height=151! > !Screenshot 2020-04-22 at 13.38.06.png|width=709,height=145! > > *Proposal:* > Remove the mention of the "Script Body" option from the documentation and the > property description. > > *Update:* > There is also some dead code clearly linked to an earlier attempt to implement > this feature; this is also expected to be removed. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1206) Implement inline-script support for ExecutePythonProcessor
Adam Hunyadi created MINIFICPP-1206: --- Summary: Implement inline-script support for ExecutePythonProcessor Key: MINIFICPP-1206 URL: https://issues.apache.org/jira/browse/MINIFICPP-1206 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: 0.7.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 1.0.0 *Acceptance criteria:* # Given a Python script that transfers to REL_FAILURE *Background:* We would like to start supporting inline Python scripts in ExecutePythonProcessor. Someone already tried to add support for this option, but the implementation was never finished. *Proposal:* The related dead code was removed in [PR-768|https://github.com/apache/nifi-minifi-cpp/pull/768]. The implementation can be started by reverting this commit. Upon implementation, please check whether failures are handled properly and add tests for this branch as well. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1206) Implement inline-script support for ExecutePythonProcessor
[ https://issues.apache.org/jira/browse/MINIFICPP-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1206: Description: *Acceptance criteria:* # GIVEN a python script that transfers to *REL_FAILURE* and an ExecutePythonProcessor set up with a "Script File" attribute, but not "Script Body" WHEN the processor is triggered THEN any consumer using the *failure* relationship as source should receive the transfered data # GIVEN an ExecutePythonProcessor that is set up with a "Script Body" attribute that transfers to *REL_SUCCESS*, but not "Script File" ** WHEN the processor is triggered THEN any consumer using the *success* relationship as source should receive the transfered data # GIVEN an ExecutePythonProcessor that is set up with a "Script Body" attribute that transfers to *REL_FAILURE*, but not "Script File" ** WHEN the processor is triggered THEN any consumer using the *failure* relationship as source should receive the transfered data # GIVEN an ExecutePythonProcessor that is set up with both "Script Body" and "Script File" ** WHEN the processor is triggered THEN an error should be logged and neither the inline script nor the script body should be executed *Background:* We would like to start supporting inline python scripts in ExecutePythonProcessor. Someone already tried to add support for this option, but he did not finish the implementation. *Proposal:* The related dead code was removed in [PR-768|https://github.com/apache/nifi-minifi-cpp/pull/768]. The implementation can be started by reverting this commit. Upon implementation, please check if failures are handled properly and add tests for this branch as well. was: *Acceptance criteria:* # Given a python script that transfers to REL_FAILURE *Background:* We would like to start supporting inline python scripts in ExecutePythonProcessor. Someone already tried to add support for this option, but he did not finish the implementation. 
*Proposal:* The related dead code was removed in [PR-768|https://github.com/apache/nifi-minifi-cpp/pull/768]. The implementation can be started by reverting this commit. Upon implementation, please check if failures are handled properly and add tests for this branch as well. > Implement inline-script support for ExecutePythonProcessor > -- > > Key: MINIFICPP-1206 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1206 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Fix For: 1.0.0 > > > *Acceptance criteria:* > # GIVEN a python script that transfers to *REL_FAILURE* and an > ExecutePythonProcessor set up with a "Script File" attribute, but not "Script > Body" > WHEN the processor is triggered > THEN any consumer using the *failure* relationship as source should receive > the transfered data > # GIVEN an ExecutePythonProcessor that is set up with a "Script Body" > attribute that transfers to *REL_SUCCESS*, but not "Script File" ** > WHEN the processor is triggered > THEN any consumer using the *success* relationship as source should receive > the transfered data > # GIVEN an ExecutePythonProcessor that is set up with a "Script Body" > attribute that transfers to *REL_FAILURE*, but not "Script File" ** > WHEN the processor is triggered > THEN any consumer using the *failure* relationship as source should receive > the transfered data > # GIVEN an ExecutePythonProcessor that is set up with both "Script Body" and > "Script File" ** > WHEN the processor is triggered > THEN an error should be logged and neither the inline script nor the script > body should be executed > *Background:* > We would like to start supporting inline python scripts in > ExecutePythonProcessor. Someone already tried to add support for this option, > but he did not finish the implementation. 
> *Proposal:* > The related dead code was removed in > [PR-768|https://github.com/apache/nifi-minifi-cpp/pull/768]. The > implementation can be started by reverting this commit. Upon implementation, > please check if failures are handled properly and add tests for this branch > as well. -- This message was sent by Atlassian Jira (v8.3.4#803005)
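The four acceptance criteria above reduce to one resolution rule: exactly one of "Script File" and "Script Body" may be configured, and when both are set the processor must log an error and execute neither. A minimal Python sketch of that rule, for illustration only (the function name `resolve_script` is hypothetical, not the actual MiNiFi C++ API):

```python
def resolve_script(script_file=None, script_body=None):
    """Return the script source to execute.

    Enforces the criteria above: exactly one of the two properties
    ("Script File" / "Script Body") must be set.
    """
    if script_file and script_body:
        # Criterion #4: both set -> error, execute neither.
        raise ValueError('"Script File" and "Script Body" are mutually exclusive')
    if script_body:
        # Inline script: use the property value as-is.
        return script_body
    if script_file:
        # File-based script: load the source from disk.
        with open(script_file) as f:
            return f.read()
    raise ValueError('one of "Script File" or "Script Body" must be set')
```

Whatever source is resolved, the success/failure criteria then only concern which relationship the script transfers to at runtime.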
[jira] [Updated] (MINIFICPP-1206) Implement inline-script support for ExecutePythonProcessor
[ https://issues.apache.org/jira/browse/MINIFICPP-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1206: Description: *Acceptance criteria:* # GIVEN a python script that transfers to *REL_FAILURE* and an ExecutePythonProcessor set up with a *"Script File"* attribute, but not "Script Body" WHEN the processor is triggered THEN any consumer using the *failure* relationship as source should receive the transferred data # GIVEN an ExecutePythonProcessor that is set up with a *"Script Body"* attribute that transfers to *REL_SUCCESS*, but not *"Script File"* WHEN the processor is triggered THEN any consumer using the *success* relationship as source should receive the transferred data # GIVEN an ExecutePythonProcessor that is set up with a *"Script Body"* attribute that transfers to *REL_FAILURE*, but not *"Script File"* WHEN the processor is triggered THEN any consumer using the *failure* relationship as source should receive the transferred data # GIVEN an ExecutePythonProcessor that is set up with both *"Script Body"* and *"Script File"* WHEN the processor is triggered THEN an error should be logged and neither the inline script nor the script body should be executed *Background:* We would like to start supporting inline python scripts in ExecutePythonProcessor. Someone already tried to add support for this option, but the implementation was never finished. *Proposal:* The related dead code was removed in [PR-768|https://github.com/apache/nifi-minifi-cpp/pull/768]. The implementation can be started by reverting this commit. Upon implementation, please check if failures are handled properly and add tests for this branch as well.
was: *Acceptance criteria:* # GIVEN a python script that transfers to *REL_FAILURE* and an ExecutePythonProcessor set up with a "Script File" attribute, but not "Script Body" WHEN the processor is triggered THEN any consumer using the *failure* relationship as source should receive the transferred data # GIVEN an ExecutePythonProcessor that is set up with a "Script Body" attribute that transfers to *REL_SUCCESS*, but not "Script File" ** WHEN the processor is triggered THEN any consumer using the *success* relationship as source should receive the transferred data # GIVEN an ExecutePythonProcessor that is set up with a "Script Body" attribute that transfers to *REL_FAILURE*, but not "Script File" ** WHEN the processor is triggered THEN any consumer using the *failure* relationship as source should receive the transferred data # GIVEN an ExecutePythonProcessor that is set up with both "Script Body" and "Script File" ** WHEN the processor is triggered THEN an error should be logged and neither the inline script nor the script body should be executed *Background:* We would like to start supporting inline python scripts in ExecutePythonProcessor. Someone already tried to add support for this option, but he did not finish the implementation. *Proposal:* The related dead code was removed in [PR-768|https://github.com/apache/nifi-minifi-cpp/pull/768]. The implementation can be started by reverting this commit. Upon implementation, please check if failures are handled properly and add tests for this branch as well.
> Implement inline-script support for ExecutePythonProcessor > -- > > Key: MINIFICPP-1206 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1206 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Fix For: 1.0.0 > > > *Acceptance criteria:* > # GIVEN a python script that transfers to *REL_FAILURE* and an > ExecutePythonProcessor set up with a *"Script File"* attribute, but not > "Script Body" > WHEN the processor is triggered > THEN any consumer using the *failure* relationship as source should receive > the transferred data > # GIVEN an ExecutePythonProcessor that is set up with a *"Script Body"* > attribute that transfers to *REL_SUCCESS*, but not *"Script File"* > WHEN the processor is triggered > THEN any consumer using the *success* relationship as source should receive > the transferred data > # GIVEN an ExecutePythonProcessor that is set up with a *"Script Body"* > attribute that transfers to *REL_FAILURE*, but not *"Script File"* > WHEN the processor is triggered > THEN any consumer using the *failure* relationship as source should receive > the transferred data > # GIVEN an ExecutePythonProcessor that is set up with both *"Script Body"* > and *"Script File"* > WHEN the processor is triggered > THEN an error should be logged and neither the inline script nor the script > body should be executed > *Background:* > We would like to start supporting inline python scripts in > ExecutePythonProcessor. Someone already tried to add support for this option, > but the implementation was never finished. > *Proposal:* > The related dead code was removed in > [PR-768|https://github.com/apache/nifi-minifi-cpp/pull/768]. The > implementation can be started by reverting this commit. Upon implementation, > please check if failures are handled properly and add tests for this branch > as well. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1206) Implement inline-script support for ExecutePythonProcessor
[ https://issues.apache.org/jira/browse/MINIFICPP-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1206: Description: *Acceptance criteria:* # GIVEN a python script that transfers to *REL_FAILURE* and an ExecutePythonProcessor set up with a *"Script File"* attribute, but not with *"Script Body"* WHEN the processor is triggered THEN any consumer using the *failure* relationship as source should receive the transferred data # GIVEN an ExecutePythonProcessor set up with a *"Script Body"* attribute that transfers to *REL_SUCCESS*, but not *"Script File"* WHEN the processor is triggered THEN any consumer using the *success* relationship as source should receive the transferred data # GIVEN an ExecutePythonProcessor set up with a *"Script Body"* attribute that transfers to *REL_FAILURE*, but not *"Script File"* WHEN the processor is triggered THEN any consumer using the *failure* relationship as source should receive the transferred data # GIVEN an ExecutePythonProcessor set up with both *"Script Body"* and *"Script File"* WHEN the processor is triggered THEN an error should be logged and neither the inline script nor the script body should be executed *Background:* We would like to start supporting inline python scripts in ExecutePythonProcessor. Someone already tried to add support for this option, but the implementation was never finished. *Proposal:* The related dead code was removed in [PR-768|https://github.com/apache/nifi-minifi-cpp/pull/768]. The implementation can be started by reverting this commit. Upon implementation, please check if failures are handled properly and add tests for this branch as well.
was: *Acceptance criteria:* # GIVEN a python script that transfers to *REL_FAILURE* and an ExecutePythonProcessor set up with a *"Script File"* attribute, but not "Script Body" WHEN the processor is triggered THEN any consumer using the *failure* relationship as source should receive the transferred data # GIVEN an ExecutePythonProcessor that is set up with a *"Script Body"* attribute that transfers to *REL_SUCCESS*, but not *"Script File"* WHEN the processor is triggered THEN any consumer using the *success* relationship as source should receive the transferred data # GIVEN an ExecutePythonProcessor that is set up with a *"Script Body"* attribute that transfers to *REL_FAILURE*, but not *"Script File"* WHEN the processor is triggered THEN any consumer using the *failure* relationship as source should receive the transferred data # GIVEN an ExecutePythonProcessor that is set up with both *"Script Body"* and *"Script File"* WHEN the processor is triggered THEN an error should be logged and neither the inline script nor the script body should be executed *Background:* We would like to start supporting inline python scripts in ExecutePythonProcessor. Someone already tried to add support for this option, but he did not finish the implementation. *Proposal:* The related dead code was removed in [PR-768|https://github.com/apache/nifi-minifi-cpp/pull/768]. The implementation can be started by reverting this commit. Upon implementation, please check if failures are handled properly and add tests for this branch as well.
> Implement inline-script support for ExecutePythonProcessor > -- > > Key: MINIFICPP-1206 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1206 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Fix For: 1.0.0 > > > *Acceptance criteria:* > # GIVEN a python script that transfers to *REL_FAILURE* and an > ExecutePythonProcessor set up with a *"Script File"* attribute, but not with > *"Script Body"* > WHEN the processor is triggered > THEN any consumer using the *failure* relationship as source should receive > the transferred data > # GIVEN an ExecutePythonProcessor set up with a *"Script Body"* attribute > that transfers to *REL_SUCCESS*, but not *"Script File"* > WHEN the processor is triggered > THEN any consumer using the *success* relationship as source should receive > the transferred data > # GIVEN an ExecutePythonProcessor set up with a *"Script Body"* attribute > that transfers to *REL_FAILURE*, but not *"Script File"* > WHEN the processor is triggered > THEN any consumer using the *failure* relationship as source should receive > the transferred data > # GIVEN an ExecutePythonProcessor set up with both *"Script Body"* and > *"Script File"* > WHEN the processor is triggered > THEN an error should be logged and neither the inline script nor the script > body should be executed > *Background:* > We would like to start supporting inline python scripts in > ExecutePythonProcessor. Someone already tried to add support for this option, > but the implementation was never finished. > *Proposal:* > The related dead code was removed in > [PR-768|https://github.com/apache/nifi-minifi-cpp/pull/768]. The > implementation can be started by reverting this commit. Upon implementation, > please check if failures are handled properly and add tests for this branch > as well. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1206) Implement inline-script support for ExecutePythonProcessor
[ https://issues.apache.org/jira/browse/MINIFICPP-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1206: Description: *Acceptance criteria:* # GIVEN a python script that transfers to *REL_FAILURE* and an ExecutePythonProcessor set up with a *"Script File"* attribute, but not with *"Script Body"* WHEN the processor is triggered THEN any consumer using the *failure* relationship as source should receive the transferred data # GIVEN an ExecutePythonProcessor set up with a *"Script Body"* attribute that transfers to *REL_SUCCESS*, but not *"Script File"* WHEN the processor is triggered THEN any consumer using the *success* relationship as source should receive the transferred data # GIVEN an ExecutePythonProcessor set up with a *"Script Body"* attribute that transfers to *REL_FAILURE*, but not *"Script File"* WHEN the processor is triggered THEN any consumer using the *failure* relationship as source should receive the transferred data # GIVEN an ExecutePythonProcessor set up with both *"Script Body"* and *"Script File"* WHEN the processor is triggered THEN an error should be logged and neither the inline script nor the script body should be executed *Background:* We would like to start supporting inline python scripts in ExecutePythonProcessor. Someone already tried to add support for this option, but the implementation was never finished. *Proposal:* The related dead code was removed in [PR-768|https://github.com/apache/nifi-minifi-cpp/pull/768]. The implementation can be started by reverting this commit. Upon implementation, please check if failures are handled properly and add tests for this branch as well.
was: *Acceptance criteria:* # GIVEN a python script that transfers to *REL_FAILURE* and an ExecutePythonProcessor set up with a *"Script File"* attribute, but not with *"Script Body"* WHEN the processor is triggered THEN any consumer using the *failure* relationship as source should receive the transferred data # GIVEN an ExecutePythonProcessor set up with a *"Script Body"* attribute that transfers to *REL_SUCCESS*, but not *"Script File"* WHEN the processor is triggered THEN any consumer using the *success* relationship as source should receive the transferred data # GIVEN an ExecutePythonProcessor set up with a *"Script Body"* attribute that transfers to *REL_FAILURE*, but not *"Script File"* WHEN the processor is triggered THEN any consumer using the *failure* relationship as source should receive the transferred data # GIVEN an ExecutePythonProcessor set up with both *"Script Body"* and *"Script File"* WHEN the processor is triggered THEN an error should be logged and neither the inline script nor the script body should be executed *Background:* We would like to start supporting inline python scripts in ExecutePythonProcessor. Someone already tried to add support for this option, but he did not finish the implementation. *Proposal:* The related dead code was removed in [PR-768|https://github.com/apache/nifi-minifi-cpp/pull/768]. The implementation can be started by reverting this commit. Upon implementation, please check if failures are handled properly and add tests for this branch as well.
> Implement inline-script support for ExecutePythonProcessor > -- > > Key: MINIFICPP-1206 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1206 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Fix For: 1.0.0 > > > *Acceptance criteria:* > # GIVEN a python script that transfers to *REL_FAILURE* and an > ExecutePythonProcessor set up with a *"Script File"* attribute, but not with > *"Script Body"* > WHEN the processor is triggered > THEN any consumer using the *failure* relationship as source should receive > the transferred data > # GIVEN an ExecutePythonProcessor set up with a *"Script Body"* attribute > that transfers to *REL_SUCCESS*, but not *"Script File"* > WHEN the processor is triggered > THEN any consumer using the *success* relationship as source should receive > the transferred data > # GIVEN an ExecutePythonProcessor set up with a *"Script Body"* attribute > that transfers to *REL_FAILURE*, but not *"Script File"* > WHEN the processor is triggered > THEN any consumer using the *failure* relationship as source should receive > the transferred data > # GIVEN an ExecutePythonProcessor set up with both *"Script Body"* and > *"Script File"* > WHEN the processor is triggered > THEN an error should be logged and neither the inline script nor the script > body should be executed > *Background:* > We would like to start supporting inline python scripts in > ExecutePythonProcessor. Someone already tried to add support for this option, > but the implementation was never finished. > *Proposal:* > The related dead code was removed in > [PR-768|https://github.com/apache/nifi-minifi-cpp/pull/768]. The > implementation can be started by reverting this commit. Upon implementation, > please check if failures are handled properly and add tests for this branch > as well. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (MINIFICPP-1202) C2Agent uses vector instead of queue to handle C2 requests/responses.
[ https://issues.apache.org/jira/browse/MINIFICPP-1202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi reassigned MINIFICPP-1202: --- Assignee: Adam Hunyadi > C2Agent uses vector instead of queue to handle C2 requests/responses. > - > > Key: MINIFICPP-1202 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1202 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Murtuza Shareef >Assignee: Adam Hunyadi >Priority: Minor > > C2Agent uses std::vector to queue up C2 requests and responses and uses it > much like a stack when popping items out of it. > Ideally we would want to use a queue or a thread-safe concurrent structure > with queue semantics. -- This message was sent by Atlassian Jira (v8.3.4#803005)
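The practical difference the issue above describes: popping from the back of a vector serves the most recent request first (LIFO), while queue semantics preserve arrival order (FIFO). A small Python sketch of the two draining orders (illustrative only; the agent itself is C++, where `std::queue` or a concurrent queue would be the fix):

```python
from queue import Queue  # thread-safe FIFO from the standard library

requests = ["req1", "req2", "req3"]  # hypothetical C2 requests, in arrival order

# Stack-like draining (what popping from the back of a vector does): LIFO order.
stack = list(requests)
lifo_order = [stack.pop() for _ in range(len(stack))]

# Queue draining: requests are handled in the order they arrived.
fifo = Queue()
for r in requests:
    fifo.put(r)
fifo_order = [fifo.get() for _ in range(len(requests))]

# lifo_order reverses the arrival order; fifo_order preserves it.
```

`queue.Queue` here also stands in for the "thread safe concurrent structure with queue semantics" the ticket asks for, since its `put`/`get` are internally locked.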
[jira] [Resolved] (MINIFICPP-1204) Correct property-relationship mix-ups in Processors.md
[ https://issues.apache.org/jira/browse/MINIFICPP-1204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi resolved MINIFICPP-1204. - Resolution: Fixed > Correct property-relationship mix-ups in Processors.md > -- > > Key: MINIFICPP-1204 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1204 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Trivial > Fix For: 0.8.0 > > Attachments: Screenshot 2020-04-23 at 10.22.34.png > > Time Spent: 10m > Remaining Estimate: 0h > > *Background:* > Processors.md incorrectly shows two sets of properties for each of the > processors, e.g.: > !Screenshot 2020-04-23 at 10.22.34.png|width=678,height=380! > *Proposal:* > The second table, titled "Properties", is expected to have the title > "Relationships" instead. Review the documentation and check whether the words > "properties" and "relationships" are used correctly. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (MINIFICPP-1205) Correct documentation and dead code for ExecutePythonProcessor
[ https://issues.apache.org/jira/browse/MINIFICPP-1205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi resolved MINIFICPP-1205. - Resolution: Fixed > Correct documentation and dead code for ExecutePythonProcessor > -- > > Key: MINIFICPP-1205 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1205 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Trivial > Fix For: 0.8.0 > > Attachments: Screenshot 2020-04-22 at 13.29.06.png, Screenshot > 2020-04-22 at 13.38.06.png > > Time Spent: 20m > Remaining Estimate: 0h > > *Background:* > Both the documentation and the property description suggest that "Script > Body" can be used to specify inline python scripts for processors. However, > this option does not seem to be supported. > !Screenshot 2020-04-22 at 13.29.06.png|width=718,height=151! > !Screenshot 2020-04-22 at 13.38.06.png|width=709,height=145! > > *Proposal:* > Remove mention of the "Script Body" option from the documentation and > property description. > > *Update:* > There is also some dead code clearly linked to someone trying to implement > this feature; it is also expected to be removed. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1222) Rework engine queue for ExecutePythonProcessor
[ https://issues.apache.org/jira/browse/MINIFICPP-1222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1222: Description: *Background:* The ExecutePythonProcessor is expected to have a limit on the maximum number of concurrent tasks it can run. This {{{color:#403294}{{max_concurrent_tasks}}{color}}} number is currently handled incorrectly: the processor is able to run any number of tasks at the same time, but only {{{color:#403294}{{max_concurrent_tasks}}{color}}} script engines are reused. *Proposal*: We should use {{{color:#403294}{{max_concurrent_tasks}}{color}}} as the size of the script-engine pool for the processor and always use engines from this pool, waiting otherwise. was: *Background:* The ExecutePythonProcessor is expected to have a limit on the maximal number of concurrent tasks it can run. This {{{color:#403294}{{max_concurrent_tasks}}{color}}} number is currently handled incorrectly, and the processor is able to run any number of tasks at the same time, but only {{{color:#403294}{{max_concurrent_tasks}}{color}}} script-engine is reused. *Proposal*: **We should use {{{color:#403294}{{max_concurrent_tasks}}{color}}} as the size of script-engine pool for the processor and always use engines from this pool, wait otherwise. > Rework engine queue for ExecutePythonProcessor > -- > > Key: MINIFICPP-1222 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1222 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Fix For: 1.0.0 > > > *Background:* > The ExecutePythonProcessor is expected to have a limit on the maximum number > of concurrent tasks it can run. This > {{{color:#403294}{{max_concurrent_tasks}}{color}}} number is currently > handled incorrectly: the processor is able to run any number of tasks at > the same time, but only {{{color:#403294}{{max_concurrent_tasks}}{color}}} > script engines are reused.
> *Proposal*: > We should use {{{color:#403294}{{max_concurrent_tasks}}{color}}} as the size > of the script-engine pool for the processor and always use engines from this > pool, waiting otherwise. -- This message was sent by Atlassian Jira (v8.3.4#803005)
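The proposal above is a bounded object pool: engines are checked out before a task runs and returned afterwards, so a caller blocks whenever all engines are in use, which caps concurrency at the pool size. A hedged Python sketch under those assumptions (the real pool is C++; `EnginePool` and `make_engine` are illustrative names, not the MiNiFi implementation):

```python
from queue import Queue


class EnginePool:
    """Hold at most max_concurrent_tasks engines; acquire() blocks when empty."""

    def __init__(self, max_concurrent_tasks, make_engine):
        self._pool = Queue(maxsize=max_concurrent_tasks)
        # Pre-populate the pool so exactly max_concurrent_tasks engines exist.
        for _ in range(max_concurrent_tasks):
            self._pool.put(make_engine())

    def acquire(self, timeout=None):
        # Blocks until an engine is free, bounding concurrency by pool size.
        return self._pool.get(timeout=timeout)

    def release(self, engine):
        # Return the engine so a waiting task can reuse it.
        self._pool.put(engine)
```

Because `queue.Queue` is internally locked, `acquire`/`release` are safe to call from the processor's worker threads without extra synchronization.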
[jira] [Created] (MINIFICPP-1222) Rework engine queue for ExecutePythonProcessor
Adam Hunyadi created MINIFICPP-1222: --- Summary: Rework engine queue for ExecutePythonProcessor Key: MINIFICPP-1222 URL: https://issues.apache.org/jira/browse/MINIFICPP-1222 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 1.0.0 *Background:* The ExecutePythonProcessor is expected to have a limit on the maximum number of concurrent tasks it can run. This {{{color:#403294}{{max_concurrent_tasks}}{color}}} number is currently handled incorrectly: the processor is able to run any number of tasks at the same time, but only {{{color:#403294}{{max_concurrent_tasks}}{color}}} script engines are reused. *Proposal*: We should use {{{color:#403294}{{max_concurrent_tasks}}{color}}} as the size of the script-engine pool for the processor and always use engines from this pool, waiting otherwise. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1223) Stop reloading script files every time ExecutePythonProcessor is triggered
Adam Hunyadi created MINIFICPP-1223: --- Summary: Stop reloading script files every time ExecutePythonProcessor is triggered Key: MINIFICPP-1223 URL: https://issues.apache.org/jira/browse/MINIFICPP-1223 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 1.0.0 *Background:* For backward compatibility, we kept the behaviour of reading the script file every time the processor is triggered intact. *Proposal:* We would like to change this with the first major release. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1224) Implement runtime module-directory extension for ExecutePythonScript
Adam Hunyadi created MINIFICPP-1224: --- Summary: Implement runtime module-directory extension for ExecutePythonScript Key: MINIFICPP-1224 URL: https://issues.apache.org/jira/browse/MINIFICPP-1224 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: 0.7.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Attachments: Screenshot 2020-05-13 at 16.49.35.png *Background:* The runtime module-directory extension should be a convenience feature only, since one could also extend the module path from the python code itself. It should be possible to access {{{color:#403294}{{sys.path}}{color}}} from the cpp wrapper like this: [https://pybind11.readthedocs.io/en/stable/advanced/embedding.html#importing-modules] !Screenshot 2020-05-13 at 16.49.35.png|width=631,height=142! Calling this code before the script engines start causes a crash, even under the GIL. *Proposal:* Extend the python script engine with an interface that handles module imports and call it before any {{{color:#403294}{{eval}}{color}}} call happens. -- This message was sent by Atlassian Jira (v8.3.4#803005)
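On the Python side, the module-directory extension the issue above describes amounts to appending the configured directories to {{sys.path}} before the script's first evaluation, which is what the linked pybind11 embedding docs do from C++. A hedged pure-Python sketch of that step (the function name `extend_module_paths` is illustrative, not the MiNiFi API):

```python
import importlib
import os
import sys
import tempfile


def extend_module_paths(directories):
    """Append each module directory to sys.path once, before any script runs."""
    for d in directories:
        if d not in sys.path:
            sys.path.append(d)


# Demo: create a throwaway module directory and import a module from it.
module_dir = tempfile.mkdtemp()
with open(os.path.join(module_dir, "my_helper.py"), "w") as f:
    f.write("VALUE = 42\n")  # trivial module body for the demo

extend_module_paths([module_dir])
my_helper = importlib.import_module("my_helper")  # resolvable only after the path extension
```

In the embedded C++ case the equivalent `sys.path` manipulation must happen after the interpreter and script engines are initialized, per the crash noted above.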
[jira] [Created] (MINIFICPP-1240) Check last modified timestamp of the inode before re-reading the script file of ExecutePythonScript
Adam Hunyadi created MINIFICPP-1240: --- Summary: Check last modified timestamp of the inode before re-reading the script file of ExecutePythonScript Key: MINIFICPP-1240 URL: https://issues.apache.org/jira/browse/MINIFICPP-1240 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: 0.7.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 1.0.0 As an optimization we may want to check the last modified timestamp of the inode before reading the file. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1240) Check last modified timestamp of the inode before re-reading the script file of ExecutePythonScript
[ https://issues.apache.org/jira/browse/MINIFICPP-1240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1240: Description: *Acceptance criteria:* *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow *WHEN* The ExecutePythonScript is run twice without any update on the script file in between *THEN* There should be no log line stating that the script was reloaded *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow *WHEN* The ExecutePythonScript is run twice without a change in the script file content *THEN* On the second execution, the new script file should be executed and a re-read should be logged *Background:* The ExecutePythonScriptProcessor currently re-reads the script file every time it is on schedule. This is suboptimal. *Proposal:* As an optimization we may want to check the last modified timestamp of the inode before reading the file. was: As an optimization we may want to check the last modified timestamp of the inode before reading the file.
> Check last modified timestamp of the inode before re-reading the script file > of ExecutePythonScript > --- > > Key: MINIFICPP-1240 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1240 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Fix For: 1.0.0 > > > *Acceptance criteria:* > *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice without any update on the script > file in between > *THEN* There should be no log line stating that the script was reloaded > *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice without a change in the script > file content > *THEN* On the second execution, the new script file should be executed and a > re-read should be logged > *Background:* > The ExecutePythonScriptProcessor currently re-reads the script file every > time it is on schedule. This is suboptimal. > > *Proposal:* > As an optimization we may want to check the last modified timestamp of the > inode before reading the file. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1240) Check last modified timestamp of the inode before re-reading the script file of ExecutePythonScript
[ https://issues.apache.org/jira/browse/MINIFICPP-1240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1240: Description: *Acceptance criteria:* *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *without* any update on the script file in between *THEN* There should be no log line stating that the script was reloaded *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *with* an update on the script file in between *THEN* On the second execution, the new script file should be executed and a re-read should be logged *Background:* The ExecutePythonScriptProcessor currently re-reads the script file every time it is on schedule. This is suboptimal. *Proposal:* As an optimization we may want to check the last modified timestamp of the inode before reading the file. was: *Acceptance criteria:* *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow *WHEN* The ExecutePythonScript is run twice without any update on the script file in between *THEN* There should be no log line stating that the script was reloaded *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow *WHEN* The ExecutePythonScript is run twice without a change in the script file content *THEN* On the second execution, the new script file should be executed and a re-read should be logged *Background:* The ExecutePythonScriptProcessor currently re-reads the script file every time it is on schedule. This is suboptimal. *Proposal:* As an optimization we may want to check the last modified timestamp of the inode before reading the file. 
> Check last modified timestamp of the inode before re-reading the script file > of ExecutePythonScript > --- > > Key: MINIFICPP-1240 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1240 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Fix For: 1.0.0 > > > *Acceptance criteria:* > *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *without* any update on the > script file in between > *THEN* There should be no log line stating that the script was reloaded > *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *with* an update on the script > file in between > *THEN* On the second execution, the new script file should be executed and a > re-read should be logged > *Background:* > The ExecutePythonScriptProcessor currently re-reads the script file every time > it is on schedule. This is suboptimal. > > *Proposal:* > As an optimization we may want to check the last modified timestamp of the > inode before reading the file. -- This message was sent by Atlassian Jira (v8.3.4#803005)
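The proposed optimization amounts to a small cache keyed on the script file's last-modified timestamp: re-read only when the timestamp changes. A minimal sketch of the idea, assuming C++17's std::filesystem; ScriptCache, get, and reload_count are illustrative names, not the actual MiNiFi C++ processor API:

```cpp
#include <filesystem>
#include <fstream>
#include <optional>
#include <sstream>
#include <string>

class ScriptCache {
 public:
  int reload_count = 0;  // exposed only so the example can show how often a re-read happened

  // Returns the script content, re-reading the file only when its
  // last_write_time differs from the value seen on the previous call.
  const std::string& get(const std::filesystem::path& path) {
    const auto mtime = std::filesystem::last_write_time(path);
    if (!last_mtime_ || *last_mtime_ != mtime) {
      std::ifstream in(path);
      std::ostringstream buf;
      buf << in.rdbuf();
      content_ = buf.str();
      last_mtime_ = mtime;
      ++reload_count;  // a real processor would emit the "script reloaded" log line here
    }
    return content_;
  }

 private:
  std::optional<std::filesystem::file_time_type> last_mtime_;
  std::string content_;
};
```

With this shape, two consecutive onTrigger calls without a file update read the file once, which is exactly what the first acceptance criterion checks for via the absence of a reload log line.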
[jira] [Updated] (MINIFICPP-1223) Stop reloading script files every time ExecutePythonProcessor is triggered
[ https://issues.apache.org/jira/browse/MINIFICPP-1223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1223: Description: *Acceptance criteria:* *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *with* an update on the script file in between *THEN* On the second execution the behaviour of the ExecuteScriptProcessor should not change *Background:* For backward compatibility, we went for keeping the behaviour of reading the script file every time the processor is triggered intact. *Proposal:* We would like to change this with the first major release. was: *Acceptance criteria:* ***GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *with* an update on the script file in between *THEN* On the second execution the behaviour of the ExecuteScriptProcessor should not change *Background:* For backward compatibility, we went for keeping the behaviour of reading the script file every time the processor is triggered intact. *Proposal:* We would like to change this with the first major release. > Stop reloading script files every time ExecutePythonProcessor is triggered > -- > > Key: MINIFICPP-1223 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1223 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Fix For: 1.0.0 > > > *Acceptance criteria:* > *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *with* an update on the script > file in between > *THEN* On the second execution the behaviour of the ExecuteScriptProcessor > should not change > *Background:* > For backward compatibility, we went for keeping the behaviour of reading the > script file every time the processor is triggered intact. > *Proposal:* > We would like to change this with the first major release. 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1223) Stop reloading script files every time ExecutePythonProcessor is triggered
[ https://issues.apache.org/jira/browse/MINIFICPP-1223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1223: Description: *Acceptance criteria:* ***GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *with* an update on the script file in between *THEN* On the second execution the behaviour of the ExecuteScriptProcessor should not change *Background:* For backward compatibility, we went for keeping the behaviour of reading the script file every time the processor is triggered intact. *Proposal:* We would like to change this with the first major release. was: *Background:* For backward compatibility, we went for keeping the behaviour of reading the script file every time the processor is triggered intact. *Proposal:* We would like to change this with the first major release. > Stop reloading script files every time ExecutePythonProcessor is triggered > -- > > Key: MINIFICPP-1223 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1223 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Fix For: 1.0.0 > > > *Acceptance criteria:* > ***GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *with* an update on the script > file in between > *THEN* On the second execution the behaviour of the ExecuteScriptProcessor > should not change > *Background:* > For backward compatibility, we went for keeping the behaviour of reading the > script file every time the processor is triggered intact. > *Proposal:* > We would like to change this with the first major release. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1241) Add cmake option for enforced diagnostic color
Adam Hunyadi created MINIFICPP-1241: --- Summary: Add cmake option for enforced diagnostic color Key: MINIFICPP-1241 URL: https://issues.apache.org/jira/browse/MINIFICPP-1241 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Reporter: Adam Hunyadi Assignee: Adam Hunyadi *Background:* By default, neither Clang nor GCC will add ANSI-formatted colors to your output if they detect the output medium is not a terminal. This means that colors are lost when using a generator other than "GNU Makefiles", for example Ninja. See: [https://medium.com/@alasher/colored-c-compiler-output-with-ninja-clang-gcc-10bfe7f2b949] *Proposal:* Add an option that enforces diagnostic colors for clang and gcc -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1241) Add cmake option for enforced diagnostic color
[ https://issues.apache.org/jira/browse/MINIFICPP-1241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1241: Priority: Trivial (was: Minor) > Add cmake option for enforced diagnostic color > -- > > Key: MINIFICPP-1241 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1241 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Trivial > > *Background:* > By default, neither Clang nor GCC will add ANSI-formatted colors to your > output if they detect the output medium is not a terminal. This means that > colors are lost when using a generator other than "GNU Makefiles", for example Ninja. See: > [https://medium.com/@alasher/colored-c-compiler-output-with-ninja-clang-gcc-10bfe7f2b949] > *Proposal:* > Add an option that enforces diagnostic colors for clang and gcc -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1241) Add cmake option for enforced diagnostic color
[ https://issues.apache.org/jira/browse/MINIFICPP-1241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1241: Description: *Background:* By default, neither Clang nor GCC will add ANSI-formatted colors to your output if they detect the output medium is not a terminal. This means that colors are lost when using a generator other than "GNU Makefiles", for example Ninja. See: [https://medium.com/@alasher/colored-c-compiler-output-with-ninja-clang-gcc-10bfe7f2b949] *Proposal:* Add an option that enforces diagnostic colors for clang and gcc was: *Background:* By default, neither Clang nor GCC will add ANSI-formatted colors to your output if they detect the output medium is not a terminal. This means that colors are lost when using a generator other than "GNU Makefiles", for example Ninja. See: [https://medium.com/@alasher/colored-c-compiler-output-with-ninja-clang-gcc-10bfe7f2b949] *Proposal:* Add an option that enforces diagnostic colors for clang and gcc > Add cmake option for enforced diagnostic color > -- > > Key: MINIFICPP-1241 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1241 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Trivial > > *Background:* > By default, neither Clang nor GCC will add ANSI-formatted colors to your > output if they detect the output medium is not a terminal. This means that > colors are lost when using a generator other than "GNU Makefiles", for example Ninja. > See: > [https://medium.com/@alasher/colored-c-compiler-output-with-ninja-clang-gcc-10bfe7f2b949] > *Proposal:* > Add an option that enforces diagnostic colors for clang and gcc -- This message was sent by Atlassian Jira (v8.3.4#803005)
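The proposal above follows the pattern from the linked article: force the compiler-specific color flag when a CMake option is set. A minimal sketch, assuming the option name FORCE_COLORED_OUTPUT (illustrative, not necessarily what landed in the MiNiFi C++ build files):

```cmake
# Forces ANSI-colored diagnostics even when the compiler's output is piped,
# e.g. under the Ninja generator. Off by default to keep current behaviour.
option(FORCE_COLORED_OUTPUT "Always produce ANSI-colored compiler output (GCC/Clang only)" OFF)

if(FORCE_COLORED_OUTPUT)
  if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
    add_compile_options(-fdiagnostics-color=always)   # GCC's flag
  elseif(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
    add_compile_options(-fcolor-diagnostics)          # Clang's flag
  endif()
endif()
```

The guard on CMAKE_CXX_COMPILER_ID matters: passing either flag to MSVC would fail, so the option is a no-op for other compilers.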
[jira] [Updated] (MINIFICPP-1223) Stop reloading script files every time ExecutePythonProcessor is triggered
[ https://issues.apache.org/jira/browse/MINIFICPP-1223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1223: Description: *Acceptance criteria:* *GIVEN* A Getfile -> ExecutePythonScript (with "Reload on Script Change" not set) -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *with* an update on the script file in between *THEN* On the second execution the behaviour of the ExecuteScriptProcessor should not change *GIVEN* A Getfile -> ExecutePythonScript (with "Reload on Script Change" disabled) -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *with* an update on the script file in between *THEN* On the second execution the behaviour of the ExecuteScriptProcessor should not change *GIVEN* A Getfile -> ExecutePythonScript (with "Reload on Script Change" enabled) -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *with* an update on the script file in between *THEN* On the second execution the behaviour of the ExecuteScriptProcessor should follow the updated script *Background:* For backward compatibility, we went for keeping the behaviour of reading the script file every time the processor is triggered intact. *Proposal:* We would like to add an option called *"Reload on Script Change"* to toggle this with the first major release. was: *Acceptance criteria:* *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *with* an update on the script file in between *THEN* On the second execution the behaviour of the ExecuteScriptProcessor should not change *Background:* For backward compatibility, we went for keeping the behaviour of reading the script file every time the processor is triggered intact. *Proposal:* We would like to change this with the first major release. 
> Stop reloading script files every time ExecutePythonProcessor is triggered > -- > > Key: MINIFICPP-1223 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1223 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Fix For: 1.0.0 > > > *Acceptance criteria:* > *GIVEN* A Getfile -> ExecutePythonScript (with "Reload on Script Change" not > set) -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *with* an update on the script > file in between > *THEN* On the second execution the behaviour of the ExecuteScriptProcessor > should not change > *GIVEN* A Getfile -> ExecutePythonScript (with "Reload on Script Change" > disabled) -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *with* an update on the script > file in between > *THEN* On the second execution the behaviour of the ExecuteScriptProcessor > should not change > *GIVEN* A Getfile -> ExecutePythonScript (with "Reload on Script Change" > enabled) -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *with* an update on the script > file in between > *THEN* On the second execution the behaviour of the ExecuteScriptProcessor > should follow the updated script > *Background:* > For backward compatibility, we went for keeping the behaviour of reading the > script file every time the processor is triggered intact. > *Proposal:* > We would like to add an option called *"Reload on Script Change"* to toggle > this with the first major release. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1240) Check last modified timestamp of the inode before re-reading the script file of ExecutePythonScript
[ https://issues.apache.org/jira/browse/MINIFICPP-1240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1240: Description: *Acceptance criteria:* *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *without* any update on the script file in between *THEN* There should be no log line stating that the script was reloaded *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *with* an update on the script file in between *THEN* On the second execution, the new script file should be executed and a re-read should be logged *Background:* The ExecutePythonScriptProcessor currently re-reads the script file every time it is on schedule. This is suboptimal. *Proposal:* As an optimization we may want to check the last modified timestamp of the inode before reading the file. was: *Acceptance criteria:* *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *without* any update on the script file in between *THEN* There should be no log line stating that the script was reloaded *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *with* an update on the script file in between *THEN* On the second execution, the new script file should be executed and a re-read should be logged *Background:* The ExecutePythonScriptProcessor currently re-reads the script file every time it is on schedule. This is suboptimal. *Proposal:* As an optimization we may want to check the last modified timestamp of the inode before reading the file. 
> Check last modified timestamp of the inode before re-reading the script file > of ExecutePythonScript > --- > > Key: MINIFICPP-1240 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1240 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Fix For: 1.0.0 > > > *Acceptance criteria:* > *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *without* any update on the > script file in between > *THEN* There should be no log line stating that the script was reloaded > *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *with* an update on the script > file in between > *THEN* On the second execution, the new script file should be executed and a > re-read should be logged > *Background:* > The ExecutePythonScriptProcessor currently re-reads the script file every time > it is on schedule. This is suboptimal. > *Proposal:* > As an optimization we may want to check the last modified timestamp of the > inode before reading the file. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (MINIFICPP-1250) Remove C2Callback.h
Adam Hunyadi created MINIFICPP-1250: --- Summary: Remove C2Callback.h Key: MINIFICPP-1250 URL: https://issues.apache.org/jira/browse/MINIFICPP-1250 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Affects Versions: 0.7.0 Reporter: Adam Hunyadi Assignee: Adam Hunyadi Fix For: 0.8.0 *Background:* C2Callback.h is a piece of code that is never referenced (it is not even syntactically correct). *Proposal:* We should remove it from the codebase. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (MINIFICPP-1202) C2Agent uses vector instead of queue to handle C2 requests/responses.
[ https://issues.apache.org/jira/browse/MINIFICPP-1202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi resolved MINIFICPP-1202. - Resolution: Fixed > C2Agent uses vector instead of queue to handle C2 requests/responses. > - > > Key: MINIFICPP-1202 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1202 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Murtuza Shareef >Assignee: Adam Hunyadi >Priority: Minor > Time Spent: 12h 50m > Remaining Estimate: 0h > > C2Agent uses std::vector to queue up C2 requests and responses and uses it > much like a stack when popping items out of them. > Ideally we would want to use a queue or a thread safe concurrent structure > with queue semantics. -- This message was sent by Atlassian Jira (v8.3.4#803005)
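The queue-semantics idea from the ticket can be sketched as a minimal mutex-guarded FIFO wrapper around std::queue; this is an illustration of the structure the ticket asks for, not the actual C2Agent code, and the class name ConcurrentQueue is an assumption:

```cpp
#include <mutex>
#include <optional>
#include <queue>
#include <string>

// FIFO with a mutex around every access. Popping from the back of a
// std::vector gives LIFO (stack) order instead, which is the behaviour
// the ticket fixes: responses would be handled newest-first.
template <typename T>
class ConcurrentQueue {
 public:
  void enqueue(T item) {
    std::lock_guard<std::mutex> lock(mtx_);
    q_.push(std::move(item));
  }

  // Returns std::nullopt when the queue is empty instead of blocking.
  std::optional<T> dequeue() {
    std::lock_guard<std::mutex> lock(mtx_);
    if (q_.empty()) return std::nullopt;
    T item = std::move(q_.front());
    q_.pop();
    return item;
  }

 private:
  std::mutex mtx_;
  std::queue<T> q_;
};
```

Items come out in the order they were enqueued, so a burst of C2 requests is processed oldest-first rather than newest-first.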
[jira] [Created] (MINIFICPP-1251) Implement RetryFlowFile Processor
Adam Hunyadi created MINIFICPP-1251: --- Summary: Implement RetryFlowFile Processor Key: MINIFICPP-1251 URL: https://issues.apache.org/jira/browse/MINIFICPP-1251 Project: Apache NiFi MiNiFi C++ Issue Type: New Feature Reporter: Adam Hunyadi Assignee: Adam Hunyadi *Background:* NiFi already has a RetryFlowFile processor, as documented here: [[Apache documentation for RetryFlowFile]|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.11.4/org.apache.nifi.processors.standard.RetryFlowFile/index.html] *Proposal:* As this is important logic for creating flows, we should port this functionality to MiNiFi as well. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1251) Implement RetryFlowFile Processor
[ https://issues.apache.org/jira/browse/MINIFICPP-1251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1251: Description: *Acceptance criteria (edit in progress):* # GIVEN ** WHEN THEN # GIVEN ** WHEN THEN # GIVEN ** WHEN THEN # GIVEN ** WHEN THEN *Background:* NiFi already has a RetryFlowFile processor, as documented here: [[Apache documentation for RetryFlowFile]|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.11.4/org.apache.nifi.processors.standard.RetryFlowFile/index.html] *Proposal:* As this is important logic for creating flows, we should port this functionality to MiNiFi as well. was: *Background:* NiFi already has a RetryFlowFile processor, as documented here: [[Apache documentation for RetryFlowFile]|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.11.4/org.apache.nifi.processors.standard.RetryFlowFile/index.html] *Proposal:* As this is important logic for creating flows, we should port this functionality to MiNiFi as well. > Implement RetryFlowFile Processor > - > > Key: MINIFICPP-1251 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1251 > Project: Apache NiFi MiNiFi C++ > Issue Type: New Feature >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > > *Acceptance criteria (edit in progress):* > # GIVEN ** > WHEN > THEN > # GIVEN ** > WHEN > THEN > # GIVEN ** > WHEN > THEN > # GIVEN ** > WHEN > THEN > > *Background:* > NiFi already has a RetryFlowFile processor, as documented here: > [[Apache documentation for > RetryFlowFile]|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.11.4/org.apache.nifi.processors.standard.RetryFlowFile/index.html] > > *Proposal:* > As this is important logic for creating flows, we should port this > functionality to MiNiFi as well. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1251) Implement RetryFlowFile Processor
[ https://issues.apache.org/jira/browse/MINIFICPP-1251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1251: Description: *Acceptance criteria (edit in progress):* Planned flow to use: *GetFile(s)* => *PutFile* (with *Conflict Resolution Strategy* set to *Fail*) => *RetryFlowFile* Using this conflict resolution strategy on PutFile means that we can make it fail by trying to make it write a file that already exists # GIVEN ** WHEN THEN # GIVEN ** WHEN THEN # GIVEN ** WHEN THEN # GIVEN ** WHEN THEN *Background:* NiFi already has a RetryFlowFile processor, as documented here: [[Apache documentation for RetryFlowFile]|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.11.4/org.apache.nifi.processors.standard.RetryFlowFile/index.html] *Proposal:* As this is important logic for creating flows, we should port this functionality to MiNiFi as well. was: *Acceptance criteria (edit in progress):* # GIVEN ** WHEN THEN # GIVEN ** WHEN THEN # GIVEN ** WHEN THEN # GIVEN ** WHEN THEN *Background:* NiFi already has a RetryFlowFile processor, as documented here: [[Apache documentation for RetryFlowFile]|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.11.4/org.apache.nifi.processors.standard.RetryFlowFile/index.html] *Proposal:* As this is important logic for creating flows, we should port this functionality to MiNiFi as well. 
> Implement RetryFlowFile Processor > - > > Key: MINIFICPP-1251 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1251 > Project: Apache NiFi MiNiFi C++ > Issue Type: New Feature >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > > *Acceptance criteria (edit in progress):* > Planned flow to use: > *GetFile(s)* => *PutFile* (with *Conflict Resolution Strategy* set to *Fail*) > => *RetryFlowFile* > Using this conflict resolution strategy on PutFile means that we can > make it fail by trying to make it write a file that already exists > # GIVEN ** > WHEN > THEN > # GIVEN ** > WHEN > THEN > # GIVEN ** > WHEN > THEN > # GIVEN ** > WHEN > THEN > > *Background:* > NiFi already has a RetryFlowFile processor, as documented here: > [[Apache documentation for > RetryFlowFile]|http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.11.4/org.apache.nifi.processors.standard.RetryFlowFile/index.html] > > *Proposal:* > As this is important logic for creating flows, we should port this > functionality to MiNiFi as well. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (MINIFICPP-1251) Implement RetryFlowFile Processor
[ https://issues.apache.org/jira/browse/MINIFICPP-1251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1251: Description: *Acceptance criteria (edit in progress):* Planned flow to use: *GetFile(s)* => *PutFile* (with *Conflict Resolution Strategy* set to *Fail*) => *RetryFlowFile* Using this conflict resolution strategy on PutFile means that we can make it fail by making it try writing a file that already exists
||Expected Retry Property Name||Expected Retry Property Value||Expected Outbound Relationship||Expect Penalty on Flowfile||Retry Attribute Value on FlowFile||Retry Attribute Value Before Processing||Maximum Retries||Penalize Retries||Fail on Non-numerical Overwrite||Reuse Mode||Processor UUID matches FlowFile UUID||
|*flowfile.retries*|*1*|*retry*|*TRUE*|(not set)|(not set)|(not set)|(not set)|(not set)|(not set)|n/a|
|*flowfile.retryCount*|*1*|*retry*|*TRUE*|flowfile.retryCount|(not set)|(not set)|(not set)|(not set)|(not set)|n/a|
|*flowfile.retries*|*2*|*retry*|*TRUE*|flowfile.retries|1|(not set)|(not set)|(not set)|(not set)|TRUE|
|*(property cleared)*|*(property cleared)*|*retries_exceeded*|*FALSE*|flowfile.retries|3|(not set)|(not set)|(not set)|(not set)|TRUE|
|*(property cleared)*|*(property cleared)*|*retries_exceeded*|*FALSE*|flowfile.retries|4|(not set)|(not set)|(not set)|(not set)|TRUE|
|*flowfile.retries*|*6*|*retry*|*TRUE*|flowfile.retries|5|6|(not set)|(not set)|(not set)|TRUE|
|*flowfile.retries*|*1*|*retry*|*TRUE*|flowfile.retries|2|(not set)|TRUE|(not set)|(not set)|TRUE|
|*(property cleared)*|*(property cleared)*|*retries_exceeded*|*FALSE*|flowfile.retries|3|(not set)|TRUE|(not set)|(not set)|TRUE|
|*flowfile.retries*|*1*|*retry*|*FALSE*|flowfile.retries|2|(not set)|FALSE|(not set)|(not set)|TRUE|
|*(property cleared)*|*(property cleared)*|*retries_exceeded*|*FALSE*|flowfile.retries|3|(not set)|FALSE|(not set)|(not set)|TRUE|
|*flowfile.retries*|*2*|*retry*|*TRUE*|flowfile.retries|1|(not set)|(not set)|TRUE|{color:#
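Most rows of the draft table above reduce to one decision rule: increment the retry attribute; while it does not exceed Maximum Retries (defaulting to 3, as in NiFi's RetryFlowFile), route to retry and optionally penalize; once exceeded, clear the property and route to retries_exceeded. A hypothetical distillation of that rule; the table is marked "edit in progress" and this sketch follows the documented NiFi semantics rather than reproducing every draft row, and decide/RetryDecision are illustrative names, not the processor's API:

```cpp
#include <optional>
#include <string>

// Outcome of one RetryFlowFile pass for a single flow file.
struct RetryDecision {
  std::string relationship;      // "retry" or "retries_exceeded"
  std::optional<int> new_value;  // std::nullopt => retry attribute cleared
  bool penalize;                 // whether the flow file gets penalized
};

// current: retry attribute value read from the flow file (nullopt if unset).
RetryDecision decide(std::optional<int> current,
                     int maximum_retries = 3,       // NiFi default
                     bool penalize_retries = true)  // NiFi default
{
  const int next = current.value_or(0) + 1;  // unset attribute counts as 0
  if (next > maximum_retries) {
    // Retries exhausted: clear the attribute, no penalty, route accordingly.
    return {"retries_exceeded", std::nullopt, false};
  }
  // Still under the limit: bump the counter and route back for another try.
  return {"retry", next, penalize_retries};
}
```

For example, an unset attribute yields the table's first row (value 1, relationship retry, penalty applied), and a prior value of 3 with the default maximum yields retries_exceeded with the property cleared.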