[jira] [Commented] (NIFI-4768) Add exclusion filters to SiteToSiteProvenanceReportingTask
[ https://issues.apache.org/jira/browse/NIFI-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323589#comment-16323589 ]

ASF GitHub Bot commented on NIFI-4768:
--
Github user ijokarumawak commented on the issue: https://github.com/apache/nifi/pull/2397

I confirmed that exclusion works as expected. The previous three comments are all I wanted updated, if you agree with them. If those are addressed, I'm +1 on this. Thanks!

> Add exclusion filters to SiteToSiteProvenanceReportingTask
> ----------------------------------------------------------
>
> Key: NIFI-4768
> URL: https://issues.apache.org/jira/browse/NIFI-4768
> Project: Apache NiFi
> Issue Type: Improvement
> Reporter: Matt Burgess
> Assignee: Matt Burgess
>
> Although the SiteToSiteProvenanceReportingTask has filters for which events,
> components, etc. to capture, it is an inclusive filter, meaning if a filter
> is set, only those entities' events will be sent. However, it would be useful
> to also have an exclusionary filter, in order to capture all events except a
> few.
> One particular use case is a sub-flow that processes provenance events, where
> the user would not want to process provenance events generated by components
> involved in the provenance-handling flow itself. In this fashion, for
> example, if the sub-flow is in a process group (PG), then the user could
> exclude the PG and the Input Port sending events to it, thereby allowing the
> sub-flow to process all other events except those involved with the
> provenance-handling flow itself.

-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi issue #2397: NIFI-4768: Add exclusion filters to S2SProvenanceReporting...
Github user ijokarumawak commented on the issue: https://github.com/apache/nifi/pull/2397

I confirmed that exclusion works as expected. The previous three comments are all I wanted updated, if you agree with them. If those are addressed, I'm +1 on this. Thanks!

---
[jira] [Commented] (NIFI-4768) Add exclusion filters to SiteToSiteProvenanceReportingTask
[ https://issues.apache.org/jira/browse/NIFI-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323490#comment-16323490 ]

ASF GitHub Bot commented on NIFI-4768:
--
Github user ijokarumawak commented on a diff in the pull request: https://github.com/apache/nifi/pull/2397#discussion_r161131114

--- Diff: nifi-nar-bundles/nifi-extension-utils/nifi-reporting-utils/src/main/java/org/apache/nifi/reporting/util/provenance/ProvenanceEventConsumer.java ---
@@ -89,16 +94,26 @@ public void setComponentTypeRegex(final String componentTypeRegex) {
         }
     }

-    public void addTargetEventType(final ProvenanceEventType... types) {
-        for (ProvenanceEventType type : types) {
-            eventTypes.add(type);
+    public void setComponentTypeRegexExclude(final String componentTypeRegex) {
+        if (!StringUtils.isBlank(componentTypeRegex)) {
+            this.componentTypeRegexExclude = Pattern.compile(componentTypeRegex);
         }
     }

+    public void addTargetEventType(final ProvenanceEventType... types) {
+        eventTypes.addAll(Arrays.asList(types));

--- End diff --

Trivial, but is there any reason not to use `Collections.addAll` here, as addTargetEventTypeExclude does?
[jira] [Commented] (NIFI-4768) Add exclusion filters to SiteToSiteProvenanceReportingTask
[ https://issues.apache.org/jira/browse/NIFI-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323488#comment-16323488 ]

ASF GitHub Bot commented on NIFI-4768:
--
Github user ijokarumawak commented on a diff in the pull request: https://github.com/apache/nifi/pull/2397#discussion_r161131729

--- Diff: nifi-nar-bundles/nifi-extension-utils/nifi-reporting-utils/src/main/java/org/apache/nifi/reporting/util/provenance/ProvenanceEventConsumer.java ---
@@ -235,6 +251,32 @@ private boolean isFilteringEnabled() {
         for (ProvenanceEventRecord provenanceEventRecord : provenanceEvents) {
             final String componentId = provenanceEventRecord.getComponentId();
+            if (!componentIdsExclude.isEmpty()) {
+                if (componentIdsExclude.contains(componentId)) {
+                    continue;
+                }
+                // If we aren't excluding it based on component ID, let's see if this component has a parent process group ID
+                // that is being excluded
+                if (componentMapHolder == null) {
+                    continue;
+                }
+                final String processGroupId = componentMapHolder.getProcessGroupId(componentId, provenanceEventRecord.getComponentType());
+                if (StringUtils.isEmpty(processGroupId)) {
+                    continue;

--- End diff --

Do we want to skip events if processGroupId is not found for one? We should probably apply the exclude rule only when a processGroupId is known, unlike the inclusion rules. Unknown should NOT be filtered out by exclusion rules, IMO.
[jira] [Commented] (NIFI-4768) Add exclusion filters to SiteToSiteProvenanceReportingTask
[ https://issues.apache.org/jira/browse/NIFI-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323489#comment-16323489 ]

ASF GitHub Bot commented on NIFI-4768:
--
Github user ijokarumawak commented on a diff in the pull request: https://github.com/apache/nifi/pull/2397#discussion_r161132714

--- Diff: nifi-nar-bundles/nifi-extension-utils/nifi-reporting-utils/src/main/java/org/apache/nifi/reporting/util/provenance/ProvenanceEventConsumer.java ---
@@ -256,9 +297,15 @@ private boolean isFilteringEnabled() {
             }
         }
     }
+    if (!eventTypesExclude.isEmpty() && eventTypesExclude.contains(provenanceEventRecord.getEventType())) {
+        continue;

--- End diff --

These two checks, `eventTypesExclude` and `eventTypes`, are the computationally cheapest conditions, so they should run at the beginning, before checking ProcessGroup hierarchies. What do you think?
[GitHub] nifi pull request #2397: NIFI-4768: Add exclusion filters to S2SProvenanceRe...
Github user ijokarumawak commented on a diff in the pull request: https://github.com/apache/nifi/pull/2397#discussion_r161131114

--- Diff: nifi-nar-bundles/nifi-extension-utils/nifi-reporting-utils/src/main/java/org/apache/nifi/reporting/util/provenance/ProvenanceEventConsumer.java ---
@@ -89,16 +94,26 @@ public void setComponentTypeRegex(final String componentTypeRegex) {
         }
     }

-    public void addTargetEventType(final ProvenanceEventType... types) {
-        for (ProvenanceEventType type : types) {
-            eventTypes.add(type);
+    public void setComponentTypeRegexExclude(final String componentTypeRegex) {
+        if (!StringUtils.isBlank(componentTypeRegex)) {
+            this.componentTypeRegexExclude = Pattern.compile(componentTypeRegex);
         }
     }

+    public void addTargetEventType(final ProvenanceEventType... types) {
+        eventTypes.addAll(Arrays.asList(types));

--- End diff --

Trivial, but is there any reason not to use `Collections.addAll` here, as addTargetEventTypeExclude does?

---
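For context on the review comment above, the two approaches produce the same set contents; `Collections.addAll` simply avoids allocating an intermediate `List` wrapper around the varargs array. A standalone sketch (the `EventType` enum and field names here are illustrative stand-ins, not the actual ProvenanceEventConsumer members):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.EnumSet;
import java.util.Set;

public class EventTypeDemo {
    // Illustrative stand-in for ProvenanceEventType
    enum EventType { CREATE, SEND, DROP }

    static final Set<EventType> viaArraysAsList = EnumSet.noneOf(EventType.class);
    static final Set<EventType> viaCollectionsAddAll = EnumSet.noneOf(EventType.class);

    static void addTargetEventType(EventType... types) {
        // Style used in the PR under review: wraps the varargs array in a List first
        viaArraysAsList.addAll(Arrays.asList(types));
    }

    static void addTargetEventTypeExclude(EventType... types) {
        // Style the reviewer suggests: adds the varargs elements directly, no wrapper List
        Collections.addAll(viaCollectionsAddAll, types);
    }

    public static void main(String[] args) {
        addTargetEventType(EventType.CREATE, EventType.SEND);
        addTargetEventTypeExclude(EventType.CREATE, EventType.SEND);
        // Both approaches yield the same set contents
        System.out.println(viaArraysAsList.equals(viaCollectionsAddAll)); // true
    }
}
```

The difference is purely stylistic plus one avoided allocation, which is why the reviewer flags it as trivial consistency rather than a bug.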
[GitHub] nifi pull request #2397: NIFI-4768: Add exclusion filters to S2SProvenanceRe...
Github user ijokarumawak commented on a diff in the pull request: https://github.com/apache/nifi/pull/2397#discussion_r161132714

--- Diff: nifi-nar-bundles/nifi-extension-utils/nifi-reporting-utils/src/main/java/org/apache/nifi/reporting/util/provenance/ProvenanceEventConsumer.java ---
@@ -256,9 +297,15 @@ private boolean isFilteringEnabled() {
             }
         }
     }
+    if (!eventTypesExclude.isEmpty() && eventTypesExclude.contains(provenanceEventRecord.getEventType())) {
+        continue;

--- End diff --

These two checks, `eventTypesExclude` and `eventTypes`, are the computationally cheapest conditions, so they should run at the beginning, before checking ProcessGroup hierarchies. What do you think?

---
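The reviewer's ordering point can be sketched with hypothetical names (this is not the actual ProvenanceEventConsumer code): set-membership tests on the event type are O(1) and allocation-free, so running them before any process-group hierarchy lookup lets most excluded events short-circuit cheaply.

```java
import java.util.List;
import java.util.Set;
import java.util.function.Function;

public class FilterOrderDemo {
    record Event(String type, String componentId) {}

    // Hypothetical filter loop: cheap event-type exclusion runs first,
    // the (potentially expensive) group lookup runs only for survivors.
    static long countAccepted(List<Event> events,
                              Set<String> eventTypesExclude,
                              Set<String> groupsExclude,
                              Function<String, String> groupLookup) {
        long accepted = 0;
        for (Event e : events) {
            // Cheap O(1) set-membership check first, as the reviewer suggests
            if (!eventTypesExclude.isEmpty() && eventTypesExclude.contains(e.type())) {
                continue;
            }
            // Expensive hierarchy lookup only when the cheap check passes
            String group = groupLookup.apply(e.componentId());
            if (group != null && groupsExclude.contains(group)) {
                continue;
            }
            accepted++;
        }
        return accepted;
    }

    public static void main(String[] args) {
        List<Event> events = List.of(
                new Event("DROP", "c1"),   // excluded by type; lookup never runs
                new Event("SEND", "c2"));  // passes both checks
        long n = countAccepted(events, Set.of("DROP"), Set.of(), id -> null);
        System.out.println(n); // prints 1
    }
}
```

Reordering the checks does not change which events survive, only how much work is spent on events that are rejected anyway.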
[GitHub] nifi pull request #2397: NIFI-4768: Add exclusion filters to S2SProvenanceRe...
Github user ijokarumawak commented on a diff in the pull request: https://github.com/apache/nifi/pull/2397#discussion_r161131729

--- Diff: nifi-nar-bundles/nifi-extension-utils/nifi-reporting-utils/src/main/java/org/apache/nifi/reporting/util/provenance/ProvenanceEventConsumer.java ---
@@ -235,6 +251,32 @@ private boolean isFilteringEnabled() {
         for (ProvenanceEventRecord provenanceEventRecord : provenanceEvents) {
             final String componentId = provenanceEventRecord.getComponentId();
+            if (!componentIdsExclude.isEmpty()) {
+                if (componentIdsExclude.contains(componentId)) {
+                    continue;
+                }
+                // If we aren't excluding it based on component ID, let's see if this component has a parent process group ID
+                // that is being excluded
+                if (componentMapHolder == null) {
+                    continue;
+                }
+                final String processGroupId = componentMapHolder.getProcessGroupId(componentId, provenanceEventRecord.getComponentType());
+                if (StringUtils.isEmpty(processGroupId)) {
+                    continue;

--- End diff --

Do we want to skip events if processGroupId is not found for one? We should probably apply the exclude rule only when a processGroupId is known, unlike the inclusion rules. Unknown should NOT be filtered out by exclusion rules, IMO.

---
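The asymmetry the reviewer describes can be sketched as follows (method and parameter names are hypothetical, not from the PR): an inclusion filter drops anything it cannot prove matches, while an exclusion filter should only drop what it can prove matches, so an event whose process group is unknown falls through the exclusion rule and is kept.

```java
import java.util.Set;

public class ExclusionDemo {
    // Hypothetical helper: decide whether a process-group exclusion rule
    // should drop an event when the group may be unknown (null/empty).
    static boolean excludedByGroup(String processGroupId, Set<String> groupsExclude) {
        // Unknown group: an exclusion rule should NOT fire, per the review comment
        if (processGroupId == null || processGroupId.isEmpty()) {
            return false;
        }
        return groupsExclude.contains(processGroupId);
    }

    public static void main(String[] args) {
        Set<String> exclude = Set.of("pg-1");
        System.out.println(excludedByGroup("pg-1", exclude)); // true: known and excluded
        System.out.println(excludedByGroup(null, exclude));   // false: unknown, event kept
    }
}
```

Under this reading, the `continue` flagged in the diff inverts the intended behavior for unknown groups: it skips (drops) the event instead of letting it through.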
[GitHub] nifi issue #2397: NIFI-4768: Add exclusion filters to S2SProvenanceReporting...
Github user ijokarumawak commented on the issue: https://github.com/apache/nifi/pull/2397 Reviewing... ---
[jira] [Commented] (NIFI-4768) Add exclusion filters to SiteToSiteProvenanceReportingTask
[ https://issues.apache.org/jira/browse/NIFI-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323461#comment-16323461 ]

ASF GitHub Bot commented on NIFI-4768:
--
Github user ijokarumawak commented on the issue: https://github.com/apache/nifi/pull/2397

Reviewing...
[jira] [Commented] (NIFIREG-117) Sorting Groups in the "Add user to groups" dialog is broken.
[ https://issues.apache.org/jira/browse/NIFIREG-117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323451#comment-16323451 ]

ASF GitHub Bot commented on NIFIREG-117:

GitHub user scottyaslan opened a pull request: https://github.com/apache/nifi-registry/pull/85

[NIFIREG-117] update sorting function name and add ellipsis to long text in sidenav tables

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/scottyaslan/nifi-registry NIFIREG-117

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi-registry/pull/85.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #85

commit 6085007168632fcda855de0f1a6939f1b271f448
Author: Scott Aslan
Date: 2018-01-12T02:32:29Z

    [NIFIREG-117] update sorting function name and add ellipsis to long text in sidenav tables

> Sorting Groups in the "Add user to groups" dialog is broken.
>
> Key: NIFIREG-117
> URL: https://issues.apache.org/jira/browse/NIFIREG-117
> Project: NiFi Registry
> Issue Type: Bug
> Affects Versions: 0.1.0
> Reporter: Scott Aslan
> Assignee: Scott Aslan
[GitHub] nifi-registry pull request #85: [NIFIREG-117] update sorting function name a...
GitHub user scottyaslan opened a pull request: https://github.com/apache/nifi-registry/pull/85

[NIFIREG-117] update sorting function name and add ellipsis to long text in sidenav tables

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/scottyaslan/nifi-registry NIFIREG-117

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi-registry/pull/85.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #85

commit 6085007168632fcda855de0f1a6939f1b271f448
Author: Scott Aslan
Date: 2018-01-12T02:32:29Z

    [NIFIREG-117] update sorting function name and add ellipsis to long text in sidenav tables

---
[jira] [Updated] (NIFIREG-117) Sorting Groups in the "Add user to groups" dialog is broken.
[ https://issues.apache.org/jira/browse/NIFIREG-117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Scott Aslan updated NIFIREG-117:

Issue Type: Bug (was: Improvement)
[jira] [Commented] (NIFIREG-116) Checkboxes in the New Bucket Policy dialog do not show as checked when clicking directly on the checkboxes.
[ https://issues.apache.org/jira/browse/NIFIREG-116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323443#comment-16323443 ]

ASF GitHub Bot commented on NIFIREG-116:

GitHub user scottyaslan opened a pull request: https://github.com/apache/nifi-registry/pull/84

[NIFIREG-116] update New Policy dialog checkbox UI/UX

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/scottyaslan/nifi-registry NIFIREG-116

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi-registry/pull/84.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #84

commit 2a78049d20eb4de06d56e4cf91317b60ec58b0b6
Author: Scott Aslan
Date: 2018-01-12T02:27:46Z

    [NIFIREG-116] update New Policy dialog checkbox UI/UX

> Checkboxes in the New Bucket Policy dialog do not show as checked when
> clicking directly on the checkboxes.
>
> Key: NIFIREG-116
> URL: https://issues.apache.org/jira/browse/NIFIREG-116
> Project: NiFi Registry
> Issue Type: Improvement
> Affects Versions: 0.1.0
> Reporter: Scott Aslan
> Assignee: Scott Aslan
[GitHub] nifi-registry pull request #84: [NIFIREG-116] update New Policy dialog check...
GitHub user scottyaslan opened a pull request: https://github.com/apache/nifi-registry/pull/84

[NIFIREG-116] update New Policy dialog checkbox UI/UX

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/scottyaslan/nifi-registry NIFIREG-116

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi-registry/pull/84.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #84

commit 2a78049d20eb4de06d56e4cf91317b60ec58b0b6
Author: Scott Aslan
Date: 2018-01-12T02:27:46Z

    [NIFIREG-116] update New Policy dialog checkbox UI/UX

---
[GitHub] nifi issue #2400: NIFI-4769: Use FlowFile for EL at Fetch and PutAzureBlobSt...
Github user ijokarumawak commented on the issue: https://github.com/apache/nifi/pull/2400

Updated to check if the FlowFile is null at AzureStorageUtils. Let's wait for Travis CI to finish its checks.

---
[jira] [Commented] (NIFI-4769) PutAzureBlobStorage and FetchAzureBlobStorage should be able to use incoming FlowFile with EL to create connection string
[ https://issues.apache.org/jira/browse/NIFI-4769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323441#comment-16323441 ]

ASF GitHub Bot commented on NIFI-4769:
--
Github user ijokarumawak commented on the issue: https://github.com/apache/nifi/pull/2400

Updated to check if the FlowFile is null at AzureStorageUtils. Let's wait for Travis CI to finish its checks.

> PutAzureBlobStorage and FetchAzureBlobStorage should be able to use incoming
> FlowFile with EL to create connection string
>
> Key: NIFI-4769
> URL: https://issues.apache.org/jira/browse/NIFI-4769
> Project: Apache NiFi
> Issue Type: Bug
> Components: Extensions
> Reporter: Koji Kawamura
> Assignee: Koji Kawamura
>
> The latest change made by NIFI-4005 can break existing flows if Put or Fetch
> AzureBlobStorage is configured to use an incoming FlowFile attribute with EL for
> accountName or accountKey. PutAzureBlobStorage and FetchAzureBlobStorage used
> to be able to [specify key and account name from incoming FlowFile using
> EL|https://github.com/apache/nifi/pull/1886/files#diff-a1be985cab6af1d412dbb21c5750e42aL76],
> but the change removed that capability mistakenly.
> The following error messages are logged if this happens:
> {code}
> 2018-01-12 09:59:58,445 ERROR [Timer-Driven Process Thread-7] o.a.n.p.a.storage.PutAzureBlobStorage PutAzureBlobStorage[id=045a9107-a6f1-363f-bd95-1ba8abd7ee09] Invalid connection string URI for 'PutAzureBlobStorage': java.lang.IllegalArgumentException: Invalid connection string.
> java.lang.IllegalArgumentException: Invalid connection string.
>     at com.microsoft.azure.storage.CloudStorageAccount.parse(CloudStorageAccount.java:249)
>     at org.apache.nifi.processors.azure.storage.utils.AzureStorageUtils.createCloudBlobClient(AzureStorageUtils.java:96)
>     at org.apache.nifi.processors.azure.storage.PutAzureBlobStorage.onTrigger(PutAzureBlobStorage.java:75)
>     at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>     at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
>     at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
>     at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>     at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> {code}
[jira] [Commented] (NIFI-4769) PutAzureBlobStorage and FetchAzureBlobStorage should be able to use incoming FlowFile with EL to create connection string
[ https://issues.apache.org/jira/browse/NIFI-4769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323435#comment-16323435 ]

ASF GitHub Bot commented on NIFI-4769:
--
Github user joewitt commented on a diff in the pull request: https://github.com/apache/nifi/pull/2400#discussion_r161127033

--- Diff: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/utils/AzureStorageUtils.java ---
@@ -78,10 +84,15 @@ private AzureStorageUtils() {
         // do not instantiate
     }

-    public static CloudBlobClient createCloudBlobClient(ProcessContext context, ComponentLog logger) {
-        final String accountName = context.getProperty(AzureStorageUtils.ACCOUNT_NAME).evaluateAttributeExpressions().getValue();
-        final String accountKey = context.getProperty(AzureStorageUtils.ACCOUNT_KEY).evaluateAttributeExpressions().getValue();
-        final String sasToken = context.getProperty(AzureStorageUtils.PROP_SAS_TOKEN).evaluateAttributeExpressions().getValue();
+    /**
+     * Create CloudBlobClient instance.
+     * @param flowFile An incoming FlowFile can be used for NiFi Expression Language evaluation to derive
+     *                 Account Name, Account Key or SAS Token. This can be null if not available.
+     */
+    public static CloudBlobClient createCloudBlobClient(ProcessContext context, ComponentLog logger, FlowFile flowFile) {
+        final String accountName = context.getProperty(AzureStorageUtils.ACCOUNT_NAME).evaluateAttributeExpressions(flowFile).getValue();
+        final String accountKey = context.getProperty(AzureStorageUtils.ACCOUNT_KEY).evaluateAttributeExpressions(flowFile).getValue();
+        final String sasToken = context.getProperty(AzureStorageUtils.PROP_SAS_TOKEN).evaluateAttributeExpressions(flowFile).getValue();

--- End diff --

Thanks, Koji. I am +2 with that then ;)
[GitHub] nifi pull request #2400: NIFI-4769: Use FlowFile for EL at Fetch and PutAzur...
Github user joewitt commented on a diff in the pull request: https://github.com/apache/nifi/pull/2400#discussion_r161127033

--- Diff: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/utils/AzureStorageUtils.java ---
@@ -78,10 +84,15 @@ private AzureStorageUtils() {
         // do not instantiate
     }

-    public static CloudBlobClient createCloudBlobClient(ProcessContext context, ComponentLog logger) {
-        final String accountName = context.getProperty(AzureStorageUtils.ACCOUNT_NAME).evaluateAttributeExpressions().getValue();
-        final String accountKey = context.getProperty(AzureStorageUtils.ACCOUNT_KEY).evaluateAttributeExpressions().getValue();
-        final String sasToken = context.getProperty(AzureStorageUtils.PROP_SAS_TOKEN).evaluateAttributeExpressions().getValue();
+    /**
+     * Create CloudBlobClient instance.
+     * @param flowFile An incoming FlowFile can be used for NiFi Expression Language evaluation to derive
+     *                 Account Name, Account Key or SAS Token. This can be null if not available.
+     */
+    public static CloudBlobClient createCloudBlobClient(ProcessContext context, ComponentLog logger, FlowFile flowFile) {
+        final String accountName = context.getProperty(AzureStorageUtils.ACCOUNT_NAME).evaluateAttributeExpressions(flowFile).getValue();
+        final String accountKey = context.getProperty(AzureStorageUtils.ACCOUNT_KEY).evaluateAttributeExpressions(flowFile).getValue();
+        final String sasToken = context.getProperty(AzureStorageUtils.PROP_SAS_TOKEN).evaluateAttributeExpressions(flowFile).getValue();

--- End diff --

Thanks, Koji. I am +2 with that then ;)

---
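The null-tolerance discussed in this review can be illustrated with a toy sketch. This is NOT the real NiFi PropertyValue API; it only models the pattern of evaluating per-FlowFile expressions when a FlowFile is present and degrading gracefully when it is null (here, a null attribute map stands in for a null FlowFile).

```java
import java.util.Map;

public class ElFallbackDemo {
    // Toy stand-in for expression evaluation: replaces ${name} with the matching
    // attribute value; tolerates a null FlowFile (modeled as a null attribute map).
    static String evaluate(String template, Map<String, String> flowFileAttributes) {
        if (flowFileAttributes == null) {
            // No FlowFile available: skip per-FlowFile substitution entirely
            return template;
        }
        String result = template;
        for (Map.Entry<String, String> e : flowFileAttributes.entrySet()) {
            result = result.replace("${" + e.getKey() + "}", e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(evaluate("${azure.account}", Map.of("azure.account", "myacct"))); // myacct
        System.out.println(evaluate("${azure.account}", null)); // ${azure.account}
    }
}
```

The point of the PR is the same shape at the API level: passing the FlowFile through to `evaluateAttributeExpressions(flowFile)` restores per-FlowFile EL, while an explicit null check keeps callers without an incoming FlowFile working.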
[jira] [Commented] (NIFIREG-115) Dialogs should automatically focus their initial inputs to allow users to quickly start typing.
[ https://issues.apache.org/jira/browse/NIFIREG-115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323434#comment-16323434 ]

ASF GitHub Bot commented on NIFIREG-115:

GitHub user scottyaslan opened a pull request: https://github.com/apache/nifi-registry/pull/83

[NIFIREG-115] add focus to inputs in the new user, new group, new buckets dialogs

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/scottyaslan/nifi-registry NIFIREG-115

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi-registry/pull/83.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #83

commit a43c5e42248e8705e7452d1377bdb81665ed87c4
Author: Scott Aslan
Date: 2018-01-12T02:15:32Z

    [NIFIREG-115] add focus to inputs in the new user, new group, new buckets dialogs

> Dialogs should automatically focus their initial inputs to allow users to
> quickly start typing.
>
> Key: NIFIREG-115
> URL: https://issues.apache.org/jira/browse/NIFIREG-115
> Project: NiFi Registry
> Issue Type: Improvement
> Affects Versions: 0.1.0
> Reporter: Scott Aslan
> Assignee: Scott Aslan
[GitHub] nifi-registry pull request #83: [NIFIREG-115] add focus to inputs in the new...
GitHub user scottyaslan opened a pull request: https://github.com/apache/nifi-registry/pull/83

[NIFIREG-115] add focus to inputs in the new user, new group, new buckets dialogs

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/scottyaslan/nifi-registry NIFIREG-115

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi-registry/pull/83.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #83

commit a43c5e42248e8705e7452d1377bdb81665ed87c4
Author: Scott Aslan
Date: 2018-01-12T02:15:32Z

    [NIFIREG-115] add focus to inputs in the new user, new group, new buckets dialogs

---
[jira] [Commented] (NIFI-4769) PutAzureBlobStorage and FetchAzureBlobStorage should be able to use incoming FlowFile with EL to create connection string
[ https://issues.apache.org/jira/browse/NIFI-4769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323433#comment-16323433 ] ASF GitHub Bot commented on NIFI-4769: -- Github user ijokarumawak commented on a diff in the pull request: https://github.com/apache/nifi/pull/2400#discussion_r161126780 --- Diff: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/utils/AzureStorageUtils.java --- @@ -78,10 +84,15 @@ private AzureStorageUtils() { // do not instantiate } -public static CloudBlobClient createCloudBlobClient(ProcessContext context, ComponentLog logger) { -final String accountName = context.getProperty(AzureStorageUtils.ACCOUNT_NAME).evaluateAttributeExpressions().getValue(); -final String accountKey = context.getProperty(AzureStorageUtils.ACCOUNT_KEY).evaluateAttributeExpressions().getValue(); -final String sasToken = context.getProperty(AzureStorageUtils.PROP_SAS_TOKEN).evaluateAttributeExpressions().getValue(); +/** + * Create CloudBlobClient instance. + * @param flowFile An incoming FlowFile can be used for NiFi Expression Language evaluation to derive + * Account Name, Account Key or SAS Token. This can be null if not available. + */ +public static CloudBlobClient createCloudBlobClient(ProcessContext context, ComponentLog logger, FlowFile flowFile) { +final String accountName = context.getProperty(AzureStorageUtils.ACCOUNT_NAME).evaluateAttributeExpressions(flowFile).getValue(); +final String accountKey = context.getProperty(AzureStorageUtils.ACCOUNT_KEY).evaluateAttributeExpressions(flowFile).getValue(); +final String sasToken = context.getProperty(AzureStorageUtils.PROP_SAS_TOKEN).evaluateAttributeExpressions(flowFile).getValue(); --- End diff -- Thanks for the comment. Yes, I thought the exact same thing. But I did it this way with a strong (a bit dangerous) knowledge that the standard implementation of PropertyDescriptor handles null FlowFile reference. 
But yes, if we check null here, it will be more defensive. I will update it. > PutAzureBlobStorage and FetchAzureBlobStorage should be able to use incoming > FlowFile with EL to create connection string > - > > Key: NIFI-4769 > URL: https://issues.apache.org/jira/browse/NIFI-4769 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Koji Kawamura >Assignee: Koji Kawamura > > The latest change made by NIFI-4005 can break existing flows if Put or Fetch > AzureBlobStorage is configured to use incoming FlowFile attribute with EL for > accountName or accountKey. PutAzureBlobStorage and FetchAzureBlobStorage used > to be able to [specify key and account name from incoming FlowFile using > EL|https://github.com/apache/nifi/pull/1886/files#diff-a1be985cab6af1d412dbb21c5750e42aL76]. > But the change removed that capability mistakenly. > Following error messages are logged if this happens: > {code} > 2018-01-12 09:59:58,445 ERROR [Timer-Driven Process Thread-7] > o.a.n.p.a.storage.PutAzureBlobStorage > PutAzureBlobStorage[id=045a9107-a6f1-363f-bd95-1ba8abd7ee09] Invalid > connection string URI for 'PutAzureBlobStorage': > java.lang.IllegalArgumentException: Invalid connection string. > java.lang.IllegalArgumentException: Invalid connection string. 
> at > com.microsoft.azure.storage.CloudStorageAccount.parse(CloudStorageAccount.java:249) > at > org.apache.nifi.processors.azure.storage.utils.AzureStorageUtils.createCloudBlobClient(AzureStorageUtils.java:96) > at > org.apache.nifi.processors.azure.storage.PutAzureBlobStorage.onTrigger(PutAzureBlobStorage.java:75) > at > org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) > at > org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122) > at > org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147) > at > org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) > at > org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > at >
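The fix under discussion lets evaluateAttributeExpressions(flowFile) resolve property values such as the account name from the incoming FlowFile's attributes. As a toy illustration of what that enables, the sketch below substitutes ${attr} placeholders from an attribute map; this is a simplified stand-in, not NiFi's actual Expression Language engine, and the attribute name azure.account is made up for the example:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ElSubstitutionSketch {
    // Toy substitution of ${attr} placeholders using FlowFile-style attributes.
    // NiFi's real Expression Language is far richer; this only shows why passing
    // the FlowFile into evaluateAttributeExpressions matters.
    static String evaluate(String value, Map<String, String> attributes) {
        Matcher m = Pattern.compile("\\$\\{([^}]+)}").matcher(value);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            // Unknown attributes resolve to the empty string in this sketch.
            m.appendReplacement(out, Matcher.quoteReplacement(
                    attributes.getOrDefault(m.group(1), "")));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, String> flowFileAttributes = Map.of("azure.account", "mystorage");
        System.out.println(evaluate("${azure.account}", flowFileAttributes)); // mystorage
    }
}
```

With no FlowFile available there are no attributes to draw from, which is exactly why the null-handling question below matters.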
[jira] [Commented] (NIFI-4769) PutAzureBlobStorage and FetchAzureBlobStorage should be able to use incoming FlowFile with EL to create connection string
[ https://issues.apache.org/jira/browse/NIFI-4769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323425#comment-16323425 ] ASF GitHub Bot commented on NIFI-4769: -- Github user joewitt commented on the issue: https://github.com/apache/nifi/pull/2400 left a comment but if travis-ci checks out i am a +1 on this even if you disagree with the feedback. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4769) PutAzureBlobStorage and FetchAzureBlobStorage should be able to use incoming FlowFile with EL to create connection string
[ https://issues.apache.org/jira/browse/NIFI-4769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323424#comment-16323424 ] ASF GitHub Bot commented on NIFI-4769: -- Github user joewitt commented on a diff in the pull request: https://github.com/apache/nifi/pull/2400#discussion_r161126450 --- Diff: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/utils/AzureStorageUtils.java --- @@ -78,10 +84,15 @@ private AzureStorageUtils() { // do not instantiate } -public static CloudBlobClient createCloudBlobClient(ProcessContext context, ComponentLog logger) { -final String accountName = context.getProperty(AzureStorageUtils.ACCOUNT_NAME).evaluateAttributeExpressions().getValue(); -final String accountKey = context.getProperty(AzureStorageUtils.ACCOUNT_KEY).evaluateAttributeExpressions().getValue(); -final String sasToken = context.getProperty(AzureStorageUtils.PROP_SAS_TOKEN).evaluateAttributeExpressions().getValue(); +/** + * Create CloudBlobClient instance. + * @param flowFile An incoming FlowFile can be used for NiFi Expression Language evaluation to derive + * Account Name, Account Key or SAS Token. This can be null if not available. + */ +public static CloudBlobClient createCloudBlobClient(ProcessContext context, ComponentLog logger, FlowFile flowFile) { +final String accountName = context.getProperty(AzureStorageUtils.ACCOUNT_NAME).evaluateAttributeExpressions(flowFile).getValue(); +final String accountKey = context.getProperty(AzureStorageUtils.ACCOUNT_KEY).evaluateAttributeExpressions(flowFile).getValue(); +final String sasToken = context.getProperty(AzureStorageUtils.PROP_SAS_TOKEN).evaluateAttributeExpressions(flowFile).getValue(); --- End diff -- this is probably just a stylistic thing but I'd prefer we didn't pass a null in here if a given flowfile is null. 
Rather, it seems cleaner to me that we'd check if the supplied flowfile is null and, if so, call evaluateAttributeExpressions without the flowfile reference.
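The suggestion above is to branch on the null check and pick the no-argument overload, rather than passing null through. A minimal sketch of that pattern, using a hypothetical PropertyValue stub in place of the real NiFi API:

```java
// Hypothetical stand-in for the two evaluateAttributeExpressions overloads
// discussed above; this stub is illustrative, not the actual NiFi interface.
interface PropertyValue {
    PropertyValue evaluateAttributeExpressions();                // no FlowFile
    PropertyValue evaluateAttributeExpressions(Object flowFile); // with FlowFile
    String getValue();
}

public class DefensiveElSketch {
    // The reviewer's pattern: choose the overload based on whether a FlowFile
    // was supplied, instead of handing a null reference to the with-FlowFile one.
    static String evaluate(PropertyValue property, Object flowFile) {
        return flowFile == null
                ? property.evaluateAttributeExpressions().getValue()
                : property.evaluateAttributeExpressions(flowFile).getValue();
    }

    public static void main(String[] args) {
        // A trivial PropertyValue whose "evaluation" is the identity.
        PropertyValue stub = new PropertyValue() {
            @Override public PropertyValue evaluateAttributeExpressions() { return this; }
            @Override public PropertyValue evaluateAttributeExpressions(Object ff) { return this; }
            @Override public String getValue() { return "my-account"; }
        };
        System.out.println(evaluate(stub, null));         // takes the no-FlowFile path
        System.out.println(evaluate(stub, new Object())); // takes the FlowFile path
    }
}
```

Either approach yields the same result when the implementation tolerates null; the explicit branch simply makes that tolerance unnecessary.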
[jira] [Updated] (NIFI-4769) PutAzureBlobStorage and FetchAzureBlobStorage should be able to use incoming FlowFile with EL to create connection string
[ https://issues.apache.org/jira/browse/NIFI-4769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-4769: Status: Patch Available (was: In Progress)
[GitHub] nifi pull request #2400: NIFI-4769: Use FlowFile for EL at Fetch and PutAzur...
GitHub user ijokarumawak opened a pull request: https://github.com/apache/nifi/pull/2400 NIFI-4769: Use FlowFile for EL at Fetch and PutAzureBlobStorage

This commit adds back the existing capability for those Processors to use incoming FlowFile attributes to compute account name and account key, which had been removed by NIFI-4004. Also, the same capability is added for SAS token.

Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [x] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?
### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/ijokarumawak/nifi nifi-4769 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2400.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2400 commit 01d4af96b26c010b790713010df08b686b22ac1d Author: Koji Kawamura Date: 2018-01-12T02:03:05Z NIFI-4769: Use FlowFile for EL at Fetch and PutAzureBlobStorage This commit adds back the existing capability for those Processors to use incoming FlowFile attributes to compute account name and account key, which had been removed by NIFI-4004. Also, the same capability is added for SAS token. ---
[jira] [Commented] (NIFI-4769) PutAzureBlobStorage and FetchAzureBlobStorage should be able to use incoming FlowFile with EL to create connection string
[ https://issues.apache.org/jira/browse/NIFI-4769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323418#comment-16323418 ] ASF GitHub Bot commented on NIFI-4769: -- GitHub user ijokarumawak opened a pull request: https://github.com/apache/nifi/pull/2400 NIFI-4769: Use FlowFile for EL at Fetch and PutAzureBlobStorage
[jira] [Assigned] (NIFI-4769) PutAzureBlobStorage and FetchAzureBlobStorage should be able to use incoming FlowFile with EL to create connection string
[ https://issues.apache.org/jira/browse/NIFI-4769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura reassigned NIFI-4769: --- Assignee: Koji Kawamura
[jira] [Created] (NIFI-4769) PutAzureBlobStorage and FetchAzureBlobStorage should be able to use incoming FlowFile with EL to create connection string
Koji Kawamura created NIFI-4769: --- Summary: PutAzureBlobStorage and FetchAzureBlobStorage should be able to use incoming FlowFile with EL to create connection string Key: NIFI-4769 URL: https://issues.apache.org/jira/browse/NIFI-4769 Project: Apache NiFi Issue Type: Bug Components: Extensions Reporter: Koji Kawamura The latest change made by NIFI-4005 can break existing flows if Put or Fetch AzureBlobStorage is configured to use incoming FlowFile attribute with EL for accountName or accountKey. PutAzureBlobStorage and FetchAzureBlobStorage used to be able to [specify key and account name from incoming FlowFile using EL|https://github.com/apache/nifi/pull/1886/files#diff-a1be985cab6af1d412dbb21c5750e42aL76]. But the change removed that capability mistakenly. Following error messages are logged if this happens: {code} 2018-01-12 09:59:58,445 ERROR [Timer-Driven Process Thread-7] o.a.n.p.a.storage.PutAzureBlobStorage PutAzureBlobStorage[id=045a9107-a6f1-363f-bd95-1ba8abd7ee09] Invalid connection string URI for 'PutAzureBlobStorage': java.lang.IllegalArgumentException: Invalid connection string. java.lang.IllegalArgumentException: Invalid connection string. 
at com.microsoft.azure.storage.CloudStorageAccount.parse(CloudStorageAccount.java:249) at org.apache.nifi.processors.azure.storage.utils.AzureStorageUtils.createCloudBlobClient(AzureStorageUtils.java:96) at org.apache.nifi.processors.azure.storage.PutAzureBlobStorage.onTrigger(PutAzureBlobStorage.java:75) at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122) at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147) at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi-minifi pull request #108: MINIFI-415 Adjusting logging when a bundle is...
Github user jzonthemtn commented on a diff in the pull request: https://github.com/apache/nifi-minifi/pull/108#discussion_r161116290

    --- Diff: minifi-nar-bundles/minifi-framework-bundle/minifi-framework/minifi-runtime/src/main/java/org/apache/nifi/minifi/FlowEnricher.java ---
    @@ -130,17 +127,13 @@ private void enrichComponent(EnrichingElementAdapter componentToEnrich, Map
         componentToEnrichBundleVersions = componentToEnrichVersionToBundles.values().stream()
                 .map(bundle -> bundle.getBundleDetails().getCoordinate().getVersion()).collect(Collectors.toSet());
    -    final String componentToEnrichId = componentToEnrich.getComponentId();
    -    String bundleVersion = componentToEnrichBundleVersions.stream().sorted().reduce((version, otherVersion) -> otherVersion).orElse(null);
    -    if (bundleVersion != null) {
    -        componentToEnrich.setBundleInformation(componentToEnrichVersionToBundles.get(bundleVersion).getBundleDetails().getCoordinate());
    -    }
    -    logger.info("Enriching {} with bundle {}", new Object[]{});
    +    final String bundleVersion = componentToEnrichBundleVersions.stream().sorted().reduce((version, otherVersion) -> otherVersion).orElse(null);
    +    final BundleCoordinate enrichingCoordinate = componentToEnrichVersionToBundles.get(bundleVersion).getBundleDetails().getCoordinate();
    --- End diff --

    As long as it's guaranteed to not be `null` I think it is ok as is. The `orElse` is what raised a flag for me.

---
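The `sorted().reduce((version, otherVersion) -> otherVersion)` idiom in the hunk keeps the last, i.e. highest-sorting, element, and `orElse(null)` only produces null when the stream is empty, which is the case the review comment is asking about. A self-contained sketch of both behaviors:

```java
import java.util.List;
import java.util.Optional;

public class ReduceLastSketch {
    public static void main(String[] args) {
        // sorted() + reduce((a, b) -> b) discards a at each step, so the
        // reduction keeps the last (largest) element of the sorted stream.
        List<String> versions = List.of("1.0.0", "1.2.0", "1.1.0");
        Optional<String> latest = versions.stream().sorted().reduce((a, b) -> b);
        System.out.println(latest.orElse(null)); // 1.2.0

        // On an empty stream the reduction yields Optional.empty(), so
        // orElse(null) returns null -- the possibility the reviewer flags.
        Optional<String> none = List.<String>of().stream().sorted().reduce((a, b) -> b);
        System.out.println(none.orElse(null)); // null
    }
}
```

Note this sorts version strings lexicographically; real version ordering would need a dedicated comparator, which is beside the point the review raises.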
[jira] [Updated] (NIFI-4748) Add endpoint override to PutKinesisStream and PutKinesisFirehose
[ https://issues.apache.org/jira/browse/NIFI-4748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joey Frazee updated NIFI-4748: -- Status: Patch Available (was: Open) > Add endpoint override to PutKinesisStream and PutKinesisFirehose > > > Key: NIFI-4748 > URL: https://issues.apache.org/jira/browse/NIFI-4748 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Joey Frazee >Assignee: Joey Frazee >Priority: Minor > > Both AmazonKinesisClient and AmazonKinesisFirehoseClient support setRegion() > and endpoint override can be useful for unit testing and working with Kinesis > compatible services, so it'd be helpful to add the ENDPOINT_OVERRIDE property > to these processors. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (NIFI-4748) Add endpoint override to PutKinesisStream and PutKinesisFirehose
[ https://issues.apache.org/jira/browse/NIFI-4748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joey Frazee updated NIFI-4748: -- Priority: Minor (was: Major) > Add endpoint override to PutKinesisStream and PutKinesisFirehose > > > Key: NIFI-4748 > URL: https://issues.apache.org/jira/browse/NIFI-4748 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Joey Frazee >Assignee: Joey Frazee >Priority: Minor > > Both AmazonKinesisClient and AmazonKinesisFirehoseClient support setRegion() > and endpoint override can be useful for unit testing and working with Kinesis > compatible services, so it'd be helpful to add the ENDPOINT_OVERRIDE property > to these processors. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4748) Add endpoint override to PutKinesisStream and PutKinesisFirehose
[ https://issues.apache.org/jira/browse/NIFI-4748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323154#comment-16323154 ] ASF GitHub Bot commented on NIFI-4748: -- GitHub user jfrazee opened a pull request: https://github.com/apache/nifi/pull/2399 NIFI-4748 Add endpoint override to Kinesis processors Note that for some mock Kinesis implementations you have to disable CBOR; this can be done by setting the environment variable AWS_CBOR_DISABLE=1 or adding the property com.amazonaws.sdk.disableCbor=1 to bootstrap.conf. You can merge this pull request into a Git repository by running: $ git pull https://github.com/jfrazee/nifi NIFI-4748 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2399.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2399 commit 6ef2b8e1221e438bb870ec91695265546cc453b8 Author: Joey FrazeeDate: 2018-01-11T21:00:42Z NIFI-4748 Add endpoint override to Kinesis processors > Add endpoint override to PutKinesisStream and PutKinesisFirehose > > > Key: NIFI-4748 > URL: https://issues.apache.org/jira/browse/NIFI-4748 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Joey Frazee >Assignee: Joey Frazee > > Both AmazonKinesisClient and AmazonKinesisFirehoseClient support setRegion() > and endpoint override can be useful for unit testing and working with Kinesis > compatible services, so it'd be helpful to add the ENDPOINT_OVERRIDE property > to these processors. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi pull request #2399: NIFI-4748 Add endpoint override to Kinesis processo...
GitHub user jfrazee opened a pull request: https://github.com/apache/nifi/pull/2399 NIFI-4748 Add endpoint override to Kinesis processors Note that for some mock Kinesis implementations you have to disable CBOR; this can be done by setting the environment variable AWS_CBOR_DISABLE=1 or adding the property com.amazonaws.sdk.disableCbor=1 to bootstrap.conf. You can merge this pull request into a Git repository by running: $ git pull https://github.com/jfrazee/nifi NIFI-4748 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2399.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2399 commit 6ef2b8e1221e438bb870ec91695265546cc453b8 Author: Joey FrazeeDate: 2018-01-11T21:00:42Z NIFI-4748 Add endpoint override to Kinesis processors ---
[GitHub] nifi-minifi pull request #108: MINIFI-415 Adjusting logging when a bundle is...
Github user apiri commented on a diff in the pull request: https://github.com/apache/nifi-minifi/pull/108#discussion_r161095169

--- Diff: minifi-nar-bundles/minifi-framework-bundle/minifi-framework/minifi-runtime/src/main/java/org/apache/nifi/minifi/FlowEnricher.java ---
@@ -130,17 +127,13 @@ private void enrichComponent(EnrichingElementAdapter componentToEnrich, Map componentToEnrichBundleVersions = componentToEnrichVersionToBundles.values().stream()
        .map(bundle -> bundle.getBundleDetails().getCoordinate().getVersion()).collect(Collectors.toSet());
-final String componentToEnrichId = componentToEnrich.getComponentId();
-String bundleVersion = componentToEnrichBundleVersions.stream().sorted().reduce((version, otherVersion) -> otherVersion).orElse(null);
-if (bundleVersion != null) {
-    componentToEnrich.setBundleInformation(componentToEnrichVersionToBundles.get(bundleVersion).getBundleDetails().getCoordinate());
-}
-logger.info("Enriching {} with bundle {}", new Object[]{});
-
+final String bundleVersion = componentToEnrichBundleVersions.stream().sorted().reduce((version, otherVersion) -> otherVersion).orElse(null);
+final BundleCoordinate enrichingCoordinate = componentToEnrichVersionToBundles.get(bundleVersion).getBundleDetails().getCoordinate();
--- End diff --

Probably doesn't hurt, but my thought when I looked at it again was: given that we have already determined there is a non-empty collection of candidates, and we are just selecting the greatest of them through the reduction, a value is guaranteed. What makes it awkward is the `orElse` that was left behind. I suppose I could change this to just `#get()` and the meaning would be a little clearer. What do you think makes the most sense?

---
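The reduction debated above can be stated more directly. A minimal stand-alone sketch (the class name and sample versions here are hypothetical, not the FlowEnricher code): `Stream#max` with the natural ordering selects the same element as `sorted().reduce((a, b) -> b)`, and `orElseThrow` makes the "collection is non-empty" invariant explicit instead of hiding it behind `orElse(null)`. Note that, like the original, this compares version strings lexicographically.

```java
import java.util.Comparator;
import java.util.Optional;
import java.util.Set;

// Illustrative only: selecting the highest version string from a non-empty set.
public class HighestVersionDemo {

    static String highestVersion(Set<String> versions) {
        // Equivalent to versions.stream().sorted().reduce((a, b) -> b),
        // but names the intent (maximum element) directly.
        Optional<String> max = versions.stream().max(Comparator.naturalOrder());
        // The caller has already established the set is non-empty, so
        // orElseThrow documents that invariant rather than returning null.
        return max.orElseThrow(IllegalStateException::new);
    }

    public static void main(String[] args) {
        // Lexicographic ordering: fine here, but "1.10.0" would sort below "1.5.0".
        System.out.println(highestVersion(Set.of("1.4.0", "1.5.0", "1.6.0")));
    }
}
```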
[jira] [Commented] (NIFI-4767) UpdateRecord not working properly with Arrays and Maps
[ https://issues.apache.org/jira/browse/NIFI-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323100#comment-16323100 ] ASF GitHub Bot commented on NIFI-4767: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2398 > UpdateRecord not working properly with Arrays and Maps > -- > > Key: NIFI-4767 > URL: https://issues.apache.org/jira/browse/NIFI-4767 > Project: Apache NiFi > Issue Type: Bug >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Critical > Fix For: 1.6.0 > > > If I use UpdateRecord to update an individual field in an array or in a map, > it does not update the field specified. Instead, it just skips over it. In > one case, I tried using multiple indices in an array such as /accounts[1, 3, > 5] and setting the value to 99. Instead, only the 0th element was updated. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi pull request #2398: NIFI-4767: Fixed issues with RecordPath using the w...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2398 ---
[jira] [Updated] (NIFI-4767) UpdateRecord not working properly with Arrays and Maps
[ https://issues.apache.org/jira/browse/NIFI-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-4767: - Resolution: Fixed Status: Resolved (was: Patch Available) > UpdateRecord not working properly with Arrays and Maps > -- > > Key: NIFI-4767 > URL: https://issues.apache.org/jira/browse/NIFI-4767 > Project: Apache NiFi > Issue Type: Bug >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Critical > Fix For: 1.6.0 > > > If I use UpdateRecord to update an individual field in an array or in a map, > it does not update the field specified. Instead, it just skips over it. In > one case, I tried using multiple indices in an array such as /accounts[1, 3, > 5] and setting the value to 99. Instead, only the 0th element was updated. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4767) UpdateRecord not working properly with Arrays and Maps
[ https://issues.apache.org/jira/browse/NIFI-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323098#comment-16323098 ] ASF subversion and git services commented on NIFI-4767: --- Commit a36afe0bbe0051b528810d0670757d3401c80215 in nifi's branch refs/heads/master from [~markap14] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=a36afe0 ] NIFI-4767 - Fixed issues with RecordPath using the wrong field name for arrays and maps. Also addressed issue where Avro Reader was returning a Record object when it should return a Map Signed-off-by: Pierre VillardThis closes #2398. > UpdateRecord not working properly with Arrays and Maps > -- > > Key: NIFI-4767 > URL: https://issues.apache.org/jira/browse/NIFI-4767 > Project: Apache NiFi > Issue Type: Bug >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Critical > Fix For: 1.6.0 > > > If I use UpdateRecord to update an individual field in an array or in a map, > it does not update the field specified. Instead, it just skips over it. In > one case, I tried using multiple indices in an array such as /accounts[1, 3, > 5] and setting the value to 99. Instead, only the 0th element was updated. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4767) UpdateRecord not working properly with Arrays and Maps
[ https://issues.apache.org/jira/browse/NIFI-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323091#comment-16323091 ] ASF GitHub Bot commented on NIFI-4767: -- Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/2398 +1, merging to master, thanks @markap14 > UpdateRecord not working properly with Arrays and Maps > -- > > Key: NIFI-4767 > URL: https://issues.apache.org/jira/browse/NIFI-4767 > Project: Apache NiFi > Issue Type: Bug >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Critical > Fix For: 1.6.0 > > > If I use UpdateRecord to update an individual field in an array or in a map, > it does not update the field specified. Instead, it just skips over it. In > one case, I tried using multiple indices in an array such as /accounts[1, 3, > 5] and setting the value to 99. Instead, only the 0th element was updated. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi issue #2398: NIFI-4767: Fixed issues with RecordPath using the wrong fi...
Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/2398 +1, merging to master, thanks @markap14 ---
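The bug report above says a path like /accounts[1, 3, 5] with value 99 updated only element 0. As a plain-Java illustration of the intended semantics (this is not the NiFi RecordPath implementation; the class and method names are made up), every listed index should be updated:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the semantics the ticket expects from /accounts[1, 3, 5].
public class MultiIndexUpdateDemo {

    static void updateIndices(List<Integer> values, int[] indices, int newValue) {
        for (int i : indices) {
            // Each referenced index gets the new value, not just index 0.
            if (i >= 0 && i < values.size()) {
                values.set(i, newValue);
            }
        }
    }

    public static void main(String[] args) {
        List<Integer> accounts = new ArrayList<>(Arrays.asList(10, 20, 30, 40, 50, 60));
        updateIndices(accounts, new int[]{1, 3, 5}, 99);
        System.out.println(accounts); // [10, 99, 30, 99, 50, 99]
    }
}
```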
[GitHub] nifi-minifi pull request #108: MINIFI-415 Adjusting logging when a bundle is...
Github user jzonthemtn commented on a diff in the pull request: https://github.com/apache/nifi-minifi/pull/108#discussion_r161082784

--- Diff: minifi-nar-bundles/minifi-framework-bundle/minifi-framework/minifi-runtime/src/main/java/org/apache/nifi/minifi/FlowEnricher.java ---
@@ -130,17 +127,13 @@ private void enrichComponent(EnrichingElementAdapter componentToEnrich, Map componentToEnrichBundleVersions = componentToEnrichVersionToBundles.values().stream()
        .map(bundle -> bundle.getBundleDetails().getCoordinate().getVersion()).collect(Collectors.toSet());
-final String componentToEnrichId = componentToEnrich.getComponentId();
-String bundleVersion = componentToEnrichBundleVersions.stream().sorted().reduce((version, otherVersion) -> otherVersion).orElse(null);
-if (bundleVersion != null) {
-    componentToEnrich.setBundleInformation(componentToEnrichVersionToBundles.get(bundleVersion).getBundleDetails().getCoordinate());
-}
-logger.info("Enriching {} with bundle {}", new Object[]{});
-
+final String bundleVersion = componentToEnrichBundleVersions.stream().sorted().reduce((version, otherVersion) -> otherVersion).orElse(null);
+final BundleCoordinate enrichingCoordinate = componentToEnrichVersionToBundles.get(bundleVersion).getBundleDetails().getCoordinate();
--- End diff --

Is the null check on `bundleVersion` still necessary to prevent trying to get a map value by key `null`?

---
[GitHub] nifi-registry pull request #82: NIFIREG-109 Expand LdapUserGroupProvider Con...
Github user asfgit closed the pull request at: https://github.com/apache/nifi-registry/pull/82 ---
[GitHub] nifi-registry issue #82: NIFIREG-109 Expand LdapUserGroupProvider Config
Github user bbende commented on the issue: https://github.com/apache/nifi-registry/pull/82 Looks good, will merge, thanks! ---
[jira] [Commented] (NIFIREG-109) LdapUserGroupProvider: Allow admin to configure group membership attribute
[ https://issues.apache.org/jira/browse/NIFIREG-109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323006#comment-16323006 ] ASF GitHub Bot commented on NIFIREG-109: Github user bbende commented on the issue: https://github.com/apache/nifi-registry/pull/82 Looks good, will merge, thanks! > LdapUserGroupProvider: Allow admin to configure group membership attribute > -- > > Key: NIFIREG-109 > URL: https://issues.apache.org/jira/browse/NIFIREG-109 > Project: NiFi Registry > Issue Type: Improvement >Reporter: Kevin Doran >Assignee: Kevin Doran > Fix For: 0.1.1 > > > This is a cloned issue from NiFi (NIFI-4567) that is also relevant in NiFi > Registry as it uses the same LDAP configuration functionality. > Currently, group membership is defined using a fully qualified DN between > user and group or between group and user. When membership is defined through > a user, the group DN is required. When membership is defined through a group, > the user DN is required. > We should add another property to configure which attribute in the referenced > group or user should be used as the value of the user's group attribute or > the group's user attribute. For instance, if the user's member attribute > contains the value 'group1' this new property would be the group attribute > that returns the value 'group1'. When these new properties are blank a full > DN is assumed. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi-minifi-cpp issue #237: MINIFICPP-37: Create an executable to support ba...
Github user apiri commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/237 reviewing ---
[jira] [Commented] (MINIFICPP-37) Create scripts to get information from the controller API
[ https://issues.apache.org/jira/browse/MINIFICPP-37?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323001#comment-16323001 ] ASF GitHub Bot commented on MINIFICPP-37: - Github user apiri commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/237 reviewing > Create scripts to get information from the controller API > - > > Key: MINIFICPP-37 > URL: https://issues.apache.org/jira/browse/MINIFICPP-37 > Project: NiFi MiNiFi C++ > Issue Type: Sub-task >Reporter: marco polo >Assignee: marco polo >Priority: Minor > Labels: Durability, Reliability > Fix For: 0.4.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (NIFIREG-117) Sorting Groups in the "Add user to groups" dialog is broken.
Scott Aslan created NIFIREG-117: --- Summary: Sorting Groups in the "Add user to groups" dialog is broken. Key: NIFIREG-117 URL: https://issues.apache.org/jira/browse/NIFIREG-117 Project: NiFi Registry Issue Type: Improvement Affects Versions: 0.1.0 Reporter: Scott Aslan Assignee: Scott Aslan -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (NIFIREG-116) Checkboxes in the New Bucket Policy dialog do not show as checked when clicking directly on the checkboxes.
Scott Aslan created NIFIREG-116: --- Summary: Checkboxes in the New Bucket Policy dialog do not show as checked when clicking directly on the checkboxes. Key: NIFIREG-116 URL: https://issues.apache.org/jira/browse/NIFIREG-116 Project: NiFi Registry Issue Type: Improvement Affects Versions: 0.1.0 Reporter: Scott Aslan Assignee: Scott Aslan -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (NIFI-4748) Add endpoint override to PutKinesisStream and PutKinesisFirehose
[ https://issues.apache.org/jira/browse/NIFI-4748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joey Frazee reassigned NIFI-4748: - Assignee: Joey Frazee > Add endpoint override to PutKinesisStream and PutKinesisFirehose > > > Key: NIFI-4748 > URL: https://issues.apache.org/jira/browse/NIFI-4748 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Joey Frazee >Assignee: Joey Frazee > > Both AmazonKinesisClient and AmazonKinesisFirehoseClient support setRegion() > and endpoint override can be useful for unit testing and working with Kinesis > compatible services, so it'd be helpful to add the ENDPOINT_OVERRIDE property > to these processors. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (NIFI-4767) UpdateRecord not working properly with Arrays and Maps
[ https://issues.apache.org/jira/browse/NIFI-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-4767: - Status: Patch Available (was: Open) > UpdateRecord not working properly with Arrays and Maps > -- > > Key: NIFI-4767 > URL: https://issues.apache.org/jira/browse/NIFI-4767 > Project: Apache NiFi > Issue Type: Bug >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Critical > Fix For: 1.6.0 > > > If I use UpdateRecord to update an individual field in an array or in a map, > it does not update the field specified. Instead, it just skips over it. In > one case, I tried using multiple indices in an array such as /accounts[1, 3, > 5] and setting the value to 99. Instead, only the 0th element was updated. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4767) UpdateRecord not working properly with Arrays and Maps
[ https://issues.apache.org/jira/browse/NIFI-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322868#comment-16322868 ] ASF GitHub Bot commented on NIFI-4767: -- GitHub user markap14 opened a pull request: https://github.com/apache/nifi/pull/2398

NIFI-4767: Fixed issues with RecordPath using the wrong field name fo… …r arrays and maps. Also addressed issue where Avro Reader was returning a Record object when it should return a Map

Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [ ] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?
### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/markap14/nifi NIFI-4767 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2398.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2398 commit 930f3302df4be0693da669d346506ce9230ad065 Author: Mark PayneDate: 2018-01-11T20:04:14Z NIFI-4767: Fixed issues with RecordPath using the wrong field name for arrays and maps. Also addressed issue where Avro Reader was returning a Record object when it should return a Map > UpdateRecord not working properly with Arrays and Maps > -- > > Key: NIFI-4767 > URL: https://issues.apache.org/jira/browse/NIFI-4767 > Project: Apache NiFi > Issue Type: Bug >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Critical > Fix For: 1.6.0 > > > If I use UpdateRecord to update an individual field in an array or in a map, > it does not update the field specified. Instead, it just skips over it. In > one case, I tried using multiple indices in an array such as /accounts[1, 3, > 5] and setting the value to 99. Instead, only the 0th element was updated. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (NIFI-4441) Add MapRecord support inside avro union types
[ https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-4441: - Resolution: Fixed Fix Version/s: 1.6.0 Status: Resolved (was: Patch Available) > Add MapRecord support inside avro union types > - > > Key: NIFI-4441 > URL: https://issues.apache.org/jira/browse/NIFI-4441 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.4.0 >Reporter: Patrice Freydiere > Fix For: 1.6.0 > > > Using an avro union type that contain maps in the definition lead to errors > in loading avro records. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi pull request #2398: NIFI-4767: Fixed issues with RecordPath using the w...
GitHub user markap14 opened a pull request: https://github.com/apache/nifi/pull/2398

NIFI-4767: Fixed issues with RecordPath using the wrong field name fo… …r arrays and maps. Also addressed issue where Avro Reader was returning a Record object when it should return a Map

Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [ ] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
You can merge this pull request into a Git repository by running: $ git pull https://github.com/markap14/nifi NIFI-4767 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2398.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2398 commit 930f3302df4be0693da669d346506ce9230ad065 Author: Mark PayneDate: 2018-01-11T20:04:14Z NIFI-4767: Fixed issues with RecordPath using the wrong field name for arrays and maps. Also addressed issue where Avro Reader was returning a Record object when it should return a Map ---
[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types
[ https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322867#comment-16322867 ] ASF GitHub Bot commented on NIFI-4441: -- Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2207 @frett27 @mattyb149 I think what is done in this PR is actually correct as-is. The issue you're running into, @mattyb149, I think is that the Avro Reader is returning a MapRecord object when it should in fact return a Map object (Any time a RecordFieldType is MAP, it should return a java.util.Map for the value). So I will address that as part of NIFI-4767. +1 merged to master. > Add MapRecord support inside avro union types > - > > Key: NIFI-4441 > URL: https://issues.apache.org/jira/browse/NIFI-4441 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.4.0 >Reporter: Patrice Freydiere > > Using an avro union type that contain maps in the definition lead to errors > in loading avro records. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi issue #2207: NIFI-4441 patch avro maps in union types
Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2207 @frett27 @mattyb149 I think what is done in this PR is actually correct as-is. The issue you're running into, @mattyb149, I think is that the Avro Reader is returning a MapRecord object when it should in fact return a Map object (Any time a RecordFieldType is MAP, it should return a java.util.Map for the value). So I will address that as part of NIFI-4767. +1 merged to master. ---
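The rule described in the comment above — any time a field's declared type is MAP, the reader should surface a java.util.Map, never a Record object — can be sketched with stand-in types. Everything below (FieldType, RecordLike, normalize) is hypothetical and is not NiFi's record API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of normalizing a reader's raw value by declared field type.
public class MapFieldDemo {

    enum FieldType { MAP, RECORD, STRING }

    // Stand-in for a record-like object a reader might produce internally.
    static class RecordLike {
        final Map<String, Object> entries = new LinkedHashMap<>();
    }

    static Object normalize(FieldType declaredType, Object rawValue) {
        if (declaredType == FieldType.MAP && rawValue instanceof RecordLike) {
            // Unwrap to a plain java.util.Map so callers see the declared MAP type.
            return new LinkedHashMap<>(((RecordLike) rawValue).entries);
        }
        return rawValue; // other types pass through unchanged
    }

    public static void main(String[] args) {
        RecordLike raw = new RecordLike();
        raw.entries.put("k", "v");
        Object normalized = normalize(FieldType.MAP, raw);
        System.out.println(normalized instanceof Map); // true
    }
}
```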
[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types
[ https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322865#comment-16322865 ] ASF subversion and git services commented on NIFI-4441: --- Commit 5f7bd81af97523b6b25f206cd0810c148d8dcd4a in nifi's branch refs/heads/master from [~frett27] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=5f7bd81 ] NIFI-4441: patch avro maps in union types. This closes #2207. Signed-off-by: Mark Payne> Add MapRecord support inside avro union types > - > > Key: NIFI-4441 > URL: https://issues.apache.org/jira/browse/NIFI-4441 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.4.0 >Reporter: Patrice Freydiere > > Using an avro union type that contain maps in the definition lead to errors > in loading avro records. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4441) Add MapRecord support inside avro union types
[ https://issues.apache.org/jira/browse/NIFI-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322866#comment-16322866 ] ASF GitHub Bot commented on NIFI-4441: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2207 > Add MapRecord support inside avro union types > - > > Key: NIFI-4441 > URL: https://issues.apache.org/jira/browse/NIFI-4441 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.4.0 >Reporter: Patrice Freydiere > > Using an avro union type that contain maps in the definition lead to errors > in loading avro records. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi pull request #2207: NIFI-4441 patch avro maps in union types
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2207 ---
[jira] [Commented] (NIFI-4768) Add exclusion filters to SiteToSiteProvenanceReportingTask
[ https://issues.apache.org/jira/browse/NIFI-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322858#comment-16322858 ] ASF GitHub Bot commented on NIFI-4768: -- GitHub user mattyb149 opened a pull request: https://github.com/apache/nifi/pull/2397

NIFI-4768: Add exclusion filters to S2SProvenanceReportingTask

Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [x] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [x] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
You can merge this pull request into a Git repository by running: $ git pull https://github.com/mattyb149/nifi NIFI-4768 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2397.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2397 commit 559c33e50cee3dc5e96b8dcb030a9f8464f3db18 Author: Matthew BurgessDate: 2018-01-11T20:00:03Z NIFI-4768: Add exclusion filters to S2SProvenanceReportingTask > Add exclusion filters to SiteToSiteProvenanceReportingTask > -- > > Key: NIFI-4768 > URL: https://issues.apache.org/jira/browse/NIFI-4768 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Matt Burgess >Assignee: Matt Burgess > > Although the SiteToSiteProvenanceReportingTask has filters for which events, > components, etc. to capture, it is an inclusive filter, meaning if a filter > is set, only those entities' events will be sent. However it would be useful > to also have an exclusionary filter, in order to capture all events except a > few. > One particular use case is a sub-flow that processes provenance events, where > the user would not want to process provenance events generated by components > involved in the provenance-handling flow itself. In this fashion, for > example, if the sub-flow is in a process group (PG), then the user could > exclude the PG and the Input Port sending events to it, thereby allowing the > sub-flow to process all other events except those involved with the > provenance-handling flow itself. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (NIFI-4768) Add exclusion filters to SiteToSiteProvenanceReportingTask
[ https://issues.apache.org/jira/browse/NIFI-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-4768: --- Status: Patch Available (was: In Progress)
[jira] [Assigned] (NIFI-4768) Add exclusion filters to SiteToSiteProvenanceReportingTask
[ https://issues.apache.org/jira/browse/NIFI-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess reassigned NIFI-4768: -- Assignee: Matt Burgess
[jira] [Created] (NIFI-4768) Add exclusion filters to SiteToSiteProvenanceReportingTask
Matt Burgess created NIFI-4768: -- Summary: Add exclusion filters to SiteToSiteProvenanceReportingTask Key: NIFI-4768 URL: https://issues.apache.org/jira/browse/NIFI-4768 Project: Apache NiFi Issue Type: Improvement Reporter: Matt Burgess Although the SiteToSiteProvenanceReportingTask has filters for which events, components, etc. to capture, it is an inclusive filter, meaning if a filter is set, only those entities' events will be sent. However, it would also be useful to have an exclusionary filter, in order to capture all events except a few. One particular use case is a sub-flow that processes provenance events, where the user would not want to process provenance events generated by components involved in the provenance-handling flow itself. In this fashion, for example, if the sub-flow is in a process group (PG), then the user could exclude the PG and the Input Port sending events to it, thereby allowing the sub-flow to process all other events except those involved with the provenance-handling flow itself.
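The semantics described in the ticket can be sketched as follows. This is an illustrative model, not NiFi's actual implementation: the function name `passes_filters` and the `component_id` field are hypothetical, and real provenance filters also cover event types and component types.

```python
# Hypothetical sketch of the proposed semantics: an event is reported only if it
# matches the inclusive filter (when one is set) AND is not caught by the
# exclusive filter. Names here are illustrative, not NiFi's actual API.

def passes_filters(event, include_ids=None, exclude_ids=None):
    if include_ids is not None and event["component_id"] not in include_ids:
        return False  # inclusive filter is set and event is not covered by it
    if exclude_ids is not None and event["component_id"] in exclude_ids:
        return False  # event is explicitly excluded
    return True

events = [
    {"component_id": "pg-provenance", "event_type": "SEND"},
    {"component_id": "proc-123", "event_type": "CREATE"},
]
# Exclude the (hypothetical) provenance-handling process group:
kept = [e for e in events if passes_filters(e, exclude_ids={"pg-provenance"})]
```

With no inclusive filter set, everything except the excluded component's events is kept, which is exactly the "capture all events except a few" use case above.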
[jira] [Created] (NIFI-4767) UpdateRecord not working properly with Arrays and Maps
Mark Payne created NIFI-4767: Summary: UpdateRecord not working properly with Arrays and Maps Key: NIFI-4767 URL: https://issues.apache.org/jira/browse/NIFI-4767 Project: Apache NiFi Issue Type: Bug Reporter: Mark Payne Assignee: Mark Payne Priority: Critical Fix For: 1.6.0 If I use UpdateRecord to update an individual field in an array or in a map, it does not update the field specified. Instead, it just skips over it. In one case, I tried using multiple indices in an array such as /accounts[1, 3, 5] and setting the value to 99. Instead, only the 0th element was updated.
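The expected behavior of a multi-index RecordPath like /accounts[1, 3, 5] can be sketched outside of NiFi. This is an illustrative model of the intended semantics (every listed index updated), not NiFi record-handling code; `update_indices` is a hypothetical helper.

```python
# Sketch of the semantics the bug report expects from /accounts[1, 3, 5]:
# every listed index gets the new value, rather than only index 0.

def update_indices(values, indices, new_value):
    return [new_value if i in indices else v for i, v in enumerate(values)]

accounts = [10, 20, 30, 40, 50, 60]
updated = update_indices(accounts, {1, 3, 5}, 99)
# expected: [10, 99, 30, 99, 50, 99]
```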
[GitHub] nifi issue #2180: Added GetMongoAggregation to support running Mongo aggrega...
Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2180 @mattyb149 I think all of your changes are in now. I also updated GetMongo to harmonize some of the changes between it and RunMongoAggregation ---
[GitHub] nifi-minifi pull request #108: MINIFI-415 Adjusting logging when a bundle is...
GitHub user apiri opened a pull request: https://github.com/apache/nifi-minifi/pull/108 MINIFI-415 Adjusting logging when a bundle is automatically selected MINIFI-415 Adjusting logging when a bundle is automatically selected for a component when multiple options are available. Thank you for submitting a contribution to Apache NiFi - MiNiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi-minifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under minifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under minifi-assembly? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/apiri/nifi-minifi MINIFI-415 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi/pull/108.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #108 commit 3797f8c1c09e331b544fb85d7cc0cc5668cf13b4 Author: Aldrin Piri Date: 2018-01-11T18:56:54Z MINIFI-415 Adjusting logging when a bundle is automatically selected for a component when multiple options are available. ---
[jira] [Commented] (NIFIREG-114) UI menu modifications
[ https://issues.apache.org/jira/browse/NIFIREG-114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322740#comment-16322740 ] Rob Moran commented on NIFIREG-114: --- [~markbean] the number of things you can do in the Registry UI is quite small at the moment, so I can see how the current arrangement of actions may seem a bit odd. The design thinking in these areas is to expose one commonly used/high-level action -- New (bucket), Add (user), etc. -- then provide access to all actions that can be performed on those things from a menu. Actions in these menus will vary depending on the section and context in which you're working. As new functionality is introduced and more actions become available, the Actions button menu pattern should scale nicely. > UI menu modifications > - > > Key: NIFIREG-114 > URL: https://issues.apache.org/jira/browse/NIFIREG-114 > Project: NiFi Registry > Issue Type: Improvement >Affects Versions: 0.1.0 >Reporter: Mark Bean >Priority: Minor > > On the Settings page/Buckets tab: > - Move the "New Bucket" button under the Actions list > - Alternatively, change the Actions drop down to simply "Delete" since it is > the only option > On the Settings page/Users tab: > - Move "Add User" under the "Actions" list
[jira] [Commented] (MINIFICPP-359) Support anonymous connections in config.yml
[ https://issues.apache.org/jira/browse/MINIFICPP-359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322730#comment-16322730 ] ASF GitHub Bot commented on MINIFICPP-359: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/231 > Support anonymous connections in config.yml > --- > > Key: MINIFICPP-359 > URL: https://issues.apache.org/jira/browse/MINIFICPP-359 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Andrew Christianson >Assignee: Andrew Christianson >Priority: Minor > > Since connections are rarely, if ever, referenced by name or ID in a typical > config.yml, allow for anonymous (no ID and no name) connections. MiNiFi will > generate IDs for anonymous connections. This will make writing config.yml a > little simpler.
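The ID-generation behavior described in MINIFICPP-359 can be sketched in a few lines. This is an illustrative model, not MiNiFi C++ code: the dict field names and the `assign_ids` helper are assumptions; only the rule itself (connections with neither id nor name get a generated ID) comes from the ticket.

```python
# Sketch of the described behavior: a connection that declares neither an id
# nor a name in config.yml receives a generated UUID, so hand-written configs
# can omit both fields. Field names here are illustrative.
import uuid

def assign_ids(connections):
    for conn in connections:
        if not conn.get("id") and not conn.get("name"):
            conn["id"] = str(uuid.uuid4())
    return connections

conns = assign_ids([{"source": "GetFile", "dest": "RPG"}])
```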
[jira] [Commented] (NIFIREG-77) Allow a user to see the changes created by the currently loaded version
[ https://issues.apache.org/jira/browse/NIFIREG-77?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322709#comment-16322709 ] Rob Moran commented on NIFIREG-77: -- I think it makes sense to enable multiple selections the same way it is done in the users or buckets table of the admin section: with right-aligned checkboxes and an action (_Compare versions_) available from the Actions button menu. This approach will scale nicely as more functionality that can be performed on a resource is introduced. [~scottyaslan] I am wondering how interaction would work here. I'm thinking each Change Log "row" will need two more carefully defined areas a user can click on: 1) the current area that opens/closes details, and 2) a new area containing a checkbox. Clicking the checkbox area would apply styling similar to a selected table row -- thoughts? > Allow a user to see the changes created by the currently loaded version > --- > > Key: NIFIREG-77 > URL: https://issues.apache.org/jira/browse/NIFIREG-77 > Project: NiFi Registry > Issue Type: Improvement >Affects Versions: 0.1.0 >Reporter: Joseph Percivall >Priority: Critical > Attachments: Suggestion for diff UX.png > > > As a user, I would like to see the changes that are included in a particular > version. More specifically, if I'm on an old version and I upgrade to a > version written by someone else, I have no way to know what changes occurred > during that version upgrade. > A simple solution would be to utilize the same logic which displays the > current differences between local and stored in the registry and use that to > show the differences between the current version N and version N-1. The user > could then change between versions to see the changes that happened as part > of that version. > An even better solution (from a DFM perspective) would be to be able to see > the changes within any version (not just the most recent). That way a DFM > wouldn't have to stop the flow for an extended period of time to view the > changes/differences in different versions, but I think that'd be more work.
[jira] [Updated] (MINIFICPP-313) GenerateFlowFile 'Unique FlowFiles' property inverted
[ https://issues.apache.org/jira/browse/MINIFICPP-313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aldrin Piri updated MINIFICPP-313: -- Fix Version/s: 0.4.0 > GenerateFlowFile 'Unique FlowFiles' property inverted > - > > Key: MINIFICPP-313 > URL: https://issues.apache.org/jira/browse/MINIFICPP-313 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Andrew Christianson >Priority: Minor > Fix For: 0.4.0 > > > 'Unique FlowFiles' must be set to false in order to get unique flow files, > which is backward.
[jira] [Updated] (MINIFICPP-336) With default GetFile settings dot files are not getting ignored on linux systems as they should
[ https://issues.apache.org/jira/browse/MINIFICPP-336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] marco polo updated MINIFICPP-336: - Status: Patch Available (was: Open) > With default GetFile settings dot files are not getting ignored on linux > systems as they should > --- > > Key: MINIFICPP-336 > URL: https://issues.apache.org/jira/browse/MINIFICPP-336 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Affects Versions: 0.3.0, 0.2.0, 0.1.0 >Reporter: Joseph Witt >Assignee: marco polo > Fix For: 0.4.0 > > > With this config > Processors: > - name: GetFile > class: org.apache.nifi.processors.standard.GetFile > max concurrent tasks: 1 > scheduling strategy: TIMER_DRIVEN > scheduling period: 0 sec > penalization period: 30 sec > yield period: 1 sec > run duration nanos: 0 > auto-terminated relationships list: [] > Properties: > Batch Size: '10' > File Filter: '[^\.].*' > Ignore Hidden Files: 'true' > Input Directory: test/input > Keep Source File: 'false' > Maximum File Age: > Maximum File Size: > Minimum File Age: 0 sec > Minimum File Size: 0 B > Path Filter: > Polling Interval: 0 sec > Recurse Subdirectories: 'true' > The minifi flow picks up any files starting with the '.' character right away. I > believe this is causing duplication to occur when NiFi writes to the > directory being watched, for example, because it writes the files with a > hidden/dot notation and then renames them when done.
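The File Filter regex `[^\.].*` only rejects dot files if it is matched against the file's base name. The sketch below (illustrative Python, not MiNiFi C++ code) shows why matching against a longer path string would wrongly accept dot files, which is consistent with the fix title "Use correct path when excluding dot files"; whether that is the exact defect is an assumption.

```python
# The regex [^\.].* requires the first character to be something other than
# a dot, so it only filters out hidden files when applied to the base name.
import os
import re

file_filter = re.compile(r"[^\.].*")

def accepted(path):
    # Apply the filter to the base name, as GetFile's File Filter expects.
    return bool(file_filter.match(os.path.basename(path)))

# Matching the full relative path instead would accept ".hidden", because the
# path starts with 't' from "test/...", not with the dot of the file name:
wrongly_accepted = bool(file_filter.match("test/input/.hidden"))
```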
[jira] [Commented] (MINIFICPP-336) With default GetFile settings dot files are not getting ignored on linux systems as they should
[ https://issues.apache.org/jira/browse/MINIFICPP-336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322676#comment-16322676 ] ASF GitHub Bot commented on MINIFICPP-336: -- GitHub user phrocker opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/238 MINIFICPP-336: Use correct path when excluding dot files Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFICPP-336 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/238.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #238 commit 0b813a95af19e2db3d0c809d43c4c057f1b49e0c Author: Marc Parisi Date: 2018-01-11T18:02:24Z MINIFICPP-336: Use correct path when excluding dot files
[jira] [Resolved] (MINIFICPP-33) Create GetGPS processor for acquiring GPS coordinates
[ https://issues.apache.org/jira/browse/MINIFICPP-33?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] marco polo resolved MINIFICPP-33. - Resolution: Fixed This has been resolved > Create GetGPS processor for acquiring GPS coordinates > - > > Key: MINIFICPP-33 > URL: https://issues.apache.org/jira/browse/MINIFICPP-33 > Project: NiFi MiNiFi C++ > Issue Type: New Feature >Reporter: Jeremy Dyer >Assignee: Jeremy Dyer > Fix For: 0.4.0 > > > GPSD is a popular framework for interacting with a multitude of GPS devices. > It drastically simplifies the interaction with vendor-specific GPS devices by > providing a daemon service which communicates with the device, converts the > raw NMEA 0183 sentences into JSON objects, and then emits those JSON objects > over a socket for 0-N downstream devices to consume. > This feature would create a GetGPS processor that would listen to a running > instance of GPSD as one of those downstream consumers. The processor would > provide integration with the GPSD daemon to accept the JSON objects and > create new flowfiles for each of the JSON objects received.
[jira] [Updated] (MINIFICPP-37) Create scripts to get information from the controller API
[ https://issues.apache.org/jira/browse/MINIFICPP-37?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] marco polo updated MINIFICPP-37: Status: Patch Available (was: Open) > Create scripts to get information from the controller API > - > > Key: MINIFICPP-37 > URL: https://issues.apache.org/jira/browse/MINIFICPP-37 > Project: NiFi MiNiFi C++ > Issue Type: Sub-task >Reporter: marco polo >Assignee: marco polo >Priority: Minor > Labels: Durability, Reliability > Fix For: 0.4.0
[jira] [Commented] (MINIFICPP-337) Make default log directory 'logs'
[ https://issues.apache.org/jira/browse/MINIFICPP-337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322626#comment-16322626 ] ASF GitHub Bot commented on MINIFICPP-337: -- Github user apiri commented on a diff in the pull request: https://github.com/apache/nifi-minifi-cpp/pull/236#discussion_r161024930 --- Diff: libminifi/src/core/logging/LoggerConfiguration.cpp --- @@ -110,6 +112,17 @@ std::shared_ptr LoggerConfiguration::initialize_names if (!logger_properties->get(appender_key + ".file_name", file_name)) { file_name = "minifi-app.log"; } + std::string directory = ""; + if (logger_properties->get(appender_key + ".directory", directory)) { + // Create the log directory if needed + struct stat logDirStat; + if (stat(directory.c_str(), &logDirStat) != 0 || !S_ISDIR(logDirStat.st_mode)) { + if (mkdir(directory.c_str(), 0777) == -1) { --- End diff -- The directory, as handled here, will not be created within the root of the minifi deployment; it will be relative to wherever the call is invoked from. So if I perform a minifi.sh start, the logs directory will be created relative to where I started minifi, not within my minifi installation. > Make default log directory 'logs' > - > > Key: MINIFICPP-337 > URL: https://issues.apache.org/jira/browse/MINIFICPP-337 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: marco polo >Assignee: bqiu > Fix For: 0.4.0
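The reviewer's concern above (a relative `.directory` value resolving against the process's current working directory rather than the install root) can be sketched as follows. This is an illustrative Python model, not the MiNiFi C++ code under review; treating `MINIFI_HOME` as the anchor for relative paths is an assumed convention, not a documented fix.

```python
# Sketch of one way to address the review comment: anchor a relative log
# directory to a known install root before creating it, so "logs" always
# lands inside the deployment regardless of where the process was started.
import os

def resolve_log_dir(configured, minifi_home):
    if os.path.isabs(configured):
        return configured            # absolute paths are honored as-is
    return os.path.join(minifi_home, configured)  # relative paths anchored to home
```

With this resolution step, `mkdir(resolve_log_dir("logs", home))` would create the directory inside the installation rather than under the caller's working directory.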
[jira] [Created] (NIFIREG-115) Dialogs should automatically focus their initial inputs to allow users to quickly start typing.
Scott Aslan created NIFIREG-115: --- Summary: Dialogs should automatically focus their initial inputs to allow users to quickly start typing. Key: NIFIREG-115 URL: https://issues.apache.org/jira/browse/NIFIREG-115 Project: NiFi Registry Issue Type: Improvement Affects Versions: 0.1.0 Reporter: Scott Aslan Assignee: Scott Aslan
[jira] [Assigned] (NIFI-4764) Add tooltips to status bar icons
[ https://issues.apache.org/jira/browse/NIFI-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman reassigned NIFI-4764: - Assignee: Marco Gaido > Add tooltips to status bar icons > > > Key: NIFI-4764 > URL: https://issues.apache.org/jira/browse/NIFI-4764 > Project: Apache NiFi > Issue Type: Improvement > Components: Core UI >Reporter: Matt Burgess >Assignee: Marco Gaido > Fix For: 1.6.0 > > > The current status bar has a number of icons that refer to various states of > components in the NiFi instance, and with NiFi Registry support in 1.5.0 > there will be even more. Currently the documentation for these icons is in > the User Guide, but it would be nice to have tooltips for each icon (with its > description and the count) so the information is readily available without > having to go to the Help documentation in the UI.
[jira] [Commented] (NIFI-4764) Add tooltips to status bar icons
[ https://issues.apache.org/jira/browse/NIFI-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322582#comment-16322582 ] ASF GitHub Bot commented on NIFI-4764: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2394
[jira] [Resolved] (NIFI-4764) Add tooltips to status bar icons
[ https://issues.apache.org/jira/browse/NIFI-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman resolved NIFI-4764. --- Resolution: Fixed Fix Version/s: 1.6.0
[jira] [Commented] (NIFI-4764) Add tooltips to status bar icons
[ https://issues.apache.org/jira/browse/NIFI-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322581#comment-16322581 ] ASF GitHub Bot commented on NIFI-4764: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2394 Thanks @mgaido91! This has been merged to master.
[jira] [Commented] (NIFI-4764) Add tooltips to status bar icons
[ https://issues.apache.org/jira/browse/NIFI-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322579#comment-16322579 ] ASF subversion and git services commented on NIFI-4764: --- Commit 39a484b631d4a53820c60c1c5c388fd33fcf8ddd in nifi's branch refs/heads/master from [~mgaido] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=39a484b ] NIFI-4764: Add tooltips to status bar icons add tooltip to process groups cleanup This closes #2394 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (MINIFICPP-66) Provide YAML schema version 2
[ https://issues.apache.org/jira/browse/MINIFICPP-66?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aldrin Piri resolved MINIFICPP-66. -- Resolution: Won't Do > Provide YAML schema version 2 > - > > Key: MINIFICPP-66 > URL: https://issues.apache.org/jira/browse/MINIFICPP-66 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Aldrin Piri > > The Java version has created another version of the YAML schema. Support for > these versions in a graceful manner should also exist. While the components > may not have framework level support, establishing a similar versioned system > for managing the YAML config will be important moving forward especially for > user experience. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi pull request #2180: Added GetMongoAggregation to support running Mongo ...
Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2180#discussion_r161004084 --- Diff: nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java ---
@@ -268,32 +248,29 @@ public void onTrigger(final ProcessContext context, final ProcessSession session
                         String payload = buildBatch(batch, jsonTypeSetting);
                         writeBatch(payload, context, session);
                         batch = new ArrayList<>();
-                    } catch (IOException ex) {
+                    } catch (Exception ex) {
                         getLogger().error("Error building batch", ex);
                     }
                 }
             }
             if (batch.size() > 0) {
                 try {
                     writeBatch(buildBatch(batch, jsonTypeSetting), context, session);
-                } catch (IOException ex) {
+                } catch (Exception ex) {
                     getLogger().error("Error sending remainder of batch", ex);
                 }
             }
         } else {
             while (cursor.hasNext()) {
                 flowFile = session.create();
-                flowFile = session.write(flowFile, new OutputStreamCallback() {
-                    @Override
-                    public void process(OutputStream out) throws IOException {
-                        String json;
-                        if (jsonTypeSetting.equals(JSON_TYPE_STANDARD)) {
-                            json = mapper.writerWithDefaultPrettyPrinter().writeValueAsString(cursor.next());
-                        } else {
-                            json = cursor.next().toJson();
-                        }
-                        IOUtils.write(json, out);
+                flowFile = session.write(flowFile, out -> {
+                    String json;
+                    if (jsonTypeSetting.equals(JSON_TYPE_STANDARD)) {
+                        json = mapper.writerWithDefaultPrettyPrinter().writeValueAsString(cursor.next());
+                    } else {
+                        json = cursor.next().toJson();
                     }
+                    IOUtils.write(json, out);
--- End diff -- No, but since you're writing to a flow file at this point, you don't need to use the Mongo API. I was just wondering about files in Mongo with various character sets, so for example if you had a document in Mongo with a string field containing Japanese characters, etc. ---
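The character-set concern raised in the review can be avoided by writing with an explicit charset rather than the JVM's platform default. A minimal illustrative sketch (the helper class is hypothetical, not the processor's actual code):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical helper: writes a JSON string to a stream with an explicit
// charset, so non-ASCII fields (e.g. Japanese text) are encoded the same
// way on every platform, instead of depending on the default charset.
class JsonStreamWriter {
    static void writeJson(final String json, final OutputStream out) throws IOException {
        out.write(json.getBytes(StandardCharsets.UTF_8)); // always UTF-8
    }
}
```

Commons IO also offers charset-taking overloads such as `IOUtils.write(String, OutputStream, Charset)`, which achieve the same effect without a helper.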
[jira] [Updated] (NIFI-3648) Address Excessive Garbage Collection
[ https://issues.apache.org/jira/browse/NIFI-3648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-3648: - Fix Version/s: 1.6.0 Status: Patch Available (was: Open) > Address Excessive Garbage Collection > > > Key: NIFI-3648 > URL: https://issues.apache.org/jira/browse/NIFI-3648 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework, Extensions >Reporter: Mark Payne >Assignee: Mark Payne > Fix For: 1.6.0 > > > We have a lot of places in the codebase where we generate lots of unnecessary > garbage - especially byte arrays. We need to clean this up in order to > relieve stress on the garbage collector. > Specific points that I've found create unnecessary garbage: > Provenance CompressableRecordWriter creates a new BufferedOutputStream for > each 'compression block' that it creates. Each one has a 64 KB byte[]. This > is very wasteful. We should instead subclass BufferedOutputStream so that we > are able to provide a byte[] to use instead of an int that indicates the > size. This way, we can just keep re-using the same byte[] that we create for > each writer. This saves about 32,000 of these 64 KB byte[] for each writer > that we create. And we create more than 1 of these per minute. > EvaluateJsonPath uses a BufferedInputStream but it is not necessary, because > the underlying library will also buffer data. So we are unnecessarily > creating a lot of byte[]'s > CompressContent uses Buffered Input AND Output. And uses 64 KB byte[]. And > doesn't need them at all, because it reads and writes with its own byte[] > buffer via StreamUtils.copy > Site-to-site uses CompressionInputStream. This stream creates a new byte[] in > the readChunkHeader() method continually. We should instead only create a new > byte[] if we need a bigger buffer and otherwise just use an offset & length > variable. > Right now, SplitText uses TextLineDemarcator. 
The fill() method increases the > size of the internal byte[] by 8 KB each time. When dealing with a large > chunk of data, this is VERY expensive on GC because we continually create a > byte[] and then discard it to create a new one. Take for example an 800 KB > chunk. We would do this 100,000 times. If we instead double the size we would > only have to create 8 of these. > Other Processors that use Buffered streams unnecessarily: > ConvertJSONToSQL > ExecuteProcess > ExecuteStreamCommand > AttributesToJSON > EvaluateJsonPath (when writing to content) > ExtractGrok > JmsConsumer -- This message was sent by Atlassian JIRA (v6.4.14#64029)
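The first point in the issue description above, subclassing BufferedOutputStream so the caller can supply the byte[] to reuse across writers, can be sketched roughly as follows. The class name and shape are illustrative only, not the actual NiFi implementation:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative sketch: a buffered stream that reuses a caller-supplied
// byte[] instead of allocating a fresh 64 KB buffer per writer. The same
// array can be handed to each new compression-block writer, avoiding one
// large allocation (and one piece of garbage) per block.
class ReusableBufferedOutputStream extends OutputStream {
    private final OutputStream out;
    private final byte[] buffer; // owned by the caller, reused across streams
    private int count = 0;

    ReusableBufferedOutputStream(final OutputStream out, final byte[] reusableBuffer) {
        this.out = out;
        this.buffer = reusableBuffer;
    }

    @Override
    public void write(final int b) throws IOException {
        if (count == buffer.length) {
            flushBuffer(); // buffer full: push it downstream, then reuse it
        }
        buffer[count++] = (byte) b;
    }

    private void flushBuffer() throws IOException {
        if (count > 0) {
            out.write(buffer, 0, count);
            count = 0;
        }
    }

    @Override
    public void flush() throws IOException {
        flushBuffer();
        out.flush();
    }
}
```

The key difference from `new BufferedOutputStream(out, 65536)` is that the array outlives any single stream, so creating a writer per compression block no longer allocates a 64 KB buffer each time.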
[jira] [Updated] (NIFI-3648) Address Excessive Garbage Collection
[ https://issues.apache.org/jira/browse/NIFI-3648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-3648: - Resolution: Fixed Status: Resolved (was: Patch Available) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
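The TextLineDemarcator point from the NIFI-3648 description above (grow the internal byte[] geometrically instead of by a fixed 8 KB step) turns a linear number of reallocations into a logarithmic one. A rough sketch under those assumptions, not the actual implementation:

```java
import java.util.Arrays;

// Illustrative sketch of geometric buffer growth: doubling the capacity
// each time it is exceeded, instead of adding a fixed 8 KB increment,
// dramatically reduces how many intermediate arrays become garbage.
class GrowableBuffer {
    private byte[] data = new byte[8192]; // start at 8 KB, like the 8 KB step
    private int size = 0;
    int reallocations = 0; // exposed here only to make the effect visible

    private void ensureCapacity(final int needed) {
        int capacity = data.length;
        while (capacity < needed) {
            capacity *= 2; // double instead of adding a fixed 8 KB
        }
        if (capacity != data.length) {
            data = Arrays.copyOf(data, capacity);
            reallocations++;
        }
    }

    void append(final byte[] chunk) {
        ensureCapacity(size + chunk.length);
        System.arraycopy(chunk, 0, data, size, chunk.length);
        size += chunk.length;
    }

    int size() { return size; }
}
```

Filling 800 KB in 8 KB chunks reallocates only 7 times with doubling (8 KB up to 1 MB), versus one reallocation per additional 8 KB with fixed-step growth.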
[jira] [Commented] (NIFI-3648) Address Excessive Garbage Collection
[ https://issues.apache.org/jira/browse/NIFI-3648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322494#comment-16322494 ] ASF GitHub Bot commented on NIFI-3648: -- Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/1637 @mosermw sorry about the delay. The changes do look good... it looks like I'd reviewed on Mar. 30 and then must have forgotten it. But was able to cleanly rebase against master and verify that all looks good. Definitely a nice improvement. +1 merged to master. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi issue #1637: NIFI-3648 removed cluster message copying when not in debu...
Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/1637 @mosermw sorry about the delay. The changes do look good... it looks like I'd reviewed on Mar. 30 and then must have forgotten it. But was able to cleanly rebase against master and verify that all looks good. Definitely a nice improvement. +1 merged to master. ---
[jira] [Commented] (NIFI-3648) Address Excessive Garbage Collection
[ https://issues.apache.org/jira/browse/NIFI-3648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322490#comment-16322490 ] ASF subversion and git services commented on NIFI-3648: --- Commit bcac2766bce63508b7930344b9dc21ceef33bf98 in nifi's branch refs/heads/master from [~boardm26] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=bcac276 ] NIFI-3648 removed message copying when not in debug mode. This closes #1637. Signed-off-by: Mark Payne -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4764) Add tooltips to status bar icons
[ https://issues.apache.org/jira/browse/NIFI-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322488#comment-16322488 ] ASF GitHub Bot commented on NIFI-4764: -- Github user mcgilman commented on a diff in the pull request: https://github.com/apache/nifi/pull/2394#discussion_r161001512 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-process-group.js --- @@ -847,6 +870,7 @@ .text(function (d) { return d.activeRemotePortCount; }); +transmittingCount.append("title").text("Transmitting Remote Process Groups"); --- End diff -- Ah sorry, you're right. I overlooked that these tips were on the counts and not the icons. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-3648) Address Excessive Garbage Collection
[ https://issues.apache.org/jira/browse/NIFI-3648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322491#comment-16322491 ] ASF GitHub Bot commented on NIFI-3648: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/1637 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi pull request #1637: NIFI-3648 removed cluster message copying when not ...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/1637 ---