[jira] [Updated] (NIFI-4769) PutAzureBlobStorage and FetchAzureBlobStorage should be able to use incoming FlowFile with EL to create connection string
[ https://issues.apache.org/jira/browse/NIFI-4769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-4769: Resolution: Fixed Fix Version/s: 1.6.0 Status: Resolved (was: Patch Available) > PutAzureBlobStorage and FetchAzureBlobStorage should be able to use incoming > FlowFile with EL to create connection string > - > > Key: NIFI-4769 > URL: https://issues.apache.org/jira/browse/NIFI-4769 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions > Reporter: Koji Kawamura > Assignee: Koji Kawamura > Fix For: 1.6.0 > > > The change made by NIFI-4004 can break existing flows if PutAzureBlobStorage or FetchAzureBlobStorage is configured to use an incoming FlowFile attribute with EL for accountName or accountKey. PutAzureBlobStorage and FetchAzureBlobStorage used to be able to [specify the key and account name from an incoming FlowFile using EL|https://github.com/apache/nifi/pull/1886/files#diff-a1be985cab6af1d412dbb21c5750e42aL76], but the change mistakenly removed that capability. > The following error messages are logged when this happens: > {code} > 2018-01-12 09:59:58,445 ERROR [Timer-Driven Process Thread-7] > o.a.n.p.a.storage.PutAzureBlobStorage > PutAzureBlobStorage[id=045a9107-a6f1-363f-bd95-1ba8abd7ee09] Invalid > connection string URI for 'PutAzureBlobStorage': > java.lang.IllegalArgumentException: Invalid connection string. > java.lang.IllegalArgumentException: Invalid connection string. 
> at > com.microsoft.azure.storage.CloudStorageAccount.parse(CloudStorageAccount.java:249) > at > org.apache.nifi.processors.azure.storage.utils.AzureStorageUtils.createCloudBlobClient(AzureStorageUtils.java:96) > at > org.apache.nifi.processors.azure.storage.PutAzureBlobStorage.onTrigger(PutAzureBlobStorage.java:75) > at > org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) > at > org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122) > at > org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147) > at > org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) > at > org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
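The failure mode quoted above can be illustrated with a small standalone sketch (this is not the NiFi implementation, and the attribute names are made up): if EL references such as `${azure.account}` are not evaluated against the incoming FlowFile's attributes, the literal placeholder reaches `CloudStorageAccount.parse()` and triggers the "Invalid connection string" error shown in the log.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch of EL-style attribute substitution before building an
// Azure Storage connection string. Without the evaluate() step, the raw
// "${azure.account}" text would be handed to CloudStorageAccount.parse().
public class ConnectionStringSketch {
    private static final Pattern EL = Pattern.compile("\\$\\{([^}]+)\\}");

    // Replace each ${name} reference with the FlowFile attribute value.
    static String evaluate(String value, Map<String, String> attributes) {
        Matcher m = EL.matcher(value);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(sb, Matcher.quoteReplacement(
                    attributes.getOrDefault(m.group(1), "")));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    static String connectionString(String accountName, String accountKey,
                                   Map<String, String> attrs) {
        return "DefaultEndpointsProtocol=https"
                + ";AccountName=" + evaluate(accountName, attrs)
                + ";AccountKey=" + evaluate(accountKey, attrs);
    }

    public static void main(String[] args) {
        Map<String, String> attrs =
                Map.of("azure.account", "myaccount", "azure.key", "c2VjcmV0");
        // prints DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=c2VjcmV0
        System.out.println(connectionString("${azure.account}", "${azure.key}", attrs));
    }
}
```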
[jira] [Commented] (NIFI-4004) Refactor RecordReaderFactory and SchemaAccessStrategy to be used without incoming FlowFile
[ https://issues.apache.org/jira/browse/NIFI-4004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324817#comment-16324817 ] ASF subversion and git services commented on NIFI-4004: --- Commit 83701632fb7216fba40d2e5cc3cd87bd8116c385 in nifi's branch refs/heads/master from [~ijokarumawak] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=8370163 ] NIFI-4769: Use FlowFile for EL at Fetch and PutAzureBlobStorage This commit adds back the existing capability for those Processors to use incoming FlowFile attributes to compute the account name and account key, which had been removed by NIFI-4004. The same capability is also added for the SAS token. This closes #2400. Signed-off-by: Koji Kawamura > Refactor RecordReaderFactory and SchemaAccessStrategy to be used without > incoming FlowFile > -- > > Key: NIFI-4004 > URL: https://issues.apache.org/jira/browse/NIFI-4004 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions > Affects Versions: 1.2.0 > Reporter: Koji Kawamura > Assignee: Koji Kawamura > Fix For: 1.4.0 > > > The current RecordReaderFactory and SchemaAccessStrategy implementation assumes there is always an incoming FlowFile available, and uses it to resolve the Record Schema. > That is fine for components that convert or update incoming FlowFiles; however, there are other components that do not have any incoming FlowFiles, for example ConsumeKafkaRecord_0_10. Typically, components that fetch data from an external system do not have an incoming FlowFile, and the current API does not fit well with these because it requires a FlowFile. > In fact, [ConsumeKafkaRecord creates a temporary FlowFile|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-0-10-processors/src/main/java/org/apache/nifi/processors/kafka/pubsub/ConsumerLease.java#L426] only to get the RecordSchema. This should be avoided as we expect more components to start using the Record reader mechanism. 
> This JIRA proposes refactoring the current API to allow accessing RecordReaders without needing an incoming FlowFile. > Additionally, since there is a Schema Access Strategy that requires an incoming FlowFile containing attribute values to access the schema registry, it would be useful to tell the user, when such a RecordReader is specified, that it cannot be used. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
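The refactoring direction described in this issue can be sketched as follows. The class, interface, and method names here are hypothetical illustrations, not the exact NiFi API: the point is that schema resolution takes a plain variable map, which may or may not originate from a FlowFile, so components such as ConsumeKafkaRecord no longer need a temporary FlowFile.

```java
import java.io.InputStream;
import java.util.Map;

// Hypothetical sketch of a FlowFile-free reader factory. The core method
// accepts a variable map; a FlowFile-based overload simply delegates to it,
// so existing FlowFile-driven callers keep working unchanged.
public class RecordReaderFactorySketch {

    // Stand-in for org.apache.nifi.flowfile.FlowFile (illustrative only).
    interface FlowFileLike {
        Map<String, String> getAttributes();
    }

    // After the refactor: only attribute variables are needed to pick a schema.
    static String resolveSchemaName(Map<String, String> variables, InputStream in) {
        return variables.getOrDefault("schema.name", "default");
    }

    // Convenience overload for callers that do have an incoming FlowFile.
    static String resolveSchemaName(FlowFileLike flowFile, InputStream in) {
        return resolveSchemaName(flowFile.getAttributes(), in);
    }
}
```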
[jira] [Commented] (NIFI-4769) PutAzureBlobStorage and FetchAzureBlobStorage should be able to use incoming FlowFile with EL to create connection string
[ https://issues.apache.org/jira/browse/NIFI-4769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324819#comment-16324819 ] ASF GitHub Bot commented on NIFI-4769: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2400 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4769) PutAzureBlobStorage and FetchAzureBlobStorage should be able to use incoming FlowFile with EL to create connection string
[ https://issues.apache.org/jira/browse/NIFI-4769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324816#comment-16324816 ] ASF subversion and git services commented on NIFI-4769: --- Commit 83701632fb7216fba40d2e5cc3cd87bd8116c385 in nifi's branch refs/heads/master from [~ijokarumawak] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=8370163 ] NIFI-4769: Use FlowFile for EL at Fetch and PutAzureBlobStorage This commit adds back the existing capability for those Processors to use incoming FlowFile attributes to compute the account name and account key, which had been removed by NIFI-4004. The same capability is also added for the SAS token. This closes #2400. Signed-off-by: Koji Kawamura -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi pull request #2400: NIFI-4769: Use FlowFile for EL at Fetch and PutAzur...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2400 ---
[GitHub] nifi issue #2400: NIFI-4769: Use FlowFile for EL at Fetch and PutAzureBlobSt...
Github user ijokarumawak commented on the issue: https://github.com/apache/nifi/pull/2400 Travis CI succeeded. Backed by Joe's +1, I'm merging this to master. Thanks for reviewing, @joewitt ---
[jira] [Commented] (NIFI-4769) PutAzureBlobStorage and FetchAzureBlobStorage should be able to use incoming FlowFile with EL to create connection string
[ https://issues.apache.org/jira/browse/NIFI-4769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324814#comment-16324814 ] ASF GitHub Bot commented on NIFI-4769: -- Github user ijokarumawak commented on the issue: https://github.com/apache/nifi/pull/2400 Travis CI succeeded. Backed by Joe's +1, I'm merging this to master. Thanks for reviewing, @joewitt -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (NIFI-4768) Add exclusion filters to SiteToSiteProvenanceReportingTask
[ https://issues.apache.org/jira/browse/NIFI-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Kawamura updated NIFI-4768: Resolution: Fixed Fix Version/s: 1.6.0 Status: Resolved (was: Patch Available) > Add exclusion filters to SiteToSiteProvenanceReportingTask > -- > > Key: NIFI-4768 > URL: https://issues.apache.org/jira/browse/NIFI-4768 > Project: Apache NiFi > Issue Type: Improvement > Reporter: Matt Burgess > Assignee: Matt Burgess > Fix For: 1.6.0 > > > Although the SiteToSiteProvenanceReportingTask has filters for which events, components, etc. to capture, they are inclusive filters, meaning that if a filter is set, only those entities' events will be sent. However, it would also be useful to have an exclusion filter, in order to capture all events except a few. > One particular use case is a sub-flow that processes provenance events, where the user would not want to process provenance events generated by components involved in the provenance-handling flow itself. For example, if the sub-flow is in a process group (PG), the user could exclude the PG and the Input Port sending events to it, thereby allowing the sub-flow to process all events except those involved with the provenance-handling flow itself. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
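The inclusive-plus-exclusive filtering described above boils down to a simple predicate, sketched here with hypothetical field and method names (the actual reporting task filters on more dimensions than component id): an event is forwarded when it matches the inclusion filter, or when no inclusion filter is set, and does not match the exclusion filter.

```java
import java.util.Set;

// Illustrative sketch of combining an inclusion filter with an exclusion
// filter for provenance events. An empty inclusion set means "include all".
public class ProvenanceFilterSketch {
    private final Set<String> includedComponents;
    private final Set<String> excludedComponents;

    public ProvenanceFilterSketch(Set<String> included, Set<String> excluded) {
        this.includedComponents = included;
        this.excludedComponents = excluded;
    }

    // True when the event from this component should be sent.
    public boolean accept(String componentId) {
        boolean included = includedComponents.isEmpty()
                || includedComponents.contains(componentId);
        return included && !excludedComponents.contains(componentId);
    }
}
```

With this shape, the use case from the description is just `new ProvenanceFilterSketch(Set.of(), Set.of(pgId, inputPortId))`: everything is reported except the process group and the Input Port feeding it.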
[jira] [Commented] (NIFI-4768) Add exclusion filters to SiteToSiteProvenanceReportingTask
[ https://issues.apache.org/jira/browse/NIFI-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324798#comment-16324798 ] ASF subversion and git services commented on NIFI-4768: --- Commit 83d29300953fa86e89cb30c59dcb86ed660557cc in nifi's branch refs/heads/master from [~ca9mbu] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=83d2930 ] NIFI-4768: Add exclusion filters to S2SProvenanceReportingTask NIFI-4768: Updated exclusion logic per review comments This closes #2397. Signed-off-by: Koji Kawamura -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4768) Add exclusion filters to SiteToSiteProvenanceReportingTask
[ https://issues.apache.org/jira/browse/NIFI-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324800#comment-16324800 ] ASF GitHub Bot commented on NIFI-4768: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2397 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi pull request #2397: NIFI-4768: Add exclusion filters to S2SProvenanceRe...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2397 ---
[jira] [Commented] (NIFI-4768) Add exclusion filters to SiteToSiteProvenanceReportingTask
[ https://issues.apache.org/jira/browse/NIFI-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324794#comment-16324794 ] ASF GitHub Bot commented on NIFI-4768: -- Github user ijokarumawak commented on the issue: https://github.com/apache/nifi/pull/2397 @mattyb149 Thanks for incorporating the comments. LGTM +1. Merging in! -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi issue #2397: NIFI-4768: Add exclusion filters to S2SProvenanceReporting...
Github user ijokarumawak commented on the issue: https://github.com/apache/nifi/pull/2397 @mattyb149 Thanks for incorporating the comments. LGTM +1. Merging in! ---
[jira] [Commented] (MINIFICPP-337) Make default log directory 'logs'
[ https://issues.apache.org/jira/browse/MINIFICPP-337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324777#comment-16324777 ] ASF GitHub Bot commented on MINIFICPP-337: -- Github user minifirocks commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/236 @apiri using MINIFI_HOME/logs. please review > Make default log directory 'logs' > - > > Key: MINIFICPP-337 > URL: https://issues.apache.org/jira/browse/MINIFICPP-337 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: marco polo >Assignee: bqiu > Fix For: 0.4.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi-minifi-cpp issue #236: MINIFICPP-337: make default log directory as log...
Github user minifirocks commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/236 @apiri using MINIFI_HOME/logs. please review ---
[jira] [Commented] (NIFI-4487) ConsumerKafka V10 should write Kafka Timestamp attribute
[ https://issues.apache.org/jira/browse/NIFI-4487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324720#comment-16324720 ] Jordan Moore commented on NIFI-4487: I am interested in this ticket for a MirrorMaker-like alternative. For the producer, it would be nice to have the option to use a timestamp that comes either from attributes or from the data itself (though putting a processor ahead of the publish to extract data into an attribute is okay). For the consumer, extract the mentioned metadata and expose it as attributes. > ConsumerKafka V10 should write Kafka Timestamp attribute > > > Key: NIFI-4487 > URL: https://issues.apache.org/jira/browse/NIFI-4487 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions > Affects Versions: 1.4.0, 1.5.0 > Reporter: Pawel Niezgoda > Original Estimate: 1h > Remaining Estimate: 1h > > Starting from Kafka 0.10, Kafka provides information about the message timestamp. > We should expose that data together with the other attributes (like topic, partition, offset). > We should add: > @WritesAttribute(attribute = KafkaProcessorUtils.KAFKA_TIMESTAMP, description = "Kafka message timestamp"), > @WritesAttribute(attribute = KafkaProcessorUtils.KAFKA_TIMESTAMP_TYPE, description = "Kafka message timestamp type") -- This message was sent by Atlassian JIRA (v6.4.14#64029)
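A minimal sketch of the attribute mapping this issue proposes. The `kafka.timestamp` and `kafka.timestamp.type` keys follow the existing `kafka.*` convention but are assumptions here; the exact constants would live in KafkaProcessorUtils. In Kafka 0.10+ each consumed record carries a timestamp and a timestamp type (CREATE_TIME or LOG_APPEND_TIME), available via `ConsumerRecord.timestamp()` and `ConsumerRecord.timestampType()`.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: build the FlowFile attribute map for a consumed Kafka record,
// adding the proposed timestamp attributes next to the existing ones.
public class KafkaTimestampAttributes {
    public static Map<String, String> toAttributes(String topic, int partition,
                                                   long offset, long timestamp,
                                                   String timestampType) {
        Map<String, String> attrs = new HashMap<>();
        attrs.put("kafka.topic", topic);
        attrs.put("kafka.partition", String.valueOf(partition));
        attrs.put("kafka.offset", String.valueOf(offset));
        attrs.put("kafka.timestamp", String.valueOf(timestamp));  // proposed
        attrs.put("kafka.timestamp.type", timestampType);         // proposed
        return attrs;
    }
}
```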
[jira] [Resolved] (NIFI-4751) Release Management for Apache NiFi 1.5.0 RC
[ https://issues.apache.org/jira/browse/NIFI-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph Witt resolved NIFI-4751. --- Resolution: Fixed > Release Management for Apache NiFi 1.5.0 RC > --- > > Key: NIFI-4751 > URL: https://issues.apache.org/jira/browse/NIFI-4751 > Project: Apache NiFi > Issue Type: Bug > Components: Tools and Build >Affects Versions: 1.5.0 >Reporter: Joseph Witt >Assignee: Joseph Witt > Fix For: 1.5.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4751) Release Management for Apache NiFi 1.5.0 RC
[ https://issues.apache.org/jira/browse/NIFI-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324547#comment-16324547 ] ASF subversion and git services commented on NIFI-4751: --- Commit 1821033 from [~joewitt] in branch 'site/trunk' [ https://svn.apache.org/r1821033 ] NIFI-4751 > Release Management for Apache NiFi 1.5.0 RC > --- > > Key: NIFI-4751 > URL: https://issues.apache.org/jira/browse/NIFI-4751 > Project: Apache NiFi > Issue Type: Bug > Components: Tools and Build >Affects Versions: 1.5.0 >Reporter: Joseph Witt >Assignee: Joseph Witt > Fix For: 1.5.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (MINIFICPP-313) GenerateFlowFile 'Unique FlowFiles' property inverted
[ https://issues.apache.org/jira/browse/MINIFICPP-313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dustin Rodrigues reassigned MINIFICPP-313: -- Assignee: Dustin Rodrigues > GenerateFlowFile 'Unique FlowFiles' property inverted > - > > Key: MINIFICPP-313 > URL: https://issues.apache.org/jira/browse/MINIFICPP-313 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Andrew Christianson >Assignee: Dustin Rodrigues >Priority: Minor > Fix For: 0.4.0 > > > 'Unique FlowFiles' must be set to false in order to get unique flow files, > which is backward. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4772) If several processors do not return from their @OnScheduled method, NiFi will stop scheduling any Processors
[ https://issues.apache.org/jira/browse/NIFI-4772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324539#comment-16324539 ] ASF GitHub Bot commented on NIFI-4772: -- Github user joewitt commented on the issue: https://github.com/apache/nifi/pull/2403 I think it makes sense to have the tasks which check the status of component scheduling/unscheduling/etc.. be done in an administrative pool whereby they cannot get blocked/excluded from running in case the primary timer driven pool is exhausted with slow starting items. > If several processors do not return from their @OnScheduled method, NiFi will > stop scheduling any Processors > > > Key: NIFI-4772 > URL: https://issues.apache.org/jira/browse/NIFI-4772 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Critical > > If a Processor does not properly return from its @OnScheduled method and > several instances of the processor are started, we can get into a state where > no Processors can start. We start seeing log messages like the following: > {code} > 2018-01-10 10:16:31,433 WARN [StandardProcessScheduler Thread-1] > o.a.n.controller.StandardProcessorNode Timed out while waiting for > OnScheduled of 'UpdateAttribute' processor to finish. An attempt is made to > cancel the task via Thread.interrupt(). However it does not guarantee that > the task will be canceled since the code inside current OnScheduled operation > may have been written to ignore interrupts which may result in a runaway > thread. This could lead to more issues, eventually requiring NiFi to be > restarted. This is usually a bug in the target Processor > 'UpdateAttribute[id=95423ee6-e6a6-1220-83ad-af20577063bd]' that needs to be > documented, reported and eventually fixed. > 2018-01-10 10:16:42,937 WARN [StandardProcessScheduler Thread-2] > o.a.n.controller.StandardProcessorNode Timed out while waiting for > OnScheduled of 'PutHDFS' processor to finish. 
An attempt is made to cancel > the task via Thread.interrupt(). However it does not guarantee that the task > will be canceled since the code inside current OnScheduled operation may have > been written to ignore interrupts which may result in a runaway thread. This > could lead to more issues, eventually requiring NiFi to be restarted. This is > usually a bug in the target Processor > 'PutHDFS[id=25e531ec-d873-1dec-acc9-ea745e7869ed]' that needs to be > documented, reported and eventually fixed. > 2018-01-10 10:16:46,993 WARN [StandardProcessScheduler Thread-4] > o.a.n.controller.StandardProcessorNode Timed out while waiting for > OnScheduled of 'LogAttribute' processor to finish. An attempt is made to > cancel the task via Thread.interrupt(). However it does not guarantee that > the task will be canceled since the code inside current OnScheduled operation > may have been written to ignore interrupts which may result in a runaway > thread. This could lead to more issues, eventually requiring NiFi to be > restarted. This is usually a bug in the target Processor > 'LogAttribute[id=9a683a06-aa24-19b5--944a0216]' that needs to be > documented, reported and eventually fixed. > {code} > While we should avoid having misbehaving Processors to begin with, the > framework must also be tolerant of this and should not allow one misbehaving > Processor to affect other Processors. > We can "approximate" this issue by following these steps: > 1. Create 1 DebugFlow Processor. Auto-terminate its success & failure > relationships. Set the "@OnScheduled Pause Time" property to "2 mins" > 2. Copy & paste this DebugFlow Processor so that there are at least 8 of them. > 3. Create a GenerateFlowFile Processor and an UpdateAttribute Processor. Send > success of GenerateFlowFile to UpdateAttribute. > 4. Start all of the DebugFlow Processors. > 5. Start the GenerateFlowFile and UpdateAttribute Processors. 
> In this scenario, we will not see the above log messages, because after 1 > minute the DebugFlow Processor is interrupted and the @OnSchedule method > completes Exceptionally. However, we do see that GenerateFlowFile and > UpdateAttribute do not start running until after the 2 minute time window has > elapsed. If DebugFlow instead did not complete Exceptionally, then > GenerateFlowFile and UpdateAttribute would never start running and we would > see the above error messages in the log. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
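The administrative-pool idea in the comment above can be sketched as follows. This is an illustrative, simplified model, not NiFi's actual scheduler classes: the class name, pool sizes, and status message are invented for the example. The point is that a status check submitted to a dedicated pool completes even while every thread of the primary timer-driven pool is occupied by a slow-starting task.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: lifecycle/status checks run on a dedicated
// "administrative" pool so they cannot be starved when the primary
// timer-driven pool is saturated by slow-starting components.
public class AdminPoolSketch {

    // Returns the status-check result even while the timer-driven pool is full.
    static String statusWhileSaturated() throws Exception {
        ExecutorService timerDrivenPool = Executors.newFixedThreadPool(2);
        ScheduledExecutorService adminPool = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch release = new CountDownLatch(1);
        try {
            // Occupy every timer-driven thread with a slow "OnScheduled"-style task.
            for (int i = 0; i < 2; i++) {
                timerDrivenPool.submit(() -> { release.await(); return null; });
            }
            // The administrative pool is unaffected, so this completes promptly.
            Future<String> status = adminPool.submit(() -> "2 components still starting");
            return status.get(5, TimeUnit.SECONDS);
        } finally {
            release.countDown();
            timerDrivenPool.shutdown();
            adminPool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(statusWhileSaturated());
    }
}
```

With a single shared pool, the `status.get(...)` call above would time out instead, which is exactly the starvation scenario described in the issue.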
[jira] [Commented] (MINIFICPP-313) GenerateFlowFile 'Unique FlowFiles' property inverted
[ https://issues.apache.org/jira/browse/MINIFICPP-313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324540#comment-16324540 ] ASF GitHub Bot commented on MINIFICPP-313: -- GitHub user dtrodrigues opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/240 MINIFICPP-313: correctly implement unique flowfiles param for GenerateFlowFile Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/dtrodrigues/nifi-minifi-cpp MINIFICPP-313 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/240.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #240 commit 6ac3279d62e1e511d8a5b8f3cef149816510a31b Author: Dustin Rodrigues Date: 2018-01-12T20:47:37Z MINIFICPP-313: correctly implement unique flowfiles param for GenerateFlowFile > GenerateFlowFile 'Unique FlowFiles' property inverted > - > > Key: MINIFICPP-313 > URL: https://issues.apache.org/jira/browse/MINIFICPP-313 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Andrew Christianson >Priority: Minor > Fix For: 0.4.0 > > > 'Unique FlowFiles' must be set to false in order to get unique flow files, > which is backward. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4751) Release Management for Apache NiFi 1.5.0 RC
[ https://issues.apache.org/jira/browse/NIFI-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324536#comment-16324536 ] ASF subversion and git services commented on NIFI-4751: --- Commit 1821031 from [~joewitt] in branch 'site/trunk' [ https://svn.apache.org/r1821031 ] NIFI-4751 > Release Management for Apache NiFi 1.5.0 RC > --- > > Key: NIFI-4751 > URL: https://issues.apache.org/jira/browse/NIFI-4751 > Project: Apache NiFi > Issue Type: Bug > Components: Tools and Build >Affects Versions: 1.5.0 >Reporter: Joseph Witt >Assignee: Joseph Witt > Fix For: 1.5.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4751) Release Management for Apache NiFi 1.5.0 RC
[ https://issues.apache.org/jira/browse/NIFI-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324517#comment-16324517 ] ASF subversion and git services commented on NIFI-4751: --- Commit 5e3867011e106b8dab4097ff5dac1d462f687a81 in nifi's branch refs/heads/master from [~joewitt] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=5e38670 ] NIFI-4751 updated docker version > Release Management for Apache NiFi 1.5.0 RC > --- > > Key: NIFI-4751 > URL: https://issues.apache.org/jira/browse/NIFI-4751 > Project: Apache NiFi > Issue Type: Bug > Components: Tools and Build >Affects Versions: 1.5.0 >Reporter: Joseph Witt >Assignee: Joseph Witt > Fix For: 1.5.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4751) Release Management for Apache NiFi 1.5.0 RC
[ https://issues.apache.org/jira/browse/NIFI-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324513#comment-16324513 ] ASF subversion and git services commented on NIFI-4751: --- Commit 42edfa75b72b15aa3e0ce6d2aef7af39718409ce in nifi's branch refs/heads/master from [~joewitt] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=42edfa7 ] Merge branch 'NIFI-4751-RC1' > Release Management for Apache NiFi 1.5.0 RC > --- > > Key: NIFI-4751 > URL: https://issues.apache.org/jira/browse/NIFI-4751 > Project: Apache NiFi > Issue Type: Bug > Components: Tools and Build >Affects Versions: 1.5.0 >Reporter: Joseph Witt >Assignee: Joseph Witt > Fix For: 1.5.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4751) Release Management for Apache NiFi 1.5.0 RC
[ https://issues.apache.org/jira/browse/NIFI-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324512#comment-16324512 ] ASF subversion and git services commented on NIFI-4751: --- Commit 36405e888cb367a6449e6db936cc07f837334dd5 in nifi's branch refs/heads/master from [~joewitt] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=36405e8 ] NIFI-4751-RC1 prepare for next development iteration > Release Management for Apache NiFi 1.5.0 RC > --- > > Key: NIFI-4751 > URL: https://issues.apache.org/jira/browse/NIFI-4751 > Project: Apache NiFi > Issue Type: Bug > Components: Tools and Build >Affects Versions: 1.5.0 >Reporter: Joseph Witt >Assignee: Joseph Witt > Fix For: 1.5.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4751) Release Management for Apache NiFi 1.5.0 RC
[ https://issues.apache.org/jira/browse/NIFI-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324514#comment-16324514 ] ASF subversion and git services commented on NIFI-4751: --- Commit 41ce788812d4119cbbddad019d22e7239e7c3f5f in nifi's branch refs/heads/master from [~joewitt] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=41ce788 ] NIFI-4751 changed to next minor release version snapshot > Release Management for Apache NiFi 1.5.0 RC > --- > > Key: NIFI-4751 > URL: https://issues.apache.org/jira/browse/NIFI-4751 > Project: Apache NiFi > Issue Type: Bug > Components: Tools and Build >Affects Versions: 1.5.0 >Reporter: Joseph Witt >Assignee: Joseph Witt > Fix For: 1.5.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (NIFI-4775) Allow FlowFile Repository to optionally perform fsync when writing CREATE events but not other events
Mark Payne created NIFI-4775: Summary: Allow FlowFile Repository to optionally perform fsync when writing CREATE events but not other events Key: NIFI-4775 URL: https://issues.apache.org/jira/browse/NIFI-4775 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Reporter: Mark Payne Currently, when a FlowFile is written to the FlowFile Repository, the repo can either fsync or not, depending on nifi.properties. We should allow a third option, of fsync only for CREATE events. In this case, if we receive new data from a source we can fsync the update to the FlowFile Repository before ACK'ing the data from the source. This allows us to guarantee data persistence without the overhead of an fsync for every FlowFile Repository update. It may make sense, though, to be a bit more selective about when to do this. For example, if the source is a system that does not allow us to acknowledge the receipt of data, such as a ListenUDP processor, this doesn't really buy us much. In such a case, we could be smart about avoiding the high cost of an fsync. However, for something like GetSFTP, where we have to remove the file in order to 'acknowledge receipt', we can ensure that we wait for the fsync before proceeding. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
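The proposed third option above can be sketched as a simple policy check. The enum names and method below are illustrative assumptions, not actual NiFi configuration values or APIs: the idea is only that the fsync cost is paid for CREATE events (i.e. before acknowledging newly received data) and skipped for all other repository updates.

```java
// Hypothetical sketch of the proposed fsync policy; names are invented
// for illustration and do not correspond to real nifi.properties values.
public class FsyncPolicySketch {

    enum SyncPolicy { ALWAYS, NEVER, CREATE_ONLY }

    enum RepositoryEvent { CREATE, UPDATE, DROP }

    static boolean shouldFsync(SyncPolicy policy, RepositoryEvent event) {
        switch (policy) {
            case ALWAYS:
                return true;
            case CREATE_ONLY:
                // Pay the fsync cost only when acknowledging receipt from a source.
                return event == RepositoryEvent.CREATE;
            case NEVER:
            default:
                return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(shouldFsync(SyncPolicy.CREATE_ONLY, RepositoryEvent.CREATE)); // true
        System.out.println(shouldFsync(SyncPolicy.CREATE_ONLY, RepositoryEvent.UPDATE)); // false
    }
}
```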
[jira] [Created] (NIFI-4774) FlowFile Repository should write updates to the same FlowFile to the same partition
Mark Payne created NIFI-4774: Summary: FlowFile Repository should write updates to the same FlowFile to the same partition Key: NIFI-4774 URL: https://issues.apache.org/jira/browse/NIFI-4774 Project: Apache NiFi Issue Type: Bug Components: Core Framework Reporter: Mark Payne Assignee: Mark Payne As-is, in the case of power loss or Operating System crash, we could have an update that is lost, and then an update for the same FlowFile that is not lost, because the updates for a given FlowFile can span partitions. If an update were written to Partition 1 and then to Partition 2 and Partition 2 is flushed to disk by the Operating System and then the Operating System crashes or power is lost before Partition 1 is flushed to disk, we could lose the update to Partition 1. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
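The fix direction described above (pin every update for a given FlowFile to one partition) can be sketched by deriving the partition index deterministically from the FlowFile's identifier instead of assigning partitions round-robin. This is an assumption about the approach, not the actual repository code:

```java
// Illustrative sketch: a deterministic partition choice means updates for one
// FlowFile can never be split across partitions with different flush states.
public class PartitionBySketch {

    // Deterministic, non-negative partition index derived from the FlowFile id.
    static int partitionFor(String flowFileId, int partitionCount) {
        return Math.floorMod(flowFileId.hashCode(), partitionCount);
    }

    public static void main(String[] args) {
        String id = "95423ee6-e6a6-1220-83ad-af20577063bd";
        // Repeated updates for the same FlowFile always map to one partition.
        System.out.println(partitionFor(id, 16) == partitionFor(id, 16)); // true
    }
}
```

With round-robin assignment, update N could land in a partition the OS flushed while update N-1 sits in one it did not, which is exactly the power-loss inconsistency the issue describes.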
[GitHub] nifi-minifi pull request #109: MINIFI-421 Updating MiNiFi to make use of NiF...
GitHub user apiri opened a pull request: https://github.com/apache/nifi-minifi/pull/109 MINIFI-421 Updating MiNiFi to make use of NiFi 1.5 libraries MINIFI-421 Updating MiNiFi to make use of NiFi 1.5 libraries and providing handling for the switch to RPG ports to make use of targetId where available in 1.2 encoded templates. Updating some deprecated code usages. Thank you for submitting a contribution to Apache NiFi - MiNiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [X] Does your PR title start with MINIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [X] Has your PR been rebased against the latest commit within the target branch (typically master)? - [X] Is your initial contribution a single, squashed commit? ### For code changes: - [X] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi-minifi folder? - [X] Have you written or updated unit tests to verify your changes? - [-] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [-] If applicable, have you updated the LICENSE file, including the main LICENSE file under minifi-assembly? - [-] If applicable, have you updated the NOTICE file, including the main NOTICE file found under minifi-assembly? ### For documentation related changes: - [-] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/apiri/nifi-minifi MINIFI-421 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi/pull/109.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #109 commit c837dc88080b70131f1f071e3c96c3726402e49e Author: Aldrin Piri Date: 2018-01-02T19:14:50Z MINIFI-421 Updating MiNiFi to make use of NiFi 1.5 libraries and providing handling for the switch to RPG ports to make use of targetId where available in 1.2 encoded templates. Updating some deprecated code usages. ---
[jira] [Commented] (NIFI-4773) CreateDatabaseTable setup is incorrect
[ https://issues.apache.org/jira/browse/NIFI-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324437#comment-16324437 ] Mark Payne commented on NIFI-4773: -- I believe the issue here is that QueryDatabaseTable is connecting to a database in the @OnScheduled method. This method can then hang indefinitely in cases. We should avoid making the connection to the database in @OnScheduled and instead do it in the onTrigger method. We should also double-check that timeouts are always used. > CreateDatabaseTable setup is incorrect > -- > > Key: NIFI-4773 > URL: https://issues.apache.org/jira/browse/NIFI-4773 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.2.0 >Reporter: Wynner > > The QueryDatabaseTable processor calls the CreateDatabaseTable method, during > setup this method is doing bad stuff in the OnSchedule method. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
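The fix Mark Payne suggests, deferring the potentially hanging database connection from @OnScheduled to onTrigger, can be sketched with a generic lazy-initialization holder. The class below is an illustrative assumption and does not reproduce NiFi's real processor API: the point is that nothing blocking happens at schedule time, and the expensive resource is created on first use from onTrigger.

```java
import java.util.function.Supplier;

// Sketch of lazy initialization: the possibly-hanging resource (e.g. a DB
// connection) is created on first use, not in the scheduling callback, so
// scheduler threads are never blocked by it.
public class LazyResource<T> {

    private final Supplier<T> factory;
    private volatile T resource;

    public LazyResource(Supplier<T> factory) {
        this.factory = factory;
    }

    // Double-checked lazy initialization; call this from onTrigger rather
    // than from the scheduling callback.
    public T get() {
        T r = resource;
        if (r == null) {
            synchronized (this) {
                if (resource == null) {
                    resource = factory.get();
                }
                r = resource;
            }
        }
        return r;
    }

    public boolean isInitialized() {
        return resource != null;
    }

    public static void main(String[] args) {
        LazyResource<String> conn = new LazyResource<>(() -> "db-connection");
        System.out.println(conn.isInitialized()); // false: nothing connects at schedule time
        System.out.println(conn.get());           // connects on first trigger, reused after
    }
}
```

As the comment notes, a timeout on the connection attempt itself is still needed; laziness only moves the blocking off the scheduler threads.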
[jira] [Created] (NIFI-4773) CreateDatabaseTable setup is incorrect
Wynner created NIFI-4773: Summary: CreateDatabaseTable setup is incorrect Key: NIFI-4773 URL: https://issues.apache.org/jira/browse/NIFI-4773 Project: Apache NiFi Issue Type: Bug Components: Extensions Affects Versions: 1.2.0 Reporter: Wynner The QueryDatabaseTable processor calls the CreateDatabaseTable method during setup; this method performs problematic work in the @OnScheduled method. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4772) If several processors do not return from their @OnScheduled method, NiFi will stop scheduling any Processors
[ https://issues.apache.org/jira/browse/NIFI-4772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324410#comment-16324410 ] ASF GitHub Bot commented on NIFI-4772: -- GitHub user markap14 opened a pull request: https://github.com/apache/nifi/pull/2403 NIFI-4772: Refactored how the @OnScheduled methods of processors is i… …nvoked/monitored. The new method does away with the two previously created 8-thread thread pools and just uses the Timer-Driven thread pool that is used by other framework tasks. Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? 
### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/markap14/nifi NIFI-4772 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2403.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2403 commit c59645e0bc780ca3f2f997437902fe4e498f528d Author: Mark Payne Date: 2018-01-12T19:12:57Z NIFI-4772: Refactored how the @OnScheduled methods of processors is invoked/monitored. The new method does away with the two previously created 8-thread thread pools and just uses the Timer-Driven thread pool that is used by other framework tasks. > If several processors do not return from their @OnScheduled method, NiFi will > stop scheduling any Processors > > > Key: NIFI-4772 > URL: https://issues.apache.org/jira/browse/NIFI-4772 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Critical > > If a Processor does not properly return from its @OnScheduled method and > several instances of the processor are started, we can get into a state where > no Processors can start. We start seeing log messages like the following: > {code} > 2018-01-10 10:16:31,433 WARN [StandardProcessScheduler Thread-1] > o.a.n.controller.StandardProcessorNode Timed out while waiting for > OnScheduled of 'UpdateAttribute' processor to finish. An attempt is made to > cancel the task via Thread.interrupt(). 
However it does not guarantee that > the task will be canceled since the code inside current OnScheduled operation > may have been written to ignore interrupts which may result in a runaway > thread. This could lead to more issues, eventually requiring NiFi to be > restarted. This is usually a bug in the target Processor > 'UpdateAttribute[id=95423ee6-e6a6-1220-83ad-af20577063bd]' that needs to be > documented, reported and eventually fixed. > 2018-01-10 10:16:42,937 WARN [StandardProcessScheduler Thread-2] > o.a.n.controller.StandardProcessorNode Timed out while waiting for > OnScheduled of 'PutHDFS' processor to finish. An attempt is made to cancel > the task via Thread.interrupt(). However it does not guarantee that the task > will be canceled since the code inside current
[jira] [Created] (NIFI-4772) If several processors do not return from their @OnScheduled method, NiFi will stop scheduling any Processors
Mark Payne created NIFI-4772: Summary: If several processors do not return from their @OnScheduled method, NiFi will stop scheduling any Processors Key: NIFI-4772 URL: https://issues.apache.org/jira/browse/NIFI-4772 Project: Apache NiFi Issue Type: Bug Components: Core Framework Reporter: Mark Payne Assignee: Mark Payne If a Processor does not properly return from its @OnScheduled method and several instances of the processor are started, we can get into a state where no Processors can start. We start seeing log messages like the following: {code} 2018-01-10 10:16:31,433 WARN [StandardProcessScheduler Thread-1] o.a.n.controller.StandardProcessorNode Timed out while waiting for OnScheduled of 'UpdateAttribute' processor to finish. An attempt is made to cancel the task via Thread.interrupt(). However it does not guarantee that the task will be canceled since the code inside current OnScheduled operation may have been written to ignore interrupts which may result in a runaway thread. This could lead to more issues, eventually requiring NiFi to be restarted. This is usually a bug in the target Processor 'UpdateAttribute[id=95423ee6-e6a6-1220-83ad-af20577063bd]' that needs to be documented, reported and eventually fixed. 2018-01-10 10:16:42,937 WARN [StandardProcessScheduler Thread-2] o.a.n.controller.StandardProcessorNode Timed out while waiting for OnScheduled of 'PutHDFS' processor to finish. An attempt is made to cancel the task via Thread.interrupt(). However it does not guarantee that the task will be canceled since the code inside current OnScheduled operation may have been written to ignore interrupts which may result in a runaway thread. This could lead to more issues, eventually requiring NiFi to be restarted. This is usually a bug in the target Processor 'PutHDFS[id=25e531ec-d873-1dec-acc9-ea745e7869ed]' that needs to be documented, reported and eventually fixed. 
2018-01-10 10:16:46,993 WARN [StandardProcessScheduler Thread-4] o.a.n.controller.StandardProcessorNode Timed out while waiting for OnScheduled of 'LogAttribute' processor to finish. An attempt is made to cancel the task via Thread.interrupt(). However it does not guarantee that the task will be canceled since the code inside current OnScheduled operation may have been written to ignore interrupts which may result in a runaway thread. This could lead to more issues, eventually requiring NiFi to be restarted. This is usually a bug in the target Processor 'LogAttribute[id=9a683a06-aa24-19b5--944a0216]' that needs to be documented, reported and eventually fixed. {code} While we should avoid having misbehaving Processors to begin with, the framework must also be tolerant of this and should not allow one misbehaving Processor from affecting other Processors. We can "approximate" this issue by following these steps: 1. Create 1 DebugFlow Processor. Auto-terminate its success & failure relationships. Set the "@OnScheduled Pause Time" property to "2 mins" 2. Copy & paste this DebugFlow Processor so that there are at least 8 of them. 3. Create a GenerateFlowFile Processor and an UpdateAttribute Processor. Send success of GenerateFlowFile to UpdateAttribute. 4. Start all of the DebugFlow Processors. 5. Start the GenerateFlowFIle and UpdateAttribute Processors. In this scenario, we will not see the above log messages, because after 1 minute the DebugFlow Processor is interrupted and the @OnSchedule method completes Exceptionally. However, we do see that GenerateFlowFile and UpdateAttribute do not start running until after the 2 minute time window has elapsed. If DebugFlow instead did not complete Exceptionally, then GenerateFlowFile and UpdateAttribute would never start running and we would see the above error messages in the log. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
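The "runaway thread" behavior warned about in the log excerpts above can be demonstrated in miniature. The class below is an illustrative sketch, not processor code: it simulates an @OnScheduled body that swallows InterruptedException, so the framework's Thread.interrupt() cancel attempt has no effect and the task runs to completion on its own schedule.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of why Thread.interrupt() cannot cancel a misbehaving
// scheduled callback: code that swallows the interrupt keeps running, exactly
// as the warning message in the log describes.
public class RunawayOnScheduled {

    // Simulates a callback that ignores interrupts for roughly 200 ms.
    static boolean runsToCompletionDespiteInterrupt() throws InterruptedException {
        CountDownLatch finished = new CountDownLatch(1);
        Thread t = new Thread(() -> {
            long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(200);
            while (System.nanoTime() < deadline) {
                try {
                    Thread.sleep(10);
                } catch (InterruptedException ignored) {
                    // Swallowing the interrupt: the cancel attempt has no effect.
                }
            }
            finished.countDown();
        });
        t.start();
        t.interrupt();                               // the framework's attempt to cancel
        return finished.await(5, TimeUnit.SECONDS);  // the task still finishes on its own
    }

    public static void main(String[] args) throws Exception {
        System.out.println("task survived interrupt: " + runsToCompletionDespiteInterrupt());
    }
}
```

Had the loop re-asserted or honored the interrupt, the thread would have ended early; because it does not, the only framework-side defense is isolating such tasks, as the refactoring in PR 2403 does.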
[jira] [Updated] (NIFI-4772) If several processors do not return from their @OnScheduled method, NiFi will stop scheduling any Processors
[ https://issues.apache.org/jira/browse/NIFI-4772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-4772: - Priority: Critical (was: Major) > If several processors do not return from their @OnScheduled method, NiFi will > stop scheduling any Processors > > > Key: NIFI-4772 > URL: https://issues.apache.org/jira/browse/NIFI-4772 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Critical > > If a Processor does not properly return from its @OnScheduled method and > several instances of the processor are started, we can get into a state where > no Processors can start. We start seeing log messages like the following: > {code} > 2018-01-10 10:16:31,433 WARN [StandardProcessScheduler Thread-1] > o.a.n.controller.StandardProcessorNode Timed out while waiting for > OnScheduled of 'UpdateAttribute' processor to finish. An attempt is made to > cancel the task via Thread.interrupt(). However it does not guarantee that > the task will be canceled since the code inside current OnScheduled operation > may have been written to ignore interrupts which may result in a runaway > thread. This could lead to more issues, eventually requiring NiFi to be > restarted. This is usually a bug in the target Processor > 'UpdateAttribute[id=95423ee6-e6a6-1220-83ad-af20577063bd]' that needs to be > documented, reported and eventually fixed. > 2018-01-10 10:16:42,937 WARN [StandardProcessScheduler Thread-2] > o.a.n.controller.StandardProcessorNode Timed out while waiting for > OnScheduled of 'PutHDFS' processor to finish. An attempt is made to cancel > the task via Thread.interrupt(). However it does not guarantee that the task > will be canceled since the code inside current OnScheduled operation may have > been written to ignore interrupts which may result in a runaway thread. This > could lead to more issues, eventually requiring NiFi to be restarted. 
This is > usually a bug in the target Processor > 'PutHDFS[id=25e531ec-d873-1dec-acc9-ea745e7869ed]' that needs to be > documented, reported and eventually fixed. > 2018-01-10 10:16:46,993 WARN [StandardProcessScheduler Thread-4] > o.a.n.controller.StandardProcessorNode Timed out while waiting for > OnScheduled of 'LogAttribute' processor to finish. An attempt is made to > cancel the task via Thread.interrupt(). However it does not guarantee that > the task will be canceled since the code inside current OnScheduled operation > may have been written to ignore interrupts which may result in a runaway > thread. This could lead to more issues, eventually requiring NiFi to be > restarted. This is usually a bug in the target Processor > 'LogAttribute[id=9a683a06-aa24-19b5--944a0216]' that needs to be > documented, reported and eventually fixed. > {code} > While we should avoid having misbehaving Processors to begin with, the > framework must also be tolerant of this and should not allow one misbehaving > Processor from affecting other Processors. > We can "approximate" this issue by following these steps: > 1. Create 1 DebugFlow Processor. Auto-terminate its success & failure > relationships. Set the "@OnScheduled Pause Time" property to "2 mins" > 2. Copy & paste this DebugFlow Processor so that there are at least 8 of them. > 3. Create a GenerateFlowFile Processor and an UpdateAttribute Processor. Send > success of GenerateFlowFile to UpdateAttribute. > 4. Start all of the DebugFlow Processors. > 5. Start the GenerateFlowFIle and UpdateAttribute Processors. > In this scenario, we will not see the above log messages, because after 1 > minute the DebugFlow Processor is interrupted and the @OnSchedule method > completes Exceptionally. However, we do see that GenerateFlowFile and > UpdateAttribute do not start running until after the 2 minute time window has > elapsed. 
If DebugFlow instead did not complete Exceptionally, then > GenerateFlowFile and UpdateAttribute would never start running and we would > see the above error messages in the log. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4390) Add a keyboard shortcut for Connection related dialogs
[ https://issues.apache.org/jira/browse/NIFI-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324386#comment-16324386 ] ASF GitHub Bot commented on NIFI-4390: -- Github user mcgilman commented on a diff in the pull request: https://github.com/apache/nifi/pull/2157#discussion_r161298805 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/jquery/modal/jquery.modal.js --- @@ -50,6 +50,7 @@ * disabled: isDisabledFunction, * handler: { * click: applyHandler + * keyup: keyupHandler --- End diff -- Is there a reason the `keyupHandler` is specified per button? If both Apply and Cancel have keyupHandlers specified, which one is called? Can one handler 'consume' the event and prevent further listeners from triggering? Does it make sense to only allow a single key listener per modal? In other places, we have placed key listeners on components within a dialog. Was that not an option here? How do the key listeners affect those if configured at the same time? > Add a keyboard shortcut for Connection related dialogs > -- > > Key: NIFI-4390 > URL: https://issues.apache.org/jira/browse/NIFI-4390 > Project: Apache NiFi > Issue Type: Improvement > Components: Core UI >Reporter: Yuri >Priority: Minor > Labels: dialogs, shortcuts, ui, ux > Attachments: nifi_dialogs_v1.ods > > > Current dialogs don't allow binding a keyboard shortcut to an action. This > hinders the UX, since there are many dialogs involved in the most common > interactions. > For instance, adding a new connection with a single relationship still > requires a click on the confirm button. Instead, it should be possible to > confirm the dialog simply by hitting the Enter key. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4390) Add a keyboard shortcut for Connection related dialogs
[ https://issues.apache.org/jira/browse/NIFI-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324387#comment-16324387 ] ASF GitHub Bot commented on NIFI-4390: -- Github user mcgilman commented on a diff in the pull request: https://github.com/apache/nifi/pull/2157#discussion_r161299941 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/jquery/modal/jquery.modal.js --- @@ -127,6 +128,10 @@ // check if the button should be disabled if (isDisabled()) { button.addClass('disabled-button'); +// remove keyup listener +if (isDefinedAndNotNull(buttonConfig.handler) && isDefinedAndNotNull(buttonConfig.handler.keyup) && typeof buttonConfig.handler.keyup === 'function') { +document.removeEventListener('keyup', buttonConfig.handler.keyup, true); --- End diff -- Is there a reason we're not using jQuery for adding/removing the event listeners? This isn't a big deal but I'd prefer to remain consistent with the rest of the code base if possible. Does it have to do with the useCapture flag you've specified here? > Add a keyboard shortcut for Connection related dialogs > -- > > Key: NIFI-4390 > URL: https://issues.apache.org/jira/browse/NIFI-4390 > Project: Apache NiFi > Issue Type: Improvement > Components: Core UI >Reporter: Yuri >Priority: Minor > Labels: dialogs, shortcuts, ui, ux > Attachments: nifi_dialogs_v1.ods > > > Current dialogs don't allow binding a keyboard shortcut to an action. This > hinders the UX, since there are many dialogs involved in the most common > interactions. > For instance, adding a new connection with a single relationship still > requires a click on the confirm button. Instead, it should be possible to > confirm the dialog simply by hitting the Enter key. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4428) Implement PutDruid Processor and Controller
[ https://issues.apache.org/jira/browse/NIFI-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324382#comment-16324382 ] ASF GitHub Bot commented on NIFI-4428: -- Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2310#discussion_r161298748 --- Diff: nifi-assembly/LICENSE --- @@ -2073,3 +2073,93 @@ style license. WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + + The binary distribution of this product bundles 'ANTLR 4' which is available +under a "3-clause BSD" license. For details see http://www.antlr.org/license.html + + Copyright (c) 2012 Terence Parr and Sam Harwell + All rights reserved. + Redistribution and use in source and binary forms, with or without modification, are permitted + provided that the following conditions are met: + + Redistributions of source code must retain the above copyright notice, this list of + conditions and the following disclaimer. + Redistributions in binary form must reproduce the above copyright notice, this list of + conditions and the following disclaimer in the documentation and/or other materials + provided with the distribution. + + Neither the name of the author nor the names of its contributors may be used to endorse + or promote products derived from this software without specific prior written permission. + + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY + EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF + MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL + THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, + PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS + INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF + THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + + The binary distribution of this product bundles 'icu4j' +which is available under a X-style license. --- End diff -- Done, and squashed to 2 commits (original from Vadim and one of mine) > Implement PutDruid Processor and Controller > --- > > Key: NIFI-4428 > URL: https://issues.apache.org/jira/browse/NIFI-4428 > Project: Apache NiFi > Issue Type: New Feature >Affects Versions: 1.3.0 >Reporter: Vadim Vaks >Assignee: Matt Burgess > > Implement a PutDruid Processor and Controller using the Tranquility API. This > will enable NiFi to index contents of flow files in Druid. The implementation > should also be able to handle late arriving data (event timestamp points to a > Druid indexing task that has closed, segment granularity and grace window > period expired). Late arriving data is typically dropped. NiFi should allow > late arriving data to be diverted to a FAILED or DROPPED relationship. That > would allow late arriving data to be stored on HDFS or S3 until a re-indexing > task can merge it into the correct segment in deep storage. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4428) Implement PutDruid Processor and Controller
[ https://issues.apache.org/jira/browse/NIFI-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324377#comment-16324377 ] ASF GitHub Bot commented on NIFI-4428: -- Github user joewitt commented on a diff in the pull request: https://github.com/apache/nifi/pull/2310#discussion_r161298116 --- Diff: nifi-assembly/LICENSE --- @@ -2073,3 +2073,93 @@ style license. WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + + The binary distribution of this product bundles 'ANTLR 4' which is available +under a "3-clause BSD" license. For details see http://www.antlr.org/license.html + + Copyright (c) 2012 Terence Parr and Sam Harwell + All rights reserved. + Redistribution and use in source and binary forms, with or without modification, are permitted + provided that the following conditions are met: + + Redistributions of source code must retain the above copyright notice, this list of + conditions and the following disclaimer. + Redistributions in binary form must reproduce the above copyright notice, this list of + conditions and the following disclaimer in the documentation and/or other materials + provided with the distribution. + + Neither the name of the author nor the names of its contributors may be used to endorse + or promote products derived from this software without specific prior written permission. + + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY + EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF + MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL + THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, + PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS + INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF + THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + + The binary distribution of this product bundles 'icu4j' +which is available under a X-style license. --- End diff -- Please refer to it as the ICU License as noted here http://source.icu-project.org/repos/icu/icu4j/tags/release-4-8-1/main/shared/licenses/license.html. Fortunately that is in the Apache Category A so we're good here. Just call it out as the ICU License as in the License text instead of 'X-Style'. Thanks > Implement PutDruid Processor and Controller > --- > > Key: NIFI-4428 > URL: https://issues.apache.org/jira/browse/NIFI-4428 > Project: Apache NiFi > Issue Type: New Feature >Affects Versions: 1.3.0 >Reporter: Vadim Vaks >Assignee: Matt Burgess > > Implement a PutDruid Processor and Controller using the Tranquility API. This > will enable NiFi to index contents of flow files in Druid. The implementation > should also be able to handle late arriving data (event timestamp points to a > Druid indexing task that has closed, segment granularity and grace window > period expired). Late arriving data is typically dropped. NiFi should allow > late arriving data to be diverted to a FAILED or DROPPED relationship. That > would allow late arriving data to be stored on HDFS or S3 until a re-indexing > task can merge it into the correct segment in deep storage. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4428) Implement PutDruid Processor and Controller
[ https://issues.apache.org/jira/browse/NIFI-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324374#comment-16324374 ] ASF GitHub Bot commented on NIFI-4428: -- Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2310#discussion_r161297870 --- Diff: nifi-assembly/LICENSE --- @@ -2073,3 +2073,93 @@ style license. WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + + The binary distribution of this product bundles 'ANTLR 4' which is available +under a "3-clause BSD" license. For details see http://www.antlr.org/license.html + + Copyright (c) 2012 Terence Parr and Sam Harwell + All rights reserved. + Redistribution and use in source and binary forms, with or without modification, are permitted + provided that the following conditions are met: + + Redistributions of source code must retain the above copyright notice, this list of + conditions and the following disclaimer. + Redistributions in binary form must reproduce the above copyright notice, this list of + conditions and the following disclaimer in the documentation and/or other materials + provided with the distribution. + + Neither the name of the author nor the names of its contributors may be used to endorse + or promote products derived from this software without specific prior written permission. + + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY + EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF + MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL + THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, + PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS + INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF + THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + + The binary distribution of this product bundles 'icu4j' +which is available under a X-style license. --- End diff -- The reference to "X-Style" comes from [this page](http://source.icu-project.org/repos/icu/icu4j/tags/release-4-8-1/readme.html#license). However, the [license](http://source.icu-project.org/repos/icu/icu4j/tags/release-4-8-1/main/shared/licenses/license.html) does look MIT-style. > Implement PutDruid Processor and Controller > --- > > Key: NIFI-4428 > URL: https://issues.apache.org/jira/browse/NIFI-4428 > Project: Apache NiFi > Issue Type: New Feature >Affects Versions: 1.3.0 >Reporter: Vadim Vaks >Assignee: Matt Burgess > > Implement a PutDruid Processor and Controller using the Tranquility API. This > will enable NiFi to index contents of flow files in Druid. The implementation > should also be able to handle late arriving data (event timestamp points to a > Druid indexing task that has closed, segment granularity and grace window > period expired). Late arriving data is typically dropped. NiFi should allow > late arriving data to be diverted to a FAILED or DROPPED relationship. That > would allow late arriving data to be stored on HDFS or S3 until a re-indexing > task can merge it into the correct segment in deep storage. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4428) Implement PutDruid Processor and Controller
[ https://issues.apache.org/jira/browse/NIFI-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324369#comment-16324369 ] ASF GitHub Bot commented on NIFI-4428: -- Github user joewitt commented on a diff in the pull request: https://github.com/apache/nifi/pull/2310#discussion_r161297207 --- Diff: nifi-assembly/LICENSE --- @@ -2073,3 +2073,93 @@ style license. WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + + The binary distribution of this product bundles 'ANTLR 4' which is available +under a "3-clause BSD" license. For details see http://www.antlr.org/license.html + + Copyright (c) 2012 Terence Parr and Sam Harwell + All rights reserved. + Redistribution and use in source and binary forms, with or without modification, are permitted + provided that the following conditions are met: + + Redistributions of source code must retain the above copyright notice, this list of + conditions and the following disclaimer. + Redistributions in binary form must reproduce the above copyright notice, this list of + conditions and the following disclaimer in the documentation and/or other materials + provided with the distribution. + + Neither the name of the author nor the names of its contributors may be used to endorse + or promote products derived from this software without specific prior written permission. + + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY + EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF + MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL + THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, + PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS + INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF + THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + + The binary distribution of this product bundles 'icu4j' +which is available under a X-style license. --- End diff -- Cannot have 'X-Style'. Is this MIT? Can you share the link to the LICENSE this is coming from? > Implement PutDruid Processor and Controller > --- > > Key: NIFI-4428 > URL: https://issues.apache.org/jira/browse/NIFI-4428 > Project: Apache NiFi > Issue Type: New Feature >Affects Versions: 1.3.0 >Reporter: Vadim Vaks >Assignee: Matt Burgess > > Implement a PutDruid Processor and Controller using the Tranquility API. This > will enable NiFi to index contents of flow files in Druid. The implementation > should also be able to handle late arriving data (event timestamp points to a > Druid indexing task that has closed, segment granularity and grace window > period expired). Late arriving data is typically dropped. NiFi should allow > late arriving data to be diverted to a FAILED or DROPPED relationship. That > would allow late arriving data to be stored on HDFS or S3 until a re-indexing > task can merge it into the correct segment in deep storage. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (MINIFICPP-337) Make default log directory 'logs'
[ https://issues.apache.org/jira/browse/MINIFICPP-337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324360#comment-16324360 ] ASF GitHub Bot commented on MINIFICPP-337: -- Github user minifirocks commented on a diff in the pull request: https://github.com/apache/nifi-minifi-cpp/pull/236#discussion_r161296430 --- Diff: libminifi/src/core/logging/LoggerConfiguration.cpp --- @@ -110,6 +112,17 @@ std::shared_ptr LoggerConfiguration::initialize_names if (!logger_properties->get(appender_key + ".file_name", file_name)) { file_name = "minifi-app.log"; } + std::string directory = ""; + if (logger_properties->get(appender_key + ".directory", directory)) { +// Create the log directory if needed +struct stat logDirStat; +if (stat(directory.c_str(), &logDirStat) != 0 || !S_ISDIR(logDirStat.st_mode)) { + if (mkdir(directory.c_str(), 0777) == -1) { --- End diff -- so if you start bin/minifi.sh, the log directory will be ./logs relative to where you start the script > Make default log directory 'logs' > - > > Key: MINIFICPP-337 > URL: https://issues.apache.org/jira/browse/MINIFICPP-337 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: marco polo >Assignee: bqiu > Fix For: 0.4.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (MINIFICPP-337) Make default log directory 'logs'
[ https://issues.apache.org/jira/browse/MINIFICPP-337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324354#comment-16324354 ] ASF GitHub Bot commented on MINIFICPP-337: -- Github user minifirocks commented on a diff in the pull request: https://github.com/apache/nifi-minifi-cpp/pull/236#discussion_r161295854 --- Diff: libminifi/src/core/logging/LoggerConfiguration.cpp --- @@ -110,6 +112,17 @@ std::shared_ptr LoggerConfiguration::initialize_names if (!logger_properties->get(appender_key + ".file_name", file_name)) { file_name = "minifi-app.log"; } + std::string directory = ""; + if (logger_properties->get(appender_key + ".directory", directory)) { +// Create the log directory if needed +struct stat logDirStat; +if (stat(directory.c_str(), &logDirStat) != 0 || !S_ISDIR(logDirStat.st_mode)) { + if (mkdir(directory.c_str(), 0777) == -1) { --- End diff -- We read the directory variable from minifi-log.properties, which specifies the log file name and log directory. The directory can be an absolute or a relative path. > Make default log directory 'logs' > - > > Key: MINIFICPP-337 > URL: https://issues.apache.org/jira/browse/MINIFICPP-337 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: marco polo >Assignee: bqiu > Fix For: 0.4.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
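The create-if-missing check that the C++ diff above performs with stat()/mkdir() can be mirrored in Java for comparison. This is a hypothetical stand-alone helper, not MiNiFi code (`LogDirSetup` and `ensureLogDirectory` are invented names); it illustrates the same pattern and why a relative path such as `logs` ends up next to wherever the process is started:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical helper mirroring the stat()/mkdir() check in the C++ diff:
// if the configured directory is missing (or not a directory), create it.
public class LogDirSetup {

    static Path ensureLogDirectory(String configured) {
        Path dir = Paths.get(configured);
        try {
            if (!Files.isDirectory(dir)) {
                // Unlike a bare mkdir(), this also creates missing parents.
                Files.createDirectories(dir);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        // A relative "logs" resolves against the current working directory,
        // i.e. wherever bin/minifi.sh (or this program) was launched from.
        return dir.toAbsolutePath();
    }

    public static void main(String[] args) {
        System.out.println(ensureLogDirectory("logs"));
    }
}
```

Calling the helper twice is safe: `Files.createDirectories` is a no-op when the directory already exists, matching the intent of the stat() guard in the diff.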
[GitHub] nifi-minifi-cpp pull request #236: MINIFICPP-337: make default log directory...
Github user minifirocks commented on a diff in the pull request: https://github.com/apache/nifi-minifi-cpp/pull/236#discussion_r161295854 --- Diff: libminifi/src/core/logging/LoggerConfiguration.cpp --- @@ -110,6 +112,17 @@ std::shared_ptr LoggerConfiguration::initialize_names if (!logger_properties->get(appender_key + ".file_name", file_name)) { file_name = "minifi-app.log"; } + std::string directory = ""; + if (logger_properties->get(appender_key + ".directory", directory)) { +// Create the log directory if needed +struct stat logDirStat; +if (stat(directory.c_str(), &logDirStat) != 0 || !S_ISDIR(logDirStat.st_mode)) { + if (mkdir(directory.c_str(), 0777) == -1) { --- End diff -- we read the directory variable from minifi-log.properties, which specifies the log file name and log directory. The directory can be an absolute or relative path. ---
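The pattern under review can be sketched as a small standalone helper. This is a hedged sketch, not the MiNiFi code: the function name `ensure_log_directory` is illustrative, and, like the single `mkdir` call in the diff, it cannot create missing parent directories.

```cpp
#include <sys/stat.h>
#include <sys/types.h>
#include <string>

// Illustrative helper (not from the MiNiFi code base): make sure the
// configured log directory exists, creating it with a single mkdir when
// the path is missing or is not a directory.
bool ensure_log_directory(const std::string& directory) {
  struct stat log_dir_stat;
  if (stat(directory.c_str(), &log_dir_stat) != 0 ||
      !S_ISDIR(log_dir_stat.st_mode)) {
    if (mkdir(directory.c_str(), 0777) == -1) {
      return false;  // e.g. missing parent or insufficient permissions
    }
  }
  return true;
}
```

Calling it twice is safe: the second call finds the existing directory and returns without touching the filesystem, which is why the `stat` check precedes `mkdir`.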
[jira] [Closed] (MINIFICPP-37) Create scripts to get information from the controller API
[ https://issues.apache.org/jira/browse/MINIFICPP-37?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] marco polo closed MINIFICPP-37. --- > Create scripts to get information from the controller API > - > > Key: MINIFICPP-37 > URL: https://issues.apache.org/jira/browse/MINIFICPP-37 > Project: NiFi MiNiFi C++ > Issue Type: Sub-task >Reporter: marco polo >Assignee: marco polo >Priority: Minor > Labels: Durability, Reliability > Fix For: 0.4.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (MINIFICPP-37) Create scripts to get information from the controller API
[ https://issues.apache.org/jira/browse/MINIFICPP-37?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] marco polo updated MINIFICPP-37: Resolution: Fixed Status: Resolved (was: Patch Available) > Create scripts to get information from the controller API > - > > Key: MINIFICPP-37 > URL: https://issues.apache.org/jira/browse/MINIFICPP-37 > Project: NiFi MiNiFi C++ > Issue Type: Sub-task >Reporter: marco polo >Assignee: marco polo >Priority: Minor > Labels: Durability, Reliability > Fix For: 0.4.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (MINIFICPP-371) Document Maximum File Count in PutFile processor
[ https://issues.apache.org/jira/browse/MINIFICPP-371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] marco polo updated MINIFICPP-371: - Resolution: Fixed Status: Resolved (was: Patch Available) > Document Maximum File Count in PutFile processor > > > Key: MINIFICPP-371 > URL: https://issues.apache.org/jira/browse/MINIFICPP-371 > Project: NiFi MiNiFi C++ > Issue Type: Documentation >Reporter: Dustin Rodrigues >Assignee: Dustin Rodrigues >Priority: Minor > Fix For: 0.4.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (MINIFICPP-336) With default GetFile settings dot files are not getting ignored on linux systems as they should
[ https://issues.apache.org/jira/browse/MINIFICPP-336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] marco polo updated MINIFICPP-336: - Resolution: Fixed Status: Resolved (was: Patch Available) > With default GetFile settings dot files are not getting ignored on linux > systems as they should > --- > > Key: MINIFICPP-336 > URL: https://issues.apache.org/jira/browse/MINIFICPP-336 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Affects Versions: 0.1.0, 0.2.0, 0.3.0 >Reporter: Joseph Witt >Assignee: marco polo > Fix For: 0.4.0 > > > With this config > Processors: > - name: GetFile > class: org.apache.nifi.processors.standard.GetFile > max concurrent tasks: 1 > scheduling strategy: TIMER_DRIVEN > scheduling period: 0 sec > penalization period: 30 sec > yield period: 1 sec > run duration nanos: 0 > auto-terminated relationships list: [] > Properties: > Batch Size: '10' > File Filter: '[^\.].*' > Ignore Hidden Files: 'true' > Input Directory: test/input > Keep Source File: 'false' > Maximum File Age: > Maximum File Size: > Minimum File Age: 0 sec > Minimum File Size: 0 B > Path Filter: > Polling Interval: 0 sec > Recurse Subdirectories: 'true' > The minifi flow picks up any files starting with the '.' character right away. I > believe this is causing duplication when NiFi writes to the directory being > watched, because NiFi first writes files with a hidden/dot name and then > renames them when done. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
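For context on the configuration above: `File Filter` is a regular expression matched against each file name, and the default `[^\.].*` is meant to reject any name whose first character is a dot. A hedged sketch of that matching rule follows; the helper name is illustrative, not GetFile's implementation.

```cpp
#include <regex>
#include <string>

// Illustrative check (not the actual GetFile code): a file name passes the
// filter only if the whole name matches the configured regex. With the
// default pattern [^\.].*, any name beginning with '.' is rejected.
bool passes_file_filter(const std::string& file_name,
                        const std::string& filter = R"([^\.].*)") {
  return std::regex_match(file_name, std::regex(filter));
}
```

Judging by the fix's title ("Use correct path when excl..."), the reported bug was in which path string was checked against the pattern, not in the pattern itself, so dot files slipped through despite the filter.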
[jira] [Updated] (MINIFICPP-371) Document Maximum File Count in PutFile processor
[ https://issues.apache.org/jira/browse/MINIFICPP-371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] marco polo updated MINIFICPP-371: - Fix Version/s: 0.4.0 > Document Maximum File Count in PutFile processor > > > Key: MINIFICPP-371 > URL: https://issues.apache.org/jira/browse/MINIFICPP-371 > Project: NiFi MiNiFi C++ > Issue Type: Documentation >Reporter: Dustin Rodrigues >Assignee: Dustin Rodrigues >Priority: Minor > Fix For: 0.4.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (MINIFICPP-371) Document Maximum File Count in PutFile processor
[ https://issues.apache.org/jira/browse/MINIFICPP-371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324314#comment-16324314 ] ASF GitHub Bot commented on MINIFICPP-371: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/239 > Document Maximum File Count in PutFile processor > > > Key: MINIFICPP-371 > URL: https://issues.apache.org/jira/browse/MINIFICPP-371 > Project: NiFi MiNiFi C++ > Issue Type: Documentation >Reporter: Dustin Rodrigues >Assignee: Dustin Rodrigues >Priority: Minor > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi-minifi-cpp pull request #239: MINIFICPP-371: document Maximum File Coun...
Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/239 ---
[jira] [Commented] (MINIFICPP-336) With default GetFile settings dot files are not getting ignored on linux systems as they should
[ https://issues.apache.org/jira/browse/MINIFICPP-336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324312#comment-16324312 ] ASF GitHub Bot commented on MINIFICPP-336: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/238 > With default GetFile settings dot files are not getting ignored on linux > systems as they should > --- > > Key: MINIFICPP-336 > URL: https://issues.apache.org/jira/browse/MINIFICPP-336 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Affects Versions: 0.1.0, 0.2.0, 0.3.0 >Reporter: Joseph Witt >Assignee: marco polo > Fix For: 0.4.0 > > > With this config > Processors: > - name: GetFile > class: org.apache.nifi.processors.standard.GetFile > max concurrent tasks: 1 > scheduling strategy: TIMER_DRIVEN > scheduling period: 0 sec > penalization period: 30 sec > yield period: 1 sec > run duration nanos: 0 > auto-terminated relationships list: [] > Properties: > Batch Size: '10' > File Filter: '[^\.].*' > Ignore Hidden Files: 'true' > Input Directory: test/input > Keep Source File: 'false' > Maximum File Age: > Maximum File Size: > Minimum File Age: 0 sec > Minimum File Size: 0 B > Path Filter: > Polling Interval: 0 sec > Recurse Subdirectories: 'true' > The minifi flow picks up any files starting with the '.' character right away. I > believe this is causing duplication when NiFi writes to the directory being > watched, because NiFi first writes files with a hidden/dot name and then > renames them when done. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi-minifi-cpp pull request #238: MINIFICPP-336: Use correct path when excl...
Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/238 ---
[jira] [Commented] (MINIFICPP-37) Create scripts to get information from the controller API
[ https://issues.apache.org/jira/browse/MINIFICPP-37?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324296#comment-16324296 ] ASF GitHub Bot commented on MINIFICPP-37: - Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/237 > Create scripts to get information from the controller API > - > > Key: MINIFICPP-37 > URL: https://issues.apache.org/jira/browse/MINIFICPP-37 > Project: NiFi MiNiFi C++ > Issue Type: Sub-task >Reporter: marco polo >Assignee: marco polo >Priority: Minor > Labels: Durability, Reliability > Fix For: 0.4.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi-minifi-cpp pull request #237: MINIFICPP-37: Create an executable to sup...
Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/237 ---
[jira] [Updated] (NIFIREG-118) Create a NiFi Registry Docker Image
[ https://issues.apache.org/jira/browse/NIFIREG-118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aldrin Piri updated NIFIREG-118: Priority: Minor (was: Major) > Create a NiFi Registry Docker Image > --- > > Key: NIFIREG-118 > URL: https://issues.apache.org/jira/browse/NIFIREG-118 > Project: NiFi Registry > Issue Type: Task >Reporter: Aldrin Piri >Assignee: Aldrin Piri >Priority: Minor > > Adding a supporting Dockerfile for Registry would help many users work > through some of the quick testing and evaluation of Registry in conjunction > with the NiFi image with the assistance of config scripts and/or > docker-compose. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (NIFIREG-118) Create a NiFi Registry Docker Image
Aldrin Piri created NIFIREG-118: --- Summary: Create a NiFi Registry Docker Image Key: NIFIREG-118 URL: https://issues.apache.org/jira/browse/NIFIREG-118 Project: NiFi Registry Issue Type: Task Reporter: Aldrin Piri Assignee: Aldrin Piri Adding a supporting Dockerfile for Registry would help many users work through some of the quick testing and evaluation of Registry in conjunction with the NiFi image with the assistance of config scripts and/or docker-compose. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (NIFI-4771) Consolidate Docker resources into nifi-container
Aldrin Piri created NIFI-4771: - Summary: Consolidate Docker resources into nifi-container Key: NIFI-4771 URL: https://issues.apache.org/jira/browse/NIFI-4771 Project: Apache NiFi Issue Type: Task Components: Docker Reporter: Aldrin Piri Assignee: Aldrin Piri With the creation of DockerHub repositories for all major components within NiFi, we can consolidate efforts and resources in the nifi-container. Currently we have existing Docker work for NiFi, NiFi Toolkit, NiFi MiNiFi Java & C++. These should be transitioned to the container repo and established with Docker Hub for the associated convenience binaries. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4770) ListAzureBlobProcessor doesn't write the container name as a flowfile attribute
[ https://issues.apache.org/jira/browse/NIFI-4770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324262#comment-16324262 ] ASF GitHub Bot commented on NIFI-4770: -- GitHub user zenfenan opened a pull request: https://github.com/apache/nifi/pull/2402 NIFI-4770 ListAzureBlobStorage now writes azure.container flowfile attribute Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/zenfenan/nifi master Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2402.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2402 commit 4b45c31e842a057fce2f56e96a00a4dc9023580c Author: zenfenaan Date: 2018-01-12T17:15:13Z NIFI-4770: Added container name getter, setter methods and added azure.container to flowfile attributes list commit 8d4b5c9e0db112308dfaa6daebdf6d399829c420 Author: zenfenaan Date: 2018-01-12T17:20:30Z NIFI-4770: Fixed writing container name as a flowfile attribute > ListAzureBlobProcessor doesn't write the container name as a flowfile > attribute > --- > > Key: NIFI-4770 > URL: https://issues.apache.org/jira/browse/NIFI-4770 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.3.0, 1.4.0 >Reporter: zenfenaan >Priority: Minor > > Usage documentation of ListAzureBlobStorage mentions that it writes the > attribute "azure.container" which provides the container name to the flowfile > but it doesn't. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi pull request #2402: NIFI-4770 ListAzureBlobStorage now writes azure.con...
GitHub user zenfenan opened a pull request: https://github.com/apache/nifi/pull/2402 NIFI-4770 ListAzureBlobStorage now writes azure.container flowfile attribute Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/zenfenan/nifi master Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2402.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2402 commit 4b45c31e842a057fce2f56e96a00a4dc9023580c Author: zenfenaan Date: 2018-01-12T17:15:13Z NIFI-4770: Added container name getter, setter methods and added azure.container to flowfile attributes list commit 8d4b5c9e0db112308dfaa6daebdf6d399829c420 Author: zenfenaan Date: 2018-01-12T17:20:30Z NIFI-4770: Fixed writing container name as a flowfile attribute ---
[jira] [Created] (NIFI-4770) ListAzureBlobProcessor doesn't write the container name as a flowfile attribute
zenfenaan created NIFI-4770: --- Summary: ListAzureBlobProcessor doesn't write the container name as a flowfile attribute Key: NIFI-4770 URL: https://issues.apache.org/jira/browse/NIFI-4770 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 1.4.0, 1.3.0 Reporter: zenfenaan Priority: Minor Usage documentation of ListAzureBlobStorage mentions that it writes the attribute "azure.container" which provides the container name to the flowfile but it doesn't. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (MINIFICPP-371) Document Maximum File Count in PutFile processor
[ https://issues.apache.org/jira/browse/MINIFICPP-371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dustin Rodrigues updated MINIFICPP-371: --- Status: Patch Available (was: Open) > Document Maximum File Count in PutFile processor > > > Key: MINIFICPP-371 > URL: https://issues.apache.org/jira/browse/MINIFICPP-371 > Project: NiFi MiNiFi C++ > Issue Type: Documentation >Reporter: Dustin Rodrigues >Assignee: Dustin Rodrigues >Priority: Minor > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (MINIFICPP-371) Document Maximum File Count in PutFile processor
[ https://issues.apache.org/jira/browse/MINIFICPP-371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324199#comment-16324199 ] ASF GitHub Bot commented on MINIFICPP-371: -- GitHub user dtrodrigues opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/239 MINIFICPP-371: document Maximum File Count in PutFile processor Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/dtrodrigues/nifi-minifi-cpp MINIFICPP-371 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/239.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #239 commit 44d1c8583d7161e40e1cbe14181d8a4d2229a758 Author: Dustin Rodrigues Date: 2018-01-12T16:32:04Z MINIFICPP-371: document Maximum File Count in PutFile processor > Document Maximum File Count in PutFile processor > > > Key: MINIFICPP-371 > URL: https://issues.apache.org/jira/browse/MINIFICPP-371 > Project: NiFi MiNiFi C++ > Issue Type: Documentation >Reporter: Dustin Rodrigues >Assignee: Dustin Rodrigues >Priority: Minor > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi-minifi-cpp pull request #239: MINIFICPP-371: document Maximum File Coun...
GitHub user dtrodrigues opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/239 MINIFICPP-371: document Maximum File Count in PutFile processor Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/dtrodrigues/nifi-minifi-cpp MINIFICPP-371 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/239.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #239 commit 44d1c8583d7161e40e1cbe14181d8a4d2229a758 Author: Dustin Rodrigues Date: 2018-01-12T16:32:04Z MINIFICPP-371: document Maximum File Count in PutFile processor ---
[jira] [Created] (MINIFICPP-371) Document Maximum File Count in PutFile processor
Dustin Rodrigues created MINIFICPP-371: -- Summary: Document Maximum File Count in PutFile processor Key: MINIFICPP-371 URL: https://issues.apache.org/jira/browse/MINIFICPP-371 Project: NiFi MiNiFi C++ Issue Type: Documentation Reporter: Dustin Rodrigues Priority: Minor -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (MINIFICPP-371) Document Maximum File Count in PutFile processor
[ https://issues.apache.org/jira/browse/MINIFICPP-371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dustin Rodrigues reassigned MINIFICPP-371: -- Assignee: Dustin Rodrigues > Document Maximum File Count in PutFile processor > > > Key: MINIFICPP-371 > URL: https://issues.apache.org/jira/browse/MINIFICPP-371 > Project: NiFi MiNiFi C++ > Issue Type: Documentation >Reporter: Dustin Rodrigues >Assignee: Dustin Rodrigues >Priority: Minor > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4759) PutMongo does not handle updateKey field correctly
[ https://issues.apache.org/jira/browse/NIFI-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324175#comment-16324175 ] ASF GitHub Bot commented on NIFI-4759: -- Github user mgaido91 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2401#discussion_r161261787 --- Diff: nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/test/java/org/apache/nifi/processors/mongodb/PutMongoTest.java --- @@ -256,4 +257,74 @@ public void testUpsertWithOperators() throws Exception { Assert.assertEquals("Msg had wrong value", msg, "Hi"); } } + +/* + * Start NIFI-4759 Regression Tests + * + * 2 issues with ID field: + * + * * Assumed _id is the update key, causing failures when the user configured a different one in the UI. + * * Treated _id as a string even when it is an ObjectID sent from another processor as a string value. + * + * Expected behavior: + * + * * update key field should work no matter what (legal) value it is set to be. + * * _ids that are ObjectID should become real ObjectIDs when added to Mongo. + * * _ids that are arbitrary strings should be still go in as strings. + * + */ +@Test +public void testNiFi_4759_Regressions() { +String[] upserts = new String[]{ +"{\n" + --- End diff -- What about writing the JSON document on one line? > PutMongo does not handle updateKey field correctly > -- > > Key: NIFI-4759 > URL: https://issues.apache.org/jira/browse/NIFI-4759 > Project: Apache NiFi > Issue Type: Bug >Reporter: Mike Thomsen >Assignee: Mike Thomsen > > Two issues: > * The updateKey field is ignored in favor of _id in the update code block of > PutMongo. > * _id fields are always treated as strings, even if they're valid ObjectIds > represented as a string. PutMongo should be able to handle these as ObjectIds. 
> Regarding the first point, this works: > {code:java} > { > "_id": "1234", > "$set": { "msg": "Hello, world" } > } > {code} > This does not: > {code:java} > { > "uniqueKey": "12345", > "$set": { "msg": "Hello, World" } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
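The ticket's second point — telling a string-encoded ObjectId apart from an arbitrary string _id — comes down to a simple shape check: a canonical ObjectId string is exactly 24 hexadecimal characters. A hedged sketch of that test follows; the helper name is illustrative, and PutMongo's actual (Java) implementation may differ.

```cpp
#include <cctype>
#include <string>

// Illustrative check (not PutMongo's code): a string _id that is exactly
// 24 hex characters can be promoted to a real ObjectId; anything else
// should remain a plain string, per the expected behavior in the ticket.
bool looks_like_object_id(const std::string& id) {
  if (id.size() != 24) {
    return false;
  }
  for (char c : id) {
    if (!std::isxdigit(static_cast<unsigned char>(c))) {
      return false;
    }
  }
  return true;
}
```

Under this rule, a value like `5a5617b9c1f5de6d8276e87d` would be promoted to an ObjectId, while `"1234"` and `"12345"` stay strings.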
[GitHub] nifi pull request #2401: NIFI-4759 Fixed a bug that left a hard-coded refere...
Github user mgaido91 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2401#discussion_r161261787 --- Diff: nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/test/java/org/apache/nifi/processors/mongodb/PutMongoTest.java --- @@ -256,4 +257,74 @@ public void testUpsertWithOperators() throws Exception { Assert.assertEquals("Msg had wrong value", msg, "Hi"); } } + +/* + * Start NIFI-4759 Regression Tests + * + * 2 issues with ID field: + * + * * Assumed _id is the update key, causing failures when the user configured a different one in the UI. + * * Treated _id as a string even when it is an ObjectID sent from another processor as a string value. + * + * Expected behavior: + * + * * update key field should work no matter what (legal) value it is set to be. + * * _ids that are ObjectID should become real ObjectIDs when added to Mongo. + * * _ids that are arbitrary strings should be still go in as strings. + * + */ +@Test +public void testNiFi_4759_Regressions() { +String[] upserts = new String[]{ +"{\n" + --- End diff -- What about writing the JSON document on one line? ---
[jira] [Commented] (NIFI-4759) PutMongo does not handle updateKey field correctly
[ https://issues.apache.org/jira/browse/NIFI-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324176#comment-16324176 ] ASF GitHub Bot commented on NIFI-4759: -- Github user mgaido91 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2401#discussion_r161261619 --- Diff: nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/test/java/org/apache/nifi/processors/mongodb/PutMongoTest.java --- @@ -256,4 +257,74 @@ public void testUpsertWithOperators() throws Exception { Assert.assertEquals("Msg had wrong value", msg, "Hi"); } } + +/* + * Start NIFI-4759 Regression Tests + * + * 2 issues with ID field: + * + * * Assumed _id is the update key, causing failures when the user configured a different one in the UI. + * * Treated _id as a string even when it is an ObjectID sent from another processor as a string value. + * + * Expected behavior: + * + * * update key field should work no matter what (legal) value it is set to be. + * * _ids that are ObjectID should become real ObjectIDs when added to Mongo. + * * _ids that are arbitrary strings should be still go in as strings. 
+ * + */ +@Test +public void testNiFi_4759_Regressions() { +String[] upserts = new String[]{ +"{\n" + +"\t\"_id\": \"12345\",\n" + +"\t\"$set\": {\n" + +"\t\t\"msg\": \"Hello, world\"\n" + +"\t}\n" + +"}", + +"{\n" + +"\t\"_id\": \"5a5617b9c1f5de6d8276e87d\",\n" + +"\t\"$set\": {\n" + +"\t\t\"msg\": \"Hello, world\"\n" + +"\t}\n" + +"}", + +"{\n" + +"\t\"updateKey\": \"12345\",\n" + +"\t\"$set\": {\n" + +"\t\t\"msg\": \"Hello, world\"\n" + +"\t}\n" + +"}", +}; + +String[] updateKeyProps = new String[] { "_id", "_id", "updateKey" }; +Object[] updateKeys = new Object[] { "12345", new ObjectId("5a5617b9c1f5de6d8276e87d"), "12345" }; +int index = 0; + +runner.setProperty(PutMongo.UPDATE_MODE, PutMongo.UPDATE_WITH_OPERATORS); +runner.setProperty(PutMongo.MODE, "update"); +runner.setProperty(PutMongo.UPSERT, "true"); + +for (String upsert : upserts) { +runner.setProperty(PutMongo.UPDATE_QUERY_KEY, updateKeyProps[index]); +for (int x = 0; x < 5; x++) { --- End diff -- What about using 2 instead of 5? > PutMongo does not handle updateKey field correctly > -- > > Key: NIFI-4759 > URL: https://issues.apache.org/jira/browse/NIFI-4759 > Project: Apache NiFi > Issue Type: Bug >Reporter: Mike Thomsen >Assignee: Mike Thomsen > > Two issues: > * The updateKey field is ignored in favor of _id in the update code block of > PutMongo. > * _id fields are always treated as strings, even if they're valid ObjectIds > represented as a string. PutMongo should be able to handle these as ObjectIds. > Regarding the first point, this works: > {code:java} > { > "_id": "1234", > "$set": { "msg": "Hello, world" } > } > {code} > This does not: > {code:java} > { > "uniqueKey": "12345", > "$set": { "msg": "Hello, World" } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4759) PutMongo does not handle updateKey field correctly
[ https://issues.apache.org/jira/browse/NIFI-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324174#comment-16324174 ] ASF GitHub Bot commented on NIFI-4759: -- Github user mgaido91 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2401#discussion_r161262183 --- Diff: nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/test/java/org/apache/nifi/processors/mongodb/PutMongoTest.java --- @@ -256,4 +257,74 @@ public void testUpsertWithOperators() throws Exception { Assert.assertEquals("Msg had wrong value", msg, "Hi"); } } + +/* + * Start NIFI-4759 Regression Tests + * + * 2 issues with ID field: + * + * * Assumed _id is the update key, causing failures when the user configured a different one in the UI. + * * Treated _id as a string even when it is an ObjectID sent from another processor as a string value. + * + * Expected behavior: + * + * * update key field should work no matter what (legal) value it is set to be. + * * _ids that are ObjectID should become real ObjectIDs when added to Mongo. + * * _ids that are arbitrary strings should be still go in as strings. 
+ * + */ +@Test +public void testNiFi_4759_Regressions() { +String[] upserts = new String[]{ +"{\n" + +"\t\"_id\": \"12345\",\n" + +"\t\"$set\": {\n" + +"\t\t\"msg\": \"Hello, world\"\n" + +"\t}\n" + +"}", + +"{\n" + +"\t\"_id\": \"5a5617b9c1f5de6d8276e87d\",\n" + +"\t\"$set\": {\n" + +"\t\t\"msg\": \"Hello, world\"\n" + +"\t}\n" + +"}", + +"{\n" + +"\t\"updateKey\": \"12345\",\n" + +"\t\"$set\": {\n" + +"\t\t\"msg\": \"Hello, world\"\n" + +"\t}\n" + +"}", +}; + +String[] updateKeyProps = new String[] { "_id", "_id", "updateKey" }; +Object[] updateKeys = new Object[] { "12345", new ObjectId("5a5617b9c1f5de6d8276e87d"), "12345" }; +int index = 0; + +runner.setProperty(PutMongo.UPDATE_MODE, PutMongo.UPDATE_WITH_OPERATORS); +runner.setProperty(PutMongo.MODE, "update"); +runner.setProperty(PutMongo.UPSERT, "true"); + +for (String upsert : upserts) { +runner.setProperty(PutMongo.UPDATE_QUERY_KEY, updateKeyProps[index]); +for (int x = 0; x < 5; x++) { +runner.enqueue(upsert); +} +runner.run(5, true, true); +runner.assertTransferCount(PutMongo.REL_FAILURE, 0); +runner.assertTransferCount(PutMongo.REL_SUCCESS, 5); + +Document query = new Document(updateKeyProps[index], updateKeys[index]); +Document result = collection.find(query).first(); +Assert.assertNotNull("Result was null", result); +Assert.assertEquals("Count was wrong", 1, collection.count(query)); +runner.clearTransferState(); +index++; +} +} + +/* --- End diff -- Nit: I'd remove this and the empty line above... > PutMongo does not handle updateKey field correctly > -- > > Key: NIFI-4759 > URL: https://issues.apache.org/jira/browse/NIFI-4759 > Project: Apache NiFi > Issue Type: Bug >Reporter: Mike Thomsen >Assignee: Mike Thomsen > > Two issues: > * The updateKey field is ignored in favor of _id in the update code block of > PutMongo. > * _id fields are always treated as strings, even if they're valid ObjectIds > represented as a string. PutMongo should be able to handle these as ObjectIds. 
> Regarding the first point, this works: > {code:java} > { > "_id": "1234", > "$set": { "msg": "Hello, world" } > } > {code} > This does not: > {code:java} > { > "uniqueKey": "12345", > "$set": { "msg": "Hello, World" } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
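The expected behavior described in the issue (honor the configured update key; promote 24-character hex _ids to real ObjectIds; leave other string _ids as strings) reduces to a small id heuristic. The sketch below is illustrative only — `MongoIdHeuristic` and `looksLikeObjectId` are hypothetical names, and the real PutMongo fix uses the Mongo driver's org.bson.types.ObjectId rather than a plain regex:

```java
import java.util.regex.Pattern;

// Hypothetical sketch of the id rule implied by the expected behavior above;
// not the actual PutMongo implementation.
public class MongoIdHeuristic {
    // A serialized ObjectId is exactly 24 hexadecimal characters.
    private static final Pattern OBJECT_ID_HEX = Pattern.compile("^[0-9a-fA-F]{24}$");

    /** True when a string _id should be promoted to a real ObjectId. */
    public static boolean looksLikeObjectId(String id) {
        return id != null && OBJECT_ID_HEX.matcher(id).matches();
    }

    public static void main(String[] args) {
        // "5a5617b9c1f5de6d8276e87d" is the ObjectId used in the regression test.
        System.out.println(looksLikeObjectId("5a5617b9c1f5de6d8276e87d")); // true
        System.out.println(looksLikeObjectId("12345"));                    // false
    }
}
```

Under this rule the "12345" upserts in the regression test stay string-keyed while the 24-hex-char one becomes an ObjectId, matching all three queries in the test loop.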
[GitHub] nifi pull request #2401: NIFI-4759 Fixed a bug that left a hard-coded refere...
Github user mgaido91 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2401#discussion_r161261619 --- Diff: nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/test/java/org/apache/nifi/processors/mongodb/PutMongoTest.java --- @@ -256,4 +257,74 @@ public void testUpsertWithOperators() throws Exception { Assert.assertEquals("Msg had wrong value", msg, "Hi"); } } + +/* + * Start NIFI-4759 Regression Tests + * + * 2 issues with ID field: + * + * * Assumed _id is the update key, causing failures when the user configured a different one in the UI. + * * Treated _id as a string even when it is an ObjectID sent from another processor as a string value. + * + * Expected behavior: + * + * * update key field should work no matter what (legal) value it is set to be. + * * _ids that are ObjectID should become real ObjectIDs when added to Mongo. + * * _ids that are arbitrary strings should be still go in as strings. + * + */ +@Test +public void testNiFi_4759_Regressions() { +String[] upserts = new String[]{ +"{\n" + +"\t\"_id\": \"12345\",\n" + +"\t\"$set\": {\n" + +"\t\t\"msg\": \"Hello, world\"\n" + +"\t}\n" + +"}", + +"{\n" + +"\t\"_id\": \"5a5617b9c1f5de6d8276e87d\",\n" + +"\t\"$set\": {\n" + +"\t\t\"msg\": \"Hello, world\"\n" + +"\t}\n" + +"}", + +"{\n" + +"\t\"updateKey\": \"12345\",\n" + +"\t\"$set\": {\n" + +"\t\t\"msg\": \"Hello, world\"\n" + +"\t}\n" + +"}", +}; + +String[] updateKeyProps = new String[] { "_id", "_id", "updateKey" }; +Object[] updateKeys = new Object[] { "12345", new ObjectId("5a5617b9c1f5de6d8276e87d"), "12345" }; +int index = 0; + +runner.setProperty(PutMongo.UPDATE_MODE, PutMongo.UPDATE_WITH_OPERATORS); +runner.setProperty(PutMongo.MODE, "update"); +runner.setProperty(PutMongo.UPSERT, "true"); + +for (String upsert : upserts) { +runner.setProperty(PutMongo.UPDATE_QUERY_KEY, updateKeyProps[index]); +for (int x = 0; x < 5; x++) { --- End diff -- What about using 2 instead of 5? ---
[GitHub] nifi pull request #2401: NIFI-4759 Fixed a bug that left a hard-coded refere...
Github user mgaido91 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2401#discussion_r161262183 --- Diff: nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/test/java/org/apache/nifi/processors/mongodb/PutMongoTest.java --- @@ -256,4 +257,74 @@ public void testUpsertWithOperators() throws Exception { Assert.assertEquals("Msg had wrong value", msg, "Hi"); } } + +/* + * Start NIFI-4759 Regression Tests + * + * 2 issues with ID field: + * + * * Assumed _id is the update key, causing failures when the user configured a different one in the UI. + * * Treated _id as a string even when it is an ObjectID sent from another processor as a string value. + * + * Expected behavior: + * + * * update key field should work no matter what (legal) value it is set to be. + * * _ids that are ObjectID should become real ObjectIDs when added to Mongo. + * * _ids that are arbitrary strings should be still go in as strings. + * + */ +@Test +public void testNiFi_4759_Regressions() { +String[] upserts = new String[]{ +"{\n" + +"\t\"_id\": \"12345\",\n" + +"\t\"$set\": {\n" + +"\t\t\"msg\": \"Hello, world\"\n" + +"\t}\n" + +"}", + +"{\n" + +"\t\"_id\": \"5a5617b9c1f5de6d8276e87d\",\n" + +"\t\"$set\": {\n" + +"\t\t\"msg\": \"Hello, world\"\n" + +"\t}\n" + +"}", + +"{\n" + +"\t\"updateKey\": \"12345\",\n" + +"\t\"$set\": {\n" + +"\t\t\"msg\": \"Hello, world\"\n" + +"\t}\n" + +"}", +}; + +String[] updateKeyProps = new String[] { "_id", "_id", "updateKey" }; +Object[] updateKeys = new Object[] { "12345", new ObjectId("5a5617b9c1f5de6d8276e87d"), "12345" }; +int index = 0; + +runner.setProperty(PutMongo.UPDATE_MODE, PutMongo.UPDATE_WITH_OPERATORS); +runner.setProperty(PutMongo.MODE, "update"); +runner.setProperty(PutMongo.UPSERT, "true"); + +for (String upsert : upserts) { +runner.setProperty(PutMongo.UPDATE_QUERY_KEY, updateKeyProps[index]); +for (int x = 0; x < 5; x++) { +runner.enqueue(upsert); +} +runner.run(5, true, true); 
+runner.assertTransferCount(PutMongo.REL_FAILURE, 0); +runner.assertTransferCount(PutMongo.REL_SUCCESS, 5); + +Document query = new Document(updateKeyProps[index], updateKeys[index]); +Document result = collection.find(query).first(); +Assert.assertNotNull("Result was null", result); +Assert.assertEquals("Count was wrong", 1, collection.count(query)); +runner.clearTransferState(); +index++; +} +} + +/* --- End diff -- Nit: I'd remove this and the empty line above... ---
[jira] [Closed] (NIFIREG-114) UI menu modifications
[ https://issues.apache.org/jira/browse/NIFIREG-114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Bean closed NIFIREG-114. - > UI menu modifications > - > > Key: NIFIREG-114 > URL: https://issues.apache.org/jira/browse/NIFIREG-114 > Project: NiFi Registry > Issue Type: Improvement >Affects Versions: 0.1.0 >Reporter: Mark Bean >Priority: Minor > > On the Settings page/Buckets tab: > - Move the "New Bucket" button under the Actions list > - Alternatively, change the Actions drop down to simply "Delete" since it is > the only option > On the Settings page/Users tab: > - Move "Add User" under the "Actions" list -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (NIFIREG-114) UI menu modifications
[ https://issues.apache.org/jira/browse/NIFIREG-114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Bean resolved NIFIREG-114. --- Resolution: Not A Problem Good enough explanation. This ticket can be closed. > UI menu modifications > - > > Key: NIFIREG-114 > URL: https://issues.apache.org/jira/browse/NIFIREG-114 > Project: NiFi Registry > Issue Type: Improvement >Affects Versions: 0.1.0 >Reporter: Mark Bean >Priority: Minor > > On the Settings page/Buckets tab: > - Move the "New Bucket" button under the Actions list > - Alternatively, change the Actions drop down to simply "Delete" since it is > the only option > On the Settings page/Users tab: > - Move "Add User" under the "Actions" list -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-4768) Add exclusion filters to SiteToSiteProvenanceReportingTask
[ https://issues.apache.org/jira/browse/NIFI-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324159#comment-16324159 ] ASF GitHub Bot commented on NIFI-4768: -- Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2397#discussion_r161257145 --- Diff: nifi-nar-bundles/nifi-extension-utils/nifi-reporting-utils/src/main/java/org/apache/nifi/reporting/util/provenance/ProvenanceEventConsumer.java --- @@ -235,6 +251,32 @@ private boolean isFilteringEnabled() { for (ProvenanceEventRecord provenanceEventRecord : provenanceEvents) { final String componentId = provenanceEventRecord.getComponentId(); +if (!componentIdsExclude.isEmpty()) { +if (componentIdsExclude.contains(componentId)) { +continue; +} +// If we aren't excluding it based on component ID, let's see if this component has a parent process group ID +// that is being excluded +if (componentMapHolder == null) { +continue; +} +final String processGroupId = componentMapHolder.getProcessGroupId(componentId, provenanceEventRecord.getComponentType()); +if (StringUtils.isEmpty(processGroupId)) { +continue; --- End diff -- Good point, this is a copy-paste error, will remove. > Add exclusion filters to SiteToSiteProvenanceReportingTask > -- > > Key: NIFI-4768 > URL: https://issues.apache.org/jira/browse/NIFI-4768 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Matt Burgess >Assignee: Matt Burgess > > Although the SiteToSiteProvenanceReportingTask has filters for which events, > components, etc. to capture, it is an inclusive filter, meaning if a filter > is set, only those entities' events will be sent. However it would be useful > to also have an exclusionary filter, in order to capture all events except a > few. > One particular use case is a sub-flow that processes provenance events, where > the user would not want to process provenance events generated by components > involved in the provenance-handling flow itself. 
In this fashion, for > example, if the sub-flow is in a process group (PG), then the user could > exclude the PG and the Input Port sending events to it, thereby allowing the > sub-flow to process all other events except those involved with the > provenance-handling flow itself. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
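The inclusive-plus-exclusive semantics described in the issue can be sketched as a predicate: exclusion wins outright, then inclusion applies only when an include set is configured. The class and method names below are illustrative stand-ins, not the actual ProvenanceEventConsumer API:

```java
import java.util.Set;

// Illustrative filter predicate for the include/exclude semantics discussed
// above; not the real ProvenanceEventConsumer code.
public class EventFilterSketch {
    private final Set<String> componentIdsInclude;
    private final Set<String> componentIdsExclude;

    public EventFilterSketch(Set<String> include, Set<String> exclude) {
        this.componentIdsInclude = include;
        this.componentIdsExclude = exclude;
    }

    /** Exclusion is checked first; an empty include set means "include everything". */
    public boolean shouldForward(String componentId) {
        if (componentIdsExclude.contains(componentId)) {
            return false;
        }
        return componentIdsInclude.isEmpty() || componentIdsInclude.contains(componentId);
    }
}
```

With an empty include set and the provenance-handling process group's id in the exclude set, every event is forwarded except those generated by the handling flow itself — the use case in the ticket.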
[GitHub] nifi pull request #2397: NIFI-4768: Add exclusion filters to S2SProvenanceRe...
Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2397#discussion_r161257145 --- Diff: nifi-nar-bundles/nifi-extension-utils/nifi-reporting-utils/src/main/java/org/apache/nifi/reporting/util/provenance/ProvenanceEventConsumer.java --- @@ -235,6 +251,32 @@ private boolean isFilteringEnabled() { for (ProvenanceEventRecord provenanceEventRecord : provenanceEvents) { final String componentId = provenanceEventRecord.getComponentId(); +if (!componentIdsExclude.isEmpty()) { +if (componentIdsExclude.contains(componentId)) { +continue; +} +// If we aren't excluding it based on component ID, let's see if this component has a parent process group ID +// that is being excluded +if (componentMapHolder == null) { +continue; +} +final String processGroupId = componentMapHolder.getProcessGroupId(componentId, provenanceEventRecord.getComponentType()); +if (StringUtils.isEmpty(processGroupId)) { +continue; --- End diff -- Good point, this is a copy-paste error, will remove. ---
[jira] [Commented] (NIFI-4768) Add exclusion filters to SiteToSiteProvenanceReportingTask
[ https://issues.apache.org/jira/browse/NIFI-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324158#comment-16324158 ] ASF GitHub Bot commented on NIFI-4768: -- Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2397#discussion_r161256592 --- Diff: nifi-nar-bundles/nifi-extension-utils/nifi-reporting-utils/src/main/java/org/apache/nifi/reporting/util/provenance/ProvenanceEventConsumer.java --- @@ -256,9 +297,15 @@ private boolean isFilteringEnabled() { } } } +if (!eventTypesExclude.isEmpty() && eventTypesExclude.contains(provenanceEventRecord.getEventType())) { +continue; --- End diff -- Yes that's a good point, I just co-located them with their inclusionary counterparts. Will move them to the top. > Add exclusion filters to SiteToSiteProvenanceReportingTask > -- > > Key: NIFI-4768 > URL: https://issues.apache.org/jira/browse/NIFI-4768 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Matt Burgess >Assignee: Matt Burgess > > Although the SiteToSiteProvenanceReportingTask has filters for which events, > components, etc. to capture, it is an inclusive filter, meaning if a filter > is set, only those entities' events will be sent. However it would be useful > to also have an exclusionary filter, in order to capture all events except a > few. > One particular use case is a sub-flow that processes provenance events, where > the user would not want to process provenance events generated by components > involved in the provenance-handling flow itself. In this fashion, for > example, if the sub-flow is in a process group (PG), then the user could > exclude the PG and the Input Port sending events to it, thereby allowing the > sub-flow to process all other events except those involved with the > provenance-handling flow itself. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi pull request #2397: NIFI-4768: Add exclusion filters to S2SProvenanceRe...
Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2397#discussion_r161256592 --- Diff: nifi-nar-bundles/nifi-extension-utils/nifi-reporting-utils/src/main/java/org/apache/nifi/reporting/util/provenance/ProvenanceEventConsumer.java --- @@ -256,9 +297,15 @@ private boolean isFilteringEnabled() { } } } +if (!eventTypesExclude.isEmpty() && eventTypesExclude.contains(provenanceEventRecord.getEventType())) { +continue; --- End diff -- Yes that's a good point, I just co-located them with their inclusionary counterparts. Will move them to the top. ---
[jira] [Commented] (NIFI-4768) Add exclusion filters to SiteToSiteProvenanceReportingTask
[ https://issues.apache.org/jira/browse/NIFI-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324155#comment-16324155 ] ASF GitHub Bot commented on NIFI-4768: -- Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2397#discussion_r161256398 --- Diff: nifi-nar-bundles/nifi-extension-utils/nifi-reporting-utils/src/main/java/org/apache/nifi/reporting/util/provenance/ProvenanceEventConsumer.java --- @@ -89,16 +94,26 @@ public void setComponentTypeRegex(final String componentTypeRegex) { } } -public void addTargetEventType(final ProvenanceEventType... types) { -for (ProvenanceEventType type : types) { -eventTypes.add(type); +public void setComponentTypeRegexExclude(final String componentTypeRegex) { +if (!StringUtils.isBlank(componentTypeRegex)) { +this.componentTypeRegexExclude = Pattern.compile(componentTypeRegex); } } +public void addTargetEventType(final ProvenanceEventType... types) { +eventTypes.addAll(Arrays.asList(types)); --- End diff -- Nope, I think that was the original code, it only shows up as a diff here because I added a method before it. Will change it to use Collections for consistency. > Add exclusion filters to SiteToSiteProvenanceReportingTask > -- > > Key: NIFI-4768 > URL: https://issues.apache.org/jira/browse/NIFI-4768 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Matt Burgess >Assignee: Matt Burgess > > Although the SiteToSiteProvenanceReportingTask has filters for which events, > components, etc. to capture, it is an inclusive filter, meaning if a filter > is set, only those entities' events will be sent. However it would be useful > to also have an exclusionary filter, in order to capture all events except a > few. > One particular use case is a sub-flow that processes provenance events, where > the user would not want to process provenance events generated by components > involved in the provenance-handling flow itself. 
In this fashion, for > example, if the sub-flow is in a process group (PG), then the user could > exclude the PG and the Input Port sending events to it, thereby allowing the > sub-flow to process all other events except those involved with the > provenance-handling flow itself. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi pull request #2397: NIFI-4768: Add exclusion filters to S2SProvenanceRe...
Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2397#discussion_r161256398 --- Diff: nifi-nar-bundles/nifi-extension-utils/nifi-reporting-utils/src/main/java/org/apache/nifi/reporting/util/provenance/ProvenanceEventConsumer.java --- @@ -89,16 +94,26 @@ public void setComponentTypeRegex(final String componentTypeRegex) { } } -public void addTargetEventType(final ProvenanceEventType... types) { -for (ProvenanceEventType type : types) { -eventTypes.add(type); +public void setComponentTypeRegexExclude(final String componentTypeRegex) { +if (!StringUtils.isBlank(componentTypeRegex)) { +this.componentTypeRegexExclude = Pattern.compile(componentTypeRegex); } } +public void addTargetEventType(final ProvenanceEventType... types) { +eventTypes.addAll(Arrays.asList(types)); --- End diff -- Nope, I think that was the original code, it only shows up as a diff here because I added a method before it. Will change it to use Collections for consistency. ---
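The review above asks to switch the varargs-to-set code "to use Collections for consistency". The two idioms are behaviorally equivalent; the sketch below demonstrates that in isolation (ProvenanceEventType is replaced by String to keep it self-contained, and the class name is illustrative):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Standalone comparison of the two varargs-to-set idioms discussed above;
// not the NiFi source itself.
public class VarargsToSet {
    static Set<String> viaArrays(String... types) {
        Set<String> s = new HashSet<>();
        s.addAll(Arrays.asList(types)); // wraps the array in a List first
        return s;
    }

    static Set<String> viaCollections(String... types) {
        Set<String> s = new HashSet<>();
        Collections.addAll(s, types); // adds elements directly, no wrapper List
        return s;
    }
}
```

Both produce the same set; Collections.addAll simply skips the intermediate List view, so the choice is about style consistency rather than behavior.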
[jira] [Commented] (MINIFICPP-37) Create scripts to get information from the controller API
[ https://issues.apache.org/jira/browse/MINIFICPP-37?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324149#comment-16324149 ] ASF GitHub Bot commented on MINIFICPP-37: - Github user phrocker commented on a diff in the pull request: https://github.com/apache/nifi-minifi-cpp/pull/237#discussion_r161255479 --- Diff: controller/MiNiFiController.cpp --- @@ -0,0 +1,196 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "io/BaseStream.h" + +#include "core/Core.h" + +#include "core/FlowConfiguration.h" +#include "core/ConfigurationFactory.h" +#include "core/RepositoryFactory.h" +#include "FlowController.h" +#include "Main.h" + +#include "Controller.h" +#include "c2/ControllerSocketProtocol.h" + +#include "cxxopts.hpp" + +int main(int argc, char **argv) { + + std::shared_ptr logger = logging::LoggerConfiguration::getConfiguration().getLogger("controller"); + + // assumes POSIX compliant environment + std::string minifiHome; + if (const char *env_p = std::getenv(MINIFI_HOME_ENV_KEY)) { +minifiHome = env_p; +logger->log_info("Using MINIFI_HOME=%s from environment.", minifiHome); + } else { +logger->log_info("MINIFI_HOME is not set; determining based on environment."); +char *path = nullptr; +char full_path[PATH_MAX]; +path = realpath(argv[0], full_path); + +if (path != nullptr) { + std::string minifiHomePath(path); + if (minifiHomePath.find_last_of("/\\") != std::string::npos) { +minifiHomePath = minifiHomePath.substr(0, minifiHomePath.find_last_of("/\\")); //Remove /minifi from path +minifiHome = minifiHomePath.substr(0, minifiHomePath.find_last_of("/\\"));//Remove /bin from path + } +} + +// attempt to use cwd as MINIFI_HOME +if (minifiHome.empty() || !validHome(minifiHome)) { + char cwd[PATH_MAX]; + getcwd(cwd, PATH_MAX); + minifiHome = cwd; +} + + } + + if (!validHome(minifiHome)) { +logger->log_error("No valid MINIFI_HOME could be inferred. 
" + "Please set MINIFI_HOME or run minifi from a valid location."); +return -1; + } + + std::shared_ptr configuration = std::make_shared(); + configuration->setHome(minifiHome); + configuration->loadConfigureFile(DEFAULT_NIFI_PROPERTIES_FILE); + + std::shared_ptr log_properties = std::make_shared(); + log_properties->setHome(minifiHome); + log_properties->loadConfigureFile(DEFAULT_LOG_PROPERTIES_FILE); + logging::LoggerConfiguration::getConfiguration().initialize(log_properties); + + auto stream_factory_ = std::make_shared(configuration); + + std::string host, port, caCert; + + if (!configuration->get("controller.socket.host", host) || !configuration->get("controller.socket.port", port)) { --- End diff -- This was one of the big things I wanted feedback on, primarily because this depends highly on the use case. The reason it is not an option now is that the socket is limited to the loopback adapter, which makes an option irrelevant. If we make it configurable so that the deployment can use a socket bound to any interface on that host, we could support an option in minificontroller. I'll make this change, making the default work only on localhost and allowing it to be changed in minifi.properties > Create scripts to get information from the controller API > - > > Key: MINIFICPP-37 > URL: https://issues.apache.org/jira/browse/MINIFICPP-37 > Project: NiFi MiNiFi C++ > Issue Type: Sub-task >Reporter: marco polo >Assignee: marco
[GitHub] nifi-minifi-cpp pull request #237: MINIFICPP-37: Create an executable to sup...
Github user phrocker commented on a diff in the pull request: https://github.com/apache/nifi-minifi-cpp/pull/237#discussion_r161255479 --- Diff: controller/MiNiFiController.cpp --- @@ -0,0 +1,196 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "io/BaseStream.h" + +#include "core/Core.h" + +#include "core/FlowConfiguration.h" +#include "core/ConfigurationFactory.h" +#include "core/RepositoryFactory.h" +#include "FlowController.h" +#include "Main.h" + +#include "Controller.h" +#include "c2/ControllerSocketProtocol.h" + +#include "cxxopts.hpp" + +int main(int argc, char **argv) { + + std::shared_ptr logger = logging::LoggerConfiguration::getConfiguration().getLogger("controller"); + + // assumes POSIX compliant environment + std::string minifiHome; + if (const char *env_p = std::getenv(MINIFI_HOME_ENV_KEY)) { +minifiHome = env_p; +logger->log_info("Using MINIFI_HOME=%s from environment.", minifiHome); + } else { +logger->log_info("MINIFI_HOME is not set; determining based on environment."); +char *path = nullptr; +char full_path[PATH_MAX]; +path = realpath(argv[0], full_path); + +if (path != nullptr) { + std::string minifiHomePath(path); + if (minifiHomePath.find_last_of("/\\") != std::string::npos) { +minifiHomePath = minifiHomePath.substr(0, minifiHomePath.find_last_of("/\\")); //Remove /minifi from path +minifiHome = minifiHomePath.substr(0, minifiHomePath.find_last_of("/\\"));//Remove /bin from path + } +} + +// attempt to use cwd as MINIFI_HOME +if (minifiHome.empty() || !validHome(minifiHome)) { + char cwd[PATH_MAX]; + getcwd(cwd, PATH_MAX); + minifiHome = cwd; +} + + } + + if (!validHome(minifiHome)) { +logger->log_error("No valid MINIFI_HOME could be inferred. 
" + "Please set MINIFI_HOME or run minifi from a valid location."); +return -1; + } + + std::shared_ptr configuration = std::make_shared(); + configuration->setHome(minifiHome); + configuration->loadConfigureFile(DEFAULT_NIFI_PROPERTIES_FILE); + + std::shared_ptr log_properties = std::make_shared(); + log_properties->setHome(minifiHome); + log_properties->loadConfigureFile(DEFAULT_LOG_PROPERTIES_FILE); + logging::LoggerConfiguration::getConfiguration().initialize(log_properties); + + auto stream_factory_ = std::make_shared(configuration); + + std::string host, port, caCert; + + if (!configuration->get("controller.socket.host", host) || !configuration->get("controller.socket.port", port)) { --- End diff -- This was one of the big things I wanted feedback on, primarily because this depends highly on the use case. The reason it is not an option now is that the socket is limited to the loopback adapter, which makes an option irrelevant. If we make it configurable so that the deployment can use a socket bound to any interface on that host, we could support an option in minificontroller. I'll make this change, making the default work only on localhost and allowing it to be changed in minifi.properties ---
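The property keys read in the quoted diff (`controller.socket.host`, `controller.socket.port`) would live in minifi.properties. The values below are purely illustrative — the discussion above only settles on a loopback-only default, and the port number here is a made-up example:

```properties
# Illustrative minifi.properties fragment. Keys come from the diff above;
# values are examples only (loopback-only by default, per the discussion).
controller.socket.host=localhost
controller.socket.port=9997
```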
[GitHub] nifi-minifi pull request #108: MINIFI-415 Adjusting logging when a bundle is...
Github user apiri commented on a diff in the pull request: https://github.com/apache/nifi-minifi/pull/108#discussion_r161253393 --- Diff: minifi-nar-bundles/minifi-framework-bundle/minifi-framework/minifi-runtime/src/main/java/org/apache/nifi/minifi/FlowEnricher.java --- @@ -130,17 +127,13 @@ private void enrichComponent(EnrichingElementAdapter componentToEnrich, Map componentToEnrichBundleVersions = componentToEnrichVersionToBundles.values().stream() .map(bundle -> bundle.getBundleDetails().getCoordinate().getVersion()).collect(Collectors.toSet()); -final String componentToEnrichId = componentToEnrich.getComponentId(); -String bundleVersion = componentToEnrichBundleVersions.stream().sorted().reduce((version, otherVersion) -> otherVersion).orElse(null); -if (bundleVersion != null) { - componentToEnrich.setBundleInformation(componentToEnrichVersionToBundles.get(bundleVersion).getBundleDetails().getCoordinate()); -} -logger.info("Enriching {} with bundle {}", new Object[]{}); - +final String bundleVersion = componentToEnrichBundleVersions.stream().sorted().reduce((version, otherVersion) -> otherVersion).orElse(null); +final BundleCoordinate enrichingCoordinate = componentToEnrichVersionToBundles.get(bundleVersion).getBundleDetails().getCoordinate(); --- End diff -- Okay, fair enough. Definitely looks odd and I will adjust that. ---
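The `sorted().reduce((version, otherVersion) -> otherVersion)` chain in the FlowEnricher diff above is a "take the last element after sorting" idiom for picking the highest bundle version. One caveat worth noting: the sort is plain lexicographic String order, not semantic-version order. The helper below is hypothetical, isolating the idiom rather than quoting the NiFi codebase:

```java
import java.util.List;

// Hypothetical helper isolating the version-selection idiom from the diff above.
public class VersionPick {
    static String highestVersion(List<String> versions) {
        return versions.stream()
                .sorted()                                        // natural (lexicographic) String order
                .reduce((version, otherVersion) -> otherVersion) // keep the last, i.e. largest
                .orElse(null);
    }

    public static void main(String[] args) {
        // Lexicographic comparison ranks "1.2.0" above "1.10.0".
        System.out.println(highestVersion(List.of("1.0.0", "1.10.0", "1.2.0"))); // prints 1.2.0
    }
}
```

For single-digit version components (the common case for these bundles) the idiom is fine; a version set containing "1.10.0" would need a semantic-version comparator instead of natural String order.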
[jira] [Commented] (NIFI-4615) Processor status is not reported with the API
[ https://issues.apache.org/jira/browse/NIFI-4615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324108#comment-16324108 ] ASF GitHub Bot commented on NIFI-4615: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2276 I'd suggest invoking the endpoints using curl or looking at Dev Tools in your browser to verify that the issue is with NiFi. > Processor status is not reported with the API > - > > Key: NIFI-4615 > URL: https://issues.apache.org/jira/browse/NIFI-4615 > Project: Apache NiFi > Issue Type: Bug > Components: Configuration Management >Affects Versions: 1.4.0 >Reporter: Sébastien Bouchex Bellomié > Fix For: 1.5.0 > > > * Start a processor > * Request its status with the Processor/get/{id} API > Issue : ProcessorEntity.status.runStatus is always null > Note : DtoFactory.createProcessorStatusDto does not set the processor status -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi issue #2276: NIFI-4615 processor status is set to the ProcessorStatusDT...
Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2276 I'd suggest invoking the endpoints using curl or looking at Dev Tools in your browser to verify that the issue is with NiFi. ---
[jira] [Commented] (NIFI-4615) Processor status is not reported with the API
[ https://issues.apache.org/jira/browse/NIFI-4615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324103#comment-16324103 ] ASF GitHub Bot commented on NIFI-4615: -- Github user sbouchex commented on the issue: https://github.com/apache/nifi/pull/2276 I'm running a build from the latest sources in standalone mode. I have a process group with 4 processors, all running (from the web UI's point of view), and when I retrieve the ProcessGroupStatusDTO of my process group and iterate on each entry of the ProcessGroupStatusDTO.AggregateSnapshot, the RunStatus is always null. FYI, I'm using a Java API (https://github.com/hermannpencole/nifi-swagger-client). I'm going to trace it and see if I find something. > Processor status is not reported with the API > - > > Key: NIFI-4615 > URL: https://issues.apache.org/jira/browse/NIFI-4615 > Project: Apache NiFi > Issue Type: Bug > Components: Configuration Management >Affects Versions: 1.4.0 >Reporter: Sébastien Bouchex Bellomié > Fix For: 1.5.0 > > > * Start a processor > * Request its status with the Processor/get/{id} API > Issue : ProcessorEntity.status.runStatus is always null > Note : DtoFactory.createProcessorStatusDto does not set the processor status -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi issue #2276: NIFI-4615 processor status is set to the ProcessorStatusDT...
Github user sbouchex commented on the issue: https://github.com/apache/nifi/pull/2276 I'm running a build from the latest sources in standalone mode. I have a process group with 4 processors, all running (from the web UI's point of view), and when I retrieve the ProcessGroupStatusDTO of my process group and iterate on each entry of the ProcessGroupStatusDTO.AggregateSnapshot, the RunStatus is always null. FYI, I'm using a Java API (https://github.com/hermannpencole/nifi-swagger-client). I'm going to trace it and see if I find something. ---
[jira] [Commented] (MINIFICPP-37) Create scripts to get information from the controller API
[ https://issues.apache.org/jira/browse/MINIFICPP-37?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324066#comment-16324066 ] ASF GitHub Bot commented on MINIFICPP-37: - Github user apiri commented on a diff in the pull request: https://github.com/apache/nifi-minifi-cpp/pull/237#discussion_r161240472 --- Diff: libminifi/include/io/DescriptorStream.h --- @@ -0,0 +1,194 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +#ifndef LIBMINIFI_INCLUDE_IO_DESCRIPTORSTREAM_H_ +#define LIBMINIFI_INCLUDE_IO_DESCRIPTORSTREAM_H_ + +#include +#include +#include +#include "EndianCheck.h" +#include "BaseStream.h" +#include "Serializable.h" +#include "core/logging/LoggerConfiguration.h" + +namespace org { +namespace apache { +namespace nifi { +namespace minifi { +namespace io { + +/** + * Purpose: File Stream Base stream extension. This is intended to be a thread safe access to + * read/write to the local file system. + * + * Design: Simply extends BaseStream and overrides readData/writeData to allow a sink to the + * fstream object. + */ +class DescriptorStream : public io::BaseStream { + public: + /** + * File Stream constructor that accepts an fstream shared pointer. 
+ * It must already be initialized for read and write. + */ + explicit DescriptorStream(int fd); + + virtual ~DescriptorStream() { + + } + + /** + * Skip to the specified offset. + * @param offset offset to which we will skip + */ + void seek(uint64_t offset); + + const uint64_t getSize() const { +return -1; + } + + // data stream extensions + /** + * Reads data and places it into buf + * @param buf buffer in which we extract data + * @param buflen + */ + virtual int readData(std::vector , int buflen); + /** + * Reads data and places it into buf + * @param buf buffer in which we extract data + * @param buflen + */ + virtual int readData(uint8_t *buf, int buflen); + + /** + * Write value to the stream using std::vector + * @param buf incoming buffer + * @param buflen buffer to write + * + */ + virtual int writeData(std::vector , int buflen); + + /** + * writes value to stream + * @param value value to write + * @param size size of value + */ + virtual int writeData(uint8_t *value, int size); + + /** + * Returns the underlying buffer + * @return vector's array + **/ + const uint8_t *getBuffer() const { +throw std::runtime_error("Stream does not support this operation"); + } + + + + /** + * write UTF string to stream + * @param str string to write + * @return resulting write size + **/ + //virtual int writeUTF(std::string str, bool widen = false); --- End diff -- remove? > Create scripts to get information from the controller API > - > > Key: MINIFICPP-37 > URL: https://issues.apache.org/jira/browse/MINIFICPP-37 > Project: NiFi MiNiFi C++ > Issue Type: Sub-task >Reporter: marco polo >Assignee: marco polo >Priority: Minor > Labels: Durability, Reliability > Fix For: 0.4.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi-minifi-cpp pull request #237: MINIFICPP-37: Create an executable to sup...
Github user apiri commented on a diff in the pull request:

    https://github.com/apache/nifi-minifi-cpp/pull/237#discussion_r161239844

    --- Diff: libminifi/include/c2/C2Agent.h ---
    @@ -73,7 +73,7 @@ class C2Agent : public state::UpdateController, public state::metrics::MetricsSink {
         */
       virtual int16_t setMetrics(const std::shared_ptr<state::metrics::Metrics> &metrics);

    -  int64_t getHeartBestDelay(){
    +  int64_t getHeartBeatDelay(){
    --- End diff --

    should help with minifi's arrhythmia :)

---
[GitHub] nifi-minifi-cpp pull request #237: MINIFICPP-37: Create an executable to sup...
Github user apiri commented on a diff in the pull request:

    https://github.com/apache/nifi-minifi-cpp/pull/237#discussion_r161238296

    --- Diff: controller/Controller.h ---
    @@ -0,0 +1,188 @@
    +/**
    + *
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements. See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License. You may obtain a copy of the License at
    + *
    + *     http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +#ifndef CONTROLLER_CONTROLLER_H_
    +#define CONTROLLER_CONTROLLER_H_
    +
    +#include "io/ClientSocket.h"
    +#include "c2/ControllerSocketProtocol.h"
    +
    +/**
    + * Sends a single-argument command
    + * @param socket socket unique ptr.
    + * @param op operation to perform
    + * @param value value to send
    + */
    +bool sendSingleCommand(std::unique_ptr<minifi::io::Socket> socket, uint8_t op, const std::string value) {
    +  socket->initialize();
    +  std::vector<uint8_t> data;
    +  minifi::io::BaseStream stream;
    +  stream.writeData(&op, 1);
    +  stream.writeUTF(value);
    +  socket->writeData(const_cast<uint8_t*>(stream.getBuffer()), stream.getSize());
    +  return true;
    +}
    +
    +/**
    + * Stops a running component.
    + * @param socket socket unique ptr.
    + * @param component component to stop
    + */
    +bool stopComponent(std::unique_ptr<minifi::io::Socket> socket, std::string component) {
    +  return sendSingleCommand(std::move(socket), minifi::c2::Operation::STOP, component);
    +}
    +
    +/**
    + * Starts a previously stopped component.
    + * @param socket socket unique ptr.
    + * @param component component to start
    + */
    +bool startComponent(std::unique_ptr<minifi::io::Socket> socket, std::string component) {
    +  return sendSingleCommand(std::move(socket), minifi::c2::Operation::START, component);
    +}
    +
    +/**
    + * Clears a connection queue.
    + * @param socket socket unique ptr.
    + * @param connection connection to clear
    + */
    +bool clearConnection(std::unique_ptr<minifi::io::Socket> socket, std::string connection) {
    +  return sendSingleCommand(std::move(socket), minifi::c2::Operation::CLEAR, connection);
    +}
    +
    +/**
    + * Updates the flow to the provided file
    + */
    +void updateFlow(std::unique_ptr<minifi::io::Socket> socket, std::ostream &out, std::string file) {
    +  socket->initialize();
    +  std::vector<uint8_t> data;
    +  uint8_t op = minifi::c2::Operation::UPDATE;
    +  minifi::io::BaseStream stream;
    +  stream.writeData(&op, 1);
    +  stream.writeUTF("flow");
    +  stream.writeUTF(file);
    +  socket->writeData(const_cast<uint8_t*>(stream.getBuffer()), stream.getSize());
    +
    +  // read the response
    +  uint8_t resp = 0;
    +  socket->readData(&resp, 1);
    +  if (resp == minifi::c2::Operation::DESCRIBE) {
    +    uint16_t connections = 0;
    +    socket->read(connections);
    +    out << connections << " are full" << std::endl;
    +    for (int i = 0; i < connections; i++) {
    +      std::string fullcomponent;
    +      socket->readUTF(fullcomponent);
    +      out << fullcomponent << " is full" << std::endl;
    +    }
    +  }
    +}
    +
    +/**
    + * Lists connections which are full
    + * @param socket socket ptr
    + */
    +void getFullConnections(std::unique_ptr<minifi::io::Socket> socket, std::ostream &out) {
    +  socket->initialize();
    +  std::vector<uint8_t> data;
    +  uint8_t op = minifi::c2::Operation::DESCRIBE;
    +  minifi::io::BaseStream stream;
    +  stream.writeData(&op, 1);
    +  stream.writeUTF("getfull");
    +  socket->writeData(const_cast<uint8_t*>(stream.getBuffer()), stream.getSize());
    +
    +  // read the response
    +  uint8_t resp = 0;
    +  socket->readData(&resp, 1);
    +  if (resp == minifi::c2::Operation::DESCRIBE) {
    +    uint16_t connections = 0;
    +    socket->read(connections);
    +    out << connections << " are full" << std::endl;
    +    for (int i = 0; i < connections; i++) {
    +      std::string fullcomponent;
    +      socket->readUTF(fullcomponent);
    +      out << fullcomponent << " is full" << std::endl;
    +    }
    +  }
    +}
    +
    +/**
    + * Prints the connection size for the provided connection.
    + * @param socket socket ptr
    + * @param connection connection whose size will be returned.
    + */
    +void getConnectionSize(std::unique_ptr<minifi::io::Socket> socket, std::ostream &out, std::string connection) {
    +  socket->initialize();
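All of the controller commands in the diff above share one wire shape: a single opcode byte written with writeData, followed by one or more strings written with writeUTF. As a self-contained illustration of that framing (my own sketch, not code from the PR: the helper name `frameCommand` is invented, the opcode value is arbitrary, and the 2-byte big-endian length prefix for writeUTF is an assumption modeled on Java's `DataOutput.writeUTF`):

```cpp
// Sketch of the controller command framing assumed above:
// [1-byte opcode][2-byte big-endian length][string bytes].
// frameCommand is an invented helper, not part of minifi.
#include <cstdint>
#include <string>
#include <vector>

std::vector<uint8_t> frameCommand(uint8_t op, const std::string &value) {
  std::vector<uint8_t> out;
  out.push_back(op);  // opcode byte, e.g. STOP/START/CLEAR/UPDATE/DESCRIBE
  const uint16_t len = static_cast<uint16_t>(value.size());
  out.push_back(static_cast<uint8_t>(len >> 8));    // length, high byte first
  out.push_back(static_cast<uint8_t>(len & 0xFF));  // then low byte
  out.insert(out.end(), value.begin(), value.end());  // raw string bytes
  return out;
}
```

Under this reading, updateFlow's request is just such a frame with two writeUTF payloads ("flow" and the file path) after the opcode, and the DESCRIBE response is parsed in the mirror-image order.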