[jira] [Commented] (NIFI-4632) Site-to-Site failing with default configuration
[ https://issues.apache.org/jira/browse/NIFI-4632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16446635#comment-16446635 ]

ASF GitHub Bot commented on NIFI-4632:
--------------------------------------

Github user elgalu commented on the issue:

    https://github.com/apache/nifi/pull/2288

    @simnotes how are you deploying to K8s? Did you use https://github.com/AlexsJones/kubernetes-nifi-cluster ?

> Site-to-Site failing with default configuration
> -----------------------------------------------
>
>                 Key: NIFI-4632
>                 URL: https://issues.apache.org/jira/browse/NIFI-4632
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Core Framework
>    Affects Versions: 1.5.0
>            Reporter: Mark Payne
>            Assignee: Mark Payne
>            Priority: Critical
>             Fix For: 1.5.0
>
> With a new install from the 'master' branch, I created an Input Port and then a Remote Process Group pointing to localhost:8080/nifi. When attempting to send data to my Input Port, nothing appears to happen in the UI. In the logs, I see the following:
> {code}
> 2017-11-22 09:30:39,918 WARN [NiFi Web Server-184] o.a.nifi.web.server.HostHeaderHandler Request host header [.local:8080] different from web hostname [localhost(:8080)]. Overriding to [localhost:8080/nifi-api/site-to-site/peers]
> 2017-11-22 09:30:39,918 WARN [Http Site-to-Site PeerSelector] o.a.n.r.util.SiteToSiteRestApiClient Failed to parse Json. The specified URL http://.local:8080/nifi-api is not a proper remote NiFi endpoint for Site-to-Site communication. requestedUrl=http://.local:8080/nifi-api/site-to-site/peers, response=System Error
> The request contained an invalid host header [.local:8080] in the request [/nifi-api/site-to-site/peers]. Check for request manipulation or third-party intercept.
> {code}
> I tried updating nifi.properties to set the "nifi.web.http.host" property to .local, and that did resolve the issue... but then I could not connect to the UI using localhost:8080; instead I had to connect using .local.
> This does not appear to affect any released versions of NiFi.
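The failure mode described in the issue is an allow-list check: requests whose Host header is not among the hostnames the server considers valid get rejected. A minimal standalone sketch of that logic, purely illustrative (NiFi's HostHeaderHandler is Java, and the names `buildAllowedHosts`, `isHostAllowed`, and the example hostname `myhost.local` are mine, not NiFi's):

```javascript
// Illustrative sketch of a host-header allow-list check. If only the
// configured web hostname is in the list, a request arriving under any
// other name (e.g. the machine's mDNS name) is rejected, which matches
// the "invalid host header" warning quoted above.
function buildAllowedHosts(configuredHost, port) {
  // A fix along the lines of PR #2288 would also admit local names such
  // as "localhost" alongside the configured hostname.
  return new Set([
    (configuredHost + ':' + port).toLowerCase(),
    'localhost:' + port,
  ]);
}

function isHostAllowed(allowedHosts, hostHeader) {
  return allowedHosts.has(hostHeader.toLowerCase());
}

// 'myhost.local' stands in for the redacted hostname in the log excerpt.
const allowed = buildAllowedHosts('localhost', 8080);
console.log(isHostAllowed(allowed, 'localhost:8080'));    // true
console.log(isHostAllowed(allowed, 'myhost.local:8080')); // false: the rejection seen in the log
```

This also illustrates the reporter's dilemma: setting nifi.web.http.host to the .local name just swaps which of the two headers passes the check, unless both names are in the list.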
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5075) Funnels with no outgoing relationship error
[ https://issues.apache.org/jira/browse/NIFI-5075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16446458#comment-16446458 ]

ASF GitHub Bot commented on NIFI-5075:
--------------------------------------

Github user ijokarumawak commented on the issue:

    https://github.com/apache/nifi/pull/2634

    Thank you, @markap14, you are right. I thought I had pushed it but hadn't. Just pushed the updated commit.

> Funnels with no outgoing relationship error
> -------------------------------------------
>
>                 Key: NIFI-5075
>                 URL: https://issues.apache.org/jira/browse/NIFI-5075
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Core Framework
>    Affects Versions: 1.6.0
>            Reporter: Peter Wicks
>            Assignee: Koji Kawamura
>            Priority: Major
>
> If a Funnel has no outgoing relationships, it will throw an exception when it tries to send FlowFiles to that non-existent relationship.
> Replicate by connecting a GenerateFlowFile processor to a Funnel, starting the GenerateFlowFile processor, and checking your log file.
>
> {code}
> 2018-04-11 23:53:28,066 ERROR [Timer-Driven Process Thread-31] o.apache.nifi.controller.StandardFunnel StandardFunnel[id=b868231c-0162-1000-571c-ae3e7d15d848] StandardFunnel[id=b868231c-0162-1000-571c-ae3e7d15d848] failed to process session due to java.lang.RuntimeException: java.lang.IllegalArgumentException: Relationship '' is not known; Processor Administratively Yielded for 1 sec: java.lang.RuntimeException: java.lang.IllegalArgumentException: Relationship '' is not known
> java.lang.RuntimeException: java.lang.IllegalArgumentException: Relationship '' is not known
>     at org.apache.nifi.controller.StandardFunnel.onTrigger(StandardFunnel.java:365)
>     at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:175)
>     at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: Relationship '' is not known
>     at org.apache.nifi.controller.repository.StandardProcessSession.transfer(StandardProcessSession.java:1935)
>     at org.apache.nifi.controller.StandardFunnel.onTrigger(StandardFunnel.java:379)
>     at org.apache.nifi.controller.StandardFunnel.onTrigger(StandardFunnel.java:358)
>     ... 9 common frames omitted
> {code}
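The shape of the fix, per the PR title ("Do not execute Funnels with no outgoing connect..."), is a guard before the transfer. A minimal illustrative sketch only (NiFi's StandardFunnel is Java; the `Funnel` class and `trigger` method here are my stand-ins, not NiFi's API):

```javascript
// Illustrative sketch: a pass-through component that would previously
// transfer FlowFiles to its outgoing relationship unconditionally, and
// therefore throw when no outgoing connection exists. The guard skips
// execution instead, leaving the FlowFiles queued.
class Funnel {
  constructor(outgoingConnections) {
    this.outgoingConnections = outgoingConnections; // array of connection ids
  }

  trigger(flowFiles) {
    // Guard: no outgoing connection means no relationship to transfer to,
    // so do nothing rather than raise "Relationship '' is not known".
    if (this.outgoingConnections.length === 0) {
      return []; // nothing transferred; input stays queued
    }
    // Otherwise pass every queued FlowFile straight through.
    return flowFiles.map(function (ff) {
      return { flowFile: ff, to: this.outgoingConnections[0] };
    }, this);
  }
}

const dangling = new Funnel([]);
const connected = new Funnel(['conn-1']);
console.log(dangling.trigger(['ff-1']).length);  // 0: skipped, no exception
console.log(connected.trigger(['ff-1']).length); // 1: transferred
```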
[jira] [Updated] (NIFI-5018) basic snap-to-grid feature for UI
[ https://issues.apache.org/jira/browse/NIFI-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ryan Bower updated NIFI-5018:
-----------------------------
Description:

NiFi 1.2.0 introduced the flow alignment feature, described as follows:

*NiFi 1.2.0 has a nice, new little feature that will surely please those who may spend a bit of time – for some, perhaps A LOT of time – getting their flow to line up perfectly. The background grid lines can help with this, but the process can still be quite tedious with many components. Now there is a quick, easy way.*

I've made a slight modification to the UI (roughly 5 lines) that results in a "snap-to-grid" for selected components. See [this video|https://www.youtube.com/watch?v=S7lnBMMO6KE&feature=youtu.be] for an example of it in action.

Target file: nifi-1.6.0-src\nifi-nar-bundles\nifi-framework-bundle\nifi-framework\nifi-web\nifi-web-ui\src\main\webapp\js\nf\canvas\nf-draggable.js

The processor alignment is based on rounding the component's X and Y coordinates during the drag event. The result is a consistent "snap" alignment. I modified the following code to achieve this:

{code}
// previous code...
(this, function ($, d3, nfConnection, nfBirdseye, nfCanvasUtils, nfCommon, nfDialog, nfClient, nfErrorHandler) {
    'use strict';

    var nfCanvas;
    var drag;

    // added for snap-to-grid feature.
    var snapTo = 16;

    // code...

    var nfDraggable = {
        // more code...
        if (dragSelection.empty()) {
            // more code...
        } else {
            // update the position of the drag selection
            dragSelection.attr('x', function (d) {
                d.x += d3.event.dx;
                // rounding the result achieves the "snap" alignment
                return (Math.round(d.x / snapTo) * snapTo);
            })
            .attr('y', function (d) {
                d.y += d3.event.dy;
                return (Math.round(d.y / snapTo) * snapTo);
            });
        }
        // more code
        updateComponentPosition: function (d, delta) {
            // perform rounding again for the update
            var newPosition = {
                'x': (Math.round((d.position.x + delta.x) / snapTo) * snapTo),
                'y': (Math.round((d.position.y + delta.y) / snapTo) * snapTo)
            };
            // more code
        }
    }
{code}

The downside of this is that components must start aligned in order to snap to the same alignment on the canvas. To remedy this, just use the 1.2.0 flow alignment feature. Note: this is only an issue for old, unaligned flows; new flows and aligned flows don't have this problem.
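The rounding step at the heart of the patch can be exercised on its own. A minimal standalone sketch (the `snap` helper name is mine; the patch inlines the expression):

```javascript
// Standalone demonstration of the snap-to-grid rounding described above.
var snapTo = 16; // grid pitch in pixels, as in the patch

function snap(coordinate) {
  // Round to the nearest multiple of snapTo, so a dragged component
  // lands on the closest grid line rather than at an arbitrary pixel.
  return Math.round(coordinate / snapTo) * snapTo;
}

console.log(snap(141)); // 144 (141 / 16 = 8.8125, rounds up to 9 * 16)
console.log(snap(135)); // 128 (135 / 16 = 8.4375, rounds down to 8 * 16)
console.log(snap(160)); // 160 (already on the grid)
```

This is also why the "components must start aligned" caveat holds: the rounding only quantizes the delta-adjusted position, so two components offset by less than half the grid pitch snap to different grid lines.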
[jira] [Updated] (NIFI-5018) basic snap-to-grid feature for UI
[ https://issues.apache.org/jira/browse/NIFI-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan Bower updated NIFI-5018: - Description: NiFi 1.2.0 contained the flow alignment feature, detailed: *NiFi 1.2.0 has a nice, new little feature that will surely please those who may spend a bit of time – for some, perhaps A LOT of time – getting their flow to line up perfectly. The background grid lines can help with this, but the process can still be quite tedious with many components. Now there is a quick, easy way.* **I've made a slight modification to the UI (roughly 8 lines) that results in a "snap-to-grid" for selected components. See [this video|https://www.youtube.com/watch?v=S7lnBMMO6KE=youtu.be] for an example of it in action. Target file: nifi-1.6.0-src\nifi-nar-bundles\nifi-framework-bundle\nifi-framework\nifi-web\nifi-web-ui\src\main\webapp\js\nf\canvas\nf-draggable.js The processor alignment is based on rounding the component's X and Y coordinates during the drag event. The result is a consistent "snap" alignment. 
I modified the following code to achieve this: {{// previous code...}} {{(this, function ($, d3, nfConnection, nfBirdseye, nfCanvasUtils, nfCommon, nfDialog, nfClient, nfErrorHandler) {}} {{ 'use strict';}} {{ var nfCanvas;}} {{ var drag;}}{{ }}{{// added for snap-to-grid feature.}} {{ var snapTo = 16;}} {{// code...}} {{ var nfDraggable = {}} {{ // more code...}} {{ if (dragSelection.empty()) }}{ {{ // more code...}} {{ } else {}} {{ // update the position of the drag selection}}{{ }} {{ dragSelection.attr('x', function (d) {}} {{ d.x += d3.event.dx;}} {{ // rounding the result achieves the "snap" alignment}} {{ return (Math.round(d.x/snapTo) * snapTo);}} {{ })}} {{ .attr('y', function (d) {}} {{ d.y += d3.event.dy;}} {{ return (Math.round(d.y/snapTo) * snapTo);}} {{ });}} } {{ // more code}} {{ updateComponentPosition: function (d, delta) {}} {{ // perform rounding again for the update}} {{ var newPosition = {}} {{ 'x': (Math.round((d.position.x + delta.x)/snapTo) * snapTo),}} {{ 'y': (Math.round((d.position.y + delta.y)/snapTo) * snapTo)}} {{ };}} {{ // more code}} {{}}} The downside of this is that components must start aligned in order to snap to the same alignment on the canvas. To remedy this, just use the 1.2.0 flow alignment feature. Note: this is only an issue for old, unaligned flows. New flows and aligned flows don't have this problem. was: NiFi 1.2.0 contained the flow alignment feature, detailed: *NiFi 1.2.0 has a nice, new little feature that will surely please those who may spend a bit of time – for some, perhaps A LOT of time – getting their flow to line up perfectly. The background grid lines can help with this, but the process can still be quite tedious with many components. Now there is a quick, easy way.* **I've made a slight modification to the UI (roughly 8 lines) that results in a "snap-to-grid" for selected components. See [this video|https://www.youtube.com/watch?v=S7lnBMMO6KE=youtu.be] for an example of it in action. 
Target file: nifi-1.6.0-src\nifi-nar-bundles\nifi-framework-bundle\nifi-framework\nifi-web\nifi-web-ui\src\main\webapp\js\nf\canvas\nf-draggable.js The processor alignment is based on rounding the component's X and Y coordinates during the drag event. The result is a consistent "snap" alignment. I modified the following code to achieve this: {{// unmodified code...}} {{(this, function ($, d3, nfConnection, nfBirdseye, nfCanvasUtils, nfCommon, nfDialog, nfClient, nfErrorHandler) {}} {{ 'use strict';}} {{ var nfCanvas;}} {{ var drag;}}{{ }}{{// added for snap-to-grid feature.}} {{ var snapTo = 16;}} // unmodified code... {{ var nfDraggable = {}} {{ //more code...}} {{ if (dragSelection.empty()) }}{ {{ // unmodified code...}} {{ } else {}} {{ // update the position of the drag selection}}{{ }} {{ dragSelection.attr('x', function (d) {}} {{ d.x += d3.event.dx;}} {{ // rounding the result achieves the "snap" alignment}} {{ return (Math.round(d.x/snapTo) * snapTo);}} {{ })}} {{ .attr('y', function (d) {}} {{ d.y += d3.event.dy;}} {{ return (Math.round(d.y/snapTo) * snapTo);}} {{ });}} } {{ // unmodified code}} {{ updateComponentPosition: function (d, delta) {}} {{ // perform rounding again for the update}} {{ var newPosition = {}} {{ 'x': (Math.round((d.position.x + delta.x)/snapTo) * snapTo),}} {{ 'y': (Math.round((d.position.y + delta.y)/snapTo) * snapTo)}} {{ };}} {{ // unmodified code}} {{}}} The downside of this is that components must start aligned in order to snap to the
[jira] [Updated] (NIFI-5018) basic snap-to-grid feature for UI
[ https://issues.apache.org/jira/browse/NIFI-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan Bower updated NIFI-5018: - Description: NiFi 1.2.0 contained the flow alignment feature, detailed: *NiFi 1.2.0 has a nice, new little feature that will surely please those who may spend a bit of time – for some, perhaps A LOT of time – getting their flow to line up perfectly. The background grid lines can help with this, but the process can still be quite tedious with many components. Now there is a quick, easy way.* **I've made a slight modification to the UI (roughly 8 lines) that results in a "snap-to-grid" for selected components. See [this video|https://www.youtube.com/watch?v=S7lnBMMO6KE=youtu.be] for an example of it in action. Target file: nifi-1.6.0-src\nifi-nar-bundles\nifi-framework-bundle\nifi-framework\nifi-web\nifi-web-ui\src\main\webapp\js\nf\canvas\nf-draggable.js The processor alignment is based on rounding the component's X and Y coordinates during the drag event. The result is a consistent "snap" alignment. I modified the following code to achieve this: {{// unmodified code...}} {{(this, function ($, d3, nfConnection, nfBirdseye, nfCanvasUtils, nfCommon, nfDialog, nfClient, nfErrorHandler) {}} {{ 'use strict';}} {{ var nfCanvas;}} {{ var drag;}}{{ }}{{// added for snap-to-grid feature.}} {{ var snapTo = 16;}} // unmodified code... 
{{ var nfDraggable = {}} {{ //more code...}} {{ if (dragSelection.empty()) }}{ {{ // unmodified code...}} {{ } else {}} {{ // update the position of the drag selection}}{{ }} {{ dragSelection.attr('x', function (d) {}} {{ d.x += d3.event.dx;}} {{ // rounding the result achieves the "snap" alignment}} {{ return (Math.round(d.x/snapTo) * snapTo);}} {{ })}} {{ .attr('y', function (d) {}} {{ d.y += d3.event.dy;}} {{ return (Math.round(d.y/snapTo) * snapTo);}} {{ });}} } {{ // unmodified code}} {{ updateComponentPosition: function (d, delta) {}} {{ // perform rounding again for the update}} {{ var newPosition = {}} {{ 'x': (Math.round((d.position.x + delta.x)/snapTo) * snapTo),}} {{ 'y': (Math.round((d.position.y + delta.y)/snapTo) * snapTo)}} {{ };}} {{ // unmodified code}} {{}}} The downside of this is that components must start aligned in order to snap to the same alignment on the canvas. To remedy this, just use the 1.2.0 flow alignment feature. Note: this is only an issue for old, unaligned flows. New flows and aligned flows don't have this problem. was: NiFi 1.2.0 contained the flow alignment feature, detailed: *NiFi 1.2.0 has a nice, new little feature that will surely please those who may spend a bit of time – for some, perhaps A LOT of time – getting their flow to line up perfectly. The background grid lines can help with this, but the process can still be quite tedious with many components. Now there is a quick, easy way.* **I've made a slight modification to the UI (roughly 8 lines) that results in a "snap-to-grid" for selected components. See [this video|https://www.youtube.com/watch?v=S7lnBMMO6KE=youtu.be] for an example of it in action. Target file: nifi-1.6.0-src\nifi-nar-bundles\nifi-framework-bundle\nifi-framework\nifi-web\nifi-web-ui\src\main\webapp\js\nf\canvas\nf-draggable.js The processor alignment is based on rounding the component's X and Y coordinates during the drag event. The result is a consistent "snap" alignment. 
I modified the following code to achieve this: {{...}}{{(this, function ($, d3, nfConnection, nfBirdseye, nfCanvasUtils, nfCommon, nfDialog, nfClient, nfErrorHandler) {}} {{'use strict';}} {{var nfCanvas;}} {{var drag;}}{{ }}{{// added for snap-to-grid feature.}} {{var snapTo = 16;}} {{...}} {{ var nfDraggable = {}} {{ ...}} {{ if (dragSelection.empty()) }}{ {{ ...}} {{ } else {}} {{ // update the position of the drag selection}}{{ }} {{ dragSelection.attr('x', function (d) {}} {{ d.x += d3.event.dx;}} {{ // rounding the result achieves the "snap" alignment}} {{ return (Math.round(d.x/snapTo) * snapTo);}} {{ })}} {{ .attr('y', function (d) {}} {{ d.y += d3.event.dy;}} {{ return (Math.round(d.y/snapTo) * snapTo);}} {{ });}} } {{ ...}} {{ updateComponentPosition: function (d, delta) {}} {{ // perform rounding again for the update}} {{ var newPosition = {}} {{ 'x': (Math.round((d.position.x + delta.x)/snapTo) * snapTo),}} {{ 'y': (Math.round((d.position.y + delta.y)/snapTo) * snapTo)}} {{ };}} {{ ...}} {{}}} The downside of this is that components must start aligned in order to snap to the same alignment on the canvas. To remedy this, just use the 1.2.0 flow alignment feature. Note: this is
[jira] [Updated] (NIFI-5018) basic snap-to-grid feature for UI
[ https://issues.apache.org/jira/browse/NIFI-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan Bower updated NIFI-5018: - Description: NiFi 1.2.0 contained the flow alignment feature, detailed: *NiFi 1.2.0 has a nice, new little feature that will surely please those who may spend a bit of time – for some, perhaps A LOT of time – getting their flow to line up perfectly. The background grid lines can help with this, but the process can still be quite tedious with many components. Now there is a quick, easy way.* **I've made a slight modification to the UI (roughly 8 lines) that results in a "snap-to-grid" for selected components. See [this video|https://www.youtube.com/watch?v=S7lnBMMO6KE=youtu.be] for an example of it in action. Target file: nifi-1.6.0-src\nifi-nar-bundles\nifi-framework-bundle\nifi-framework\nifi-web\nifi-web-ui\src\main\webapp\js\nf\canvas\nf-draggable.js The processor alignment is based on rounding the component's X and Y coordinates during the drag event. The result is a consistent "snap" alignment. 
I modified the following code to achieve this: {{...}}{{(this, function ($, d3, nfConnection, nfBirdseye, nfCanvasUtils, nfCommon, nfDialog, nfClient, nfErrorHandler) {}} {{'use strict';}} {{var nfCanvas;}} {{var drag;}}{{ }}{{// added for snap-to-grid feature.}} {{var snapTo = 16;}} {{...}} {{ var nfDraggable = {}} {{ ...}} {{ if (dragSelection.empty()) }}{ {{ ...}} {{ } else {}} {{ // update the position of the drag selection}}{{ }} {{ dragSelection.attr('x', function (d) {}} {{ d.x += d3.event.dx;}} {{ // rounding the result achieves the "snap" alignment}} {{ return (Math.round(d.x/snapTo) * snapTo);}} {{ })}} {{ .attr('y', function (d) {}} {{ d.y += d3.event.dy;}} {{ return (Math.round(d.y/snapTo) * snapTo);}} {{ });}} } {{ ...}} {{ updateComponentPosition: function (d, delta) {}} {{ // perform rounding again for the update}} {{ var newPosition = {}} {{ 'x': (Math.round((d.position.x + delta.x)/snapTo) * snapTo),}} {{ 'y': (Math.round((d.position.y + delta.y)/snapTo) * snapTo)}} {{ };}} {{ ...}} {{}}} The downside of this is that components must start aligned in order to snap to the same alignment on the canvas. To remedy this, just use the 1.2.0 flow alignment feature. Note: this is only an issue for old, unaligned flows. New flows and aligned flows don't have this problem. was: NiFi 1.2.0 contained the flow alignment feature, detailed: *NiFi 1.2.0 has a nice, new little feature that will surely please those who may spend a bit of time – for some, perhaps A LOT of time – getting their flow to line up perfectly. The background grid lines can help with this, but the process can still be quite tedious with many components. Now there is a quick, easy way.* **I've made a slight modification to the UI (roughly 8 lines) that results in a "snap-to-grid" for selected components. See [this video|https://www.youtube.com/watch?v=S7lnBMMO6KE=youtu.be] for an example of it in action. 
Target file: nifi-1.6.0-src\nifi-nar-bundles\nifi-framework-bundle\nifi-framework\nifi-web\nifi-web-ui\src\main\webapp\js\nf\canvas\nf-draggable.js The processor alignment is based on rounding the component's X and Y coordinates during the drag event. The result is a consistent "snap" alignment. I modified the following code to achieve this: {{...}} {{(this, function ($, d3, nfConnection, nfBirdseye, nfCanvasUtils, nfCommon, nfDialog, nfClient, nfErrorHandler) {}} {{'use strict';}}{{var nfCanvas;}} \{{ var drag;}} {{// added for snap-to-grid feature.}} \{{ var snapTo = 16;}} {{...}} {{ var nfDraggable = {}} {{ ...}} {{ if (dragSelection.empty()) }}{ {{ ...}} {{ } else {}} {{ // update the position of the drag selection}}{{ }} {{ dragSelection.attr('x', function (d) {}} {{ d.x += d3.event.dx;}} {{ // rounding the result achieves the "snap" alignment}} {{ return (Math.round(d.x/snapTo) * snapTo);}} {{ })}} {{ .attr('y', function (d) {}} {{ d.y += d3.event.dy;}} {{ return (Math.round(d.y/snapTo) * snapTo);}} {{ });}} } {{ ...}} {{ updateComponentPosition: function (d, delta) {}} {{ // perform rounding again for the update}} {{ var newPosition = {}} {{ 'x': (Math.round((d.position.x + delta.x)/snapTo) * snapTo),}} {{ 'y': (Math.round((d.position.y + delta.y)/snapTo) * snapTo)}} {{ };}} {{ ...}} {{}}} The downside of this is that components must start aligned in order to snap to the same alignment on the canvas. To remedy this, just use the 1.2.0 flow alignment feature. Note: this is only an issue for old, unaligned flows. New flows and aligned flows don't have this problem. > basic
[jira] [Updated] (NIFI-5018) basic snap-to-grid feature for UI
[ https://issues.apache.org/jira/browse/NIFI-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan Bower updated NIFI-5018: - Description: NiFi 1.2.0 contained the flow alignment feature, detailed: *NiFi 1.2.0 has a nice, new little feature that will surely please those who may spend a bit of time – for some, perhaps A LOT of time – getting their flow to line up perfectly. The background grid lines can help with this, but the process can still be quite tedious with many components. Now there is a quick, easy way.* **I've made a slight modification to the UI (roughly 8 lines) that results in a "snap-to-grid" for selected components. See [this video|https://www.youtube.com/watch?v=S7lnBMMO6KE=youtu.be] for an example of it in action. Target file: nifi-1.6.0-src\nifi-nar-bundles\nifi-framework-bundle\nifi-framework\nifi-web\nifi-web-ui\src\main\webapp\js\nf\canvas\nf-draggable.js The processor alignment is based on rounding the component's X and Y coordinates during the drag event. The result is a consistent "snap" alignment. 
I modified the following code to achieve this: {{...}} {{(this, function ($, d3, nfConnection, nfBirdseye, nfCanvasUtils, nfCommon, nfDialog, nfClient, nfErrorHandler) {}} {{'use strict';}}{{var nfCanvas;}} \{{ var drag;}} {{// added for snap-to-grid feature.}} \{{ var snapTo = 16;}} {{...}} {{ var nfDraggable = {}} {{ ...}} {{ if (dragSelection.empty()) }}{ {{ ...}} {{ } else {}} {{ // update the position of the drag selection}}{{ }} {{ dragSelection.attr('x', function (d) {}} {{ d.x += d3.event.dx;}} {{ // rounding the result achieves the "snap" alignment}} {{ return (Math.round(d.x/snapTo) * snapTo);}} {{ })}} {{ .attr('y', function (d) {}} {{ d.y += d3.event.dy;}} {{ return (Math.round(d.y/snapTo) * snapTo);}} {{ });}} } {{ ...}} {{ updateComponentPosition: function (d, delta) {}} {{ // perform rounding again for the update}} {{ var newPosition = {}} {{ 'x': (Math.round((d.position.x + delta.x)/snapTo) * snapTo),}} {{ 'y': (Math.round((d.position.y + delta.y)/snapTo) * snapTo)}} {{ };}} {{ ...}} {{}}} The downside of this is that components must start aligned in order to snap to the same alignment on the canvas. To remedy this, just use the 1.2.0 flow alignment feature. Note: this is only an issue for old, unaligned flows. New flows and aligned flows don't have this problem. was: NiFi 1.2.0 contained the flow alignment feature, detailed: *NiFi 1.2.0 has a nice, new little feature that will surely please those who may spend a bit of time – for some, perhaps A LOT of time – getting their flow to line up perfectly. The background grid lines can help with this, but the process can still be quite tedious with many components. Now there is a quick, easy way.* **I've made a slight modification to the UI (roughly 8 lines) that results in a "snap-to-grid" for selected components. See [this video|https://www.youtube.com/watch?v=S7lnBMMO6KE=youtu.be] for an example of it in action. 
Target file: nifi-1.6.0-src\nifi-nar-bundles\nifi-framework-bundle\nifi-framework\nifi-web\nifi-web-ui\src\main\webapp\js\nf\canvas\nf-draggable.js The processor alignment is based on rounding the component's X and Y coordinates during the drag event. The result is a consistent "snap" alignment. I modified the following code to achieve this: {{...}} {{(this, function ($, d3, nfConnection, nfBirdseye, nfCanvasUtils, nfCommon, nfDialog, nfClient, nfErrorHandler) {}} {{'use strict';}}{{var nfCanvas;}} \{{ var drag;}} {{// added for snap-to-grid feature.}} \{{ var snapTo = 16;}} {{...}} {{ var nfDraggable = {}} {{ ...}} {{ if (dragSelection.empty()) }}{ {{ ...}} {{ } else {}}{\{ }} {{ // update the position of the drag selection}}{{ }} {{ dragSelection.attr('x', function (d) {}} {{ d.x += d3.event.dx;}} {{ // rounding the result achieves the "snap" alignment}} {{ return (Math.round(d.x/snapTo) * snapTo);}} {{ })}} {{ .attr('y', function (d) {}} {{ d.y += d3.event.dy;}} {{ return (Math.round(d.y/snapTo) * snapTo);}} {{ });}} {{ ...}} {{ updateComponentPosition: function (d, delta) {}} {{ // perform rounding again for the update}} {{ var newPosition = {}} {{ 'x': (Math.round((d.position.x + delta.x)/snapTo) * snapTo),}} {{ 'y': (Math.round((d.position.y + delta.y)/snapTo) * snapTo)}} {{ };}} {{ ...}} {{}}} The downside of this is that components must start aligned in order to snap to the same alignment on the canvas. To remedy this, just use the 1.2.0 flow alignment feature. Note: this is only an issue for old, unaligned flows. New flows and aligned flows don't have this problem. > basic
[jira] [Updated] (NIFI-5018) basic snap-to-grid feature for UI
[ https://issues.apache.org/jira/browse/NIFI-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan Bower updated NIFI-5018: - Description: NiFi 1.2.0 contained the flow alignment feature, detailed: *NiFi 1.2.0 has a nice, new little feature that will surely please those who may spend a bit of time – for some, perhaps A LOT of time – getting their flow to line up perfectly. The background grid lines can help with this, but the process can still be quite tedious with many components. Now there is a quick, easy way.* **I've made a slight modification to the UI (roughly 8 lines) that results in a "snap-to-grid" for selected components. See [this video|https://www.youtube.com/watch?v=S7lnBMMO6KE=youtu.be] for an example of it in action. Target file: nifi-1.6.0-src\nifi-nar-bundles\nifi-framework-bundle\nifi-framework\nifi-web\nifi-web-ui\src\main\webapp\js\nf\canvas\nf-draggable.js The processor alignment is based on rounding the component's X and Y coordinates during the drag event. The result is a consistent "snap" alignment. 
I modified the following code to achieve this: {{...}} {{(this, function ($, d3, nfConnection, nfBirdseye, nfCanvasUtils, nfCommon, nfDialog, nfClient, nfErrorHandler) {}} {{'use strict';}}{{var nfCanvas;}} \{{ var drag;}} {{// added for snap-to-grid feature.}} \{{ var snapTo = 16;}} {{...}} {{ var nfDraggable = {}} {{ ...}} {{ if (dragSelection.empty()) }}{ {{ ...}} {{ } else {}}{\{ }} {{ // update the position of the drag selection}}{{ }} {{ dragSelection.attr('x', function (d) {}} {{ d.x += d3.event.dx;}} {{ // rounding the result achieves the "snap" alignment}} {{ return (Math.round(d.x/snapTo) * snapTo);}} {{ })}} {{ .attr('y', function (d) {}} {{ d.y += d3.event.dy;}} {{ return (Math.round(d.y/snapTo) * snapTo);}} {{ });}} {{ ...}} {{ updateComponentPosition: function (d, delta) {}} {{ // perform rounding again for the update}} {{ var newPosition = {}} {{ 'x': (Math.round((d.position.x + delta.x)/snapTo) * snapTo),}} {{ 'y': (Math.round((d.position.y + delta.y)/snapTo) * snapTo)}} {{ };}} {{ ...}} {{}}} The downside of this is that components must start aligned in order to snap to the same alignment on the canvas. To remedy this, just use the 1.2.0 flow alignment feature. Note: this is only an issue for old, unaligned flows. New flows and aligned flows don't have this problem. was: NiFi 1.2.0 contained the flow alignment feature, detailed: *NiFi 1.2.0 has a nice, new little feature that will surely please those who may spend a bit of time – for some, perhaps A LOT of time – getting their flow to line up perfectly. The background grid lines can help with this, but the process can still be quite tedious with many components. Now there is a quick, easy way.* **I've made a slight modification to the UI (roughly 8 lines) that results in a "snap-to-grid" for selected components. See [this video|https://www.youtube.com/watch?v=S7lnBMMO6KE=youtu.be] for an example of it in action. 
Target file: nifi-1.6.0-src\nifi-nar-bundles\nifi-framework-bundle\nifi-framework\nifi-web\nifi-web-ui\src\main\webapp\js\nf\canvas\nf-draggable.js The processor alignment is based on rounding the component's X and Y coordinates during the drag event. The result is a consistent "snap" alignment. I modified the following code to achieve this: ... {{(this, function ($, d3, nfConnection, nfBirdseye, nfCanvasUtils, nfCommon, nfDialog, nfClient, nfErrorHandler) {}} {{'use strict';}}{{var nfCanvas;}} {{ var drag;}} {{// added for snap-to-grid feature.}} {{ var snapTo = 16;}} {{...}} {{ var nfDraggable = {}} {{ ...}} {{ if (dragSelection.empty()) }}{ {{ ...}} {{ } else {}}{{ }} {{ // update the position of the drag selection}}{{ }} {{ dragSelection.attr('x', function (d) {}} {{ d.x += d3.event.dx;}} {{ // rounding the result achieves the "snap" alignment}} {{ return (Math.round(d.x/snapTo) * snapTo);}} {{ })}} {{ .attr('y', function (d) {}} {{ d.y += d3.event.dy;}} {{ return (Math.round(d.y/snapTo) * snapTo);}} {{ });}} {{ ...}} {{ updateComponentPosition: function (d, delta) {}} {{ // perform rounding again for the update}} {{ var newPosition = {}} {{ 'x': (Math.round((d.position.x + delta.x)/snapTo) * snapTo),}} {{ 'y': (Math.round((d.position.y + delta.y)/snapTo) * snapTo)}} {{ };}} {{ ...}} {{}}} The downside of this is that components must start aligned in order to snap to the same alignment on the canvas. To remedy this, just use the 1.2.0 flow alignment feature. Note: this is only an issue for old, unaligned flows. New flows and aligned flows don't have this problem. > basic snap-to-grid
[jira] [Updated] (NIFI-5018) basic snap-to-grid feature for UI
[ https://issues.apache.org/jira/browse/NIFI-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan Bower updated NIFI-5018: -- Description:

NiFi 1.2.0 contained the flow alignment feature, detailed:

*NiFi 1.2.0 has a nice, new little feature that will surely please those who may spend a bit of time – for some, perhaps A LOT of time – getting their flow to line up perfectly. The background grid lines can help with this, but the process can still be quite tedious with many components. Now there is a quick, easy way.*

I've made a slight modification to the UI (roughly 8 lines) that results in a "snap-to-grid" for selected components. See [this video|https://www.youtube.com/watch?v=S7lnBMMO6KE=youtu.be] for an example of it in action.

Target file: nifi-1.6.0-src\nifi-nar-bundles\nifi-framework-bundle\nifi-framework\nifi-web\nifi-web-ui\src\main\webapp\js\nf\canvas\nf-draggable.js

The processor alignment is based on rounding the component's X and Y coordinates during the drag event. The result is a consistent "snap" alignment. I modified the following code to achieve this:
{code}
(this, function ($, d3, nfConnection, nfBirdseye, nfCanvasUtils, nfCommon, nfDialog, nfClient, nfErrorHandler) {
    'use strict';
    var nfCanvas;
    var drag;
    // added for snap-to-grid feature
    var snapTo = 16;
    ...
    var nfDraggable = {
        ...
        if (dragSelection.empty()) {
            ...
        } else {
            // update the position of the drag selection
            dragSelection.attr('x', function (d) {
                d.x += d3.event.dx;
                // rounding the result achieves the "snap" alignment
                return (Math.round(d.x / snapTo) * snapTo);
            })
            .attr('y', function (d) {
                d.y += d3.event.dy;
                return (Math.round(d.y / snapTo) * snapTo);
            });
        }
        ...
        updateComponentPosition: function (d, delta) {
            // perform rounding again for the update
            var newPosition = {
                'x': (Math.round((d.position.x + delta.x) / snapTo) * snapTo),
                'y': (Math.round((d.position.y + delta.y) / snapTo) * snapTo)
            };
            ...
        }
        ...
}
{code}

The downside of this is that components must start aligned in order to snap to the same alignment on the canvas. To remedy this, just use the 1.2.0 flow alignment feature. Note: this is only an issue for old, unaligned flows. New flows and aligned flows don't have this problem.

was: NiFi 1.2.0 contained the flow alignment feature, detailed:

*NiFi 1.2.0 has a nice, new little feature that will surely please those who may spend a bit of time – for some, perhaps A LOT of time – getting their flow to line up perfectly. The background grid lines can help with this, but the process can still be quite tedious with many components. Now there is a quick, easy way.*

I've made a slight modification to the UI (roughly 8 lines) that results in a "snap-to-grid" for selected components. See [this video|https://www.youtube.com/watch?v=S7lnBMMO6KE=youtu.be] for an example of it in action.
Target file: nifi-1.5.0-src\nifi-nar-bundles\nifi-framework-bundle\nifi-framework\nifi-web\nifi-web-ui\src\main\webapp\js\nf\canvas\nf-draggable.js

Disclaimer: I'm not experienced with javascript. There's definitely a better way to do this, code-wise, but this works and it's consistent.

The processor alignment is based on rounding the component's X and Y coordinates during the drag event. The result is a consistent "snap" alignment. I modified nf-draggable.js to achieve this:
{code}
var nfDraggable = {
    ...
    if (dragSelection.empty()) {
        ...
    } else {
        // added vars for preview outline rounding
        var gridX = 16;
        var gridY = 16;
        // update the position of the drag selection
        dragSelection.attr('x', function (d) {
            d.x += d3.event.dx;
            // rounding the result achieves the "snap" alignment
            var roundedX = Math.round(d.x / gridX) * gridX;
            return roundedX;
        })
        .attr('y', function (d) {
            d.y += d3.event.dy;
            var roundedY = Math.round(d.y / gridY) * gridY;
            return roundedY;
        });
    }
    ...
    updateComponentPosition: function (d, delta) {
        // added vars for position update. These must match the previous two
        var gridX = 16;
        var gridY = 16;
        // perform rounding again for the update
        var newPosition = {
            'x': (Math.round((d.position.x + delta.x) / gridX) * gridX),
            'y': (Math.round((d.position.y + delta.y) / gridY) * gridY)
        };
        ...
    }
    ...
}
{code}

The downside of this is that components must start aligned in order to snap to the same alignment on the canvas. To remedy this, just use the 1.2.0 flow alignment feature. Note: this is only an issue for old, unaligned flows. New flows and aligned flows don't have this problem.
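The core of the change is a single rounding expression, which can be exercised outside NiFi. Below is a minimal standalone sketch in plain JavaScript; `snapToGrid` is a hypothetical helper name introduced here for illustration, not part of nf-draggable.js:

```javascript
// Standalone sketch of the snap-to-grid rounding used in nf-draggable.js.
// snapToGrid is a hypothetical helper name, introduced for illustration.
var snapTo = 16;

function snapToGrid(value, grid) {
    // rounding to the nearest multiple of `grid` achieves the "snap" alignment
    return Math.round(value / grid) * grid;
}

// simulate a drag: the raw position accumulates the deltas, while the
// rendered position snaps to the grid (as dragSelection.attr('x'/'y') does)
var d = { x: 100, y: 100 };
d.x += 7;   // d3.event.dx
d.y += -3;  // d3.event.dy
console.log(snapToGrid(d.x, snapTo), snapToGrid(d.y, snapTo)); // 112 96
```

Because updateComponentPosition applies the same rounding when the final position is persisted, the preview outline and the stored coordinates stay in agreement.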
[jira] [Commented] (NIFI-5018) basic snap-to-grid feature for UI
[ https://issues.apache.org/jira/browse/NIFI-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446416#comment-16446416 ] Ryan Bower commented on NIFI-5018: -- [~joewitt] build succeeded. The surefire plugin's test failed, but the project builds fine when I skip it. I had this issue on Windows as well. I'll create a pull request with this code. > basic snap-to-grid feature for UI > - > > Key: NIFI-5018 > URL: https://issues.apache.org/jira/browse/NIFI-5018 > Project: Apache NiFi > Issue Type: Improvement > Components: Core UI >Affects Versions: 1.5.0 > Environment: Tested on Windows >Reporter: Ryan Bower >Priority: Minor > Labels: web-ui > Original Estimate: 0.25h > Remaining Estimate: 0.25h > > NiFi 1.2.0 contained the flow alignment feature, detailed: > *NiFi 1.2.0 has a nice, new little feature that will surely please those who > may spend a bit of time – for some, perhaps A LOT of time – getting their > flow to line up perfectly. The background grid lines can help with this, but > the process can still be quite tedious with many components. Now there is a > quick, easy way.* > **I've made a slight modification to the UI (roughly 8 lines) that results in > a "snap-to-grid" for selected components. See [this > video|https://www.youtube.com/watch?v=S7lnBMMO6KE=youtu.be] for an > example of it in action. > Target file: > nifi-1.5.0-src\nifi-nar-bundles\nifi-framework-bundle\nifi-framework\nifi-web\nifi-web-ui\src\main\webapp\js\nf\canvas\nf-draggable.js > Disclaimer: I'm not experienced with javascript. There's definitely a better > way to do this, code-wise, but this works and it's consistent. > The processor alignment is based on rounding the component's X and Y > coordinates during the drag event. The result is a consistent "snap" > alignment. 
> I modified nf-draggable.js to achieve this:
> {code}
> var nfDraggable = {
>     ...
>     if (dragSelection.empty()) {
>         ...
>     } else {
>         // added vars for preview outline rounding
>         var gridX = 16;
>         var gridY = 16;
>         // update the position of the drag selection
>         dragSelection.attr('x', function (d) {
>             d.x += d3.event.dx;
>             // rounding the result achieves the "snap" alignment
>             var roundedX = Math.round(d.x / gridX) * gridX;
>             return roundedX;
>         })
>         .attr('y', function (d) {
>             d.y += d3.event.dy;
>             var roundedY = Math.round(d.y / gridY) * gridY;
>             return roundedY;
>         });
>     }
>     ...
>     updateComponentPosition: function (d, delta) {
>         // added vars for position update. These must match the previous two
>         var gridX = 16;
>         var gridY = 16;
>         // perform rounding again for the update
>         var newPosition = {
>             'x': (Math.round((d.position.x + delta.x) / gridX) * gridX),
>             'y': (Math.round((d.position.y + delta.y) / gridY) * gridY)
>         };
>         ...
>     }
>     ...
> }
> {code}
>
> The downside of this is that components must start aligned in order to snap to the same alignment on the canvas. To remedy this, just use the 1.2.0 flow alignment feature. Note: this is only an issue for old, unaligned flows. New flows and aligned flows don't have this problem.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5018) basic snap-to-grid feature for UI
[ https://issues.apache.org/jira/browse/NIFI-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446388#comment-16446388 ] Ryan Bower commented on NIFI-5018: -- Got lost these past few weeks with a big project. I've since transferred to an Ubuntu 16.04 workstation. The modified 1.6.0 source is building now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4185) Add XML record reader & writer services
[ https://issues.apache.org/jira/browse/NIFI-4185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446364#comment-16446364 ] ASF GitHub Bot commented on NIFI-4185: -- Github user JohannesDaniel commented on the issue: https://github.com/apache/nifi/pull/2587 @markap14 thank you for the response. I will simply remove that record tag validation as there are indeed many ways to do that before the data is processed by this reader. There is one little corner case I need to discuss. Assuming we have the following data: ``` value1... ``` If the reader is used with (coerce==true), the field "map_field" can be parsed by defining a map in the schema. The embedded key fields do not have to be defined; their values only have to be of the defined type for the map. If the reader is used with (coerce==false && dropUnknown==true), the reader will parse all fields that exist in the schema, ignoring their types. However, the data above will not be parsable even if the map exists in the schema. In this case, the reader identifies "map_field" as a field that exists in the schema, but the reader is not aware that it is of type map. Therefore, the reader will not parse the embedded key fields, as they don't exist in the schema. The field "map_field" will be classified as an empty field and not added to the record. Furthermore, even if the reader is used with (coerce==false && dropUnknown==true), it will be type-aware to some extent. The reader first checks whether fields exist in the schema. If that is the case, the reader additionally checks whether they are of type record (or of type array embedding records, respectively). If that is also the case, the reader retrieves the subschema in order to check whether subtags of the current tag are known. 
> Add XML record reader & writer services > --- > > Key: NIFI-4185 > URL: https://issues.apache.org/jira/browse/NIFI-4185 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.3.0 >Reporter: Andy LoPresto >Assignee: Johannes Peter >Priority: Major > Labels: json, records, xml > > With the addition of the {{RecordReader}} and {{RecordSetWriter}} paradigm, > XML conversion has not yet been targeted. This will replace the previous > ticket for XML to JSON conversion. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
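The coerce/dropUnknown corner case described above hinges on schema-guided descent: nested fields can only be resolved when the schema marks the parent as a record. Below is a minimal sketch of that filtering logic in plain JavaScript; the schema shape and the `dropUnknownFields` helper are hypothetical, not the NiFi record API:

```javascript
// Sketch of dropUnknown-style filtering: a field survives only if the
// schema names it, and nested fields are only examined when the schema
// marks the parent as type 'record'. The schema layout is hypothetical,
// not NiFi's RecordSchema API.
function dropUnknownFields(value, schemaFields) {
    var out = {};
    Object.keys(value).forEach(function (name) {
        var field = schemaFields[name];
        if (field === undefined) {
            return; // unknown field: dropped
        }
        if (field.type === 'record' && value[name] !== null && typeof value[name] === 'object') {
            // descend with the subschema, mirroring the reader's behavior
            out[name] = dropUnknownFields(value[name], field.fields);
        } else {
            out[name] = value[name]; // coerce=false: value kept as-is
        }
    });
    return out;
}

var schema = {
    field1: { type: 'string' },
    map_field: { type: 'record', fields: { key1: { type: 'string' } } }
};
var record = { field1: 'value1', map_field: { key1: 'a', key2: 'b' }, junk: 1 };
console.log(JSON.stringify(dropUnknownFields(record, schema)));
// {"field1":"value1","map_field":{"key1":"a"}}
```

This also illustrates the comment's point about maps: if `map_field` were typed as a map rather than a record, the embedded keys would not need to appear in the subschema at all.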
[jira] [Commented] (NIFI-5096) When Primary Node changes, occasionally both the new and old primary nodes continue running processors
[ https://issues.apache.org/jira/browse/NIFI-5096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446348#comment-16446348 ] ASF subversion and git services commented on NIFI-5096: --- Commit 54eb6bc23211ad2b499f42e14759f3646f806d2f in nifi's branch refs/heads/master from [~markap14] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=54eb6bc ] NIFI-5096: Periodically poll ZooKeeper to determine the leader for each registered role in Leader Election. This avoids a condition whereby a node may occasionally fail to receive notification that it is no longer the elected leader. NIFI-5096: More proactively setting leadership to false when ZooKeeper/Curator ConnectionState changes This closes #2646 > When Primary Node changes, occasionally both the new and old primary nodes > continue running processors > -- > > Key: NIFI-5096 > URL: https://issues.apache.org/jira/browse/NIFI-5096 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.7.0 > > > Occasionally we will see that Node A is Primary Node and then the Primary > Node switches to Node B, resulting in both Node A and Node B running > processors that are marked as Primary Node only. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5096) When Primary Node changes, occasionally both the new and old primary nodes continue running processors
[ https://issues.apache.org/jira/browse/NIFI-5096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446350#comment-16446350 ] ASF GitHub Bot commented on NIFI-5096: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2646 Thanks @markap14! This has been merged to master. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5096) When Primary Node changes, occasionally both the new and old primary nodes continue running processors
[ https://issues.apache.org/jira/browse/NIFI-5096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446351#comment-16446351 ] ASF GitHub Bot commented on NIFI-5096: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2646 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5096) When Primary Node changes, occasionally both the new and old primary nodes continue running processors
[ https://issues.apache.org/jira/browse/NIFI-5096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman updated NIFI-5096: -- Resolution: Fixed Fix Version/s: 1.7.0 Status: Resolved (was: Patch Available) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4456) Update JSON Record Reader / Writer to allow for 'json per line' format
[ https://issues.apache.org/jira/browse/NIFI-4456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446335#comment-16446335 ] ASF GitHub Bot commented on NIFI-4456: -- Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2640#discussion_r183159695 --- Diff: nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/json/AbstractJsonRowRecordReader.java --- @@ -197,7 +193,8 @@ private JsonNode getNextJsonNode() throws JsonParseException, IOException, Malfo return jsonParser.readValueAsTree(); case END_ARRAY: case START_ARRAY: -return null; +return getNextJsonNode(); --- End diff -- Would recommend that we just use 'continue' here as we do for END_OBJECT, rather than recursively calling ourselves. Is consistent and avoids unnecessarily deepening the stack, but will result in the same logic being evaluated > Update JSON Record Reader / Writer to allow for 'json per line' format > -- > > Key: NIFI-4456 > URL: https://issues.apache.org/jira/browse/NIFI-4456 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Mark Payne >Assignee: Matt Burgess >Priority: Major > > It is common, especially for archiving purposes, to have many JSON objects > combined with new-lines in between, in order to delimit the records. It would > be useful to allow record readers and writers to support this, instead of > requiring that JSON records being elements in a JSON Array. > For example, the following JSON Is considered two records: > {code} > [ > { "greeting" : "hello", "id" : 1 }, > { "greeting" : "good-bye", "id" : 2 } > ] > {code} > It would be beneficial to also support the format: > {code} > { "greeting" : "hello", "id" : 1 } > { "greeting" : "good-bye", "id" : 2 } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
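The review point above — preferring a loop with `continue` over a recursive self-call when skipping structural tokens — can be sketched in isolation. The token list and the `nextValueToken` name below are hypothetical scaffolding, not the Jackson parser API:

```javascript
// Loop-based token skipping, as the review suggests: structural tokens
// (START_ARRAY, END_ARRAY, END_OBJECT) are skipped with `continue` rather
// than a recursive call, so deeply nested input cannot deepen the call
// stack. Token names mimic Jackson's, but this is only a sketch.
function nextValueToken(tokens) {
    for (var i = 0; i < tokens.length; i++) {
        switch (tokens[i]) {
            case 'START_ARRAY':
            case 'END_ARRAY':
            case 'END_OBJECT':
                continue; // skip structural tokens, keep looping
            default:
                return tokens[i]; // first token that starts a value
        }
    }
    return null; // end of input
}

console.log(nextValueToken(['START_ARRAY', 'END_OBJECT', 'START_OBJECT']));
// START_OBJECT
```

In JavaScript (as in Java), `continue` inside a `switch` applies to the enclosing loop, which is what makes this pattern a drop-in replacement for the recursive call.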
[jira] [Commented] (NIFI-4456) Update JSON Record Reader / Writer to allow for 'json per line' format
[ https://issues.apache.org/jira/browse/NIFI-4456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446338#comment-16446338 ] ASF GitHub Bot commented on NIFI-4456: -- Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2640#discussion_r183161985 --- Diff: nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/json/JsonTreeReader.java --- @@ -39,11 +39,11 @@ @Tags({"json", "tree", "record", "reader", "parser"}) @CapabilityDescription("Parses JSON into individual Record objects. The Record that is produced will contain all top-level " -+ "elements of the corresponding JSON Object. " -+ "The root JSON element can be either a single element or an array of JSON elements, and each " -+ "element in that array will be treated as a separate record. " -+ "If the schema that is configured contains a field that is not present in the JSON, a null value will be used. If the JSON contains " -+ "a field that is not present in the schema, that field will be skipped. " ++ "elements of the corresponding JSON Objects. The reader does not require the flow file content to be well-formed JSON; rather " --- End diff -- I would make the same comment here as above, regarding the phrasing that the flow file content need not be well-formed JSON. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
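The two record layouts NIFI-4456 contrasts — a well-formed JSON array versus one object per line — can be exercised directly. Below is a minimal sketch in plain JavaScript; `parseRecords` is a hypothetical helper, not the NiFi reader implementation:

```javascript
// Sketch of the two layouts NIFI-4456 describes: a JSON array of records
// vs. the "json per line" format. parseRecords is a hypothetical helper,
// not the NiFi JsonTreeReader.
function parseRecords(text) {
    var trimmed = text.trim();
    if (trimmed.charAt(0) === '[') {
        return JSON.parse(trimmed); // well-formed JSON array of records
    }
    // "json per line": one JSON object per non-empty line
    return trimmed.split('\n')
        .filter(function (line) { return line.trim().length > 0; })
        .map(function (line) { return JSON.parse(line); });
}

var asArray = '[{"greeting":"hello","id":1},{"greeting":"good-bye","id":2}]';
var perLine = '{"greeting":"hello","id":1}\n{"greeting":"good-bye","id":2}';
console.log(parseRecords(asArray).length, parseRecords(perLine).length); // 2 2
```

Both inputs yield the same two records, which is exactly the equivalence the ticket asks the reader and writer to support.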
[jira] [Commented] (NIFI-4456) Update JSON Record Reader / Writer to allow for 'json per line' format
[ https://issues.apache.org/jira/browse/NIFI-4456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446337#comment-16446337 ] ASF GitHub Bot commented on NIFI-4456: -- Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2640#discussion_r183161097 --- Diff: nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/json/JsonPathReader.java --- @@ -48,15 +48,17 @@ import com.jayway.jsonpath.JsonPath; @Tags({"json", "jsonpath", "record", "reader", "parser"}) -@CapabilityDescription("Parses JSON records and evaluates user-defined JSON Path's against each JSON object. The root element may be either " -+ "a single JSON object or a JSON array. If a JSON array is found, each JSON object within that array is treated as a separate record. " -+ "User-defined properties define the fields that should be extracted from the JSON in order to form the fields of a Record. Any JSON field " -+ "that is not extracted via a JSONPath will not be returned in the JSON Records.") +@CapabilityDescription("Parses JSON records and evaluates user-defined JSON Path's against each JSON object. The reader does not require the " --- End diff -- I would be hesitant to indicate "The reader does not require the flow file content to be well-formed JSON." This gives me the impression that improper JSON will still be handled correctly, perhaps by skipping over the invalid parts? Perhaps we should word it as "While the reader expects each record to be well-formed JSON, the content of a FlowFile may consist of many records, either as a well-formed JSON array, or a series of JSON records with optional whitespace between them, such as the common 'JSON-per-line' format." or something of that nature. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4456) Update JSON Record Reader / Writer to allow for 'json per line' format
[ https://issues.apache.org/jira/browse/NIFI-4456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446336#comment-16446336 ] ASF GitHub Bot commented on NIFI-4456: -- Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2640#discussion_r183161782 --- Diff: nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/json/JsonRecordSetWriter.java --- @@ -64,18 +72,40 @@ .defaultValue("false") .required(true) .build(); +static final PropertyDescriptor OUTPUT_GROUPING = new PropertyDescriptor.Builder() +.name("output-grouping") +.displayName("Output Grouping") +.description("Specifies how the writer should output the JSON records (as an array or one object per line, e.g.) Note that if 'One Line Per Object' is " ++ "selected, then Pretty Print JSON must be false.") +.allowableValues(OUTPUT_ARRAY, OUTPUT_ONELINE) +.defaultValue(OUTPUT_ARRAY.getValue()) +.required(true) +.build(); private volatile boolean prettyPrint; private volatile NullSuppression nullSuppression; +private volatile OutputGrouping outputGrouping; @Override protected List getSupportedPropertyDescriptors() { final List properties = new ArrayList<>(super.getSupportedPropertyDescriptors()); properties.add(PRETTY_PRINT_JSON); properties.add(SUPPRESS_NULLS); +properties.add(OUTPUT_GROUPING); return properties; } +@Override +protected Collection customValidate(ValidationContext context) { +final List problems = new ArrayList<>(super.customValidate(context)); +// Don't allow Pretty Print if One Line Per Object is selected +if (context.getProperty(PRETTY_PRINT_JSON).asBoolean() && context.getProperty(OUTPUT_GROUPING).getValue().equals(OUTPUT_ONELINE.getValue())) { +problems.add(new ValidationResult.Builder().input("Pretty Print").valid(false) +.explanation("Pretty Print JSON must be false when One Line Per Object is selected").build()); --- End diff -- Would 
recommend phrasing it something like "... when 'Output Grouping' is set to 'One Line Per Object'" to provide additional clarity -- This message was sent by Atlassian JIRA (v7.6.3#76005)
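The mutual-exclusion rule this review thread discusses can be sketched outside of NiFi's ValidationContext. The config object and function name below are hypothetical, not the real API:

```javascript
// Sketch of the customValidate rule from the diff: 'Pretty Print JSON'
// and one-object-per-line output grouping are mutually exclusive.
// validateWriterConfig and the config shape are hypothetical, not NiFi's
// ValidationContext API.
function validateWriterConfig(config) {
    var problems = [];
    if (config.prettyPrintJson && config.outputGrouping === 'One Line Per Object') {
        problems.push("'Pretty Print JSON' must be false when " +
            "'Output Grouping' is set to 'One Line Per Object'");
    }
    return problems; // empty array means the configuration is valid
}

console.log(validateWriterConfig({ prettyPrintJson: true, outputGrouping: 'One Line Per Object' }));
```

Returning a list of problems rather than throwing mirrors NiFi's customValidate contract, where every violation is surfaced to the user at once.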
[GitHub] nifi pull request #2640: NIFI-4456: Support multiple JSON objects in JSON re...
Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2640#discussion_r183161097 --- Diff: nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/json/JsonPathReader.java --- @@ -48,15 +48,17 @@ import com.jayway.jsonpath.JsonPath; @Tags({"json", "jsonpath", "record", "reader", "parser"}) -@CapabilityDescription("Parses JSON records and evaluates user-defined JSON Path's against each JSON object. The root element may be either " -+ "a single JSON object or a JSON array. If a JSON array is found, each JSON object within that array is treated as a separate record. " -+ "User-defined properties define the fields that should be extracted from the JSON in order to form the fields of a Record. Any JSON field " -+ "that is not extracted via a JSONPath will not be returned in the JSON Records.") +@CapabilityDescription("Parses JSON records and evaluates user-defined JSON Path's against each JSON object. The reader does not require the " --- End diff -- I would be hesitant to indicate "The reader does not require the flow file content to be well-formed JSON." This gives me the impression that improper JSON will still be handled correctly, perhaps by skipping over the invalid parts? Perhaps we should word it as "While the reader expects each record to be well-formed JSON, the content of a FlowFile may consist of many records, either as a well-formed JSON array, or a series of JSON records with optional whitespace between them, such as the common 'JSON-per-line' format." or something of that nature. ---
[GitHub] nifi pull request #2640: NIFI-4456: Support multiple JSON objects in JSON re...
Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2640#discussion_r183161985 --- Diff: nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/json/JsonTreeReader.java --- @@ -39,11 +39,11 @@ @Tags({"json", "tree", "record", "reader", "parser"}) @CapabilityDescription("Parses JSON into individual Record objects. The Record that is produced will contain all top-level " -+ "elements of the corresponding JSON Object. " -+ "The root JSON element can be either a single element or an array of JSON elements, and each " -+ "element in that array will be treated as a separate record. " -+ "If the schema that is configured contains a field that is not present in the JSON, a null value will be used. If the JSON contains " -+ "a field that is not present in the schema, that field will be skipped. " ++ "elements of the corresponding JSON Objects. The reader does not require the flow file content to be well-formed JSON; rather " --- End diff -- I would make the same comment here as above, regarding the phrasing that the flow file content need not be well-formed JSON. ---
[GitHub] nifi pull request #2640: NIFI-4456: Support multiple JSON objects in JSON re...
Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2640#discussion_r183159695 --- Diff: nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/json/AbstractJsonRowRecordReader.java --- @@ -197,7 +193,8 @@ private JsonNode getNextJsonNode() throws JsonParseException, IOException, Malfo return jsonParser.readValueAsTree(); case END_ARRAY: case START_ARRAY: -return null; +return getNextJsonNode(); --- End diff -- Would recommend that we just use 'continue' here as we do for END_OBJECT, rather than recursively calling ourselves. Is consistent and avoids unnecessarily deepening the stack, but will result in the same logic being evaluated ---
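The 'continue'-instead-of-recursion pattern recommended above can be shown with a plain token loop. This is a standalone sketch, not the actual `AbstractJsonRowRecordReader` code: Jackson's `JsonParser` is replaced by a `String` iterator whose token names mirror `JsonToken`.

```java
import java.util.Arrays;
import java.util.Iterator;

// Sketch of skipping structural tokens with 'continue' in one loop, rather
// than the method recursively calling itself, so deeply nested or repeated
// array markers cannot deepen the call stack.
public class TokenSkipping {

    // Returns the next non-structural token, or null at end of input.
    static String nextValueToken(Iterator<String> tokens) {
        while (tokens.hasNext()) {
            String token = tokens.next();
            switch (token) {
                case "START_ARRAY":
                case "END_ARRAY":
                case "END_OBJECT":
                    continue; // structural marker: keep scanning the same loop
                default:
                    return token;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Iterator<String> it = Arrays.asList("START_ARRAY", "END_OBJECT", "START_OBJECT").iterator();
        System.out.println(nextValueToken(it)); // START_OBJECT
    }
}
```

Both forms evaluate the same logic; the loop form is simply consistent with the existing END_OBJECT handling and bounds stack depth.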
[jira] [Commented] (NIFI-4185) Add XML record reader & writer services
[ https://issues.apache.org/jira/browse/NIFI-4185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446291#comment-16446291 ] ASF GitHub Bot commented on NIFI-4185: -- Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2587 @JohannesDaniel thanks for the update! I commented above re: the use of Expression Language in the property descriptor. I do still feel like the check for 'record tag names' is unnecessary, as the reader should not be responsible for filtering the data but rather just for reading it. There already exist mechanisms for filtering the data (You could use PartitionRecord + RouteOnAttribute, ValidateRecord, or QueryRecord just off the top of my head to achieve this). Additionally, we have the Schema for the Record Reader. So if the element name matches the top-level Schema name (or one of them, if the top-level field is a UNION/CHOICE element), then we could use that. So, with your example above, if you only want to read the `` part, your schema should indicate that the top-level field name is `record`. In that case, it should filter out the `other` record. Does that make sense? > Add XML record reader & writer services > --- > > Key: NIFI-4185 > URL: https://issues.apache.org/jira/browse/NIFI-4185 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.3.0 >Reporter: Andy LoPresto >Assignee: Johannes Peter >Priority: Major > Labels: json, records, xml > > With the addition of the {{RecordReader}} and {{RecordSetWriter}} paradigm, > XML conversion has not yet been targeted. This will replace the previous > ticket for XML to JSON conversion. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5075) Funnels with no outgoing relationship error
[ https://issues.apache.org/jira/browse/NIFI-5075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446301#comment-16446301 ] ASF GitHub Bot commented on NIFI-5075: -- Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2634 @ijokarumawak that's great. Did you intend to push a new commit? The only commit that I see is from April 12th. > Funnels with no outgoing relationship error > --- > > Key: NIFI-5075 > URL: https://issues.apache.org/jira/browse/NIFI-5075 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.6.0 >Reporter: Peter Wicks >Assignee: Koji Kawamura >Priority: Major > > If a Funnel has no outgoing relationships it will throw an exception when it > tries to send FlowFile's to that non-existent relationship. > Replicate by creating a GenerateFlowFile processor to a Funnel, start the > GenerateFlowFile processor and check your log file. > > 2018-04-11 23:53:28,066 ERROR [Timer-Driven Process Thread-31] > o.apache.nifi.controller.StandardFunnel > StandardFunnel[id=b868231c-0162-1000-571c-ae3e7d15d848] > StandardFunnel[id=b868231c-0162-1000-571c-ae3e7d15d848] failed to process > session due to java.lang.RuntimeException: > java.lang.IllegalArgumentException: Relationship '' is not known; Processor > Administratively Yielded for 1 sec: java.lang.RuntimeException: > java.lang.IllegalArgumentException: Relationship '' is not known > java.lang.RuntimeException: java.lang.IllegalArgumentException: Relationship > '' is not known > at > org.apache.nifi.controller.StandardFunnel.onTrigger(StandardFunnel.java:365) > at > org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:175) > at > org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > at > 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.IllegalArgumentException: Relationship '' is not known > at > org.apache.nifi.controller.repository.StandardProcessSession.transfer(StandardProcessSession.java:1935) > at > org.apache.nifi.controller.StandardFunnel.onTrigger(StandardFunnel.java:379) > at > org.apache.nifi.controller.StandardFunnel.onTrigger(StandardFunnel.java:358) > ... 9 common frames omitted -- This message was sent by Atlassian JIRA (v7.6.3#76005)
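The fix PR #2634 aims at amounts to a guard: a funnel with no outgoing connection should not attempt the transfer that raises `IllegalArgumentException: Relationship '' is not known`. A dependency-free sketch of that decision, with NiFi's `Connection`/`ProcessSession` types replaced by plain Java and all names invented for illustration:

```java
import java.util.Collections;
import java.util.List;

// Sketch of the guard: only transfer when at least one outgoing connection
// exists; otherwise the funnel should yield and try again later.
public class FunnelGuard {

    // With zero outgoing connections, session.transfer(...) in the real code
    // would throw IllegalArgumentException("Relationship '' is not known").
    static boolean shouldTransfer(List<String> outgoingConnections) {
        return !outgoingConnections.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(shouldTransfer(Collections.emptyList()));        // false -> yield
        System.out.println(shouldTransfer(Collections.singletonList("c1"))); // true  -> transfer
    }
}
```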
[jira] [Created] (NIFI-5104) Create new processor PutFoundationDB
Mike Thomsen created NIFI-5104: -- Summary: Create new processor PutFoundationDB Key: NIFI-5104 URL: https://issues.apache.org/jira/browse/NIFI-5104 Project: Apache NiFi Issue Type: New Feature Reporter: Mike Thomsen Assignee: Mike Thomsen A processor capable of putting data transactionally into FoundationDB is needed. At a minimum, it should be able to parse key-value pairs from a file, using a configurable separator between pairs and a configurable separator between the key and value parts. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
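The configurable-separator parsing the ticket describes can be sketched in plain Java. The FoundationDB write itself is omitted here; with the FoundationDB Java binding it would roughly be a `db.run(tr -> { tr.set(key, value); ... })` transaction. All names below (`KeyValueParsing`, `parsePairs`) are illustrative, not from an actual PutFoundationDB implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

// Sketch: split content into pairs on one configurable separator, then split
// each pair into key and value on another.
public class KeyValueParsing {

    static Map<String, String> parsePairs(String content, String pairSeparator, String kvSeparator) {
        Map<String, String> pairs = new LinkedHashMap<>();
        for (String pair : content.split(Pattern.quote(pairSeparator))) {
            if (pair.isEmpty()) {
                continue; // tolerate trailing/empty separators
            }
            // Split on the first key/value separator only, so values may contain it.
            String[] kv = pair.split(Pattern.quote(kvSeparator), 2);
            if (kv.length != 2) {
                throw new IllegalArgumentException("Malformed pair: " + pair);
            }
            pairs.put(kv[0], kv[1]);
        }
        return pairs;
    }

    public static void main(String[] args) {
        System.out.println(parsePairs("a=1;b=2;", ";", "=")); // {a=1, b=2}
    }
}
```

`Pattern.quote` keeps user-supplied separators literal, so characters like `|` or `.` do not act as regex metacharacters.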
[jira] [Commented] (NIFI-4185) Add XML record reader & writer services
[ https://issues.apache.org/jira/browse/NIFI-4185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446275#comment-16446275 ] ASF GitHub Bot commented on NIFI-4185: -- Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2587#discussion_r183152900 --- Diff: nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/xml/XMLReader.java --- @@ -0,0 +1,133 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.nifi.xml; + +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.annotation.lifecycle.OnEnabled; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.controller.ConfigurationContext; +import org.apache.nifi.logging.ComponentLog; +import org.apache.nifi.processor.util.StandardValidators; +import org.apache.nifi.schema.access.SchemaNotFoundException; +import org.apache.nifi.serialization.DateTimeUtils; +import org.apache.nifi.serialization.MalformedRecordException; +import org.apache.nifi.serialization.RecordReader; +import org.apache.nifi.serialization.RecordReaderFactory; +import org.apache.nifi.serialization.SchemaRegistryService; +import org.apache.nifi.serialization.record.RecordSchema; + +import java.io.IOException; +import java.io.InputStream; +import java.util.ArrayList; +import java.util.List; +import java.util.Map; + +@Tags({"xml", "record", "reader", "parser"}) +@CapabilityDescription("Reads XML content and creates Record objects. Records are expected in the second level of " + --- End diff -- Yes, exactly. I would use a property to convey this. I would be okay with allowing Expression Language to be used, or just allowing for a 'true'/'false' without Expression Language (I think in most cases, you'll want one or the other, not dependent upon each individual FlowFile). But if you think EL is important then I won't argue that point :) One other option, which we do in a few different processors, would be to offer a third option that looks at a well-known attribute. 
So you could choose 'true' (treat outer element as a wrapper), 'false' (treat each flowfile as a single record), or 'use xml.stream attribute', and when that is selected, the 'xml.stream' attribute would be looked at to determine how to handle it - a value of 'true' would mean it's a stream of multiple records, 'false' would mean it's only 1 record, missing or any other value would throw an Exception. I don't have strong preference one way or another how this should be handled, but wanted to present options that we typically use. > Add XML record reader & writer services > --- > > Key: NIFI-4185 > URL: https://issues.apache.org/jira/browse/NIFI-4185 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.3.0 >Reporter: Andy LoPresto >Assignee: Johannes Peter >Priority: Major > Labels: json, records, xml > > With the addition of the {{RecordReader}} and {{RecordSetWriter}} paradigm, > XML conversion has not yet been targeted. This will replace the previous > ticket for XML to JSON conversion. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
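The three-option behavior proposed above can be sketched as a small resolution function: a 'true'/'false' property value is used directly, while the third option defers to the well-known `xml.stream` FlowFile attribute and throws when it is missing or invalid. This is a standalone illustration with invented names, not code from the XML reader PR.

```java
import java.util.Map;

// Sketch of resolving "is this flow file a stream of records?" from a
// three-valued property plus an optional FlowFile attribute.
public class MultiRecordMode {

    static final String USE_ATTRIBUTE = "use xml.stream attribute";

    static boolean isMultiRecord(String propertyValue, Map<String, String> flowFileAttributes) {
        if ("true".equalsIgnoreCase(propertyValue)) {
            return true;  // outer element is a wrapper around many records
        }
        if ("false".equalsIgnoreCase(propertyValue)) {
            return false; // the whole flow file is a single record
        }
        if (USE_ATTRIBUTE.equals(propertyValue)) {
            String attr = flowFileAttributes.get("xml.stream");
            if ("true".equals(attr)) {
                return true;
            }
            if ("false".equals(attr)) {
                return false;
            }
            throw new IllegalStateException("xml.stream attribute missing or invalid: " + attr);
        }
        throw new IllegalArgumentException("Unknown property value: " + propertyValue);
    }
}
```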
[jira] [Commented] (NIFI-5096) When Primary Node changes, occasionally both the new and old primary nodes continue running processors
[ https://issues.apache.org/jira/browse/NIFI-5096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446272#comment-16446272 ] ASF GitHub Bot commented on NIFI-5096: -- Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2646 @mcgilman I agree. I have pushed a new commit that does just that. > When Primary Node changes, occasionally both the new and old primary nodes > continue running processors > -- > > Key: NIFI-5096 > URL: https://issues.apache.org/jira/browse/NIFI-5096 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > > Occasionally we will see that Node A is Primary Node and then the Primary > Node switches to Node B, resulting in both Node A and Node B running > processors that are marked as Primary Node only. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5096) When Primary Node changes, occasionally both the new and old primary nodes continue running processors
[ https://issues.apache.org/jira/browse/NIFI-5096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446234#comment-16446234 ] ASF GitHub Bot commented on NIFI-5096: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2646 @markap14 I see. It appears then that the underlying issue is that either (1) the stateChange method is not being invoked or (2) the leader thread interruption is not happening/working. We could include your proposed changes and update our implementation of stateChanged to set `leader` to false when the `newState` is LOST or SUSPENDED before invoking super. Additionally, in `takeLeadership` we should loop while not stopped and is leader. This should help if the underlying issue was (2) while the polling could act as additional insurance. > When Primary Node changes, occasionally both the new and old primary nodes > continue running processors > -- > > Key: NIFI-5096 > URL: https://issues.apache.org/jira/browse/NIFI-5096 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > > Occasionally we will see that Node A is Primary Node and then the Primary > Node switches to Node B, resulting in both Node A and Node B running > processors that are marked as Primary Node only. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-4516) Add FetchSolr processor
[ https://issues.apache.org/jira/browse/NIFI-4516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Johannes Peter resolved NIFI-4516. -- Resolution: Fixed > Add FetchSolr processor > --- > > Key: NIFI-4516 > URL: https://issues.apache.org/jira/browse/NIFI-4516 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Johannes Peter >Assignee: Johannes Peter >Priority: Major > Labels: features > > The processor shall be capable > * to query Solr within a workflow, > * to make use of standard functionalities of Solr such as faceting, > highlighting, result grouping, etc., > * to make use of NiFis expression language to build Solr queries, > * to handle results as records. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5096) When Primary Node changes, occasionally both the new and old primary nodes continue running processors
[ https://issues.apache.org/jira/browse/NIFI-5096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446163#comment-16446163 ] ASF GitHub Bot commented on NIFI-5096: -- Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2646 @mcgilman we do indeed implement the ConnectionStateListener, but we do so only to log the fact and then call super.stateChanged(). When we call super.stateChanged(), that will throw CancelLeadershipException, which in turn is supposed to interrupt our listener. We followed the "Error Handling" guidance provided by Apache Curator: https://curator.apache.org/curator-recipes/leader-election.html So we are handling the SUSPENDED and LOST scenarios as is recommended. And this works 99% of the time. Unfortunately, we do occasionally see scenarios where it does not interrupt the thread and as such the node believes that it retains the lock. It's not clear, when this happens, if the thread just wasn't interrupted for some reason, or if the notification of SUSPENDED/LOST never was received, or what exactly is occurring that prevents our ElectionListener from being interrupted. That's why I went with the solution of periodically polling ZooKeeper, to check the state. That way, whatever the cause of the thread not being interrupted, we still will break out. If you think it makes sense, though, we can detect the LOST state specifically and have that trigger us to leave the election, in addition to polling? 
> When Primary Node changes, occasionally both the new and old primary nodes > continue running processors > -- > > Key: NIFI-5096 > URL: https://issues.apache.org/jira/browse/NIFI-5096 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > > Occasionally we will see that Node A is Primary Node and then the Primary > Node switches to Node B, resulting in both Node A and Node B running > processors that are marked as Primary Node only. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
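The polling safety net described in this thread can be sketched as a scheduled task that compares the leader ZooKeeper actually reports against the local node id, stepping down when they disagree. The ZooKeeper/Curator lookup is abstracted behind a `Supplier` and all names are illustrative; this is not NiFi's actual leader-election code.

```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Supplier;

// Sketch: even if the CancelLeadershipException-driven interrupt is missed,
// a periodic poll of the observed leader forces a stale node to step down.
public class LeadershipPoller {

    final String localNodeId;
    final AtomicBoolean believesLeader = new AtomicBoolean(false);

    LeadershipPoller(String localNodeId) {
        this.localNodeId = localNodeId;
    }

    // Pure decision: relinquish only when this node thinks it leads but the
    // coordination service says otherwise.
    static boolean shouldRelinquish(boolean believesLeader, String observedLeader, String localNodeId) {
        return believesLeader && !localNodeId.equals(observedLeader);
    }

    void startPolling(ScheduledExecutorService scheduler, Supplier<String> observedLeaderLookup) {
        scheduler.scheduleWithFixedDelay(() -> {
            if (shouldRelinquish(believesLeader.get(), observedLeaderLookup.get(), localNodeId)) {
                // Step down; real code would also stop Primary Node-only tasks.
                believesLeader.set(false);
            }
        }, 5, 5, TimeUnit.SECONDS);
    }
}
```

The state-change handling stays the primary mechanism; the poll acts only as the additional insurance discussed above.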
[GitHub] nifi pull request #2611: NIFI-5015: Implemented Azure Queue Storage processo...
Github user zenfenan commented on a diff in the pull request: https://github.com/apache/nifi/pull/2611#discussion_r183125399 --- Diff: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/queue/GetAzureQueueStorage.java --- @@ -0,0 +1,208 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.processors.azure.storage.queue; + +import com.microsoft.azure.storage.StorageException; +import com.microsoft.azure.storage.queue.CloudQueueMessage; +import org.apache.nifi.annotation.behavior.InputRequirement; +import org.apache.nifi.annotation.behavior.InputRequirement.Requirement; +import org.apache.nifi.annotation.behavior.WritesAttribute; +import org.apache.nifi.annotation.behavior.WritesAttributes; +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.SeeAlso; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.components.ValidationContext; +import org.apache.nifi.components.ValidationResult; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processor.util.StandardValidators; +import org.apache.nifi.processors.azure.storage.utils.AzureStorageUtils; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.List; +import java.util.Collections; +import java.util.Arrays; +import java.util.Set; +import java.util.Map; +import java.util.HashMap; +import java.util.concurrent.TimeUnit; + +@SeeAlso({PutAzureQueueStorage.class}) +@InputRequirement(Requirement.INPUT_FORBIDDEN) +@Tags({"azure", "queue", "microsoft", "storage", "dequeue", "cloud"}) +@CapabilityDescription("Retrieves the messages from an Azure Queue Storage. The retrieved messages will be deleted from the queue by default. 
If the requirement is " + +"to consume messages without deleting them, set 'Auto Delete Messages' to 'false'.") +@WritesAttributes({ +@WritesAttribute(attribute = "azure.queue.uri", description = "The absolute URI of the configured Azure Queue Storage"), +@WritesAttribute(attribute = "azure.queue.insertionTime", description = "The time when the message was inserted into the queue storage"), +@WritesAttribute(attribute = "azure.queue.expirationTime", description = "The time when the message will expire from the queue storage"), +@WritesAttribute(attribute = "azure.queue.messageId", description = "The ID of the retrieved message"), +@WritesAttribute(attribute = "azure.queue.popReceipt", description = "The pop receipt of the retrieved message"), +}) +public class GetAzureQueueStorage extends AbstractAzureQueueStorage { + +public static final PropertyDescriptor AUTO_DELETE = new PropertyDescriptor.Builder() +.name("auto-delete-messages") +.displayName("Auto Delete Messages") +.description("Specifies whether the received message is to be automatically deleted from the queue.") +.required(true) +.allowableValues("true", "false") +.defaultValue("true") +.addValidator(StandardValidators.BOOLEAN_VALIDATOR) +.build(); + +public static final PropertyDescriptor BATCH_SIZE = new PropertyDescriptor.Builder() +.name("batch-size") +.displayName("Batch Size") +.description("The number of messages to be retrieved from the queue.") +.required(true) +.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR) +.defaultValue("32") +.build(); + +public
[jira] [Commented] (NIFI-4035) Implement record-based Solr processors
[ https://issues.apache.org/jira/browse/NIFI-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446102#comment-16446102 ]

ASF GitHub Bot commented on NIFI-4035:
--

Github user abhinavrohatgi30 commented on the issue:
https://github.com/apache/nifi/pull/2561

I'm really sorry, it might take a while; I'm on vacation and away from my workstation. I'll keep you updated as soon as I am back.

> Implement record-based Solr processors
> --
>
> Key: NIFI-4035
> URL: https://issues.apache.org/jira/browse/NIFI-4035
> Project: Apache NiFi
> Issue Type: Improvement
> Affects Versions: 1.2.0, 1.3.0
> Reporter: Bryan Bende
> Priority: Minor
>
> Now that we have record readers and writers, we should implement variants of
> the existing Solr processors that are record-based...
> Processors to consider:
> * PutSolrRecord - uses a configured record reader to read an incoming flow
> file and insert records to Solr
> * GetSolrRecord - extracts records from Solr and uses a configured record
> writer to write them to a flow file

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5015) Develop Azure Queue Storage processors
[ https://issues.apache.org/jira/browse/NIFI-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446101#comment-16446101 ]

ASF GitHub Bot commented on NIFI-5015:
--

Github user zenfenan commented on a diff in the pull request:
https://github.com/apache/nifi/pull/2611#discussion_r183126762

--- Diff: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/queue/AbstractAzureQueueStorage.java ---
{code}
@@ -0,0 +1,130 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.azure.storage.queue;
+
+import com.microsoft.azure.storage.CloudStorageAccount;
+import com.microsoft.azure.storage.StorageCredentials;
+import com.microsoft.azure.storage.StorageCredentialsSharedAccessSignature;
+import com.microsoft.azure.storage.StorageException;
+import com.microsoft.azure.storage.queue.CloudQueue;
+import com.microsoft.azure.storage.queue.CloudQueueClient;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.azure.storage.utils.AzureStorageUtils;
+
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.security.InvalidKeyException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+public abstract class AbstractAzureQueueStorage extends AbstractProcessor {
+
+    public static final PropertyDescriptor QUEUE = new PropertyDescriptor.Builder()
+            .name("storage-cloudQueue-name")
+            .displayName("Queue Name")
+            .description("Name of the Azure Storage Queue")
+            .required(true)
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
{code}
--- End diff --

You were right. Removed `@OnScheduled`, and not for this reason alone: the `ACCOUNT_NAME` and `ACCOUNT_KEY` or `SAS_TOKEN` properties use `ExpressionLanguageScope.FLOWFILES`, so that could be a potential bug, since that method didn't receive a FlowFile as a parameter before. Now it has been changed.

> Develop Azure Queue Storage processors
> --
>
> Key: NIFI-5015
> URL: https://issues.apache.org/jira/browse/NIFI-5015
> Project: Apache NiFi
> Issue Type: New Feature
> Components: Extensions
> Reporter: Sivaprasanna Sethuraman
> Assignee: Sivaprasanna Sethuraman
> Priority: Minor
>
> Develop NiFi processors bundle for Azure Queue Storage

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
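As background for the scope discussion above: a FLOWFILE_ATTRIBUTES-scoped property is resolved against each FlowFile's attribute map at runtime, which is why it cannot be evaluated inside a callback that never receives a FlowFile. Below is a rough, self-contained sketch of that substitution step; it is NOT NiFi's actual Expression Language engine, and the `ElSketch` class, its `evaluate` method, and the simple `${attr}` regex are illustrative assumptions only.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical stand-in for FLOWFILE_ATTRIBUTES-scoped evaluation:
// ${attr} placeholders are resolved from a per-FlowFile attribute map,
// so evaluation is only meaningful where a FlowFile is in scope.
class ElSketch {
    private static final Pattern EL = Pattern.compile("\\$\\{([^}]+)}");

    public static String evaluate(String expression, Map<String, String> flowFileAttributes) {
        Matcher m = EL.matcher(expression);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            // Unknown attributes resolve to the empty string in this sketch.
            String value = flowFileAttributes.getOrDefault(m.group(1), "");
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

For example, evaluating `"queue-${env}"` against a FlowFile whose attributes contain `env=prod` yields `"queue-prod"`; the same call in a context with no FlowFile has no attribute map to draw from, which is the bug the comment above describes.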
[jira] [Created] (NIFI-5103) Create a Pragmatic Processor Development Guide
Otto Fowler created NIFI-5103:
-

Summary: Create a Pragmatic Processor Development Guide
Key: NIFI-5103
URL: https://issues.apache.org/jira/browse/NIFI-5103
Project: Apache NiFi
Issue Type: New Feature
Reporter: Otto Fowler

It would be useful to have and maintain a guide on best practices for developing processors.

* Design
* Working with 'like' processors
* Record v. Non-Record (NIFI-5058)
* How to use abstract bases
* How to use properties with abstract bases
* EL support
* Logging
* Exception Handling
* Validation
* Using things from other Bundles/NARs (NAR dep v. JAR dep)
* Handling version conflicts

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5015) Develop Azure Queue Storage processors
[ https://issues.apache.org/jira/browse/NIFI-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446096#comment-16446096 ]

ASF GitHub Bot commented on NIFI-5015:
--

Github user zenfenan commented on a diff in the pull request:
https://github.com/apache/nifi/pull/2611#discussion_r183125717

--- Diff: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/queue/GetAzureQueueStorage.java ---
{code}
@@ -0,0 +1,208 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.azure.storage.queue;
+
+import com.microsoft.azure.storage.StorageException;
+import com.microsoft.azure.storage.queue.CloudQueueMessage;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.azure.storage.utils.AzureStorageUtils;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.Collections;
+import java.util.Arrays;
+import java.util.Set;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.concurrent.TimeUnit;
+
+@SeeAlso({PutAzureQueueStorage.class})
+@InputRequirement(Requirement.INPUT_FORBIDDEN)
+@Tags({"azure", "queue", "microsoft", "storage", "dequeue", "cloud"})
+@CapabilityDescription("Retrieves the messages from an Azure Queue Storage. The retrieved messages will be deleted from the queue by default. If the requirement is "
+        + "to consume messages without deleting them, set 'Auto Delete Messages' to 'false'.")
+@WritesAttributes({
+        @WritesAttribute(attribute = "azure.queue.uri", description = "The absolute URI of the configured Azure Queue Storage"),
+        @WritesAttribute(attribute = "azure.queue.insertionTime", description = "The time when the message was inserted into the queue storage"),
+        @WritesAttribute(attribute = "azure.queue.expirationTime", description = "The time when the message will expire from the queue storage"),
+        @WritesAttribute(attribute = "azure.queue.messageId", description = "The ID of the retrieved message"),
+        @WritesAttribute(attribute = "azure.queue.popReceipt", description = "The pop receipt of the retrieved message"),
+})
+public class GetAzureQueueStorage extends AbstractAzureQueueStorage {
+
+    public static final PropertyDescriptor AUTO_DELETE = new PropertyDescriptor.Builder()
+            .name("auto-delete-messages")
+            .displayName("Auto Delete Messages")
+            .description("Specifies whether the received message is to be automatically deleted from the queue.")
+            .required(true)
+            .allowableValues("true", "false")
+            .defaultValue("true")
+            .addValidator(StandardValidators.BOOLEAN_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor BATCH_SIZE = new PropertyDescriptor.Builder()
+            .name("batch-size")
+            .displayName("Batch Size")
+            .description("The number of messages to be retrieved from the queue.")
+            .required(true)
+            .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+            .defaultValue("32")
+            .build();
+
+    public
{code}
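One operational detail worth noting about the Batch Size property in the diff above: Azure Queue Storage serves at most 32 messages per retrieve request, which is presumably why the default is 32, so a processor may want to clamp larger configured values rather than fail at the service. A small self-contained sketch of such a clamp follows; the 32-message limit is an assumption about the Azure service (not taken from this PR), and the `BatchSizeClamp` class and `clamp` method are hypothetical names.

```java
// Sketch only: clamp a user-configured batch size to Azure Queue Storage's
// per-request maximum of 32 messages. The limit is an assumption about the
// service; the class/method names here are illustrative, not from the PR.
class BatchSizeClamp {
    static final int AZURE_QUEUE_MAX_BATCH = 32;

    public static int clamp(int requested) {
        if (requested < 1) {
            // Mirrors the POSITIVE_INTEGER_VALIDATOR requirement on the property.
            throw new IllegalArgumentException("Batch size must be positive: " + requested);
        }
        return Math.min(requested, AZURE_QUEUE_MAX_BATCH);
    }
}
```

Whether to clamp silently or reject out-of-range values at validation time is a design choice; clamping keeps flows running, while a validator surfaces the misconfiguration to the user.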
[jira] [Commented] (NIFI-5015) Develop Azure Queue Storage processors
[ https://issues.apache.org/jira/browse/NIFI-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446095#comment-16446095 ]

ASF GitHub Bot commented on NIFI-5015:
--

Github user zenfenan commented on a diff in the pull request:
https://github.com/apache/nifi/pull/2611#discussion_r183125399

--- Diff: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/queue/GetAzureQueueStorage.java ---
(quotes the same GetAzureQueueStorage.java diff shown above; the remainder of the message was truncated)
[jira] [Commented] (NIFI-5015) Develop Azure Queue Storage processors
[ https://issues.apache.org/jira/browse/NIFI-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446093#comment-16446093 ]

ASF GitHub Bot commented on NIFI-5015:
--

Github user zenfenan commented on a diff in the pull request:
https://github.com/apache/nifi/pull/2611#discussion_r183125143

--- Diff: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/queue/GetAzureQueueStorage.java ---
(quotes the same GetAzureQueueStorage.java diff shown above; the remainder of the message was truncated)
[jira] [Commented] (MINIFICPP-463) Implement escape/unescape URL EL functions
[ https://issues.apache.org/jira/browse/MINIFICPP-463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446037#comment-16446037 ] ASF GitHub Bot commented on MINIFICPP-463: -- GitHub user achristianson opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/305 MINIFICPP-463 Implemented urlEncode/urlDecode EL functions Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with MINIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [x] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [x] If applicable, have you updated the LICENSE file? - [x] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [x] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-463 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/305.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #305 commit 90f89079c807127cee19bc595d375d09cd482aba Author: Andrew I. Christianson Date: 2018-04-20T17:06:22Z MINIFICPP-463 Implemented urlEncode/urlDecode EL functions > Implement escape/unescape URL EL functions > -- > > Key: MINIFICPP-463 > URL: https://issues.apache.org/jira/browse/MINIFICPP-463 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Andrew Christianson >Assignee: Andrew Christianson >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5102) MarkLogic DB Processors
Anthony Roach created NIFI-5102: --- Summary: MarkLogic DB Processors Key: NIFI-5102 URL: https://issues.apache.org/jira/browse/NIFI-5102 Project: Apache NiFi Issue Type: New Feature Components: Core Framework Affects Versions: 1.6.0 Reporter: Anthony Roach Fix For: 1.7.0 As a data architect, I need to ingest data from my NiFi FlowFile into MarkLogic database documents. I have created the following two processors: * PutMarkLogic: Ingest Flowfile into MarkLogic database documents * QueryMarkLogic: Retrieve result set from MarkLogic into FlowFile I will create a pull request. [www.marklogic.com|http://www.marklogic.com/] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (MINIFICPP-463) Implement escape/unescape URL EL functions
Andrew Christianson created MINIFICPP-463: - Summary: Implement escape/unescape URL EL functions Key: MINIFICPP-463 URL: https://issues.apache.org/jira/browse/MINIFICPP-463 Project: NiFi MiNiFi C++ Issue Type: Improvement Reporter: Andrew Christianson Assignee: Andrew Christianson -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (MINIFICPP-423) Implement encode/decode EL functions
[ https://issues.apache.org/jira/browse/MINIFICPP-423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Christianson updated MINIFICPP-423: -- Description: [Encode/Decode Functions|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#encode] * -[escapeJson|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapejson]- * -[escapeXml|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapexml]- * -[escapeCsv|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapecsv]- * -[escapeHtml3|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapehtml3]- * -[escapeHtml4|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapehtml4]- * -[unescapeJson|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapejson]- * -[unescapeXml|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapexml]- * -[unescapeCsv|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapecsv]- * -[unescapeHtml3|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapehtml3]- * -[unescapeHtml4|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapehtml4]- * [urlEncode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#urlencode] * [urlDecode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#urldecode] * [base64Encode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#base64encode] * [base64Decode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#base64decode] was: [Encode/Decode Functions|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#encode] * [escapeJson|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapejson] * 
[escapeXml|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapexml] * [escapeCsv|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapecsv] * [escapeHtml3|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapehtml3] * [escapeHtml4|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapehtml4] * [unescapeJson|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapejson] * [unescapeXml|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapexml] * [unescapeCsv|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapecsv] * [unescapeHtml3|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapehtml3] * [unescapeHtml4|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapehtml4] * [urlEncode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#urlencode] * [urlDecode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#urldecode] * [base64Encode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#base64encode] * [base64Decode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#base64decode] > Implement encode/decode EL functions > > > Key: MINIFICPP-423 > URL: https://issues.apache.org/jira/browse/MINIFICPP-423 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Andrew Christianson >Assignee: Andrew Christianson >Priority: Major > > [Encode/Decode > Functions|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#encode] > * > -[escapeJson|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapejson]- > * > -[escapeXml|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapexml]- > * > 
-[escapeCsv|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapecsv]- > * > -[escapeHtml3|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapehtml3]- > * > -[escapeHtml4|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapehtml4]- > * > -[unescapeJson|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapejson]- > * > -[unescapeXml|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapexml]- > * > -[unescapeCsv|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapecsv]- > * > -[unescapeHtml3|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapehtml3]- > * >
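The remaining urlEncode/urlDecode pair above maps onto standard URL percent-encoding. A minimal Python sketch of the expected round-trip behavior, using urllib.parse as a stand-in for the NiFi EL implementation (the EL functions' exact escaping rules may differ in corner cases):

```python
from urllib.parse import quote, unquote

# Percent-encode a value roughly as urlEncode would; safe="" escapes
# everything outside the unreserved set, including '/'.
encoded = quote("some value with spaces", safe="")

# urlDecode reverses the encoding.
decoded = unquote(encoded)
```

A round trip should always recover the original string, which is the invariant any urlEncode/urlDecode implementation needs to preserve.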
[jira] [Commented] (NIFI-5078) Add source/destination info in connection status for S2S Status RT
[ https://issues.apache.org/jira/browse/NIFI-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445970#comment-16445970 ] ASF GitHub Bot commented on NIFI-5078: -- Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2632 +1 LGTM, ran build and unit tests (with contrib-check), also tested with a live NiFi instance and verified the new fields are there and populated correctly. Thanks for the improvement! Merging to master > Add source/destination info in connection status for S2S Status RT > -- > > Key: NIFI-5078 > URL: https://issues.apache.org/jira/browse/NIFI-5078 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > > It'd be useful to add the information about source and destination for the > connection status retrieved through the S2S Status Reporting Task. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5078) Add source/destination info in connection status for S2S Status RT
[ https://issues.apache.org/jira/browse/NIFI-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445973#comment-16445973 ] ASF GitHub Bot commented on NIFI-5078: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2632 > Add source/destination info in connection status for S2S Status RT > -- > > Key: NIFI-5078 > URL: https://issues.apache.org/jira/browse/NIFI-5078 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > > It'd be useful to add the information about source and destination for the > connection status retrieved through the S2S Status Reporting Task. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (MINIFICPP-462) Docker verify integration tests are failing
Aldrin Piri created MINIFICPP-462: - Summary: Docker verify integration tests are failing Key: MINIFICPP-462 URL: https://issues.apache.org/jira/browse/MINIFICPP-462 Project: NiFi MiNiFi C++ Issue Type: Task Affects Versions: 0.5.0 Reporter: Aldrin Piri Looks like there may be some incompatibilities in python packages with the version of Docker being run. Tests fail with the following: {quote}../docker/test/integration/minifi/test/__init__.py:159: in __exit__ super(DockerTestCluster, self).__exit__(exc_type, exc_val, exc_tb) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = , exc_type = , exc_val = TypeError(" is not JSON serializable",), exc_tb = def __exit__(self, exc_type, exc_val, exc_tb): """ Clean up ephemeral cluster resources """ # Clean up containers for container in self.containers: logging.info('Cleaning up container: %s', container.name) container.remove(v=True, force=True) # Clean up images for image in self.images: > logging.info('Cleaning up image: %s', image.id) E AttributeError: 'tuple' object has no attribute 'id' ../docker/test/integration/minifi/__init__.py:255: AttributeError{quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
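The traceback above suggests `self.images` holds tuples rather than image objects: newer docker-py releases return an `(image, build_logs)` tuple from `images.build()`, so `image.id` fails on the tuple. A hedged sketch of a cleanup loop tolerant of both shapes — the class and variable names here are illustrative, not taken from the MiNiFi test code:

```python
class FakeImage:
    """Stand-in for a docker-py Image object with an .id attribute."""
    def __init__(self, image_id):
        self.id = image_id

def iter_images(images):
    """Yield image objects whether entries are images or (image, logs) tuples."""
    for entry in images:
        yield entry[0] if isinstance(entry, tuple) else entry

# Mixed list mimicking what images.build() may have stored across versions.
images = [FakeImage("sha256:aaa"), (FakeImage("sha256:bbb"), iter([]))]
ids = [img.id for img in iter_images(images)]
```

Unpacking at cleanup time (or at the point the build result is stored) would let the same test code run against both old and new docker-py versions.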
[jira] [Updated] (NIFI-5078) Add source/destination info in connection status for S2S Status RT
[ https://issues.apache.org/jira/browse/NIFI-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-5078: --- Fix Version/s: 1.7.0 > Add source/destination info in connection status for S2S Status RT > -- > > Key: NIFI-5078 > URL: https://issues.apache.org/jira/browse/NIFI-5078 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > Fix For: 1.7.0 > > > It'd be useful to add the information about source and destination for the > connection status retrieved through the S2S Status Reporting Task. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5078) Add source/destination info in connection status for S2S Status RT
[ https://issues.apache.org/jira/browse/NIFI-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-5078: --- Resolution: Fixed Status: Resolved (was: Patch Available) > Add source/destination info in connection status for S2S Status RT > -- > > Key: NIFI-5078 > URL: https://issues.apache.org/jira/browse/NIFI-5078 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > Fix For: 1.7.0 > > > It'd be useful to add the information about source and destination for the > connection status retrieved through the S2S Status Reporting Task. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5078) Add source/destination info in connection status for S2S Status RT
[ https://issues.apache.org/jira/browse/NIFI-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445971#comment-16445971 ] ASF subversion and git services commented on NIFI-5078: --- Commit 262bf011e495a2e94c3975b91ba7c24d822e0a86 in nifi's branch refs/heads/master from [~pvillard] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=262bf01 ] NIFI-5078 - added source/destination connection info in S2SStatusRT Signed-off-by: Matthew Burgess. This closes #2632 > Add source/destination info in connection status for S2S Status RT > -- > > Key: NIFI-5078 > URL: https://issues.apache.org/jira/browse/NIFI-5078 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > > It'd be useful to add the information about source and destination for the > connection status retrieved through the S2S Status Reporting Task. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5101) Add SiteToSiteReporting Task schemas to documentation
Matt Burgess created NIFI-5101: -- Summary: Add SiteToSiteReporting Task schemas to documentation Key: NIFI-5101 URL: https://issues.apache.org/jira/browse/NIFI-5101 Project: Apache NiFi Issue Type: Improvement Components: Documentation Website, Extensions Reporter: Matt Burgess This case builds off the wonderful example from NIFI-4809 of adding the known schema to the documentation for SiteToSiteReportingTasks (see [here|https://github.com/apache/nifi/pull/2575/files#diff-0aa7529c90fd4301493f47d103121156] for example). Having the schema readily available in the documentation will make it easier for the user to work with these reporting tasks and the record-based processors (the latter of which are more efficient than other components). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4035) Implement record-based Solr processors
[ https://issues.apache.org/jira/browse/NIFI-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445909#comment-16445909 ] ASF GitHub Bot commented on NIFI-4035: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2561 @abhinavrohatgi30 You have a merge conflict in this branch. If you resolve it, I'll help @bbende finish the review. > Implement record-based Solr processors > -- > > Key: NIFI-4035 > URL: https://issues.apache.org/jira/browse/NIFI-4035 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.2.0, 1.3.0 >Reporter: Bryan Bende >Priority: Minor > > Now that we have record readers and writers, we should implement variants of > the existing Solr processors that are record-based... > Processors to consider: > * PutSolrRecord - uses a configured record reader to read an incoming flow > file and insert records to Solr > * GetSolrRecord - extracts records from Solr and uses a configured record > writer to write them to a flow file -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services
[ https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445906#comment-16445906 ] ASF GitHub Bot commented on NIFI-4637: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2518 @ijokarumawak changes pushed. Ready for review. > Add support for HBase visibility labels to HBase processors and controller > services > --- > > Key: NIFI-4637 > URL: https://issues.apache.org/jira/browse/NIFI-4637 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Mike Thomsen >Assignee: Mike Thomsen >Priority: Major > > HBase supports visibility labels, but you can't use them from NiFi because > there is no way to set them. The existing processors and services should be > upgraded to handle this capability. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5095) PutHiveQL should not log warning message when it fails to parse SET property command
[ https://issues.apache.org/jira/browse/NIFI-5095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-5095: --- Resolution: Fixed Status: Resolved (was: Patch Available) > PutHiveQL should not log warning message when it fails to parse SET property > command > > > Key: NIFI-5095 > URL: https://issues.apache.org/jira/browse/NIFI-5095 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Minor > Fix For: 1.7.0 > > > PutHiveQL can accept multiple queries separated by a specified delimiter > string, ';' by default. It allows users to specify Hive parameters via a 'SET' > statement. E.g. set 'hive.exec.dynamic.partition.mode'=nonstrict > PutHiveQL also parses each query string with Hive ParseDriver, in order to > find input/output table names within queries. However, the aforementioned > 'SET' command is not a valid Hive query. The only query that can start with 'SET' > is 'SET ROLE'. > [https://raw.githubusercontent.com/apache/hive/master/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g] > When a set property statement is parsed, the following warning message is logged > and shown in the NiFi UI: > {code:java} > 2018-04-19 05:34:05,616 WARN [Timer-Driven Process Thread-8] > o.apache.nifi.processors.hive.PutHiveQL > PutHiveQL[id=db408703-0162-1000--73ad3455] Failed to parse hiveQL: > set hive.exec.dynamic.partition.mode=nonstrict due to > org.apache.hadoop.hive.ql.parse.ParseException: line 1:4 missing KW_ROLE at > 'hive' near 'hive' line 1:8 missing EOF at '.' near 'hive': > {code} > In case there are other DML statements such as 'INSERT ...' in the same > FlowFile content, those queries are performed successfully regardless of > the above parse failure. However, the warning message is misleading; it > looks as if queries have failed. We should not show such a warning message for > set property commands. 
> We can short-circuit query parse logic if the statement starts with 'set', since > 'set role' does not have any target table. As a reference, Hive's HCatCli.java > has similar filtering logic. > > [https://github.com/apache/hive/blob/master/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/HCatCli.java#L283] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
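The proposed short-circuit — skip table-name parsing for 'SET' statements unless they are 'SET ROLE' — can be sketched compactly. The actual fix is in the Java Hive processors; this Python version, with a hypothetical function name, only illustrates the decision logic:

```python
def should_parse_for_tables(statement: str) -> bool:
    """Return False for 'SET <property>=<value>' statements, which are not
    parseable Hive queries; 'SET ROLE ...' is the one SET form that is."""
    tokens = statement.strip().lower().split()
    if not tokens or tokens[0] != "set":
        return True  # not a SET statement; parse it normally
    # Only 'set role ...' is a real Hive query among SET statements.
    return len(tokens) > 1 and tokens[1].startswith("role")
```

With this guard, `set hive.exec.dynamic.partition.mode=nonstrict` bypasses ParseDriver entirely, so no spurious warning reaches the UI, while ordinary DML and 'SET ROLE' statements are still parsed.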
[jira] [Commented] (NIFI-5095) PutHiveQL should not log warning message when it fails to parse SET property command
[ https://issues.apache.org/jira/browse/NIFI-5095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445892#comment-16445892 ] ASF GitHub Bot commented on NIFI-5095: -- Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2645 +1 LGTM, ran build and unit tests. Thanks for this improvement! Merging to master > PutHiveQL should not log warning message when it fails to parse SET property > command > > > Key: NIFI-5095 > URL: https://issues.apache.org/jira/browse/NIFI-5095 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Minor > Fix For: 1.7.0 > > > PutHiveQL can accept multiple queries separated by a specified delimiter > string, ';' by default. It supports users to specify Hive parameters by 'SET' > statement. E.g. set 'hive.exec.dynamic.partition.mode'=nonstrict > PutHiveQL also parses each query string with Hive ParseDriver, in order to > find input/output table names within queries. However, the aforementioned > 'SET' command is not a valid Hive query. The only query can start with 'SET' > is 'SET ROLE'. > [https://raw.githubusercontent.com/apache/hive/master/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g] > When set property statement is parsed, following warning message is logged > and shown in the NiFi UI: > {code:java} > 2018-04-19 05:34:05,616 WARN [Timer-Driven Process Thread-8] > o.apache.nifi.processors.hive.PutHiveQL > PutHiveQL[id=db408703-0162-1000--73ad3455] Failed to parse hiveQL: > set hive.exec.dynamic.partition.mode=nonstrict due to > org.apache.hadoop.hive.ql.parse.ParseException: line 1:4 missing KW_ROLE at > 'hive' near 'hive' line 1:8 missing EOF at '.' near 'hive': > {code} > In case there are other DML statements such as 'INSERT ...' in the same > FlowFile content, those queries are performed successfully regardless of > having above parse failure. 
However, the warning message is mis-leading, it > looks as if queries have failed. We should not show such warning message for > set property commands. > We can short-circuit query parse logic if statement starts with 'set', since > 'set role' does not have any target table. As a reference Hive HCatCli.java > has the similar filtering logic. > > [https://github.com/apache/hive/blob/master/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/HCatCli.java#L283] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5095) PutHiveQL should not log warning message when it fails to parse SET property command
[ https://issues.apache.org/jira/browse/NIFI-5095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445893#comment-16445893 ] ASF subversion and git services commented on NIFI-5095: --- Commit c575a98936087dcd360ed7715a9e2341e46c7f0d in nifi's branch refs/heads/master from [~ijokarumawak] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=c575a98 ] NIFI-5095: Suppress SET property parse failure at Hive processors Log debug message when ParseException is thrown. Log warning message if other unknown Exception is thrown. Signed-off-by: Matthew Burgess. This closes #2645 > PutHiveQL should not log warning message when it fails to parse SET property > command > > > Key: NIFI-5095 > URL: https://issues.apache.org/jira/browse/NIFI-5095 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Minor > Fix For: 1.7.0 > > > PutHiveQL can accept multiple queries separated by a specified delimiter > string, ';' by default. It supports users to specify Hive parameters by 'SET' > statement. E.g. set 'hive.exec.dynamic.partition.mode'=nonstrict > PutHiveQL also parses each query string with Hive ParseDriver, in order to > find input/output table names within queries. However, the aforementioned > 'SET' command is not a valid Hive query. The only query can start with 'SET' > is 'SET ROLE'. 
> [https://raw.githubusercontent.com/apache/hive/master/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g] > When set property statement is parsed, following warning message is logged > and shown in the NiFi UI: > {code:java} > 2018-04-19 05:34:05,616 WARN [Timer-Driven Process Thread-8] > o.apache.nifi.processors.hive.PutHiveQL > PutHiveQL[id=db408703-0162-1000--73ad3455] Failed to parse hiveQL: > set hive.exec.dynamic.partition.mode=nonstrict due to > org.apache.hadoop.hive.ql.parse.ParseException: line 1:4 missing KW_ROLE at > 'hive' near 'hive' line 1:8 missing EOF at '.' near 'hive': > {code} > In case there are other DML statements such as 'INSERT ...' in the same > FlowFile content, those queries are performed successfully regardless of > having above parse failure. However, the warning message is mis-leading, it > looks as if queries have failed. We should not show such warning message for > set property commands. > We can short-circuit query parse logic if statement starts with 'set', since > 'set role' does not have any target table. As a reference Hive HCatCli.java > has the similar filtering logic. > > [https://github.com/apache/hive/blob/master/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/HCatCli.java#L283] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5095) PutHiveQL should not log warning message when it fails to parse SET property command
[ https://issues.apache.org/jira/browse/NIFI-5095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-5095: --- Fix Version/s: 1.7.0 > PutHiveQL should not log warning message when it fails to parse SET property > command > > > Key: NIFI-5095 > URL: https://issues.apache.org/jira/browse/NIFI-5095 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Minor > Fix For: 1.7.0 > > > PutHiveQL can accept multiple queries separated by a specified delimiter > string, ';' by default. It supports users to specify Hive parameters by 'SET' > statement. E.g. set 'hive.exec.dynamic.partition.mode'=nonstrict > PutHiveQL also parses each query string with Hive ParseDriver, in order to > find input/output table names within queries. However, the aforementioned > 'SET' command is not a valid Hive query. The only query can start with 'SET' > is 'SET ROLE'. > [https://raw.githubusercontent.com/apache/hive/master/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g] > When set property statement is parsed, following warning message is logged > and shown in the NiFi UI: > {code:java} > 2018-04-19 05:34:05,616 WARN [Timer-Driven Process Thread-8] > o.apache.nifi.processors.hive.PutHiveQL > PutHiveQL[id=db408703-0162-1000--73ad3455] Failed to parse hiveQL: > set hive.exec.dynamic.partition.mode=nonstrict due to > org.apache.hadoop.hive.ql.parse.ParseException: line 1:4 missing KW_ROLE at > 'hive' near 'hive' line 1:8 missing EOF at '.' near 'hive': > {code} > In case there are other DML statements such as 'INSERT ...' in the same > FlowFile content, those queries are performed successfully regardless of > having above parse failure. However, the warning message is mis-leading, it > looks as if queries have failed. We should not show such warning message for > set property commands. 
> We can short-circuit query parse logic if statement starts with 'set', since > 'set role' does not have any target table. As a reference Hive HCatCli.java > has the similar filtering logic. > > [https://github.com/apache/hive/blob/master/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/HCatCli.java#L283] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5095) PutHiveQL should not log warning message when it fails to parse SET property command
[ https://issues.apache.org/jira/browse/NIFI-5095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445894#comment-16445894 ] ASF GitHub Bot commented on NIFI-5095: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2645 > PutHiveQL should not log warning message when it fails to parse SET property > command > > > Key: NIFI-5095 > URL: https://issues.apache.org/jira/browse/NIFI-5095 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Minor > Fix For: 1.7.0 > > > PutHiveQL can accept multiple queries separated by a specified delimiter > string, ';' by default. It supports users to specify Hive parameters by 'SET' > statement. E.g. set 'hive.exec.dynamic.partition.mode'=nonstrict > PutHiveQL also parses each query string with Hive ParseDriver, in order to > find input/output table names within queries. However, the aforementioned > 'SET' command is not a valid Hive query. The only query can start with 'SET' > is 'SET ROLE'. > [https://raw.githubusercontent.com/apache/hive/master/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g] > When set property statement is parsed, following warning message is logged > and shown in the NiFi UI: > {code:java} > 2018-04-19 05:34:05,616 WARN [Timer-Driven Process Thread-8] > o.apache.nifi.processors.hive.PutHiveQL > PutHiveQL[id=db408703-0162-1000--73ad3455] Failed to parse hiveQL: > set hive.exec.dynamic.partition.mode=nonstrict due to > org.apache.hadoop.hive.ql.parse.ParseException: line 1:4 missing KW_ROLE at > 'hive' near 'hive' line 1:8 missing EOF at '.' near 'hive': > {code} > In case there are other DML statements such as 'INSERT ...' in the same > FlowFile content, those queries are performed successfully regardless of > having above parse failure. However, the warning message is mis-leading, it > looks as if queries have failed. 
We should not show such warning message for > set property commands. > We can short-circuit query parse logic if statement starts with 'set', since > 'set role' does not have any target table. As a reference Hive HCatCli.java > has the similar filtering logic. > > [https://github.com/apache/hive/blob/master/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/HCatCli.java#L283] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2645: NIFI-5095: Suppress SET property parse failure at Hive pro...
Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2645 +1 LGTM, ran build and unit tests. Thanks for this improvement! Merging to master ---
[GitHub] nifi-minifi-cpp issue #304: Providing fixes for sourcing RHEL based distribu...
Github user apiri commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/304 Updated the script to determine the directory in which it resides and then use that to generate a full path for each environment file sourced. Verification on OS X shows that behavior remains constant, and this additionally resolves the issues that were present in CentOS & Fedora. ---
[jira] [Commented] (MINIFICPP-459) Include flex lexer headers so that generated code can be used without lex being installed
[ https://issues.apache.org/jira/browse/MINIFICPP-459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445829#comment-16445829 ] ASF GitHub Bot commented on MINIFICPP-459: -- Github user achristianson commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/301 Looks like Mac OS build has an issue. Will look into it. > Include flex lexer headers so that generated code can be used without lex > being installed > - > > Key: MINIFICPP-459 > URL: https://issues.apache.org/jira/browse/MINIFICPP-459 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Andrew Christianson >Assignee: Andrew Christianson >Priority: Major > > Sources generated by bison/flex for expression language depend on lex > headers. Inclusion of FlexLexer will allow the generated source to be > compiled in environments where lex is not available. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp issue #301: MINIFICPP-459 Include FLexLexer.h in thirdparty ...
Github user achristianson commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/301 Looks like Mac OS build has an issue. Will look into it. ---
[GitHub] nifi-minifi-cpp pull request #304: Providing fixes for sourcing RHEL based d...
Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/304 ---
[GitHub] nifi-minifi-cpp issue #304: Providing fixes for sourcing RHEL based distribu...
Github user phrocker commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/304 @apiri looks good. will merge. ---
[jira] [Resolved] (MINIFICPP-443) Support GET requests in ListenHTTP and allow response body to be configured
[ https://issues.apache.org/jira/browse/MINIFICPP-443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Christianson resolved MINIFICPP-443. --- Resolution: Fixed > Support GET requests in ListenHTTP and allow response body to be configured > --- > > Key: MINIFICPP-443 > URL: https://issues.apache.org/jira/browse/MINIFICPP-443 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Andrew Christianson >Assignee: Andrew Christianson >Priority: Major > > Currently GET requests are not supported with ListenHTTP. This limits > functionality for no real reason. ListenHTTP should support GET requests > where output FlowFiles have no content, but do have headers and query string > information available. > Additionally, the response body written by ListenHTTP is always empty. This > should be configurable, preferably dynamically by using incoming FlowFiles > with a type/purpose attribute. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp issue #304: Providing fixes for sourcing RHEL based distribu...
Github user phrocker commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/304 These changes look good at face value. I'm going to try it across a few platforms and await travis completion before merging. Thanks! ---
[GitHub] nifi-minifi-cpp issue #304: Providing fixes for sourcing RHEL based distribu...
Github user phrocker commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/304 @apiri what was the problem you were experiencing and on what distro specifically? I don't see an issue on the clean RHEL distros I have running now. ---
[jira] [Created] (MINIFICPP-461) Bootstrap can't source env files for RHEL variants
Aldrin Piri created MINIFICPP-461: - Summary: Bootstrap can't source env files for RHEL variants Key: MINIFICPP-461 URL: https://issues.apache.org/jira/browse/MINIFICPP-461 Project: NiFi MiNiFi C++ Issue Type: Task Reporter: Aldrin Piri Assignee: Aldrin Piri CentOS, Fedora, and (I'm assuming) RHEL are unable to source their respective distribution file in the bootstrap. {{./bootstrap.sh: line 173: source: fedora.sh: file not found}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
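The failure above comes from sourcing the distribution file by a bare relative name, which only works when bootstrap.sh is invoked from its own directory. A common bash idiom for the fix — sketched here under the assumption that the distro files sit next to the script; this is not the literal bootstrap.sh code — is to resolve the script's own directory first:

```shell
#!/usr/bin/env bash
# Sketch of the fix: compute the directory this script lives in, then
# source distribution files by absolute path so sourcing works no
# matter which directory the script is invoked from.
script_directory="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# Instead of `source fedora.sh` (which fails when the script is run
# from elsewhere, e.g. `./build/../bootstrap.sh`):
# source "${script_directory}/fedora.sh"
echo "${script_directory}"
```

The `cd ... && pwd` pair yields an absolute path even when the script was invoked through a relative path or a symlinked working directory.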
[GitHub] nifi-minifi-cpp pull request #304: Providing fixes for sourcing RHEL based d...
GitHub user apiri reopened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/304 Providing fixes for sourcing RHEL based distributions in the bootstrap process Providing fixes for sourcing RHEL based distributions in the bootstrap process Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [X] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [X] Has your PR been rebased against the latest commit within the target branch (typically master)? - [X] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/apiri/nifi-minifi-cpp MINIFICPP-461 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/304.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #304 commit 0a792843218e14fa9476072b4b1dd9d3ee81df2a Author: Aldrin Piri Date: 2018-04-20T13:34:12Z MINIFICPP-461 Providing fixes for sourcing env files in RHEL based distributions during bootstrap process ---
[jira] [Resolved] (NIFI-5062) Remove hbase-client from nifi-hbase-bundle pom
[ https://issues.apache.org/jira/browse/NIFI-5062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sivaprasanna Sethuraman resolved NIFI-5062. --- Resolution: Fixed Fix Version/s: 1.7.0 > Remove hbase-client from nifi-hbase-bundle pom > -- > > Key: NIFI-5062 > URL: https://issues.apache.org/jira/browse/NIFI-5062 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.6.0 >Reporter: Bryan Bende >Assignee: Sivaprasanna Sethuraman >Priority: Minor > Fix For: 1.7.0 > > > Since the hbase-client dependency should be coming from the client service > implementation, we shouldn't need to specify it here: > [https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-hbase-bundle/pom.xml#L43] > We should also make hbase.version a property that can be overridden at build > time, rather than hard-coding it here: > https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/pom.xml#L73 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
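The second suggestion in the NIFI-5062 description can be sketched as a pom fragment. This is illustrative only: the property name `hbase.version` is taken from the ticket, the default value is inferred from the `nifi-hbase_1_1_2` bundle name, and the rest is the generic Maven pattern rather than the actual NiFi pom:

```xml
<!-- Sketch: replace the hard-coded HBase version with a property that
     can be overridden at build time, e.g. mvn -Dhbase.version=1.1.13 -->
<properties>
    <hbase.version>1.1.2</hbase.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-client</artifactId>
        <version>${hbase.version}</version>
    </dependency>
</dependencies>
```

Declaring the version through a property lets downstream builds target a different HBase release without patching the pom.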
[GitHub] nifi-minifi-cpp pull request #304: Providing fixes for sourcing RHEL based d...
Github user apiri closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/304 ---
[GitHub] nifi-minifi-cpp pull request #304: Providing fixes for sourcing RHEL based d...
GitHub user apiri opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/304 Providing fixes for sourcing RHEL based distributions in the bootstrap process Providing fixes for sourcing RHEL based distributions in the bootstrap process Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [X] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [X] Has your PR been rebased against the latest commit within the target branch (typically master)? - [X] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/apiri/nifi-minifi-cpp MINIFICPP-461 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/304.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #304 commit ddcf826c3493cc41e66e1b8564a0d927a72ccda9 Author: Aldrin Piri Date: 2018-04-20T12:48:48Z Providing fixes for sourcing RHEL based distributions in the bootstrap process ---
[jira] [Commented] (NIFI-5062) Remove hbase-client from nifi-hbase-bundle pom
[ https://issues.apache.org/jira/browse/NIFI-5062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445675#comment-16445675 ] ASF GitHub Bot commented on NIFI-5062: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2636 > Remove hbase-client from nifi-hbase-bundle pom > -- > > Key: NIFI-5062 > URL: https://issues.apache.org/jira/browse/NIFI-5062 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.6.0 >Reporter: Bryan Bende >Assignee: Sivaprasanna Sethuraman >Priority: Minor > > Since the hbase-client dependency should be coming from the client service > implementation, we shouldn't need to specify it here: > [https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-hbase-bundle/pom.xml#L43] > We should also make hbase.version a property that can be overridden at build > time, rather than hard-coding it here: > https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/pom.xml#L73 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5062) Remove hbase-client from nifi-hbase-bundle pom
[ https://issues.apache.org/jira/browse/NIFI-5062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445674#comment-16445674 ] ASF subversion and git services commented on NIFI-5062: --- Commit 1dbfcb94453cdfb50666b876c6836d924fae9048 in nifi's branch refs/heads/master from [~sivaprasanna] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=1dbfcb9 ] NIFI-5062: Removed hbase-client dependecy from hbase bundle This closes #2636 Signed-off-by: Mike Thomsen > Remove hbase-client from nifi-hbase-bundle pom > -- > > Key: NIFI-5062 > URL: https://issues.apache.org/jira/browse/NIFI-5062 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.6.0 >Reporter: Bryan Bende >Assignee: Sivaprasanna Sethuraman >Priority: Minor > > Since the hbase-client dependency should be coming from the client service > implementation, we shouldn't need to specify it here: > [https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-hbase-bundle/pom.xml#L43] > We should also make hbase.version a property that can be overridden at build > time, rather than hard-coding it here: > https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/pom.xml#L73 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2636: NIFI-5062: Removed hbase-client dependecy from hbas...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2636 ---
[jira] [Commented] (NIFI-5082) SQL processors do not handle Avro conversion of Oracle timestamps correctly
[ https://issues.apache.org/jira/browse/NIFI-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445673#comment-16445673 ] ASF GitHub Bot commented on NIFI-5082: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2638 +1 LGTM. Passed the build and it matches my rough understanding of the issues w/ Oracle as well. I'm good to merge this if you don't think it needs further review. > SQL processors do not handle Avro conversion of Oracle timestamps correctly > --- > > Key: NIFI-5082 > URL: https://issues.apache.org/jira/browse/NIFI-5082 > Project: Apache NiFi > Issue Type: Bug >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > > In JdbcCommon (used by such processors as ExecuteSQL and QueryDatabaseTable), > if a ResultSet column is not a CLOB or BLOB, its value is retrieved using > getObject(), then further processing is done based on the SQL type and/or the > Java class of the value. > However, in Oracle when getObject() is called on a Timestamp column, it > returns an Oracle-specific TIMESTAMP class which does not inherit from > java.sql.Timestamp or java.sql.Date. Thus the processing "falls through" and > an attempt is made to insert its value as a string, which violates the Avro > schema (which correctly recognized it as a long of timestamp logical type). > At least for Oracle, the right way to process a Timestamp column is to call > getTimestamp() rather than getObject(); the former returns a > java.sql.Timestamp object which would correctly be processed by the current > code. I would hope that all drivers would support this, but we would want to > test on (at least) MySQL, Oracle, and PostgreSQL. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
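The approach described in the ticket can be sketched as follows. This is a hypothetical illustration, not the actual JdbcCommon code; the class and method names are invented:

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.sql.Types;

// Sketch: for TIMESTAMP columns, call getTimestamp() instead of
// getObject(). Drivers such as Oracle's may return a vendor class
// (oracle.sql.TIMESTAMP) from getObject() that does not extend
// java.sql.Timestamp, while getTimestamp() always normalizes the
// value to java.sql.Timestamp.
public class TimestampColumnReader {
    static Object readForAvro(ResultSet rs, int column, int sqlType) throws SQLException {
        if (sqlType == Types.TIMESTAMP) {
            Timestamp ts = rs.getTimestamp(column);
            // Avro's timestamp-millis logical type expects a long
            return ts == null ? null : ts.getTime();
        }
        // All other types keep the existing getObject() path
        return rs.getObject(column);
    }
}
```

As the ticket notes, this relies on every driver implementing getTimestamp() consistently, so it would want verification against at least MySQL, Oracle, and PostgreSQL.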
[GitHub] nifi issue #2638: NIFI-5082: Added support for custom Oracle timestamp types...
Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2638 +1 LGTM. Passed the build and it matches my rough understanding of the issues w/ Oracle as well. I'm good to merge this if you don't think it needs further review. ---
[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services
[ https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445671#comment-16445671 ] ASF GitHub Bot commented on NIFI-4637: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2518 @ijokarumawak Disregard that last comment. I apparently forgot to update the EL support. Will get that done today. > Add support for HBase visibility labels to HBase processors and controller > services > --- > > Key: NIFI-4637 > URL: https://issues.apache.org/jira/browse/NIFI-4637 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Mike Thomsen >Assignee: Mike Thomsen >Priority: Major > > HBase supports visibility labels, but you can't use them from NiFi because > there is no way to set them. The existing processors and services should be > upgraded to handle this capability. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2518: NIFI-4637 Added support for visibility labels to the HBase...
Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2518 @ijokarumawak Disregard that last comment. I apparently forgot to update the EL support. Will get that done today. ---