[
https://issues.apache.org/jira/browse/NIFI-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16354854#comment-16354854
]
ASF GitHub Bot commented on NIFI-1706:
--------------------------------------
Github user patricker commented on a diff in the pull request:
https://github.com/apache/nifi/pull/2162#discussion_r166499522
--- Diff:
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
---
@@ -150,8 +151,13 @@ public QueryDatabaseTable() {
final List<PropertyDescriptor> pds = new ArrayList<>();
pds.add(DBCP_SERVICE);
pds.add(DB_TYPE);
- pds.add(TABLE_NAME);
+ pds.add(new PropertyDescriptor.Builder()
+ .fromPropertyDescriptor(TABLE_NAME)
+ .description("The name of the database table to be queried. When a custom query is used, this property is used to alias the query and appears as an attribute on the FlowFile.")
+ .build());
--- End diff ---
Good catch.
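For readers unfamiliar with the pattern in the diff: `fromPropertyDescriptor` copies every field of an existing descriptor so that a subclass can override just one of them (here, the description). The sketch below is a minimal, self-contained illustration of that copy-then-override builder pattern; the `PropertyDescriptor` class here is a simplified stand-in, not NiFi's actual `org.apache.nifi.components.PropertyDescriptor`.

```java
// Simplified stand-in for NiFi's PropertyDescriptor, illustrating the
// copy-then-override builder pattern used in the diff above.
final class PropertyDescriptor {
    private final String name;
    private final String description;
    private final boolean required;

    private PropertyDescriptor(String name, String description, boolean required) {
        this.name = name;
        this.description = description;
        this.required = required;
    }

    String getName() { return name; }
    String getDescription() { return description; }
    boolean isRequired() { return required; }

    static final class Builder {
        private String name;
        private String description;
        private boolean required;

        // Copy every field from an existing descriptor so the caller can
        // selectively override some of them -- this is what
        // fromPropertyDescriptor(TABLE_NAME) does in the diff.
        Builder fromPropertyDescriptor(PropertyDescriptor other) {
            this.name = other.name;
            this.description = other.description;
            this.required = other.required;
            return this;
        }

        Builder name(String name) { this.name = name; return this; }
        Builder description(String description) { this.description = description; return this; }
        Builder required(boolean required) { this.required = required; return this; }

        PropertyDescriptor build() {
            return new PropertyDescriptor(name, description, required);
        }
    }
}
```

The point of the pattern: QueryDatabaseTable reuses the shared TABLE_NAME descriptor (so the property name and validation stay identical everywhere) while replacing only its documentation text for this processor.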
> Extend QueryDatabaseTable to support arbitrary queries
> ------------------------------------------------------
>
> Key: NIFI-1706
> URL: https://issues.apache.org/jira/browse/NIFI-1706
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Affects Versions: 1.4.0
> Reporter: Paul Bormans
> Assignee: Peter Wicks
> Priority: Major
> Labels: features
>
> The QueryDatabaseTable processor is able to observe a configured database
> table for new rows and yield them into FlowFiles. The model of an RDBMS,
> however, is often (if not always) normalized, so you would need to join
> various tables in order to "flatten" the data into useful events for a
> processing pipeline, as can be built with NiFi or various tools within the
> Hadoop ecosystem.
> The request is to extend the processor to accept an arbitrary SQL query
> instead of specifying the table name + columns.
> In addition (this may be another issue?), it is desirable to limit the number
> of rows returned per run. Not just because of bandwidth issues from the NiFi
> pipeline onwards, but mainly because huge databases may not be able to return
> so many records within a reasonable time.
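On the row-limit part of the request: one straightforward way to cap rows per run is to rewrite the generated SQL with a dialect-specific limit clause before executing it. The sketch below is an illustrative assumption, not the processor's actual implementation; the method name `withRowLimit` and the dialect keys are hypothetical.

```java
// Hedged sketch: cap the number of rows a generated query can return by
// appending a dialect-specific limit clause. Illustrative only -- not the
// actual QueryDatabaseTable code, which delegates SQL generation to a
// DatabaseAdapter per database type.
final class QueryLimiter {
    static String withRowLimit(String baseQuery, int maxRows, String dialect) {
        if (maxRows <= 0) {
            return baseQuery; // no limit requested
        }
        switch (dialect) {
            case "mssql":
                // SQL Server uses SELECT TOP n rather than a trailing LIMIT
                return baseQuery.replaceFirst("(?i)^SELECT", "SELECT TOP " + maxRows);
            default:
                // MySQL, PostgreSQL, SQLite, and others accept LIMIT
                return baseQuery + " LIMIT " + maxRows;
        }
    }
}
```

An alternative that avoids dialect handling entirely is JDBC's `java.sql.Statement.setMaxRows(int)`, which asks the driver to stop returning rows after the given count; whether the database itself stops scanning early, however, depends on the driver, so pushing the limit into the SQL is usually the safer choice for the "huge database" case described above.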
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)