[ https://issues.apache.org/jira/browse/NIFI-1251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16537315#comment-16537315 ]
ASF subversion and git services commented on NIFI-1251:
-------------------------------------------------------
Commit 382653654652ac6677b6171a48018f648952428b in nifi's branch
refs/heads/master from patricker
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=3826536 ]
NIFI-1251 ExecuteSQL Max Rows and Output Batching
NIFI-1251: Fixed doc and checkstyle issues
Signed-off-by: Matthew Burgess <[email protected]>
This closes #2834
> Allow ExecuteSQL to send out large result sets in chunks
> --------------------------------------------------------
>
> Key: NIFI-1251
> URL: https://issues.apache.org/jira/browse/NIFI-1251
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Reporter: Mark Payne
> Assignee: Peter Wicks
> Priority: Major
> Fix For: 1.8.0
>
>
> Currently, when using ExecuteSQL, if a result set is very large, it can take
> quite a long time to pull back all of the results. It would be nice to have
> the ability to specify the maximum number of records to put into a FlowFile,
> so that if we pull back, say, 1 million records we can configure it to create
> 1,000 FlowFiles, each with 1,000 records. This way, we can begin processing the
> first 1,000 records while the next 1,000 are being pulled from the remote
> database.
> This suggestion comes from Vinay via the dev@ mailing list:
> Is there a way to have a streaming feature when a large result set is fetched
> from the database, basically to read data from the database in chunks of
> records instead of loading the full result set into memory?
> As part of ExecuteSQL, can a property called "FetchSize" be specified, which
> indicates how many rows should be fetched from the ResultSet at a time?
> Since I am a bit new to using NiFi, can anyone guide me on the above?
> Thanks in advance
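The batching behavior the issue asks for can be sketched as plain row-grouping logic: split an incoming stream of rows into batches of at most N, where each batch would become one FlowFile. This is a minimal illustration of the idea, not the actual ExecuteSQL implementation; the class and method names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class RowBatcher {

    // Group rows from an iterator into batches of at most maxRowsPerBatch.
    // In ExecuteSQL terms, each inner list would correspond to one FlowFile
    // produced under a "Max Rows Per Flow File" setting.
    static <T> List<List<T>> batch(Iterator<T> rows, int maxRowsPerBatch) {
        List<List<T>> batches = new ArrayList<>();
        List<T> current = new ArrayList<>(maxRowsPerBatch);
        while (rows.hasNext()) {
            current.add(rows.next());
            if (current.size() == maxRowsPerBatch) {
                batches.add(current);
                current = new ArrayList<>(maxRowsPerBatch);
            }
        }
        // Emit the final partial batch, if any.
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }

    public static void main(String[] args) {
        // Simulate a result set of 10 rows, batched 3 at a time.
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            rows.add(i);
        }
        List<List<Integer>> batches = batch(rows.iterator(), 3);
        System.out.println(batches.size());        // 4 batches: 3 + 3 + 3 + 1
        System.out.println(batches.get(3).size()); // last batch holds 1 row
    }
}
```

On the database side, the complementary knob mentioned in the quote is the standard JDBC fetch size (`java.sql.Statement.setFetchSize(int)`), which hints to the driver how many rows to transfer per round trip so the full result set never has to sit in memory at once.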
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)