[
https://issues.apache.org/jira/browse/NIFI-3029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15725899#comment-15725899
]
ASF GitHub Bot commented on NIFI-3029:
--------------------------------------
Github user mattyb149 commented on a diff in the pull request:
https://github.com/apache/nifi/pull/1213#discussion_r91108610
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java ---
@@ -179,6 +189,9 @@ public void onTrigger(final ProcessContext context, final ProcessSessionFactory
         final String maxValueColumnNames = context.getProperty(MAX_VALUE_COLUMN_NAMES).getValue();
         final Integer fetchSize = context.getProperty(FETCH_SIZE).asInteger();
         final Integer maxRowsPerFlowFile = context.getProperty(MAX_ROWS_PER_FLOW_FILE).asInteger();
+        final Integer maxFragments = context.getProperty(MAX_FRAGMENTS).isSet()
+                ? context.getProperty(MAX_FRAGMENTS).asInteger()
+                : 0;
--- End diff --
Is zero a valid number of fragments on its own (versus leaving the property
blank)? If so, then perhaps the "not set" value should be -1 and the logic below
changed to >= 0. Otherwise, perhaps the default value for the property should be
zero, and the description updated to reflect that zero means all fragments.
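The first suggestion above (use -1 as the "not set" sentinel so 0 stays a valid user value) could be sketched as follows. This is an illustrative standalone class, not the actual QueryDatabaseTable code; the names `resolveMaxFragments` and `limitReached` are hypothetical helpers invented for this example.

```java
// Sketch of the -1 sentinel approach: a blank property maps to -1,
// so 0 remains a legitimate, explicitly configured fragment count.
public class MaxFragmentsSentinel {

    // -1 means the property was left blank; 0 or greater is a user-set value.
    static final int NOT_SET = -1;

    // A null Integer stands in for PropertyValue.isSet() returning false.
    static int resolveMaxFragments(Integer propertyValue) {
        return (propertyValue != null) ? propertyValue : NOT_SET;
    }

    // Enforce the limit only when the property was set (>= 0),
    // matching the reviewer's suggested ">= 0" check.
    static boolean limitReached(int maxFragments, int fragmentIndex) {
        return maxFragments >= 0 && fragmentIndex >= maxFragments;
    }

    public static void main(String[] args) {
        // Blank property: never limits, even after many fragments.
        System.out.println(limitReached(resolveMaxFragments(null), 1000));
        // Explicit zero: limit is reached immediately.
        System.out.println(limitReached(resolveMaxFragments(0), 0));
    }
}
```

With this scheme an explicit 0 and a blank property behave differently, which is exactly the ambiguity the comment is raising about the `: 0` default in the diff.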
> QueryDatabaseTable supports max fragments property
> --------------------------------------------------
>
> Key: NIFI-3029
> URL: https://issues.apache.org/jira/browse/NIFI-3029
> Project: Apache NiFi
> Issue Type: Bug
> Components: Extensions
> Affects Versions: 1.1.0
> Reporter: Byunghwa Yun
> Priority: Minor
>
> When QueryDatabaseTable ingests a huge table containing ten billion rows for
> the first time, NiFi throws an OutOfMemoryError, because QueryDatabaseTable
> creates too many fragments in memory even when the MaxRowsPerFlowFile
> property is set.
> So I suggest that QueryDatabaseTable support a max fragments property.
> Thank you.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)