[
https://issues.apache.org/jira/browse/NIFI-3029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15727285#comment-15727285
]
ASF GitHub Bot commented on NIFI-3029:
--------------------------------------
Github user combineads commented on a diff in the pull request:
https://github.com/apache/nifi/pull/1213#discussion_r91208324
--- Diff:
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java
---
@@ -179,6 +189,9 @@ public void onTrigger(final ProcessContext context, final ProcessSessionFactory
         final String maxValueColumnNames = context.getProperty(MAX_VALUE_COLUMN_NAMES).getValue();
         final Integer fetchSize = context.getProperty(FETCH_SIZE).asInteger();
         final Integer maxRowsPerFlowFile = context.getProperty(MAX_ROWS_PER_FLOW_FILE).asInteger();
+        final Integer maxFragments = context.getProperty(MAX_FRAGMENTS).isSet()
+                ? context.getProperty(MAX_FRAGMENTS).asInteger()
+                : 0;
--- End diff --
I changed the default value to zero and updated the property description to explain what zero means.
Thank you for the reviews, @mattyb149.
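The diff above defaults the new Max Fragments property to zero when it is unset, with zero meaning "no limit". A minimal standalone sketch of that cap-with-zero-as-unlimited behavior (hypothetical names, not the actual NiFi processor code) might look like:

```java
import java.util.ArrayList;
import java.util.List;

public class MaxFragmentsSketch {

    // Splits totalRows into fragments of at most maxRowsPerFragment rows each,
    // stopping once maxFragments is reached. A maxFragments of 0 means unlimited,
    // matching the default chosen in the pull request.
    static List<Integer> fragmentRowCounts(long totalRows, int maxRowsPerFragment, int maxFragments) {
        List<Integer> fragments = new ArrayList<>();
        long remaining = totalRows;
        while (remaining > 0) {
            if (maxFragments > 0 && fragments.size() >= maxFragments) {
                break; // cap reached; remaining rows are left for a later trigger
            }
            int rows = (int) Math.min(remaining, maxRowsPerFragment);
            fragments.add(rows);
            remaining -= rows;
        }
        return fragments;
    }

    public static void main(String[] args) {
        // Unlimited (0): 1000 rows at 100 rows each yields 10 fragments.
        System.out.println(fragmentRowCounts(1000, 100, 0).size());
        // Capped at 3: only 3 fragments are produced per invocation.
        System.out.println(fragmentRowCounts(1000, 100, 3).size());
    }
}
```

Capping the fragment count bounds how many FlowFile fragments are held in memory at once, which is what the issue below reports as the cause of the OutOfMemoryError.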
> QueryDatabaseTable supports max fragments property
> --------------------------------------------------
>
> Key: NIFI-3029
> URL: https://issues.apache.org/jira/browse/NIFI-3029
> Project: Apache NiFi
> Issue Type: Bug
> Components: Extensions
> Affects Versions: 1.1.0
> Reporter: Byunghwa Yun
> Priority: Minor
>
> When QueryDatabaseTable ingests a huge table (on the order of ten billion rows) for the
> first time, NiFi throws an OutOfMemoryError, because QueryDatabaseTable creates too many
> fragments in memory even when the MaxRowsPerFlowFile property is set.
> So I suggest that QueryDatabaseTable support a max fragments property.
> Thank you.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)