[ https://issues.apache.org/jira/browse/NIFI-3029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15725893#comment-15725893 ]

ASF GitHub Bot commented on NIFI-3029:
--------------------------------------

Github user mattyb149 commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/1213#discussion_r91108199
  
    --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java ---
    @@ -120,6 +120,15 @@
                 .addValidator(StandardValidators.NON_NEGATIVE_INTEGER_VALIDATOR)
                 .build();
     
    +    public static final PropertyDescriptor MAX_FRAGMENTS = new PropertyDescriptor.Builder()
    +            .name("Maximum Number of Fragments")
    --- End diff --
    
    This is not required, but many folks have adopted a convention for new properties where name() contains a more machine-friendly name (like "qdt-max-frags", for example) and displayName() contains the user-friendly name "Maximum Number of Fragments".
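
    Applied to the descriptor in this diff, the suggested convention would look something like the sketch below. The key "qdt-max-frags" is only the example key from the comment, and the description text is illustrative, not taken from the actual PR:

    ```java
    // Sketch of the MAX_FRAGMENTS descriptor using the name()/displayName()
    // convention: a stable, machine-friendly key in name() and the
    // human-readable label in displayName().
    public static final PropertyDescriptor MAX_FRAGMENTS = new PropertyDescriptor.Builder()
            .name("qdt-max-frags")                          // machine-friendly key (example from the comment)
            .displayName("Maximum Number of Fragments")     // user-friendly name shown in the UI
            .description("The maximum number of fragments. Zero means no limit.") // illustrative text
            .required(false)
            .addValidator(StandardValidators.NON_NEGATIVE_INTEGER_VALIDATOR)
            .build();
    ```

    One benefit of this convention is that the internal key in name() can stay fixed for flow compatibility even if the displayed label is later reworded.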


> QueryDatabaseTable supports max fragments property
> --------------------------------------------------
>
>                 Key: NIFI-3029
>                 URL: https://issues.apache.org/jira/browse/NIFI-3029
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>    Affects Versions: 1.1.0
>            Reporter: Byunghwa Yun
>            Priority: Minor
>
> When QueryDatabaseTable ingests a huge table (on the order of ten billion rows) for the
> first time, NiFi throws an OutOfMemoryError, because QueryDatabaseTable creates too many
> fragments in memory even when the Max Rows Per Flow File property is set.
> So I suggest that QueryDatabaseTable support a maximum-fragments property.
> Thank you.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
