[ https://issues.apache.org/jira/browse/NIFI-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vadim updated NIFI-5788:
------------------------
    Description: 
Certain JDBC drivers do not support unlimited batch sizes in INSERT/UPDATE 
prepared SQL statements. Specifically, the Teradata JDBC driver 
([https://downloads.teradata.com/download/connectivity/jdbc-driver]) fails the 
SQL statement when the batch exceeds its internal limits.

Dividing the data into smaller chunks before PutDatabaseRecord is applied can 
work around the issue in certain scenarios, but in general this is not a 
complete solution: the SQL statements would execute in different transaction 
contexts, so data integrity would not be preserved.

The proposed solution is the following:
 * introduce a new optional property in the *PutDatabaseRecord* processor, 
*max_batch_size*, which defines the maximum batch size for an INSERT/UPDATE 
statement; the default value zero (INFINITY) preserves the old behavior
 * divide the input into batches of the specified size and invoke 
PreparedStatement.executeBatch() for each batch (see the sketch below)
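
For illustration, a minimal sketch of the batching loop, assuming the 
max_batch_size semantics above (0 = INFINITY). Names such as executeInBatches 
and Binder are hypothetical helpers for this sketch, not the actual PR code:

{code:java}
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Hypothetical callback that binds one record's fields to the prepared statement.
interface Binder<T> {
    void apply(PreparedStatement ps, T record) throws SQLException;
}

class BatchedExecutor {
    // Minimal sketch: maxBatchSize == 0 means "no limit" (the old behavior).
    static <T> void executeInBatches(PreparedStatement ps, Iterable<T> records,
                                     int maxBatchSize, Binder<T> bind) throws SQLException {
        int pending = 0;
        for (T record : records) {
            bind.apply(ps, record);
            ps.addBatch();
            pending++;
            if (maxBatchSize > 0 && pending == maxBatchSize) {
                ps.executeBatch(); // flush a full batch to stay under the driver's limit
                pending = 0;
            }
        }
        if (pending > 0) {
            ps.executeBatch(); // flush the remainder (the entire input when maxBatchSize == 0)
        }
    }
}
{code}

Because every executeBatch() call runs on the same connection within the same 
transaction, the data-integrity concern above does not arise the way it does 
when the flow is split upstream of the processor.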

Pull request: [https://github.com/apache/nifi/pull/3128]

 

[EDIT] Changed batch_size to max_batch_size; the default value is now zero 
(INFINITY).

  was:
Certain JDBC drivers do not support unlimited batch sizes in INSERT/UPDATE 
prepared SQL statements. Specifically, the Teradata JDBC driver 
([https://downloads.teradata.com/download/connectivity/jdbc-driver]) fails the 
SQL statement when the batch exceeds its internal limits.

Dividing the data into smaller chunks before PutDatabaseRecord is applied can 
work around the issue in certain scenarios, but in general this is not a 
complete solution: the SQL statements would execute in different transaction 
contexts, so data integrity would not be preserved.

The proposed solution is the following:
 * introduce a new optional property in the *PutDatabaseRecord* processor, 
*batch_size*, which defines the maximum size of the bulk in an INSERT/UPDATE 
statement; its default value of -1 (INFINITY) preserves the old behavior
 * divide the input into batches of the specified size and invoke 
PreparedStatement.executeBatch() for each batch

Pull request: [https://github.com/apache/nifi/pull/3128]

 


> Introduce batch size limit in PutDatabaseRecord processor
> ---------------------------------------------------------
>
>                 Key: NIFI-5788
>                 URL: https://issues.apache.org/jira/browse/NIFI-5788
>             Project: Apache NiFi
>          Issue Type: Improvement
>          Components: Core Framework
>         Environment: Teradata DB
>            Reporter: Vadim
>            Priority: Major
>              Labels: pull-request-available
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
