[jira] [Commented] (NIFI-10442) Create PutIceberg processor

2023-09-19 Thread Abdelrahim Ahmad (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17766756#comment-17766756
 ] 

Abdelrahim Ahmad commented on NIFI-10442:
-

Hi guys, thanks for this great processor. Is there any chance that it will 
support modern data storage systems like MinIO, S3, and other object storage 
tools in the future?
Thanks
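
For context, Iceberg can already address S3-compatible object stores such as 
MinIO when the underlying Hadoop filesystem is configured for it. A minimal 
sketch, assuming the hadoop-aws s3a connector is on the classpath; the 
endpoint, credentials, bucket, and table names are placeholders:

{code:java}
// Sketch: pointing an Iceberg HadoopCatalog at S3-compatible object storage
// (e.g. MinIO) through the s3a:// filesystem. All names are placeholders.
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.hadoop.HadoopCatalog;

public class IcebergOnMinioSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("fs.s3a.endpoint", "http://minio.example.com:9000"); // placeholder MinIO endpoint
        conf.set("fs.s3a.access.key", "ACCESS_KEY");                  // placeholder credentials
        conf.set("fs.s3a.secret.key", "SECRET_KEY");
        conf.set("fs.s3a.path.style.access", "true");                 // MinIO usually needs path-style URLs

        // The warehouse lives in a bucket rather than HDFS.
        HadoopCatalog catalog = new HadoopCatalog(conf, "s3a://warehouse-bucket/iceberg");
        Table table = catalog.loadTable(TableIdentifier.of("db", "events"));
        System.out.println(table.location());
    }
}
{code}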

 

> Create PutIceberg processor
> ---
>
> Key: NIFI-10442
> URL: https://issues.apache.org/jira/browse/NIFI-10442
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mark Bathori
>Assignee: Mark Bathori
>Priority: Major
> Fix For: 1.19.0
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> Add a processor that is able to ingest data into Iceberg tables.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-11449) add autocommit property to PutDatabaseRecord processor

2023-07-05 Thread Abdelrahim Ahmad (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740328#comment-17740328
 ] 

Abdelrahim Ahmad commented on NIFI-11449:
-

Hi [~sabonyi], thanks for your reply. I saw the processor you mentioned, but it 
works only with a Hive database or HDFS; it doesn't support object storage like 
MinIO, AWS, or GCP.
 So it cannot be used with modern data lakehouse systems.
Best regards
AA

> add autocommit property to PutDatabaseRecord processor
> --
>
> Key: NIFI-11449
> URL: https://issues.apache.org/jira/browse/NIFI-11449
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.21.0
> Environment: Any Nifi Deployment
>Reporter: Abdelrahim Ahmad
>Priority: Blocker
>  Labels: Trino, autocommit, database, iceberg, putdatabaserecord
>
> The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
> using the processor with the Trino JDBC driver or the Dremio JDBC driver to 
> write to an Iceberg catalog, it disables the autocommit feature. This leads to 
> errors such as "{*}Catalog only supports writes using autocommit: iceberg{*}".
> The autocommit feature needs to be exposed in the processor so it can be 
> enabled or disabled.
> Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
> Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity 
> by allowing atomic writes to be performed in the underlying database. This 
> will allow the processor to be used with a wider range of databases.
> _Improving this processor will allow NiFi to be the main tool for ingesting 
> data into these new technologies, so we don't have to deal with another tool 
> to do so._
> +*_{color:#de350b}BUT:{color}_*+
> I have reviewed the {{PutDatabaseRecord}} processor in NiFi. It inserts 
> records one by one into the database using a prepared statement and commits 
> the transaction at the end of the loop that processes each record. This 
> approach can be inefficient and slow when inserting large volumes of data 
> into tables that are optimized for bulk ingestion, such as Delta Lake, 
> Iceberg, and Hudi tables.
> These tables use various techniques to optimize the performance of bulk 
> ingestion, such as partitioning, clustering, and indexing. Inserting records 
> one by one using a prepared statement can bypass these optimizations, leading 
> to poor performance and potentially causing issues such as excessive disk 
> usage, increased memory consumption, and decreased query performance.
> To avoid these issues, it is recommended to add a new processor, or a feature 
> to the current one, that uses a bulk insert method with autocommit enabled 
> when inserting large volumes of data into Delta Lake, Iceberg, and Hudi 
> tables. 
>  
> P.S.: PutSQL does have autocommit, but it has the same performance problem 
> described above.
> Thanks and best regards :)
> Abdelrahim Ahmad



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-11449) add autocommit property to PutDatabaseRecord processor

2023-04-14 Thread Abdelrahim Ahmad (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdelrahim Ahmad updated NIFI-11449:

Description: 
The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
using the processor with the Trino JDBC driver or the Dremio JDBC driver to 
write to an Iceberg catalog, it disables the autocommit feature. This leads to 
errors such as "{*}Catalog only supports writes using autocommit: iceberg{*}".

The autocommit feature needs to be exposed in the processor so it can be 
enabled or disabled.
Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity by 
allowing atomic writes to be performed in the underlying database. This will 
allow the processor to be used with a wider range of databases.
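
To make the failure mode concrete, here is a minimal plain-JDBC sketch (not 
NiFi code; the host, catalog, schema, and table names are placeholders, and 
the io.trino:trino-jdbc driver is assumed). Trino's Iceberg connector accepts 
writes only while the connection stays in its default autocommit mode, so it 
is the setAutoCommit(false) call that breaks the write:

{code:java}
// Sketch: writing to an Iceberg catalog through the Trino JDBC driver.
// Host, catalog, schema, table, and user names are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class TrinoAutocommitSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:trino://trino.example.com:8080/iceberg/analytics", "nifi", null)) {
            // JDBC connections start in autocommit mode. PutDatabaseRecord
            // currently turns it off, which triggers
            // "Catalog only supports writes using autocommit: iceberg".
            // conn.setAutoCommit(false);  // <- the call that breaks Iceberg writes
            try (PreparedStatement ps =
                    conn.prepareStatement("INSERT INTO events (id, name) VALUES (?, ?)")) {
                ps.setLong(1, 1L);
                ps.setString(2, "example");
                ps.executeUpdate();  // commits immediately under autocommit
            }
        }
    }
}
{code}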

_Improving this processor will allow NiFi to be the main tool for ingesting 
data into these new technologies, so we don't have to deal with another tool 
to do so._

+*_{color:#de350b}BUT:{color}_*+



I have reviewed the {{PutDatabaseRecord}} processor in NiFi. It inserts records 
one by one into the database using a prepared statement and commits the 
transaction at the end of the loop that processes each record. This approach 
can be inefficient and slow when inserting large volumes of data into tables 
that are optimized for bulk ingestion, such as Delta Lake, Iceberg, and Hudi 
tables.

These tables use various techniques to optimize the performance of bulk 
ingestion, such as partitioning, clustering, and indexing. Inserting records 
one by one using a prepared statement can bypass these optimizations, leading 
to poor performance and potentially causing issues such as excessive disk 
usage, increased memory consumption, and decreased query performance.

To avoid these issues, it is recommended to add a new processor, or a feature 
to the current one, that uses a bulk insert method with autocommit enabled 
when inserting large volumes of data into Delta Lake, Iceberg, and Hudi 
tables. 
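
A rough sketch of that direction (an illustration under assumptions, not the 
actual PutDatabaseRecord implementation; the table and column names are 
placeholders): batching rows through the standard JDBC batch API cuts per-row 
round trips while leaving the autocommit policy to the caller.

{code:java}
// Sketch: batched inserts over plain JDBC. Placeholder schema: rows are
// (id, value) pairs going into a hypothetical "events" table.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;

public class BulkInsertSketch {
    static void bulkInsert(Connection conn, List<long[]> rows, int batchSize) throws Exception {
        try (PreparedStatement ps =
                conn.prepareStatement("INSERT INTO events (id, value) VALUES (?, ?)")) {
            int pending = 0;
            for (long[] row : rows) {
                ps.setLong(1, row[0]);
                ps.setLong(2, row[1]);
                ps.addBatch();              // accumulate instead of executing row by row
                if (++pending == batchSize) {
                    ps.executeBatch();      // one round trip per batch (driver permitting)
                    pending = 0;
                }
            }
            if (pending > 0) {
                ps.executeBatch();          // flush the final partial batch
            }
        }
    }
}
{code}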

 

P.S.: PutSQL does have autocommit, but it has the same performance problem 
described above.

Thanks and best regards :)
Abdelrahim Ahmad

  was:
The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
using the processor with the Trino JDBC driver or the Dremio JDBC driver to 
write to an Iceberg catalog, it disables the autocommit feature. This leads to 
errors such as "{*}Catalog only supports writes using autocommit: iceberg{*}".

To fix this issue, the autocommit feature needs to be added to the processor 
so it can be enabled or disabled.
Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity by 
allowing atomic writes to be performed in the underlying database. This will 
allow the processor to be used with a wider range of databases.

_*Improving this processor will allow NiFi to be the main tool for ingesting 
data into these new technologies, so we don't have to deal with another tool 
to do so.*_

P.S.: Using PutSQL is not a good option at all due to the sensitivity of these 
tables when dealing with small inserts.

Thanks and best regards
Abdelrahim Ahmad


> add autocommit property to PutDatabaseRecord processor
> --
>
> Key: NIFI-11449
> URL: https://issues.apache.org/jira/browse/NIFI-11449
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.21.0
> Environment: Any Nifi Deployment
>Reporter: Abdelrahim Ahmad
>Priority: Blocker
>  Labels: Trino, autocommit, database, iceberg, putdatabaserecord
>
> The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
> using the processor with the Trino JDBC driver or the Dremio JDBC driver to 
> write to an Iceberg catalog, it disables the autocommit feature. This leads to 
> errors such as "{*}Catalog only supports writes using autocommit: iceberg{*}".
> The autocommit feature needs to be exposed in the processor so it can be 
> enabled or disabled.
> Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
> Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity 
> by allowing atomic writes to be performed in the underlying database. This 
> will allow the processor to be used with a wider range of databases.
> _Improving this processor will allow NiFi to be the main tool for ingesting 
> data into these new technologies, so we don't have to deal with another tool 
> to do so._
> +*_{color:#de350b}BUT:{color}_*+
> I have reviewed the {{PutDatabaseRecord}} processor in NiFi. It inserts 
> records one by one into the database using a prepared statement and commits 
> the transaction at the end of the loop that processes each record. 

[jira] [Updated] (NIFI-11449) add autocommit property to PutDatabaseRecord processor

2023-04-14 Thread Abdelrahim Ahmad (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdelrahim Ahmad updated NIFI-11449:

Issue Type: New Feature  (was: Improvement)

> add autocommit property to PutDatabaseRecord processor
> --
>
> Key: NIFI-11449
> URL: https://issues.apache.org/jira/browse/NIFI-11449
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.21.0
> Environment: Any Nifi Deployment
>Reporter: Abdelrahim Ahmad
>Priority: Blocker
>  Labels: Trino, autocommit, database, iceberg, putdatabaserecord
>
> The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
> using the processor with the Trino JDBC driver or the Dremio JDBC driver to 
> write to an Iceberg catalog, it disables the autocommit feature. This leads to 
> errors such as "{*}Catalog only supports writes using autocommit: iceberg{*}".
> To fix this issue, the autocommit feature needs to be added to the processor 
> so it can be enabled or disabled.
> Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
> Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity 
> by allowing atomic writes to be performed in the underlying database. This 
> will allow the processor to be used with a wider range of databases.
> _*Improving this processor will allow NiFi to be the main tool for ingesting 
> data into these new technologies, so we don't have to deal with another tool 
> to do so.*_
> P.S.: Using PutSQL is not a good option at all due to the sensitivity of 
> these tables when dealing with small inserts.
> Thanks and best regards
> Abdelrahim Ahmad



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-11449) add autocommit property to PutDatabaseRecord processor

2023-04-13 Thread Abdelrahim Ahmad (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdelrahim Ahmad updated NIFI-11449:

Summary: add autocommit property to PutDatabaseRecord processor  (was: add 
autocommit property to control commit in PutDatabaseRecord processor)

> add autocommit property to PutDatabaseRecord processor
> --
>
> Key: NIFI-11449
> URL: https://issues.apache.org/jira/browse/NIFI-11449
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.21.0
> Environment: Any Nifi Deployment
>Reporter: Abdelrahim Ahmad
>Priority: Blocker
>  Labels: Trino, autocommit, database, iceberg, putdatabaserecord
>
> The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
> using the processor with the Trino JDBC driver or the Dremio JDBC driver to 
> write to an Iceberg catalog, it disables the autocommit feature. This leads to 
> errors such as "{*}Catalog only supports writes using autocommit: iceberg{*}".
> To fix this issue, the autocommit feature needs to be added to the processor 
> so it can be enabled or disabled.
> Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
> Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity 
> by allowing atomic writes to be performed in the underlying database. This 
> will allow the processor to be used with a wider range of databases.
> _*Improving this processor will allow NiFi to be the main tool for ingesting 
> data into these new technologies, so we don't have to deal with another tool 
> to do so.*_
> P.S.: Using PutSQL is not a good option at all due to the sensitivity of 
> these tables when dealing with small inserts.
> Thanks and best regards
> Abdelrahim Ahmad



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-11449) add autocommit property to control commit in PutDatabaseRecord processor

2023-04-13 Thread Abdelrahim Ahmad (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdelrahim Ahmad updated NIFI-11449:

Description: 
The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
using the processor with the Trino JDBC driver or the Dremio JDBC driver to 
write to an Iceberg catalog, it disables the autocommit feature. This leads to 
errors such as "{*}Catalog only supports writes using autocommit: iceberg{*}".

To fix this issue, the autocommit feature needs to be added to the processor 
so it can be enabled or disabled.
Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity by 
allowing atomic writes to be performed in the underlying database. This will 
allow the processor to be used with a wider range of databases.

_*Improving this processor will allow NiFi to be the main tool for ingesting 
data into these new technologies, so we don't have to deal with another tool 
to do so.*_

P.S.: Using PutSQL is not a good option at all due to the sensitivity of these 
tables when dealing with small inserts.

Thanks and best regards
Abdelrahim Ahmad

  was:
The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
using the processor with the Trino JDBC driver to write to an Iceberg catalog, 
it disables the autocommit feature. This leads to errors such as "{*}Catalog 
only supports writes using autocommit: iceberg{*}".

To fix this issue, the autocommit feature needs to be added to the processor 
so it can be enabled or disabled.
Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity by 
allowing atomic writes to be performed in the underlying database. This will 
allow the processor to be used with a wider range of databases.

_*Improving this processor will allow NiFi to be the main tool for ingesting 
data into these new technologies, so we don't have to deal with another tool 
to do so.*_

P.S.: Using PutSQL is not a good option at all due to the sensitivity of these 
tables when dealing with small inserts.

Thanks and best regards
Abdelrahim Ahmad


> add autocommit property to control commit in PutDatabaseRecord processor
> 
>
> Key: NIFI-11449
> URL: https://issues.apache.org/jira/browse/NIFI-11449
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.21.0
> Environment: Any Nifi Deployment
>Reporter: Abdelrahim Ahmad
>Priority: Blocker
>  Labels: Trino, autocommit, database, iceberg, putdatabaserecord
>
> The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
> using the processor with the Trino JDBC driver or the Dremio JDBC driver to 
> write to an Iceberg catalog, it disables the autocommit feature. This leads to 
> errors such as "{*}Catalog only supports writes using autocommit: iceberg{*}".
> To fix this issue, the autocommit feature needs to be added to the processor 
> so it can be enabled or disabled.
> Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
> Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity 
> by allowing atomic writes to be performed in the underlying database. This 
> will allow the processor to be used with a wider range of databases.
> _*Improving this processor will allow NiFi to be the main tool for ingesting 
> data into these new technologies, so we don't have to deal with another tool 
> to do so.*_
> P.S.: Using PutSQL is not a good option at all due to the sensitivity of 
> these tables when dealing with small inserts.
> Thanks and best regards
> Abdelrahim Ahmad



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-11449) add autocommit property to control commit in PutDatabaseRecord processor

2023-04-13 Thread Abdelrahim Ahmad (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdelrahim Ahmad updated NIFI-11449:

Description: 
The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
using the processor with the Trino JDBC driver to write to an Iceberg catalog, 
it disables the autocommit feature. This leads to errors such as "{*}Catalog 
only supports writes using autocommit: iceberg{*}".

To fix this issue, the autocommit feature needs to be added to the processor 
so it can be enabled or disabled.
Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity by 
allowing atomic writes to be performed in the underlying database. This will 
allow the processor to be used with a wider range of databases.

_*Improving this processor will allow NiFi to be the main tool for ingesting 
data into these new technologies, so we don't have to deal with another tool 
to do so.*_

P.S.: Using PutSQL is not a good option at all due to the sensitivity of these 
tables when dealing with small inserts.

Thanks and best regards
Abdelrahim Ahmad

  was:
The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
using the processor with the Trino JDBC driver to write to an Iceberg catalog, 
it disables the autocommit feature. This leads to errors such as "{*}Catalog 
only supports writes using autocommit: iceberg{*}".

To fix this issue, the autocommit feature needs to be added to the processor 
so it can be enabled or disabled.
Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity by 
allowing atomic writes to be performed in the underlying database. This will 
allow the processor to be used with a wider range of databases.

_*Improving this process will allow NiFi to be the main tool for ingesting 
data into these new technologies, so we don't have to deal with another way to 
ingest data.*_

P.S.: Using PutSQL is not a good option at all due to the sensitivity of these 
tables when dealing with small inserts.

Thanks and best regards
Abdelrahim Ahmad


> add autocommit property to control commit in PutDatabaseRecord processor
> 
>
> Key: NIFI-11449
> URL: https://issues.apache.org/jira/browse/NIFI-11449
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.21.0
> Environment: Any Nifi Deployment
>Reporter: Abdelrahim Ahmad
>Priority: Blocker
>  Labels: Trino, autocommit, database, iceberg, putdatabaserecord
>
> The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
> using the processor with the Trino JDBC driver to write to an Iceberg 
> catalog, it disables the autocommit feature. This leads to errors such as 
> "{*}Catalog only supports writes using autocommit: iceberg{*}".
> To fix this issue, the autocommit feature needs to be added to the processor 
> so it can be enabled or disabled.
> Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
> Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity 
> by allowing atomic writes to be performed in the underlying database. This 
> will allow the processor to be used with a wider range of databases.
> _*Improving this processor will allow NiFi to be the main tool for ingesting 
> data into these new technologies, so we don't have to deal with another tool 
> to do so.*_
> P.S.: Using PutSQL is not a good option at all due to the sensitivity of 
> these tables when dealing with small inserts.
> Thanks and best regards
> Abdelrahim Ahmad



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-11449) add autocommit property to control commit in PutDatabaseRecord processor

2023-04-13 Thread Abdelrahim Ahmad (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdelrahim Ahmad updated NIFI-11449:

Description: 
The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
using the processor with the Trino JDBC driver to write to an Iceberg catalog, 
it disables the autocommit feature. This leads to errors such as "{*}Catalog 
only supports writes using autocommit: iceberg{*}".

To fix this issue, the autocommit feature needs to be added to the processor 
so it can be enabled or disabled.
Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity by 
allowing atomic writes to be performed in the underlying database. This will 
allow the processor to be used with a wider range of databases.

_*Improving this process will allow NiFi to be the main tool for ingesting 
data into these new technologies, so we don't have to deal with another way to 
ingest data.*_

P.S.: Using PutSQL is not a good option at all due to the sensitivity of these 
tables when dealing with small inserts.

Thanks and best regards
Abdelrahim Ahmad

  was:
The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
using the processor with the Trino JDBC driver to write to an Iceberg catalog, 
it disables the autocommit feature. This leads to errors such as "{*}Catalog 
only supports writes using autocommit: iceberg{*}".

To fix this issue, the autocommit feature needs to be added to the processor 
so it can be enabled or disabled.
Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity by 
allowing atomic writes to be performed in the underlying database. This will 
allow the processor to be used with a wider range of databases.

P.S.: Using PutSQL is not a good option at all due to the sensitivity of these 
tables when dealing with small inserts.

Thanks and best regards
Abdelrahim Ahmad


> add autocommit property to control commit in PutDatabaseRecord processor
> 
>
> Key: NIFI-11449
> URL: https://issues.apache.org/jira/browse/NIFI-11449
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.21.0
> Environment: Any Nifi Deployment
>Reporter: Abdelrahim Ahmad
>Priority: Blocker
>  Labels: Trino, autocommit, database, iceberg, putdatabaserecord
>
> The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
> using the processor with the Trino JDBC driver to write to an Iceberg 
> catalog, it disables the autocommit feature. This leads to errors such as 
> "{*}Catalog only supports writes using autocommit: iceberg{*}".
> To fix this issue, the autocommit feature needs to be added to the processor 
> so it can be enabled or disabled.
> Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
> Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity 
> by allowing atomic writes to be performed in the underlying database. This 
> will allow the processor to be used with a wider range of databases.
> _*Improving this process will allow NiFi to be the main tool for ingesting 
> data into these new technologies, so we don't have to deal with another way 
> to ingest data.*_
> P.S.: Using PutSQL is not a good option at all due to the sensitivity of 
> these tables when dealing with small inserts.
> Thanks and best regards
> Abdelrahim Ahmad



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-11449) add autocommit property to control commit in PutDatabaseRecord processor

2023-04-13 Thread Abdelrahim Ahmad (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17712091#comment-17712091
 ] 

Abdelrahim Ahmad commented on NIFI-11449:
-

Improving this process will allow NiFi to be the main tool for ingesting data 
into these new technologies,
so we don't have to deal with another way to ingest data.

> add autocommit property to control commit in PutDatabaseRecord processor
> 
>
> Key: NIFI-11449
> URL: https://issues.apache.org/jira/browse/NIFI-11449
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.21.0
> Environment: Any Nifi Deployment
>Reporter: Abdelrahim Ahmad
>Priority: Blocker
>  Labels: Trino, autocommit, database, iceberg, putdatabaserecord
>
> The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
> using the processor with the Trino JDBC driver to write to an Iceberg 
> catalog, it disables the autocommit feature. This leads to errors such as 
> "{*}Catalog only supports writes using autocommit: iceberg{*}".
> To fix this issue, the autocommit feature needs to be added to the processor 
> so it can be enabled or disabled.
> Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
> Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity 
> by allowing atomic writes to be performed in the underlying database. This 
> will allow the processor to be used with a wider range of databases.
> P.S.: Using PutSQL is not a good option at all due to the sensitivity of 
> these tables when dealing with small inserts.
> Thanks and best regards
> Abdelrahim Ahmad



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (NIFI-11449) add autocommit property to control commit in PutDatabaseRecord processor

2023-04-13 Thread Abdelrahim Ahmad (Jira)


[ https://issues.apache.org/jira/browse/NIFI-11449 ]


Abdelrahim Ahmad deleted comment on NIFI-11449:
-

was (Author: abdelrahimk):
Improving this process will allow NiFi to be the main tool for ingesting data 
into these new technologies,
so we don't have to deal with another way to ingest data.

> add autocommit property to control commit in PutDatabaseRecord processor
> 
>
> Key: NIFI-11449
> URL: https://issues.apache.org/jira/browse/NIFI-11449
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.21.0
> Environment: Any Nifi Deployment
>Reporter: Abdelrahim Ahmad
>Priority: Blocker
>  Labels: Trino, autocommit, database, iceberg, putdatabaserecord
>
> The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
> using the processor with the Trino JDBC driver to write to an Iceberg 
> catalog, it disables the autocommit feature. This leads to errors such as 
> "{*}Catalog only supports writes using autocommit: iceberg{*}".
> To fix this issue, the autocommit feature needs to be added to the processor 
> so it can be enabled or disabled.
> Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
> Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity 
> by allowing atomic writes to be performed in the underlying database. This 
> will allow the processor to be used with a wider range of databases.
> P.S.: Using PutSQL is not a good option at all due to the sensitivity of 
> these tables when dealing with small inserts.
> Thanks and best regards
> Abdelrahim Ahmad



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-11449) add autocommit property to control commit in PutDatabaseRecord processor

2023-04-13 Thread Abdelrahim Ahmad (Jira)
Abdelrahim Ahmad created NIFI-11449:
---

 Summary: add autocommit property to control commit in 
PutDatabaseRecord processor
 Key: NIFI-11449
 URL: https://issues.apache.org/jira/browse/NIFI-11449
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 1.21.0
 Environment: Any Nifi Deployment
Reporter: Abdelrahim Ahmad


The issue is with the {{PutDatabaseRecord}} processor in Apache NiFi. When 
using the processor with the Trino JDBC driver to write to an Iceberg catalog, 
it disables the autocommit feature. This leads to errors such as "{*}Catalog 
only supports writes using autocommit: iceberg{*}".

To fix this issue, the autocommit feature needs to be added to the processor 
so it can be enabled or disabled.
Enabling auto-commit in the NiFi PutDatabaseRecord processor is important for 
Delta Lake, Iceberg, and Hudi, as it ensures data consistency and integrity by 
allowing atomic writes to be performed in the underlying database. This will 
allow the processor to be used with a wider range of databases.

P.S.: Using PutSQL is not a good option at all due to the sensitivity of these 
tables when dealing with small inserts.

Thanks and best regards
Abdelrahim Ahmad



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-7899) InvokeHTTP does not timeout

2021-09-03 Thread Abdelrahim Ahmad (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17409566#comment-17409566
 ] 

Abdelrahim Ahmad commented on NIFI-7899:


Hi all,
I have exactly the same issue with version 1.12.1: the processor works for a 
couple of days and then hangs. I have to terminate it in order to stop it, and 
once started again it doesn't work at all.

The workarounds I use:
 * either stop the processor roughly once a day and start it again (before it 
hangs),
 * or replace the processor with a new copy of it (when it hangs and stops 
working).

BR
Abdelrahim Ahmad
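
For what it's worth, InvokeHTTP is built on OkHttp, where the connect and read 
timeouts bound individual socket operations rather than the whole exchange, so 
a call can stall without either one firing. OkHttp's call timeout caps the 
entire call. A minimal sketch, assuming OkHttp 4.x outside of NiFi (the URL is 
a placeholder):

{code:java}
// Sketch: bounding the total duration of an HTTP call with OkHttp.
import java.time.Duration;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class CallTimeoutSketch {
    public static void main(String[] args) throws Exception {
        OkHttpClient client = new OkHttpClient.Builder()
                .connectTimeout(Duration.ofSeconds(5))   // per-connection-attempt limit
                .readTimeout(Duration.ofSeconds(15))     // per-read limit; resets on each byte received
                .callTimeout(Duration.ofSeconds(60))     // hard ceiling for the entire call
                .build();

        Request request = new Request.Builder()
                .url("https://example.com/api")          // placeholder endpoint
                .build();

        try (Response response = client.newCall(request).execute()) {
            System.out.println(response.code());
        }
    }
}
{code}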

> InvokeHTTP does not timeout
> ---
>
> Key: NIFI-7899
> URL: https://issues.apache.org/jira/browse/NIFI-7899
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.11.4
> Environment: Ubuntu 18.04. Nifi 1.11.4.
> 4 core, 8GB mem. Java set to 4GB mem
>Reporter: Jens M Kofoed
>Priority: Major
>
> We have some issues with the InvokeHTTP processor. It "randomly" hangs 
> without timing out. The processor shows one task running (upper right 
> corner), and it can run for hours without any output, but with multiple 
> flowfiles in the queue.
> Trying to stop it takes forever, so I have to terminate it. After restarting 
> the processor, everything works fine for a long time, until the next time it 
> hangs.
> Our configuration of the processor is as follows:
>  Penalty: 30s, Yield: 1s,
>  Scheduling: timer driven, Concurrent Tasks: 1, Run Schedule: 0, Run duration: 0
> HTTP Method: GET
> Connection timeout: 5s
> Read timeout: 15s
> Idle Timeout: 5m
> Max idle Connections: 5
> I could not find any other bug reports here, but there are other people 
> mentioning the same issue:
> [https://webcache.googleusercontent.com/search?q=cache:LMqcymQiM-IJ:https://community.cloudera.com/t5/Support-Questions/InvokeHTTP-randomly-hangs/td-p/296184+&cd=1&hl=da&ct=clnk&gl=dk]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)