Great, thank you!

On Mon, Apr 15, 2019 at 16:28, Papadopoulos, Konstantinos <
konstantinos.papadopou...@iriworldwide.com>:

> Hi Fabian,
>
>
>
> I opened the following issue to track the improvement proposed:
>
> https://issues.apache.org/jira/browse/FLINK-12198
>
>
>
> Best,
>
> Konstantinos
>
>
>
> *From:* Papadopoulos, Konstantinos
> <konstantinos.papadopou...@iriworldwide.com>
> *Sent:* Monday, April 15, 2019 12:30 PM
> *To:* Fabian Hueske <fhue...@gmail.com>
> *Cc:* Rong Rong <walter...@gmail.com>; user <user@flink.apache.org>
> *Subject:* RE: Flink JDBC: Disable auto-commit mode
>
>
>
> Hi Fabian,
>
>
>
> Glad to hear that you agree with such an improvement. Of course, I can
> handle it.
>
>
>
> Best,
>
> Konstantinos
>
>
>
> *From:* Fabian Hueske <fhue...@gmail.com>
> *Sent:* Monday, April 15, 2019 11:56 AM
> *To:* Papadopoulos, Konstantinos <
> konstantinos.papadopou...@iriworldwide.com>
> *Cc:* Rong Rong <walter...@gmail.com>; user <user@flink.apache.org>
> *Subject:* Re: Flink JDBC: Disable auto-commit mode
>
>
>
> Hi Konstantinos,
>
>
>
> This sounds like a useful extension to me.
>
> Would you like to create a Jira issue and contribute the improvement?
>
>
>
> In the meantime, you can just fork the code of JDBCInputFormat and adjust
> it to your needs.
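A fork of this kind would essentially add a single call after the connection is opened. In rough pseudocode (field and method names are illustrative and may not match the actual JDBCInputFormat source):

```
// Pseudocode sketch of a forked JDBCInputFormat.openInputFormat()
dbConn = DriverManager.getConnection(dbURL, username, password);
dbConn.setAutoCommit(false);        // the added line: disable auto-commit
statement = dbConn.prepareStatement(queryTemplate, resultSetType, resultSetConcurrency);
statement.setFetchSize(fetchSize);  // together with this, PostgreSQL streams the result set
```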
>
>
>
> Best, Fabian
>
>
>
> On Mon, Apr 15, 2019 at 08:53, Papadopoulos, Konstantinos <
> konstantinos.papadopou...@iriworldwide.com>:
>
> Hi Rong,
>
>
>
> We have already tried setting the fetch size, with no success. According to
> the PostgreSQL documentation, we have to set both configuration parameters
> (i.e., disable auto-commit and set a fetch size) to achieve our purpose.
>
>
>
> Thanks,
>
> Konstantinos
>
>
>
> *From:* Rong Rong <walter...@gmail.com>
> *Sent:* Friday, April 12, 2019 6:50 PM
> *To:* Papadopoulos, Konstantinos <
> konstantinos.papadopou...@iriworldwide.com>
> *Cc:* user <user@flink.apache.org>
> *Subject:* Re: Flink JDBC: Disable auto-commit mode
>
>
>
> Hi Konstantinos,
>
>
>
> It seems that setting auto-commit is not directly possible in the current
> JDBCInputFormatBuilder.
>
> However, there is a way to specify the fetch size [1] for your DB
> round-trips; doesn't that resolve your issue?
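In outline, configuring the fetch size through the builder looks roughly like the following sketch (method names as they appear in Flink 1.8; URL, query, and row type are placeholders, so verify against your version's JDBCInputFormat):

```
// Sketch: building a JDBCInputFormat with a fetch size (placeholder values)
JDBCInputFormat.buildJDBCInputFormat()
    .setDrivername("org.postgresql.Driver")
    .setDBUrl("jdbc:postgresql://host:5432/db")   // placeholder URL
    .setQuery("SELECT id, value FROM big_table")  // placeholder query
    .setRowTypeInfo(rowTypeInfo)
    .setFetchSize(1000)  // rows per round-trip; PostgreSQL also needs auto-commit off
    .finish();
```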
>
>
>
> Similarly, JDBCOutputFormat uses a batching mode to stash rows before
> flushing them to the DB.
>
>
>
> --
>
> Rong
>
>
>
> [1]
> https://docs.oracle.com/cd/E18283_01/java.112/e16548/resltset.htm#insertedID4
>
>
>
> On Fri, Apr 12, 2019 at 6:23 AM Papadopoulos, Konstantinos <
> konstantinos.papadopou...@iriworldwide.com> wrote:
>
> Hi all,
>
> We are facing an issue when trying to integrate PostgreSQL with Flink
> JDBC. When a connection to a PostgreSQL database is established, it is in
> auto-commit mode, which means that each SQL statement is treated as a
> transaction and is automatically committed. This behavior causes problems
> (e.g., out-of-memory errors) when queries return large result sets. To
> bypass such issues, we must disable auto-commit mode. In a plain Java
> application, we do this by calling the setAutoCommit() method of the
> Connection object.
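For reference, the plain-JDBC pattern in question looks roughly like this sketch (the class and method names are placeholders for illustration; per the PostgreSQL driver's documented behavior, both settings are needed before it will stream results through a server-side cursor):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

/** Sketch: the two settings the PostgreSQL JDBC driver needs for cursor-based streaming. */
public final class PgStreaming {

    // Without BOTH of these, the PostgreSQL driver buffers the entire
    // ResultSet in memory, which causes the out-of-memory errors described above.
    public static void configureForStreaming(Connection conn, Statement stmt, int fetchSize) {
        try {
            conn.setAutoCommit(false);    // run the query inside a transaction
            stmt.setFetchSize(fetchSize); // rows fetched per round-trip, e.g. 1000
        } catch (SQLException e) {
            throw new IllegalStateException("Could not configure streaming", e);
        }
    }
}
```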
>
> So, my question is: how can we achieve this using Flink's JDBCInputFormat?
>
> Thanks in advance,
>
> Konstantinos
>
>
