[jira] [Updated] (FLINK-35274) Occasional failure issue with Flink CDC Db2 UT

2024-04-30 Thread Xin Gong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin Gong updated FLINK-35274:
-
Fix Version/s: 3.1.0

> Occasional failure issue with Flink CDC Db2 UT
> --
>
> Key: FLINK-35274
> URL: https://issues.apache.org/jira/browse/FLINK-35274
> Project: Flink
>  Issue Type: Bug
>Reporter: Xin Gong
>Priority: Critical
> Fix For: 3.1.0
>
>
> The Flink CDC Db2 UT fails occasionally. Because the Db2 redo log data's 
> tableId doesn't include the database name, the table schema is occasionally 
> not found when the task restarts after an exception. I will fix it by 
> supplementing the database name.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35274) Occasional failure issue with Flink CDC Db2 UT

2024-04-30 Thread Xin Gong (Jira)
Xin Gong created FLINK-35274:


 Summary: Occasional failure issue with Flink CDC Db2 UT
 Key: FLINK-35274
 URL: https://issues.apache.org/jira/browse/FLINK-35274
 Project: Flink
  Issue Type: Bug
Reporter: Xin Gong


The Flink CDC Db2 UT fails occasionally. Because the Db2 redo log data's 
tableId doesn't include the database name, the table schema is occasionally 
not found when the task restarts after an exception. I will fix it by 
supplementing the database name.
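The fix described above can be sketched as follows. This is a hypothetical illustration with made-up names, not the actual Flink CDC Db2 connector code: a redo-log tableId that lacks the database name is qualified before the schema lookup, so the lookup key stays stable across task restarts.

```java
// Hypothetical sketch of supplementing the database name on a Db2 tableId
// before a schema lookup; all names here are illustrative, not Flink CDC APIs.
final class Db2TableIdQualifier {
    // Db2 redo-log entries may carry only "SCHEMA.TABLE"; prepend the
    // database name so the id matches the key used when the schema was cached.
    static String qualify(String databaseName, String tableId) {
        long dots = tableId.chars().filter(c -> c == '.').count();
        if (dots >= 2) {
            return tableId; // already "DATABASE.SCHEMA.TABLE"
        }
        return databaseName + "." + tableId;
    }

    public static void main(String[] args) {
        System.out.println(qualify("TESTDB", "DB2INST1.PRODUCTS"));        // TESTDB.DB2INST1.PRODUCTS
        System.out.println(qualify("TESTDB", "TESTDB.DB2INST1.PRODUCTS")); // unchanged
    }
}
```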





[jira] [Comment Edited] (FLINK-35151) Flink mysql cdc will stuck when suspend binlog split and ChangeEventQueue is full

2024-04-17 Thread Xin Gong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838452#comment-17838452
 ] 

Xin Gong edited comment on FLINK-35151 at 4/18/24 4:46 AM:
---

I have an idea to address this issue by setting 

"currentTaskRunning || queue.remainingCapacity() == 0" in 
BinlogSplitReader#pollSplitRecords. [~Leonard] PTAL


was (Author: JIRAUSER292212):
I get an idea to address this issue by set 

"currentTaskRunning || queue.remainingCapacity() == 0" for 
BinlogSplitReader#pollSplitRecords. [~Leonard] PTAL

> Flink mysql cdc will  stuck when suspend binlog split and ChangeEventQueue is 
> full
> --
>
> Key: FLINK-35151
> URL: https://issues.apache.org/jira/browse/FLINK-35151
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
> Environment: I use the master branch to reproduce it.
>Reporter: Xin Gong
>Priority: Major
> Attachments: dumpstack.txt
>
>
> Flink MySQL CDC can get stuck when the binlog split is suspended and the ChangeEventQueue is 
> full.
> The reason is that binlog production is too fast. After MySqlSourceReader receives the binlog 
> split update event, MySqlSplitReader#suspendBinlogReaderIfNeed executes 
> BinlogSplitReader#stopBinlogReadTask, which sets currentTaskRunning to false.
> When currentReader is a BinlogSplitReader and MySqlSplitReader#pollSplitRecords returns a null 
> dataIt, closeBinlogReader is executed. closeBinlogReader calls 
> statefulTaskContext.getBinaryLogClient().disconnect(), which can deadlock, because 
> BinaryLogClient#connectLock is not released while MySqlStreamingChangeEventSource is blocked 
> adding an element to the full queue.
>  
> To reproduce, set the StatefulTaskContext#queue capacity to 1 and run the UT 
> NewlyAddedTableITCase#testRemoveAndAddNewTable.
>  
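The deadlock shape described in the issue can be reproduced with plain JDK types. This is a standalone illustration under assumed structure, not the actual Debezium/Flink CDC code: a producer holds a connect lock while `put()` blocks on a full queue, so a `disconnect()` that needs the same lock can never acquire it.

```java
// Illustrative sketch of the reported deadlock shape (BinaryLogClient and
// friends are stood in for by plain JDK types). A producer holds a connect
// lock while put() blocks on a full queue; disconnect() then waits forever
// for the same lock. The demo detects the stall with tryLock and cleans up.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

final class DeadlockShapeDemo {
    public static void main(String[] args) throws Exception {
        ReentrantLock connectLock = new ReentrantLock(); // stands in for BinaryLogClient#connectLock
        ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(1);
        queue.put("event-1"); // queue is now full

        Thread producer = new Thread(() -> {
            connectLock.lock(); // producer holds the lock...
            try {
                queue.put("event-2"); // ...and blocks here: the queue is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // demo cleanup path
            } finally {
                connectLock.unlock();
            }
        });
        producer.start();
        Thread.sleep(200); // let the producer reach the blocking put()

        // disconnect() would need connectLock; it cannot acquire it while the
        // producer is parked on the full queue -> deadlock in the real code.
        boolean acquired = connectLock.tryLock(300, TimeUnit.MILLISECONDS);
        System.out.println("disconnect could acquire lock: " + acquired); // false

        producer.interrupt(); // unblock the demo (the real fix drains the queue)
        producer.join();
    }
}
```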





[jira] [Comment Edited] (FLINK-35151) Flink mysql cdc will stuck when suspend binlog split and ChangeEventQueue is full

2024-04-17 Thread Xin Gong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838452#comment-17838452
 ] 

Xin Gong edited comment on FLINK-35151 at 4/18/24 4:44 AM:
---

I have an idea to address this issue by setting 

"currentTaskRunning || queue.remainingCapacity() == 0" in 
BinlogSplitReader#pollSplitRecords. [~Leonard] PTAL


was (Author: JIRAUSER292212):
I get an idea to address this issue by set 

currentTaskRunning || queue.remainingCapacity() == 0 for 
BinlogSplitReader#pollSplitRecords. [~Leonard] PTAL

> Flink mysql cdc will  stuck when suspend binlog split and ChangeEventQueue is 
> full
> --
>
> Key: FLINK-35151
> URL: https://issues.apache.org/jira/browse/FLINK-35151
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
> Environment: I use the master branch to reproduce it.
>Reporter: Xin Gong
>Priority: Major
> Attachments: dumpstack.txt
>
>
> Flink MySQL CDC can get stuck when the binlog split is suspended and the ChangeEventQueue is 
> full.
> The reason is that binlog production is too fast. After MySqlSourceReader receives the binlog 
> split update event, MySqlSplitReader#suspendBinlogReaderIfNeed executes 
> BinlogSplitReader#stopBinlogReadTask, which sets currentTaskRunning to false.
> When currentReader is a BinlogSplitReader and MySqlSplitReader#pollSplitRecords returns a null 
> dataIt, closeBinlogReader is executed. closeBinlogReader calls 
> statefulTaskContext.getBinaryLogClient().disconnect(), which can deadlock, because 
> BinaryLogClient#connectLock is not released while MySqlStreamingChangeEventSource is blocked 
> adding an element to the full queue.
>  
> To reproduce, set the StatefulTaskContext#queue capacity to 1 and run the UT 
> NewlyAddedTableITCase#testRemoveAndAddNewTable.
>  





[jira] [Comment Edited] (FLINK-35151) Flink mysql cdc will stuck when suspend binlog split and ChangeEventQueue is full

2024-04-17 Thread Xin Gong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838452#comment-17838452
 ] 

Xin Gong edited comment on FLINK-35151 at 4/18/24 4:43 AM:
---

I have an idea to address this issue by setting 

"currentTaskRunning || queue.remainingCapacity() == 0" in 
BinlogSplitReader#pollSplitRecords. [~Leonard] PTAL


was (Author: JIRAUSER292212):
I get an idea to address this issue by set 

currentTaskRunning || queue.remainingCapacity() == 0 for 
BinlogSplitReader#pollSplitRecords.

> Flink mysql cdc will  stuck when suspend binlog split and ChangeEventQueue is 
> full
> --
>
> Key: FLINK-35151
> URL: https://issues.apache.org/jira/browse/FLINK-35151
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
> Environment: I use the master branch to reproduce it.
>Reporter: Xin Gong
>Priority: Major
> Attachments: dumpstack.txt
>
>
> Flink MySQL CDC can get stuck when the binlog split is suspended and the ChangeEventQueue is 
> full.
> The reason is that binlog production is too fast. After MySqlSourceReader receives the binlog 
> split update event, MySqlSplitReader#suspendBinlogReaderIfNeed executes 
> BinlogSplitReader#stopBinlogReadTask, which sets currentTaskRunning to false.
> When currentReader is a BinlogSplitReader and MySqlSplitReader#pollSplitRecords returns a null 
> dataIt, closeBinlogReader is executed. closeBinlogReader calls 
> statefulTaskContext.getBinaryLogClient().disconnect(), which can deadlock, because 
> BinaryLogClient#connectLock is not released while MySqlStreamingChangeEventSource is blocked 
> adding an element to the full queue.
>  
> To reproduce, set the StatefulTaskContext#queue capacity to 1 and run the UT 
> NewlyAddedTableITCase#testRemoveAndAddNewTable.
>  





[jira] [Commented] (FLINK-35151) Flink mysql cdc will stuck when suspend binlog split and ChangeEventQueue is full

2024-04-17 Thread Xin Gong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838452#comment-17838452
 ] 

Xin Gong commented on FLINK-35151:
--

I have an idea to address this issue by setting 

"currentTaskRunning || queue.remainingCapacity() == 0" in 
BinlogSplitReader#pollSplitRecords.
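The proposed guard can be shown as a tiny standalone sketch (this mirrors the condition from the comment, not the actual MySqlSplitReader code): polling continues not only while the binlog task is running, but also while the queue is full, so the producer blocked on `queue.put()` can drain before the reader disconnects the binlog client.

```java
// Sketch of the guard proposed in the comment: keep polling not only while
// the binlog task is running, but also while the queue is full, so a producer
// blocked on queue.put() while holding connectLock can make progress before
// disconnect() is called. Standalone illustration, not Flink CDC source.
final class PollGuardSketch {
    static boolean shouldKeepPolling(boolean currentTaskRunning, int remainingCapacity) {
        return currentTaskRunning || remainingCapacity == 0;
    }

    public static void main(String[] args) {
        // Task suspended but queue full: keep polling so the producer unblocks.
        System.out.println(shouldKeepPolling(false, 0)); // true
        // Task suspended and queue has room: safe to stop and close the reader.
        System.out.println(shouldKeepPolling(false, 3)); // false
    }
}
```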

> Flink mysql cdc will  stuck when suspend binlog split and ChangeEventQueue is 
> full
> --
>
> Key: FLINK-35151
> URL: https://issues.apache.org/jira/browse/FLINK-35151
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
> Environment: I use the master branch to reproduce it.
>Reporter: Xin Gong
>Priority: Major
> Attachments: dumpstack.txt
>
>
> Flink MySQL CDC can get stuck when the binlog split is suspended and the ChangeEventQueue is 
> full.
> The reason is that binlog production is too fast. After MySqlSourceReader receives the binlog 
> split update event, MySqlSplitReader#suspendBinlogReaderIfNeed executes 
> BinlogSplitReader#stopBinlogReadTask, which sets currentTaskRunning to false.
> When currentReader is a BinlogSplitReader and MySqlSplitReader#pollSplitRecords returns a null 
> dataIt, closeBinlogReader is executed. closeBinlogReader calls 
> statefulTaskContext.getBinaryLogClient().disconnect(), which can deadlock, because 
> BinaryLogClient#connectLock is not released while MySqlStreamingChangeEventSource is blocked 
> adding an element to the full queue.
>  
> To reproduce, set the StatefulTaskContext#queue capacity to 1 and run the UT 
> NewlyAddedTableITCase#testRemoveAndAddNewTable.
>  





[jira] [Updated] (FLINK-35151) Flink mysql cdc will stuck when suspend binlog split and ChangeEventQueue is full

2024-04-17 Thread Xin Gong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin Gong updated FLINK-35151:
-
Description: 
Flink MySQL CDC can get stuck when the binlog split is suspended and the ChangeEventQueue is 
full.

The reason is that binlog production is too fast. After MySqlSourceReader receives the binlog 
split update event, MySqlSplitReader#suspendBinlogReaderIfNeed executes 
BinlogSplitReader#stopBinlogReadTask, which sets currentTaskRunning to false.

When currentReader is a BinlogSplitReader and MySqlSplitReader#pollSplitRecords returns a null 
dataIt, closeBinlogReader is executed. closeBinlogReader calls 
statefulTaskContext.getBinaryLogClient().disconnect(), which can deadlock, because 
BinaryLogClient#connectLock is not released while MySqlStreamingChangeEventSource is blocked 
adding an element to the full queue.

 

To reproduce, set the StatefulTaskContext#queue capacity to 1 and run the UT 
NewlyAddedTableITCase#testRemoveAndAddNewTable.

 

  was:
Flink mysql cdc will  stuck when suspend binlog split and ChangeEventQueue is 
full.

Reason is that producing binlog is too fast.  
MySqlSplitReader#suspendBinlogReaderIfNeed will execute 
BinlogSplitReader#stopBinlogReadTask to set 

currentTaskRunning to be false after MysqSourceReader receives binlog split 
update event.

MySqlSplitReader#pollSplitRecords is executed and 

dataIt is null to execute closeBinlogReader when currentReader is 
BinlogSplitReader. closeBinlogReader will execute 
statefulTaskContext.getBinaryLogClient().disconnect(), it could dead lock. 
Because BinaryLogClient#connectLock is not release  when 
MySqlStreamingChangeEventSource add element to full queue.

 


> Flink mysql cdc will  stuck when suspend binlog split and ChangeEventQueue is 
> full
> --
>
> Key: FLINK-35151
> URL: https://issues.apache.org/jira/browse/FLINK-35151
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
> Environment: I use the master branch to reproduce it.
>Reporter: Xin Gong
>Priority: Major
> Attachments: dumpstack.txt
>
>
> Flink MySQL CDC can get stuck when the binlog split is suspended and the ChangeEventQueue is 
> full.
> The reason is that binlog production is too fast. After MySqlSourceReader receives the binlog 
> split update event, MySqlSplitReader#suspendBinlogReaderIfNeed executes 
> BinlogSplitReader#stopBinlogReadTask, which sets currentTaskRunning to false.
> When currentReader is a BinlogSplitReader and MySqlSplitReader#pollSplitRecords returns a null 
> dataIt, closeBinlogReader is executed. closeBinlogReader calls 
> statefulTaskContext.getBinaryLogClient().disconnect(), which can deadlock, because 
> BinaryLogClient#connectLock is not released while MySqlStreamingChangeEventSource is blocked 
> adding an element to the full queue.
>  
> To reproduce, set the StatefulTaskContext#queue capacity to 1 and run the UT 
> NewlyAddedTableITCase#testRemoveAndAddNewTable.
>  





[jira] [Updated] (FLINK-35151) Flink mysql cdc will stuck when suspend binlog split and ChangeEventQueue is full

2024-04-17 Thread Xin Gong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin Gong updated FLINK-35151:
-
Description: 
Flink MySQL CDC can get stuck when the binlog split is suspended and the ChangeEventQueue is 
full.

The reason is that binlog production is too fast. After MySqlSourceReader receives the binlog 
split update event, MySqlSplitReader#suspendBinlogReaderIfNeed executes 
BinlogSplitReader#stopBinlogReadTask, which sets currentTaskRunning to false.

When currentReader is a BinlogSplitReader and MySqlSplitReader#pollSplitRecords returns a null 
dataIt, closeBinlogReader is executed. closeBinlogReader calls 
statefulTaskContext.getBinaryLogClient().disconnect(), which can deadlock, because 
BinaryLogClient#connectLock is not released while MySqlStreamingChangeEventSource is blocked 
adding an element to the full queue.

 

  was:
Flink mysql cdc will  stuck when suspend binlog split and ChangeEventQueue is 
full.

 


> Flink mysql cdc will  stuck when suspend binlog split and ChangeEventQueue is 
> full
> --
>
> Key: FLINK-35151
> URL: https://issues.apache.org/jira/browse/FLINK-35151
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
> Environment: I use the master branch to reproduce it.
> The reason is that binlog production is too fast. After MySqlSourceReader receives the binlog 
> split update event, MySqlSplitReader#suspendBinlogReaderIfNeed executes 
> BinlogSplitReader#stopBinlogReadTask, which sets currentTaskRunning to false.
> When currentReader is a BinlogSplitReader and MySqlSplitReader#pollSplitRecords returns a null 
> dataIt, closeBinlogReader is executed. closeBinlogReader calls 
> statefulTaskContext.getBinaryLogClient().disconnect(), which can deadlock, because 
> BinaryLogClient#connectLock is not released while MySqlStreamingChangeEventSource is blocked 
> adding an element to the full queue.
>Reporter: Xin Gong
>Priority: Major
> Attachments: dumpstack.txt
>
>
> Flink MySQL CDC can get stuck when the binlog split is suspended and the ChangeEventQueue is 
> full.
> The reason is that binlog production is too fast. After MySqlSourceReader receives the binlog 
> split update event, MySqlSplitReader#suspendBinlogReaderIfNeed executes 
> BinlogSplitReader#stopBinlogReadTask, which sets currentTaskRunning to false.
> When currentReader is a BinlogSplitReader and MySqlSplitReader#pollSplitRecords returns a null 
> dataIt, closeBinlogReader is executed. closeBinlogReader calls 
> statefulTaskContext.getBinaryLogClient().disconnect(), which can deadlock, because 
> BinaryLogClient#connectLock is not released while MySqlStreamingChangeEventSource is blocked 
> adding an element to the full queue.
>  





[jira] [Created] (FLINK-35151) Flink mysql cdc will stuck when suspend binlog split and ChangeEventQueue is full

2024-04-17 Thread Xin Gong (Jira)
Xin Gong created FLINK-35151:


 Summary: Flink mysql cdc will  stuck when suspend binlog split and 
ChangeEventQueue is full
 Key: FLINK-35151
 URL: https://issues.apache.org/jira/browse/FLINK-35151
 Project: Flink
  Issue Type: Bug
  Components: Flink CDC
 Environment: I use the master branch to reproduce it.

The reason is that binlog production is too fast. After MySqlSourceReader receives the binlog 
split update event, MySqlSplitReader#suspendBinlogReaderIfNeed executes 
BinlogSplitReader#stopBinlogReadTask, which sets currentTaskRunning to false.

When currentReader is a BinlogSplitReader and MySqlSplitReader#pollSplitRecords returns a null 
dataIt, closeBinlogReader is executed. closeBinlogReader calls 
statefulTaskContext.getBinaryLogClient().disconnect(), which can deadlock, because 
BinaryLogClient#connectLock is not released while MySqlStreamingChangeEventSource is blocked 
adding an element to the full queue.
Reporter: Xin Gong
 Attachments: dumpstack.txt

Flink MySQL CDC can get stuck when the binlog split is suspended and the ChangeEventQueue is 
full.

 





[jira] [Updated] (FLINK-35151) Flink mysql cdc will stuck when suspend binlog split and ChangeEventQueue is full

2024-04-17 Thread Xin Gong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin Gong updated FLINK-35151:
-
Environment: I use the master branch to reproduce it.  (was: I use master branch 
reproduce it.

Reason is that producing binlog is too fast.  
MySqlSplitReader#suspendBinlogReaderIfNeed will execute 
BinlogSplitReader#stopBinlogReadTask to set 

currentTaskRunning to be false after MysqSourceReader receives binlog split 
update event.

MySqlSplitReader#pollSplitRecords is executed and 

dataIt is null to execute closeBinlogReader when currentReader is 
BinlogSplitReader. closeBinlogReader will execute 
statefulTaskContext.getBinaryLogClient().disconnect(), it could dead lock. 
Because BinaryLogClient#connectLock is not release  when 
MySqlStreamingChangeEventSource add element to full queue.)

> Flink mysql cdc will  stuck when suspend binlog split and ChangeEventQueue is 
> full
> --
>
> Key: FLINK-35151
> URL: https://issues.apache.org/jira/browse/FLINK-35151
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
> Environment: I use the master branch to reproduce it.
>Reporter: Xin Gong
>Priority: Major
> Attachments: dumpstack.txt
>
>
> Flink MySQL CDC can get stuck when the binlog split is suspended and the ChangeEventQueue is 
> full.
> The reason is that binlog production is too fast. After MySqlSourceReader receives the binlog 
> split update event, MySqlSplitReader#suspendBinlogReaderIfNeed executes 
> BinlogSplitReader#stopBinlogReadTask, which sets currentTaskRunning to false.
> When currentReader is a BinlogSplitReader and MySqlSplitReader#pollSplitRecords returns a null 
> dataIt, closeBinlogReader is executed. closeBinlogReader calls 
> statefulTaskContext.getBinaryLogClient().disconnect(), which can deadlock, because 
> BinaryLogClient#connectLock is not released while MySqlStreamingChangeEventSource is blocked 
> adding an element to the full queue.
>  





[jira] [Commented] (FLINK-34908) [Feature][Pipeline] Mysql pipeline to doris and starrocks will lost precision for timestamp

2024-04-06 Thread Xin Gong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834589#comment-17834589
 ] 

Xin Gong commented on FLINK-34908:
--

StarRocks only supports second precision 
(https://docs.starrocks.io/zh/docs/sql-reference/data-types/date-types/DATETIME/), 
so I will only handle Doris.
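The precision loss discussed here can be sketched with plain `java.time` types (assumed standalone example, not the actual Doris/StarRocks sink code): formatting with a fixed second-precision pattern drops the fractional part, while passing the `LocalDateTime` through preserves it.

```java
// Sketch: a fixed "yyyy-MM-dd HH:mm:ss" pattern truncates fractional seconds,
// while returning the LocalDateTime itself keeps full precision for the sink
// to serialize. Standalone illustration, not the actual connector code.
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

final class TimestampPrecisionSketch {
    static String lossyFormat(LocalDateTime ts) {
        return ts.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
    }

    static LocalDateTime lossless(LocalDateTime ts) {
        return ts; // let the sink decide how many fractional digits to keep
    }

    public static void main(String[] args) {
        LocalDateTime ts = LocalDateTime.of(2024, 3, 21, 12, 0, 0, 123_456_000);
        System.out.println(lossyFormat(ts)); // 2024-03-21 12:00:00  (precision lost)
        System.out.println(lossless(ts));    // 2024-03-21T12:00:00.123456
    }
}
```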

> [Feature][Pipeline] Mysql pipeline to doris and starrocks will lost precision 
> for timestamp
> ---
>
> Key: FLINK-34908
> URL: https://issues.apache.org/jira/browse/FLINK-34908
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xin Gong
>Assignee: Xin Gong
>Priority: Minor
> Fix For: cdc-3.1.0
>
>
> The Flink CDC pipeline decides the timestamp zone from the pipeline config. I found 
> that mysql2doris and mysql2starrocks use the fixed datetime format 
> yyyy-MM-dd HH:mm:ss, which causes a loss of datetime precision. I think we 
> shouldn't set a fixed datetime format and should just return a LocalDateTime object.
>  
>  
>  





[jira] [Updated] (FLINK-34990) [feature][cdc-connector][oracle] Oracle cdc support newly add table

2024-04-02 Thread Xin Gong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin Gong updated FLINK-34990:
-
External issue URL:   (was: https://github.com/apache/flink-cdc/pull/3203)

> [feature][cdc-connector][oracle] Oracle cdc support newly add table
> ---
>
> Key: FLINK-34990
> URL: https://issues.apache.org/jira/browse/FLINK-34990
> Project: Flink
>  Issue Type: New Feature
>  Components: Flink CDC
>Reporter: Xin Gong
>Priority: Minor
> Fix For: cdc-3.1.0
>
>
> [feature][cdc-connector][oracle] Oracle cdc support newly add table





[jira] [Updated] (FLINK-34990) [feature][cdc-connector][oracle] Oracle cdc support newly add table

2024-04-02 Thread Xin Gong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin Gong updated FLINK-34990:
-
External issue URL: https://github.com/apache/flink-cdc/pull/3203

> [feature][cdc-connector][oracle] Oracle cdc support newly add table
> ---
>
> Key: FLINK-34990
> URL: https://issues.apache.org/jira/browse/FLINK-34990
> Project: Flink
>  Issue Type: New Feature
>  Components: Flink CDC
>Reporter: Xin Gong
>Priority: Minor
> Fix For: cdc-3.1.0
>
>
> [feature][cdc-connector][oracle] Oracle cdc support newly add table





[jira] [Commented] (FLINK-34990) [feature][cdc-connector][oracle] Oracle cdc support newly add table

2024-04-02 Thread Xin Gong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17833221#comment-17833221
 ] 

Xin Gong commented on FLINK-34990:
--

I will submit the PR. cc [~Leonard] 

> [feature][cdc-connector][oracle] Oracle cdc support newly add table
> ---
>
> Key: FLINK-34990
> URL: https://issues.apache.org/jira/browse/FLINK-34990
> Project: Flink
>  Issue Type: New Feature
>  Components: Flink CDC
>Reporter: Xin Gong
>Priority: Minor
> Fix For: cdc-3.1.0
>
>
> [feature][cdc-connector][oracle] Oracle cdc support newly add table





[jira] [Created] (FLINK-34990) [feature][cdc-connector][oracle] Oracle cdc support newly add table

2024-04-02 Thread Xin Gong (Jira)
Xin Gong created FLINK-34990:


 Summary: [feature][cdc-connector][oracle] Oracle cdc support newly 
add table
 Key: FLINK-34990
 URL: https://issues.apache.org/jira/browse/FLINK-34990
 Project: Flink
  Issue Type: New Feature
  Components: Flink CDC
Reporter: Xin Gong
 Fix For: cdc-3.1.0


[feature][cdc-connector][oracle] Oracle cdc support newly add table





[jira] [Commented] (FLINK-34908) [Feature][Pipeline] Mysql pipeline to doris and starrocks will lost precision for timestamp

2024-03-21 Thread Xin Gong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829495#comment-17829495
 ] 

Xin Gong commented on FLINK-34908:
--

I will submit a PR. cc [~Leonard] 
 

> [Feature][Pipeline] Mysql pipeline to doris and starrocks will lost precision 
> for timestamp
> ---
>
> Key: FLINK-34908
> URL: https://issues.apache.org/jira/browse/FLINK-34908
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xin Gong
>Priority: Minor
> Fix For: cdc-3.1.0
>
>
> The Flink CDC pipeline decides the timestamp zone from the pipeline config. I found 
> that mysql2doris and mysql2starrocks use the fixed datetime format 
> yyyy-MM-dd HH:mm:ss, which causes a loss of datetime precision. I think we 
> shouldn't set a fixed datetime format and should just return a LocalDateTime object.
>  
>  
>  





[jira] [Updated] (FLINK-34908) [Feature][Pipeline] Mysql pipeline to doris and starrocks will lost precision for timestamp

2024-03-21 Thread Xin Gong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin Gong updated FLINK-34908:
-
Summary: [Feature][Pipeline] Mysql pipeline to doris and starrocks will 
lost precision for timestamp  (was: mysql pipeline to doris and starrocks will 
lost precision for timestamp)

> [Feature][Pipeline] Mysql pipeline to doris and starrocks will lost precision 
> for timestamp
> ---
>
> Key: FLINK-34908
> URL: https://issues.apache.org/jira/browse/FLINK-34908
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xin Gong
>Priority: Minor
> Fix For: cdc-3.1.0
>
>
> The Flink CDC pipeline decides the timestamp zone from the pipeline config. I found 
> that mysql2doris and mysql2starrocks use the fixed datetime format 
> yyyy-MM-dd HH:mm:ss, which causes a loss of datetime precision. I think we 
> shouldn't set a fixed datetime format and should just return a LocalDateTime object.
>  
>  
>  





[jira] [Updated] (FLINK-34908) mysql pipeline to doris and starrocks will lost precision for timestamp

2024-03-21 Thread Xin Gong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin Gong updated FLINK-34908:
-
Description: 
flink cdc pipeline will decide timestamp zone by config of pipeline. I found 
mysql2doris and mysql2starracks will specific datetime format

-MM-dd HH:mm:ss, it will cause lost datatime precision. I think we should't 
set fixed datetime format, just return LocalDateTime object.
 
 
 

  was:
flink cdc pipeline will decide timestamp zone by config of pipeline. I found 
mysql2doris and mysql2starracks will specific datetime format

-MM-dd HH:mm:ss, it will cause lost datatime precision. I think we don't 
set fixed datetime format, just return LocalDateTime object.
 
 


> mysql pipeline to doris and starrocks will lost precision for timestamp
> ---
>
> Key: FLINK-34908
> URL: https://issues.apache.org/jira/browse/FLINK-34908
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xin Gong
>Priority: Minor
> Fix For: cdc-3.1.0
>
>
> The Flink CDC pipeline decides the timestamp zone from the pipeline config. I found 
> that mysql2doris and mysql2starrocks use the fixed datetime format 
> yyyy-MM-dd HH:mm:ss, which causes a loss of datetime precision. I think we 
> shouldn't set a fixed datetime format and should just return a LocalDateTime object.
>  
>  
>  





[jira] [Updated] (FLINK-34908) mysql pipeline to doris and starrocks will lost precision for timestamp

2024-03-21 Thread Xin Gong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin Gong updated FLINK-34908:
-
Description: 
flink cdc pipeline will decide timestamp zone by config of pipeline. I found 
mysql2doris and mysql2starracks will specific datetime format

-MM-dd HH:mm:ss, it will cause lost datatime precision. I think we don't 
set fixed datetime format, just return LocalDateTime object.
 
 

  was:
flink cdc pipeline will decide timestamp zone by config of pipeline. I found 
mysql2doris and mysql2starracks will specific datetime format

-MM-dd HH:mm:ss, it will cause lost datatime precision. I think we don't 
specific datetime format, just return LocalDateTime object.
 


> mysql pipeline to doris and starrocks will lost precision for timestamp
> ---
>
> Key: FLINK-34908
> URL: https://issues.apache.org/jira/browse/FLINK-34908
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xin Gong
>Priority: Minor
> Fix For: cdc-3.1.0
>
>
> The Flink CDC pipeline decides the timestamp zone from the pipeline config. I found 
> that mysql2doris and mysql2starrocks use the fixed datetime format 
> yyyy-MM-dd HH:mm:ss, which causes a loss of datetime precision. I think we 
> shouldn't set a fixed datetime format and should just return a LocalDateTime object.
>  
>  





[jira] [Updated] (FLINK-34908) mysql pipeline to doris and starrocks will lost precision for timestamp

2024-03-21 Thread Xin Gong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin Gong updated FLINK-34908:
-
Description: 
flink cdc pipeline will decide timestamp zone by config of pipeline. I found 
mysql2doris and mysql2starracks will specific datetime format

-MM-dd HH:mm:ss, it will cause lost datatime precision. I think we don't 
specific datetime format, just return LocalDateTime object.
 

  was:
flink cdc pipeline will decide timestamp zone by config of pipeline. I found 
mysql2doris and mysql2starracks will specific datetime format

-MM-dd HH:mm:ss, it will cause lost datatime precision. I think we can 
don't specific datetime format, just return LocalDateTime object.


> mysql pipeline to doris and starrocks will lost precision for timestamp
> ---
>
> Key: FLINK-34908
> URL: https://issues.apache.org/jira/browse/FLINK-34908
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xin Gong
>Priority: Minor
> Fix For: cdc-3.1.0
>
>
> The Flink CDC pipeline decides the timestamp zone from the pipeline config. I found 
> that mysql2doris and mysql2starrocks use the fixed datetime format 
> yyyy-MM-dd HH:mm:ss, which causes a loss of datetime precision. I think we 
> shouldn't set a fixed datetime format and should just return a LocalDateTime object.
>  





[jira] [Created] (FLINK-34908) mysql pipeline to doris and starrocks will lost precision for timestamp

2024-03-21 Thread Xin Gong (Jira)
Xin Gong created FLINK-34908:


 Summary: mysql pipeline to doris and starrocks will lost precision 
for timestamp
 Key: FLINK-34908
 URL: https://issues.apache.org/jira/browse/FLINK-34908
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Reporter: Xin Gong
 Fix For: cdc-3.1.0


The Flink CDC pipeline decides the timestamp zone from the pipeline config. I found 
that mysql2doris and mysql2starrocks use the fixed datetime format 
yyyy-MM-dd HH:mm:ss, which causes a loss of datetime precision. I think we 
shouldn't set a fixed datetime format and should just return a LocalDateTime object.





[jira] [Commented] (FLINK-34715) Fix mysql ut about closing BinlogSplitReader

2024-03-18 Thread Xin Gong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17827885#comment-17827885
 ] 

Xin Gong commented on FLINK-34715:
--

[~Leonard] PTAL

> Fix mysql ut about closing BinlogSplitReader
> 
>
> Key: FLINK-34715
> URL: https://issues.apache.org/jira/browse/FLINK-34715
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xin Gong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>
> BinlogSplitReaderTest#readBinlogSplitsFromSnapshotSplits should verify that the binlog 
> reader is closed after binlogReader.close(). But the code always tests that the snapshot 
> split reader is closed.
> {code:java}
> binlogReader.close();
> assertNotNull(snapshotSplitReader.getExecutorService());
> assertTrue(snapshotSplitReader.getExecutorService().isTerminated());{code}
> We should change the code to 
> {code:java}
> binlogReader.close();
> assertNotNull(binlogReader.getExecutorService());
> assertTrue(binlogReader.getExecutorService().isTerminated()); {code}
>  





[jira] [Updated] (FLINK-34715) Fix mysql ut about closing BinlogSplitReader

2024-03-18 Thread Xin Gong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin Gong updated FLINK-34715:
-
External issue URL: https://github.com/apache/flink-cdc/pull/3161

> Fix mysql ut about closing BinlogSplitReader
> 
>
> Key: FLINK-34715
> URL: https://issues.apache.org/jira/browse/FLINK-34715
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xin Gong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>
> BinlogSplitReaderTest#readBinlogSplitsFromSnapshotSplits should verify that the binlog 
> reader is closed after binlogReader.close(). But the code always tests that the snapshot 
> split reader is closed.
> {code:java}
> binlogReader.close();
> assertNotNull(snapshotSplitReader.getExecutorService());
> assertTrue(snapshotSplitReader.getExecutorService().isTerminated());{code}
> We should change the code to 
> {code:java}
> binlogReader.close();
> assertNotNull(binlogReader.getExecutorService());
> assertTrue(binlogReader.getExecutorService().isTerminated()); {code}
>  





[jira] [Updated] (FLINK-34715) Fix mysql ut about closing BinlogSplitReader

2024-03-18 Thread Xin Gong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin Gong updated FLINK-34715:
-
Description: 
BinlogSplitReaderTest#readBinlogSplitsFromSnapshotSplits should test binlog 
reader is closed after binlog reader close. But code always test snapshot split 
reader is closed.
{code:java}
binlogReader.close();
assertNotNull(snapshotSplitReader.getExecutorService());
assertTrue(snapshotSplitReader.getExecutorService().isTerminated());{code}
We shoud change code to 
{code:java}
binlogReader.close();
assertNotNull(binlogReader.getExecutorService());
assertTrue(binlogReader.getExecutorService().isTerminated()); {code}
 

  was:
BinlogSplitReaderTest#readBinlogSplitsFromSnapshotSplits should test binlog 
reader is closed after binlog reader close. But code always test snapshot split 
reader is closed.

```java

binlogReader.close();

assertNotNull(snapshotSplitReader.getExecutorService());
assertTrue(snapshotSplitReader.getExecutorService().isTerminated());

```

We shoud change code to 

```java

binlogReader.close();

assertNotNull(binlogReader.getExecutorService());
assertTrue(binlogReader.getExecutorService().isTerminated());

```


> Fix mysql ut about closing BinlogSplitReader
> 
>
> Key: FLINK-34715
> URL: https://issues.apache.org/jira/browse/FLINK-34715
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xin Gong
>Priority: Minor
> Fix For: cdc-3.1.0
>
>
> BinlogSplitReaderTest#readBinlogSplitsFromSnapshotSplits should verify that the binlog 
> reader is closed after binlogReader.close(). But the code always tests that the snapshot 
> split reader is closed.
> {code:java}
> binlogReader.close();
> assertNotNull(snapshotSplitReader.getExecutorService());
> assertTrue(snapshotSplitReader.getExecutorService().isTerminated());{code}
> We should change the code to 
> {code:java}
> binlogReader.close();
> assertNotNull(binlogReader.getExecutorService());
> assertTrue(binlogReader.getExecutorService().isTerminated()); {code}
>  





[jira] [Created] (FLINK-34715) Fix mysql ut about closing BinlogSplitReader

2024-03-18 Thread Xin Gong (Jira)
Xin Gong created FLINK-34715:


 Summary: Fix mysql ut about closing BinlogSplitReader
 Key: FLINK-34715
 URL: https://issues.apache.org/jira/browse/FLINK-34715
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Reporter: Xin Gong
 Fix For: cdc-3.1.0


BinlogSplitReaderTest#readBinlogSplitsFromSnapshotSplits should verify that the binlog 
reader is closed after binlogReader.close(). But the code always tests that the snapshot split 
reader is closed.

```java

binlogReader.close();

assertNotNull(snapshotSplitReader.getExecutorService());
assertTrue(snapshotSplitReader.getExecutorService().isTerminated());

```

We should change the code to 

```java

binlogReader.close();

assertNotNull(binlogReader.getExecutorService());
assertTrue(binlogReader.getExecutorService().isTerminated());

```


