MrCong233 opened a new issue, #22026: URL: https://github.com/apache/shardingsphere/issues/22026
## Bug Report

I developed a Java program to test for the read-half-commits problem. The program logic is: initialize the tables first (create the tables and insert data, set autoCommit = false), then create write and read transactions in a loop. I find that sometimes the JDBC connection created first (autoCommit disabled) and the write transaction created later are automatically merged into the same explicit transaction.

### Which version of ShardingSphere did you use?

ShardingSphere 5.2.0

### Which project did you use? ShardingSphere-JDBC or ShardingSphere-Proxy?

ShardingSphere-Proxy

### Expected behavior

A JDBC connection with setAutoCommit(true) should not be merged into an explicit transaction, and should not be affected by other JDBC connections.

### Actual behavior

According to openGauss' pg_log, the JDBC connection with setAutoCommit(true) (which issues the "drop table", "create table", and "insert into" statements shown below) is merged into the same explicit transaction as a JDBC connection with setAutoCommit(false) (which issues "update account_0", "select balance", and "update account_1").

```shell
2022-11-08 19:35:04.127 tpccuser tpccdb 20.20.20.76 281066139857840 0[0:0#0] 0 [XACT] LOG: start transaction succ. In Node dn0, trans state: START -> INPROGR
2022-11-08 19:35:04.128 tpccuser tpccdb 20.20.20.76 281066139857840 0[0:0#0] 0 [BACKEND] LOG: execute S_1: START TRANSACTION
2022-11-08 19:35:04.202 tpccuser tpccdb 20.20.20.76 281066139857840 0[0:0#0] 0 [BACKEND] LOG: execute <unnamed>: drop table if exists account_0
2022-11-08 19:35:04.224 tpccuser tpccdb 20.20.20.76 281066139857840 0[0:0#0] 0 [BACKEND] LOG: execute <unnamed>: drop table if exists account_1
2022-11-08 19:35:04.265 tpccuser tpccdb 20.20.20.76 281066139857840 0[0:0#0] 0 [BACKEND] LOG: execute <unnamed>: create table account_0(id int, balance float, transaction_id int)
2022-11-08 19:35:04.281 tpccuser tpccdb 20.20.20.76 281066139857840 0[0:0#0] 0 [BACKEND] LOG: execute <unnamed>: create table account_1(id int, balance float, transaction_id int)
2022-11-08 19:35:04.728 tpccuser tpccdb 20.20.20.76 281066139857840 0[0:0#0] 0 [BACKEND] LOG: Bypass execute fetch from S_14/C_15: insert into account_1(id, transaction_id, balance) values (0, 1, 0)
2022-11-08 19:35:05.214 tpccuser tpccdb 20.20.20.76 281066139857840 0[0:0#0] 0 [BACKEND] LOG: execute S_4/C_20: update account_0 set balance=balance-1 where transaction_id=2
2022-11-08 19:35:05.834 tpccuser tpccdb 20.20.20.76 281066139857840 0[0:0#0] 5348024557504827 [BACKEND] LOG: execute S_21/C_22: select balance as a1667907490200 from account_1 where transaction_id = 1
2022-11-08 19:35:05.881 tpccuser tpccdb 20.20.20.76 281066139857840 0[0:0#0] 0 [BACKEND] LOG: execute S_8/C_23: update account_1 set balance=balance+1 where transaction_id=1
2022-11-08 19:35:05.905 tpccuser tpccdb 20.20.20.76 281066139857840 0[0:0#0] 0 [BACKEND] LOG: execute <unnamed>: PREPARE TRANSACTION '1096044365_MjAuMjAuMjAuNzYudG0xNjY3OTA3MzA1MTQzMDMzMjA=_MjAuMjAuMjAuNzYudG02NjM5'
2022-11-08 19:35:05.947 tpccuser tpccdb 20.20.20.76 281066139857840 0[0:0#0] 0 [BACKEND] LOG: execute <unnamed>: COMMIT PREPARED '1096044365_MjAuMjAuMjAuNzYudG0xNjY3OTA3MzA1MTQzMDMzMjA=_MjAuMjAuMjAuNzYudG02NjM5'
```

### Reason analyze (If you can)

The JDBC connection created first (autoCommit disabled) and the JDBC connection created later (autoCommit enabled) are automatically merged into the same explicit transaction.
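To make the suspected interleaving concrete, the client-side pattern looks roughly like the sketch below. This is a minimal, hypothetical excerpt rather than the actual test program: the proxy URL, port, credentials, and logical table name are placeholders, and the complete reproduction code is in the gitee repository linked under the reproduction steps.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReadHalfCommitRepro {

    // Hypothetical ShardingSphere-Proxy endpoint and credentials; adjust to your setup.
    private static final String URL =
            "jdbc:opengauss://127.0.0.1:3307/tpcc_glt_db?useSSL=false";

    public static void main(String[] args) throws Exception {
        // Connection A is created first and runs an explicit (XA) transaction.
        Connection writer = DriverManager.getConnection(URL, "sharding", "sharding");
        writer.setAutoCommit(false);

        // Connection B is created later with the default autoCommit(true); its DDL and
        // insert are expected to run as independent implicit transactions.
        Connection init = DriverManager.getConnection(URL, "sharding", "sharding");
        try (Statement ddl = init.createStatement()) {
            ddl.execute("drop table if exists account");
            ddl.execute("create table account(id int, balance float, transaction_id int)");
            ddl.execute("insert into account(id, transaction_id, balance) values (0, 1, 0)");
        }

        // Statements of the explicit transaction on connection A.
        try (Statement tx = writer.createStatement()) {
            tx.executeUpdate("update account set balance = balance - 1 where transaction_id = 2");
            try (ResultSet rs = tx.executeQuery(
                    "select balance from account where transaction_id = 1")) {
                while (rs.next()) {
                    System.out.println("balance = " + rs.getDouble("balance"));
                }
            }
            tx.executeUpdate("update account set balance = balance + 1 where transaction_id = 1");
        }
        writer.commit();

        init.close();
        writer.close();
    }
}
```

In the pg_log above, the "drop table" / "create table" / "insert into" statements from the autoCommit connection show up between the `START TRANSACTION` and the `PREPARE TRANSACTION` / `COMMIT PREPARED` of the explicit XA transaction, which suggests both connections' statements were sent on the same backend connection.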
### Steps to reproduce the behavior, such as: SQL to execute, sharding rule configuration, when exception occur etc.

This program can occasionally reproduce the problem; a read-consistency exception occurs after the table is created: https://gitee.com/congzhou2603/shardingclient

### Example codes for reproduce this issue (such as a github link).

```yaml
rules:
  - !AUTHORITY
    users:
      - root@%:root
      - sharding@:sharding
      - zhoucong@:zhoucong
    provider:
      type: ALL_PRIVILEGES_PERMITTED
  - !TRANSACTION
    defaultType: XA
    providerType: Atomikos
    #providerType: Narayana

props:
  max-connections-size-per-query: 1
  proxy-backend-query-fetch-size: 50
  proxy-frontend-executor-size: 500 # Proxy frontend executor size.
  proxy-backend-executor-suitable: OLTP
  proxy-frontend-flush-threshold: 128 # The default value is 128.
  proxy.transaction.type: XA
```

```yaml
dataSources:
  ds_0:
    connectionTimeoutMilliseconds: 3600000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 410
    minPoolSize: 400
    password:
    url: jdbc:opengauss://XX.XX.XX.XX:XXXXX/tpccdb?serverTimezone=UTC&useSSL=false&connectTimeout=60&batchMode=on&loggerLevel=OFF
    username:
  ds_1:
    connectionTimeoutMilliseconds: 3600000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 410
    minPoolSize: 400
    password:
    url: jdbc:opengauss://XX.XX.XX.XX:XXXXX/tpccdb?serverTimezone=UTC&useSSL=false&connectTimeout=60&batchMode=on&loggerLevel=OFF
    username:

rules:
  - !SHARDING
    bindingTables:
    broadcastTables:
    defaultDatabaseStrategy:
      standard:
        shardingAlgorithmName: database_inline
        shardingColumn: ds_id
    defaultTableStrategy:
      none: null
    shardingAlgorithms:
      ds_count_inline:
        type: INLINE
        props:
          algorithm-expression: ds_${id % 2}
      t_account_inline:
        type: INLINE
        props:
          algorithm-expression: account_${transaction_id % 2}
      t_account2_inline:
        type: INLINE
        props:
          algorithm-expression: account2_${transaction_id % 2}
      database_inline:
        props:
          algorithm-expression: ds_${ds_id % 1}
        type: INLINE
    tables:
      account:
        actualDataNodes: ds_${0..1}.account_${0..1}
        databaseStrategy:
          standard:
            shardingAlgorithmName: ds_count_inline
            shardingColumn: id
        tableStrategy:
          standard:
            shardingColumn: transaction_id
            shardingAlgorithmName: t_account_inline
      account2:
        actualDataNodes: ds_${0..1}.account2_${0..1}
        databaseStrategy:
          standard:
            shardingAlgorithmName: ds_count_inline
            shardingColumn: id
        tableStrategy:
          standard:
            shardingColumn: transaction_id
            shardingAlgorithmName: t_account2_inline

schemaName: tpcc_glt_db
```
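As a side note on the sharding rules above, the inline expressions route the test rows as in the small illustration below (a hypothetical, standalone snippet; the values are chosen to match the statements seen in the pg_log):

```java
public class RoutingExample {
    public static void main(String[] args) {
        // For the logical table `account`:
        //   databaseStrategy (ds_count_inline, column id):            ds_${id % 2}
        //   tableStrategy    (t_account_inline, column transaction_id): account_${transaction_id % 2}
        int id = 0;
        int transactionId = 1;
        String database = "ds_" + (id % 2);               // -> ds_0
        String table = "account_" + (transactionId % 2);  // -> account_1
        // The insert "values (0, 1, 0)" therefore lands in ds_0.account_1,
        // matching the "insert into account_1 ..." line in the pg_log.
        System.out.println(database + "." + table);
    }
}
```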
