This is an automated email from the ASF dual-hosted git repository.

dataroaring pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
     new 1a373331084 branch-3.0: [bug](group commit) Fix group commit blocked after schema change throw exception (#54312)
1a373331084 is described below

commit 1a3733310843106d1851c88261e7b0a0a972b361
Author: xy720 <[email protected]>
AuthorDate: Tue Aug 12 10:53:36 2025 +0800

    branch-3.0: [bug](group commit) Fix group commit blocked after schema change throw exception (#54312)
    
    Fix group commit staying blocked after a schema change job throws an exception
    
    Problem Summary:
    
    pick #54113
    
    Steps to reproduce:
    
    1. Create the table
    
    ```
    CREATE TABLE `test_table_uniq` (
      `company_id` varchar(32) NOT NULL,
      `date` datetime NOT NULL,
      `discount` decimal(19,10) NULL
    ) ENGINE=OLAP
    UNIQUE KEY(`company_id`, `date`)
    DISTRIBUTED BY HASH(`company_id`) BUCKETS 8
    ```
    
    2. Group commit insert: SUCCESS
    
    ```
    SET group_commit = async_mode;
    INSERT INTO test_table_uniq (company_id, date, discount) VALUES(1, '2025-07-25', 10);
    ```
    
    3. Create a rollup and wait for the job to reach SUCCESS
    
    ```
    CREATE MATERIALIZED VIEW mv_company_day
    AS
    SELECT   company_id,   date
    FROM test_table_uniq;
    ```
    
    4. Group commit insert: SUCCESS
    
    ```
    SET group_commit = async_mode;
    INSERT INTO test_table_uniq (company_id, date, discount) VALUES(2, '2025-07-25', 11);
    ```
    
    5. Create a rollup with the same name; it throws an exception
    
    ```
    CREATE MATERIALIZED VIEW mv_company_day
    AS
    SELECT   company_id,   date
    FROM test_table_uniq;
    ERROR 1105 (HY000): errCode = 2, detailMessage = Materialized view[mv_company_day] already exists
    ```
    
    6. Group commit insert: FAIL
    
    ```
    SET group_commit = async_mode;
    INSERT INTO test_table_uniq (company_id, date, discount) VALUES(3, '2025-07-25', 12);
    ```
    
    JDBC ERROR LOG:
    ```
    java.sql.BatchUpdateException: errCode = 2, detailMessage = insert table 1753872336812 is blocked on schema change
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
            at com.mysql.cj.util.Util.handleNewInstance(Util.java:192)
            at com.mysql.cj.util.Util.getInstance(Util.java:167)
            at com.mysql.cj.util.Util.getInstance(Util.java:174)
            at com.mysql.cj.jdbc.exceptions.SQLError.createBatchUpdateException(SQLError.java:224)
            at com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchedInserts(ClientPreparedStatement.java:755)
            at com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchInternal(ClientPreparedStatement.java:426)
            at com.mysql.cj.jdbc.StatementImpl.executeBatch(StatementImpl.java:795)
            at DorisStressTest.main(DorisStressTest.java:78)
    Caused by: java.sql.SQLException: errCode = 2, detailMessage = insert table 1753872336812 is blocked on schema change
            at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:129)
            at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
            at com.mysql.cj.jdbc.ServerPreparedStatement.serverExecute(ServerPreparedStatement.java:633)
            at com.mysql.cj.jdbc.ServerPreparedStatement.executeInternal(ServerPreparedStatement.java:417)
            at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1098)
            at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1046)
            at com.mysql.cj.jdbc.ClientPreparedStatement.executeLargeUpdate(ClientPreparedStatement.java:1371)
            at com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchedInserts(ClientPreparedStatement.java:716)
    ```
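
    How the hang happens: creating a materialized view appears to block group commit on the target table before the rollup job is submitted (hence the "blocked on schema change" error above), and only a submitted job later releases the block. When `CREATE MATERIALIZED VIEW` throws before submission, the table state never reaches ROLLUP and the block is never released. The patch releases it in the `finally` block. A minimal standalone Java sketch of that guard (not Doris code; `GroupCommitManager`, the state enum, and the exception handling are simplified stand-ins):

    ```java
    import java.util.HashSet;
    import java.util.Set;

    public class UnblockSketch {
        enum OlapTableState { NORMAL, ROLLUP }

        // Simplified stand-in for Doris's GroupCommitManager: tracks which
        // table ids currently have group commit blocked.
        static class GroupCommitManager {
            private final Set<Long> blocked = new HashSet<>();
            void blockTable(long tableId) { blocked.add(tableId); }
            void unblockTable(long tableId) { blocked.remove(tableId); }
            boolean isBlocked(long tableId) { return blocked.contains(tableId); }
        }

        public static final GroupCommitManager manager = new GroupCommitManager();

        // Simulates the patched flow: group commit is blocked up front; the
        // finally block unblocks the table whenever the state never reached
        // ROLLUP, i.e. an exception (such as a duplicate name) was thrown
        // before the rollup job could be submitted.
        public static OlapTableState createRollup(long tableId, boolean duplicateName) {
            OlapTableState state = OlapTableState.NORMAL;
            manager.blockTable(tableId);
            try {
                if (duplicateName) {
                    throw new IllegalStateException("Materialized view already exists");
                }
                state = OlapTableState.ROLLUP; // rollup job submitted successfully
            } catch (IllegalStateException e) {
                // the real code rethrows; swallowed here to keep the sketch small
            } finally {
                if (state != OlapTableState.ROLLUP) {
                    // The fix: nothing else will ever unblock this table,
                    // so release the block here.
                    manager.unblockTable(tableId);
                }
            }
            return state;
        }

        public static void main(String[] args) {
            createRollup(1L, true); // duplicate-name failure, job never submitted
            if (manager.isBlocked(1L)) {
                throw new AssertionError("table should be unblocked after failed rollup");
            }
            System.out.println("group commit unblocked after failed rollup: OK");
        }
    }
    ```

    With that guard, step 6's insert succeeds again, because a failed `CREATE MATERIALIZED VIEW` no longer leaves the table permanently blocked.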
    
    Affected branches: master/3.0/2.1
    
    ### Check List (For Author)
    
    - Test <!-- At least one of them must be included. -->
        - [ ] Regression test
        - [ ] Unit Test
        - [ ] Manual test (add detailed scripts or steps below)
        - [ ] No need to test or manual test. Explain why:
    - [ ] This is a refactor/code format and no logic has been changed.
            - [ ] Previous test can cover this change.
            - [ ] No code files have been changed.
            - [ ] Other reason <!-- Add your reason?  -->
    
    - Behavior changed:
        - [ ] No.
        - [ ] Yes. <!-- Explain the behavior change -->
    
    - Does this need documentation?
        - [ ] No.
        - [ ] Yes. <!-- Add document PR link here. eg:
          https://github.com/apache/doris-website/pull/1214 -->
    
    ### Check List (For Reviewer who merge this PR)
    
    - [ ] Confirm the release note
    - [ ] Confirm test cases
    - [ ] Confirm document
    - [ ] Add branch pick label <!-- Add branch pick label that this PR
    should merge into -->
---
 .../apache/doris/alter/MaterializedViewHandler.java   | 15 +++++++++++++++
 .../suites/insert_p0/insert_group_commit_into.groovy  | 19 ++++++++++++++++++-
 2 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/fe/fe-core/src/main/java/org/apache/doris/alter/MaterializedViewHandler.java b/fe/fe-core/src/main/java/org/apache/doris/alter/MaterializedViewHandler.java
index 64a747e99e4..84def3c6266 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/alter/MaterializedViewHandler.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/alter/MaterializedViewHandler.java
@@ -162,6 +162,8 @@ public class MaterializedViewHandler extends AlterHandler {
             if (tableNotFinalStateJobIdset == null) {
                 // This could happen when this job is already removed before.
                 // return false, so that we will not set table's to NORMAL again.
+                LOG.warn("alter job is already removed before. tableId: {}, jobId: {}",
+                        tableId, jobId);
                 return false;
             }
             tableNotFinalStateJobIdset.remove(jobId);
@@ -228,6 +230,11 @@ public class MaterializedViewHandler extends AlterHandler {
             Env.getCurrentEnv().getEditLog().logAlterJob(rollupJobV2);
             LOG.info("finished to create materialized view job: {}", rollupJobV2.getJobId());
         } finally {
+            if (olapTable.getState() != OlapTableState.ROLLUP) {
+                // state is not ROLLUP, means encountered some exception before jobs submitted,
+                // so we need to unblock table here.
+                Env.getCurrentEnv().getGroupCommitManager().unblockTable(olapTable.getId());
+            }
             olapTable.writeUnlock();
         }
     }
@@ -333,6 +340,11 @@ public class MaterializedViewHandler extends AlterHandler {
             }
             throw e;
         } finally {
+            if (olapTable.getState() != OlapTableState.ROLLUP) {
+                // state is not ROLLUP, means encountered some exception before jobs submitted,
+                // so we need to unblock table here.
+                Env.getCurrentEnv().getGroupCommitManager().unblockTable(olapTable.getId());
+            }
             olapTable.writeUnlock();
         }
     }
@@ -1220,6 +1232,9 @@ public class MaterializedViewHandler extends AlterHandler {
             changeTableStatus(alterJob.getDbId(), alterJob.getTableId(), OlapTableState.NORMAL);
             LOG.info("set table's state to NORMAL, table id: {}, job id: {}", alterJob.getTableId(),
                     alterJob.getJobId());
+        } else {
+            LOG.warn("Failed to remove job from tableNotFinalStateJobMap, table id: {}, job id: {}",
+                    alterJob.getTableId(), alterJob.getJobId());
         }
     }
 
diff --git a/regression-test/suites/insert_p0/insert_group_commit_into.groovy b/regression-test/suites/insert_p0/insert_group_commit_into.groovy
index 90318a5226b..b486c443881 100644
--- a/regression-test/suites/insert_p0/insert_group_commit_into.groovy
+++ b/regression-test/suites/insert_p0/insert_group_commit_into.groovy
@@ -229,6 +229,23 @@ suite("insert_group_commit_into") {
             logger.info("row count: " + rowCount)
             assertEquals(23, rowCount[0][0])
 
+            // 8. Test create rollup throw exception and group commit behavior
+            try {
+                sql """ alter table ${table} ADD ROLLUP r1(name, score); """
+                assertTrue(false, "create rollup with duplicate name should fail.")
+            } catch (Exception e) {
+                logger.info("Expected create rollup error: " + e.getMessage())
+                assertTrue(e.getMessage().contains("already exists"))
+            }
+
+            group_commit_insert_with_retry """ insert into ${table}(id, name) values(2, 'b');  """, 1
+            group_commit_insert_with_retry """ insert into ${table}(id) values(6); """, 1
+            getRowCount(25)
+
+            // Verify group commit works after add rollup throw exception
+            group_commit_insert """ insert into ${table}(id, name) values(2, 'b'); """, 1
+            getRowCount(26)
+
             // txn insert
             sql """ set enable_nereids_dml = true; """
             sql """ set enable_nereids_planner=true; """
@@ -242,7 +259,7 @@ suite("insert_group_commit_into") {
 
             rowCount = sql "select count(*) from ${table}"
             logger.info("row count: " + rowCount)
-            assertEquals(rowCount[0][0], 25)
+            assertEquals(rowCount[0][0], 28)
         }
     } finally {
         // try_sql("DROP TABLE ${table}")


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
