This is an automated email from the ASF dual-hosted git repository.
zhangliang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/shardingsphere.git
The following commit(s) were added to refs/heads/master by this push:
new c09f13eef1b Use MultiSQLSplitter to split SQLs (#37078)
c09f13eef1b is described below
commit c09f13eef1bdb8f77c038a69126f629e369849a5
Author: Liang Zhang <[email protected]>
AuthorDate: Wed Nov 12 17:03:18 2025 +0800
Use MultiSQLSplitter to split SQLs (#37078)
* Use MultiSQLSplitter to split SQLs
* Use MultiSQLSplitter to split SQLs
* Update release notes
* Update release notes
---
AGENTS.md | 111 +++++++++
RELEASE-NOTES.md | 3 +-
parser/sql/statement/core/pom.xml | 7 +
.../statement/core/util/MultiSQLSplitter.java | 256 +++++++++++++++++++++
.../statement/core/util/MultiSQLSplitterTest.java | 96 ++++++++
.../text/query/MySQLComQueryPacketExecutor.java | 6 +-
.../MySQLMultiStatementsProxyBackendHandler.java | 26 +--
7 files changed, 478 insertions(+), 27 deletions(-)
diff --git a/AGENTS.md b/AGENTS.md
new file mode 100644
index 00000000000..d3b434c9e53
--- /dev/null
+++ b/AGENTS.md
@@ -0,0 +1,111 @@
+# ShardingSphere AI Development Guide
+
+This guide is written **for AI coding agents only**. Follow it literally;
improvise only when the rules explicitly authorize it.
+
+## Operating Charter
+- `CODE_OF_CONDUCT.md` is the binding “law” for any generated artifact. Review
it once per session and refuse to keep code that conflicts with it (copyright,
inclusivity, licensing, etc.).
+- Technical choices must honor ASF expectations: license headers, transparent
intent, explicit rationale in user-facing notes.
+- Instruction precedence: `CODE_OF_CONDUCT.md` > user directive > this guide >
other repository documents.
+
+## Team Signals
+- **Release tempo:** expect monthly feature trains plus weekly patch windows.
Default to the smallest safe change unless explicitly asked for broader refactors.
+- **Approval gates:** structural changes (new modules, configuration knobs)
require human confirmation; doc-only or localized fixes may proceed after
self-review. Always surface what evidence reviewers need (tests, configs,
reproduction steps).
+- **Quality bias:** team prefers deterministic builds, measurable test
coverage, and clear rollback plans. Avoid speculative features without benefit
statements.
+
+## System Context Snapshot
+- ShardingSphere adds sharding, encryption, traffic governance, and
observability atop existing databases.
+- Module map:
+ - `infra`, `database`, `parser`, `kernel`, `mode`: shared infrastructure,
SQL parsing, routing, governance.
+ - `jdbc`, `jdbc-dialect`, `proxy`: integration surfaces for
clients/protocols.
+ - `features`: sharding, read/write splitting, encryption, shadow, traffic
control.
+ - `agent`: bytecode agent utilities; `examples`: runnable demos; `docs` /
`distribution`: documentation and release assets.
+- Layout standard: `src/main/java` + `src/test/java`. Generated outputs live
under `target/`—never edit them.
+
+## Data Flow & Integration Map
+1. **Client request** enters via `jdbc` or `proxy`.
+2. **SQL parsing/rewriting** occurs in `parser` and `infra` dialect layers.
+3. **Routing & planning** handled inside `kernel` using metadata from
`database` and governance hints from `mode`.
+4. **Feature hooks** (sharding/encryption/etc.) in `features` mutate route
decisions or payloads.
+5. **Executor/adapters** forward to physical databases and collect results.
+6. **Observability & governance** loops feed metrics/traffic rules back
through `mode`.
+Reference this flow when reasoning about new features or debugging regressions.
+
+## Design Playbook
+- **Patterns to lean on:** builder/factory helpers in `infra`, SPI-based
extension points (see the sketch after this list), immutable DTOs for plan
descriptions, explicit strategy enums for behavior toggles.
+- **Anti-patterns:** duplicating SQL parsing logic, bypassing metadata caches,
silent fallbacks when configuration is invalid, adding static singletons in
shared modules.
+- **Known pitfalls:** routing regressions when skipping shadow rules, timezone
drift when mocking time poorly, forgetting to validate both standalone and
cluster (`mode`) settings, missing ASF headers in new files.
+- **Success recipe:** describe why a change is needed, point to affected data
flow step, keep public APIs backwards compatible, and document defaults in
`docs`.
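+
+A minimal sketch of the SPI pattern above, resolving a pluggable implementation
+through `TypedSPILoader` (the wrapper class name is illustrative only, not an
+existing module member):
+
+```java
+import org.apache.shardingsphere.database.connector.core.type.DatabaseType;
+import org.apache.shardingsphere.infra.spi.type.typed.TypedSPILoader;
+
+// Illustrative only: look up a pluggable implementation by its SPI type name
+// instead of constructing a concrete class directly.
+public final class DatabaseTypeLookupExample {
+
+    public static DatabaseType loadMySQLType() {
+        return TypedSPILoader.getService(DatabaseType.class, "MySQL");
+    }
+}
+```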
+
+## AI Execution Workflow
+1. **Intake & Clarify** — restate the ask, map affected modules, confirm
sandbox/approval/network constraints.
+2. **Plan & Reason** — write a multi-step plan with checkpoints (analysis,
edits, tests). Align scope with release tempo (prefer incremental fixes unless
told otherwise).
+3. **Implement** — touch only necessary files, reuse abstractions, keep ASF
headers.
+4. **Validate** — choose the smallest meaningful command; if blocked (sandbox,
missing deps), explain what would have run and why it matters.
+5. **Report** — lead with intent, list edited files with rationale and line
references, state verification results, propose next actions.
+
+## Tooling & Verification Matrix
+
+| Command | Purpose | When to run |
+| --- | --- | --- |
+| `./mvnw clean install -B -T1C -Pcheck` | Full build with Spotless, license,
checkstyle gates | Before releasing or when cross-module impact is likely |
+| `./mvnw test -pl {module}[-am]` | Unit tests for targeted modules (+ rebuild
deps with `-am`) | After touching code in a module |
+| `./mvnw spotless:apply -Pcheck [-pl module]` | Auto-format + import ordering
| After edits that may violate style |
+| `./mvnw spotless:check -Pcheck` | Format check only (fast lint) | When
sandbox forbids writes or before pushing |
+| `./mvnw test jacoco:check@jacoco-check -Pcoverage-check` | Enforce Jacoco
thresholds | When coverage requirements are mentioned or when adding new
features |
+| `./mvnw -pl {module} -DskipITs -Dspotless.skip=true test` | Quick lint-free
smoke (unit tests only) | To shorten feedback loops during iteration |
+
+Always describe command intent before execution and summarize exit codes / key
output afterwards.
+
+## Testing Expectations
+- Use JUnit 5 + Mockito; tests mirror package paths and follow the
`ClassNameTest` convention.
+- Method names read `assertXxxCondition`; structure tests as
Arrange–Act–Assert sections with explicit separators/comments when clarity
drops (see the sketch after this list).
+- Mock databases, time, and network boundaries; build POJOs directly.
+- When Jacoco fails, open `{module}/target/site/jacoco/index.html`, note
uncovered branches, and explain how new tests address them.
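+
+A minimal sketch of these conventions (the repository's real
+`MultiSQLSplitterTest` is a fuller, parameterized version of this example):
+
+```java
+package org.apache.shardingsphere.sql.parser.statement.core.util;
+
+import org.junit.jupiter.api.Test;
+
+import java.util.Arrays;
+import java.util.Collection;
+
+import static org.hamcrest.CoreMatchers.is;
+import static org.hamcrest.MatcherAssert.assertThat;
+
+// Test class mirrors the production package and follows the `ClassNameTest` convention.
+class MultiSQLSplitterTest {
+
+    @Test
+    void assertSplitWithSemicolonInsideLiteral() {
+        // Arrange
+        String sql = "update t_order set status='WAIT;PAID' where id=1; delete from t_order where id=2";
+        Collection<String> expected = Arrays.asList("update t_order set status='WAIT;PAID' where id=1", "delete from t_order where id=2");
+        // Act
+        Collection<String> actual = MultiSQLSplitter.split(sql);
+        // Assert: the semicolon inside the string literal must not split the statement.
+        assertThat(actual, is(expected));
+    }
+}
+```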
+
+## Run & Debug Cookbook
+- **Proxy quick start:** `./mvnw -pl proxy -am package` then run
`shardingsphere-proxy/bin/start.sh -c conf/server.yaml`. Point configs to
samples under `examples/src/resources/conf`.
+- **JDBC smoke:** run `./mvnw -pl jdbc -am test -Dtest=YourTest` after wiring
datasource configs from `examples`.
+- **Config changes:** document defaults in `docs/content` and ensure both
standalone (`server.yaml`) and cluster (`mode/`) configs include the new knob.
+- **Failure triage:** collect logs under `proxy/logs/`, inspect
`target/surefire-reports` for unit tests, and mention relevant error
codes/messages in the report.
+
+## Troubleshooting Playbook
+
+| Symptom | Likely cause | AI response pattern |
+| --- | --- | --- |
+| SQL routed incorrectly or misses shards | Feature rule (shadow/readwrite)
skipped, stale metadata, or wrong parser dialect | Identify the impacted
data-flow step (usually `features`/`kernel`), cite configs under `examples`,
add reproduction SQL, propose a targeted test in `kernel` or `features` |
+| `jacoco:check` fails | New code paths lack tests or mocks bypass coverage |
Describe uncovered branches from `target/site/jacoco`, add focused unit tests,
rerun module-level tests |
+| Proxy fails to start | Missing configs, port conflicts, server.yaml mismatch
with mode config | Quote exact log snippet, point to `examples` config used,
suggest verifying `conf/server.yaml` + `mode` sections, avoid editing generated
files |
+| Spotless/checkstyle errors | Imports/order mismatched, header missing | Run
`./mvnw spotless:apply -Pcheck [-pl module]`, ensure ASF header present,
mention command result |
+| Integration blocked by sandbox/network | Restricted command or dependency
fetch | State attempted command, why it matters, what approval or artifact is
needed; wait for explicit user go-ahead |
+
+## AI Collaboration Patterns
+- **Prompt templates:**
+ - Change request: “Goal → Constraints → Files suspected → Desired
validation.”
+ - Code review: “Observed issue → Impact → Suggested fix.”
+ - Status update: “What changed → Verification → Pending risks.”
+- **Anti-pattern prompts:** Avoid vague asks like “optimize stuff” or
instructions lacking module names; request clarification instead of guessing.
+- **Hand-off checklist:** intent, touched files with reasons, commands run +
results, open risks/TODOs, references to issues/PRs if mentioned.
+- **Failure responses:** when blocked by sandbox/policy, state the attempted
action, why it matters, and what approval or artifact is needed next.
+
+### Module-oriented prompt hints
+- **Parser adjustments:** specify dialect, target SQL sample, expected AST
changes, and downstream modules consuming the parser output.
+- **Kernel routing strategy:** describe metadata shape (table count, binding
rules), existing rule config, and which `features` hook participates.
+- **Proxy runtime fixes:** include startup command, config file path, observed
log lines, and client protocol (MySQL/PostgreSQL).
+- **Docs/config updates:** mention audience (user/admin), file paths under
`docs/content`, and whether translation or screenshots exist.
+
+## Collaboration & Escalation
+- Commit messages use `module: intent` (e.g., `kernel: refine route planner`)
and cite why the change exists.
+- Reviews focus on risks first (regressions, coverage gaps, configuration
impact) before polish.
+- If repo state or sandbox limits conflict with `CODE_OF_CONDUCT.md`, stop
immediately and request direction—do not attempt workarounds.
+
+## Brevity & Signal
+- Prefer tables/bullets over prose walls; cite file paths (`kernel/src/...`)
directly.
+- Eliminate repeated wording; reference prior sections instead of restating.
+- Default to ASCII; only mirror existing non-ASCII content when necessary.
+
+## Reference Pointers
+- `CODE_OF_CONDUCT.md` — legal baseline; cite line numbers when flagging
violations.
+- `CONTRIBUTING.md` — human contributor workflow; reference when mirroring
commit/PR styles.
+- `docs/content` — user-facing docs; add/update pages when introducing config
knobs or behavior changes.
+- `examples` configs — canonical samples for proxy/JDBC; always mention which
sample you reused.
+- `MATURITY.md` / `README.md` — high-level positioning; useful when
summarizing project context for reports.
diff --git a/RELEASE-NOTES.md b/RELEASE-NOTES.md
index 4286bd90c8e..eda31ff3cf5 100644
--- a/RELEASE-NOTES.md
+++ b/RELEASE-NOTES.md
@@ -74,6 +74,7 @@
1. SQL Parser: Fix set OnDuplicateKeyColumnsSegment on INSERT for PostgreSQL -
[#34425](https://github.com/apache/shardingsphere/pull/34425)
1. SQL Parser: Fix SQL parser error when SQL contains implicit concat
expression for MySQL -
[#34660](https://github.com/apache/shardingsphere/pull/34660)
1. SQL Parser: Fix SQL parser error when SQL contains subquery with alias for
Oracle - [#35239](https://github.com/apache/shardingsphere/pull/35239)
+1. SQL Parser: Fix multiple SQLs split error when comma is contained -
[#31609](https://github.com/apache/shardingsphere/pull/31609)
1. SQL Binder: Fix unable to find the outer table in the NotExpressionBinder -
[36135](https://github.com/apache/shardingsphere/pull/36135)
1. SQL Binder: Fix unable to find the outer table in the
ExistsSubqueryExpressionBinder -
[#36068](https://github.com/apache/shardingsphere/pull/36068)
1. SQL Binder: Fix column bind exception caused by oracle XMLELEMENT function
first parameter without quote -
[#36963](https://github.com/apache/shardingsphere/pull/36963)
@@ -95,7 +96,7 @@
1. Mode: Fix issue of drop schema can not work on standalone mode -
[#34470](https://github.com/apache/shardingsphere/pull/34470)
1. Encrypt: Resolve rewrite issue in nested concat function -
[#35815](https://github.com/apache/shardingsphere/pull/35815)
1. Sharding: Fix mod sharding algorithm judgement
-[#36386](https://github.com/apache/shardingsphere/pull/36386)
-1. Sharding: Fix check inline sharding algorithms in table rules -
(https://github.com/apache/shardingsphere/pull/36999)
+1. Sharding: Fix check inline sharding algorithms in table rules -
[#36999](https://github.com/apache/shardingsphere/pull/36999)
1. Pipeline: Recover value of migration incremental importer batch size -
[#34670](https://github.com/apache/shardingsphere/pull/34670)
1. Pipeline: Fix InventoryDumper first time dump SQL without ORDER BY on
multiple columns unique key table -
[#34736](https://github.com/apache/shardingsphere/pull/34736)
1. Pipeline: Fix MySQL JDBC query properties extension when SSL is required on
server - [#36581](https://github.com/apache/shardingsphere/pull/36581)
diff --git a/parser/sql/statement/core/pom.xml
b/parser/sql/statement/core/pom.xml
index 591c38c3101..5e3370db909 100644
--- a/parser/sql/statement/core/pom.xml
+++ b/parser/sql/statement/core/pom.xml
@@ -33,6 +33,13 @@
<version>${project.version}</version>
</dependency>
+ <dependency>
+ <groupId>org.apache.shardingsphere</groupId>
+ <artifactId>shardingsphere-test-infra-fixture-database</artifactId>
+ <version>${project.version}</version>
+ <scope>test</scope>
+ </dependency>
+
<dependency>
<groupId>org.apache.groovy</groupId>
<artifactId>groovy</artifactId>
diff --git
a/parser/sql/statement/core/src/main/java/org/apache/shardingsphere/sql/parser/statement/core/util/MultiSQLSplitter.java
b/parser/sql/statement/core/src/main/java/org/apache/shardingsphere/sql/parser/statement/core/util/MultiSQLSplitter.java
new file mode 100644
index 00000000000..3e69900d947
--- /dev/null
+++
b/parser/sql/statement/core/src/main/java/org/apache/shardingsphere/sql/parser/statement/core/util/MultiSQLSplitter.java
@@ -0,0 +1,256 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.shardingsphere.sql.parser.statement.core.util;
+
+import com.google.common.base.CharMatcher;
+import lombok.AccessLevel;
+import lombok.NoArgsConstructor;
+import
org.apache.shardingsphere.database.connector.core.metadata.database.enums.QuoteCharacter;
+import
org.apache.shardingsphere.sql.parser.statement.core.statement.SQLStatement;
+import
org.apache.shardingsphere.sql.parser.statement.core.statement.type.dml.DeleteStatement;
+import
org.apache.shardingsphere.sql.parser.statement.core.statement.type.dml.InsertStatement;
+import
org.apache.shardingsphere.sql.parser.statement.core.statement.type.dml.UpdateStatement;
+
+import java.util.Collection;
+import java.util.Collections;
+import java.util.LinkedList;
+
+/**
+ * Multi SQL splitter.
+ */
+@NoArgsConstructor(access = AccessLevel.PRIVATE)
+public final class MultiSQLSplitter {
+
+ /**
+ * Determine whether SQL contains multi statements that match the same DML
type.
+ *
+ * @param sqlStatementSample parsed SQL statement sample
+ * @param sqls SQLs
+ * @return whether multi statements exist
+ */
+ public static boolean hasSameTypeMultiStatements(final SQLStatement
sqlStatementSample, final Collection<String> sqls) {
+ if (sqls.size() <= 1) {
+ return false;
+ }
+ for (String each : sqls) {
+ if (!matchesStatementType(stripLeadingComments(each),
sqlStatementSample)) {
+ return false;
+ }
+ }
+ return true;
+ }
+
+ private static String stripLeadingComments(final String sql) {
+ int index = 0;
+ while (index < sql.length()) {
+ index = skipWhitespace(sql, index);
+ if (index >= sql.length()) {
+ break;
+ }
+ if ('/' == sql.charAt(index) && index + 1 < sql.length() && '*' ==
sql.charAt(index + 1)) {
+ int end = sql.indexOf("*/", index + 2);
+ if (end < 0) {
+ return "";
+ }
+ index = end + 2;
+ continue;
+ }
+ if (isDashCommentStart(sql, index)) {
+ index = skipLine(sql, index + 2);
+ continue;
+ }
+ if ('#' == sql.charAt(index)) {
+ index = skipLine(sql, index + 1);
+ continue;
+ }
+ break;
+ }
+ return sql.substring(index).trim();
+ }
+
+ private static int skipWhitespace(final String sql, final int start) {
+ int index = CharMatcher.whitespace().negate().indexIn(sql, start);
+ return -1 == index ? sql.length() : index;
+ }
+
+ private static int skipLine(final String sql, final int startIndex) {
+ int index = startIndex;
+ while (index < sql.length()) {
+ char current = sql.charAt(index);
+ index++;
+ if ('\n' == current) {
+ break;
+ }
+ if ('\r' == current) {
+ if (index < sql.length() && '\n' == sql.charAt(index)) {
+ index++;
+ }
+ break;
+ }
+ }
+ return index;
+ }
+
+ private static boolean matchesStatementType(final String sql, final
SQLStatement sqlStatementSample) {
+ if (sql.isEmpty()) {
+ return false;
+ }
+ if (sqlStatementSample instanceof InsertStatement) {
+ return startsWithIgnoreCase(sql, "insert");
+ }
+ if (sqlStatementSample instanceof UpdateStatement) {
+ return startsWithIgnoreCase(sql, "update");
+ }
+ if (sqlStatementSample instanceof DeleteStatement) {
+ return startsWithIgnoreCase(sql, "delete");
+ }
+ return false;
+ }
+
+ private static boolean startsWithIgnoreCase(final String text, final
String prefix) {
+ return text.regionMatches(true, 0, prefix, 0, prefix.length());
+ }
+
+ /**
+ * Split SQL text by semicolon ignoring literals and comments.
+ *
+ * @param sql SQL text
+ * @return SQL statements
+ */
+ public static Collection<String> split(final String sql) {
+ if (null == sql || sql.isEmpty()) {
+ return Collections.emptyList();
+ }
+ Collection<String> result = new LinkedList<>();
+ StringBuilder current = new StringBuilder(sql.length());
+ ScanState state = ScanState.NORMAL;
+ QuoteCharacter quote = QuoteCharacter.NONE;
+ int index = 0;
+ int length = sql.length();
+ while (index < length) {
+ char ch = sql.charAt(index);
+ char next = index + 1 < length ? sql.charAt(index + 1) : '\0';
+ int step = 1;
+ switch (state) {
+ case QUOTE:
+ current.append(ch);
+ if (QuoteCharacter.BACK_QUOTE != quote && '\\' == ch &&
index + 1 < length) {
+ current.append(sql.charAt(index + 1));
+ step = 2;
+ break;
+ }
+ if (isQuoteEnd(quote, ch)) {
+ if (isRepeatedQuote(sql, quote, index)) {
+ current.append(sql.charAt(index + 1));
+ step = 2;
+ } else {
+ quote = QuoteCharacter.NONE;
+ state = ScanState.NORMAL;
+ }
+ }
+ break;
+ case LINE_COMMENT:
+ current.append(ch);
+ if ('\n' == ch || '\r' == ch) {
+ state = ScanState.NORMAL;
+ }
+ break;
+ case BLOCK_COMMENT:
+ current.append(ch);
+ if ('*' == ch && '/' == next) {
+ current.append(next);
+ step = 2;
+ state = ScanState.NORMAL;
+ }
+ break;
+ default:
+ if (';' == ch) {
+ appendStatement(result, current);
+ break;
+ }
+ QuoteCharacter quoteCandidate =
QuoteCharacter.getQuoteCharacter(String.valueOf(ch));
+ if (isSupportedQuote(quoteCandidate)) {
+ quote = quoteCandidate;
+ state = ScanState.QUOTE;
+ current.append(ch);
+ break;
+ }
+ if (isDashCommentStart(sql, index)) {
+ state = ScanState.LINE_COMMENT;
+ current.append(ch);
+ current.append(next);
+ step = 2;
+ break;
+ }
+ if ('#' == ch) {
+ state = ScanState.LINE_COMMENT;
+ current.append(ch);
+ break;
+ }
+ if ('/' == ch && '*' == next) {
+ state = ScanState.BLOCK_COMMENT;
+ current.append(ch);
+ current.append(next);
+ step = 2;
+ break;
+ }
+ current.append(ch);
+ break;
+ }
+ index += step;
+ }
+ appendStatement(result, current);
+ return result;
+ }
+
+ private static void appendStatement(final Collection<String> statements,
final StringBuilder current) {
+ String value = current.toString().trim();
+ if (!value.isEmpty()) {
+ statements.add(value);
+ }
+ current.setLength(0);
+ }
+
+ private static boolean isDashCommentStart(final String sql, final int
index) {
+ if (index + 1 >= sql.length()) {
+ return false;
+ }
+ if ('-' != sql.charAt(index) || '-' != sql.charAt(index + 1)) {
+ return false;
+ }
+ int commentContentIndex = index + 2;
+ return commentContentIndex >= sql.length() ||
Character.isWhitespace(sql.charAt(commentContentIndex));
+ }
+
+ private static boolean isSupportedQuote(final QuoteCharacter quote) {
+ return QuoteCharacter.SINGLE_QUOTE == quote || QuoteCharacter.QUOTE ==
quote || QuoteCharacter.BACK_QUOTE == quote;
+ }
+
+ private static boolean isQuoteEnd(final QuoteCharacter quote, final char
ch) {
+ return QuoteCharacter.NONE != quote &&
quote.getEndDelimiter().charAt(0) == ch;
+ }
+
+ private static boolean isRepeatedQuote(final String sql, final
QuoteCharacter quote, final int index) {
+ int nextIndex = index + 1;
+ return nextIndex < sql.length() && sql.charAt(nextIndex) ==
quote.getEndDelimiter().charAt(0);
+ }
+
+ private enum ScanState {
+ NORMAL, QUOTE, LINE_COMMENT, BLOCK_COMMENT
+ }
+}
diff --git
a/parser/sql/statement/core/src/test/java/org/apache/shardingsphere/sql/parser/statement/core/util/MultiSQLSplitterTest.java
b/parser/sql/statement/core/src/test/java/org/apache/shardingsphere/sql/parser/statement/core/util/MultiSQLSplitterTest.java
new file mode 100644
index 00000000000..a8d6af2061a
--- /dev/null
+++
b/parser/sql/statement/core/src/test/java/org/apache/shardingsphere/sql/parser/statement/core/util/MultiSQLSplitterTest.java
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.shardingsphere.sql.parser.statement.core.util;
+
+import org.apache.shardingsphere.database.connector.core.type.DatabaseType;
+import org.apache.shardingsphere.infra.spi.type.typed.TypedSPILoader;
+import
org.apache.shardingsphere.sql.parser.statement.core.statement.SQLStatement;
+import
org.apache.shardingsphere.sql.parser.statement.core.statement.type.dml.DeleteStatement;
+import
org.apache.shardingsphere.sql.parser.statement.core.statement.type.dml.InsertStatement;
+import
org.apache.shardingsphere.sql.parser.statement.core.statement.type.dml.UpdateStatement;
+import org.junit.jupiter.params.ParameterizedTest;
+import org.junit.jupiter.params.provider.Arguments;
+import org.junit.jupiter.params.provider.MethodSource;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.stream.Stream;
+
+import static org.hamcrest.CoreMatchers.is;
+import static org.hamcrest.MatcherAssert.assertThat;
+
+class MultiSQLSplitterTest {
+
+ private static final DatabaseType DATABASE_TYPE =
TypedSPILoader.getService(DatabaseType.class, "Fixture");
+
+ @ParameterizedTest(name = "{0}")
+ @MethodSource("provideHasSameTypeArguments")
+ void assertHasSameTypeMultiStatements(final String name, final
SQLStatement sqlStatement, final Collection<String> sqls, final boolean
expected) {
+ assertThat(name,
MultiSQLSplitter.hasSameTypeMultiStatements(sqlStatement, sqls), is(expected));
+ }
+
+ private static Stream<Arguments> provideHasSameTypeArguments() {
+ return Stream.of(
+ Arguments.of("nonDmlSample", new SQLStatement(DATABASE_TYPE),
Arrays.asList("select * from t_order;", "select * from t_order_item;"), false),
+ Arguments.of("singleStatementFalse", new
UpdateStatement(DATABASE_TYPE), Collections.singletonList("update t_order set
status='OK' where id=1"), false),
+ Arguments.of("insertWithBlockComment", new
InsertStatement(DATABASE_TYPE),
+ Arrays.asList(" /*comment*/ INSERT INTO t_order
VALUES (1);", "/*remark*/ insert into t_order values (2)"), true),
+ Arguments.of("updateWithDashComment", new
UpdateStatement(DATABASE_TYPE),
+ Arrays.asList("-- comment before\r\nupdate t_order set
status='PAID' where id=1;", "-- \t\nupdate t_order set status='FAIL' where
id=2;"), true),
+ Arguments.of("deleteWithHashComment", new
DeleteStatement(DATABASE_TYPE),
+ Arrays.asList("# comment before\n delete from t_order
where id=1;", "#\t\n delete from t_order where id=2;"), true),
+ Arguments.of("hashCommentWithCRLF", new
DeleteStatement(DATABASE_TYPE),
+ Arrays.asList("# comment\r\ndelete from t_order where
id=1;", "# comment\r\ndelete from t_order where id=2;"), true),
+ Arguments.of("hashCommentWithCROnly", new
DeleteStatement(DATABASE_TYPE),
+ Arrays.asList("# comment\rdelete from t_order where
id=1;", "# comment\rdelete from t_order where id=2;"), true),
+ Arguments.of("updateTypeMismatch", new
UpdateStatement(DATABASE_TYPE), Arrays.asList("update t_order set status='PAID'
where id=1;", "select * from t_order"), false),
+ Arguments.of("unterminatedBlockComment", new
InsertStatement(DATABASE_TYPE), Arrays.asList("/* incomplete comment", "insert
into t_order values (1);"), false),
+ Arguments.of("dashCommentOnlySegment", new
UpdateStatement(DATABASE_TYPE), Arrays.asList("--", "update t_order set
status='DONE' where id=1;"), false),
+ Arguments.of("whitespaceOnlySegment", new
UpdateStatement(DATABASE_TYPE), Arrays.asList(" \t ", "update t_order set
status='DONE' where id=1;"), false));
+ }
+
+ @ParameterizedTest(name = "{0}")
+ @MethodSource("provideSplitArguments")
+ void assertSplit(final String name, final String sql, final
Collection<String> expected) {
+ assertThat(name, MultiSQLSplitter.split(sql), is(expected));
+ }
+
+ private static Stream<Arguments> provideSplitArguments() {
+ return Stream.of(
+ Arguments.of("nullSqlReturnsEmpty", null,
Collections.emptyList()),
+ Arguments.of("emptySqlReturnsEmpty", "",
Collections.emptyList()),
+ Arguments.of("semicolonInsideLiteral", "update t_order set
status='WAIT;PAID' where id=1", Collections.singletonList("update t_order set
status='WAIT;PAID' where id=1")),
+ Arguments.of("multipleStatementsWithTrailingSemicolon",
"update t_order set status='PAID' where id=1; update t_order set
status='FAILED' where id=2;",
+ Arrays.asList("update t_order set status='PAID' where
id=1", "update t_order set status='FAILED' where id=2")),
+ Arguments.of("hintBlockComment", "/* ShardingSphere hint:
dataSourceName=foo_ds; foo=bar */ delete from t_order where id=1;",
+ Collections.singletonList("/* ShardingSphere hint:
dataSourceName=foo_ds; foo=bar */ delete from t_order where id=1")),
+ Arguments.of("dashCommentIgnoresSemicolon", "-- comment; still
comment\r\nupdate t_order set status=1; insert into t_order values (2);",
+ Arrays.asList("-- comment; still comment\r\nupdate
t_order set status=1", "insert into t_order values (2)")),
+ Arguments.of("hashCommentIgnoresSemicolon", "# comment ; still
comment\nupdate t_order set status=1;",
+ Collections.singletonList("# comment ; still
comment\nupdate t_order set status=1")),
+ Arguments.of("blockCommentInsideStatement", "select /* block;
comment */ 1; select /*another*/ 2;", Arrays.asList("select /* block; comment
*/ 1", "select /*another*/ 2")),
+ Arguments.of("repeatedQuotesInsideLiteral", "insert into
t_order values ('it''s;ok');", Collections.singletonList("insert into t_order
values ('it''s;ok')")),
+ Arguments.of("escapedQuoteInsideLiteral", "insert into t_order
values ('need\\'escape;');", Collections.singletonList("insert into t_order
values ('need\\'escape;')")),
+ Arguments.of("backtickIdentifiersWithSemicolon", "insert into
`t;order` values (1);", Collections.singletonList("insert into `t;order` values
(1)")),
+ Arguments.of("doubleQuoteIdentifiersWithSemicolon", "insert
into \"T;ORDER\" values (1);", Collections.singletonList("insert into
\"T;ORDER\" values (1)")),
+ Arguments.of("unterminatedStringHandled", "update t_order set
status='OPEN\\", Collections.singletonList("update t_order set
status='OPEN\\")),
+ Arguments.of("doubleDashWithoutWhitespaceTreatedAsText",
"--not comment; update t_order set status=1;", Arrays.asList("--not comment",
"update t_order set status=1")),
+ Arguments.of("singleTrailingDash", "update t_order set price =
price -", Collections.singletonList("update t_order set price = price -")));
+ }
+}
diff --git
a/proxy/frontend/dialect/mysql/src/main/java/org/apache/shardingsphere/proxy/frontend/mysql/command/query/text/query/MySQLComQueryPacketExecutor.java
b/proxy/frontend/dialect/mysql/src/main/java/org/apache/shardingsphere/proxy/frontend/mysql/command/query/text/query/MySQLComQueryPacketExecutor.java
index cf91cbabb82..5c52e5fe1be 100644
---
a/proxy/frontend/dialect/mysql/src/main/java/org/apache/shardingsphere/proxy/frontend/mysql/command/query/text/query/MySQLComQueryPacketExecutor.java
+++
b/proxy/frontend/dialect/mysql/src/main/java/org/apache/shardingsphere/proxy/frontend/mysql/command/query/text/query/MySQLComQueryPacketExecutor.java
@@ -42,6 +42,7 @@ import
org.apache.shardingsphere.sql.parser.statement.core.statement.SQLStatemen
import
org.apache.shardingsphere.sql.parser.statement.core.statement.type.dml.DeleteStatement;
import
org.apache.shardingsphere.sql.parser.statement.core.statement.type.dml.InsertStatement;
import
org.apache.shardingsphere.sql.parser.statement.core.statement.type.dml.UpdateStatement;
+import
org.apache.shardingsphere.sql.parser.statement.core.util.MultiSQLSplitter;
import java.sql.SQLException;
import java.util.Collection;
@@ -72,8 +73,9 @@ public final class MySQLComQueryPacketExecutor implements
QueryCommandExecutor {
}
private boolean areMultiStatements(final ConnectionSession
connectionSession, final SQLStatement sqlStatement, final String sql) {
- // TODO Multi statements should be identified by SQL Parser instead of
checking if sql contains ";".
- return isMultiStatementsEnabled(connectionSession) &&
isSuitableMultiStatementsSQLStatement(sqlStatement) && sql.contains(";");
+ return isMultiStatementsEnabled(connectionSession)
+ && isSuitableMultiStatementsSQLStatement(sqlStatement)
+ && MultiSQLSplitter.hasSameTypeMultiStatements(sqlStatement,
MultiSQLSplitter.split(sql));
}
private boolean isMultiStatementsEnabled(final ConnectionSession
connectionSession) {
diff --git
a/proxy/frontend/dialect/mysql/src/main/java/org/apache/shardingsphere/proxy/frontend/mysql/command/query/text/query/MySQLMultiStatementsProxyBackendHandler.java
b/proxy/frontend/dialect/mysql/src/main/java/org/apache/shardingsphere/proxy/frontend/mysql/command/query/text/query/MySQLMultiStatementsProxyBackendHandler.java
index bdd8ff79584..9088b6fc117 100644
---
a/proxy/frontend/dialect/mysql/src/main/java/org/apache/shardingsphere/proxy/frontend/mysql/command/query/text/query/MySQLMultiStatementsProxyBackendHandler.java
+++
b/proxy/frontend/dialect/mysql/src/main/java/org/apache/shardingsphere/proxy/frontend/mysql/command/query/text/query/MySQLMultiStatementsProxyBackendHandler.java
@@ -54,30 +54,21 @@ import
org.apache.shardingsphere.proxy.backend.response.header.update.MultiState
import
org.apache.shardingsphere.proxy.backend.response.header.update.UpdateResponseHeader;
import org.apache.shardingsphere.proxy.backend.session.ConnectionSession;
import
org.apache.shardingsphere.sql.parser.statement.core.statement.SQLStatement;
-import
org.apache.shardingsphere.sql.parser.statement.core.statement.type.dml.InsertStatement;
-import
org.apache.shardingsphere.sql.parser.statement.core.statement.type.dml.UpdateStatement;
+import
org.apache.shardingsphere.sql.parser.statement.core.util.MultiSQLSplitter;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.ArrayList;
-import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;
-import java.util.regex.Pattern;
/**
* MySQL multi-statements proxy backend handler.
*/
public final class MySQLMultiStatementsProxyBackendHandler implements
ProxyBackendHandler {
- private static final Pattern MULTI_INSERT_STATEMENTS =
Pattern.compile(";(?=\\s*insert)", Pattern.CASE_INSENSITIVE);
-
- private static final Pattern MULTI_UPDATE_STATEMENTS =
Pattern.compile(";(?=\\s*update)", Pattern.CASE_INSENSITIVE);
-
- private static final Pattern MULTI_DELETE_STATEMENTS =
Pattern.compile(";(?=\\s*delete)", Pattern.CASE_INSENSITIVE);
-
private final DatabaseType databaseType =
TypedSPILoader.getService(DatabaseType.class, "MySQL");
private final MetaDataContexts metaDataContexts =
ProxyContext.getInstance().getContextManager().getMetaDataContexts();
@@ -96,32 +87,19 @@ public final class MySQLMultiStatementsProxyBackendHandler
implements ProxyBacke
this.sqlStatementSample = sqlStatementSample;
JDBCExecutor jdbcExecutor = new
JDBCExecutor(BackendExecutorContext.getInstance().getExecutorEngine(),
connectionSession.getConnectionContext());
batchExecutor = new
BatchPreparedStatementExecutor(metaDataContexts.getMetaData().getDatabase(connectionSession.getUsedDatabaseName()),
jdbcExecutor, connectionSession.getProcessId());
- Pattern pattern = getPattern(sqlStatementSample);
SQLParserEngine sqlParserEngine = getSQLParserEngine();
- for (String each : extractMultiStatements(pattern, sql)) {
+ for (String each : MultiSQLSplitter.split(sql)) {
SQLStatement eachSQLStatement = sqlParserEngine.parse(each, false);
multiSQLQueryContexts.add(createQueryContext(each,
eachSQLStatement));
}
}
- private Pattern getPattern(final SQLStatement sqlStatementSample) {
- if (sqlStatementSample instanceof InsertStatement) {
- return MULTI_INSERT_STATEMENTS;
- }
- return sqlStatementSample instanceof UpdateStatement ?
MULTI_UPDATE_STATEMENTS : MULTI_DELETE_STATEMENTS;
- }
-
private SQLParserEngine getSQLParserEngine() {
MetaDataContexts metaDataContexts =
ProxyContext.getInstance().getContextManager().getMetaDataContexts();
SQLParserRule sqlParserRule =
metaDataContexts.getMetaData().getGlobalRuleMetaData().getSingleRule(SQLParserRule.class);
return sqlParserRule.getSQLParserEngine(databaseType);
}
- private List<String> extractMultiStatements(final Pattern pattern, final
String sql) {
- // TODO Multi statements should be split by SQL Parser instead of
simple regexp.
- return Arrays.asList(pattern.split(sql));
- }
-
private QueryContext createQueryContext(final String sql, final
SQLStatement sqlStatement) {
HintValueContext hintValueContext = SQLHintUtils.extractHint(sql);
SQLStatementContext sqlStatementContext = new SQLBindEngine(