This is an automated email from the ASF dual-hosted git repository.
zhangliang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/shardingsphere.git
The following commit(s) were added to refs/heads/master by this push:
new de1d74f97cf Add RuleMetaDataTest (#37084)
de1d74f97cf is described below
commit de1d74f97cfe4e68e1ed8d9af867a958ca52a3f7
Author: Liang Zhang <[email protected]>
AuthorDate: Thu Nov 13 00:50:02 2025 +0800
Add RuleMetaDataTest (#37084)
---
AGENTS.md | 136 +++++++++++++++------
.../infra/metadata/database/rule/RuleMetaData.java | 5 +-
.../metadata/database/rule/RuleMetaDataTest.java | 35 ++++++
3 files changed, 135 insertions(+), 41 deletions(-)
diff --git a/AGENTS.md b/AGENTS.md
index d3b434c9e53..3ceb9005f11 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -30,17 +30,24 @@ This guide is written **for AI coding agents only**. Follow it literally; improv
6. **Observability & governance** loops feed metrics/traffic rules back through `mode`.
Reference this flow when reasoning about new features or debugging regressions.
+## Deployment & Topology Snapshot
+- **Proxy cluster + registry** (ZooKeeper/Etcd): clients speak MySQL/PostgreSQL to `proxy`; governance data lives in `mode` configs.
+- **JDBC embedded**: applications embed `jdbc` driver with sharding/encryption rules shipped via YAML/Spring configs.
+- **Hybrid**: traffic governance and observability via `mode` while compute happens inside applications.
+Mention which topology you target, the registry used, and any compatibility constraints (e.g., MySQL 5.7 vs 8.0) when proposing changes.
+
## Design Playbook
- **Patterns to lean on:** builder/factory helpers in `infra`, SPI-based extension points, immutable DTOs for plan descriptions, explicit strategy enums for behavior toggles.
- **Anti-patterns:** duplicating SQL parsing logic, bypassing metadata caches, silent fallbacks when configuration is invalid, adding static singletons in shared modules.
- **Known pitfalls:** routing regressions when skipping shadow rules, timezone drift when mocking time poorly, forgetting to validate both standalone and cluster (`mode`) settings, missing ASF headers in new files.
- **Success recipe:** describe why a change is needed, point to affected data flow step, keep public APIs backwards compatible, and document defaults in `docs`.
+- **Case in point:** a prior shadow-rule regression was fixed by (1) reproducing via `proxy` + sample config, (2) adding a `kernel` unit test covering the skipped branch, (3) updating docs with the exact YAML flag—mirror that discipline for new features.
## AI Execution Workflow
1. **Intake & Clarify** — restate the ask, map affected modules, confirm sandbox/approval/network constraints.
2. **Plan & Reason** — write a multi-step plan with checkpoints (analysis, edits, tests). Align scope with release tempo (prefer incremental fixes unless told otherwise).
3. **Implement** — touch only necessary files, reuse abstractions, keep ASF headers.
-4. **Validate** — choose the smallest meaningful command; if blocked (sandbox, missing deps), explain what would have run and why it matters.
+4. **Validate** — choose the smallest meaningful command, announce the intent before execution, summarize exit codes afterward; if blocked (sandbox, missing deps), explain what would have run and why it matters.
5. **Report** — lead with intent, list edited files with rationale and line references, state verification results, propose next actions.
## Tooling & Verification Matrix
@@ -53,59 +60,114 @@ Reference this flow when reasoning about new features or debugging regressions.
| `./mvnw spotless:check -Pcheck` | Format check only (fast lint) | When sandbox forbids writes or before pushing |
| `./mvnw test jacoco:check@jacoco-check -Pcoverage-check` | Enforce Jacoco thresholds | When coverage requirements are mentioned or when adding new features |
| `./mvnw -pl {module} -DskipITs -Dspotless.skip=true test` | Quick lint-free smoke (unit tests only) | To shorten feedback loops during iteration |
-
-Always describe command intent before execution and summarize exit codes / key output afterwards.
+| `./mvnw -pl {module} -DskipTests spotbugs:check checkstyle:check` | Static analysis without running tests | Before landing riskier refactors or when reviewing user patches |
+| `./mvnw -pl proxy -am -DskipTests package && shardingsphere-proxy/bin/start.sh -c conf/perf.yaml` | Proxy packaging + lightweight perf smoke | When change may influence runtime characteristics; capture QPS/latency deltas |
## Testing Expectations
- Use JUnit 5 + Mockito; tests mirror package paths and follow the `ClassNameTest` convention.
- Method names read `assertXxxCondition`; structure tests as Arrange–Act–Assert sections with explicit separators/comments when clarity drops.
- Mock databases, time, and network boundaries; build POJOs directly.
- When Jacoco fails, open `{module}/target/site/jacoco/index.html`, note uncovered branches, and explain how new tests address them.
+- Need a quick coverage view? Run `./mvnw -pl {module} -am -Djacoco.skip=false test jacoco:report` and open `{module}/target/site/jacoco/index.html`.
+
+### Unit Test Style Recap
+- Mirror production package paths, keep tests named `ClassNameTest`, and express assertions through `assertXxxCondition` methods.
+- Follow a strict Arrange–Act–Assert narrative; add inline separators/comments only when complex flows would otherwise be unclear.
+- Prefer Mockito mocks over heavyweight fixtures for external systems (DB, time, network) while directly instantiating simple POJOs.
+- Validate with module-scoped Maven commands such as `./mvnw test -pl {module} -am`, then inspect Jacoco reports to plug remaining coverage gaps.
+
+### Test Request Auto-Directives
+- Whenever the AI detects that a task involves authoring or updating unit tests, it must automatically:
+  1. Apply the style and dependency rules above (JUnit 5 + Mockito, `ClassNameTest`, `assertXxxCondition`, AAA structure, external dependencies mocked).
+  2. Design tests that pass on the first execution and reason about them using `./mvnw test -pl {module} -am` (or an equivalent command); when fixtures grow heavy, prefer mocks and document the trade-off.
+  3. Target 100% statement and branch coverage for the relevant classes and paths, running `./mvnw test jacoco:check@jacoco-check -Pcoverage-check` when needed and summarizing coverage in the report.
+  4. Leave truly unreachable dead code uncovered only if the report lists the file and line numbers, explains why it is unreachable, and states whether cleanup is recommended.
+
+### How To Ask Me (Tests)
+- "Implement or update unit tests for {class|scenario} following the AGENTS.md testing guidelines."
+- "Ensure the tests pass in one shot; reason against `./mvnw test -pl {module} -am` (or an equivalent command) and describe the mock/fixture strategy."
+- "Cover every branch of {module}/{class}, run `./mvnw test jacoco:check@jacoco-check -Pcoverage-check`, and report the coverage results."
+- "If dead code remains uncovered, explain the reason and cite the exact location."
+
+## Run, Debug & Triage
+
+| Scenario | How to run / inspect | AI response pattern |
+| --- | --- | --- |
+| Proxy quick start | `./mvnw -pl proxy -am package`; run `shardingsphere-proxy/bin/start.sh -c conf/server.yaml` using configs from `examples/src/resources/conf` | Record command + exit code, cite config path, include protocol info if issues arise |
+| JDBC smoke | `./mvnw -pl jdbc -am test -Dtest=YourTest` with datasource configs copied from `examples` | Note which test ran, describe datasource setup, mention log excerpts on failure |
+| Config change validation | Update both standalone `server.yaml` and cluster `mode/` configs; document defaults under `docs/content` | Explain affected deployment mode(s), show sample snippet, list docs touched |
+| Failure triage | Gather `proxy/logs/` and `target/surefire-reports`; capture error codes/messages | Quote relevant log lines, map them to data-flow steps, propose next diagnostic |
+| SQL routes wrong/missing shards | Check feature rule configs, metadata freshness, parser dialect | Provide reproduction SQL + config snippet, point to impacted module (`features`/`kernel`), add/plan targeted tests |
+| `jacoco:check` fails | Review `{module}/target/site/jacoco` for uncovered branches | Describe uncovered branch, add focused unit tests, rerun module tests |
+| Proxy won’t start | Validate `conf/server.yaml`, mode settings, port conflicts; reuse configs from `examples` | Share exact log snippet, list configs inspected, suggest fix without editing generated files |
+| Spotless/checkstyle errors | Run `./mvnw spotless:apply -Pcheck [-pl module]` (or `spotless:check` in read-only) | Mention command result, confirm ASF header/import order adjustments |
+| Sandbox/network block | Command denied due to sandbox or dependency fetch | State attempted command + purpose, ask for approval or alternative artifact |
+
+## Compatibility, Performance & External Systems
+- **Database/protocol support:** note targeted engines (MySQL 5.7/8.0, PostgreSQL 13+, openGauss, etc.) and ensure new behavior stays backward compatible; link to affected dialect files.
+- **Registry & config centers:** `mode` integrates with ZooKeeper, Etcd, Consul; describe tested registry and highlight compatibility risks when editing governance logic.
+- **Metrics/observability:** mention if changes touch agent plugins, Prometheus exporters, or tracing hooks; reference the module (`agent`, `infra/metrics`, `docs/content/dev-guide/observability`).
+- **Performance guardrails:** when runtime impact is possible, capture baseline vs new latency/QPS from perf smoke and include CPU/memory observations.
-## Run & Debug Cookbook
-- **Proxy quick start:** `./mvnw -pl proxy -am package` then run `shardingsphere-proxy/bin/start.sh -c conf/server.yaml`. Point configs to samples under `examples/src/resources/conf`.
-- **JDBC smoke:** run `./mvnw -pl jdbc -am test -Dtest=YourTest` after wiring datasource configs from `examples`.
-- **Config changes:** document defaults in `docs/content` and ensure both standalone (`server.yaml`) and cluster (`mode/`) configs include the new knob.
-- **Failure triage:** collect logs under `proxy/logs/`, inspect `target/surefire-reports` for unit tests, and mention relevant error codes/messages in the report.
+## AI Collaboration Patterns
-## Troubleshooting Playbook
+| Scenario | Include in the response |
+| --- | --- |
+| Change request | Goal, constraints, suspected files/modules, planned validation command |
+| Code review | Observed issue, impact/risk, suggested fix or follow-up test |
+| Status update | Work done, verification status (commands + exit codes), remaining risks/TODOs |
+| Failure/blocker | Command attempted, why it failed (sandbox/policy/log), approval or artifact required next |
-| Symptom | Likely cause | AI response pattern |
-| --- | --- | --- |
-| SQL routed incorrectly or misses shards | Feature rule (shadow/readwrite) skipped, metadata stale, parser mis-dialect | Identify data-flow step impacted (usually `features`/`kernel`), cite configs under `examples`, add reproduction SQL, propose targeted test in `kernel` or `features` |
-| `jacoco:check` fails | New code paths lack tests or mocks bypass coverage | Describe uncovered branches from `target/site/jacoco`, add focused unit tests, rerun module-level tests |
-| Proxy fails to start | Missing configs, port conflicts, server.yaml mismatch with mode config | Quote exact log snippet, point to `examples` config used, suggest verifying `conf/server.yaml` + `mode` sections, avoid editing generated files |
-| Spotless/checkstyle errors | Imports/order mismatched, header missing | Run `./mvnw spotless:apply -Pcheck [-pl module]`, ensure ASF header present, mention command result |
-| Integration blocked by sandbox/network | Restricted command or dependency fetch | State attempted command, why it matters, what approval or artifact is needed; wait for explicit user go-ahead |
+- **Anti-patterns:** decline vague orders (e.g., “optimize stuff”) and ask for module + expected outcome instead of guessing.
+- **Hand-off checklist:** intent, edited files + rationale, executed commands/results, open risks, related issues/PRs.
-## AI Collaboration Patterns
-- **Prompt templates:**
- - Change request: “Goal → Constraints → Files suspected → Desired validation.”
- - Code review: “Observed issue → Impact → Suggested fix.”
- - Status update: “What changed → Verification → Pending risks.”
-- **Anti-pattern prompts:** Avoid vague asks like “optimize stuff” or instructions lacking module names; request clarification instead of guessing.
-- **Hand-off checklist:** intent, touched files with reasons, commands run + results, open risks/TODOs, references to issues/PRs if mentioned.
-- **Failure responses:** when blocked by sandbox/policy, state the attempted action, why it matters, and what approval or artifact is needed next.
-
-### Module-oriented prompt hints
-- **Parser adjustments:** specify dialect, target SQL sample, expected AST changes, and downstream modules consuming the parser output.
-- **Kernel routing strategy:** describe metadata shape (table count, binding rules), existing rule config, and which `features` hook participates.
-- **Proxy runtime fixes:** include startup command, config file path, observed log lines, and client protocol (MySQL/PostgreSQL).
-- **Docs/config updates:** mention audience (user/admin), file paths under `docs/content`, and whether translation or screenshots exist.
+## Release & Rollback Rituals
+1. **Pre-change:** restate why current release train needs the change, cite relevant issue/PR/decision doc, note affected modules and configs.
+2. **Post-implementation:** record tests + perf smokes, confirm Spotless/Jacoco/static analysis status, update docs/configs, flag translation/backport tasks if docs span multiple locales.
+3. **Rollback-ready:** describe simple revert steps (e.g., disable knob, revert YAML, roll back module), list files to touch, and mention how to confirm rollback success.
+4. **Communication:** include release note snippets or doc anchor updates so reviewers can copy/paste.
## Collaboration & Escalation
- Commit messages use `module: intent` (e.g., `kernel: refine route planner`) and cite why the change exists.
- Reviews focus on risks first (regressions, coverage gaps, configuration impact) before polish.
+- Approval request template: “Command → Purpose → Sandbox limitation/side effects → Expected output”; include why escalation is justified.
- If repo state or sandbox limits conflict with `CODE_OF_CONDUCT.md`, stop immediately and request direction—do not attempt workarounds.
+## Prompt & Reference Quick Sheet
+
+| Area | Include when prompting/reporting | Key references |
+| --- | --- | --- |
+| Parser / dialect | Target database version, sample SQL, expected AST deltas, downstream module depending on result | `parser/src/**`, `docs/content/dev-guide/sql-parser` |
+| Kernel routing & features | Metadata shape (tables, binding, sharding/encryption rules), knob values, impacted `features` hook | `kernel/src/**`, `features/**`, configs under `examples` |
+| Proxy runtime & governance | Startup command, `conf/server.yaml`, registry/mode config, log excerpts, client protocol | `proxy/**`, `mode/**`, `examples/src/resources/conf`, `docs/content/dev-guide/proxy` |
+| Observability/agent | Metrics/tracing plugin touched, expected signal format, dashboards affected | `agent/**`, `infra/metrics/**`, `docs/content/dev-guide/observability` |
+| Docs/config updates | Audience (user/admin), file path, translation needs, screenshots/assets | `docs/content/**`, `README*.md`, translation directories |
+| Process & releases | Commit/PR intent, release-train notes, maturity level, rollback strategy | `CONTRIBUTING.md`, `MATURITY.md`, `README.md` |
+| Compliance & conduct | Any risk of license header, third-party code, inclusive language issues | `CODE_OF_CONDUCT.md`, `LICENSE`, `NOTICE` |
+
+## Mocking Guidelines
+- Prefer Mockito mocks over bespoke test fixtures; avoid introducing new fixture classes unless a scenario cannot be expressed with mocks.
+- When different rule/attribute types must be distinguished, declare marker interfaces and mock them with Mockito instead of writing concrete fixture implementations.
+- If the project already exposes a suitable interface (e.g., `ShardingSphereRule`), create distinct Mockito mocks directly (`mock(MyRule.class)`), and only introduce marker interfaces when no existing type can represent the scenario.
+- Test method names must reference the production method under test (e.g., `assertGetInUsedStorageUnitNameAndRulesMapWhen...`) so intent is unambiguous to reviewers.
+- Never use reflection to test private helpers; unit tests must target public APIs. If a private branch is unreachable via public methods, document it (file + reason) instead of adding a reflective test.
+- Mock dependencies that need heavy external environments (database, cache, registry, network) instead of provisioning real instances.
+- When constructing an object requires more than two nested builders/factories, prefer mocking the object.
+- If an SPI returns an object that is not the subject under test, use `mockStatic` (or equivalent) to provide canned behavior.
+- When only a few properties of a complex object are used, mock it rather than assembling the full graph.
+- Do not mock simple objects that can be instantiated directly with `new`.
+- Prefer plain mocks; only opt into Mockito’s `RETURNS_DEEP_STUBS` when cascading (chained) interactions truly require it.
+
+## AI Self-Check Checklist (Pre-Submission Must-Do)
+1. Instruction precedence: `CODE_OF_CONDUCT.md` → user request → this guide → other docs. Are any conflicts unresolved?
+2. Are the edited files minimal, do they include ASF headers, and do they pass Spotless?
+3. Do all semantic changes have corresponding tests (or a rationale if not) with commands/exit codes recorded?
+4. Does run/triage information cite real file paths plus log/config snippets?
+5. Does the report list touched files, verification results, known risks, and recommended next steps?
+6. For new or updated tests, did you inspect the target production code paths, enumerate the branches being covered, and explain that in your answer?
+7. Before finishing, did you re-check that the latest verification command succeeded (rerun if needed) so the final state is green?
+
## Brevity & Signal
- Prefer tables/bullets over prose walls; cite file paths (`kernel/src/...`) directly.
- Eliminate repeated wording; reference prior sections instead of restating.
- Default to ASCII; only mirror existing non-ASCII content when necessary.
-
-## Reference Pointers
-- `CODE_OF_CONDUCT.md` — legal baseline; cite line numbers when flagging violations.
-- `CONTRIBUTING.md` — human contributor workflow; reference when mirroring commit/PR styles.
-- `docs/content` — user-facing docs; add/update pages when introducing config knobs or behavior changes.
-- `examples` configs — canonical samples for proxy/JDBC; always mention which sample you reused.
-- `MATURITY.md` / `README.md` — high-level positioning; useful when summarizing project context for reports.
diff --git a/infra/common/src/main/java/org/apache/shardingsphere/infra/metadata/database/rule/RuleMetaData.java b/infra/common/src/main/java/org/apache/shardingsphere/infra/metadata/database/rule/RuleMetaData.java
index bdba62d6671..d6a6b7f10d8 100644
--- a/infra/common/src/main/java/org/apache/shardingsphere/infra/metadata/database/rule/RuleMetaData.java
+++ b/infra/common/src/main/java/org/apache/shardingsphere/infra/metadata/database/rule/RuleMetaData.java
@@ -119,10 +119,7 @@ public final class RuleMetaData {
    private Map<String, Collection<Class<? extends ShardingSphereRule>>> getInUsedStorageUnitNameAndRulesMap(final ShardingSphereRule rule, final Collection<String> inUsedStorageUnitNames) {
        Map<String, Collection<Class<? extends ShardingSphereRule>>> result = new LinkedHashMap<>();
        for (String each : inUsedStorageUnitNames) {
-           if (!result.containsKey(each)) {
-               result.put(each, new LinkedHashSet<>());
-           }
-           result.get(each).add(rule.getClass());
+           result.computeIfAbsent(each, unused -> new LinkedHashSet<>()).add(rule.getClass());
        }
        return result;
    }
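The hunk above swaps an explicit `containsKey`/`put` guard for `Map.computeIfAbsent`, which creates the bucket lazily and returns it in a single call. A standalone sketch of the same idiom — the class name, `group` helper, and sample keys are illustrative, not project code:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;

public class ComputeIfAbsentSketch {

    // Groups a value under each key; the LinkedHashSet bucket is created only
    // the first time a key is seen, and Set semantics absorb duplicate adds.
    static Map<String, Collection<String>> group(final Collection<String> keys, final String value) {
        Map<String, Collection<String>> result = new LinkedHashMap<>();
        for (String each : keys) {
            // Replaces: if (!result.containsKey(each)) { result.put(each, new LinkedHashSet<>()); }
            //           result.get(each).add(value);
            result.computeIfAbsent(each, unused -> new LinkedHashSet<>()).add(value);
        }
        return result;
    }

    public static void main(final String[] args) {
        // "ds_0" appears twice but keeps a single entry for the same value.
        System.out.println(group(Arrays.asList("ds_0", "ds_0", "ds_1"), "ShardingRule"));
        // prints {ds_0=[ShardingRule], ds_1=[ShardingRule]}
    }
}
```

The rewrite is behavior-preserving: `LinkedHashMap`/`LinkedHashSet` keep the insertion order the original loop relied on.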
diff --git a/infra/common/src/test/java/org/apache/shardingsphere/infra/metadata/database/rule/RuleMetaDataTest.java b/infra/common/src/test/java/org/apache/shardingsphere/infra/metadata/database/rule/RuleMetaDataTest.java
index 2c12e4f8cec..eb549ed4baa 100644
--- a/infra/common/src/test/java/org/apache/shardingsphere/infra/metadata/database/rule/RuleMetaDataTest.java
+++ b/infra/common/src/test/java/org/apache/shardingsphere/infra/metadata/database/rule/RuleMetaDataTest.java
@@ -20,6 +20,7 @@ package org.apache.shardingsphere.infra.metadata.database.rule;
import org.apache.shardingsphere.infra.datanode.DataNode;
import org.apache.shardingsphere.infra.rule.ShardingSphereRule;
import org.apache.shardingsphere.infra.rule.attribute.RuleAttribute;
+import org.apache.shardingsphere.infra.rule.attribute.RuleAttributes;
import org.apache.shardingsphere.infra.rule.attribute.datanode.DataNodeRuleAttribute;
import org.apache.shardingsphere.infra.rule.attribute.datasource.DataSourceMapperRuleAttribute;
import org.apache.shardingsphere.infra.rule.scope.GlobalRule;
@@ -105,6 +106,40 @@ class RuleMetaDataTest {
assertTrue(actual.containsKey("foo_db"));
}
+    @Test
+    void assertGetInUsedStorageUnitNameAndRulesMapWhenRuleClassAddedForExistingStorageUnit() {
+        ShardingSphereRule firstRule = mockRuleMetaDataShardingSphereRuleFixture("shared_ds");
+        ShardingSphereRule secondRule = mockShardingSphereRule();
+        Map<String, Collection<Class<? extends ShardingSphereRule>>> actual = new RuleMetaData(Arrays.asList(firstRule, secondRule)).getInUsedStorageUnitNameAndRulesMap();
+        Collection<Class<? extends ShardingSphereRule>> ruleClasses = actual.get("shared_ds");
+        assertThat(ruleClasses.size(), is(2));
+        assertTrue(ruleClasses.contains(firstRule.getClass()));
+        assertTrue(ruleClasses.contains(secondRule.getClass()));
+    }
+
+    @Test
+    void assertGetInUsedStorageUnitNameAndRulesMapWhenDuplicatedRuleClassSkippedForExistingStorageUnit() {
+        ShardingSphereRule duplicatedRule = mockRuleMetaDataShardingSphereRuleFixture("dup_ds");
+        RuleMetaData metaData = new RuleMetaData(Arrays.asList(duplicatedRule, duplicatedRule));
+        assertThat(metaData.getInUsedStorageUnitNameAndRulesMap().get("dup_ds").size(), is(1));
+    }
+
+    private RuleMetaDataShardingSphereRuleFixture mockRuleMetaDataShardingSphereRuleFixture(final String storageUnitName) {
+        RuleMetaDataShardingSphereRuleFixture result = mock(RuleMetaDataShardingSphereRuleFixture.class, RETURNS_DEEP_STUBS);
+        DataSourceMapperRuleAttribute ruleAttribute = mock(DataSourceMapperRuleAttribute.class);
+        when(ruleAttribute.getDataSourceMapper()).thenReturn(Collections.singletonMap("logic_db", Collections.singleton(storageUnitName)));
+        when(result.getAttributes()).thenReturn(new RuleAttributes(ruleAttribute));
+        return result;
+    }
+
+    private ShardingSphereRule mockShardingSphereRule() {
+        ShardingSphereRule result = mock(ShardingSphereRule.class, RETURNS_DEEP_STUBS);
+        DataSourceMapperRuleAttribute ruleAttribute = mock(DataSourceMapperRuleAttribute.class);
+        when(ruleAttribute.getDataSourceMapper()).thenReturn(Collections.singletonMap("logic_db", Collections.singleton("shared_ds")));
+        when(result.getAttributes()).thenReturn(new RuleAttributes(ruleAttribute));
+        return result;
+    }
+
@Test
void assertGetAttributes() {
assertTrue(ruleMetaData.getAttributes(RuleAttribute.class).isEmpty());
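The added tests follow the guide's own conventions: method names beginning with `assertXxx` that reference the production method under test, an Arrange–Act–Assert layout, and Mockito mocks for rules. A framework-free sketch of that layout only — names and data here are hypothetical, and real project tests use JUnit 5 `@Test` methods with Mockito:

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class AaaLayoutSketch {

    // The name references the behavior under test, mirroring assertXxxCondition.
    static boolean assertDuplicatedRuleClassStoredOnce() {
        // Arrange: a Set bucket like the one computeIfAbsent creates.
        Set<String> ruleClasses = new LinkedHashSet<>();
        // Act: register the same rule class twice.
        ruleClasses.add("ShardingRule");
        ruleClasses.add("ShardingRule");
        // Assert: Set semantics keep a single entry.
        return 1 == ruleClasses.size();
    }

    public static void main(final String[] args) {
        System.out.println(assertDuplicatedRuleClassStoredOnce()); // prints true
    }
}
```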