terrymanu commented on issue #37851:
URL: https://github.com/apache/shardingsphere/issues/37851#issuecomment-3800005226
Problem understanding
- Env: ShardingSphere-Proxy 5.5.2 with a PostgreSQL 17.6 backend; sharded table
t21, sharding key colocated_key (HASH_MOD, sharding count 8).
- SQL: a window aggregate bit_or(ref_0.c37) OVER (PARTITION BY ref_0.c36 ORDER
BY ...), whose result is then used in the predicate (c37) BETWEEN (c40) AND
(window_value); a hypothetical reconstruction follows this list.
- Symptom: the total row count matches, but the true/false/null counts of the
predicate differ from single-node PostgreSQL; the logs show the window query
being pushed down and executed independently on each shard.
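For context, the reported query roughly has the shape sketched below. The table
and column names (t21, colocated_key, c36, c37, c40, ref_0) come from the
report; the column types, the ordering column c38, and the way the BETWEEN
predicate is attached are assumptions made only for illustration, since the
original ORDER BY list is elided above and the real query may combine the
pieces differently.

```sql
-- Hypothetical reconstruction of the reported query shape. Names follow the
-- report; c38 stands in for the elided ORDER BY column, and the types and
-- subquery structure are illustrative only.
CREATE TABLE t21 (
    colocated_key INT,   -- sharding key (HASH_MOD, sharding count 8)
    c36           INT,   -- window PARTITION BY column, not the sharding key
    c37           INT,
    c38           INT,   -- placeholder for the elided ordering column(s)
    c40           INT
);

SELECT *
FROM (
    SELECT ref_0.c37,
           ref_0.c40,
           bit_or(ref_0.c37) OVER (PARTITION BY ref_0.c36
                                   ORDER BY ref_0.c38) AS window_value
    FROM t21 AS ref_0
) AS sub
WHERE (sub.c37) BETWEEN (sub.c40) AND (sub.window_value);
```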
Root cause
- The standard sharding pipeline (Simple Push Down) runs the window function
per shard and only merges the resulting rows; it cannot rebuild a global window
when PARTITION BY (c36) is not aligned with the sharding key (colocated_key).
See the sketch after this list.
- Global window semantics would require a global partition/order plan,
which the current default sharding execution does not provide. SQL Federation
is the component that can do global planning, but it’s marked experimental and
not enabled by default
(docs/document/content/features/sql-federation/limitation.en.md; sharding flow
described in docs/document/content/reference/sharding/_index.cn.md).
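To make the alignment point concrete, the two statements below (written against
the hypothetical t21 schema sketched earlier, c38 again being a placeholder)
show when per-shard evaluation is and is not semantically safe. This is an
illustration of the general principle only, not of ShardingSphere's internal
routing decisions.

```sql
-- Push-down is safe in principle: PARTITION BY the sharding key means every
-- row of a window partition lives on a single shard, so each shard sees its
-- partitions in full and the merged rows carry correct window values.
SELECT bit_or(c37) OVER (PARTITION BY colocated_key ORDER BY c38) AS window_value
FROM t21;

-- Push-down is not safe: c36 is not the sharding key, so rows sharing the same
-- c36 are spread across shards and each shard computes a fragment-local bit_or
-- over an incomplete partition.
SELECT bit_or(c37) OVER (PARTITION BY c36 ORDER BY c38) AS window_value
FROM t21;
```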
Problem analysis
- Logs show multiple physical SQLs to ds_0..ds_4, confirming the standard
sharding path (no federation).
- Because rows sharing the same c36 live on different shards, each shard's
local window frames produce different bit_or values; the final merge step only
combines rows, it does not rebuild window frames, so boolean results flip. A
self-contained demonstration follows this list.
- This is an execution-capability gap rather than a backend PostgreSQL
issue.
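The divergence can be reproduced on a single plain PostgreSQL instance by
splitting one c36 partition into two artificial "shards". The column names
follow the report; the shard and ord columns and the literal values are made up
for the illustration.

```sql
-- Four rows of a single c36 partition, artificially split across two "shards".
WITH sample(shard, c36, c37, ord) AS (
    VALUES (0, 1, 1, 1),   -- row stored on "shard 0"
           (1, 1, 2, 2),   -- row stored on "shard 1"
           (0, 1, 4, 3),
           (1, 1, 8, 4)
)
SELECT ord, shard, c37,
       -- what each shard computes on its local rows under Simple Push Down
       bit_or(c37) OVER (PARTITION BY shard, c36 ORDER BY ord) AS pushed_down_value,
       -- what a single PostgreSQL node computes over the whole partition
       bit_or(c37) OVER (PARTITION BY c36 ORDER BY ord)        AS global_value
FROM sample
ORDER BY ord;
-- pushed_down_value: 1, 2, 5, 10   vs.   global_value: 1, 3, 7, 15
-- Any predicate built on the window value (such as the BETWEEN in the report)
-- can therefore evaluate differently once the per-shard rows are merged.
```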
Problem conclusion
- Not a backend bug; it is an unsupported scenario in the default sharding
execution path: cross-shard window functions need global partition/order, which
Simple Push Down does not supply.