sahnib opened a new pull request, #46035:
URL: https://github.com/apache/spark/pull/46035

   
   ### What changes were proposed in this pull request?
   
   Streaming queries with a Union of two data streams followed by an Aggregate (groupBy) can produce incorrect results if the grouping key is a constant literal for the duration of a micro-batch.
   
   The query produces incorrect results because the optimizer recognizes the literal value in the grouping key as foldable and replaces the grouping key expression with the actual literal value. This optimization is correct for batch queries. However, streaming queries also read information from the StateStore, and the output contains both the results from the StateStore (computed in previous micro-batches) and the data from the input sources (computed in this micro-batch). The HashAggregate node after the StateStore always reads the grouping key value as the optimized literal (because the grouping key expression has been folded into a literal by the optimizer). This ends up replacing keys in the StateStore with the literal value, resulting in incorrect output.
   
   See the example logical and physical plans below for a query performing a union of two data streams followed by a groupBy. Note that the name#4 expression has been optimized to the literal ds1. The streaming Aggregate adds a StateStoreSave node as a child of the HashAggregate; however, any grouping key read from the StateStore will still be read as ds1 due to the optimization.
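   For reference, a minimal sketch of a query with this shape is shown below. It is illustrative only: the stream names, literals, and memory sink are assumptions, not the exact test added by this PR.

   ```scala
   import org.apache.spark.sql.{SQLContext, SparkSession}
   import org.apache.spark.sql.execution.streaming.MemoryStream
   import org.apache.spark.sql.functions.lit

   val spark = SparkSession.builder().master("local[2]").getOrCreate()
   import spark.implicits._
   implicit val sqlContext: SQLContext = spark.sqlContext

   val input1 = MemoryStream[Int]
   val input2 = MemoryStream[Int]

   // Each leg tags its rows with a constant literal, so within a micro-batch the
   // grouping key is a foldable expression.
   val df1 = input1.toDF().select(lit("ds1").as("name"))
   val df2 = input2.toDF().select(lit("ds2").as("name"))

   val counts = df1.union(df2).groupBy("name").count()

   val query = counts.writeStream
     .format("memory")
     .queryName("union_agg")
     .outputMode("complete")
     .start()

   input1.addData(1)
   query.processAllAvailable()   // micro-batch 1: only the "ds1" leg has data

   input2.addData(1)
   query.processAllAvailable()   // micro-batch 2: only the "ds2" leg has data

   // Expected (complete mode): (ds1, 1) and (ds2, 1). Without the fix, the key
   // restored from the StateStore can be rewritten to the folded literal of the
   // current micro-batch, producing incorrect counts.
   spark.table("union_agg").show()
   ```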
   
   
   ### Optimized Logical Plan
   
   ```
   === Applying Rule org.apache.spark.sql.catalyst.optimizer.FoldablePropagation ===

   === Old Plan ===

   WriteToMicroBatchDataSource MemorySink, eb67645e-30fc-41a8-8006-35bb7649c202, Complete, 0
   +- Aggregate [name#4], [name#4, count(1) AS count#31L]
      +- Project [ds1 AS name#4]
         +- StreamingDataSourceV2ScanRelation[value#1] MemoryStreamDataSource


   === New Plan ===

   WriteToMicroBatchDataSource MemorySink, eb67645e-30fc-41a8-8006-35bb7649c202, Complete, 0
   +- Aggregate [ds1], [ds1 AS name#4, count(1) AS count#31L]
      +- Project [ds1 AS name#4]
         +- StreamingDataSourceV2ScanRelation[value#1] MemoryStreamDataSource


   ====
   ```
   
   
   ### Corresponding Physical Plan
   
   ```
   WriteToDataSourceV2 MicroBatchWrite[epoch: 0, writer: org.apache.spark.sql.execution.streaming.sources.MemoryStreamingWrite@2b4c6242], org.apache.spark.sql.execution.datasources.v2.DataSourceV2Strategy$$Lambda$3143/1859075634@35709d26
   +- HashAggregate(keys=[ds1#39], functions=[finalmerge_count(merge count#38L) AS count(1)#30L], output=[name#4, count#31L])
      +- StateStoreSave [ds1#39], state info [ checkpoint = file:/tmp/streaming.metadata-e470782a-18a3-463c-9e61-3a10d0bdf180/state, runId = 4dedecca-910c-4518-855e-456702617414, opId = 0, ver = 0, numPartitions = 5], Complete, 0, 0, 2
         +- HashAggregate(keys=[ds1#39], functions=[merge_count(merge count#38L) AS count#38L], output=[ds1#39, count#38L])
            +- StateStoreRestore [ds1#39], state info [ checkpoint = file:/tmp/streaming.metadata-e470782a-18a3-463c-9e61-3a10d0bdf180/state, runId = 4dedecca-910c-4518-855e-456702617414, opId = 0, ver = 0, numPartitions = 5], 2
               +- HashAggregate(keys=[ds1#39], functions=[merge_count(merge count#38L) AS count#38L], output=[ds1#39, count#38L])
                  +- HashAggregate(keys=[ds1 AS ds1#39], functions=[partial_count(1) AS count#38L], output=[ds1#39, count#38L])
                     +- Project
                        +- MicroBatchScan[value#1] MemoryStreamDataSource
   ```
   
   This PR disables foldable propagation across Streaming Aggregate/Join nodes 
in the logical plan. 
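   As a rough illustration of the approach (a hedged sketch only, assuming the rule simply treats streaming stateful operators as propagation barriers; the helper name below is illustrative, not the literal diff in this PR):

   ```scala
   import org.apache.spark.sql.catalyst.plans.logical.{Aggregate, Join, LogicalPlan}

   // A streaming Aggregate/Join must keep its grouping/join keys as attributes so
   // that rows restored from the StateStore are not re-keyed to a folded literal.
   def isFoldablePropagationBarrier(plan: LogicalPlan): Boolean = plan match {
     case a: Aggregate if a.isStreaming => true
     case j: Join if j.left.isStreaming || j.right.isStreaming => true
     case _ => false
   }
   ```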
   
   ### Why are the changes needed?
   
   The changes are needed to ensure that streaming queries with a literal value as the grouping key or join key produce correct results.
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   ### How was this patch tested?
   
   Added test cases in `sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQueryOptimizationCorrectnessSuite.scala`.
   
   ```
   [info] Run completed in 54 seconds, 150 milliseconds.
   [info] Total number of tests run: 9
   [info] Suites: completed 1, aborted 0
   [info] Tests: succeeded 9, failed 0, canceled 0, ignored 0, pending 0
   [info] All tests passed.
   ```
   
   ### Was this patch authored or co-authored using generative AI tooling?
   
   No

