wangyum opened a new pull request #36045:
URL: https://github.com/apache/spark/pull/36045


   ### What changes were proposed in this pull request?
   
   Use `sideBySide` to format the plan change log in `AdaptiveSparkPlanExec`, so the old and new plans are printed side by side.
   Before:
   ```
   12:08:36.876 ERROR org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec: Plan changed from SortMergeJoin [key#13], [a#23], Inner
   :- Sort [key#13 ASC NULLS FIRST], false, 0
   :  +- ShuffleQueryStage 0
   :     +- Exchange hashpartitioning(key#13, 5), ENSURE_REQUIREMENTS, [id=#110]
   :        +- *(1) Filter (isnotnull(value#14) AND (value#14 = 1))
   :           +- *(1) SerializeFromObject [knownnotnull(assertnotnull(input[0, org.apache.spark.sql.test.SQLTestData$TestData, true])).key AS key#13, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, knownnotnull(assertnotnull(input[0, org.apache.spark.sql.test.SQLTestData$TestData, true])).value, true, false, true) AS value#14]
   :              +- Scan[obj#12]
   +- Sort [a#23 ASC NULLS FIRST], false, 0
      +- ShuffleQueryStage 1
         +- Exchange hashpartitioning(a#23, 5), ENSURE_REQUIREMENTS, [id=#129]
            +- *(2) SerializeFromObject [knownnotnull(assertnotnull(input[0, org.apache.spark.sql.test.SQLTestData$TestData2, true])).a AS a#23, knownnotnull(assertnotnull(input[0, org.apache.spark.sql.test.SQLTestData$TestData2, true])).b AS b#24]
               +- Scan[obj#22]
    to BroadcastHashJoin [key#13], [a#23], Inner, BuildLeft, false
   :- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, false] as bigint)),false), [id=#145]
   :  +- ShuffleQueryStage 0
   :     +- Exchange hashpartitioning(key#13, 5), ENSURE_REQUIREMENTS, [id=#110]
   :        +- *(1) Filter (isnotnull(value#14) AND (value#14 = 1))
   :           +- *(1) SerializeFromObject [knownnotnull(assertnotnull(input[0, org.apache.spark.sql.test.SQLTestData$TestData, true])).key AS key#13, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, knownnotnull(assertnotnull(input[0, org.apache.spark.sql.test.SQLTestData$TestData, true])).value, true, false, true) AS value#14]
   :              +- Scan[obj#12]
   +- ShuffleQueryStage 1
      +- Exchange hashpartitioning(a#23, 5), ENSURE_REQUIREMENTS, [id=#129]
         +- *(2) SerializeFromObject [knownnotnull(assertnotnull(input[0, org.apache.spark.sql.test.SQLTestData$TestData2, true])).a AS a#23, knownnotnull(assertnotnull(input[0, org.apache.spark.sql.test.SQLTestData$TestData2, true])).b AS b#24]
            +- Scan[obj#22]
   ```
   
   After:
   ```
   15:57:59.481 ERROR org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec: Plan changed:
   !SortMergeJoin [key#13], [a#23], Inner                                                                                                                                                                                                                                                                                                                                    BroadcastHashJoin [key#13], [a#23], Inner, BuildLeft, false
   !:- Sort [key#13 ASC NULLS FIRST], false, 0                                                                                                                                                                                                                                                                                                                               :- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, false] as bigint)),false), [id=#145]
    :  +- ShuffleQueryStage 0                                                                                                                                                                                                                                                                                                                                                :  +- ShuffleQueryStage 0
    :     +- Exchange hashpartitioning(key#13, 5), ENSURE_REQUIREMENTS, [id=#110]                                                                                                                                                                                                                                                                                            :     +- Exchange hashpartitioning(key#13, 5), ENSURE_REQUIREMENTS, [id=#110]
    :        +- *(1) Filter (isnotnull(value#14) AND (value#14 = 1))                                                                                                                                                                                                                                                                                                         :        +- *(1) Filter (isnotnull(value#14) AND (value#14 = 1))
    :           +- *(1) SerializeFromObject [knownnotnull(assertnotnull(input[0, org.apache.spark.sql.test.SQLTestData$TestData, true])).key AS key#13, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, knownnotnull(assertnotnull(input[0, org.apache.spark.sql.test.SQLTestData$TestData, true])).value, true, false, true) AS value#14]   :           +- *(1) SerializeFromObject [knownnotnull(assertnotnull(input[0, org.apache.spark.sql.test.SQLTestData$TestData, true])).key AS key#13, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, knownnotnull(assertnotnull(input[0, org.apache.spark.sql.test.SQLTestData$TestData, true])).value, true, false, true) AS value#14]
    :              +- Scan[obj#12]                                                                                                                                                                                                                                                                                                                                           :              +- Scan[obj#12]
   !+- Sort [a#23 ASC NULLS FIRST], false, 0                                                                                                                                                                                                                                                                                                                                 +- ShuffleQueryStage 1
   !   +- ShuffleQueryStage 1                                                                                                                                                                                                                                                                                                                                                   +- Exchange hashpartitioning(a#23, 5), ENSURE_REQUIREMENTS, [id=#129]
   !      +- Exchange hashpartitioning(a#23, 5), ENSURE_REQUIREMENTS, [id=#129]                                                                                                                                                                                                                                                                                                    +- *(2) SerializeFromObject [knownnotnull(assertnotnull(input[0, org.apache.spark.sql.test.SQLTestData$TestData2, true])).a AS a#23, knownnotnull(assertnotnull(input[0, org.apache.spark.sql.test.SQLTestData$TestData2, true])).b AS b#24]
   !         +- *(2) SerializeFromObject [knownnotnull(assertnotnull(input[0, org.apache.spark.sql.test.SQLTestData$TestData2, true])).a AS a#23, knownnotnull(assertnotnull(input[0, org.apache.spark.sql.test.SQLTestData$TestData2, true])).b AS b#24]                                                                                                                             +- Scan[obj#22]
   !            +- Scan[obj#22]
   ```
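
   For reference, the `sideBySide` helper in `org.apache.spark.sql.catalyst.util` is what produces the two-column diff above. The Scala sketch below is an illustrative, self-contained reimplementation of that behaviour written for this description (it is not the Spark helper itself, nor the code touched by this PR; the object and variable names are made up): the two plans are paired line by line, the left column is padded to its widest line, and rows that differ are prefixed with `!`.

   ```scala
   object SideBySideDemo {

     // Pair the two plans line by line, pad the left column to its widest line,
     // and prefix rows that differ with '!' so the changes stand out.
     def sideBySide(left: Seq[String], right: Seq[String]): Seq[String] = {
       val maxLeftSize = left.map(_.length).max
       val paddedLeft = left ++ Seq.fill(math.max(right.size - left.size, 0))("")
       val paddedRight = right ++ Seq.fill(math.max(left.size - right.size, 0))("")
       paddedLeft.zip(paddedRight).map { case (l, r) =>
         (if (l == r) " " else "!") + l + (" " * (maxLeftSize - l.length + 3)) + r
       }
     }

     def main(args: Array[String]): Unit = {
       // Tiny stand-in plans; the real call would pass the line-split string
       // forms of the plan before and after AQE re-optimization.
       val oldPlan = Seq(
         "SortMergeJoin [key#13], [a#23], Inner",
         ":- Sort [key#13 ASC NULLS FIRST], false, 0",
         ":  +- ShuffleQueryStage 0")
       val newPlan = Seq(
         "BroadcastHashJoin [key#13], [a#23], Inner, BuildLeft, false",
         ":- BroadcastExchange HashedRelationBroadcastMode(...), [id=#145]",
         ":  +- ShuffleQueryStage 0")
       println(("Plan changed:" +: sideBySide(oldPlan, newPlan)).mkString("\n"))
     }
   }
   ```

   Presumably the actual change simply feeds the string forms of the old and new physical plans into this helper instead of concatenating them into one flat "Plan changed from ... to ..." message, which is what yields the aligned output above.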
   
   ### Why are the changes needed?
   
   Enhance readability of the plan change log: the old and new plans are aligned side by side, so the differences are easier to spot.
   
   ### Does this PR introduce _any_ user-facing change?
   
   No.
   
   ### How was this patch tested?
   
   Manual testing.
   

