zhztheplayer commented on issue #6790:
URL: https://github.com/apache/incubator-gluten/issues/6790#issuecomment-2295689315

   > That seems to be an issue with our Spark version 3.2.1
   
   That could be the case, but I am not sure; I only tested with 3.2.2.
   
   I think the issue actually occurs at this stage of your query plan:
   
   ```
   BroadcastQueryStage 27
             +- ColumnarBroadcastExchange 
HashedRelationBroadcastMode(List(input[0, string, true], input[1, string, 
true], input[2, string, true], (input[4, int, false] - 1)),false), [id=#3024]
                +- BroadcastQueryStage 26
                   +- ReusedExchange [i_category#531, i_brand#527, cc_name#609, 
sum_sales#147, rn#638], ColumnarBroadcastExchange 
HashedRelationBroadcastMode(List(input[0, string, true], input[1, string, 
true], input[2, string, true], (input[4, int, false] + 1)),false), [id=#2832]
   ```
   
   The reason is that one broadcast exchange cannot be the child of another broadcast exchange. This limitation comes from vanilla Spark.
   
   Perhaps you can add a breakpoint at the creation of the two `ColumnarBroadcastExchange`s to debug the program and see why two exchanges rather than one are needed, since the query planner should eventually produce a plan with only one exchange.
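To illustrate the limitation, here is a minimal, self-contained Scala sketch that models a physical plan as a simple tree and flags the disallowed "broadcast exchange under another broadcast exchange" shape. The node types and helper names are illustrative only; they are not real Spark or Gluten APIs.

```scala
// Simplified stand-in for a physical plan node (NOT Spark's SparkPlan).
sealed trait PlanNode { def children: Seq[PlanNode] }
case class BroadcastExchange(children: Seq[PlanNode]) extends PlanNode
case class Other(name: String, children: Seq[PlanNode]) extends PlanNode

// True if the subtree rooted at `node` contains any broadcast exchange.
def containsBroadcast(node: PlanNode): Boolean = node match {
  case _: BroadcastExchange => true
  case _ => node.children.exists(containsBroadcast)
}

// True if some broadcast exchange has another broadcast exchange anywhere
// below it -- the nested shape that vanilla Spark does not allow.
def hasNestedBroadcast(node: PlanNode): Boolean = node match {
  case BroadcastExchange(cs) => cs.exists(containsBroadcast)
  case _ => node.children.exists(hasNestedBroadcast)
}
```

A plan shaped like the one quoted above (a `ColumnarBroadcastExchange` whose child stage ultimately reuses another broadcast exchange) would make `hasNestedBroadcast` return `true`, which is the condition you would want to trace back to its origin in the planner.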


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

