Github user KaiXinXiaoLei commented on the issue:
https://github.com/apache/spark/pull/20670
    @srowen let me re-describe the problem. I have a small table `ls` with one row, and a big table `catalog_sales` with one hundred billion rows. In the big table, only about one million rows have a non-null `cs_order_number`. I join these tables with the query: `select ls.cs_order_number from ls left semi join catalog_sales cs on ls.cs_order_number = cs.cs_order_number`. When the job runs, there is a data skew, and I found that the null values cause it.
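
    For illustration, here is a rough sketch of the same shape of data, runnable in spark-shell with toy row counts; only the table and column names come from the real case, everything else is made up:

```scala
// Rough sketch only: toy row counts, intended for spark-shell
// (where `spark` is already defined and implicits can be imported).
import org.apache.spark.sql.functions._
import spark.implicits._

// Small table: a single row with a non-null key.
Seq(42L).toDF("cs_order_number").createOrReplaceTempView("ls")

// Big table: the key is null for most rows, mimicking the skewed column.
spark.range(0, 1000000L)
  .select(when($"id" % 100 === 0, $"id").as("cs_order_number"))
  .createOrReplaceTempView("catalog_sales")

val q = spark.sql(
  """select ls.cs_order_number
    |from ls left semi join catalog_sales cs
    |on ls.cs_order_number = cs.cs_order_number""".stripMargin)

// As reported above, the optimized plan shows `isnotnull` only on the left side.
q.explain(true)
```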
    The join condition is `ls.cs_order_number = cs.cs_order_number`. In the Optimized Logical Plan, the left table has a `Filter isnotnull(cs_order_number#1)` operator, so I think the right table should also have a `Filter isnotnull` operator. The right table would then filter out the null values first and join with the left table afterwards, so the null values would no longer cause the data skew. With this idea applied, my SQL runs successfully.
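
    One way to apply the idea by hand (just a sketch of the rewrite, not necessarily what this patch does internally) is to filter the null keys on the big table explicitly before the semi join, continuing the toy example above:

```scala
// Sketch of applying the idea manually: drop the null keys on the big
// (right) table before the semi join so they never reach the shuffle.
val fixed = spark.sql(
  """select ls.cs_order_number
    |from ls left semi join
    |  (select cs_order_number from catalog_sales
    |   where cs_order_number is not null) cs
    |on ls.cs_order_number = cs.cs_order_number""".stripMargin)
fixed.explain(true)
```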