Github user baibaichen commented on the issue:
https://github.com/apache/spark/pull/18652
A naive database join implementation looks like:
```
for each tuple L in the left relation
  for each tuple R in the right relation
    if (L, R) matches the join condition then ...
    else ...
```
Both inner and outer joins conceptually first build a cross join, and then remove
the tuple pairs that don't match the join condition. In the deterministic case,
any optimization is allowed as long as the final result is the same as that of the
computation above.
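Below is a minimal Scala sketch of that naive semantics on plain collections (the `Row` type, data, and names are made up for illustration, not Spark's actual operators):
```scala
// A toy nested-loop inner join: enumerate every tuple pair, keep the matches.
// This mirrors the pseudocode above.
case class Row(id: Int, value: String)

def naiveInnerJoin(left: Seq[Row], right: Seq[Row],
                   condition: (Row, Row) => Boolean): Seq[(Row, Row)] =
  for {
    l <- left              // for each tuple in the left relation
    r <- right             // for each tuple in the right relation
    if condition(l, r)     // keep only pairs matching the join condition
  } yield (l, r)
```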
However, in the non-deterministic case the join has no unique result. For
example, consider the pseudo-random condition `on rand(10) < 0.5`: the same seed
yields the same random sequence, but the final result depends on the order in
which tuple pairs are produced.
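A minimal sketch of that effect (toy data, with `scala.util.Random` standing in for the engine's `rand`): the random sequence is fixed by the seed, but which tuple pair consumes which random number depends on the enumeration order.
```scala
import scala.util.Random

val left  = Seq(1, 2)
val right = Seq("a", "b")

// Same seed => same random sequence, but the pair order decides which
// pairs survive the `rand < 0.5` condition.
def joinInOrder(pairs: Seq[(Int, String)]): Seq[(Int, String)] = {
  val rng = new Random(10)
  pairs.filter(_ => rng.nextDouble() < 0.5)
}

val leftMajor  = for (l <- left; r <- right) yield (l, r)  // (1,a),(1,b),(2,a),(2,b)
val rightMajor = for (r <- right; l <- left) yield (l, r)  // (1,a),(2,a),(1,b),(2,b)

println(joinInOrder(leftMajor))   // generally a different set of pairs
println(joinInOrder(rightMajor))  // than this one, despite the same seed
```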
Since the result depends heavily on the internal execution engine, there is no
standard behavior. For example, when explaining the following SQL in Hive (version 1.2.1):
```
SELECT a.date_id
FROM tmp.tmp_lifan_trfc_tpa_hive a
LEFT OUTER JOIN dw.dim_site_categ_ext c
  ON case
       when a.nav_tcdt is null then cast(rand(9) * 1000 - 9999999999 as string)
       else a.nav_tcdt
     end = c.site_categ_id
  AND rand(c.site_categ_skid) < 0.5
  AND rand(a.pltfm_id) >= 0.5;
```
I find that Hive pushes `rand(c.site_categ_skid) < 0.5` and
`rand(a.pltfm_id) >= 0.5` down into a Filter operator. I guess Hive doesn't take
non-determinism in the join condition into account; I will verify this later.
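A minimal sketch (toy data, plain Scala, not Hive's engine) of why such a pushdown can change the result: as a pushed-down filter the predicate is evaluated once per right-side row, while as a join condition it is evaluated once per tuple pair, so the two plans consume the random sequence differently.
```scala
import scala.util.Random

val left  = Seq(1, 2, 3)
val right = Seq(10, 20)

// Plan A: predicate pushed below the join -- filter the right side first.
val rngA = new Random(9)
val filteredRight = right.filter(_ => rngA.nextDouble() < 0.5)
val planA = for (l <- left; r <- filteredRight) yield (l, r)

// Plan B: the same predicate evaluated as part of the join condition,
// i.e. once per tuple pair.
val rngB = new Random(9)
val planB = for (l <- left; r <- right; if rngB.nextDouble() < 0.5) yield (l, r)

println(planA)  // the two plans generally produce different pairs
println(planB)
```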
By the way, Spark is a distributed execution engine, which differs from a
traditional DBMS (MySQL, Oracle), so we can't do exactly the same thing; for
example, `rand` starts from its initial seed on each worker.
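A minimal sketch (plain Scala, not Spark's actual `Rand` implementation) of that last point: if each partition re-initializes its own generator, the random values a row sees depend on how the data is split, not just on the seed.
```scala
import scala.util.Random

// Evaluate a seeded rand() over one partition of rows.
def evaluatePartition(rows: Seq[Int], seed: Long): Seq[(Int, Double)] = {
  val rng = new Random(seed)           // each partition starts fresh
  rows.map(r => (r, rng.nextDouble()))
}

val rows = (1 to 8).toSeq
val onePartition = evaluatePartition(rows, 10L)
val twoPartitions = rows.grouped(4).zipWithIndex.flatMap {
  // mixing the partition index into the seed is an assumption for this sketch
  case (part, idx) => evaluatePartition(part, 10L + idx)
}.toSeq

println(onePartition)   // same rows, same base seed,
println(twoPartitions)  // different random values once the data is split
```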