[
https://issues.apache.org/jira/browse/SPARK-17957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15583874#comment-15583874
]
Apache Spark commented on SPARK-17957:
--------------------------------------
User 'gatorsmile' has created a pull request for this issue:
https://github.com/apache/spark/pull/15523
> Calling outer join and na.fill(0) and then inner join will miss rows
> --------------------------------------------------------------------
>
> Key: SPARK-17957
> URL: https://issues.apache.org/jira/browse/SPARK-17957
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.0.1
> Environment: Spark 2.0.1, Mac, Local
> Reporter: Linbo
> Assignee: Xiao Li
> Priority: Critical
> Labels: correctness
>
> I reported a similar bug two months ago, and it was fixed in Spark 2.0.1:
> https://issues.apache.org/jira/browse/SPARK-17060 But I have found a new bug: when
> I insert a na.fill(0) call between the outer join and the inner join in the same
> workflow as in SPARK-17060, I get a wrong result.
> {code:title=spark-shell|borderStyle=solid}
> scala> val a = Seq((1, 2), (2, 3)).toDF("a", "b")
> a: org.apache.spark.sql.DataFrame = [a: int, b: int]
> scala> val b = Seq((2, 5), (3, 4)).toDF("a", "c")
> b: org.apache.spark.sql.DataFrame = [a: int, c: int]
> scala> val ab = a.join(b, Seq("a"), "fullouter").na.fill(0)
> ab: org.apache.spark.sql.DataFrame = [a: int, b: int ... 1 more field]
> scala> ab.show
> +---+---+---+
> | a| b| c|
> +---+---+---+
> | 1| 2| 0|
> | 3| 0| 4|
> | 2| 3| 5|
> +---+---+---+
> scala> val c = Seq((3, 1)).toDF("a", "d")
> c: org.apache.spark.sql.DataFrame = [a: int, d: int]
> scala> c.show
> +---+---+
> | a| d|
> +---+---+
> | 3| 1|
> +---+---+
> scala> ab.join(c, "a").show
> +---+---+---+---+
> | a| b| c| d|
> +---+---+---+---+
> +---+---+---+---+
> {code}
> And again, if I use persist, the result is correct. I think the problem is in the
> join optimizer, similar to this PR: https://github.com/apache/spark/pull/14661
> {code:title=spark-shell|borderStyle=solid}
> scala> val ab = a.join(b, Seq("a"), "outer").na.fill(0).persist
> ab: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [a: int, b: int
> ... 1 more field]
> scala> ab.show
> +---+---+---+
> | a| b| c|
> +---+---+---+
> | 1| 2| 0|
> | 3| 0| 4|
> | 2| 3| 5|
> +---+---+---+
> scala> ab.join(c, "a").show
> +---+---+---+---+
> | a| b| c| d|
> +---+---+---+---+
> | 3| 0| 4| 1|
> +---+---+---+---+
> {code}
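> One way to see whether the optimizer is pruning the second join (a diagnostic
> sketch only; the exact plan text depends on the Spark version) is to compare the
> optimized plans with and without persist using explain(true):
> {code:title=spark-shell|borderStyle=solid}
> scala> // Without persist: if the optimizer has inferred a conflicting
> scala> // constraint on the filled columns, the optimized plan may collapse
> scala> // the inner join to an empty relation.
> scala> a.join(b, Seq("a"), "fullouter").na.fill(0).join(c, "a").explain(true)
> scala> // With persist, the lineage is cut at the cached plan, so the same
> scala> // join should appear intact in the optimized plan.
> scala> val abCached = a.join(b, Seq("a"), "fullouter").na.fill(0).persist
> scala> abCached.join(c, "a").explain(true)
> {code}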
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)