[jira] [Updated] (SPARK-17957) Calling outer join and na.fill(0) and then inner join will miss rows

2016-10-15 Thread Xiao Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-17957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Li updated SPARK-17957:

Priority: Critical  (was: Major)

> Calling outer join and na.fill(0) and then inner join will miss rows
> 
>
> Key: SPARK-17957
> URL: https://issues.apache.org/jira/browse/SPARK-17957
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.1
> Environment: Spark 2.0.1, Mac, Local
>Reporter: Linbo
>Priority: Critical
>  Labels: correctness
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-17957) Calling outer join and na.fill(0) and then inner join will miss rows

2016-10-15 Thread Xiao Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-17957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Li updated SPARK-17957:

Target Version/s: 2.0.2, 2.1.0  (was: 2.0.2)




[jira] [Updated] (SPARK-17957) Calling outer join and na.fill(0) and then inner join will miss rows

2016-10-15 Thread Xiao Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-17957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Li updated SPARK-17957:

Labels: correctness  (was: joins na.fill)




[jira] [Updated] (SPARK-17957) Calling outer join and na.fill(0) and then inner join will miss rows

2016-10-15 Thread Linbo (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-17957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Linbo updated SPARK-17957:
--
Description: 
I reported a similar bug two months ago, and it was fixed in Spark 2.0.1: 
https://issues.apache.org/jira/browse/SPARK-17060. But I have found a new bug: when I 
insert a na.fill(0) call between the outer join and the inner join in the same 
workflow as in SPARK-17060, I get an incorrect result.

{code:title=spark-shell|borderStyle=solid}
scala> val a = Seq((1, 2), (2, 3)).toDF("a", "b")
a: org.apache.spark.sql.DataFrame = [a: int, b: int]

scala> val b = Seq((2, 5), (3, 4)).toDF("a", "c")
b: org.apache.spark.sql.DataFrame = [a: int, c: int]

scala> val ab = a.join(b, Seq("a"), "fullouter").na.fill(0)
ab: org.apache.spark.sql.DataFrame = [a: int, b: int ... 1 more field]

scala> ab.show
+---+---+---+
|  a|  b|  c|
+---+---+---+
|  1|  2|  0|
|  3|  0|  4|
|  2|  3|  5|
+---+---+---+

scala> val c = Seq((3, 1)).toDF("a", "d")
c: org.apache.spark.sql.DataFrame = [a: int, d: int]

scala> c.show
+---+---+
|  a|  d|
+---+---+
|  3|  1|
+---+---+

scala> ab.join(c, "a").show
+---+---+---+---+
|  a|  b|  c|  d|
+---+---+---+---+
+---+---+---+---+
{code}
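
To see where the rows disappear, the extended query plans can be printed. This is 
only a diagnostic sketch; it does not by itself identify the faulty optimizer rule:

{code:title=spark-shell|borderStyle=solid}
scala> // Print the parsed, analyzed, optimized, and physical plans of the failing
scala> // query. If the optimizer is at fault, the row-dropping rewrite should show
scala> // up as a difference between the analyzed and the optimized plan.
scala> ab.join(c, "a").explain(true)
{code}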

Again, if I use persist, the result is correct. I think the problem is in the join 
optimizer, similar to the issue fixed by this PR: https://github.com/apache/spark/pull/14661

{code:title=spark-shell|borderStyle=solid}
scala> val ab = a.join(b, Seq("a"), "outer").na.fill(0).persist
ab: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [a: int, b: int 
... 1 more field]

scala> ab.show
+---+---+---+
|  a|  b|  c|
+---+---+---+
|  1|  2|  0|
|  3|  0|  4|
|  2|  3|  5|
+---+---+---+

scala> ab.join(c, "a").show
+---+---+---+---+
|  a|  b|  c|  d|
+---+---+---+---+
|  3|  0|  4|  1|
+---+---+---+---+
{code}
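
Besides persist, another possible workaround is to cut the query lineage by 
rebuilding the DataFrame from its RDD, so that the final join is planned against a 
fresh leaf relation instead of the original join-plus-fill plan. This is an untested 
sketch (the val names are just illustrative) and assumes the bug sits in the logical 
optimizer:

{code:title=spark-shell|borderStyle=solid}
scala> // Round-trip through the RDD API: createDataFrame only sees rows plus a
scala> // schema, so the optimizer can no longer rewrite the upstream join + na.fill.
scala> val filled = a.join(b, Seq("a"), "fullouter").na.fill(0)
scala> val ab2 = spark.createDataFrame(filled.rdd, filled.schema)
scala> ab2.join(c, "a").show
{code}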
  

  was:
I reported a similar bug two months ago, and it was fixed in Spark 2.0.1: 
https://issues.apache.org/jira/browse/SPARK-17060. But I have found a new bug: when I 
insert a na.fill(0) call between the outer join and the inner join in the same 
workflow as in SPARK-17060, I get an incorrect result.

{code:title=spark-shell|borderStyle=solid}
scala> val a = Seq((1, 2), (2, 3)).toDF("a", "b")
a: org.apache.spark.sql.DataFrame = [a: int, b: int]

scala> val b = Seq((2, 5), (3, 4)).toDF("a", "c")
b: org.apache.spark.sql.DataFrame = [a: int, c: int]

scala> val ab = a.join(b, Seq("a"), "fullouter").na.fill(0)
ab: org.apache.spark.sql.DataFrame = [a: int, b: int ... 1 more field]

scala> ab.show
+---+---+---+
|  a|  b|  c|
+---+---+---+
|  1|  2|  0|
|  3|  0|  4|
|  2|  3|  5|
+---+---+---+

scala> val c = Seq((3, 1)).toDF("a", "d")
c: org.apache.spark.sql.DataFrame = [a: int, d: int]

scala> c.show
+---+---+
|  a|  d|
+---+---+
|  3|  1|
+---+---+

scala> ab.join(c, "a").show
+---+---+---+---+
|  a|  b|  c|  d|
+---+---+---+---+
+---+---+---+---+

scala> val ab = a.join(b, Seq("a"), "outer").na.fill(0)
ab: org.apache.spark.sql.DataFrame = [a: int, b: int ... 1 more field]

scala> ab.join(c, "a").show
+---+---+---+---+
|  a|  b|  c|  d|
+---+---+---+---+
+---+---+---+---+
{code}

Again, if I use persist, the result is correct. I think the problem is in the join 
optimizer, similar to the issue fixed by this PR: https://github.com/apache/spark/pull/14661

{code:title=spark-shell|borderStyle=solid}
scala> val ab = a.join(b, Seq("a"), "outer").na.fill(0).persist
ab: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [a: int, b: int 
... 1 more field]

scala> ab.show
+---+---+---+
|  a|  b|  c|
+---+---+---+
|  1|  2|  0|
|  3|  0|  4|
|  2|  3|  5|
+---+---+---+


scala> ab.join(c, "a").show
+---+---+---+---+
|  a|  b|  c|  d|
+---+---+---+---+
|  3|  0|  4|  1|
+---+---+---+---+
{code}
  

