[jira] [Commented] (SPARK-17957) Calling outer join and na.fill(0) and then inner join will miss rows

2016-11-05 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-17957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15638757#comment-15638757
 ] 

Apache Spark commented on SPARK-17957:
--

User 'gatorsmile' has created a pull request for this issue:
https://github.com/apache/spark/pull/15781

> Calling outer join and na.fill(0) and then inner join will miss rows
> 
>
> Key: SPARK-17957
> URL: https://issues.apache.org/jira/browse/SPARK-17957
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.1
> Environment: Spark 2.0.1, Mac, Local
>Reporter: Linbo
>Assignee: Xiao Li
>Priority: Critical
>  Labels: correctness
> Fix For: 2.1.0
>
>
> I reported a similar bug two months ago, and it was fixed in Spark 2.0.1: 
> https://issues.apache.org/jira/browse/SPARK-17060 But I have found a new bug: 
> when I insert an na.fill(0) call between the outer join and the inner join in 
> the same workflow as SPARK-17060, I get a wrong result.
> {code:title=spark-shell|borderStyle=solid}
> scala> val a = Seq((1, 2), (2, 3)).toDF("a", "b")
> a: org.apache.spark.sql.DataFrame = [a: int, b: int]
> scala> val b = Seq((2, 5), (3, 4)).toDF("a", "c")
> b: org.apache.spark.sql.DataFrame = [a: int, c: int]
> scala> val ab = a.join(b, Seq("a"), "fullouter").na.fill(0)
> ab: org.apache.spark.sql.DataFrame = [a: int, b: int ... 1 more field]
> scala> ab.show
> +---+---+---+
> |  a|  b|  c|
> +---+---+---+
> |  1|  2|  0|
> |  3|  0|  4|
> |  2|  3|  5|
> +---+---+---+
> scala> val c = Seq((3, 1)).toDF("a", "d")
> c: org.apache.spark.sql.DataFrame = [a: int, d: int]
> scala> c.show
> +---+---+
> |  a|  d|
> +---+---+
> |  3|  1|
> +---+---+
> scala> ab.join(c, "a").show
> +---+---+---+---+
> |  a|  b|  c|  d|
> +---+---+---+---+
> +---+---+---+---+
> {code}
> Again, if I use persist, the result is correct. I think the problem is in the 
> join optimizer, similar to this PR: https://github.com/apache/spark/pull/14661
> {code:title=spark-shell|borderStyle=solid}
> scala> val ab = a.join(b, Seq("a"), "outer").na.fill(0).persist
> ab: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [a: int, b: int 
> ... 1 more field]
> scala> ab.show
> +---+---+---+
> |  a|  b|  c|
> +---+---+---+
> |  1|  2|  0|
> |  3|  0|  4|
> |  2|  3|  5|
> +---+---+---+
> scala> ab.join(c, "a").show
> +---+---+---+---+
> |  a|  b|  c|  d|
> +---+---+---+---+
> |  3|  0|  4|  1|
> +---+---+---+---+
> {code}
>   
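For reference, the result the reporter expects can be checked with plain Scala collections, independent of Spark. This is only a sketch of the intended semantics of a full outer join, na.fill(0), and an inner join on the sample data; it is not Spark code:

```scala
// Model the reported workflow with plain collections to show the expected
// result; row (3, 0, 4, 1) is the one Spark 2.0.1 drops.
val a = Map(1 -> 2, 2 -> 3)   // columns (a, b)
val b = Map(2 -> 5, 3 -> 4)   // columns (a, c)
val c = Map(3 -> 1)           // columns (a, d)

// Full outer join on "a" followed by na.fill(0): an absent side becomes 0.
val ab = (a.keySet ++ b.keySet).map(k => (k, a.getOrElse(k, 0), b.getOrElse(k, 0)))

// Inner join with c on "a".
val result = ab.collect { case (k, bv, cv) if c.contains(k) => (k, bv, cv, c(k)) }

assert(result == Set((3, 0, 4, 1)))
```

Only the key 3 appears in `c`, so the inner join should keep exactly the filled row `(3, 0, 4, 1)` — matching the persist workaround's output above, not the empty result.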



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-17957) Calling outer join and na.fill(0) and then inner join will miss rows

2016-10-17 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-17957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15583874#comment-15583874
 ] 

Apache Spark commented on SPARK-17957:
--

User 'gatorsmile' has created a pull request for this issue:
https://github.com/apache/spark/pull/15523



[jira] [Commented] (SPARK-17957) Calling outer join and na.fill(0) and then inner join will miss rows

2016-10-15 Thread Linbo (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-17957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15579294#comment-15579294
 ] 

Linbo commented on SPARK-17957:
---

Thank you!



[jira] [Commented] (SPARK-17957) Calling outer join and na.fill(0) and then inner join will miss rows

2016-10-15 Thread Xiao Li (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-17957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15579282#comment-15579282
 ] 

Xiao Li commented on SPARK-17957:
-

Found the bug. 
{noformat}
Project [a#29, b#30, c#31, d#48]
+- Join Inner, (a#29 = a#47)
   :- Project [cast(coalesce(cast(coalesce(a#5, a#15) as double), 0.0) as int) 
AS a#29, cast(coalesce(cast(b#6 as double), 0.0) as int) AS b#30, 
cast(coalesce(cast(c#16 as double), 0.0) as int) AS c#31]
   :  +- Filter isnotnull(cast(coalesce(cast(coalesce(a#5, a#15) as double), 
0.0) as int))
   : +- Join FullOuter, (a#5 = a#15)
   ::- LocalRelation [a#5, b#6]
   :+- LocalRelation [a#15, c#16]
   +- LocalRelation [a#47, d#48]
{noformat}

Will fix it soon. 






[jira] [Commented] (SPARK-17957) Calling outer join and na.fill(0) and then inner join will miss rows

2016-10-15 Thread Xiao Li (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-17957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15579260#comment-15579260
 ] 

Xiao Li commented on SPARK-17957:
-

You can see the plan. The optimized plan still contains the full outer join. :) Thus, 
it should not be caused by outer join elimination. 

{noformat}
val a = Seq((1, 2), (2, 3)).toDF("a", "b")
val b = Seq((2, 5), (3, 4)).toDF("a", "c")
val ab = a.join(b, Seq("a"), "fullouter").na.fill(0)
val c = Seq((3, 1)).toDF("a", "d")
ab.join(c, "a").explain(true)
{noformat}

{noformat}
== Optimized Logical Plan ==
Project [a#29, b#30, c#31, d#42]
+- Join Inner, (a#29 = a#41)
   :- Project [cast(coalesce(cast(coalesce(a#5, a#15) as double), 0.0) as int) 
AS a#29, cast(coalesce(cast(b#6 as double), 0.0) as int) AS b#30, 
cast(coalesce(cast(c#16 as double), 0.0) as int) AS c#31]
   :  +- Filter isnotnull(cast(coalesce(cast(coalesce(a#5, a#15) as double), 
0.0) as int))
   : +- Join FullOuter, (a#5 = a#15)
   ::- LocalRelation [a#5, b#6]
   :+- LocalRelation [a#15, c#16]
   +- LocalRelation [a#41, d#42]
{noformat}

Let me find what is the cause. 
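One detail worth noting in the optimized plan above: the injected `Filter isnotnull(...)` wraps an expression that coalesces with the literal `0.0`, and a coalesce whose last argument is a non-null literal can never evaluate to null, so that predicate is always true. A plain-Scala sketch of this semantics (a simplified model, not Spark's actual `Coalesce` implementation):

```scala
// Minimal model of SQL COALESCE over nullable ints: first non-null wins.
def coalesce(xs: Option[Int]*): Option[Int] =
  xs.collectFirst { case Some(v) => v }

// Evaluate coalesce(coalesce(a1, a2), 0) for every null/non-null combination.
val filled = for {
  a1 <- List(Option(1), None)
  a2 <- List(Option(3), None)
} yield coalesce(coalesce(a1, a2), Some(0))

// The literal default 0 guarantees a non-null result in every case.
assert(filled.forall(_.isDefined))
```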




[jira] [Commented] (SPARK-17957) Calling outer join and na.fill(0) and then inner join will miss rows

2016-10-15 Thread Xiao Li (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-17957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15579252#comment-15579252
 ] 

Xiao Li commented on SPARK-17957:
-

Thank you for reporting it. Let me do a quick check.




[jira] [Commented] (SPARK-17957) Calling outer join and na.fill(0) and then inner join will miss rows

2016-10-15 Thread Linbo (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-17957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15579142#comment-15579142
 ] 

Linbo commented on SPARK-17957:
---

cc [~smilegator]
