Re: can not join dataset with itself

2016-04-08 Thread JH P
I’m using Spark 1.6.1

The class is:

case class DistinctValues(statType: Int, dataType: Int, _id: Int,
  values: Array[(String, Long)], numOfMembers: Int, category: String)

and

Error for newGnsDS.joinWith(newGnsDS, $"dataType"):
Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot 
resolve 'dataType' given input columns: [countUnique, median, recentEdid, max, 
cdid, dataType, firstQuarter, sigma, replicationRateAvg, thirdQuarter, 
accCount, avg, countNotNull, statType, categoryId, category, min, numRows, 
numDistinctRows];

Error for newGnsDS.as("a").joinWith(newGnsDS.as("b"), $"a.dataType" === $"b.datatype"):
Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot 
resolve 'a.dataType' given input columns: [countUnique, median, recentEdid, 
max, cdid, dataType, firstQuarter, sigma, replicationRateAvg, thirdQuarter, 
accCount, avg, countNotNull, statType, categoryId, category, min, numRows, 
numDistinctRows];

Stack trace common to both errors:
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:60)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:57)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:335)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:335)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:69)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:334)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:332)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:332)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:281)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:321)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:332)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionUp$1(QueryPlan.scala:108)
at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2(QueryPlan.scala:119)
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$2.apply(QueryPlan.scala:127)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:127)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:57)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:50)
at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:121)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:120)
at

Re: can not join dataset with itself

2016-04-08 Thread Ted Yu
Looks like you're using Spark 1.6.x

What error(s) did you get for the first two joins ?

Thanks

On Fri, Apr 8, 2016 at 3:53 AM, JH P  wrote:

> Hi. I want to join a dataset with itself, so I tried the code below.
>
> 1. newGnsDS.joinWith(newGnsDS, $"dataType")
>
> 2. newGnsDS.as("a").joinWith(newGnsDS.as("b"), $"a.dataType" === $"b.datatype")
>
> 3. val a = newGnsDS.map(x => x).as("a")
>val b = newGnsDS.map(x => x).as("b")
>
>
>a.joinWith(b, $"a.dataType" === $"b.datatype")
>
> 1 and 2 don't work, but 3 does. I don't know why it works, or whether a
> better approach exists. Please help.
>


can not join dataset with itself

2016-04-08 Thread JH P
Hi. I want to join a dataset with itself, so I tried the code below.

1. newGnsDS.joinWith(newGnsDS, $"dataType")

2. newGnsDS.as("a").joinWith(newGnsDS.as("b"), $"a.dataType" === $"b.datatype")

3. val a = newGnsDS.map(x => x).as("a")
   val b = newGnsDS.map(x => x).as("b")

   a.joinWith(b, $"a.dataType" === $"b.datatype")

1 and 2 don't work, but 3 does. I don't know why it works, or whether a better
approach exists. Please help.
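A likely explanation (my reading, not confirmed in this thread): in attempts 1 and 2 both sides of the join are literally the same logical plan, carrying the same internal attribute IDs, so the analyzer cannot tell the left dataType from the right one and resolution fails even though the column is listed. The no-op map in attempt 3 inserts a fresh plan node with new attribute IDs on each side, which makes the two references distinguishable. A self-contained sketch of the working pattern under those assumptions (the SQLContext setup is assumed; DistinctValues and newGnsDS are from the first message):

```scala
import org.apache.spark.sql.{Dataset, SQLContext}

case class DistinctValues(statType: Int, dataType: Int, _id: Int,
  values: Array[(String, Long)], numOfMembers: Int, category: String)

def selfJoin(sqlContext: SQLContext, newGnsDS: Dataset[DistinctValues])
    : Dataset[(DistinctValues, DistinctValues)] = {
  import sqlContext.implicits._
  // The identity map forces a new plan node with fresh attribute IDs on
  // each side, so $"a.dataType" and $"b.dataType" resolve unambiguously.
  val a = newGnsDS.map(x => x).as("a")
  val b = newGnsDS.map(x => x).as("b")
  a.joinWith(b, $"a.dataType" === $"b.dataType")
}
```

Note the extra map costs a serialization round-trip per row; it is a workaround for the 1.6-era analyzer, not something the API requires in general.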