If I try to inner-join two DataFrames that were derived from the same initial
DataFrame, where that initial DataFrame was loaded via a spark.sql() call, I
get an error:

    // reading from Hive; the data is stored in Parquet format in Amazon S3
    import org.apache.spark.sql.functions.avg

    val d1 = spark.sql("select * from <hivetable>")
    val df1 = d1.groupBy("key1", "key2").agg(avg("totalprice").as("avgtotalprice"))
    val df2 = d1.groupBy("key1", "key2").agg(avg("itemcount").as("avgqty"))

The join

    df1.join(df2, Seq("key1", "key2"))

fails with:

    org.apache.spark.sql.AnalysisException: using columns ['key1,'key2] can
    not be resolved given input columns: [key1, key2, avgtotalprice, avgqty];
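
For what it's worth, renaming the join keys on one side looks like it should
sidestep the "using columns" resolution path entirely. This is only a sketch;
I haven't verified it against this bug:

    // possible workaround (unverified): rename df2's keys so the join uses
    // an explicit condition instead of the Seq(...) "using" columns form
    val df2r = df2
      .withColumnRenamed("key1", "k1")
      .withColumnRenamed("key2", "k2")

    df1.join(df2r, df1("key1") === df2r("k1") && df1("key2") === df2r("k2"))
       .drop("k1")
       .drop("k2")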

If the same DataFrame is initialized via spark.read.parquet() instead, the
code above works. The same code also worked on Spark 1.6.2. I have filed a
JIRA for this: SPARK-17709 <https://issues.apache.org/jira/browse/SPARK-17709>
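
For comparison, this is the variant that works for me (the S3 path below is a
placeholder; the real location comes from the Hive table definition):

    // same aggregations, but reading the Parquet files directly from S3;
    // the path is a placeholder for the table's actual storage location
    val d2 = spark.read.parquet("s3://<bucket>/<path-to-hivetable>")
    val df1p = d2.groupBy("key1", "key2").agg(avg("totalprice").as("avgtotalprice"))
    val df2p = d2.groupBy("key1", "key2").agg(avg("itemcount").as("avgqty"))

    df1p.join(df2p, Seq("key1", "key2"))   // resolves fine here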

Any help is appreciated!

Thanks,
Ashish


