[ https://issues.apache.org/jira/browse/SPARK-17709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15537264#comment-15537264 ]
Ashish Shrowty commented on SPARK-17709:
----------------------------------------
[~dkbiswal] Attached are the explain() outputs -
df1.explain
== Physical Plan ==
*HashAggregate(keys=[companyid#3364, loyaltycardnumber#3370], functions=[avg(cast(itemcount#3372 as bigint))])
+- Exchange hashpartitioning(companyid#3364, loyaltycardnumber#3370, 200)
   +- *HashAggregate(keys=[companyid#3364, loyaltycardnumber#3370], functions=[partial_avg(cast(itemcount#3372 as bigint))])
      +- *Project [loyaltycardnumber#3370, itemcount#3372, companyid#3364]
         +- *BatchedScan parquet facts.storetransaction[loyaltycardnumber#3370,itemcount#3372,year#3362,month#3363,companyid#3364] Format: ParquetFormat, InputPaths: s3://com.birdzi.datalake.test/basedatasets/facts/storetransaction/2016-09-15-2012/year=2002/month..., PushedFilters: [], ReadSchema: struct<loyaltycardnumber:string,itemcount:int>
df2.explain
== Physical Plan ==
*HashAggregate(keys=[companyid#3364, loyaltycardnumber#3370], functions=[avg(totalprice#3373)])
+- Exchange hashpartitioning(companyid#3364, loyaltycardnumber#3370, 200)
   +- *HashAggregate(keys=[companyid#3364, loyaltycardnumber#3370], functions=[partial_avg(totalprice#3373)])
      +- *Project [loyaltycardnumber#3370, totalprice#3373, companyid#3364]
         +- *BatchedScan parquet facts.storetransaction[loyaltycardnumber#3370,totalprice#3373,year#3362,month#3363,companyid#3364] Format: ParquetFormat, InputPaths: s3://com.birdzi.datalake.test/basedatasets/facts/storetransaction/2016-09-15-2012/year=2002/month..., PushedFilters: [], ReadSchema: struct<loyaltycardnumber:string,totalprice:double>
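For reference, here is a minimal sketch of the kind of code that would produce the two plans above, reconstructed from the table and column names visible in them (facts.storetransaction, companyid, loyaltycardnumber, itemcount, totalprice) and from the reproduction steps quoted below; it is an assumption, not the exact code that was run.

import org.apache.spark.sql.functions.avg

// spark is the existing SparkSession (e.g. in spark-shell)
val d1 = spark.sql("select * from facts.storetransaction")

// df1: average item count per (companyid, loyaltycardnumber)
val df1 = d1.groupBy("companyid", "loyaltycardnumber")
  .agg(avg("itemcount").as("avgqty"))

// df2: average total price per (companyid, loyaltycardnumber)
val df2 = d1.groupBy("companyid", "loyaltycardnumber")
  .agg(avg("totalprice").as("avgtotalprice"))

df1.explain()
df2.explain()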
> spark 2.0 join - column resolution error
> ----------------------------------------
>
> Key: SPARK-17709
> URL: https://issues.apache.org/jira/browse/SPARK-17709
> Project: Spark
> Issue Type: Bug
> Affects Versions: 2.0.0
> Reporter: Ashish Shrowty
> Labels: easyfix
>
> If I try to inner-join two DataFrames that originated from the same initial
> DataFrame, loaded using a spark.sql() call, it results in an error -
> // reading from Hive .. the data is stored in Parquet format in Amazon S3
> val d1 = spark.sql("select * from <hivetable>")
> val df1 = d1.groupBy("key1","key2")
> .agg(avg("totalprice").as("avgtotalprice"))
> val df2 = d1.groupBy("key1","key2")
> .agg(avg("itemcount").as("avgqty"))
> df1.join(df2, Seq("key1","key2")) gives error -
> org.apache.spark.sql.AnalysisException: using columns ['key1,'key2] can
> not be resolved given input columns: [key1, key2, avgtotalprice, avgqty];
> If the same DataFrame is initialized via spark.read.parquet(), the above code
> works. The same code also worked with Spark 1.6.2.
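For comparison, below is a hedged sketch of the workaround mentioned in the description above, loading the base DataFrame with spark.read.parquet() instead of spark.sql(); the path is a placeholder, and per the report the join then resolves key1/key2 without error.

import org.apache.spark.sql.functions.avg

// placeholder path; assumed to point at the same Parquet data as <hivetable>
val d1 = spark.read.parquet("s3://<bucket>/<path-to-parquet>")

val df1 = d1.groupBy("key1", "key2").agg(avg("totalprice").as("avgtotalprice"))
val df2 = d1.groupBy("key1", "key2").agg(avg("itemcount").as("avgqty"))

// reported to work when d1 is loaded via spark.read.parquet()
df1.join(df2, Seq("key1", "key2"))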