[ 
https://issues.apache.org/jira/browse/SPARK-15549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yadong Qi updated SPARK-15549:
------------------------------
    Summary: Bucket column only needs to be found in the output of the relation 
when using a bucketed table  (was: Bucket column only needs to be found in the 
output relation when using a bucketed table)

> Bucket column only needs to be found in the output of the relation when using 
> a bucketed table
> -------------------------------------------------------------------------------------
>
>                 Key: SPARK-15549
>                 URL: https://issues.apache.org/jira/browse/SPARK-15549
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.0.0
>            Reporter: Yadong Qi
>
> I created a bucketed table test(i int, j int, k int) with bucket column i:
> {code:java}
> case class Data(i: Int, j: Int, k: Int)
> sc.makeRDD(Array((1, 2, 3))).map(x => Data(x._1, x._2, x._3))
>   .toDF.write.bucketBy(2, "i").saveAsTable("test")
> {code}
> and ran the following SQL queries:
> {code:sql}
> SELECT j FROM test;
> Error in query: bucket column i not found in existing columns (j);
> SELECT j, MAX(k) FROM test GROUP BY j;
> Error in query: bucket column i not found in existing columns (j, k);
> {code}
> I think the bucket column only needs to be found in the output of the relation, 
> so the two SQL queries above should execute successfully.
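> A minimal sketch of the intended behavior (simplified plain Scala; the helper 
> and its names are illustrative, not Spark's actual internals): bucket columns 
> are validated against the table's full column list, and the query's projection 
> only decides whether the scan can still advertise a bucketed output 
> partitioning, instead of failing the query.
> {code:java}
> // Hypothetical helper, not Spark's real API: resolve bucket columns against
> // the relation's full output rather than the projected columns.
> def resolveBucketColumns(
>     bucketColumnNames: Seq[String],
>     relationColumns: Seq[String],    // all columns of the bucketed table
>     projectedColumns: Seq[String]    // columns actually selected by the query
>   ): Option[Seq[String]] = {
>   // Hard requirement: every bucket column must exist in the table itself.
>   bucketColumnNames.foreach { name =>
>     require(relationColumns.contains(name),
>       s"bucket column $name not found in table columns (${relationColumns.mkString(", ")})")
>   }
>   // Soft condition: advertise bucketed (hash-partitioned) output only when
>   // the projection keeps every bucket column; otherwise fall back silently.
>   if (bucketColumnNames.forall(projectedColumns.contains)) Some(bucketColumnNames)
>   else None
> }
>
> // SELECT j FROM test: column i exists in the table, so the query runs;
> // it just cannot use the bucketed output partitioning.
> resolveBucketColumns(Seq("i"), Seq("i", "j", "k"), Seq("j"))       // None, no error
> resolveBucketColumns(Seq("i"), Seq("i", "j", "k"), Seq("i", "j"))  // Some(List(i))
> {code}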


