[ https://issues.apache.org/jira/browse/SPARK-13445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168608#comment-15168608 ]

Xiao Li commented on SPARK-13445:
---------------------------------

Tried it in the latest 1.6 upstream; the query resolves successfully.

{code}
scala> sqlContext.read.json(sc.makeRDD("""{"data": {"type": 1}}""" :: Nil)).registerTempTable("event_record_sample")

scala> sql("SELECT data, row_number() over (partition by data.type) as foo from event_record_sample").explain(true)

== Parsed Logical Plan ==
'Project [unresolvedalias('data),unresolvedalias('row_number() windowspecdefinition('data.type,UnspecifiedFrame) AS foo#17)]
+- 'UnresolvedRelation `event_record_sample`, None

== Analyzed Logical Plan ==
data: struct<type:bigint>, foo: int
Project [data#5,foo#17]
+- Project [data#5,foo#17,foo#17]
   +- Window [data#5], [HiveWindowFunction#org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRowNumber() windowspecdefinition(data#5.type,ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS foo#17], [data#5.type]
      +- Project [data#5]
         +- Subquery event_record_sample
            +- Relation[data#5] JSONRelation

== Optimized Logical Plan ==
Window [data#5], [HiveWindowFunction#org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRowNumber() windowspecdefinition(data#5.type,ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS foo#17], [data#5.type]
+- Project [data#5]
   +- Relation[data#5] JSONRelation

== Physical Plan ==
Window [data#5], [HiveWindowFunction#org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRowNumber() windowspecdefinition(data#5.type,ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS foo#17], [data#5.type]
+- Sort [data#5.type ASC], false, 0
   +- TungstenExchange hashpartitioning(data#5.type,200), None
      +- Scan JSONRelation[data#5] InputPaths:
{code}

However, I do not think the query should be accepted when row_number() is used without an ORDER BY clause in the window specification.
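For comparison, a sketch of the same query with an explicit ORDER BY in the window specification (ordering by data.type is just illustrative; any deterministic ordering would make row_number() well-defined):

{code}
scala> sql("SELECT data, row_number() OVER (PARTITION BY data.type ORDER BY data.type) AS foo FROM event_record_sample").explain(true)
{code}

If I remember correctly, once an ORDER BY is present and no frame is specified, the analyzer picks the default frame RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW instead of the unbounded ROWS frame shown in the plans above.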

Tomorrow I will try to find which PR introduced the output difference between 2.0 and 1.6.1.
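One way to narrow that down, assuming the difference can be reduced to a small spark-shell script that exits non-zero on the bad behavior (repro.scala here is hypothetical), is a git bisect between the two trees:

{code}
# Hypothetical bisect from current master (bad) back to v1.6.1 (good);
# each step rebuilds and runs the repro, whose exit code marks the commit.
git bisect start master v1.6.1
git bisect run sh -c 'build/sbt package && bin/spark-shell -i repro.scala'
{code}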

> Selecting "data" with window function does not work unless aliased (using PARTITION BY)
> ----------------------------------------------------------------------------------------
>
>                 Key: SPARK-13445
>                 URL: https://issues.apache.org/jira/browse/SPARK-13445
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.6.0
>            Reporter: Reynold Xin
>            Priority: Critical
>
> The code does not throw an exception if "data" is aliased. Maybe this is a reserved word, or aliases are just required when using PARTITION BY?
> {code}
> sql("""
>   SELECT 
>     data as the_data,
>     row_number() over (partition BY data.type) AS foo
>   FROM event_record_sample
> """)
> {code}
> However, this code throws an error:
> {code}
> sql("""
>   SELECT 
>     data,
>     row_number() over (partition BY data.type) AS foo
>   FROM event_record_sample
> """)
> {code}
> {code}
> org.apache.spark.sql.AnalysisException: resolved attribute(s) type#15246 missing from data#15107,par_cat#15112,schemaMajorVersion#15110,source#15108,recordId#15103,features#15106,eventType#15105,ts#15104L,schemaMinorVersion#15111,issues#15109 in operator !Project [data#15107,type#15246];
>       at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.failAnalysis(CheckAnalysis.scala:38)
>       at org.apache.spark.sql.catalyst.analysis.Analyzer.failAnalysis(Analyzer.scala:44)
>       at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:183)
>       at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:50)
>       at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:105)
>       at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:104)
>       at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:104)
>       at scala.collection.immutable.List.foreach(List.scala:318)
>       at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:104)
>       at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:104)
>       at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:104)
>       at scala.collection.immutable.List.foreach(List.scala:318)
>       at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:104)
>       at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:104)
>       at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:104)
>       at scala.collection.immutable.List.foreach(List.scala:318)
>       at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:104)
>       at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:50)
>       at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:44)
>       at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:34)
>       at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:133)
>       at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
>       at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:816)
> {code}


