MaxGekk commented on code in PR #48893:
URL: https://github.com/apache/spark/pull/48893#discussion_r1851000156


##########
sql/core/src/test/resources/sql-tests/results/window.sql.out:
##########
@@ -1442,3 +1442,69 @@ a        1       2
 a      NULL    1
 b      1       1
 b      2       2
+
+
+-- !query
+SELECT *, mean(val_double) over (partition BY val ORDER BY val_date RANGE INTERVAL '5' DAY PRECEDING) AS mean FROM testData
+-- !query schema
+struct<val:int,val_long:bigint,val_double:double,val_date:date,val_timestamp:timestamp,cate:string,mean:double>
+-- !query output
+1      1       1.0     2017-08-01      2017-07-31 17:00:00     a       1.0
+1      2       2.5     2017-08-02      2017-08-05 23:13:20     a       1.5
+1      NULL    1.0     2017-08-01      2017-07-31 17:00:00     b       1.0
+2      2147483650      100.001 2020-12-31      2020-12-30 16:00:00     a       100.001
+2      3       3.3     2017-08-03      2017-08-17 13:00:00     b       3.3
+3      1       1.0     2017-08-01      2017-07-31 17:00:00     NULL    1.0
+3      2147483650      100.001 2020-12-31      2020-12-30 16:00:00     b       100.001
+NULL   1       1.0     2017-08-01      2017-07-31 17:00:00     a       1.0
+NULL   NULL    NULL    NULL    NULL    NULL    NULL
+
+
+-- !query
+SELECT *, mean(val_double) over (partition BY val ORDER BY val_date RANGE INTERVAL '1 2:3:4.001' DAY TO SECOND PRECEDING) AS mean FROM testData
+-- !query schema
+struct<>
+-- !query output
+org.apache.spark.sql.catalyst.ExtendedAnalysisException
+{
+  "errorClass" : "DATATYPE_MISMATCH.RANGE_FRAME_INVALID_TYPE",
+  "sqlState" : "42K09",
+  "messageParameters" : {
+    "orderSpecType" : "\"DATE\"",
+    "sqlExpr" : "\"(PARTITION BY val ORDER BY val_date ASC NULLS FIRST RANGE BETWEEN INTERVAL '1 02:03:04.001' DAY TO SECOND PRECEDING AND CURRENT ROW)\"",
+    "valueBoundaryType" : "\"INTERVAL DAY TO SECOND\""
+  },
+  "queryContext" : [ {
+    "objectType" : "",
+    "objectName" : "",
+    "startIndex" : 33,
+    "stopIndex" : 121,
+    "fragment" : "(partition BY val ORDER BY val_date RANGE INTERVAL '1 2:3:4.001' DAY TO SECOND PRECEDING)"
+  } ]
+}
+
+
+-- !query
+SELECT *, mean(val_double) over (partition BY val ORDER BY val_date RANGE DATE '2024-01-01' PRECEDING) AS mean FROM testData
+-- !query schema
+struct<>
+-- !query output
+org.apache.spark.sql.catalyst.ExtendedAnalysisException
+{
+  "errorClass" : "DATATYPE_MISMATCH.UNEXPECTED_INPUT_TYPE",
+  "sqlState" : "42K09",
+  "messageParameters" : {
+    "inputSql" : "\"DATE '2024-01-01'\"",
+    "inputType" : "\"DATE\"",
+    "paramIndex" : "first",
+    "requiredType" : "(\"NUMERIC\" or \"INTERVAL DAY TO SECOND\" or \"INTERVAL YEAR TO MONTH\" or \"INTERVAL\")",

Review Comment:
   It would be nice to output `INTERVAL DAY` here so as not to confuse users, but that might not be related to this PR.
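
For context, the two errors in the diff come from the analyzer pairing the ORDER BY column's type with the frame boundary's type: a DATE order key accepts a day-granularity interval (the `INTERVAL '5' DAY` query succeeds) but rejects `INTERVAL DAY TO SECOND`, and a plain `DATE` literal is not a valid boundary at all. A minimal Python sketch of that pairing rule, with hypothetical names (this is not Spark's actual implementation, just an illustration of the check behind `RANGE_FRAME_INVALID_TYPE`):

```python
# Illustrative compatibility table: which boundary types a RANGE frame
# accepts for a given ORDER BY column type. Derived from the test diff,
# not copied from Spark internals.
COMPATIBLE_BOUNDARIES = {
    "DATE": {"INT", "INTERVAL DAY", "INTERVAL YEAR TO MONTH"},
    "TIMESTAMP": {"INTERVAL DAY TO SECOND", "INTERVAL YEAR TO MONTH"},
}

def check_range_frame(order_type: str, boundary_type: str):
    """Return an error class if the pairing is invalid, else None.

    Hypothetical helper, mirroring the error class seen in the diff.
    """
    allowed = COMPATIBLE_BOUNDARIES.get(order_type, set())
    if boundary_type not in allowed:
        return "DATATYPE_MISMATCH.RANGE_FRAME_INVALID_TYPE"
    return None
```

Under this sketch, `check_range_frame("DATE", "INTERVAL DAY")` passes while `check_range_frame("DATE", "INTERVAL DAY TO SECOND")` reports the mismatch, matching the first two queries in the diff.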



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

