In the Spark SQL example, `year("1912")` means: first cast "1912" to the date
type, then call the "year" function.
In the Postgres example, `date_part('year', TIMESTAMP '2017')` means: build a
timestamp literal, then call the "date_part" function.
Can you try a date literal in Postgres?
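To make the contrast concrete, here is a hypothetical sketch using Python's datetime as an analogy (it mimics, but does not reproduce, either engine's parser): lenient parsing pads the missing month and day, which is why year("1912") and month("1912") both return an answer in Spark, while a strict full-date format rejects the string outright, as Postgres does.

```python
from datetime import datetime

# Lenient, Spark-like behavior (analogy only): a year-only string is
# accepted and the missing fields are padded, so "1912" -> 1912-01-01.
lenient = datetime.strptime("1912", "%Y")
print(lenient.year)   # 1912
print(lenient.month)  # 1

# Strict, Postgres-like behavior: a full date is required,
# so the same year-only string is rejected with an error.
try:
    datetime.strptime("1912", "%Y-%m-%d")
except ValueError as exc:
    print("rejected:", exc)
```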
On Mon, Feb 18
It is hard to say this is a bug. In existing Spark applications, the
current behavior might already be considered a feature rather than a bug.
I am wondering whether we should introduce a strict mode that throws an
exception for these casts, similar to how Postgres behaves.
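A strict mode could be an opt-in flag on the cast. The sketch below is hypothetical (the function name, flag, and accepted formats are assumptions, not Spark's implementation): lenient mode tries progressively shorter formats and pads the rest, while strict mode accepts only a full date and raises, like Postgres.

```python
from datetime import date, datetime

def to_date(s: str, strict: bool = False) -> date:
    """Hypothetical cast with an opt-in strict mode.

    Lenient mode pads a partial string ("1912" -> 1912-01-01);
    strict mode, like Postgres, rejects anything but a full date.
    """
    full = "%Y-%m-%d"
    if strict:
        # Raises ValueError on partial inputs such as "1912".
        return datetime.strptime(s, full).date()
    # Try progressively shorter forms; missing fields default to 1.
    for fmt in (full, "%Y-%m", "%Y"):
        try:
            return datetime.strptime(s, fmt).date()
        except ValueError:
            continue
    raise ValueError(f"invalid date string: {s!r}")

print(to_date("1912"))         # lenient: 1912-01-01
# to_date("1912", strict=True) # would raise ValueError
```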
Darcy Shen wrote on Sun, Feb 17, 2019:
For PostgreSQL:
postgres=# SELECT date_part('year',TIMESTAMP '2017-01-01');
 date_part
-----------
      2017
(1 row)
postgres=# SELECT date_part('year',TIMESTAMP '2017');
ERROR: invalid input syntax for type timestamp: "2017"
LINE 1: SELECT date_part('year',TIMESTAMP '2017');
We normally do not follow MySQL. Should we check a commercial database
like Oracle, or the open-source PostgreSQL?
Sean Owen wrote on Fri, Feb 15, 2019 at 5:34 AM:
year("1912") == 1912 makes sense; month("1912") == 1 is odd but not
wrong. On the one hand, some answer might be better than none. But
then, we are trying to match Hive semantics where the SQL standard is
silent. Is this actually defined behavior in a SQL standard, or, what
does MySQL do?
On Fri,
See https://issues.apache.org/jira/browse/SPARK-26885 and
https://github.com/apache/spark/blob/71170e74df5c7ec657f61154212d1dc2ba7d0613/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
stringToTimestamp and stringToDate accept partial date strings such as
"1912"; as a result, `select year("1912")` returns 1912.