MaxGekk commented on a change in pull request #35502:
URL: https://github.com/apache/spark/pull/35502#discussion_r809787232
##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
##########
@@ -1163,4 +1163,40 @@ object DateTimeUtils {
     val localStartTs = getLocalDateTime(startMicros, zoneId)
     ChronoUnit.MICROS.between(localStartTs, localEndTs)
   }
+
+  /**
+   * Adds the specified number of units to a timestamp.
+   *
+   * @param unit A keyword that specifies the interval units to add to the input timestamp.
+   * @param quantity The amount of `unit`s to add. It can be positive or negative.
+   * @param micros The input timestamp value, expressed in microseconds since 1970-01-01 00:00:00Z.
+   * @param zoneId The time zone ID at which the operation is performed.
+   * @return A timestamp value, expressed in microseconds since 1970-01-01 00:00:00Z.
+   */
+  def timestampAdd(unit: String, quantity: Int, micros: Long, zoneId: ZoneId): Long = {
+    unit.toUpperCase(Locale.ROOT) match {
+      case "MICROSECOND" =>
+        timestampAddDayTime(micros, quantity, zoneId)
+      case "MILLISECOND" =>
+        timestampAddDayTime(micros, quantity * MICROS_PER_MILLIS, zoneId)
+      case "SECOND" =>
+        timestampAddDayTime(micros, quantity * MICROS_PER_SECOND, zoneId)
+      case "MINUTE" =>
+        timestampAddDayTime(micros, quantity * MICROS_PER_MINUTE, zoneId)
+      case "HOUR" =>
+        timestampAddDayTime(micros, quantity * MICROS_PER_HOUR, zoneId)
+      case "DAY" | "DAYOFYEAR" =>
+        timestampAddDayTime(micros, quantity * MICROS_PER_DAY, zoneId)
+      case "WEEK" =>
+        timestampAddDayTime(micros, quantity * MICROS_PER_DAY * DAYS_PER_WEEK, zoneId)
+      case "MONTH" =>
+        timestampAddMonths(micros, quantity, zoneId)
+      case "QUARTER" =>
+        timestampAddMonths(micros, quantity * 3, zoneId)
+      case "YEAR" =>
+        timestampAddMonths(micros, quantity * MONTHS_PER_YEAR, zoneId)
+      case _ =>
+        throw QueryExecutionErrors.invalidUnitInTimestampAdd(unit)
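
(For context, a minimal usage sketch of the new helper; the concrete values and zone IDs are illustrative assumptions, not part of this PR:

```scala
import java.time.ZoneId

import org.apache.spark.sql.catalyst.util.DateTimeUtils

// Add 3 hours to the epoch (0 microseconds since 1970-01-01 00:00:00Z).
// Day-time units are fixed durations, so the expected result is
// 3 * 3600000000L = 10800000000L microseconds.
val plusHours = DateTimeUtils.timestampAdd("HOUR", 3, 0L, ZoneId.of("UTC"))

// Month-based units go through timestampAddMonths, which applies
// calendar arithmetic in the given time zone rather than a fixed duration.
val plusMonth = DateTimeUtils.timestampAdd("MONTH", 1, 0L,
  ZoneId.of("America/Los_Angeles"))
```

An unsupported unit only fails here, at evaluation time, via `QueryExecutionErrors.invalidUnitInTimestampAdd`.)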
Review comment:
> It's a waste of resource if we submit a Spark job which fails with wrong unit name.

Not sure. Can you imagine a cluster of 1000 executors waiting for the driver that is still processing a query because we eagerly want to check everything, even when the user's queries and data don't have any issues? That is a real waste of users' resources.
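
To make the trade-off concrete, the eager alternative under discussion would be a driver-side check along these lines (a hypothetical sketch; `supportedUnits` and `checkUnit` are made-up names, not code from this PR):

```scala
import java.util.Locale

// Hypothetical eager validation: reject a bad unit on the driver,
// before a job is submitted and executors are tied up.
val supportedUnits: Set[String] = Set(
  "MICROSECOND", "MILLISECOND", "SECOND", "MINUTE", "HOUR",
  "DAY", "DAYOFYEAR", "WEEK", "MONTH", "QUARTER", "YEAR")

def checkUnit(unit: String): Unit = {
  if (!supportedUnits.contains(unit.toUpperCase(Locale.ROOT))) {
    throw new IllegalArgumentException(s"Invalid unit in timestampadd: $unit")
  }
}
```

The argument above is that every query would pay this driver-side cost up front, whereas the lazy `case _ => throw ...` branch in the diff only costs anything when an invalid unit actually reaches execution.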