uros-db commented on code in PR #51575:
URL: https://github.com/apache/spark/pull/51575#discussion_r2223854016


##########
sql/api/src/main/scala/org/apache/spark/sql/functions.scala:
##########
@@ -5699,6 +5699,39 @@ object functions {
   def unix_timestamp(s: Column, p: String): Column =
     Column.fn("unix_timestamp", s, lit(p))
 
+  /**
+   * Parses a string value to a time value.
+   *
+   * @param str
+   *   A string to be parsed to time

Review Comment:
   For Scala functions, there isn't a uniform proposed format for `@param`. The most widely used style seems to be: start with lowercase, no period at the end.
   
   Some functions instead use: start with uppercase, no period at the end.
   
   And a minimal number of functions end with a period.
   
   In conclusion, I don't think there's strict formatting here, but let's stick with a single approach across TIME-related functions. I'm fine with either way, so let's go with your current proposal: start with uppercase, period at the end.



##########
sql/api/src/main/scala/org/apache/spark/sql/functions.scala:
##########
@@ -5699,6 +5699,39 @@ object functions {
   def unix_timestamp(s: Column, p: String): Column =
     Column.fn("unix_timestamp", s, lit(p))
 
+  /**
+   * Parses a string value to a time value.
+   *
+   * @param str
+   *   A string to be parsed to time
+   * @return
+   *   A time, or raises an error if the input is malformed.
+   * @group datetime_funcs
+   * @since 4.1.0
+   */
+  def to_time(str: Column): Column = {
+    Column.fn("to_time", str)
+  }
+
+  /**
+   * Parses a string value to a time value.
+   *
+   * See <a href="https://spark.apache.org/docs/latest/sql-ref-datetime-pattern.html"> Datetime
+   * Patterns</a> for valid time format patterns.
+   *
+   * @param str
+   *   A string to be parsed to time

Review Comment:
   Same as 
https://github.com/apache/spark/pull/51575/files/a38b76dce876b8464ed30b582791ed966e84b5b6#r2223270557.



##########
sql/api/src/main/scala/org/apache/spark/sql/functions.scala:
##########
@@ -5699,6 +5699,39 @@ object functions {
   def unix_timestamp(s: Column, p: String): Column =
     Column.fn("unix_timestamp", s, lit(p))
 
+  /**
+   * Parses a string value to a time value.
+   *
+   * @param str
+   *   A string to be parsed to time
+   * @return
+   *   A time, or raises an error if the input is malformed.
+   * @group datetime_funcs
+   * @since 4.1.0
+   */
+  def to_time(str: Column): Column = {
+    Column.fn("to_time", str)
+  }
+
+  /**
+   * Parses a string value to a time value.
+   *
+   * See <a href="https://spark.apache.org/docs/latest/sql-ref-datetime-pattern.html"> Datetime
+   * Patterns</a> for valid time format patterns.
+   *
+   * @param str
+   *   A string to be parsed to time
+   * @param format
+   *   Time format pattern to follow

Review Comment:
   Same as 
https://github.com/apache/spark/pull/51575/files/a38b76dce876b8464ed30b582791ed966e84b5b6#r2223270557.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

