Github user squito commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16781#discussion_r112042439
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala ---
    @@ -498,6 +498,11 @@ object DateTimeUtils {
         false
       }
     
    +  lazy val validTimezones = TimeZone.getAvailableIDs().toSet
    +  def isValidTimezone(timezoneId: String): Boolean = {
    +    validTimezones.contains(timezoneId)
    --- End diff --
    
    Java's timezone ID lookup is case-sensitive, which means Hive's is too.  I don't think we want to write out a timezone with the wrong capitalization and then have another tool throw an error.
    
    ```scala
    scala> val tzId = "America/Los_Angeles"
    tzId: String = America/Los_Angeles
    
    scala> java.util.TimeZone.getTimeZone(tzId).getID()
    res1: String = America/Los_Angeles
    
    scala> java.util.TimeZone.getTimeZone(tzId.toLowerCase()).getID()
    res2: String = GMT
    ```
    
    
https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/timestamp/NanoTimeUtils.java#L167
    
    We could try to auto-convert the user's timezone to the correct capitalization, as in the sketch below, but do you think that is worth it?
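    
    If we did go the auto-convert route, a minimal sketch (the object and method names here are hypothetical, not from this PR) could build a case-insensitive lookup over `TimeZone.getAvailableIDs()` and reject IDs that don't match any zone:
    
    ```scala
    import java.util.TimeZone
    
    // Hypothetical sketch (not part of this PR): normalize a user-supplied timezone
    // ID to its canonical capitalization instead of letting getTimeZone silently
    // fall back to GMT on a case mismatch.
    object TimezoneNormalization {
      // Map of lowercase ID -> canonical ID,
      // e.g. "america/los_angeles" -> "America/Los_Angeles"
      lazy val timezonesByLowerCaseId: Map[String, String] =
        TimeZone.getAvailableIDs().map(id => id.toLowerCase -> id).toMap
    
      // Returns the canonical ID if the input matches a known zone ignoring case,
      // or None if it is not a valid timezone at all.
      def normalizeTimezoneId(timezoneId: String): Option[String] =
        timezonesByLowerCaseId.get(timezoneId.toLowerCase)
    }
    
    // TimezoneNormalization.normalizeTimezoneId("america/los_angeles")
    //   == Some("America/Los_Angeles")
    // TimezoneNormalization.normalizeTimezoneId("Not/AZone") == None
    ```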

