GitHub user felixcheung commented on a diff in the pull request:

    https://github.com/apache/spark/pull/6759#discussion_r32299296
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/parquet/ParquetConverter.scala ---
    @@ -498,69 +493,21 @@ private[parquet] object CatalystArrayConverter {
     }
     
     private[parquet] object CatalystTimestampConverter {
    -  // TODO most part of this comes from Hive-0.14
    -  // Hive code might have some issues, so we need to keep an eye on it.
    -  // Also we use NanoTime and Int96Values from parquet-examples.
    -  // We utilize jodd to convert between NanoTime and Timestamp
    -  val parquetTsCalendar = new ThreadLocal[Calendar]
    -  def getCalendar: Calendar = {
    -    // this is a cache for the calendar instance.
    -    if (parquetTsCalendar.get == null) {
     -      parquetTsCalendar.set(Calendar.getInstance(TimeZone.getTimeZone("GMT")))
    -    }
    -    parquetTsCalendar.get
    -  }
    -  val NANOS_PER_SECOND: Long = 1000000000
    -  val SECONDS_PER_MINUTE: Long = 60
    -  val MINUTES_PER_HOUR: Long = 60
    -  val NANOS_PER_MILLI: Long = 1000000
     +  // see http://stackoverflow.com/questions/466321/convert-unix-timestamp-to-julian
    +  val JULIAN_DAY_OF_EPOCH = 2440587.5
    --- End diff ---
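    
    For context on the constant above: Parquet's INT96 timestamps carry a Julian day number plus nanoseconds within the day, and the constant anchors the conversion to the Unix epoch (1970-01-01T00:00Z falls at Julian day 2440587.5, i.e. day number 2440588). A minimal sketch of that arithmetic; the object and method names here are illustrative, not taken from the patch:
    ```
    object JulianTimestampSketch {
      // Julian day number containing 1970-01-01T00:00Z (Julian days start at
      // noon, so the epoch instant itself is JD 2440587.5).
      val JULIAN_DAY_OF_EPOCH = 2440588L
      val SECONDS_PER_DAY = 86400L
      val NANOS_PER_SECOND = 1000000000L

      // Convert an INT96-style (julianDay, nanosOfDay) pair to nanoseconds
      // since the Unix epoch.
      def toUnixNanos(julianDay: Int, nanosOfDay: Long): Long = {
        val daysSinceEpoch = julianDay - JULIAN_DAY_OF_EPOCH
        daysSinceEpoch * SECONDS_PER_DAY * NANOS_PER_SECOND + nanosOfDay
      }
    }
    ```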
    
    If we generate a Parquet file with a Hive query like this, could we compare the timestamp values in Spark?
    ```
    USE default;
    DROP TABLE timestamptable;
    
    CREATE TABLE timestamptable 
    STORED AS PARQUET
    AS
    SELECT cast(from_unixtime(unix_timestamp()) as timestamp) as t, * FROM sample_07;
    ```
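    
    On the Spark side, the read-back could look like this; a rough sketch assuming a spark-shell with Hive support where `sqlContext` is a HiveContext pointed at the same metastore (table and column names taken from the query above):
    ```
    // Read the Hive-written Parquet table back through Spark SQL.
    val df = sqlContext.sql("SELECT t FROM default.timestamptable")
    df.show()

    // Collect the values as java.sql.Timestamp so they can be compared
    // against what Hive itself returns for the same rows.
    df.collect().foreach { row =>
      println(row.getAs[java.sql.Timestamp](0))
    }
    ```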

