srowen commented on issue #23411: [SPARK-26503][CORE] Get rid of 
spark.sql.legacy.timeParser.enabled
URL: https://github.com/apache/spark/pull/23411#issuecomment-450573302
 
 
   Hm, this is a tough one @MaxGekk . For the test case that failed most recently, the randomly generated timestamp that causes the problem is -61688070376409L. I believe it's Jackson that ultimately serializes the timestamp to a formatted date, and it produces 
`{"index":2,"col":"0015-03-08T09:00:45.591-07:52"}`
   
   That's consistent with `java.time` formatting, sort of:
   ```
   scala> java.time.Instant.ofEpochMilli(-61688070376409L)
   res0: java.time.Instant = 0015-03-08T16:53:43.591Z
   ```
   
   After accounting for the time zone offset, there's a 58-second discrepancy: 09:00:45.591 at offset -07:52 is 16:52:45.591Z, vs. 16:53:43.591Z from `java.time`.
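   
   That 58-second gap looks like a truncated local-mean-time offset. A quick check of this theory (assuming the failing test ran with the America/Los_Angeles time zone, which is my guess, not something stated in the PR) — before 1883 the tz database uses local mean time for that zone, whose offset is -07:52:58, not a whole number of minutes:
   ```scala
   import java.time.{Instant, ZoneId}

   // Same epoch millis as in the failing test, rendered in the
   // (assumed) America/Los_Angeles zone, where pre-1883 instants
   // fall back to local mean time with offset -07:52:58.
   val zdt = Instant.ofEpochMilli(-61688070376409L)
     .atZone(ZoneId.of("America/Los_Angeles"))

   println(zdt)            // 0015-03-08T09:00:45.591-07:52:58[America/Los_Angeles]
   println(zdt.getOffset)  // -07:52:58
   ```
   Truncating that offset to `-07:52` when printing, and then reading the value back, shifts the wall-clock time by exactly 58 seconds — the same gap as in the failing test.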
   
   That's also the difference in the Spark test failures:
   ```
   |2    |0015-03-10 08:53:43.591|
   ...
   |2    |0015-03-10 08:52:45.591|
   ```
   ... but there's a nearly 2-day difference from the `java.time` formatting!
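   
   The ~2-day gap matches the difference between the legacy `java.util` hybrid Julian/Gregorian calendar (which `SimpleDateFormat` uses, switching to the Julian calendar before 1582) and the proleptic Gregorian calendar used by `java.time`; in the first century the two calendars are 2 days apart. A minimal comparison (my own sketch, not code from the PR):
   ```scala
   import java.text.SimpleDateFormat
   import java.time.{Instant, ZoneOffset}
   import java.util.{Date, TimeZone}

   val millis = -61688070376409L

   // Legacy path: SimpleDateFormat's hybrid calendar interprets
   // pre-1582 dates with Julian calendar rules.
   val legacy = new SimpleDateFormat("yyyy-MM-dd")
   legacy.setTimeZone(TimeZone.getTimeZone("UTC"))
   println(legacy.format(new Date(millis)))  // 0015-03-10

   // java.time path: proleptic Gregorian calendar all the way back.
   println(Instant.ofEpochMilli(millis).atOffset(ZoneOffset.UTC).toLocalDate)  // 0015-03-08
   ```
   The legacy formatter lands on 03-10 (as in the Spark test output above) while `java.time` lands on 03-08, independent of any time zone issue.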
   
   Is this what you're getting at with https://github.com/apache/spark/pull/23391, and I am just catching up?
   
   I suppose we can't merge this until this difference is actually fixed, but until that happens, the new functionality doesn't quite work for old dates, right?
