MaxGekk commented on code in PR #39239:
URL: https://github.com/apache/spark/pull/39239#discussion_r1058176658
##########
python/pyspark/sql/types.py:
##########
@@ -276,7 +276,15 @@ def toInternal(self, dt: datetime.datetime) -> int:
     def fromInternal(self, ts: int) -> datetime.datetime:
         if ts is not None:
             # using int to avoid precision loss in float
-            return datetime.datetime.fromtimestamp(ts // 1000000).replace(microsecond=ts % 1000000)
+            return (
+                datetime.datetime
+                # Set the time zone to UTC because the TIMESTAMP type stores timestamps
+                # as the number of microseconds from the epoch of 1970-01-01T00:00:00.000000Z
+                # in the UTC time zone.
+                .fromtimestamp(ts // 1000000, tz=datetime.timezone.utc).replace(
Review Comment:
> it was a naive datetime that is a local time when it's collected before

Just copying the offset from the epoch in UTC and declaring it a local offset is incorrect, I believe, so the current implementation is definitely wrong. What I did here is semantically the same as what we do when converting TimestampType to its external Java type, java.time.Instant.
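
To make the difference concrete, here is a minimal standalone sketch (not code from the PR; the timestamp value is made up):

```python
import datetime

# Hypothetical microsecond offset from the epoch of 1970-01-01T00:00:00.000000Z,
# i.e. what TimestampType stores internally.
ts = 1_600_000_000_123_456

# Current implementation: without a tz argument, fromtimestamp() converts the
# epoch offset to the *local* wall clock and returns a naive datetime, so the
# field values depend on the time zone of the machine it runs on.
naive = datetime.datetime.fromtimestamp(ts // 1000000).replace(microsecond=ts % 1000000)

# This patch: pin the conversion to UTC, mirroring the semantics of converting
# TimestampType to the external Java type java.time.Instant.
aware = datetime.datetime.fromtimestamp(ts // 1000000, tz=datetime.timezone.utc).replace(
    microsecond=ts % 1000000
)

print(naive)  # e.g. 2020-09-13 14:26:40.123456 on a UTC+02:00 machine, no tzinfo
print(aware)  # 2020-09-13 12:26:40.123456+00:00 everywhere
```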
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]