chenghuichen commented on code in PR #7077:
URL: https://github.com/apache/paimon/pull/7077#discussion_r2707982571


##########
paimon-python/pypaimon/schema/data_types.py:
##########
@@ -589,31 +595,40 @@ def to_avro_type(field_type: pyarrow.DataType, field_name: str) -> Union[str, Di
             return {"type": "int", "logicalType": "date"}
         elif pyarrow.types.is_timestamp(field_type):
             unit = field_type.unit
-            if unit == 'us':
-                return {"type": "long", "logicalType": "timestamp-micros"}
-            elif unit == 'ms':
-                return {"type": "long", "logicalType": "timestamp-millis"}
+            if field_type.tz is None:
+                if unit == 'us':
+                    return {"type": "long", "logicalType": "local-timestamp-micros"}
+                elif unit == 'ms':
+                    return {"type": "long", "logicalType": "local-timestamp-millis"}
+                else:
+                    return {"type": "long", "logicalType": "local-timestamp-micros"}

Review Comment:
   @JingsongLi hello, I have had a discussion with HongBo about this part of the code.
   
   My understanding is that the type mapping in this code matches the design intent, but it is inconsistent with the Java version. And the Java version, like Flink, may be a historical mistake: https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/formats/avro/
   
   > Before 1.19, The default behavior of Flink wrongly mapped both SQL TIMESTAMP and TIMESTAMP_LTZ type to AVRO TIMESTAMP.
   > The correct behavior is Flink SQL TIMESTAMP maps Avro LOCAL TIMESTAMP and Flink SQL TIMESTAMP_LTZ maps Avro TIMESTAMP
   
   If my understanding is correct, the question now becomes whether pypaimon should fix this in advance. I need your help to confirm this.
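   
   For illustration, the mapping the diff implements can be sketched as follows. This is a hedged sketch, not the actual pypaimon code: the diff only shows the `tz is None` branch, so the branch for zoned timestamps (mapping to plain `timestamp-*`) is an assumption based on the Avro spec correspondence quoted above, and `to_avro_timestamp_type` is a hypothetical helper name.
   
   ```python
   import pyarrow
   
   def to_avro_timestamp_type(field_type: pyarrow.DataType) -> dict:
       # Sketch of the mapping under discussion:
       #   naive timestamps (tz is None)  -> Avro local-timestamp-*
       #   zoned timestamps (tz set)      -> Avro timestamp-*  (assumed branch)
       # Units other than 'ms' fall back to micros, as in the diff.
       assert pyarrow.types.is_timestamp(field_type)
       prefix = "local-timestamp" if field_type.tz is None else "timestamp"
       suffix = "millis" if field_type.unit == "ms" else "micros"
       return {"type": "long", "logicalType": f"{prefix}-{suffix}"}
   ```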



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
