emkornfield commented on code in PR #2301:
URL: https://github.com/apache/iceberg-rust/pull/2301#discussion_r3017288176


##########
crates/iceberg/src/arrow/reader.rs:
##########
@@ -1232,6 +1253,121 @@ fn add_fallback_field_ids_to_arrow_schema(arrow_schema: &ArrowSchemaRef) -> Arc<
     ))
 }
 
+/// Coerce Arrow schema types for INT96 columns to match the Iceberg table schema.
+///
+/// arrow-rs defaults INT96 to `Timestamp(Nanosecond)`, which overflows i64 for dates outside
+/// ~1677-2262. We use arrow-rs's schema hint mechanism to read INT96 at the resolution
+/// specified by the Iceberg schema (`timestamp` → microsecond, `timestamp_ns` → nanosecond).
+///
+/// Iceberg Java handles this differently: it bypasses parquet-mr with a custom column reader
+/// (`GenericParquetReaders.TimestampInt96Reader`). We achieve the same result via schema hints.
+///
+/// References:
+/// - Iceberg spec primitive types: <https://iceberg.apache.org/spec/#primitive-types>
+/// - arrow-rs schema hint support: <https://github.com/apache/arrow-rs/pull/7285>
+fn coerce_int96_timestamps(
+    parquet_schema: &SchemaDescriptor,
+    arrow_schema: &ArrowSchemaRef,
+    iceberg_schema: &Schema,
+) -> Option<Arc<ArrowSchema>> {
+    use arrow_schema::{DataType, Field, Fields, TimeUnit};

Review Comment:
   I've received prior feedback that the general preference is to put imports at the top of the module. I would guess that applies here too, but maybe there is an exception for arrow.
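
   As a side note on the doc comment's ~1677-2262 claim: that range follows directly from the capacity of an i64 holding nanoseconds since the Unix epoch. A minimal self-contained sketch of the arithmetic (the constant names are illustrative, not from the PR):

   ```rust
   fn main() {
       // An i64 timestamp column stores nanoseconds since 1970-01-01.
       let max_ns = i64::MAX as f64; // ~9.22e18 ns
       let seconds_per_year = 365.25 * 24.0 * 3600.0;

       // How many years fit before the i64 overflows?
       let years = (max_ns / 1e9 / seconds_per_year) as i64;
       println!("i64 nanoseconds span about +/-{} years around 1970", years);
       // ~292 years each way: roughly 1677..2262, matching the doc comment.
   }
   ```

   This is why reading INT96 at microsecond resolution (for Iceberg `timestamp`) avoids the overflow for dates outside that window.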



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

