lidavidm commented on code in PR #2917:
URL: https://github.com/apache/arrow-adbc/pull/2917#discussion_r2127671967


##########
docs/source/driver/snowflake.rst:
##########
@@ -469,6 +469,10 @@ These options map 1:1 with the Snowflake `Config object <https://pkg.go.dev/gith
     non-zero scaled columns will be returned as ``Float64`` typed Arrow columns.
     The default is ``true``.
 
+``adbc.snowflake.sql.client_option.use_max_microseconds_precision``
+    When ``true``, nanoseconds will be converted to microseconds
+    to avoid the overflow of the Timestamp type. Does not affect the Time32 or Time64 types.

Review Comment:
   (1) Can we explicitly list what types it _does_ apply to?
   (2) Is there any chance we will ever want to pick seconds/milliseconds instead of microseconds?
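
   For reference, a minimal sketch of how an application might enable this option through the Go ADBC Snowflake driver. This is an illustration, not code from the PR: the DSN is a placeholder, the Arrow Go import path depends on the module version in use, and the option value is assumed to be the strings ``"true"``/``"false"`` like the other client options on this page.

   ```go
   package main

   import (
   	"context"
   	"log"

   	"github.com/apache/arrow-adbc/go/adbc"
   	"github.com/apache/arrow-adbc/go/adbc/driver/snowflake"
   	"github.com/apache/arrow-go/v18/arrow/memory"
   )

   func main() {
   	drv := snowflake.NewDriver(memory.DefaultAllocator)

   	// Placeholder connection string; replace with real credentials.
   	db, err := drv.NewDatabase(map[string]string{
   		adbc.OptionKeyURI: "user:password@account/database/schema",
   		// Assumed to be set as a database option, like the other
   		// adbc.snowflake.sql.client_option.* settings.
   		"adbc.snowflake.sql.client_option.use_max_microseconds_precision": "true",
   	})
   	if err != nil {
   		log.Fatal(err)
   	}

   	cnxn, err := db.Open(context.Background())
   	if err != nil {
   		log.Fatal(err)
   	}
   	defer cnxn.Close()

   	// ... run queries; TIMESTAMP/TIMESTAMP_NTZ columns would then come back
   	// with microsecond-unit Arrow timestamps instead of nanoseconds.
   }
   ```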



##########
go/adbc/driver/snowflake/connection.go:
##########
@@ -483,9 +484,17 @@ func (c *connectionImpl) toArrowField(columnInfo driverbase.ColumnInfo) arrow.Fi
        case "DATETIME":
                fallthrough
        case "TIMESTAMP", "TIMESTAMP_NTZ":
-               field.Type = &arrow.TimestampType{Unit: arrow.Nanosecond}
+               if c.useMaxMicrosecondTimestampPrecision {

Review Comment:
   I think that's unavoidable given the nature of the option hierarchy. But ideally we should respect the option wherever possible.
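
   A minimal sketch of how the unit selection could read once the option is threaded through. This is an assumed continuation of the diff above, not the exact PR code, and the Arrow import path depends on the Arrow Go module version:

   ```go
   package sketch

   import "github.com/apache/arrow-go/v18/arrow"

   // timestampType picks the Arrow timestamp unit for Snowflake
   // TIMESTAMP/TIMESTAMP_NTZ (and DATETIME) columns based on the
   // use_max_microseconds_precision client option.
   func timestampType(useMaxMicrosecondPrecision bool) arrow.DataType {
   	if useMaxMicrosecondPrecision {
   		// Microsecond unit avoids int64 overflow: nanosecond timestamps
   		// only cover roughly the years 1677-2262.
   		return &arrow.TimestampType{Unit: arrow.Microsecond}
   	}
   	return &arrow.TimestampType{Unit: arrow.Nanosecond}
   }
   ```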



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
