zeroshade commented on code in PR #1456:
URL: https://github.com/apache/arrow-adbc/pull/1456#discussion_r1462153780
##########
go/adbc/driver/snowflake/record_reader.go:
##########
@@ -212,13 +215,7 @@ func getTransformer(sc *arrow.Schema, ld gosnowflake.ArrowStreamLoader, useHighP
continue
}
- q := int64(t) / int64(math.Pow10(int(srcMeta.Scale)))
- r := int64(t) % int64(math.Pow10(int(srcMeta.Scale)))
- v, err := arrow.TimestampFromTime(time.Unix(q, r), dt.Unit)
- if err != nil {
- return nil, err
- }
- tb.Append(v)
+ tb.Append(arrow.Timestamp(t))
Review Comment:
> Since the values are already in the unit specified by scale, we can just add the existing value to the array

This assumes the unit of the Arrow column matches the scale that Snowflake is returning, which I'm not sure is a valid assumption. We should make sure we scale the value properly to the desired Arrow unit (which is what the previous code did). Are we sure we can always assume the unit of the Arrow column will match the scale that Snowflake returned?
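
For illustration only, here is a minimal sketch of what "scaling to the desired Arrow unit" could look like. The helper name `rescaleEpoch`, the sample values, and the Arrow module path/version are assumptions for this sketch, not code from this PR:

```go
package main

import (
	"fmt"
	"math"

	"github.com/apache/arrow/go/v15/arrow" // module path/version is an assumption
)

// rescaleEpoch is a hypothetical helper (not part of the PR): it rescales a
// raw epoch value carrying `scale` digits of fractional seconds into the
// precision of the target Arrow time unit. Overflow checking is omitted.
func rescaleEpoch(t int64, scale int, unit arrow.TimeUnit) int64 {
	// Fractional-second digits carried by each Arrow unit.
	unitDigits := map[arrow.TimeUnit]int{
		arrow.Second:      0,
		arrow.Millisecond: 3,
		arrow.Microsecond: 6,
		arrow.Nanosecond:  9,
	}[unit]

	switch {
	case unitDigits == scale:
		return t // already in the desired unit
	case unitDigits > scale:
		return t * int64(math.Pow10(unitDigits-scale)) // scale up
	default:
		return t / int64(math.Pow10(scale-unitDigits)) // scale down (truncates)
	}
}

func main() {
	// e.g. Snowflake returns scale=3 (milliseconds) while the Arrow column is
	// a microsecond timestamp: 1_700_000_000_123 ms -> 1_700_000_000_123_000 us.
	fmt.Println(rescaleEpoch(1_700_000_000_123, 3, arrow.Microsecond))
}
```

With something along those lines, the append could stay a single call, e.g. `tb.Append(arrow.Timestamp(rescaleEpoch(int64(t), int(srcMeta.Scale), dt.Unit)))`, while still honoring the column's unit.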