joellubi commented on code in PR #1456:
URL: https://github.com/apache/arrow-adbc/pull/1456#discussion_r1459047927
##########
go/adbc/driver/snowflake/record_reader.go:
##########

```diff
@@ -212,13 +215,7 @@ func getTransformer(sc *arrow.Schema, ld gosnowflake.ArrowStreamLoader, useHighP
 				continue
 			}
-			q := int64(t) / int64(math.Pow10(int(srcMeta.Scale)))
-			r := int64(t) % int64(math.Pow10(int(srcMeta.Scale)))
-			v, err := arrow.TimestampFromTime(time.Unix(q, r), dt.Unit)
-			if err != nil {
-				return nil, err
-			}
-			tb.Append(v)
+			tb.Append(arrow.Timestamp(t))
```

Review Comment:

It's entirely possible that they did; the behavior seems to vary between types (for example, TIMESTAMP_TZ vs. TIMESTAMP_LTZ). When I switched the type to TIMESTAMP_LTZ, all results were returned in this format rather than as the struct representation.

In this particular case, I observed that the returned Int64 value corresponds to the scalar value of the timestamp in whatever unit the type's scale specifies. If the scale is 3, the Int64 already denotes milliseconds, and so on. Since the values are already in the unit specified by the scale, we can append the existing value to the array directly.

The test [TestSqlIngestTimestampTypes](https://github.com/apache/arrow-adbc/pull/1456/files#diff-62fd1aaf4052e89445863d9d98aad5b6ff9b92e737f95e2700e516ecf965fddeR612) confirms the roundtrip behavior (it is currently skipped, but it passes once the upstream changes are pulled in).
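To illustrate the point about scale, here is a minimal sketch (stdlib only, not driver code) showing that a raw Int64 at scale 3 is already a millisecond count, so it maps to the same instant without the quotient/remainder split the old code performed. The helper `scaledToTime` is hypothetical, introduced only for this example:

```go
package main

import (
	"fmt"
	"time"
)

// scaledToTime interprets a raw integer timestamp whose unit is implied by
// the column's scale: 0 = seconds, 3 = milliseconds, 6 = microseconds,
// anything else treated as nanoseconds here. Hypothetical helper for
// illustration only; the driver instead appends the raw value directly
// to the Arrow builder as arrow.Timestamp(t).
func scaledToTime(raw int64, scale int) time.Time {
	switch scale {
	case 0:
		return time.Unix(raw, 0)
	case 3:
		return time.UnixMilli(raw)
	case 6:
		return time.UnixMicro(raw)
	default:
		return time.Unix(0, raw)
	}
}

func main() {
	// 2021-01-01T00:00:00Z is 1609459200 seconds since the Unix epoch.
	// At scale 3 the same instant arrives as 1609459200000.
	raw := int64(1609459200000)
	t := scaledToTime(raw, 3)
	fmt.Println(t.UTC().Format(time.RFC3339)) // 2021-01-01T00:00:00Z

	// The removed code split raw into quotient and remainder with
	// Pow10(scale), reconstructing a time.Time and converting back.
	// Since raw is already in the unit the scale specifies, no such
	// conversion is needed.
}
```

The key design point is that the unit conversion is a no-op once the Arrow timestamp type's unit matches the column scale, which is why the single-line `tb.Append(arrow.Timestamp(t))` suffices.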