zeroshade commented on code in PR #33965:
URL: https://github.com/apache/arrow/pull/33965#discussion_r1093715942


##########
go/parquet/file/record_reader.go:
##########
@@ -751,14 +751,15 @@ type byteArrayRecordReader struct {
        valueBuf []parquet.ByteArray
 }
 
-func newByteArrayRecordReader(descr *schema.Column, info LevelInfo, mem memory.Allocator, bufferPool *sync.Pool) RecordReader {
+func newByteArrayRecordReader(descr *schema.Column, info LevelInfo, dtype arrow.DataType, mem memory.Allocator, bufferPool *sync.Pool) RecordReader {
        if mem == nil {
                mem = memory.DefaultAllocator
        }
 
-       dt := arrow.BinaryTypes.Binary
-       if descr.LogicalType().Equals(schema.StringLogicalType{}) {
-               dt = arrow.BinaryTypes.String
+       dt, ok := dtype.(arrow.BinaryDataType)
+       // arrow.DecimalType will also come through here, which we want to treat as binary
+       if !ok {
+               dt = arrow.BinaryTypes.Binary

Review Comment:
   I don't like requiring the `arrow.DataType` here, as part of the point of this interface was that you could create a record reader without needing to explicitly specify the underlying datatype: it determines it from the parquet schema. Could we instead use the metadata to determine whether we need a LargeString/LargeBinary builder, for example by checking how large the total data column is? Or some other way of determining it? I'm not sure that would work, but it would be preferable if it is possible.
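
   A minimal sketch of the metadata-driven alternative suggested above, assuming the reader could sum the column's uncompressed byte sizes over the row groups it will read. The helper name `binaryTypeFromMetadata`, its signature, and the module path version in the imports are assumptions for illustration, not existing APIs in this package; it only reuses the schema/type calls already visible in the diff.

```go
package file

import (
	"math"

	"github.com/apache/arrow/go/v12/arrow" // module path version is an assumption
	"github.com/apache/arrow/go/v12/parquet/schema"
)

// binaryTypeFromMetadata is a hypothetical helper: it derives the Arrow type
// from the parquet schema plus an aggregate size taken from the file metadata,
// instead of requiring callers to pass an arrow.DataType explicitly.
// totalUncompressed would be the column's summed uncompressed size across the
// row groups being read.
func binaryTypeFromMetadata(descr *schema.Column, totalUncompressed int64) arrow.BinaryDataType {
	// Binary/String arrays index their value buffer with int32 offsets, so a
	// column that could exceed that range needs the int64-offset Large* types.
	needsLarge := totalUncompressed > math.MaxInt32

	isString := descr.LogicalType().Equals(schema.StringLogicalType{})
	switch {
	case isString && needsLarge:
		return arrow.BinaryTypes.LargeString
	case isString:
		return arrow.BinaryTypes.String
	case needsLarge:
		return arrow.BinaryTypes.LargeBinary
	default:
		return arrow.BinaryTypes.Binary
	}
}
```

   Whether the uncompressed size recorded in the column chunk metadata is a reliable proxy for the decoded value bytes of a single read is exactly the open question in the comment, so this is only one possible heuristic.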


