lidavidm commented on code in PR #197:
URL: https://github.com/apache/arrow-go/pull/197#discussion_r1859403082
##########
parquet/pqarrow/properties.go:
##########
@@ -165,6 +165,11 @@ type ArrowReadProperties struct {
 	Parallel bool
 	// BatchSize is the size used for calls to NextBatch when reading whole columns
 	BatchSize int64
+	// Setting ForceLarge to true will force the reader to use LargeString/LargeBinary
+	// for string and binary columns respectively, instead of the default variants. This
+	// can be necessary if you know that there are columns which contain more than 2GB of
+	// data, which would prevent use of int32 offsets.
+	ForceLarge bool
Review Comment:
Doing it automatically would be surprising to users IMO. It would also
potentially produce inconsistent schemas when reading multiple files.
Reducing the batch size may make sense; alternatively, an option to use
StringView?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]