GitHub user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/23014#discussion_r232885260
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/vectorized/WritableColumnVector.java
---
@@ -101,7 +101,8 @@ private void throwUnsupportedException(int requiredCapacity, Throwable cause) {
     String message = "Cannot reserve additional contiguous bytes in the vectorized reader (" +
         (requiredCapacity >= 0 ? "requested " + requiredCapacity + " bytes" : "integer overflow") +
         "). As a workaround, you can reduce the vectorized reader batch size, or disable the " +
-        "vectorized reader. For parquet file format, refer to " +
+        "vectorized reader, or disable " + SQLConf.BUCKETING_ENABLED().key() + " if you read " +
+        "from bucket table. For parquet file format, refer to " +
--- End diff ---
parquet -> Parquet
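
For readers who hit this error, the workarounds the message names map to
real Spark SQL confs. A minimal sketch, assuming a local SparkSession; the
class name and the 1024 batch size are illustrative choices, not part of
the patch:

    import org.apache.spark.sql.SparkSession;

    public class VectorizedReaderWorkarounds {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .master("local[*]")              // illustrative local session
            .appName("workaround-sketch")
            .getOrCreate();

        // Reduce the vectorized reader batch size (default: 4096 rows per batch).
        spark.conf().set("spark.sql.parquet.columnarReaderBatchSize", "1024");

        // Or disable the vectorized Parquet reader entirely.
        spark.conf().set("spark.sql.parquet.enableVectorizedReader", "false");

        // Or disable bucketing; this is the key behind SQLConf.BUCKETING_ENABLED().
        spark.conf().set("spark.sql.sources.bucketing.enabled", "false");

        spark.stop();
      }
    }

The bucketing knob is the one the amended message points at: disabling
spark.sql.sources.bucketing.enabled avoids the oversized single-batch reads
that bucketed tables can force on the vectorized reader.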
---