Github user henryr commented on a diff in the pull request:
https://github.com/apache/spark/pull/21070#discussion_r181846514
--- Diff: sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedPlainValuesReader.java ---
@@ -63,115 +58,139 @@ public final void readBooleans(int total, WritableColumnVector c, int rowId) {
     }
   }
+  private ByteBuffer getBuffer(int length) {
+    try {
+      return in.slice(length).order(ByteOrder.LITTLE_ENDIAN);
+    } catch (IOException e) {
+      throw new ParquetDecodingException("Failed to read " + length + " bytes", e);
+    }
+  }
+
   @Override
   public final void readIntegers(int total, WritableColumnVector c, int rowId) {
-    c.putIntsLittleEndian(rowId, total, buffer, offset - Platform.BYTE_ARRAY_OFFSET);
-    offset += 4 * total;
+    int requiredBytes = total * 4;
+    ByteBuffer buffer = getBuffer(requiredBytes);
+
+    for (int i = 0; i < total; i += 1) {
--- End diff --
Agreed that fixing the `ByteBuffer` / `ColumnVector` interaction should be
dealt with elsewhere. I'm just raising the possibility of _regressing_ the read
path here, because the new per-element copies are less efficient than the old
bulk copy. Since it's going to be a while before 2.4.0, that might be ok if we
commit to fixing it later, but it superficially seems like a manageable change
to this PR since the code to call the bulk APIs is already there. What do you
think?
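
For concreteness, here's a rough sketch of the kind of change I mean, reusing
the `getBuffer` helper from this diff. The `hasArray()` fast path rests on my
(untested) assumption that the slices returned by the Parquet input stream are
usually heap-backed:

```java
@Override
public final void readIntegers(int total, WritableColumnVector c, int rowId) {
  int requiredBytes = total * 4;
  ByteBuffer buffer = getBuffer(requiredBytes);

  if (buffer.hasArray()) {
    // Heap-backed slice: keep the single bulk copy that the old
    // byte[]-based code path did.
    c.putIntsLittleEndian(rowId, total, buffer.array(),
        buffer.arrayOffset() + buffer.position());
  } else {
    // Direct buffer: fall back to the per-element copy from this PR.
    // getBuffer already set little-endian order, so getInt() is correct.
    for (int i = 0; i < total; i += 1) {
      c.putInt(rowId + i, buffer.getInt());
    }
  }
}
```

Direct (off-heap) buffers would still take the slow path, but the common
heap-backed case would keep the bulk copy and shouldn't regress.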