Github user nongli commented on a diff in the pull request:
https://github.com/apache/spark/pull/10593#discussion_r49893792
--- Diff: sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedPlainValuesReader.java ---
@@ -0,0 +1,50 @@
+package org.apache.spark.sql.execution.datasources.parquet;
+
+import java.io.IOException;
+
+import org.apache.spark.sql.execution.vectorized.ColumnVector;
+import org.apache.spark.unsafe.Platform;
+
+import org.apache.parquet.column.values.ValuesReader;
+
+/**
+ * An implementation of the Parquet PLAIN decoder that supports the vectorized interface.
+ */
+public class VectorizedPlainValuesReader extends ValuesReader implements VectorizedValuesReader {
+  private byte[] buffer;
+  private int offset;
+  private final int byteSize;
+
+  public VectorizedPlainValuesReader(int byteSize) {
+    this.byteSize = byteSize;
+  }
+
+  @Override
+  public void initFromPage(int valueCount, byte[] bytes, int offset) throws IOException {
+    this.buffer = bytes;
+    this.offset = offset + Platform.BYTE_ARRAY_OFFSET;
+  }
+
+  @Override
+  public void skip() {
+    offset += byteSize;
+  }
+
+  @Override
+  public void skip(int n) {
+    offset += n * byteSize;
+  }
+
+  @Override
+  public void readIntegers(int total, ColumnVector c, int rowId) {
+    c.putIntsLittleEndian(rowId, total, buffer, offset - Platform.BYTE_ARRAY_OFFSET);
--- End diff ---
I'll update the comment in putIntsLittleEndian. It assumes that the input byte
array contains little-endian encoded integers. The API does not care what the
host machine's endianness is: Parquet always stores integers as little endian,
and the column vector has to figure it out.
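To make that contract concrete, here is a minimal, hypothetical sketch of a
putIntsLittleEndian-style decode loop (the class and parameter names are
illustrative, not the actual ColumnVector API). Assembling each int
byte-by-byte makes the result independent of the host's native byte order:

```java
import java.nio.ByteOrder;

public class LittleEndianIntsSketch {
  /**
   * Decodes 'total' 4-byte little-endian integers from 'src' starting at
   * 'srcOffset' into 'dst' starting at 'dstIndex'. Building each value from
   * its individual bytes works the same on little- and big-endian hosts.
   */
  static void putIntsLittleEndian(int[] dst, int dstIndex, int total,
                                  byte[] src, int srcOffset) {
    for (int i = 0; i < total; i++) {
      int base = srcOffset + 4 * i;
      dst[dstIndex + i] = (src[base] & 0xFF)
          | (src[base + 1] & 0xFF) << 8
          | (src[base + 2] & 0xFF) << 16
          | (src[base + 3] & 0xFF) << 24;
    }
  }

  public static void main(String[] args) {
    // 1 and 256 encoded as little-endian bytes, as Parquet would store them.
    byte[] page = {1, 0, 0, 0, 0, 1, 0, 0};
    int[] out = new int[2];
    putIntsLittleEndian(out, 0, 2, page, 0);
    System.out.println(out[0] + " " + out[1]);  // prints "1 256" on any host
    System.out.println("host order: " + ByteOrder.nativeOrder());
  }
}
```

A real implementation could instead read a host-order int in one shot (e.g.
via Platform.getInt) and byte-swap with Integer.reverseBytes only on
big-endian hosts; either way, the page bytes themselves are always little
endian.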