sadikovi commented on a change in pull request #34611:
URL: https://github.com/apache/spark/pull/34611#discussion_r753867551
##########
File path:
sql/core/src/test/scala/org/apache/spark/sql/execution/vectorized/ColumnarBatchSuite.scala
##########
@@ -130,6 +133,89 @@ class ColumnarBatchSuite extends SparkFunSuite {
}
}
+ testVector("Boolean APIs", 1024, BooleanType) {
+ column =>
+ val reference = mutable.ArrayBuffer.empty[Boolean]
+
+ var values = Array(true, false, true, false, false)
+ var bits = values.foldRight(0)((b, i) => i << 1 | (if (b) 1 else 0)).toByte
+ column.appendBooleans(2, bits, 0)
+ reference ++= values.slice(0, 2)
+
+ column.appendBooleans(3, bits, 2)
+ reference ++= values.slice(2, 5)
+
+ column.appendBooleans(6, true)
+ reference ++= Array.fill(6)(true)
+
+ column.appendBoolean(false)
+ reference += false
+
+ var idx = column.elementsAppended
+
+ values = Array(true, true, false, true, false, true, false, true)
+ bits = values.foldRight(0)((b, i) => i << 1 | (if (b) 1 else 0)).toByte
+ column.putBooleans(idx, 2, bits, 0)
+ reference ++= values.slice(0, 2)
+ idx += 2
+
+ column.putBooleans(idx, 3, bits, 2)
+ reference ++= values.slice(2, 5)
+ idx += 3
+
+ column.putBooleans(idx, bits)
+ reference ++= values
+ idx += 8
+
+ column.putBoolean(idx, false)
+ reference += false
+ idx += 1
+
+ column.putBooleans(idx, 3, true)
+ reference ++= Array.fill(3)(true)
+ idx += 3
+
+ implicit def intToByte(i: Int): Byte = i.toByte
+ val buf = ByteBuffer.wrap(Array(0x33, 0x5A, 0xA5, 0xCC, 0x0F, 0xF0, 0xEE))
+ val reader = new VectorizedPlainValuesReader()
+ reader.initFromPage(0, ByteBufferInputStream.wrap(buf))
+ column.putBoolean(idx, reader.readBoolean) // bit index 0
+ reference += true
+ idx += 1
+
+ reader.skipBooleans(1)
+
+ column.putBoolean(idx, reader.readBoolean) // bit index 2
+ reference += false
+ idx += 1
+
+ reader.skipBooleans(5)
+
+ column.putBoolean(idx, reader.readBoolean) // bit index 8
+ reference += false
+ idx += 1
+
+ reader.skipBooleans(8)
Review comment:
How about we include `skipBooleans(0)`?
##########
File path:
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedPlainValuesReader.java
##########
@@ -53,19 +53,47 @@ public void skip() {
throw new UnsupportedOperationException();
}
+ private void updateCurrentByte() {
+ try {
+ currentByte = (byte) in.read();
+ } catch (IOException e) {
+ throw new ParquetDecodingException("Failed to read a byte", e);
+ }
+ }
+
@Override
public final void readBooleans(int total, WritableColumnVector c, int rowId) {
- // TODO: properly vectorize this
- for (int i = 0; i < total; i++) {
- c.putBoolean(rowId + i, readBoolean());
+ int i = 0;
+ if (bitOffset > 0) {
+ i = Math.min(8 - bitOffset, total);
+ c.putBooleans(rowId, i, currentByte, bitOffset);
+ bitOffset = (bitOffset + i) & 7;
+ }
+ for (; i + 7 < total; i += 8) {
+ updateCurrentByte();
+ c.putBooleans(rowId + i, currentByte);
+ }
+ if (i < total) {
+ updateCurrentByte();
+ bitOffset = total - i;
+ c.putBooleans(rowId + i, bitOffset, currentByte, 0);
}
}
@Override
public final void skipBooleans(int total) {
- // TODO: properly vectorize this
- for (int i = 0; i < total; i++) {
- readBoolean();
+ // using >>3 instead of /8 below since Java division rounds towards zero i.e. (-1)/8=0
Review comment:
Can we revisit this comment? It might be inaccurate: in this case, `(total - (8 - bitOffset))` should never be negative; if it is, that indicates a bug in the code.
You could also add a check so that skipping 0 values is a no-op. I am curious whether there is a test for that.
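To make the suggestion concrete, here is a rough sketch of how the skip path could look with an explicit early return for `total == 0`. It assumes the same `bitOffset`/`currentByte` bookkeeping visible elsewhere in this diff and is only an illustration, not the final code:
```java
@Override
public final void skipBooleans(int total) {
  // Skipping zero values should be a no-op and must not touch the input stream.
  if (total == 0) return;
  // First consume any bits left in the currently buffered byte.
  if (bitOffset > 0) {
    int n = Math.min(8 - bitOffset, total);
    bitOffset = (bitOffset + n) & 7;
    total -= n;
    if (total == 0) return;
  }
  // At this point total > 0 and bitOffset == 0, so the quantity divided by 8 is never
  // negative and the (-1)/8 concern from the original comment does not apply.
  int fullBytes = total / 8;
  for (int i = 0; i < fullBytes; i++) {
    updateCurrentByte();  // discard a whole byte (8 packed values)
  }
  bitOffset = total & 7;
  if (bitOffset > 0) {
    updateCurrentByte();  // buffer the byte holding the remaining partial bits
  }
}
```
A real implementation would probably skip whole bytes in bulk on the underlying stream rather than reading them one at a time; the loop above is only there to keep the sketch self-contained.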
##########
File path:
sql/core/src/main/java/org/apache/spark/sql/execution/vectorized/WritableColumnVector.java
##########
@@ -470,6 +493,18 @@ public final int appendBooleans(int count, boolean v) {
return result;
}
+ /**
+ * Append bits from [src[offset], src[offset + count])
+ * src must contain bit-packed 8 Booleans in the byte.
Review comment:
nit: `booleans` or `boolean values`.
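For instance, the doc comment could read something along these lines (only a wording suggestion, matching the semantics implied by the test above):
```java
/**
 * Appends the boolean values stored in bits [offset, offset + count) of src.
 * src is a single byte holding 8 bit-packed boolean values.
 */
```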
##########
File path:
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedPlainValuesReader.java
##########
@@ -53,19 +53,47 @@ public void skip() {
throw new UnsupportedOperationException();
}
+ private void updateCurrentByte() {
+ try {
+ currentByte = (byte) in.read();
+ } catch (IOException e) {
+ throw new ParquetDecodingException("Failed to read a byte", e);
+ }
+ }
+
@Override
public final void readBooleans(int total, WritableColumnVector c, int rowId) {
- // TODO: properly vectorize this
- for (int i = 0; i < total; i++) {
- c.putBoolean(rowId + i, readBoolean());
+ int i = 0;
+ if (bitOffset > 0) {
+ i = Math.min(8 - bitOffset, total);
Review comment:
Do we need to update `total` here?
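If I am reading the diff correctly, the later bounds compare `i` (which already counts the head bits) against the original `total`, so nothing needs to shrink. A worked example with assumed values `total = 10` and `bitOffset = 6`, just to spell out the arithmetic; please double-check:
```java
int i = 0;
if (bitOffset > 0) {
  // Head: copy the bits left in the buffered byte, at most total of them.
  i = Math.min(8 - bitOffset, total);        // min(2, 10) = 2 values
  c.putBooleans(rowId, i, currentByte, bitOffset);
  bitOffset = (bitOffset + i) & 7;           // (6 + 2) & 7 = 0
}
// The loop condition uses i, which already includes the 2 head values.
for (; i + 7 < total; i += 8) {              // i = 2: 9 < 10, copy 8 values, i = 10
  updateCurrentByte();
  c.putBooleans(rowId + i, currentByte);
}
if (i < total) {                             // 10 < 10 is false: all 10 values copied
  updateCurrentByte();
  bitOffset = total - i;
  c.putBooleans(rowId + i, bitOffset, currentByte, 0);
}
```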
##########
File path:
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedPlainValuesReader.java
##########
@@ -53,19 +53,47 @@ public void skip() {
throw new UnsupportedOperationException();
}
+ private void updateCurrentByte() {
Review comment:
I am sorry for being a bit late with this kind of comment; it is fine as is, but to make the method better reflect its purpose, shall we rename it to something like this:
```java
private void readNextByte() {
  try {
    currentByte = (byte) in.read();
  } catch (IOException e) {
    throw new ParquetDecodingException("Failed to read the next byte when decoding boolean values", e);
  }
}
```
It is up to you whether to change it.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]