kazuyukitanimura commented on a change in pull request #34611:
URL: https://github.com/apache/spark/pull/34611#discussion_r750833865



##########
File path: sql/core/src/main/java/org/apache/spark/sql/execution/vectorized/WritableColumnVector.java
##########
@@ -179,6 +180,18 @@ public WritableColumnVector reserveDictionaryIds(int capacity) {
    */
   protected abstract void reserveInternal(int capacity);
 
+  /**
+   * Each byte of the returned value (long) has one bit from `bits`, i.e. it is equivalent to
+   *    byte[] a = {(byte)(bits >> 0 & 1), (byte)(bits >> 1 & 1),
+   *                (byte)(bits >> 2 & 1), (byte)(bits >> 3 & 1),
+   *                (byte)(bits >> 4 & 1), (byte)(bits >> 5 & 1),
+   *                (byte)(bits >> 6 & 1), (byte)(bits >> 7 & 1)};
+   *    return ByteBuffer.wrap(a).getLong();
+   */
+  protected final long toBitPerByte(int bits) {
+    return ((bits * 0x8040201008040201L) >>> 7) & 0x101010101010101L;

Review comment:
       The purpose is to convert `0x000000FF` to `0x0101010101010101L` in as few ops as possible.
   The simple shift-and-mask version would be something like
   ```((bits << 56) | (bits << 47) | (bits << 38) | (bits << 29) | (bits << 20) | (bits << 11) | (bits << 2) | (bits >> 7)) & 0x101010101010101L```
   which requires 16 ops (8 shifts, 7 ORs, and 1 AND).
   The equivalent
   ```((bits * 0x8040201008040201L) >>> 7) & 0x101010101010101L```
   requires only 3 ops (1 multiplication, 1 shift, and 1 AND).
   Multiplication on modern CPUs is about as fast as one clock cycle, so this code saves 13 ops.
   
   I benchmarked several times, but the variance between runs was too large to see the gain.
   Given that `bits * 0x8040201008040201L` is easy enough to understand, I kept it; in theory, it should be faster.
   
   I updated the comment; hopefully it is clearer now.
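
   As a sanity check, here is a small standalone sketch (not part of the PR; the class name and the naive reference implementation are mine) that verifies the multiply-shift-mask trick against a bit-by-bit expansion for all 256 possible inputs:
   ```java
   public class BitPerByteCheck {
     // Same expression as WritableColumnVector#toBitPerByte.
     static long toBitPerByte(int bits) {
       return ((bits * 0x8040201008040201L) >>> 7) & 0x101010101010101L;
     }

     // Naive reference: bit 0 of `bits` ends up in the most significant byte,
     // matching the big-endian ByteBuffer example in the Javadoc.
     static long naive(int bits) {
       long result = 0;
       for (int i = 0; i < 8; i++) {
         result = (result << 8) | ((bits >> i) & 1);
       }
       return result;
     }

     public static void main(String[] args) {
       for (int bits = 0; bits <= 0xFF; bits++) {
         if (toBitPerByte(bits) != naive(bits)) {
           throw new AssertionError("mismatch at bits = " + bits);
         }
       }
       // Prints 0x0101010101010101: every byte holds one bit of 0xFF.
       System.out.printf("0xFF -> 0x%016X%n", toBitPerByte(0xFF));
     }
   }
   ```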

##########
File path: sql/core/src/main/java/org/apache/spark/sql/execution/vectorized/WritableColumnVector.java
##########
@@ -201,6 +214,15 @@ public WritableColumnVector reserveDictionaryIds(int capacity) {
    */
   public abstract void putBooleans(int rowId, int count, boolean value);
 
+  /**
+   * Sets bits [srcIndex, srcIndex + count) of `src` to rows [rowId, rowId + count).
+   * `src` must be non-negative and contain an 8-bit bitmask in its lowest byte.
+   */
+  public void putBooleans(int rowId, int count, int src, int srcIndex) {
+    putBytes(rowId, count, ByteBuffer.allocate(8).putLong(toBitPerByte(src)).array(), srcIndex);
+  }
+  public void putBooleans(int rowId, int src) { putBooleans(rowId, 8, src, 0); }

Review comment:
       Updated
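
   For illustration, a hypothetical usage sketch of the new overload (standalone and not from the PR: the input byte, capacity, and printing are my assumptions, and it needs Spark on the classpath for `OnHeapColumnVector`):
   ```java
   import java.io.ByteArrayInputStream;
   import java.io.InputStream;
   import org.apache.spark.sql.execution.vectorized.OnHeapColumnVector;
   import org.apache.spark.sql.types.DataTypes;

   public class PutBooleansSketch {
     public static void main(String[] args) throws Exception {
       // One packed bitmask byte, e.g. as read from a Parquet boolean page.
       InputStream in = new ByteArrayInputStream(new byte[] {(byte) 0xB5});
       OnHeapColumnVector vector = new OnHeapColumnVector(8, DataTypes.BooleanType);

       int b = in.read();          // InputStream#read already returns an int in 0..255 (or -1 at EOF)
       if (b >= 0) {
         vector.putBooleans(0, b); // expands bit 0 -> row 0, ..., bit 7 -> row 7
       }
       for (int row = 0; row < 8; row++) {
         System.out.println("row " + row + " = " + vector.getBoolean(row));
       }
       vector.close();
     }
   }
   ```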

##########
File path: sql/core/src/main/java/org/apache/spark/sql/execution/vectorized/WritableColumnVector.java
##########
@@ -201,6 +214,15 @@ public WritableColumnVector reserveDictionaryIds(int capacity) {
    */
   public abstract void putBooleans(int rowId, int count, boolean value);
 
+  /**
+   * Sets bits [srcIndex, srcIndex + count) of `src` to rows [rowId, rowId + count).
+   * `src` must be non-negative and contain an 8-bit bitmask in its lowest byte.
+   */
+  public void putBooleans(int rowId, int count, int src, int srcIndex) {
+    putBytes(rowId, count, ByteBuffer.allocate(8).putLong(toBitPerByte(src)).array(), srcIndex);

Review comment:
       Updated

##########
File path: sql/core/src/main/java/org/apache/spark/sql/execution/vectorized/WritableColumnVector.java
##########
@@ -201,6 +214,15 @@ public WritableColumnVector reserveDictionaryIds(int capacity) {
    */
   public abstract void putBooleans(int rowId, int count, boolean value);
 
+  /**
+   * Sets bits [srcIndex, srcIndex + count) of `src` to rows [rowId, rowId + count).
+   * `src` must be non-negative and contain an 8-bit bitmask in its lowest byte.
+   */
+  public void putBooleans(int rowId, int count, int src, int srcIndex) {

Review comment:
       `int src` is more convenient and requires fewer casts.
   In particular, `int src` prevents us from forgetting `& 0xFF`: with `byte src`, we would have to write `((long) src) & 0xFF`, because a `byte` holding `0xFF` sign-extends so that `(long) src == 0xFFFFFFFFFFFFFFFFL`, which is not convenient for `|` operations.
   
   In any case, `in.read()` returns an `int`, so it is more natural to take an `int` here.
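
   A minimal standalone sketch of the sign-extension pitfall (not from the PR; the class and variable names are mine):
   ```java
   public class SignExtensionDemo {
     public static void main(String[] args) {
       byte b = (byte) 0xFF;   // an explicit cast is required: 0xFF does not fit in a signed byte
       long widened = b;       // sign-extends to 0xFFFFFFFFFFFFFFFF
       long masked = b & 0xFF; // the mask recovers the intended 0x00000000000000FF

       System.out.printf("widened = 0x%016X%n", widened);
       System.out.printf("masked  = 0x%016X%n", masked);

       int i = 0xFF;           // with `int`, the value is already non-negative
       System.out.printf("as int  = 0x%016X%n", (long) i);
     }
   }
   ```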

##########
File path: sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/DataSourceReadBenchmark.scala
##########
@@ -541,6 +548,9 @@ object DataSourceReadBenchmark extends SqlBasedBenchmark {
   }
 
   override def runBenchmarkSuite(mainArgs: Array[String]): Unit = {
+    runBenchmark("SQL Single Boolean Column Scan") {
+      numericScanBenchmark(1024 * 1024 * 15, BooleanType)

Review comment:
       Updated



