Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18704#discussion_r139361487
  
    --- Diff: sql/core/src/main/java/org/apache/spark/sql/execution/vectorized/WritableColumnVector.java ---
    @@ -147,6 +147,11 @@ private void throwUnsupportedException(int requiredCapacity, Throwable cause) {
       public abstract void putShorts(int rowId, int count, short[] src, int srcIndex);
     
       /**
    +   * Sets values from [rowId, rowId + count) to [src[srcIndex], src[srcIndex + count])
    --- End diff --
    
    Let's update them in this PR. BTW `WritableColumnVector` may be exposed to end users so that they can build columnar batches for the data source v2 columnar scan, so the documentation is very important.
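
    Purely for illustration (not part of the original comment), here is a minimal sketch of how an end user might fill a `WritableColumnVector` via `putShorts`. It assumes the Spark 2.3-era `OnHeapColumnVector(int capacity, DataType type)` constructor and the `org.apache.spark.sql.execution.vectorized` package; these classes have moved between Spark versions, so treat it as a sketch rather than a stable API reference.

    ```java
    import org.apache.spark.sql.execution.vectorized.OnHeapColumnVector;
    import org.apache.spark.sql.execution.vectorized.WritableColumnVector;
    import org.apache.spark.sql.types.DataTypes;

    // Hypothetical example class, not part of Spark.
    public class PutShortsSketch {
      public static void main(String[] args) {
        int capacity = 4;
        // OnHeapColumnVector is one concrete WritableColumnVector implementation.
        WritableColumnVector col = new OnHeapColumnVector(capacity, DataTypes.ShortType);

        short[] src = {10, 20, 30, 40};
        // Rows [rowId, rowId + count) receive src[srcIndex] .. src[srcIndex + count - 1],
        // so this copies src[0] .. src[3] into rows [0, 4).
        col.putShorts(0, 4, src, 0);

        for (int i = 0; i < capacity; i++) {
          System.out.println(col.getShort(i)); // prints 10, 20, 30, 40
        }
        col.close();
      }
    }
    ```

    This is the kind of contract the Javadoc above should spell out: the destination range is half-open, and the source range has the same length starting at `srcIndex`.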


