Github user davies commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10820#discussion_r50783615
  
    --- Diff: sql/core/src/main/java/org/apache/spark/sql/execution/vectorized/ColumnarBatch.java ---
    @@ -82,25 +91,53 @@ public void close() {
       * Adapter class to interop with existing components that expect internal row. A lot of
       * performance is lost with this translation.
        */
    -  public final class Row extends InternalRow {
    +  public static final class Row extends InternalRow {
         private int rowId;
    +    private final ColumnarBatch parent;
    +    private final int fixedLenRowSize;
    +
    +    private Row(ColumnarBatch parent) {
    +      this.parent = parent;
     +      this.fixedLenRowSize = UnsafeRow.calculateFixedPortionByteSize(parent.numCols());
    +    }
     
         /**
       * Marks this row as being filtered out. This means a subsequent iteration over the rows
       * in this batch will not include this row.
          */
         public final void markFiltered() {
    -      ColumnarBatch.this.markFiltered(rowId);
    +      parent.markFiltered(rowId);
         }
     
         @Override
         public final int numFields() {
    -      return ColumnarBatch.this.numCols();
    +      return parent.numCols();
         }
     
         @Override
    +    /**
    +     * Revisit this. This is expensive.
    --- End diff ---
    
    This may be too slow for Join (or any operator that needs to hold a row). Could we use a lazily generated UnsafeProjection here?
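
    The lazy-projection idea above can be sketched as follows. This is a minimal, self-contained illustration of the pattern only: `LazyProjectionRow` and the `Function<int[], int[]>` projection are hypothetical stand-ins, not Spark's actual `UnsafeProjection` API. The point is that the expensive projection is built once, on first `copy()`, and reused for every subsequent row that an operator needs to retain.

    ```java
    import java.util.function.Function;

    // Hypothetical stand-in for the pattern suggested in the review: the real
    // code would cache an UnsafeProjection built from the batch schema; here a
    // plain Function over an int[] row plays that role.
    final class LazyProjectionRow {
      private final int[] values;                 // columnar values for this row
      private Function<int[], int[]> projection;  // built lazily, on first copy()

      LazyProjectionRow(int[] values) {
        this.values = values;
      }

      // Stand-in for the expensive projection-generation step (assumed costly,
      // so we only want to pay for it once per batch, not once per row).
      private static Function<int[], int[]> buildProjection() {
        return src -> src.clone();  // materialize an independent, owned copy
      }

      // Operators that must hold on to a row (e.g. Join) call copy(); the first
      // call generates the projection, later calls reuse the cached instance.
      int[] copy() {
        if (projection == null) {
          projection = buildProjection();
        }
        return projection.apply(values);
      }
    }
    ```

    The copy returned is independent of the batch-backed row, so it remains valid even after the adapter advances to the next row.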


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
