Github user davies commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11435#discussion_r54939033
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/ExistingRDD.scala ---
    @@ -149,14 +149,55 @@ private[sql] case class PhysicalRDD(
         ctx.INPUT_ROW = row
         ctx.currentVars = null
         val columns = exprs.map(_.gen(ctx))
    +
    +    // The input RDD can either return (all) ColumnarBatches or InternalRows. We determine this
    +    // by looking at the first value of the RDD and then calling the function which will process
    +    // the remaining. It is faster to return batches.
    +    // TODO: The abstractions between this class and SqlNewHadoopRDD makes it difficult to know
    +    // here which path to use. Fix this.
    +
    +    val columnarBatchClz = "org.apache.spark.sql.execution.vectorized.ColumnarBatch"
    +
    +    val scanBatches = ctx.freshName("processBatches")
    +    ctx.addNewFunction(scanBatches,
    +      s"""
    +      | private void $scanBatches($columnarBatchClz batch) throws java.io.IOException {
    +      |  while (true) {
    +      |     int numRows = batch.numRows();
    +      |     $numOutputRows.add(numRows);
    +      |     for (int i = 0; i < numRows; i++) {
    +      |       InternalRow $row = batch.getRow(i);
    +      |       ${columns.map(_.code).mkString("\n").trim}
    +      |       ${consume(ctx, columns).trim}
    --- End diff --
    
    This is going to increase the number of buffered rows, which may blow up
    when joining.
    
    I think we should break out of the loop here once we have enough rows.
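    
    Something like this, as an untested sketch: it assumes the generated class
    can hold a cursor field and that a shouldStop() hook (like the one
    whole-stage codegen uses) is visible here, neither of which this PR
    currently emits. Keeping the row index in a field lets the function return
    once the consumer has buffered enough rows and resume from the same row on
    the next call.
    
        val scanBatches = ctx.freshName("processBatches")
        // Hypothetical early-exit variant of the generated function: the row
        // cursor lives in a field, so returning does not lose our position.
        ctx.addNewFunction(scanBatches,
          s"""
          | private int batchIdx = 0;
          | private void $scanBatches($columnarBatchClz batch) throws java.io.IOException {
          |   int numRows = batch.numRows();
          |   while (batchIdx < numRows) {
          |     InternalRow $row = batch.getRow(batchIdx++);
          |     $numOutputRows.add(1);
          |     ${columns.map(_.code).mkString("\n").trim}
          |     ${consume(ctx, columns).trim}
          |     if (shouldStop()) return;  // enough rows buffered; resume from batchIdx next call
          |   }
          |   batchIdx = 0;                // batch fully consumed; reset for the next one
          | }
          """.stripMargin)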


