Github user viirya commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15821#discussion_r108581072
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
    @@ -2828,4 +2839,16 @@ class Dataset[T] private[sql](
           Dataset(sparkSession, logicalPlan)
         }
       }
    +
    +  /** Convert to an RDD of ArrowPayload byte arrays */
    +  private[sql] def toArrowPayloadBytes(): RDD[Array[Byte]] = {
    +    val schema_captured = this.schema
    +    queryExecution.toRdd.mapPartitionsInternal { iter =>
    +      val converter = new ArrowConverters
     +      val payload = converter.interalRowIterToPayload(iter, schema_captured)
     +      val payloadBytes = ArrowConverters.payloadToByteArray(payload, schema_captured)
    --- End diff ---
    
    This works now by consuming all rows from the iterator at once and constructing a single `ArrowPayload` for them. That could hurt memory usage if a partition holds a large number of rows.
    
    I think a better approach might be to construct an `ArrowPayload` per group of rows rather than for all rows at once, e.g. as in the sketch below.
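    
    A minimal sketch of what I mean, reusing the converter helpers quoted in the diff above (`interalRowIterToPayload`, `payloadToByteArray`) and assuming they can be called once per group; `maxRecordsPerBatch` is a hypothetical parameter, not an existing config:
    
    ```scala
    // Sketch only, not the actual patch: build one ArrowPayload per bounded
    // group of rows instead of one payload for the entire partition.
    private[sql] def toArrowPayloadBytes(maxRecordsPerBatch: Int = 10000): RDD[Array[Byte]] = {
      val schemaCaptured = this.schema
      queryExecution.toRdd.mapPartitionsInternal { iter =>
        val converter = new ArrowConverters
        // Iterator.grouped materializes at most `maxRecordsPerBatch` rows at a time,
        // so peak memory per payload stays bounded regardless of partition size.
        iter.grouped(maxRecordsPerBatch).map { rows =>
          val payload = converter.interalRowIterToPayload(rows.iterator, schemaCaptured)
          ArrowConverters.payloadToByteArray(payload, schemaCaptured)
        }
      }
    }
    ```
    
    The trade-off would be more, smaller payloads per partition, but the driver-side consumer should be able to concatenate them.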

