wForget opened a new issue, #9436:
URL: https://github.com/apache/incubator-gluten/issues/9436

   ### Description
   
   An OOM occurs while constructing the broadcast relation.
   
   Job call stack:
   ```
   org.apache.spark.rdd.RDD.collect(RDD.scala:1045)
   org.apache.gluten.backendsapi.velox.VeloxSparkPlanExecApi.createBroadcastRelation(VeloxSparkPlanExecApi.scala:620)
   org.apache.spark.sql.execution.ColumnarBroadcastExchangeExec.$anonfun$relationFuture$2(ColumnarBroadcastExchangeExec.scala:77)
   org.apache.gluten.utils.Arm$.withResource(Arm.scala:25)
   org.apache.gluten.metrics.GlutenTimeMetric$.millis(GlutenTimeMetric.scala:37)
   org.apache.spark.sql.execution.ColumnarBroadcastExchangeExec.$anonfun$relationFuture$1(ColumnarBroadcastExchangeExec.scala:65)
   ```
   
   Error:
   ```
   25/04/27 15:01:11 ERROR Executor: Exception in task 0.0 in stage 6.0 (TID 9)
   java.lang.OutOfMemoryError: Java heap space
        at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
        at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
        at org.apache.spark.serializer.SerializerHelper$.$anonfun$serializeToChunkedBuffer$1(SerializerHelper.scala:40)
        at org.apache.spark.serializer.SerializerHelper$.$anonfun$serializeToChunkedBuffer$1$adapted(SerializerHelper.scala:40)
        at org.apache.spark.serializer.SerializerHelper$$$Lambda$936/276501963.apply(Unknown Source)
        at org.apache.spark.util.io.ChunkedByteBufferOutputStream.allocateNewChunkIfNeeded(ChunkedByteBufferOutputStream.scala:87)
        at org.apache.spark.util.io.ChunkedByteBufferOutputStream.write(ChunkedByteBufferOutputStream.scala:75)
        at com.esotericsoftware.kryo.io.Output.flush(Output.java:185)
        at com.esotericsoftware.kryo.io.Output.require(Output.java:164)
        at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:251)
        at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:237)
        at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:49)
        at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:38)
        at com.esotericsoftware.kryo.Kryo.writeObjectOrNull(Kryo.java:629)
        at com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:86)
        at com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:508)
        at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:651)
        at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:361)
        at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:302)
        at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:651)
        at org.apache.spark.serializer.KryoSerializationStream.writeObject(KryoSerializer.scala:278)
        at org.apache.spark.serializer.SerializerHelper$.serializeToChunkedBuffer(SerializerHelper.scala:42)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:665)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
   ```
   
   The file being read is about 300 MB, but the native serialized data grows to 1059 MB. Kryo serialization then appears to make another copy of those bytes, which triggers the OOM (executor heap is only 2 GB). Vanilla Spark runs this workload normally because it compresses the serialized broadcast data before buffering it:
   
   https://github.com/apache/spark/blob/4c9c41e0b7c43618053408d34427ead2e05a2e23/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala#L382
   
   So I want to support compression for `ColumnarBatchSerializer`.
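As a self-contained sketch of the idea (not Gluten's actual implementation), the gain comes from wrapping the serialization output stream in a compression codec, the way vanilla Spark does at the line linked above. The JDK's `GZIPOutputStream` stands in here for Spark's configurable `CompressionCodec` (`spark.io.compression.codec`), and the `CompressDemo` class name is hypothetical:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.zip.GZIPOutputStream;

public class CompressDemo {
    // Returns {rawSize, compressedSize} for the given payload. The codec
    // wraps the output stream, so far fewer bytes land in the in-heap
    // chunk buffers than with uncompressed serialization.
    static int[] sizes(byte[] payload) throws IOException {
        ByteArrayOutputStream raw = new ByteArrayOutputStream();
        raw.write(payload);

        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(compressed)) {
            gz.write(payload);
        }
        return new int[] {raw.size(), compressed.size()};
    }

    public static void main(String[] args) throws IOException {
        // Serialized columnar batches are often highly compressible;
        // simulate that with 1 MiB of repetitive data.
        byte[] payload = new byte[1 << 20];
        Arrays.fill(payload, (byte) 42);
        int[] s = sizes(payload);
        System.out.println("raw=" + s[0] + " compressed=" + s[1]);
    }
}
```

In the real code path the codec's `compressedOutputStream` would wrap the stream feeding `ChunkedByteBufferOutputStream`, shrinking the heap footprint seen in the stack trace above.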
   
   ### Gluten version
   
   main branch


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]