barunkumaracharya opened a new issue, #37994:
URL: https://github.com/apache/beam/issues/37994

   ### What happened?
   
   Scenario 1: Use Managed Iceberg IO to write data to Iceberg tables with the Spark runner, using Spark StorageLevel MEMORY_ONLY.
   This works perfectly fine.
   
   Scenario 2: Use Managed Iceberg IO to write data to Iceberg tables with the Spark runner, using Spark StorageLevel MEMORY_ONLY_SER or MEMORY_AND_DISK.
   
   This does not work on the Spark runner: I see a NullPointerException while writing to Iceberg tables. If I switch the sink to an Avro sink or a file sink, the job works fine, but with Managed Iceberg IO it fails with the NullPointerException.
   From my analysis, there is a Beam tuple tag, WRITTEN_ROWS_TAG, whose coder Spark's Java serializer / Kryo serializer is not able to find. A coder is set on the tag's output when it is created in [WriteUngroupedRowsToFiles.java](https://github.com/apache/beam/blob/master/sdks/java/io/iceberg/src/main/java/org/apache/beam/sdk/io/iceberg/WriteUngroupedRowsToFiles.java), but that coder does not get propagated further: it is never added to the coder map at line 486 of [TransformTranslator.java](https://github.com/apache/beam/blob/master/runners/spark/src/main/java/org/apache/beam/runners/spark/translation/TransformTranslator.java), because at line 472 of the same file the **skipUnconsumedOutputs** function skips output tags that have no downstream consumer.
   
   
   **IMPACT** - I cannot scale my Spark job, because Managed Iceberg IO only works with the MEMORY_ONLY storage level. To process large amounts of data I need to be able to compress data and spill it to disk, i.e., use the MEMORY_AND_DISK or MEMORY_ONLY_SER storage levels, which this bug prevents. Requesting support from the Beam community here.
   
   Exception that I see:
   
   ```
   java.lang.NullPointerException
       at org.apache.beam.runners.spark.translation.ValueAndCoderLazySerializable.writeCommon(ValueAndCoderLazySerializable.java:98)
       at org.apache.beam.runners.spark.translation.ValueAndCoderLazySerializable.writeObject(ValueAndCoderLazySerializable.java:134)
       at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
       at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
       at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
       at java.base/java.lang.reflect.Method.invoke(Method.java:566)
       at java.base/java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:1027)
       at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1497)
       at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433)
       at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179)
       at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553)
       at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510)
       at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433)
       at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179)
       at java.base/java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:349)
       at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
       at org.apache.spark.storage.memory.SerializedValuesHolder.storeValue(MemoryStore.scala:729)
       at org.apache.spark.storage.memory.MemoryStore.putIterator(MemoryStore.scala:224)
       at org.apache.spark.storage.memory.MemoryStore.putIteratorAsBytes(MemoryStore.scala:352)
       at org.apache.spark.storage.BlockManager.$anonfun$doPutIterator$1(BlockManager.scala:1618)
       at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$doPut(BlockManager.scala:1528)
       at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1592)
       at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:1389)
       at org.apache.spark.storage.BlockManager.getOrElseUpdateRDDBlock(BlockManager.scala:1343)
       at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:376)
       at org.apache.spark.rdd.RDD.iterator(RDD.scala:326)
       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
       at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
       at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
       at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
       at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
       at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:104)
       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:54)
       at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
       at org.apache.spark.scheduler.Task.run(Task.scala:141)
       at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
       at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
       at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
       at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
       at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
       at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
       at java.base/java.lang.Thread.run(Thread.java:834)
   ```
   
   ### Issue Priority
   
   Priority: 1 (data loss / total loss of function)
   
   ### Issue Components
   
   - [ ] Component: Python SDK
   - [x] Component: Java SDK
   - [ ] Component: Go SDK
   - [ ] Component: Typescript SDK
   - [ ] Component: IO connector
   - [ ] Component: Beam YAML
   - [ ] Component: Beam examples
   - [ ] Component: Beam playground
   - [ ] Component: Beam katas
   - [ ] Component: Website
   - [ ] Component: Infrastructure
   - [x] Component: Spark Runner
   - [ ] Component: Flink Runner
   - [ ] Component: Samza Runner
   - [ ] Component: Twister2 Runner
   - [ ] Component: Hazelcast Jet Runner
   - [ ] Component: Google Cloud Dataflow Runner

