adutra commented on PR #1491:
URL: https://github.com/apache/polaris/pull/1491#issuecomment-2847989195
I am 80% convinced that the issue is with Avro. When I SSH into a running
`regtest` container and execute the test manually, I also get the same error.
Here is the full stack trace:
```
25/05/02 19:41:43 ERROR Utils: Aborting task
java.lang.NoClassDefFoundError: Could not initialize class org.apache.iceberg.GenericDataFile
    at org.apache.iceberg.DataFiles$Builder.build(DataFiles.java:335)
    at org.apache.iceberg.io.DataWriter.close(DataWriter.java:93)
    at org.apache.iceberg.io.RollingFileWriter.closeCurrentWriter(RollingFileWriter.java:122)
    at org.apache.iceberg.io.RollingFileWriter.close(RollingFileWriter.java:147)
    at org.apache.iceberg.io.RollingDataWriter.close(RollingDataWriter.java:32)
    at org.apache.iceberg.spark.source.SparkWrite$UnpartitionedDataWriter.close(SparkWrite.java:747)
    at org.apache.iceberg.spark.source.SparkWrite$UnpartitionedDataWriter.commit(SparkWrite.java:729)
    at org.apache.spark.sql.execution.datasources.v2.WritingSparkTask.$anonfun$run$5(WriteToDataSourceV2Exec.scala:475)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1397)
    at org.apache.spark.sql.execution.datasources.v2.WritingSparkTask.run(WriteToDataSourceV2Exec.scala:491)
    at org.apache.spark.sql.execution.datasources.v2.WritingSparkTask.run$(WriteToDataSourceV2Exec.scala:430)
    at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:496)
    at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:393)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
    at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
    at org.apache.spark.scheduler.Task.run(Task.scala:141)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
    at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
    at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: java.lang.ExceptionInInitializerError: Exception java.lang.NoSuchMethodError: 'org.apache.avro.LogicalTypes$TimestampNanos org.apache.avro.LogicalTypes.timestampNanos()' [in thread "Executor task launch worker for task 1.0 in stage 0.0 (TID 1)"]
    at org.apache.iceberg.avro.TypeToSchema.<clinit>(TypeToSchema.java:50)
    at org.apache.iceberg.avro.AvroSchemaUtil.convert(AvroSchemaUtil.java:76)
    at org.apache.iceberg.avro.AvroSchemaUtil.convert(AvroSchemaUtil.java:72)
    at org.apache.iceberg.PartitionData.partitionDataSchema(PartitionData.java:42)
    at org.apache.iceberg.PartitionData.<init>(PartitionData.java:71)
    at org.apache.iceberg.BaseFile$1.<init>(BaseFile.java:51)
    at org.apache.iceberg.BaseFile.<clinit>(BaseFile.java:50)
    ... 24 more
```
I just don't understand yet why the stack trace is not printed to the stdout file.
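
To double-check the Avro theory, a small sketch like the one below could be run inside the container on the same classpath as the Spark executor. The `AvroVersionCheck` class name and the reflective probe are just my own illustration, not anything from this PR; it only reports which Avro jar is actually loaded and whether `LogicalTypes.timestampNanos()` exists, which is the exact method the `NoSuchMethodError` above complains about.

```java
import org.apache.avro.LogicalTypes;

// Hypothetical helper: run it with the executor's classpath to see which Avro
// jar wins and whether it provides the logical type Iceberg's TypeToSchema needs.
public class AvroVersionCheck {
  public static void main(String[] args) {
    // The jar path shows which Avro version the classpath actually resolves to.
    System.out.println("Avro loaded from: "
        + LogicalTypes.class.getProtectionDomain().getCodeSource().getLocation());

    try {
      // The NoSuchMethodError in the trace points at this method, which only
      // exists in recent Avro releases; older Avro jars do not have it.
      LogicalTypes.class.getMethod("timestampNanos");
      System.out.println("LogicalTypes.timestampNanos() is present");
    } catch (NoSuchMethodException e) {
      System.out.println("LogicalTypes.timestampNanos() is missing -> older Avro on the classpath");
    }
  }
}
```

If the method comes back missing, the container is resolving an older Avro than the one the Iceberg runtime was built against, which would match the `ExceptionInInitializerError` in `BaseFile`.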