cxxiii opened a new issue, #3739:
URL: https://github.com/apache/amoro/issues/3739
### What happened?
Got `NoSuchFieldError: chunkSize` when selecting data from a table in the AMS
terminal. At runtime, the constructor of Arrow's
`io.netty.buffer.PooledByteBufAllocatorL$InnerAllocator` tried to read the
`chunkSize` field of Netty's pooled allocator, but that field does not exist in
the Netty version that was actually loaded. This typically points to a
dependency conflict: Apache Arrow, Apache Iceberg, and Spark pull in different
Netty versions, and the version that wins on the classpath is incompatible with
the one Arrow's allocator shim was compiled against.
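
A quick way to confirm the conflict is to check which jar each of the clashing classes is resolved from. This is a hypothetical diagnostic sketch (not part of Amoro); the class names are taken from the stack trace below:

```java
import java.security.CodeSource;

// Hypothetical helper: prints which jar each Netty-related class is
// resolved from, to confirm the version conflict on the AMS classpath.
public class NettyClasspathCheck {
    public static void main(String[] args) throws Exception {
        String[] names = {
            "io.netty.buffer.PooledByteBufAllocator",   // Netty proper
            "io.netty.buffer.PooledByteBufAllocatorL"   // Arrow's shim, same package
        };
        for (String name : names) {
            // initialize=false avoids running static/instance initializers,
            // so the check cannot re-trigger the NoSuchFieldError itself.
            Class<?> clazz = Class.forName(
                name, false, NettyClasspathCheck.class.getClassLoader());
            CodeSource src = clazz.getProtectionDomain().getCodeSource();
            System.out.println(name + " -> "
                + (src == null ? "bootstrap/unknown" : src.getLocation()));
        }
    }
}
```

If the two classes come from jars shipping mismatched Netty versions, that mismatch is the likely culprit.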
**Exception**
```
2025/08/20 15:27:00 org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (10.35.17.53 executor driver): java.lang.NoSuchFieldError: chunkSize
    at io.netty.buffer.PooledByteBufAllocatorL$InnerAllocator.<init>(PooledByteBufAllocatorL.java:153)
    at io.netty.buffer.PooledByteBufAllocatorL.<init>(PooledByteBufAllocatorL.java:49)
    at org.apache.arrow.memory.NettyAllocationManager.<clinit>(NettyAllocationManager.java:51)
    at org.apache.arrow.memory.DefaultAllocationManagerFactory.<clinit>(DefaultAllocationManagerFactory.java:26)
    at java.base/java.lang.Class.forName0(Native Method)
    at java.base/java.lang.Class.forName(Class.java:315)
    at org.apache.arrow.memory.DefaultAllocationManagerOption.getFactory(DefaultAllocationManagerOption.java:108)
    at org.apache.arrow.memory.DefaultAllocationManagerOption.getDefaultAllocationManagerFactory(DefaultAllocationManagerOption.java:98)
    at org.apache.arrow.memory.BaseAllocator$Config.getAllocationManagerFactory(BaseAllocator.java:733)
    at org.apache.arrow.memory.ImmutableConfig.access$801(ImmutableConfig.java:24)
    at org.apache.arrow.memory.ImmutableConfig$InitShim.getAllocationManagerFactory(ImmutableConfig.java:83)
    at org.apache.arrow.memory.ImmutableConfig.<init>(ImmutableConfig.java:47)
    at org.apache.arrow.memory.ImmutableConfig.<init>(ImmutableConfig.java:24)
    at org.apache.arrow.memory.ImmutableConfig$Builder.build(ImmutableConfig.java:485)
    at org.apache.arrow.memory.BaseAllocator.<clinit>(BaseAllocator.java:61)
    at org.apache.iceberg.arrow.ArrowAllocation.<clinit>(ArrowAllocation.java:25)
    at org.apache.iceberg.arrow.vectorized.VectorizedReaderBuilder.<init>(VectorizedReaderBuilder.java:60)
    at org.apache.iceberg.spark.data.vectorized.VectorizedSparkParquetReaders$ReaderBuilder.<init>(VectorizedSparkParquetReaders.java:115)
    at org.apache.iceberg.spark.data.vectorized.VectorizedSparkParquetReaders.buildReader(VectorizedSparkParquetReaders.java:61)
    at org.apache.iceberg.spark.source.BaseBatchReader.lambda$newParquetIterable$0(BaseBatchReader.java:90)
    at org.apache.iceberg.parquet.ReadConf.<init>(ReadConf.java:137)
    at org.apache.iceberg.parquet.VectorizedParquetReader.init(VectorizedParquetReader.java:90)
    at org.apache.iceberg.parquet.VectorizedParquetReader.iterator(VectorizedParquetReader.java:99)
    at org.apache.iceberg.spark.source.BatchDataReader.open(BatchDataReader.java:109)
    at org.apache.iceberg.spark.source.BatchDataReader.open(BatchDataReader.java:41)
    at org.apache.iceberg.spark.source.BaseReader.next(BaseReader.java:141)
    at org.apache.spark.sql.execution.datasources.v2.PartitionIterator.hasNext(DataSourceRDD.scala:119)
    at org.apache.spark.sql.execution.datasources.v2.MetricsIterator.hasNext(DataSourceRDD.scala:156)
    at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.$anonfun$hasNext$1(DataSourceRDD.scala:63)
    at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.$anonfun$hasNext$1$adapted(DataSourceRDD.scala:63)
    at scala.Option.exists(Option.scala:376)
    at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.hasNext(DataSourceRDD.scala:63)
    at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.advanceToNextIter(DataSourceRDD.scala:97)
    at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.hasNext(DataSourceRDD.scala:63)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(generated.java:29)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(generated.java:43)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:364)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:890)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:890)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:136)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
```
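
The trace shows Arrow's `DefaultAllocationManagerOption.getFactory` choosing the Netty-based allocation manager. Arrow selects that manager via the `arrow.allocation.manager.type` system property, so one possible mitigation (unverified against AMS, and it requires the `arrow-memory-unsafe` artifact on the classpath) is to force the `Unsafe` manager so the Netty-based one is never loaded:

```java
// Possible mitigation sketch, not verified in AMS: steer Arrow away from the
// Netty-based allocation manager. Requires arrow-memory-unsafe on the classpath.
// Equivalent JVM flag: -Darrow.allocation.manager.type=Unsafe
public class ArrowAllocationWorkaround {
    public static void main(String[] args) {
        // Must run before the first Arrow allocator is created; the factory
        // is resolved when org.apache.arrow.memory.BaseAllocator initializes.
        System.setProperty("arrow.allocation.manager.type", "Unsafe");
        // ... start the Spark/Iceberg query as usual ...
    }
}
```

This only sidesteps the symptom; the real fix is aligning (or shading) the Netty versions pulled in by Arrow, Iceberg, and Spark.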
### Affects Versions
master
### What table formats are you seeing the problem on?
Iceberg
### What engines are you seeing the problem on?
AMS
### How to reproduce
_No response_
### Relevant log output
```shell
```
### Anything else
_No response_
### Are you willing to submit a PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's Code of Conduct
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]