Hans-Raintree opened a new issue, #9962:
URL: https://github.com/apache/hudi/issues/9962
**Describe the problem you faced**
Getting several errors after upgrading to 0.14.0 when running jobs. The data appears to be correct (though I haven't checked all of the several hundred tables), but there are occasional intermittent gaps where a job that should take a minute or a few minutes takes 30+ minutes with nothing running in the Spark UI.
**To Reproduce**
Steps to reproduce the behavior:
1. Submit a Spark job on EMR Serverless with
spark.jars=<s3path>hudi-spark3.4-bundle_2.12-0.14.0.jar,<s3path>hudi-aws-bundle-0.14.0.jar
2. Write Hudi data.
3. Example write_options:
```
write_options = {
    'hoodie.table.name': transformation['target_table'],
    'hoodie.datasource.write.table.type': 'COPY_ON_WRITE',
    'hoodie.datasource.write.recordkey.field': ','.join(transformation['primary_key']),
    'hoodie.datasource.write.precombine.field': 'bi_ts',
    'hoodie.datasource.write.hive_style_partitioning': 'true',
    'hoodie.datasource.write.keygenerator.class': 'org.apache.hudi.keygen.ComplexKeyGenerator',
    'hoodie.datasource.write.partitionpath.field': 'customer_partition,db_partition',
    'hoodie.datasource.write.payload.class': 'org.apache.hudi.common.model.PartialUpdateAvroPayload',
    'hoodie.datasource.meta.sync.enable': 'true',
    'hoodie.datasource.hive_sync.ignore_exceptions': 'true',
    'hoodie.datasource.hive_sync.mode': 'hms',
    'hoodie.datasource.hive_sync.enable': 'true',
    'hoodie.datasource.hive_sync.use_jdbc': 'false',
    'hoodie.datasource.hive_sync.support_timestamp': 'true',
    'hoodie.datasource.hive_sync.database': 'default',
    'hoodie.datasource.hive_sync.partition_fields': 'customer_partition,db_partition',
    'hoodie.datasource.hive_sync.partition_extractor_class': 'org.apache.hudi.hive.MultiPartKeysValueExtractor',
    'hoodie.datasource.hive_sync.table': transformation['target_table'],
    'hoodie.table.services.enabled': 'false',
    'hoodie.meta.sync.client.tool.class': 'org.apache.hudi.aws.sync.AwsGlueCatalogSyncTool',
    'hoodie.table.cdc.enabled': 'true',
    'hoodie.table.cdc.supplemental.logging.mode': 'data_before_after',
    'hoodie.write.concurrency.mode': 'optimistic_concurrency_control',
    'hoodie.write.lock.provider': 'org.apache.hudi.aws.transaction.lock.DynamoDBBasedLockProvider',
    'hoodie.write.lock.dynamodb.table': 'concurrency_control_table',
    'hoodie.write.lock.dynamodb.region': 'us-east-1',
    'hoodie.write.lock.dynamodb.partition_key': transformation['target_table'],
    'hoodie.write.lock.dynamodb.endpoint_url': 'dynamodb.us-east-1.amazonaws.com',
    'hoodie.write.lock.dynamodb.billing_mode': 'PAY_PER_REQUEST',
    'hoodie.write.lock.wait_time_ms_between_retry': '5000',
    'hoodie.write.lock.wait_time_ms': '300000',
    'hoodie.write.lock.num_retries': '75',
    'hoodie.write.lock.client.wait_time_ms_between_retry': '25000',
    'hoodie.write.lock.client.num_retries': '250',
}
```
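For context, the options are applied with a standard PySpark datasource write. The `transformation` values below are hypothetical placeholders for illustration, not the actual job's data:

```python
# Hypothetical placeholders for the `transformation` dict referenced in
# the write_options above; the real job uses several hundred tables.
transformation = {
    'target_table': 'example_table',      # assumed table name
    'primary_key': ['id', 'source_id'],   # assumed key columns
}

# A trimmed write_options built the same way as in the full snippet.
write_options = {
    'hoodie.table.name': transformation['target_table'],
    'hoodie.datasource.write.recordkey.field': ','.join(transformation['primary_key']),
}

# The job then writes roughly like this (sketch; requires a SparkSession
# and a DataFrame `df`, so it is shown commented out here):
# df.write.format('hudi').options(**write_options).mode('append').save(target_path)
print(write_options['hoodie.datasource.write.recordkey.field'])
```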
**Expected behavior**
No errors, no 30 minute gaps in the job.
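For what it's worth, the lock retry settings in the reproduce step permit long waits before Hudi gives up, which may or may not be related to the gaps. A rough upper-bound calculation (a sketch; Hudi's actual retry loop may apply additional caps such as `hoodie.write.lock.wait_time_ms`):

```python
# Upper bounds implied by the DynamoDB lock settings above
# (a rough sketch; actual retry behavior may differ).
wait_between_retry_ms = 5000          # hoodie.write.lock.wait_time_ms_between_retry
num_retries = 75                      # hoodie.write.lock.num_retries
client_wait_between_retry_ms = 25000  # hoodie.write.lock.client.wait_time_ms_between_retry
client_num_retries = 250              # hoodie.write.lock.client.num_retries

lock_wait_min = wait_between_retry_ms * num_retries / 1000 / 60
client_wait_min = client_wait_between_retry_ms * client_num_retries / 1000 / 60

print(f"lock-level retries can wait up to ~{lock_wait_min:.2f} min")
print(f"client-level retries can wait up to ~{client_wait_min:.2f} min")
```

The client-level bound alone (~104 minutes) comfortably covers a silent 30+ minute stall while a lock is being retried.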
**Environment Description**
* Hudi version : 0.14.0
* Spark version : 3.4.0
* Hive version : 3.1.3
* Hadoop version : 3.3.3
* Storage (HDFS/S3/GCS..) : S3
* Running on Docker? (yes/no) : no
**Stacktrace**
Error 1:
```
23/10/31 21:06:29 ERROR NettyNioAsyncHttpClient: Unable to close channel
pools
java.util.concurrent.RejectedExecutionException: event executor terminated
at
io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:934)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:351)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:344)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:836)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.SingleThreadEventExecutor.execute0(SingleThreadEventExecutor.java:827)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:817)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
~[?:1.8.0_382]
at
io.netty.util.concurrent.AbstractEventExecutor.submit(AbstractEventExecutor.java:118)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
org.apache.hudi.software.amazon.awssdk.http.nio.netty.internal.utils.NettyUtils.doInEventLoop(NettyUtils.java:246)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
org.apache.hudi.software.amazon.awssdk.http.nio.netty.internal.http2.HttpOrHttp2ChannelPool.close(HttpOrHttp2ChannelPool.java:215)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
org.apache.hudi.software.amazon.awssdk.http.nio.netty.internal.ListenerInvokingChannelPool.close(ListenerInvokingChannelPool.java:132)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
org.apache.hudi.software.amazon.awssdk.http.nio.netty.internal.ReleaseOnceChannelPool.close(ReleaseOnceChannelPool.java:95)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
org.apache.hudi.software.amazon.awssdk.http.nio.netty.internal.HealthCheckedChannelPool.close(HealthCheckedChannelPool.java:150)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
org.apache.hudi.software.amazon.awssdk.http.nio.netty.internal.CancellableAcquireChannelPool.close(CancellableAcquireChannelPool.java:76)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
org.apache.hudi.software.amazon.awssdk.http.nio.netty.internal.SimpleChannelPoolAwareChannelPool.close(SimpleChannelPoolAwareChannelPool.java:57)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
org.apache.hudi.software.amazon.awssdk.http.nio.netty.internal.SdkChannelPoolMap.remove(SdkChannelPoolMap.java:56)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
java.util.concurrent.ConcurrentHashMap$KeySetView.forEach(ConcurrentHashMap.java:4647)
~[?:1.8.0_382]
at
org.apache.hudi.software.amazon.awssdk.http.nio.netty.internal.SdkChannelPoolMap.close(SdkChannelPoolMap.java:93)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
org.apache.hudi.software.amazon.awssdk.http.nio.netty.internal.AwaitCloseChannelPoolMap.close(AwaitCloseChannelPoolMap.java:164)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
org.apache.hudi.software.amazon.awssdk.http.nio.netty.internal.utils.NettyUtils.runAndLogError(NettyUtils.java:378)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
org.apache.hudi.software.amazon.awssdk.http.nio.netty.NettyNioAsyncHttpClient.close(NettyNioAsyncHttpClient.java:197)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
org.apache.hudi.software.amazon.awssdk.utils.IoUtils.closeQuietly(IoUtils.java:70)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
org.apache.hudi.software.amazon.awssdk.utils.IoUtils.closeIfCloseable(IoUtils.java:87)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
org.apache.hudi.software.amazon.awssdk.utils.AttributeMap.lambda$close$0(AttributeMap.java:87)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at java.util.HashMap$Values.forEach(HashMap.java:982) ~[?:1.8.0_382]
at
org.apache.hudi.software.amazon.awssdk.utils.AttributeMap.close(AttributeMap.java:87)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
org.apache.hudi.software.amazon.awssdk.core.client.config.SdkClientConfiguration.close(SdkClientConfiguration.java:79)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
org.apache.hudi.software.amazon.awssdk.core.internal.http.HttpClientDependencies.close(HttpClientDependencies.java:80)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
org.apache.hudi.software.amazon.awssdk.core.internal.http.AmazonAsyncHttpClient.close(AmazonAsyncHttpClient.java:70)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
org.apache.hudi.software.amazon.awssdk.core.internal.handler.BaseAsyncClientHandler.close(BaseAsyncClientHandler.java:253)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
org.apache.hudi.software.amazon.awssdk.services.glue.DefaultGlueAsyncClient.close(DefaultGlueAsyncClient.java:16597)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
org.apache.hudi.aws.sync.AWSGlueCatalogSyncClient.close(AWSGlueCatalogSyncClient.java:524)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at org.apache.hudi.hive.HiveSyncTool.close(HiveSyncTool.java:210)
~[hudi-spark3.4-bundle_2.12-0.14.0.jar:0.14.0]
at
org.apache.hudi.sync.common.util.SyncUtilHelpers.runHoodieMetaSync(SyncUtilHelpers.java:80)
~[hudi-spark3.4-bundle_2.12-0.14.0.jar:0.14.0]
at
org.apache.hudi.HoodieSparkSqlWriter$.$anonfun$metaSync$2(HoodieSparkSqlWriter.scala:993)
~[hudi-spark3.4-bundle_2.12-0.14.0.jar:0.14.0]
at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
~[scala-library-2.12.15.jar:?]
at
org.apache.hudi.HoodieSparkSqlWriter$.metaSync(HoodieSparkSqlWriter.scala:991)
~[hudi-spark3.4-bundle_2.12-0.14.0.jar:0.14.0]
at
org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:1089)
~[hudi-spark3.4-bundle_2.12-0.14.0.jar:0.14.0]
at
org.apache.hudi.HoodieSparkSqlWriter$.writeInternal(HoodieSparkSqlWriter.scala:441)
~[hudi-spark3.4-bundle_2.12-0.14.0.jar:0.14.0]
at
org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:132)
~[hudi-spark3.4-bundle_2.12-0.14.0.jar:0.14.0]
at
org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:150)
~[hudi-spark3.4-bundle_2.12-0.14.0.jar:0.14.0]
at
org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:47)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:104)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
~[spark-catalyst_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:250)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.execution.SQLExecution$.executeQuery$1(SQLExecution.scala:123)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$9(SQLExecution.scala:160)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
~[spark-catalyst_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:250)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$8(SQLExecution.scala:160)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:271)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:159)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:69)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:101)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:97)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:554)
~[spark-catalyst_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:107)
~[spark-catalyst_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:554)
~[spark-catalyst_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:32)
~[spark-catalyst_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
~[spark-catalyst_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
~[spark-catalyst_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
~[spark-catalyst_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
~[spark-catalyst_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:530)
~[spark-catalyst_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:97)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:84)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:82)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:142)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:856)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:387)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at
org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:360)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:239)
~[spark-sql_2.12-3.4.0-amzn-0.jar:3.4.0-amzn-0]
at sun.reflect.GeneratedMethodAccessor154.invoke(Unknown Source) ~[?:?]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
~[?:1.8.0_382]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_382]
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
~[py4j-0.10.9.7.jar:?]
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
~[py4j-0.10.9.7.jar:?]
at py4j.Gateway.invoke(Gateway.java:282) ~[py4j-0.10.9.7.jar:?]
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
~[py4j-0.10.9.7.jar:?]
at py4j.commands.CallCommand.execute(CallCommand.java:79)
~[py4j-0.10.9.7.jar:?]
at
py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
~[py4j-0.10.9.7.jar:?]
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
~[py4j-0.10.9.7.jar:?]
at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_382]
```
INFO/Exception?:
```
23/10/31 21:06:24 INFO AmazonDynamoDBLockClient: Heartbeat thread recieved
interrupt, exiting run() (possibly exiting thread)
java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method) ~[?:1.8.0_382]
at
com.amazonaws.services.dynamodbv2.AmazonDynamoDBLockClient.run(AmazonDynamoDBLockClient.java:1248)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_382]
```
Error 2:
```
23/10/31 20:43:02 ERROR rejectedExecution: Failed to submit a listener
notification task. Event loop shut down?
java.util.concurrent.RejectedExecutionException: event executor terminated
at
io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:934)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:351)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:344)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:836)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.SingleThreadEventExecutor.execute0(SingleThreadEventExecutor.java:827)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:817)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.DefaultPromise.safeExecute(DefaultPromise.java:862)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:500)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
org.apache.hudi.software.amazon.awssdk.http.nio.netty.internal.CancellableAcquireChannelPool.lambda$acquire$1(CancellableAcquireChannelPool.java:58)
~[hudi-aws-bundle-0.14.0.jar:0.14.0]
at
io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.DefaultPromise.access$200(DefaultPromise.java:35)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:503)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569)
~[netty-transport-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
~[netty-common-4.1.87.Final.jar:4.1.87.Final]
at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_382]
```
WARN:
```
23/10/31 20:30:29 WARN ApacheUtils: NoSuchMethodException was thrown when
disabling normalizeUri. This indicates you are using an old version (< 4.5.8)
of Apache http client. It is recommended to use http client version >= 4.5.9 to
avoid the breaking change introduced in apache client 4.5.7 and the latency in
exception handling. See https://github.com/aws/aws-sdk-java/issues/1919 for
more information
```
Although, judging from the Environment tab in the Spark UI, the http client
version appears to be 4.5.9.
WARN:
```
23/10/31 21:03:27 WARN DefaultEmrServerlessRMClient: Encountered errors when
releasing containers: [{ContainerGroupId:
f0c5c3b5-ee61-bab3-0465-2f7cb7b55a78,ContainerId:
a2c5c3ba-abb3-d260-16dc-6b88887346da,ErrorCode: INTERNAL_ERROR}]
```