[ 
https://issues.apache.org/jira/browse/HADOOP-17063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17125823#comment-17125823
 ] 

Steve Loughran commented on HADOOP-17063:
-----------------------------------------

Looking at the relevant code, we are scheduling the PUT operation into a 
semaphore-guarded bounded thread pool.

If it is deadlocking, it may be that something else has exhausted that thread 
pool and is blocking.
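
As a rough illustration of that failure mode -this is a minimal sketch, not 
the actual S3A code, and all names and sizes are made up- a semaphore-guarded 
bounded pool parks the submitting thread, with no timeout, once the permits 
are gone:

{code}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;

// Sketch only: submit() blocks once all permits are taken, so a caller can
// look hung when other work has exhausted the pool.
public class BoundedSubmitter {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Semaphore permits = new Semaphore(8); // threads + queue slots

    public <T> Future<T> submit(Callable<T> task) throws InterruptedException {
        permits.acquire(); // parks here, with no timeout, if the pool is full
        return pool.submit(() -> {
            try {
                return task.call();
            } finally {
                permits.release(); // frees a slot when the task finishes
            }
        });
    }
}
{code}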

Also looking at the relevant code, we don't need to execute the PUT in a 
separate thread, because straight after submitting we wait for the result. 
It is different for big files where we do multipart uploads, as there we do 
want to upload data blocks while the worker threads generate more data.
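
For illustration, the single-part pattern amounts to the sketch below 
(names are hypothetical; only the submit-then-wait shape matters):

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative only: submitting to a pool and then blocking immediately on
// get() adds no parallelism over a direct call, but it does consume a pool
// slot and, with the no-arg get(), waits without any timeout.
public class SubmitThenWait {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        Future<String> upload = executor.submit(SubmitThenWait::doPut);
        String etag = upload.get(); // blocks at once; get(30, TimeUnit.SECONDS)
                                    // would at least bound the wait
        System.out.println(etag);
        executor.shutdown();
    }

    // Hypothetical stand-in for the single-part PUT.
    private static String doPut() {
        return "etag-123";
    }
}
{code}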

Try setting fs.s3a.max.total.tasks to something big; the default is 32. Try 
2x the number of workers you have, and update fs.s3a.connection.maximum to 
match. See 
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/bk_cloud-data-access/content/spark-perf.html
for a broader set of suggestions, but do know that ASF Spark doesn't ship 
with the s3a committer integration: your job commits will be slow and 
potentially at risk of failing intermittently with 404s on rename.
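
A sketch of that tuning; the values here are illustrative, size them to your 
workload:

{code}
import org.apache.hadoop.conf.Configuration;

// Illustrative values only. In Spark, set the same keys with a
// "spark.hadoop." prefix in the spark conf instead.
public class S3ATuning {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.setInt("fs.s3a.max.total.tasks", 128);    // ~2x your worker count
        conf.setInt("fs.s3a.connection.maximum", 128); // keep in step
    }
}
{code}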

If you do see the problem again, can you get a full jstack thread dump and 
attach it to this JIRA? Thanks.
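
(If running jstack inside the pod is awkward, a dump along these lines 
captures similar information; this is a sketch using only standard JDK 
calls, not any Hadoop API.)

{code}
import java.util.Map;

// Sketch: print a stack dump for every live thread from inside the JVM.
public class DumpThreads {
    public static void main(String[] args) {
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            System.out.println("\"" + e.getKey().getName()
                    + "\" state=" + e.getKey().getState());
            for (StackTraceElement frame : e.getValue()) {
                System.out.println("    at " + frame);
            }
        }
    }
}
{code}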

> S3ABlockOutputStream.putObject looks stuck and never timeout
> ------------------------------------------------------------
>
>                 Key: HADOOP-17063
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17063
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 3.2.1
>         Environment: hadoop 3.2.1
> spark 2.4.4
>  
>            Reporter: Dyno
>            Priority: Major
>
> {code}
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:523)
> com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:446)
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:365)
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
> org.apache.parquet.hadoop.util.HadoopPositionOutputStream.close(HadoopPositionOutputStream.java:64)
> org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:685)
> org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:122)
> org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:165)
> org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:57)
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:74)
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247)
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242)
> org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:248)
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
> org.apache.spark.scheduler.Task.run(Task.scala:123)
> org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
> org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:748)
> {code}
>  
> We are using Spark 2.4.4 with Hadoop 3.2.1 on Kubernetes/spark-operator, and 
> sometimes we see this hang with the stacktrace above. It looks like 
> putObject never returns, and we have to kill the executor to make the job 
> move forward. 
>  


