[ https://issues.apache.org/jira/browse/HADOOP-17312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17216626#comment-17216626 ]

Steve Loughran edited comment on HADOOP-17312 at 10/19/20, 11:06 AM:
---------------------------------------------------------------------

{code}
20/10/18 01:13:01 ERROR TaskContextImpl: Error in TaskCompletionListener
org.apache.http.ConnectionClosedException: Premature end of Content-Length delimited message body (expected: 31439128; received: 11113005)
    at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178)
    at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:198)
    at org.apache.http.impl.io.ContentLengthInputStream.close(ContentLengthInputStream.java:101)
    at org.apache.http.conn.BasicManagedEntity.streamClosed(BasicManagedEntity.java:166)
    at org.apache.http.conn.EofSensorInputStream.checkClose(EofSensorInputStream.java:228)
    at org.apache.http.conn.EofSensorInputStream.close(EofSensorInputStream.java:172)
    at java.base/java.io.FilterInputStream.close(FilterInputStream.java:180)
    at java.base/java.io.FilterInputStream.close(FilterInputStream.java:180)
    at java.base/java.io.FilterInputStream.close(FilterInputStream.java:180)
    at java.base/java.io.FilterInputStream.close(FilterInputStream.java:180)
    at com.amazonaws.services.s3.model.S3ObjectInputStream.abort(S3ObjectInputStream.java:90)
    at org.apache.hadoop.fs.s3a.S3AInputStream.close(S3AInputStream.java:199)
    at java.base/java.io.FilterInputStream.close(FilterInputStream.java:180)
    at org.apache.hadoop.util.LineReader.close(LineReader.java:150)
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.close(LineRecordReader.java:231)
    at org.apache.spark.sql.execution.datasources.RecordReaderIterator.close(RecordReaderIterator.scala:62)
    at org.apache.spark.sql.execution.datasources.HadoopFileLinesReader.close(HadoopFileLinesReader.scala:73)
    at org.apache.spark.sql.execution.datasources.text.TextFileFormat$$anonfun$readToUnsafeMem$1$$anonfun$apply$1$$anonfun$apply$2.apply(TextFileFormat.scala:123)
    at org.apache.spark.sql.execution.datasources.text.TextFileFormat$$anonfun$readToUnsafeMem$1$$anonfun$apply$1$$anonfun$apply$2.apply(TextFileFormat.scala:123)
    at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
    at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:133)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
20/10/18 01:13:01 ERROR Utils: Uncaught exception in thread Executor task launch worker for task 9
java.lang.NullPointerException
    at org.apache.spark.scheduler.Task$$anonfun$run$1.apply$mcV$sp(Task.scala:144)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1340)
    at org.apache.spark.scheduler.Task.run(Task.scala:142)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
20/10/18 01:13:01 INFO ShutdownHookManager: Deleting directory /private/var/folders/_c/gf1xl24d2y7f69vdqjthq4p40000gn/T/spark-2241418f-6797-4d06-85bb-6577b42d5d86/pyspark-58a27a10-8221-489a-b7a8-b04e29e8db60
{code}



was (Author: [email protected]):
{code}

    at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178)
    at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:198)
    at org.apache.http.impl.io.ContentLengthInputStream.close(ContentLengthInputStream.java:101)
    at org.apache.http.conn.BasicManagedEntity.streamClosed(BasicManagedEntity.java:166)
    at org.apache.http.conn.EofSensorInputStream.checkClose(EofSensorInputStream.java:228)
    at org.apache.http.conn.EofSensorInputStream.close(EofSensorInputStream.java:172)
    at java.base/java.io.FilterInputStream.close(FilterInputStream.java:180)
    at java.base/java.io.FilterInputStream.close(FilterInputStream.java:180)
    at java.base/java.io.FilterInputStream.close(FilterInputStream.java:180)
    at java.base/java.io.FilterInputStream.close(FilterInputStream.java:180)
    at com.amazonaws.services.s3.model.S3ObjectInputStream.abort(S3ObjectInputStream.java:90)
    at org.apache.hadoop.fs.s3a.S3AInputStream.close(S3AInputStream.java:199)
    at java.base/java.io.FilterInputStream.close(FilterInputStream.java:180)
    at org.apache.hadoop.util.LineReader.close(LineReader.java:150)

{code}


> S3AInputStream to be resilient to failures in abort(); translate AWS Exceptions
> -------------------------------------------------------------------------------
>
>                 Key: HADOOP-17312
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17312
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.3.0, 3.2.1
>            Reporter: Steve Loughran
>            Priority: Major
>
> Stack Overflow issue complaining about a ConnectionClosedException during 
> S3AInputStream close(); it appears to be triggered by an EOF exception in 
> abort(). That is: we are trying to close the stream, and it is failing 
> because the stream is already closed. Oops.
> https://stackoverflow.com/questions/64412010/pyspark-org-apache-http-connectionclosedexception-premature-end-of-content-leng
> Looking at the stack, we aren't translating AWS exceptions in abort() to 
> IOEs, which may be a factor; a sketch of one possible hardening follows below.
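
A minimal sketch of that hardening, assuming the fix takes the shape of catching whatever the SDK stream throws during abort() and downgrading it to a log. This is an illustration only, not the committed patch; {{abortQuietly}} is a hypothetical helper, not an existing S3AInputStream method.

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.amazonaws.services.s3.model.S3ObjectInputStream;

/**
 * Sketch only: abort an S3 object stream without letting SDK/HTTP-client
 * exceptions (such as ConnectionClosedException) escape into close().
 */
final class AbortQuietly {
  private static final Logger LOG = LoggerFactory.getLogger(AbortQuietly.class);

  private AbortQuietly() {
  }

  /**
   * Abort the wrapped stream; any failure is logged at debug level and
   * swallowed, as the underlying HTTP connection is being discarded anyway.
   * @param in the stream to abort; null is tolerated (hypothetical guard)
   */
  static void abortQuietly(S3ObjectInputStream in) {
    if (in == null) {
      return;
    }
    try {
      in.abort();
    } catch (Exception e) {
      // e.g. org.apache.http.ConnectionClosedException surfacing from the
      // pooled connection; suppressing it here stops close() failing on a
      // stream which is already dead.
      LOG.debug("Ignoring failure in S3ObjectInputStream.abort()", e);
    }
  }
}
{code}

The other option the issue title hints at is translation rather than suppression: catch the SDK exception and rethrow it as an IOException, so callers at least see a filesystem-level error instead of a raw httpclient one.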


