[ https://issues.apache.org/jira/browse/HADOOP-17847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17407325#comment-17407325 ]

Li Rong commented on HADOOP-17847:
----------------------------------

Aha, this log is what I eventually got. It's a different error: it seems the
.inprogress file is not there. I wonder whether, on Kubernetes, this needs to
be a persistent volume?
{code:java}
21/08/31 12:52:01 INFO SingleEventLogFileWriter: Logging events to s3a://spark-event-logs-dev-us-west-2/logs/spark-application-1630414319279.inprogress
21/08/31 12:52:01 INFO SingleEventLogFileWriter: Logging events to s3a://spark-event-logs-dev-us-west-2/logs/spark-application-1630414319279.inprogress
21/08/31 12:52:01 WARN S3ABlockOutputStream: Application invoked the Syncable API against stream writing to logs/spark-application-1630414319279.inprogress. This is unsupported
21/08/31 12:52:05 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 512, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
21/08/31 12:52:06 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.144.149.180:52974) with ID 1
21/08/31 12:52:06 INFO KubernetesClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
21/08/31 12:52:06 INFO BlockManagerMasterEndpoint: Registering block manager 10.144.149.180:41899 with 117.0 MiB RAM, BlockManagerId(1, 10.144.149.180, 41899, None)
[Row(Roll_Num='008005', Name='Yogesh', Percentage=94, Department='MCA'), Row(Roll_Num='007014', Name='Ananya', Percentage=98, Department='MCA')]
21/08/31 12:58:11 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)
21/08/31 12:58:12 WARN S3AInstrumentation: Closing output stream statistics while data is still marked as pending upload in OutputStreamStatistics{
  counters=((object_multipart_aborted.failures=0) (multipart_upload_completed=0) (stream_write_total_time=0) (action_executor_acquired.failures=0) (multipart_upload_completed.failures=0) (stream_write_total_data=0) (op_abort=0) (stream_write_exceptions_completing_upload=0) (object_multipart_aborted=0) (stream_write_block_uploads=1) (stream_write_bytes=117142) (op_hflush=11) (stream_write_queue_duration=0) (stream_write_exceptions=0) (op_abort.failures=0) (action_executor_acquired=0) (op_hsync=0));
  gauges=((stream_write_block_uploads_data_pending=117142) (stream_write_block_uploads_pending=1));
  minimums=((multipart_upload_completed.min=-1) (object_multipart_aborted.failures.min=-1) (multipart_upload_completed.failures.min=-1) (op_abort.failures.min=-1) (op_abort.min=-1) (action_executor_acquired.failures.min=-1) (action_executor_acquired.min=-1) (object_multipart_aborted.min=-1));
  maximums=((object_multipart_aborted.max=-1) (object_multipart_aborted.failures.max=-1) (op_abort.max=-1) (multipart_upload_completed.failures.max=-1) (action_executor_acquired.failures.max=-1) (op_abort.failures.max=-1) (multipart_upload_completed.max=-1) (action_executor_acquired.max=-1));
  means=((op_abort.mean=(samples=0, sum=0, mean=0.0000)) (multipart_upload_completed.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_aborted.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_aborted.mean=(samples=0, sum=0, mean=0.0000)) (action_executor_acquired.mean=(samples=0, sum=0, mean=0.0000)) (action_executor_acquired.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_abort.failures.mean=(samples=0, sum=0, mean=0.0000)) (multipart_upload_completed.mean=(samples=0, sum=0, mean=0.0000)));,
  blocksActive=0, blockUploadsCompleted=0, blocksAllocated=1, blocksReleased=1, blocksActivelyAllocated=0, transferDuration=0 ms, totalUploadDuration=0 ms, effectiveBandwidth=0.0 bytes/s}
21/08/31 12:58:12 ERROR Utils: Uncaught exception in thread shutdown-hook-0
java.io.FileNotFoundException: No such file or directory: s3a://spark-event-logs-dev-us-west-2/logs/spark-application-1630414319279.inprogress
    at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3356)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3185)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.initiateRename(S3AFileSystem.java:1506)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.innerRename(S3AFileSystem.java:1608)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:1465)
    at org.apache.spark.deploy.history.EventLogFileWriter.renameFile(EventLogFileWriters.scala:141)
    at org.apache.spark.deploy.history.SingleEventLogFileWriter.stop(EventLogFileWriters.scala:238)
    at org.apache.spark.scheduler.EventLoggingListener.stop(EventLoggingListener.scala:246)
    at org.apache.spark.SparkContext.$anonfun$stop$17(SparkContext.scala:2002)
    at org.apache.spark.SparkContext.$anonfun$stop$17$adapted(SparkContext.scala:2002)
    at scala.Option.foreach(Option.scala:407)
    at org.apache.spark.SparkContext.$anonfun$stop$16(SparkContext.scala:2002)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1357)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:2002)
    at org.apache.spark.SparkContext.$anonfun$new$35(SparkContext.scala:638)
    at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:214)
    at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$2(ShutdownHookManager.scala:188)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1934)
    at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$1(ShutdownHookManager.scala:188)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at scala.util.Try$.apply(Try.scala:213)
    at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
    at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
{code}
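
For what it's worth, the FileNotFoundException above is consistent with how S3A materializes objects: data written through an S3ABlockOutputStream is buffered and only becomes a visible S3 object once the stream is closed, so the hflush()/hsync() calls from Spark's event logger (the "Syncable API ... unsupported" warning) never make the .inprogress file appear at the path. A minimal sketch of that visibility behaviour, assuming S3A credentials are configured (the bucket and key below are hypothetical):
{code:java}
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3AVisibilityDemo {
  public static void main(String[] args) throws Exception {
    // Hypothetical bucket and key, for illustration only.
    Path path = new Path("s3a://some-test-bucket/logs/app.inprogress");
    FileSystem fs = path.getFileSystem(new Configuration());

    FSDataOutputStream out = fs.create(path, true);
    out.write("event data".getBytes(StandardCharsets.UTF_8));
    out.hflush(); // on S3A this only logs the "Syncable API ... unsupported" warning

    // Nothing has been uploaded yet, so the object is not visible:
    System.out.println("exists before close: " + fs.exists(path)); // false on S3A

    out.close(); // the actual upload happens here
    System.out.println("exists after close: " + fs.exists(path)); // true
  }
}
{code}
If the shutdown hook reaches EventLogFileWriter.renameFile() before that close-time upload has completed (or after it failed, as the "pending upload" statistics suggest), the rename's getFileStatus() on the .inprogress path fails exactly as in the trace above.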

> S3AInstrumentation Closing output stream statistics while data is still 
> marked as pending upload in OutputStreamStatistics
> --------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-17847
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17847
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.2.1
>         Environment: hadoop: 3.2.1
> spark: 3.0.2
> k8s server version: 1.18
> aws.java.sdk.bundle.version: 1.11.1033
>            Reporter: Li Rong
>            Priority: Major
>         Attachments: logs.txt
>
>
> When using the Hadoop S3A filesystem to upload Spark event logs, the log 
> data was queued up but not uploaded before the process shut down:
> {code:java}
> 21/08/13 12:22:39 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client 
> has been closed (this is expected if the application is shutting down.)
> 21/08/13 12:22:39 WARN S3AInstrumentation: Closing output stream statistics 
> while data is still marked as pending upload in 
> OutputStreamStatistics{blocksSubmitted=1, blocksInQueue=1, blocksActive=0, 
> blockUploadsCompleted=0, blockUploadsFailed=0, bytesPendingUpload=106716, 
> bytesUploaded=0, blocksAllocated=1, blocksReleased=1, 
> blocksActivelyAllocated=0, exceptionsInMultipartFinalize=0, 
> transferDuration=0 ms, queueDuration=0 ms, averageQueueTime=0 ms, 
> totalUploadDuration=0 ms, effectiveBandwidth=0.0 bytes/s}{code}
> For details, see the attached logs.
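
Regarding the persistent-volume question above, one workaround sketch (an assumption to verify, not a confirmed fix): keep the in-progress event log on a filesystem with real flush and rename semantics, such as a PVC-backed path mounted into the driver pod, rather than writing it to S3A directly. Spark's standard Kubernetes volume settings can mount such a claim; the volume name, claim name, and paths below are placeholders:
{code}
# spark-defaults.conf sketch ("spark-events", the claim name, and the
# mount path are placeholders to adapt)
spark.eventLog.enabled  true
spark.eventLog.dir      file:///mnt/spark-events
spark.kubernetes.driver.volumes.persistentVolumeClaim.spark-events.mount.path        /mnt/spark-events
spark.kubernetes.driver.volumes.persistentVolumeClaim.spark-events.options.claimName spark-events-pvc
{code}
With the log on a local mount, hflush() works and the .inprogress rename at shutdown is an ordinary filesystem rename; the finished log can then be shipped to S3 out of band if it needs to live there.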



