This is during shutdown, right? Looks OK to me since connections are being
closed. We could have handled this more gracefully, but the logs look
harmless.
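If the noise bothers you, one option (my suggestion, not a required fix) is to raise the log level for that logger in conf/log4j.properties. This is a minimal sketch assuming the stock log4j.properties template shipped with Spark; the logger name below matches the `network.ConnectionManager` prefix in the messages above:

```properties
# Suppress the benign shutdown-time ERROR/WARN messages from
# ConnectionManager; only FATAL-level events will still be logged.
log4j.logger.org.apache.spark.network.ConnectionManager=FATAL
```

This silences the "Corresponding SendingConnection ... not found" and "All connections not cleaned up" lines without affecting other Spark logging.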

On Wednesday, September 17, 2014, wyphao.2007 <wyphao.2...@163.com> wrote:

> Hi, when I run a Spark job on YARN and the job finishes successfully, I
> still find some error logs in the logfile, as follows (the text in red):
>
> 14/09/17 18:25:03 INFO ui.SparkUI: Stopped Spark web UI at
> http://sparkserver2.cn:63937
> 14/09/17 18:25:03 INFO scheduler.DAGScheduler: Stopping DAGScheduler
> 14/09/17 18:25:03 INFO cluster.YarnClusterSchedulerBackend: Shutting down
> all executors
> 14/09/17 18:25:03 INFO cluster.YarnClusterSchedulerBackend: Asking each
> executor to shut down
> 14/09/17 18:25:03 INFO network.ConnectionManager: Removing
> SendingConnection to ConnectionManagerId(sparkserver2.cn,9072)
> 14/09/17 18:25:03 INFO network.ConnectionManager: Removing
> ReceivingConnection to ConnectionManagerId(sparkserver2.cn,9072)
> 14/09/17 18:25:03 ERROR network.ConnectionManager: Corresponding
> SendingConnection to ConnectionManagerId(sparkserver2.cn,9072) not found
> 14/09/17 18:25:03 INFO network.ConnectionManager: Removing
> ReceivingConnection to ConnectionManagerId(sparkserver2.cn,14474)
> 14/09/17 18:25:03 INFO network.ConnectionManager: Removing
> SendingConnection to ConnectionManagerId(sparkserver2.cn,14474)
> 14/09/17 18:25:03 INFO network.ConnectionManager: Removing
> SendingConnection to ConnectionManagerId(sparkserver2.cn,14474)
> 14/09/17 18:25:04 INFO spark.MapOutputTrackerMasterActor:
> MapOutputTrackerActor stopped!
> 14/09/17 18:25:04 INFO network.ConnectionManager: Selector thread was
> interrupted!
> 14/09/17 18:25:04 INFO network.ConnectionManager: Removing
> SendingConnection to ConnectionManagerId(sparkserver2.cn,9072)
> 14/09/17 18:25:04 INFO network.ConnectionManager: Removing
> SendingConnection to ConnectionManagerId(sparkserver2.cn,14474)
> 14/09/17 18:25:04 INFO network.ConnectionManager: Removing
> ReceivingConnection to ConnectionManagerId(sparkserver2.cn,9072)
> 14/09/17 18:25:04 ERROR network.ConnectionManager: Corresponding
> SendingConnection to ConnectionManagerId(sparkserver2.cn,9072) not found
> 14/09/17 18:25:04 INFO network.ConnectionManager: Removing
> ReceivingConnection to ConnectionManagerId(sparkserver2.cn,14474)
> 14/09/17 18:25:04 ERROR network.ConnectionManager: Corresponding
> SendingConnection to ConnectionManagerId(sparkserver2.cn,14474) not found
> 14/09/17 18:25:04 WARN network.ConnectionManager: All connections not
> cleaned up
> 14/09/17 18:25:04 INFO network.ConnectionManager: ConnectionManager stopped
> 14/09/17 18:25:04 INFO storage.MemoryStore: MemoryStore cleared
> 14/09/17 18:25:04 INFO storage.BlockManager: BlockManager stopped
> 14/09/17 18:25:04 INFO storage.BlockManagerMaster: BlockManagerMaster
> stopped
> 14/09/17 18:25:04 INFO spark.SparkContext: Successfully stopped
> SparkContext
> 14/09/17 18:25:04 INFO yarn.ApplicationMaster: Unregistering
> ApplicationMaster with SUCCEEDED
> 14/09/17 18:25:04 INFO remote.RemoteActorRefProvider$RemotingTerminator:
> Shutting down remote daemon.
> 14/09/17 18:25:04 INFO remote.RemoteActorRefProvider$RemotingTerminator:
> Remote daemon shut down; proceeding with flushing remote transports.
> 14/09/17 18:25:04 INFO impl.AMRMClientImpl: Waiting for application to be
> successfully unregistered.
> 14/09/17 18:25:04 INFO Remoting: Remoting shut down
> 14/09/17 18:25:04 INFO remote.RemoteActorRefProvider$RemotingTerminator:
> Remoting shut down.
>
> What is the cause of these errors? My Spark version is 1.1.0 and my Hadoop
> version is 2.2.0.
> Thank you.
>
