[ https://issues.apache.org/jira/browse/SPARK-22708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16281232#comment-16281232 ]
jinchen commented on SPARK-22708:
---------------------------------
This happens when I call:
{code:java}
df.createOrReplaceTempView
{code}
How can I catch this error and make the YARN exit code reflect the failure?
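One way to get a non-zero exit code is to catch the exception in the driver and rethrow it (or simply let it propagate out of main) instead of letting a framework layer swallow it; in yarn-cluster mode the ApplicationMaster should then unregister with FAILED rather than SUCCEEDED. A minimal sketch, assuming a top-level main under your control (the object name, view name, and job body are placeholders):
{code:scala}
import org.apache.spark.sql.SparkSession

object Job {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().getOrCreate()
    try {
      // Placeholder job body; in the real job this is where the
      // Elasticsearch read and createOrReplaceTempView would happen.
      val df = spark.range(1).toDF()
      df.createOrReplaceTempView("someView") // "someView" is a hypothetical name
      spark.sql("SELECT * FROM someView").show()
    } catch {
      case e: Exception =>
        System.err.println(s"Job failed: ${e.getMessage}")
        // Rethrowing lets the exception escape main; in yarn-cluster mode the
        // ApplicationMaster should then report a final status of FAILED and a
        // non-zero exit code instead of SUCCEEDED / exitCode 0.
        throw e
    } finally {
      spark.stop()
    }
  }
}
{code}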
> spark on yarn error but Final app status: SUCCEEDED, exitCode: 0
> ----------------------------------------------------------------
>
> Key: SPARK-22708
> URL: https://issues.apache.org/jira/browse/SPARK-22708
> Project: Spark
> Issue Type: Question
> Components: YARN
> Affects Versions: 2.2.0
> Reporter: jinchen
>
> I got this log:
> {code:java}
> 17/12/06 18:14:59 INFO state.StateStoreCoordinatorRef: Registered
> StateStoreCoordinator endpoint
> 17/12/06 18:15:01 INFO util.Version: Elasticsearch Hadoop v6.0.0 [8b59a8f82d]
> 17/12/06 18:15:02 INFO httpclient.HttpMethodDirector: I/O exception
> (java.net.ConnectException) caught when processing request: Connection
> refused (Connection refused)
> 17/12/06 18:15:02 INFO httpclient.HttpMethodDirector: Retrying request
> 17/12/06 18:15:02 INFO httpclient.HttpMethodDirector: I/O exception
> (java.net.ConnectException) caught when processing request: Connection
> refused (Connection refused)
> 17/12/06 18:15:02 INFO httpclient.HttpMethodDirector: Retrying request
> 17/12/06 18:15:02 INFO httpclient.HttpMethodDirector: I/O exception
> (java.net.ConnectException) caught when processing request: Connection
> refused (Connection refused)
> 17/12/06 18:15:02 INFO httpclient.HttpMethodDirector: Retrying request
> 17/12/06 18:15:02 ERROR rest.NetworkClient: Node [192.168.200.154:9200]
> failed (Connection refused (Connection refused)); no other nodes left -
> aborting...
> 17/12/06 18:15:02 ERROR dispatcher.StrategyDispatcher: 调用链路异常 (call chain exception)
> org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES
> version - typically this happens if the network/Elasticsearch cluster is not
> accessible or when targeting a WAN/Cloud instance without the proper setting
> 'es.nodes.wan.only'
> at
> org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:327)
> at
> org.elasticsearch.spark.sql.SchemaUtils$.discoverMappingAndGeoFields(SchemaUtils.scala:98)
> at
> org.elasticsearch.spark.sql.SchemaUtils$.discoverMapping(SchemaUtils.scala:91)
> at
> org.elasticsearch.spark.sql.ElasticsearchRelation.lazySchema$lzycompute(DefaultSource.scala:196)
> at
> org.elasticsearch.spark.sql.ElasticsearchRelation.lazySchema(DefaultSource.scala:196)
> at
> org.elasticsearch.spark.sql.ElasticsearchRelation$$anonfun$schema$1.apply(DefaultSource.scala:200)
> at
> org.elasticsearch.spark.sql.ElasticsearchRelation$$anonfun$schema$1.apply(DefaultSource.scala:200)
> at scala.Option.getOrElse(Option.scala:121)
> at
> org.elasticsearch.spark.sql.ElasticsearchRelation.schema(DefaultSource.scala:200)
> at
> org.apache.spark.sql.execution.datasources.LogicalRelation$.apply(LogicalRelation.scala:77)
> at
> org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:415)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:172)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:156)
> at
> streaming.core.compositor.spark.source.MultiSQLSourceCompositor$$anonfun$result$1.apply(MultiSQLSourceCompositor.scala:37)
> at
> streaming.core.compositor.spark.source.MultiSQLSourceCompositor$$anonfun$result$1.apply(MultiSQLSourceCompositor.scala:27)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at
> streaming.core.compositor.spark.source.MultiSQLSourceCompositor.result(MultiSQLSourceCompositor.scala:27)
> at
> streaming.core.strategy.SparkStreamingStrategy.result(SparkStreamingStrategy.scala:52)
> at
> serviceframework.dispatcher.StrategyDispatcher$$anonfun$dispatch$2.apply(StrategyDispatcher.scala:65)
> at
> serviceframework.dispatcher.StrategyDispatcher$$anonfun$dispatch$2.apply(StrategyDispatcher.scala:63)
> at scala.collection.immutable.List.flatMap(List.scala:327)
> at
> serviceframework.dispatcher.StrategyDispatcher.dispatch(StrategyDispatcher.scala:62)
> at
> streaming.core.strategy.platform.PlatformManager$$anonfun$run$3.apply(PlatformManager.scala:120)
> at
> streaming.core.strategy.platform.PlatformManager$$anonfun$run$3.apply(PlatformManager.scala:118)
> at
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
> at
> streaming.core.strategy.platform.PlatformManager.run(PlatformManager.scala:117)
> at streaming.core.StreamingApp$.main(StreamingApp.scala:14)
> at streaming.core.StreamingApp.main(StreamingApp.scala)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
> org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:635)
> Caused by: org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException:
> Connection error (check network and/or proxy settings)- all nodes failed;
> tried [[192.168.200.154:9200]]
> at
> org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:149)
> at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:466)
> at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:430)
> at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:434)
> at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:155)
> at
> org.elasticsearch.hadoop.rest.RestClient.remoteEsVersion(RestClient.java:660)
> at
> org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:320)
> ... 36 more
> 17/12/06 18:15:02 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED,
> exitCode: 0
> 17/12/06 18:15:02 INFO spark.SparkContext: Invoking stop() from shutdown hook
> 17/12/06 18:15:02 INFO ui.SparkUI: Stopped Spark web UI at
> http://192.168.13.150:45073
> 17/12/06 18:15:02 INFO yarn.YarnAllocator: Driver requested a total number of
> 0 executor(s).
> 17/12/06 18:15:02 INFO cluster.YarnClusterSchedulerBackend: Shutting down all
> executors
> 17/12/06 18:15:02 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint:
> Asking each executor to shut down
> 17/12/06 18:15:02 INFO cluster.SchedulerExtensionServices: Stopping
> SchedulerExtensionServices
> (serviceOption=None,
> services=List(),
> started=false)
> 17/12/06 18:15:02 INFO spark.MapOutputTrackerMasterEndpoint:
> MapOutputTrackerMasterEndpoint stopped!
> 17/12/06 18:15:02 INFO memory.MemoryStore: MemoryStore cleared
> 17/12/06 18:15:02 INFO storage.BlockManager: BlockManager stopped
> 17/12/06 18:15:02 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
> 17/12/06 18:15:02 INFO
> scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:
> OutputCommitCoordinator stopped!
> 17/12/06 18:15:02 INFO spark.SparkContext: Successfully stopped SparkContext
> 17/12/06 18:15:02 INFO yarn.ApplicationMaster: Unregistering
> ApplicationMaster with SUCCEEDED
> 17/12/06 18:15:02 INFO impl.AMRMClientImpl: Waiting for application to be
> successfully unregistered.
> 17/12/06 18:15:02 INFO yarn.ApplicationMaster: Deleting staging directory
> hdfs://nameservice1/user/spark2/.sparkStaging/application_1511515104748_13554
> 17/12/06 18:15:02 INFO util.ShutdownHookManager: Shutdown hook called
> {code}
> But the ApplicationMaster still reported:
> {code:java}
> 17/12/06 10:45:15 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED,
> exitCode: 0
> {code}
> Spark version: 2.2.0; YARN (Hadoop) version: 2.6
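As for the underlying connection failure, the elasticsearch-hadoop message itself points at network reachability and, for a WAN/Cloud cluster, the 'es.nodes.wan.only' setting. A minimal sketch of passing those options through the DataFrameReader, assuming the node address from the log (the index/type resource name is a placeholder):
{code:scala}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().getOrCreate()

// "my-index/my-type" is a placeholder resource; es.nodes/es.port must match the cluster.
val df = spark.read
  .format("org.elasticsearch.spark.sql")
  .option("es.nodes", "192.168.200.154")
  .option("es.port", "9200")
  .option("es.nodes.wan.only", "true") // only needed when the cluster sits behind a WAN/Cloud boundary
  .load("my-index/my-type")
{code}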