[ https://issues.apache.org/jira/browse/SPARK-1537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated SPARK-1537:
----------------------------------
    Comment: was deleted

(was: Full application log. The application hasn't actually stopped, which is interesting.
{code}
$ dist/bin/spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --properties-file ../clusterconfigs/clusters/devix/spark/spark-defaults.conf \
    --master yarn-client \
    --executor-memory 128m \
    --num-executors 1 \
    --executor-cores 1 \
    --driver-memory 128m \
    dist/lib/spark-examples-1.5.0-SNAPSHOT-hadoop2.6.0.jar 12
2015-06-09 17:01:59,596 [main] INFO  spark.SparkContext (Logging.scala:logInfo(59)) - Running Spark version 1.5.0-SNAPSHOT
2015-06-09 17:02:01,309 [sparkDriver-akka.actor.default-dispatcher-2] INFO  slf4j.Slf4jLogger (Slf4jLogger.scala:applyOrElse(80)) - Slf4jLogger started
2015-06-09 17:02:01,359 [sparkDriver-akka.actor.default-dispatcher-2] INFO  Remoting (Slf4jLogger.scala:apply$mcV$sp(74)) - Starting remoting
2015-06-09 17:02:01,542 [sparkDriver-akka.actor.default-dispatcher-2] INFO  Remoting (Slf4jLogger.scala:apply$mcV$sp(74)) - Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.1.86:51476]
2015-06-09 17:02:01,549 [main] INFO  util.Utils (Logging.scala:logInfo(59)) - Successfully started service 'sparkDriver' on port 51476.
2015-06-09 17:02:01,568 [main] INFO  spark.SparkEnv (Logging.scala:logInfo(59)) - Registering MapOutputTracker
2015-06-09 17:02:01,587 [main] INFO  spark.SparkEnv (Logging.scala:logInfo(59)) - Registering BlockManagerMaster
2015-06-09 17:02:01,831 [main] INFO  spark.HttpServer (Logging.scala:logInfo(59)) - Starting HTTP Server
2015-06-09 17:02:01,891 [main] INFO  util.Utils (Logging.scala:logInfo(59)) - Successfully started service 'HTTP file server' on port 51477.
2015-06-09 17:02:01,905 [main] INFO  spark.SparkEnv (Logging.scala:logInfo(59)) - Registering OutputCommitCoordinator
2015-06-09 17:02:02,038 [main] INFO  util.Utils (Logging.scala:logInfo(59)) - Successfully started service 'SparkUI' on port 4040.
2015-06-09 17:02:02,039 [main] INFO  ui.SparkUI (Logging.scala:logInfo(59)) - Started SparkUI at http://192.168.1.86:4040
2015-06-09 17:02:03,071 [main] INFO  spark.SparkContext (Logging.scala:logInfo(59)) - Added JAR file:/Users/stevel/Projects/Hortonworks/Projects/sparkwork/spark/dist/lib/spark-examples-1.5.0-SNAPSHOT-hadoop2.6.0.jar at http://192.168.1.86:51477/jars/spark-examples-1.5.0-SNAPSHOT-hadoop2.6.0.jar with timestamp 1433865723062
2015-06-09 17:02:03,691 [main] INFO  impl.TimelineClientImpl (TimelineClientImpl.java:serviceInit(285)) - Timeline service address: http://devix.cotham.uk:8188/ws/v1/timeline/
2015-06-09 17:02:03,808 [main] INFO  client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at devix.cotham.uk/192.168.1.134:8050
2015-06-09 17:02:04,577 [main] INFO  yarn.Client (Logging.scala:logInfo(59)) - Requesting a new application from cluster with 1 NodeManagers
2015-06-09 17:02:04,637 [main] INFO  yarn.Client (Logging.scala:logInfo(59)) - Verifying our application has not requested more than the maximum memory capability of the cluster (2048 MB per container)
2015-06-09 17:02:04,637 [main] INFO  yarn.Client (Logging.scala:logInfo(59)) - Will allocate AM container, with 896 MB memory including 384 MB overhead
2015-06-09 17:02:04,638 [main] INFO  yarn.Client (Logging.scala:logInfo(59)) - Setting up container launch context for our AM
2015-06-09 17:02:04,643 [main] INFO  yarn.Client (Logging.scala:logInfo(59)) - Preparing resources for our AM container
2015-06-09 17:02:05,096 [main] WARN  shortcircuit.DomainSocketFactory (DomainSocketFactory.java:<init>(116)) - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
2015-06-09 17:02:05,106 [main] DEBUG yarn.YarnSparkHadoopUtil (Logging.scala:logDebug(63)) - delegation token renewer is: rm/devix.cotham.uk@COTHAM
2015-06-09 17:02:05,107 [main] INFO  yarn.YarnSparkHadoopUtil (Logging.scala:logInfo(59)) - getting token for namenode: hdfs://devix.cotham.uk:8020/user/stevel/.sparkStaging/application_1433777033372_0005
2015-06-09 17:02:06,129 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) - HiveMetaStore configured in localmode
2015-06-09 17:02:06,130 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) - HBase Class not found: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.HBaseConfiguration
2015-06-09 17:02:06,225 [main] INFO  yarn.Client (Logging.scala:logInfo(59)) - Uploading resource file:/Users/stevel/Projects/Hortonworks/Projects/sparkwork/spark/dist/lib/spark-assembly-1.5.0-SNAPSHOT-hadoop2.6.0.jar -> hdfs://devix.cotham.uk:8020/user/stevel/.sparkStaging/application_1433777033372_0005/spark-assembly-1.5.0-SNAPSHOT-hadoop2.6.0.jar
2015-06-09 17:02:12,750 [main] INFO  yarn.Client (Logging.scala:logInfo(59)) - Uploading resource file:/private/var/folders/57/xyts0qt105z1f1k0twk6rd8m0000gq/T/spark-626c525c-d321-4368-8c2b-c4b85c4f0c26/__hadoop_conf__730606999913751870.zip -> hdfs://devix.cotham.uk:8020/user/stevel/.sparkStaging/application_1433777033372_0005/__hadoop_conf__730606999913751870.zip
2015-06-09 17:02:13,030 [main] INFO  yarn.Client (Logging.scala:logInfo(59)) - Setting up the launch environment for our AM container
2015-06-09 17:02:13,045 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) - Using the default MR application classpath: $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
2015-06-09 17:02:13,061 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) - ===============================================================================
2015-06-09 17:02:13,062 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) - Yarn AM launch context:
2015-06-09 17:02:13,062 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) -     user class: N/A
2015-06-09 17:02:13,063 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) -     env:
2015-06-09 17:02:13,063 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) -         CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__hadoop_conf__<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>/usr/hdp/current/hadoop-client/*<CPS>/usr/hdp/current/hadoop-client/lib/*<CPS>/usr/hdp/current/hadoop-hdfs-client/*<CPS>/usr/hdp/current/hadoop-hdfs-client/lib/*<CPS>/usr/hdp/current/hadoop-yarn-client/*<CPS>/usr/hdp/current/hadoop-yarn-client/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
2015-06-09 17:02:13,063 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) -         SPARK_YARN_CACHE_FILES_FILE_SIZES -> 163870522
2015-06-09 17:02:13,063 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) -         SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1433777033372_0005
2015-06-09 17:02:13,064 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) -         SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE
2015-06-09 17:02:13,064 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) -         SPARK_USER -> stevel
2015-06-09 17:02:13,064 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) -         SPARK_YARN_MODE -> true
2015-06-09 17:02:13,064 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) -         SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1433865732588
2015-06-09 17:02:13,064 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) -         SPARK_YARN_CACHE_FILES -> hdfs://devix.cotham.uk:8020/user/stevel/.sparkStaging/application_1433777033372_0005/spark-assembly-1.5.0-SNAPSHOT-hadoop2.6.0.jar#__spark__.jar
2015-06-09 17:02:13,064 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) -     resources:
2015-06-09 17:02:13,117 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) -         __spark__.jar -> resource { scheme: "hdfs" host: "devix.cotham.uk" port: 8020 file: "/user/stevel/.sparkStaging/application_1433777033372_0005/spark-assembly-1.5.0-SNAPSHOT-hadoop2.6.0.jar" } size: 163870522 timestamp: 1433865732588 type: FILE visibility: PRIVATE
2015-06-09 17:02:13,118 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) -         __hadoop_conf__ -> resource { scheme: "hdfs" host: "devix.cotham.uk" port: 8020 file: "/user/stevel/.sparkStaging/application_1433777033372_0005/__hadoop_conf__730606999913751870.zip" } size: 70998 timestamp: 1433865732995 type: ARCHIVE visibility: PRIVATE
2015-06-09 17:02:13,118 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) -     command:
2015-06-09 17:02:13,119 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) -         {{JAVA_HOME}}/bin/java -server -Xmx512m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.externalBlockStore.folderName=spark-0746af56-efd3-43d5-b4d4-2908fbcb7f38' '-Dspark.yarn.services=org.apache.spark.deploy.history.yarn.YarnHistoryService' '-Dspark.fileserver.uri=http://192.168.1.86:51477' '-Dspark.executor.memory=128m' '-Dspark.master=yarn-client' '-Dspark.executor.id=driver' '-Dspark.yarn.max_executor.failures=3' '-Dspark.jars=file:/Users/stevel/Projects/Hortonworks/Projects/sparkwork/spark/dist/lib/spark-examples-1.5.0-SNAPSHOT-hadoop2.6.0.jar' '-Dspark.driver.host=192.168.1.86' '-Dspark.executor.instances=1' '-Dspark.driver.memory=128m' '-Dspark.driver.port=51476' '-Dspark.cores.max=2' '-Dspark.app.name=Spark Pi' '-Dspark.history.provider=org.apache.spark.deploy.history.yarn.YarnHistoryProvider' '-Dspark.executor.cores=1' '-Dspark.driver.appUIAddress=http://192.168.1.86:4040' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.deploy.yarn.ExecutorLauncher --arg '192.168.1.86:51476' --executor-memory 128m --executor-cores 1 --num-executors 1 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
2015-06-09 17:02:13,119 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) - ===============================================================================
2015-06-09 17:02:13,121 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) - spark.yarn.maxAppAttempts is not set. Cluster's default value will be used.
2015-06-09 17:02:13,122 [main] INFO  yarn.Client (Logging.scala:logInfo(59)) - Submitting application 5 to ResourceManager
2015-06-09 17:02:18,561 [main] INFO  impl.YarnClientImpl (YarnClientImpl.java:submitApplication(265)) - Application submission is not finished, submitted application application_1433777033372_0005 is still in NEW
2015-06-09 17:02:20,620 [main] INFO  impl.YarnClientImpl (YarnClientImpl.java:submitApplication(251)) - Submitted application application_1433777033372_0005
2015-06-09 17:02:20,778 [main] INFO  impl.TimelineClientImpl (TimelineClientImpl.java:serviceInit(285)) - Timeline service address: http://devix.cotham.uk:8188/ws/v1/timeline/
2015-06-09 17:02:20,780 [main] DEBUG yarn.YarnHistoryService (Logging.scala:logDebug(63)) - Registering listener to spark context
2015-06-09 17:02:20,780 [HistoryEventHandlingThread] INFO  yarn.YarnHistoryService (Logging.scala:logInfo(59)) - Starting Dequeue service for AppId application_1433777033372_0005
2015-06-09 17:02:20,781 [main] INFO  yarn.YarnHistoryService (Logging.scala:logInfo(59)) - Service History Service in state History Service: STARTED endpoint=http://devix.cotham.uk:8188/ws/v1/timeline/; bonded to ATS=true; listening=true; batchSize=3; flush count=0; current queue size=0; total number queued=0, processed=0; post failures=0;
2015-06-09 17:02:21,802 [main] INFO  yarn.Client (Logging.scala:logInfo(59)) - Application report for application_1433777033372_0005 (state: FAILED)
2015-06-09 17:02:21,805 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) -
         client token: N/A
         diagnostics: Failed to renew token: Kind: TIMELINE_DELEGATION_TOKEN, Service: 192.168.1.134:8188, Ident: (owner=stevel, renewer=yarn, realUser=, issueDate=1433865735471, maxDate=1434470535471, sequenceNumber=5, masterKeyId=3)
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1433865735787
         final status: FAILED
         tracking URL: http://devix.cotham.uk:8088/proxy/application_1433777033372_0005/
         user: stevel
2015-06-09 17:02:21,811 [main] INFO  yarn.Client (Logging.scala:logInfo(59)) - Deleting staging directory .sparkStaging/application_1433777033372_0005
2015-06-09 17:02:21,891 [main] ERROR spark.SparkContext (Logging.scala:logError(96)) - Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:117)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:497)
        at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:28)
        at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:663)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2015-06-09 17:02:21,974 [main] INFO  ui.SparkUI (Logging.scala:logInfo(59)) - Stopped Spark web UI at http://192.168.1.86:4040
2015-06-09 17:02:21,976 [main] ERROR spark.SparkContext (Logging.scala:logError(96)) - Error stopping SparkContext after init error.
java.lang.NullPointerException
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:153)
        at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:421)
        at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1407)
        at org.apache.spark.SparkContext.stop(SparkContext.scala:1642)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:565)
        at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:28)
        at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:663)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:117)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:497)
        at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:28)
        at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:663)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
{code}
)
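
The application report above pins the failure on token renewal: the ResourceManager could not renew the TIMELINE_DELEGATION_TOKEN, which was issued with renewer=yarn even though the client had earlier computed its delegation token renewer as rm/devix.cotham.uk@COTHAM. As a rough sketch of the client-side step involved, assuming the Hadoop 2.6 TimelineClient API ({{addTimelineToken}} is a hypothetical helper, not Spark's actual code):

{code}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.security.Credentials
import org.apache.hadoop.yarn.client.api.TimelineClient

// Hypothetical helper: fetch an ATS delegation token and add it to the
// credentials shipped with the AM container, so the RM can renew it.
def addTimelineToken(conf: Configuration, creds: Credentials): Unit = {
  val client = TimelineClient.createTimelineClient()
  client.init(conf)
  client.start()
  try {
    // The renewer named here is the principal the RM renews as; the report
    // above shows the token carrying renewer=yarn when renewal failed.
    val renewer = conf.get("yarn.resourcemanager.principal")
    val token = client.getDelegationToken(renewer)
    creds.addToken(token.getService, token)
  } finally {
    client.stop()
  }
}
{code}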

> Add integration with Yarn's Application Timeline Server
> -------------------------------------------------------
>
>                 Key: SPARK-1537
>                 URL: https://issues.apache.org/jira/browse/SPARK-1537
>             Project: Spark
>          Issue Type: New Feature
>          Components: YARN
>            Reporter: Marcelo Vanzin
>         Attachments: SPARK-1537.txt, spark-1573.patch
>
>
> It would be nice to have Spark integrate with Yarn's Application Timeline 
> Server (see YARN-321, YARN-1530). This would allow users running Spark on 
> Yarn to have a single place to go for all their history needs, and avoid 
> having to manage a separate service (Spark's built-in server).
> At the moment, there's a working version of the ATS in the Hadoop 2.4 branch, 
> although there is still some ongoing work. But the basics are there, and I 
> wouldn't expect them to change (much) at this point.
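
A minimal sketch of the shape this integration could take, assuming the Hadoop 2.4+ TimelineClient API: a SparkListener that forwards lifecycle events to the ATS as timeline entities. The {{AtsListener}} name and the entity/event type strings are illustrative assumptions, not a settled design.

{code}
import org.apache.hadoop.yarn.api.records.timeline.{TimelineEntity, TimelineEvent}
import org.apache.hadoop.yarn.client.api.TimelineClient
import org.apache.hadoop.yarn.conf.YarnConfiguration
import org.apache.spark.scheduler.{SparkListener, SparkListenerApplicationStart}

// Illustrative listener: publishes an application-start event to the ATS.
class AtsListener(appId: String) extends SparkListener {
  private val client = TimelineClient.createTimelineClient()
  client.init(new YarnConfiguration())
  client.start()

  override def onApplicationStart(event: SparkListenerApplicationStart): Unit = {
    val entity = new TimelineEntity()
    entity.setEntityType("SparkApplication") // assumed entity type name
    entity.setEntityId(appId)
    entity.setStartTime(event.time)
    val tlEvent = new TimelineEvent()
    tlEvent.setEventType("APPLICATION_START") // assumed event type name
    tlEvent.setTimestamp(event.time)
    entity.addEvent(tlEvent)
    client.putEntities(entity) // HTTP POST to the ATS, e.g. .../ws/v1/timeline/
  }
}
{code}

With events stored in YARN's own timeline store, anything that reads the ATS REST API can serve Spark history, which is what would remove the need to run Spark's separate history server.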



