[jira] [Issue Comment Deleted] (SPARK-1537) Add integration with Yarn's Application Timeline Server

2015-06-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-1537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated SPARK-1537:
--
Comment: was deleted

(was: Full application log. Application hasn't actually stopped, which is interesting.
{code}
$ dist/bin/spark-submit  \
  --class org.apache.spark.examples.SparkPi \
  --properties-file ../clusterconfigs/clusters/devix/spark/spark-defaults.conf \
  --master yarn-client \
  --executor-memory 128m \
  --num-executors 1 \
  --executor-cores 1 \
  --driver-memory 128m \
  dist/lib/spark-examples-1.5.0-SNAPSHOT-hadoop2.6.0.jar 12
2015-06-09 17:01:59,596 [main] INFO  spark.SparkContext (Logging.scala:logInfo(59)) - Running Spark version 1.5.0-SNAPSHOT
2015-06-09 17:02:01,309 [sparkDriver-akka.actor.default-dispatcher-2] INFO  slf4j.Slf4jLogger (Slf4jLogger.scala:applyOrElse(80)) - Slf4jLogger started
2015-06-09 17:02:01,359 [sparkDriver-akka.actor.default-dispatcher-2] INFO  Remoting (Slf4jLogger.scala:apply$mcV$sp(74)) - Starting remoting
2015-06-09 17:02:01,542 [sparkDriver-akka.actor.default-dispatcher-2] INFO  Remoting (Slf4jLogger.scala:apply$mcV$sp(74)) - Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.1.86:51476]
2015-06-09 17:02:01,549 [main] INFO  util.Utils (Logging.scala:logInfo(59)) - Successfully started service 'sparkDriver' on port 51476.
2015-06-09 17:02:01,568 [main] INFO  spark.SparkEnv (Logging.scala:logInfo(59)) - Registering MapOutputTracker
2015-06-09 17:02:01,587 [main] INFO  spark.SparkEnv (Logging.scala:logInfo(59)) - Registering BlockManagerMaster
2015-06-09 17:02:01,831 [main] INFO  spark.HttpServer (Logging.scala:logInfo(59)) - Starting HTTP Server
2015-06-09 17:02:01,891 [main] INFO  util.Utils (Logging.scala:logInfo(59)) - Successfully started service 'HTTP file server' on port 51477.
2015-06-09 17:02:01,905 [main] INFO  spark.SparkEnv (Logging.scala:logInfo(59)) - Registering OutputCommitCoordinator
2015-06-09 17:02:02,038 [main] INFO  util.Utils (Logging.scala:logInfo(59)) - Successfully started service 'SparkUI' on port 4040.
2015-06-09 17:02:02,039 [main] INFO  ui.SparkUI (Logging.scala:logInfo(59)) - Started SparkUI at http://192.168.1.86:4040
2015-06-09 17:02:03,071 [main] INFO  spark.SparkContext (Logging.scala:logInfo(59)) - Added JAR file:/Users/stevel/Projects/Hortonworks/Projects/sparkwork/spark/dist/lib/spark-examples-1.5.0-SNAPSHOT-hadoop2.6.0.jar at http://192.168.1.86:51477/jars/spark-examples-1.5.0-SNAPSHOT-hadoop2.6.0.jar with timestamp 1433865723062
2015-06-09 17:02:03,691 [main] INFO  impl.TimelineClientImpl (TimelineClientImpl.java:serviceInit(285)) - Timeline service address: http://devix.cotham.uk:8188/ws/v1/timeline/
2015-06-09 17:02:03,808 [main] INFO  client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at devix.cotham.uk/192.168.1.134:8050
2015-06-09 17:02:04,577 [main] INFO  yarn.Client (Logging.scala:logInfo(59)) - Requesting a new application from cluster with 1 NodeManagers
2015-06-09 17:02:04,637 [main] INFO  yarn.Client (Logging.scala:logInfo(59)) - Verifying our application has not requested more than the maximum memory capability of the cluster (2048 MB per container)
2015-06-09 17:02:04,637 [main] INFO  yarn.Client (Logging.scala:logInfo(59)) - Will allocate AM container, with 896 MB memory including 384 MB overhead
2015-06-09 17:02:04,638 [main] INFO  yarn.Client (Logging.scala:logInfo(59)) - Setting up container launch context for our AM
2015-06-09 17:02:04,643 [main] INFO  yarn.Client (Logging.scala:logInfo(59)) - Preparing resources for our AM container
2015-06-09 17:02:05,096 [main] WARN  shortcircuit.DomainSocketFactory (DomainSocketFactory.java:init(116)) - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
2015-06-09 17:02:05,106 [main] DEBUG yarn.YarnSparkHadoopUtil (Logging.scala:logDebug(63)) - delegation token renewer is: rm/devix.cotham.uk@COTHAM
2015-06-09 17:02:05,107 [main] INFO  yarn.YarnSparkHadoopUtil (Logging.scala:logInfo(59)) - getting token for namenode: hdfs://devix.cotham.uk:8020/user/stevel/.sparkStaging/application_1433777033372_0005
2015-06-09 17:02:06,129 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) - HiveMetaStore configured in localmode
2015-06-09 17:02:06,130 [main] DEBUG yarn.Client (Logging.scala:logDebug(63)) - HBase Class not found: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.HBaseConfiguration
2015-06-09 17:02:06,225 [main] INFO  yarn.Client

[jira] [Issue Comment Deleted] (SPARK-1537) Add integration with Yarn's Application Timeline Server

2015-02-20 Thread Zhan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-1537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhan Zhang updated SPARK-1537:
--
Comment: was deleted

(was: [~sowen] By the way, I am not waiting for someone to hand me a patch. 
It is because someone declared the patch was almost ready half a year ago, and 
after I submitted mine, someone kept saying my patch is not much different from his.)

 Add integration with Yarn's Application Timeline Server
 ---

 Key: SPARK-1537
 URL: https://issues.apache.org/jira/browse/SPARK-1537
 Project: Spark
  Issue Type: New Feature
  Components: YARN
Reporter: Marcelo Vanzin
Assignee: Marcelo Vanzin
 Attachments: SPARK-1537.txt, spark-1573.patch


 It would be nice to have Spark integrate with Yarn's Application Timeline 
 Server (see YARN-321, YARN-1530). This would allow users running Spark on 
 Yarn to have a single place to go for all their history needs, and avoid 
 having to manage a separate service (Spark's built-in server).
 At the moment, there's a working version of the ATS in the Hadoop 2.4 branch, 
 although there is still some ongoing work. But the basics are there, and I 
 wouldn't expect them to change (much) at this point.
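For illustration only, a minimal sketch (not taken from any attached patch) of one shape such an integration could take: a SparkListener that forwards application lifecycle events to the ATS through Hadoop's TimelineClient API (available from Hadoop 2.4). The listener class name and the entity/event type strings below are hypothetical.

{code}
import org.apache.hadoop.yarn.api.records.timeline.{TimelineEntity, TimelineEvent}
import org.apache.hadoop.yarn.client.api.TimelineClient
import org.apache.hadoop.yarn.conf.YarnConfiguration
import org.apache.spark.scheduler.{SparkListener, SparkListenerApplicationEnd,
  SparkListenerApplicationStart}

// Hypothetical listener: publishes start/end events to the timeline server.
class AtsHistoryListener extends SparkListener {
  // TimelineClient is a YARN service; init() picks up the yarn.timeline-service.*
  // settings (e.g. the http://devix.cotham.uk:8188/ws/v1/timeline/ address seen
  // in the log above).
  private val client = TimelineClient.createTimelineClient()
  client.init(new YarnConfiguration())
  client.start()

  private var appId: String = "unknown"

  override def onApplicationStart(start: SparkListenerApplicationStart): Unit = {
    appId = start.appId.getOrElse(start.appName)
    client.putEntities(entity(appId, "SPARK_APPLICATION_START", start.time))
  }

  override def onApplicationEnd(end: SparkListenerApplicationEnd): Unit = {
    client.putEntities(entity(appId, "SPARK_APPLICATION_END", end.time))
    client.stop()
  }

  // Build a timeline entity carrying a single event; putEntities() POSTs it to
  // the ATS REST endpoint (/ws/v1/timeline/).
  private def entity(id: String, eventType: String, time: Long): TimelineEntity = {
    val e = new TimelineEntity()
    e.setEntityType("SparkApplication")
    e.setEntityId(id)
    e.setStartTime(time)
    val ev = new TimelineEvent()
    ev.setEventType(eventType)
    ev.setTimestamp(time)
    e.addEvent(ev)
    e
  }
}
{code}

A real implementation would presumably be registered through something like the spark.extraListeners mechanism (or wired into the YARN client directly) and would publish far richer entities than this two-event sketch.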



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org