If you are using the trunk code, you should be able to configure Spark to write event logs of the application/task UI contents so the history server can serve them, letting you check application/task details later.
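A minimal sketch of what that configuration looks like, assuming an HDFS log directory (the path and namenode address are placeholders; the exact property names and history-server invocation depend on your Spark version, so check docs/monitoring.md for yours):

```shell
# conf/spark-defaults.conf (property file, shown here as comments):
#   spark.eventLog.enabled   true
#   spark.eventLog.dir       hdfs://namenode:8021/spark-logs   # placeholder path

# Start the history server, pointing it at the same log directory;
# it serves past application UIs on port 18080 by default.
./sbin/start-history-server.sh hdfs://namenode:8021/spark-logs
```

Newer Spark versions read the log location from a property (`spark.history.fs.logDirectory`) instead of a command-line argument.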
Different configs are needed for standalone mode vs. YARN/Mesos mode. Check the latest docs/monitoring.md for details.

Best Regards,
Raymond Liu

-----Original Message-----
From: wxhsdp [mailto:wxh...@gmail.com]
Sent: Tuesday, April 29, 2014 8:07 AM
To: u...@spark.incubator.apache.org
Subject: Re: questions about debugging a spark application

Thanks for your reply, Daniel.

What do you mean by "the logs contain everything to reconstruct the same data"?

I have also spent time looking into the logs, but got only a little out of them. As far as I can see, they log the flow of the application, but there are no further details about each task. For example, see the following logs:

14/04/28 16:36:16.740 INFO CoarseGrainedExecutorBackend: Got assigned task 70
14/04/28 16:36:16.740 INFO Executor: Running task ID 70
14/04/28 16:36:16.742 INFO BlockManager: Found block broadcast_0 locally
14/04/28 16:36:16.747 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Getting 49 non-zero-bytes blocks out of 49 blocks
14/04/28 16:36:16.747 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Started 0 remote gets in 0 ms
14/04/28 16:36:16.821 INFO Executor: Serialized size of result for 70 is 1449738
14/04/28 16:36:16.821 INFO Executor: Sending result for 70 directly to driver
14/04/28 16:36:16.825 INFO Executor: Finished task ID 70