Hi Sumona,
      This is a bug in older Spark versions; it is fixed in Spark 1.6.0.
      After an application completes, the Spark master loads its event log into 
memory to rebuild the application UI, and because this happens inside the 
master's actor, it is synchronous. If the event log is large, the master will 
hang for a long time and you cannot submit any applications; if the master's 
heap is too small, the master will die!
      The fix in Spark 1.6 is still not ideal: the operation is now async, but 
you still need to set a large Java heap for the master.
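As a workaround, a minimal sketch of the two standard knobs involved (assuming a standalone deployment where you control spark-env.sh and spark-defaults.conf; note that disabling event logging also gives up the post-mortem application UI):

```shell
# spark-env.sh: give the standalone master daemon a larger heap, so that
# replaying a big event log after an application finishes does not OOM it.
# SPARK_DAEMON_MEMORY sets the heap for the master/worker daemon JVMs.
export SPARK_DAEMON_MEMORY=4g

# spark-defaults.conf: alternatively, stop writing event logs entirely,
# so the master has nothing to replay when an application completes:
#
#   spark.eventLog.enabled   false
```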
------------------ Original Message ------------------
From: "Shixiong(Ryan) Zhu" <shixi...@databricks.com>
Date: Tuesday, March 1, 2016, 8:02
To: "Sumona Routh" <sumos...@gmail.com>
Cc: "user@spark.apache.org" <user@spark.apache.org>
Subject: Re: Spark UI standalone "crashes" after an application finishes



Do you mean you cannot access Master UI after your application completes? Could 
you check the master log?

On Mon, Feb 29, 2016 at 3:48 PM, Sumona Routh <sumos...@gmail.com> wrote:
Hi there,

I've been doing some performance tuning of our Spark application, which is 
using Spark 1.2.1 standalone. I have been using the spark metrics to graph out 
details as I run the jobs, as well as the UI to review the tasks and stages.


I notice that after my application completes, or is near completion, the UI 
"crashes." I get a Connection Refused response. Sometimes, the page eventually 
recovers and will load again, but sometimes I end up having to restart the 
Spark master to get it back. When I look at my graphs on the app, the memory 
consumption (of driver, executors, and what I believe to be the daemon 
(spark.jvm.total.used)) appears to be healthy. Monitoring the master machine 
itself, memory and CPU appear healthy as well.


Has anyone else seen this issue? Are there logs for the UI itself, and where 
might I find those?


Thanks!

Sumona
