You should always call sc.stop() so Spark can clean up its state and
temporary files rather than letting them accumulate and fill up your disk
over time. The strange behavior you observe is mostly benign, since it only
occurs after you have supposedly finished all of your work with the
SparkContext. I am not aware of a bug in Spark that causes this behavior.
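
For reference, a rough sketch of the pattern I have in mind (the app name
and master URL below are just placeholders; adjust them for your setup).
Wrapping the job in try / finally ensures sc.stop() runs even if the job
throws:

    import org.apache.spark.{SparkConf, SparkContext}

    object StopExample {
      def main(args: Array[String]): Unit = {
        // Placeholder app name and master; adjust for your cluster.
        val conf = new SparkConf()
          .setAppName("StopExample")
          .setMaster("local[*]")
        val sc = new SparkContext(conf)
        try {
          // Your actual job goes here.
          println(sc.parallelize(1 to 100).map(_ * 2).count())
        } finally {
          sc.stop() // always stop so temp/shuffle files get cleaned up
        }
      }
    }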

What are you doing in your application? Do you see any exceptions in the
logs? Have you looked at the worker logs? You can browse through these in
the worker web UI at http://<worker-url>:8081.

Andrew
