Hi,

Could you give more information about your Spark environment? Cluster manager, Spark version, whether dynamic allocation is enabled, etc.
Generally, executors delete their temporary shuffle directories on exit because JVM shutdown hooks are registered for them; the hooks only fail to run when an executor is killed forcefully (e.g. with SIGKILL). You can safely delete a blockmgr directory once you are sure the Spark application that owns it has finished. A crontab task can be used for automatic cleanup.

> On Sep 2, 2016, at 12:18, 汪洋 <tiandiwo...@icloud.com> wrote:
>
> Hi all,
>
> I discovered that an executor sometimes exits unexpectedly, and when it is
> restarted it creates another blockmgr directory without deleting the old
> ones. Thus, for a long-running application, some shuffle files are never
> cleaned up. Sometimes those files can take up the whole disk.
>
> Is there a way to clean up those unused files automatically? Or is it safe
> to delete the old directories manually, leaving only the newest one?
>
> Here is the executor's local directory:
> <D7718580-FF26-47F8-B6F8-00FB1F20A8C0.png>
>
> Any advice on this?
>
> Thanks.
>
> Yang
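To make the crontab suggestion concrete, here is a minimal sketch of such a cleanup script. It assumes the executors write under a single local directory (the `LOCAL_DIR` default of `/tmp` is an assumption; point it at your `spark.local.dir` setting) and that any blockmgr directory untouched for more than 7 days belongs to a finished application. Adjust the age threshold to something safely longer than your longest-running job.

```shell
#!/bin/sh
# Hedged cleanup sketch, not from the thread above: delete leftover
# blockmgr-* shuffle directories older than 7 days.
# LOCAL_DIR is an assumed name -- set it to your spark.local.dir.
# Only run this when the owning applications are known to be finished.
LOCAL_DIR="${LOCAL_DIR:-/tmp}"
find "$LOCAL_DIR" -maxdepth 1 -type d -name 'blockmgr-*' -mtime +7 \
    -exec rm -rf {} +
```

A crontab entry could then run it nightly, e.g. `0 3 * * * /path/to/cleanup.sh` (path hypothetical).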