I am working on potentially porting druid.io to a mesos framework. One
constraint for production use is that there is a lot of data cached locally
on disk, and that data should not need to be re-fetched during a rolling
restart.

If I were to take the simplest mesos route, each instance of the
disk-cache-heavy task would have its own executor and would have to refresh
the disk cache from deep storage each time it starts.
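To make that concrete, here is roughly what I mean (a minimal sketch in Java
against the Mesos protobuf bindings; the task name, task id, command line,
and resources are just placeholders, not Druid's actual launch code):

import org.apache.mesos.Protos;

final class SimpleRouteSketch {
  // Build a TaskInfo that runs a historical-style node under the default
  // command executor. Every relaunch gets a brand-new sandbox, so the
  // on-disk segment cache has to be re-pulled from deep storage.
  static Protos.TaskInfo buildTask(Protos.Offer offer) {
    return Protos.TaskInfo.newBuilder()
        .setName("druid-historical")                                    // placeholder
        .setTaskId(Protos.TaskID.newBuilder().setValue("historical-1")) // placeholder
        .setSlaveId(offer.getSlaveId())
        .setCommand(Protos.CommandInfo.newBuilder()
            .setValue("java -cp druid.jar io.druid.cli.Main server historical"))
        .addResources(Protos.Resource.newBuilder()
            .setName("cpus")
            .setType(Protos.Value.Type.SCALAR)
            .setScalar(Protos.Value.Scalar.newBuilder().setValue(1.0)))
        .build();
  }
}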

A more complex route would be to have a standalone executor which handles
the forking and restarts of tasks in order to maintain the working
directory of the task.
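Something along these lines, where the executor forks the actual Druid
process and only the child gets restarted, so the sandbox (and the cache in
it) stays put. This is just a skeleton with placeholder bodies, assuming the
standard org.apache.mesos.Executor interface:

import org.apache.mesos.Executor;
import org.apache.mesos.ExecutorDriver;
import org.apache.mesos.MesosExecutorDriver;
import org.apache.mesos.Protos;

public class CacheKeepingExecutor implements Executor {
  @Override public void registered(ExecutorDriver driver,
                                   Protos.ExecutorInfo executorInfo,
                                   Protos.FrameworkInfo frameworkInfo,
                                   Protos.SlaveInfo slaveInfo) {}
  @Override public void reregistered(ExecutorDriver driver, Protos.SlaveInfo slaveInfo) {}
  @Override public void disconnected(ExecutorDriver driver) {}

  @Override public void launchTask(ExecutorDriver driver, Protos.TaskInfo task) {
    // Fork the Druid process here; its working directory is the executor's
    // sandbox, so a later launchTask after a restart sees the same cache dir.
  }

  @Override public void killTask(ExecutorDriver driver, Protos.TaskID taskId) {
    // Stop the forked process but leave the sandbox contents in place.
  }

  @Override public void frameworkMessage(ExecutorDriver driver, byte[] data) {}
  @Override public void shutdown(ExecutorDriver driver) {}
  @Override public void error(ExecutorDriver driver, String message) {}

  public static void main(String[] args) {
    new MesosExecutorDriver(new CacheKeepingExecutor()).run();
  }
}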

A slightly hackier way of doing it would be to let each disk-cache-heavy
task have its own executor but use a common SharedFilesystem. But I'm not
clear on whether data in a SharedFilesystem would persist beyond an
executor's lifespan.
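For example, something like the volume setup below, attached to the
ExecutorInfo/TaskInfo via setContainer with the shared-filesystem isolation
enabled on the slaves. The paths are made up, and whether data under that
mount actually survives the executor is exactly what I'm unsure about:

import org.apache.mesos.Protos;

final class SharedCacheVolumeSketch {
  // Point the container at a common host directory for the segment cache.
  static Protos.ContainerInfo cacheContainer() {
    return Protos.ContainerInfo.newBuilder()
        .setType(Protos.ContainerInfo.Type.MESOS)
        .addVolumes(Protos.Volume.newBuilder()
            .setContainerPath("/var/druid/segment-cache")  // hypothetical path
            .setHostPath("/data/druid/segment-cache")      // hypothetical host dir
            .setMode(Protos.Volume.Mode.RW))
        .build();
  }
}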

In such a case (where on-disk data would need to be "immediately" available
after a rolling restart), is there a recommended approach to making sure the
data persists properly?

Thanks,
Charles Allen
