We are running Spark Streaming jobs (version 1.1.0). After enough time, the
stderr file grows until the disk is 100% full, which crashes the cluster. I've
read this PR:

https://github.com/apache/spark/pull/895

and also read this section of the configuration docs:

http://spark.apache.org/docs/latest/configuration.html#spark-streaming


So I've tried setting the following, in an attempt to get the stderr log file
to roll:

sparkConf.set("spark.executor.logs.rolling.strategy", "size")
            .set("spark.executor.logs.rolling.size.maxBytes", "1024")
            .set("spark.executor.logs.rolling.maxRetainedFiles", "3")


Yet the stderr file does not roll and just keeps growing. Am I missing
something obvious?


thanks,
Duc
