This was a completely misleading error message.

The problem was a log message being dumped to stdout. It accumulated in the
workers' stdout files, so after some time there was no space left on the
device.
When I re-tested with spark-0.9.1, the saveAsTextFile API threw a "no space
left on device" error after writing the same 48 files. The master looked
fine, but on the slaves the stdout files accounted for 99% of the root
filesystem.
After removing that particular log message, it now works fine in both
versions.
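For anyone hitting the same thing, here is a minimal sketch of the kind of
pattern that can cause it (the paths and names below are made up for
illustration). A println inside an RDD closure runs once per record on the
executors, and in standalone mode each executor's stdout is captured under
SPARK_HOME/work/<app-id>/<executor-id>/stdout, so it can quietly fill the
root filesystem:

    import org.apache.spark.{SparkConf, SparkContext}

    // In the spark-shell, sc already exists; for a standalone app:
    val sc = new SparkContext(new SparkConf().setAppName("StdoutDemo"))

    val lines = sc.textFile("hdfs:///input")          // hypothetical path

    // Problematic pattern: this println runs once per record on the
    // executors, and everything it prints lands in each executor's
    // stdout file on the worker machines.
    val cleaned = lines.map { line =>
      println("processing: " + line)                  // per-record output
      line.trim
    }

    cleaned.saveAsTextFile("s3n://bucket/output")     // hypothetical bucket

Dropping the println (or routing it through log4j at DEBUG level so it can
be silenced via log4j.properties) keeps the executors' stdout small.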




