I have a Spark application with a task that appears to fail, but it actually did write out some of the files assigned to it. Spark then assigns the task to another executor, which gets a FileAlreadyExistsException. The Hadoop code seems to allow files to be overwritten, but the 1.5.1 version of this code doesn't appear to allow that flag to be passed in. Is that correct?
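For anyone following along, here is a minimal sketch of the semantics in question, using local-filesystem calls as an analogy for Hadoop's `FileSystem.create(path, overwrite)` (the Python `create` helper below is hypothetical, not Spark or Hadoop code): a retried attempt that recreates an existing output file fails unless an overwrite flag is passed.

```python
import os
import tempfile

def create(path, overwrite=False):
    """Analogy for Hadoop FileSystem.create(path, overwrite):
    fail if the file already exists and overwrite is False,
    truncate and reuse it otherwise."""
    flags = os.O_WRONLY | os.O_CREAT
    if overwrite:
        flags |= os.O_TRUNC
    else:
        flags |= os.O_EXCL  # raises FileExistsError if path exists
    return os.open(path, flags, 0o644)

d = tempfile.mkdtemp()
p = os.path.join(d, "part-00000")

os.close(create(p))                  # first attempt writes the file
try:
    create(p)                        # retry without overwrite: fails
except FileExistsError:
    print("file already exists")
os.close(create(p, overwrite=True))  # with overwrite=True: succeeds
```

If the writer path in question never sets that overwrite flag, a re-executed task hits the exception exactly as described above.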
Peter Halliday

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org