Hey Cody,
In terms of Spark 1.1.1: we wouldn't change a default value in a point
release. Making the sort-based shuffle manager the default is slotted for 1.2.0:
https://issues.apache.org/jira/browse/SPARK-3280
- Patrick
On Mon, Sep 22, 2014 at 9:08 AM, Cody Koeninger wrote:
Unfortunately we were somewhat rushed to get things working again and did
not keep the exact stack traces, but one of the issues we saw was similar to
the one reported in
https://issues.apache.org/jira/browse/SPARK-3032
We also saw FAILED_TO_UNCOMPRESS errors from Snappy when reading the
shuffle files.
Thanks for the heads up Cody. Any indication of what was going wrong?
On Mon, Sep 22, 2014 at 7:16 AM, Cody Koeninger wrote:
Just as a heads up, we deployed 471e6a3a of master (in order to get some
SQL fixes), and were seeing jobs fail until we set
spark.shuffle.manager=HASH
I'd be reluctant to change the default to sort for the 1.1.1 release.
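For anyone hitting the same failures, a sketch of where that setting can go
(assuming a standard spark-submit deployment; file locations vary by install):

```
# conf/spark-defaults.conf -- fall back to the hash-based shuffle manager
spark.shuffle.manager  HASH
```

Or per job on the command line, without touching the defaults file:

```
spark-submit --conf spark.shuffle.manager=HASH ...
```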