Hey Cody,

In terms of Spark 1.1.1 - we wouldn't change a default value in a point
release. Making sort the default shuffle manager is slotted for 1.2.0:

https://issues.apache.org/jira/browse/SPARK-3280

- Patrick

On Mon, Sep 22, 2014 at 9:08 AM, Cody Koeninger <c...@koeninger.org> wrote:
> Unfortunately we were somewhat rushed to get things working again and did
> not keep the exact stacktraces, but one of the issues we saw was similar to
> that reported in
>
> https://issues.apache.org/jira/browse/SPARK-3032
>
> We also saw FAILED_TO_UNCOMPRESS errors from snappy when reading the
> shuffle file.
>
>
>
> On Mon, Sep 22, 2014 at 10:54 AM, Sandy Ryza <sandy.r...@cloudera.com>
> wrote:
>
>> Thanks for the heads up, Cody. Any indication of what was going wrong?
>>
>> On Mon, Sep 22, 2014 at 7:16 AM, Cody Koeninger <c...@koeninger.org>
>> wrote:
>>
>>> Just as a heads up, we deployed 471e6a3a of master (in order to get some
>>> SQL fixes), and were seeing jobs fail until we set
>>>
>>> spark.shuffle.manager=HASH
>>>
>>> I'd be reluctant to change the default to sort for the 1.1.1 release
>>>
>>
>>
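For anyone hitting the same failures, the workaround described above can be
applied per-job or cluster-wide. A minimal sketch for Spark 1.1.x follows; the
job class and jar names are placeholders, not something from this thread:

```shell
# Per-job: override the shuffle manager on the command line.
# (com.example.MyJob and my-job.jar are hypothetical.)
spark-submit \
  --conf spark.shuffle.manager=hash \
  --class com.example.MyJob \
  my-job.jar

# Cluster-wide alternative: add this line to conf/spark-defaults.conf
#   spark.shuffle.manager  hash
```

Spark 1.1.x lower-cases the value before matching, so "HASH" and "hash" are
equivalent.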

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org
