Thanks for reporting these issues!

Please continue to test RC2 and report more issues.

Cheers,

Xiao

On Fri, May 22, 2020 at 7:40 AM Koert Kuipers <ko...@tresata.com> wrote:

> I would like to point out that SPARK-27194 is a fault-tolerance bug that
> causes jobs to fail when any single task is retried. For us this is a major
> headache because we have to keep restarting jobs (and explaining that Spark
> is really fault tolerant generally, just not here).
> https://issues.apache.org/jira/browse/SPARK-27194
> This is not a regression and it's not a blocker, but if it could make it
> into Spark 3.0.0, that would be a win, I think. The pull request is waiting
> for review.
> Thanks!
> Best, Koert
>
> On Thu, May 21, 2020 at 11:06 PM Jungtaek Lim <
> kabhwan.opensou...@gmail.com> wrote:
>
>> Looks like some new blocker issues have been found.
>>
>> * https://issues.apache.org/jira/browse/SPARK-31786
>> * https://issues.apache.org/jira/browse/SPARK-31761 (not yet marked as a
>> blocker, but according to the JIRA comments it's a regression as well as a
>> correctness issue, IMHO)
>>
>> Let's collect the list of blocker issues so that RC3 won't miss them.
>>
>> On Thu, May 21, 2020 at 2:12 AM Ryan Blue <rb...@netflix.com.invalid>
>> wrote:
>>
>>> Okay, I took a look at the PR and I think it should be okay. The new
>>> classes are unfortunately public, but they are in catalyst, which is
>>> considered private. So this is the approach we discussed.
>>>
>>> I'm fine with the commit, other than the fact that it violated ASF norms
>>> <https://www.apache.org/foundation/voting.html> to commit without
>>> waiting for a review.
>>>
>>> On Wed, May 20, 2020 at 10:00 AM Ryan Blue <rb...@netflix.com> wrote:
>>>
>>>> Why was https://github.com/apache/spark/pull/28523 merged with a
>>>> -1? We discussed this months ago and concluded that it was a bad idea to
>>>> introduce a new v2 API that cannot have reliable behavior across sources.
>>>>
>>>> The last time I checked that PR, the approach I discussed with
>>>> Tathagata was to not add update mode to DSv2. Instead, Tathagata gave a
>>>> couple of reasonable options to avoid it. Why were those not done?
>>>>
>>>> This is the second time this year that a PR with a -1 was merged. Does
>>>> the Spark community not follow the convention to build consensus before
>>>> merging changes?
>>>>
>>>> On Wed, May 20, 2020 at 12:13 AM Wenchen Fan <cloud0...@gmail.com>
>>>> wrote:
>>>>
>>>>> It seems the priority of SPARK-31706 was incorrectly marked; it's a
>>>>> blocker now. The fix was merged just a few hours ago.
>>>>>
>>>>> This should be a -1 for RC2.
>>>>>
>>>>> On Wed, May 20, 2020 at 2:42 PM rickestcode <
>>>>> matthias.harder...@gmail.com> wrote:
>>>>>
>>>>>> +1
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>
>>>> --
>>>> Ryan Blue
>>>> Software Engineer
>>>> Netflix
>>>>
>>>
>>>
>>>
>>

