https://issues.apache.org/jira/browse/SPARK-13745

is a genuine defect and a blocker unless the decision is to drop support
for Big Endian platforms. The PR has been reviewed and tested, and I
strongly believe it should be targeted for 2.0.

On Mon, May 2, 2016 at 12:00 AM Reynold Xin <r...@databricks.com> wrote:

> Hi devs,
>
> Three weeks ago I mentioned on the dev list that we would create branch-2.0
> (effectively a "feature freeze") in 2 - 3 weeks. I've just created Spark's
> branch-2.0 to form the basis of the 2.0 release. We have closed ~ 1700
> issues. That's huge progress, and we should celebrate that.
>
> Compared with past releases, we have far fewer open issues at the time of
> cutting the release branch. In the past we usually had 200 - 400 open issues
> when we cut the release branch. As of today we have fewer than 100 open
> issues for 2.0.0, and among these, 14 are critical and 2 are blockers (the
> Jersey dependency upgrade and some remaining issues in separating out the
> local linear algebra library).
>
> What does this mean for committers?
>
> 0. For patches that should go into Spark 2.0.0, make sure you merge them
> not just into master but also into branch-2.0 (a minimal backport sketch
> follows after this list).
>
> 1. In the next couple of days, shepherd in some of the more important
> straggler pull requests.
>
> 2. Switch the focus from new feature development to bug fixes, stability
> improvements, finalizing API tweaks, and documentation.
>
> 3. Experimental features (e.g. R, structured streaming) can continue to be
> developed, provided that the changes don't impact the non-experimental
> features.
>
> 4. We should become increasingly conservative as time goes on, even for
> experimental features.
>
> 5. Please un-target or re-target issues if they don't make sense for 2.0.
> We should burn the number of open issues down to ~ 0 by the time we have a
> release candidate.
>
> 6. If possible, reach out to users and start testing branch-2.0 to find
> bugs. The more testing we can do on real workloads before the release, the
> fewer bugs we will find in the actual Spark 2.0 release.
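
On point 0 above: here is a minimal sketch of one way to backport a commit to
branch-2.0, assuming the commit hash from master is known and that the git
remote for apache/spark is named "apache" (both names are illustrative
assumptions; the dev/merge_spark_pr.py merge script in the repo can also
prompt for the backport branch when merging a PR):

    # Fetch the newly created release branch; the remote name "apache" is an assumption
    git fetch apache
    git checkout -b branch-2.0 apache/branch-2.0

    # Cherry-pick the commit already merged to master; -x records the original SHA
    git cherry-pick -x <commit-sha-from-master>

    # Push the backport (requires committer access)
    git push apache branch-2.0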
