Hi Wangda,

 Thank you for the heads-up mail.
 We are working in the branch (HDFS-10285) and trying to finish the tasks
before the deadline.

Regards,
Uma

On 1/17/18, 11:35 AM, "Wangda Tan" <wheele...@gmail.com> wrote:

    Hi All,
    
    We're fast approaching the previously proposed feature freeze date (Jan
    30, about 13 days from today). If you have any features that live in a
    branch and are targeted for 3.1.0, please reply to this email thread.
    Ideally, we should finish branch merges before the feature freeze date.
    
    Here's an updated 3.1.0 feature status:
    
    1. Merged & Completed features:
    * (Sunil) YARN-5881: Support absolute value in CapacityScheduler.
    * (Wangda) YARN-6223: GPU support on YARN. Feature is in trunk and works
    end-to-end.
    * (Jian) YARN-5079, YARN-4793, YARN-4757, YARN-6419: YARN native services.
    * (Steve Loughran): HADOOP-13786: S3Guard committer for zero-rename commits.
    * (Suma): YARN-7117: Capacity Scheduler: Support Auto Creation of Leaf
    Queues While Doing Queue Mapping.
    * (Chris Douglas) HDFS-9806: HDFS Tiered Storage.
    
    2. Features close to finish:
    * (Zhankun) YARN-5983: FPGA support. The majority of the implementation is
    completed and merged to trunk, except for UI/documentation.
    * (Uma) HDFS-10285: HDFS SPS. The majority of the implementation is done;
    some discussions about the implementation are still ongoing.
    * (Arun Suresh / Kostas / Wangda) YARN-6592: New SchedulingRequest and
    anti-affinity support. Close to finish, on track to be merged before Jan 30.
    
    3. Tentative features:
    * (Arun Suresh) YARN-5972: Support pausing/freezing opportunistic
    containers. Only one patch pending; planned to finish before Jan 7th.
    * (Haibo Chen) YARN-1011: Resource overcommitment. Looks challenging to
    complete before Jan 2018.
    * (Anu) HDFS-7240: Ozone. Given the ongoing discussion on HDFS-7240, it
    looks challenging to complete before Jan 2018.
    * (Varun V) YARN-5673: container-executor rewrite. Given that the security
    refactoring of c-e (YARN-6623) has already landed, IMHO the remaining work
    may be moved to 3.2.
    
    Thanks,
    Wangda
    
    
    
    
    On Fri, Dec 15, 2017 at 1:20 PM, Wangda Tan <wheele...@gmail.com> wrote:
    
    > Hi all,
    >
    > Congratulations on the 3.0.0-GA release!
    >
    > As we discussed in the previous email thread [1], I'd like to restart
    > the 3.1.0 release planning.
    >
    > a) Quick summary:
    > a.1 Release status
    > We started the 3.1 release discussion on Sep 6, 2017 [1]. As of today,
    > there are 232 patches loaded on 3.1.0 alone [2], besides 6 open blockers
    > and 22 open critical issues.
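    >
    > For reference, a JIRA filter along the lines below should list those open
    > blockers and criticals. This is only a sketch, assuming the project's
    > "Target Version/s" field; it is not a filter quoted from this thread:
    >
    > project in (YARN, HADOOP, MAPREDUCE, HDFS)
    >   AND "Target Version/s" = 3.1.0
    >   AND priority in (Blocker, Critical)
    >   AND resolution = Unresolved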
    >
    > a.2 Release date update
    > Considering the month-and-a-half delay of the 3.0-GA release, I propose
    > to move the dates as follows:
    >  - feature freeze date from Dec 15, 2017, to Jan 30, 2018 - also the last
    > date for any branches to get merged;
    >  - code freeze (blockers & criticals only) date to Feb 08, 2018;
    >  - release voting to start by Feb 18, 2018, leaving time for at least two
    > RCs;
    >  - release date from Jan 15, 2018, to Feb 28, 2018.
    >
    > Unlike before, I added an additional milestone for release-vote-start so
    > that we can also account for the voting time period.
    >
    > Overall this is still a 5 1/2-month release timeline, not the faster
    > cadence we hoped for, but in my opinion it is the best updated timeline
    > given the delays of the final 3.0-GA release.
    >
    > b) Individual feature status:
    > I spoke to several feature owners and checked the status of unfinished
    > features; following is the status of features planned for 3.1.0:
    >
    > b.1 Merged & Completed features:
    > * (Sunil) YARN-5881: Support absolute value in CapacityScheduler.
    > * (Wangda) YARN-6223: GPU support on YARN. Feature is in trunk and works
    > end-to-end.
    > * (Jian) YARN-5079, YARN-4793, YARN-4757, YARN-6419: YARN native services.
    > * (Steve Loughran): HADOOP-13786: S3Guard committer for zero-rename
    > commits.
    > * (Suma): YARN-7117: Capacity Scheduler: Support Auto Creation of Leaf
    > Queues While Doing Queue Mapping.
    >
    > b.2 Features close to finish:
    > * (Chris Douglas) HDFS-9806: HDFS Tiered Storage. Merge vote in progress.
    > * (Zhankun) YARN-5983: FPGA support. The majority of the implementation
    > is completed and merged to trunk, except for UI/documentation.
    > * (Uma) HDFS-10285: HDFS SPS. The majority of the implementation is done;
    > some discussions about the implementation are still ongoing.
    >
    > b.3 Tentative features:
    > * (Arun Suresh) YARN-5972: Support pausing/freezing opportunistic
    > containers. Only one patch pending; planned to finish before Jan 7th.
    > * (Haibo Chen) YARN-1011: Resource overcommitment. Looks challenging to
    > complete before Jan 2018.
    > * (Arun Suresh / Kostas / Wangda) YARN-6592: New SchedulingRequest and
    > anti-affinity support. Tentative; will figure out by Jan 1st.
    > * (Anu) HDFS-7240: Ozone. Given the ongoing discussion on HDFS-7240, it
    > looks challenging to complete before Jan 2018.
    > * (Varun V) YARN-5673: container-executor rewrite. Given that the
    > security refactoring of c-e (YARN-6623) has already landed, IMHO the
    > remaining work may be moved to 3.2.
    >
    > b.4 Additional release drivers
    > * More exhaustive upgrade testing from 2.x to 3.x.
    >
    > c) Regarding branch cut:
    >
    > We will keep trunk pointed to 3.1 and cut branch-3.1 when either: A. some
    > feature planned for 3.2 has to land on trunk, or B. the feature freeze
    > date arrives, whichever comes first.
    >
    > I've also talked offline with Vinod to get help on release management,
    > given this is my first release. He agreed to help drive this release
    > jointly.
    >
    > Thoughts?
    >
    > Thanks,
    > Wangda Tan
    >
    > [1] https://lists.apache.org/thread.html/c11506c3250c9481852130616b3cb09a0e222f5c2465c015f9906dab@%3Cyarn-dev.hadoop.apache.org%3E
    > [2] "project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.0)
    > AND fixVersion not in (3.0.0,2.9.0) ORDER BY priority DESC”
    >
    >
    
