Re: [VOTE] Release Apache Hadoop 3.1.2 - RC1

2019-02-05 Thread Sunil G
Thanks, Billie, for pointing this out.
I have updated the source by removing the patchprocess directory and the
extra line in dev-support/bin/create-release.

I have also updated the checksums.

@bil...@apache.org   @Wangda Tan 
please help verify these changes.

Thanks
Sunil
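For anyone re-verifying the changed bits, here is a minimal sketch of the checksum step. A local stand-in tarball is used for illustration; in practice the files come from the RC staging area, and a full check also covers the GPG signature and the tarball contents:

```shell
set -e
work=$(mktemp -d) && cd "$work"

# Stand-in for the release artifact; real voters download the RC tarball
# and its published .sha512 file from the staging area instead.
echo "release contents" > hadoop-3.1.2.tar.gz
sha512sum hadoop-3.1.2.tar.gz > hadoop-3.1.2.tar.gz.sha512

# Re-check the published checksum against the tarball:
sha512sum -c hadoop-3.1.2.tar.gz.sha512

# A full verification would also run (against the real artifacts):
#   gpg --verify hadoop-3.1.2.tar.gz.asc hadoop-3.1.2.tar.gz
#   tar -tzf hadoop-3.1.2-src.tar.gz | grep -c patchprocess   # expect no matches
#   grep -n GPG_AGENT_INFO dev-support/bin/create-release     # expect no matches
```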

On Tue, Feb 5, 2019 at 5:23 AM Billie Rinaldi 
wrote:

> Hey Sunil and Wangda, thanks for the RC. The source tarball has a
> patchprocess directory with some yetus code in it. Also, the file
> dev-support/bin/create-release has the following line added:
>   export GPG_AGENT_INFO="/home/sunilg/.gnupg/S.gpg-agent:$(pgrep
> gpg-agent):1"
>
> I think we are probably due for an overall review of LICENSE and NOTICE. I
> saw some idiosyncrasies there but nothing that looked like a blocker.
>
> On Mon, Jan 28, 2019 at 10:20 PM Sunil G  wrote:
>
>> Hi Folks,
>>
>> On behalf of Wangda, we have an RC1 for Apache Hadoop 3.1.2.
>>
>> The artifacts are available here:
>> http://home.apache.org/~sunilg/hadoop-3.1.2-RC1/
>>
>> The RC tag in git is release-3.1.2-RC1:
>> https://github.com/apache/hadoop/commits/release-3.1.2-RC1
>>
>> The maven artifacts are available via repository.apache.org at
>> https://repository.apache.org/content/repositories/orgapachehadoop-1215
>>
>> This vote will run 5 days from now.
>>
>> 3.1.2 contains 325 [1] fixed JIRA issues since 3.1.1.
>>
>> We have done testing with a pseudo cluster and distributed shell job.
>>
>> My +1 to start.
>>
>> Best,
>> Wangda Tan and Sunil Govindan
>>
>> [1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.2)
>> ORDER BY priority DESC
>>
>


Re: [VOTE] Propose to start new Hadoop sub project "submarine"

2019-02-01 Thread Sunil G
+1. Thanks, Wangda.

- Sunil

On Sat, Feb 2, 2019 at 3:54 AM Wangda Tan  wrote:

> Hi all,
>
> Following the positive feedback from the thread [1],
>
> this is a vote thread to start a new subproject named "hadoop-submarine"
> which follows the release process already established for Ozone.
>
> The vote runs for the usual 7 days, ending Feb 8th at 5 PM PDT.
>
> Thanks,
> Wangda Tan
>
> [1]
>
> https://lists.apache.org/thread.html/f864461eb188bd12859d51b0098ec38942c4429aae7e4d001a633d96@%3Cyarn-dev.hadoop.apache.org%3E
>


Re: [DISCUSS] Making submarine to different release model like Ozone

2019-01-31 Thread Sunil G
+1 from me on this.
ML/DL is one of the fastest-growing areas, and a runtime on YARN lets
customers run ML/DL workloads on the same cluster where ETL and other
traditional big data workloads ingest and mine data.
A faster release cadence can speed up Submarine development and make it
easier to run on older Hadoop versions without any upgrade effort.

- Sunil



On Fri, Feb 1, 2019 at 12:34 AM Wangda Tan  wrote:

> Hi devs,
>
> Since we started the submarine-related effort last year, we have received a
> lot of feedback; several companies (such as Netease, China Mobile, etc.) are
> trying to deploy Submarine to their Hadoop clusters along with big data
> workloads. LinkedIn also has a big interest in contributing a Submarine TonY (
> https://github.com/linkedin/TonY) runtime to allow users to use the same
> interface.
>
> From what I can see, there are several issues with putting Submarine under
> the yarn-applications directory and having the same release cycle as Hadoop:
>
> 1) We started the 3.2.0 release in Sep 2018, but it was only completed in Jan
> 2019. Because of unpredictable blockers and security issues, it was delayed a
> lot. We need to iterate on Submarine fast at this point.
>
> 2) We also see a lot of requirements to use Submarine on older Hadoop
> releases such as 2.x. Many companies may not upgrade Hadoop to 3.x in a
> short time, but the need to run deep learning workloads is urgent for them. We
> should decouple Submarine from the Hadoop version.
>
> And why do we want to keep it within Hadoop? First, Submarine includes some
> innovative parts, such as user-experience enhancements for YARN
> services/containerization support, which we can add back to Hadoop later
> to address common requirements. In addition, we have a big overlap in the
> community developing and using it.
>
> There are several proposals we went through during the Ozone merge-to-trunk
> discussion:
>
> https://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201803.mbox/%3ccahfhakh6_m3yldf5a2kq8+w-5fbvx5ahfgs-x1vajw8gmnz...@mail.gmail.com%3E
>
> I propose to adopt the Ozone model: the same master branch, a different
> release cycle, and a different release branch. It is a great example of the
> agile releases we can do (2 Ozone releases since Oct 2018) with less
> overhead for setting up CI, projects, etc.
>
> *Links:*
> - JIRA: https://issues.apache.org/jira/browse/YARN-8135
> - Design doc
> <
> https://docs.google.com/document/d/199J4pB3blqgV9SCNvBbTqkEoQdjoyGMjESV4MktCo0k/edit
> >
> - User doc
> <
> https://hadoop.apache.org/docs/r3.2.0/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/Index.html
> >
> (3.2.0
> release)
> - Blogposts, {Submarine} : Running deep learning workloads on Apache Hadoop
> <
> https://hortonworks.com/blog/submarine-running-deep-learning-workloads-apache-hadoop/
> >,
> (Chinese Translation: Link )
> - Talks: Strata Data Conf NY
> <
> https://conferences.oreilly.com/strata/strata-ny-2018/public/schedule/detail/68289
> >
>
> Thoughts?
>
> Thanks,
> Wangda Tan
>


[VOTE] Release Apache Hadoop 3.1.2 - RC1

2019-01-28 Thread Sunil G
Hi Folks,

On behalf of Wangda, we have an RC1 for Apache Hadoop 3.1.2.

The artifacts are available here:
http://home.apache.org/~sunilg/hadoop-3.1.2-RC1/

The RC tag in git is release-3.1.2-RC1:
https://github.com/apache/hadoop/commits/release-3.1.2-RC1

The maven artifacts are available via repository.apache.org at
https://repository.apache.org/content/repositories/orgapachehadoop-1215

This vote will run 5 days from now.

3.1.2 contains 325 [1] fixed JIRA issues since 3.1.1.

We have done testing with a pseudo cluster and distributed shell job.

My +1 to start.

Best,
Wangda Tan and Sunil Govindan

[1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.2)
ORDER BY priority DESC


[ANNOUNCE] Apache Hadoop 3.2.0 release

2019-01-22 Thread Sunil G
Greetings all,

It gives me great pleasure to announce that the Apache Hadoop community has
voted to
release Apache Hadoop 3.2.0.

Apache Hadoop 3.2.0 is the first release in the Apache Hadoop 3.2 line for
2019, and includes 1092 fixes since the previous Hadoop 3.1.0 release.
Of these fixes:
   - 230 in Hadoop Common
   - 344 in HDFS
   - 484 in YARN
   - 34 in MapReduce

Apache Hadoop 3.2.0 contains a number of significant features and
enhancements. A few of them are noted below.

- ABFS filesystem connector: supports the latest Azure Data Lake Storage Gen2.
- Enhanced S3A connector: includes better resilience to throttled AWS S3 and
  DynamoDB IO.
- Node Attributes Support in YARN: tags nodes with multiple labels based on
  their attributes and supports placing containers based on expressions over
  these labels.
- Storage Policy Satisfier: lets HDFS (Hadoop Distributed File System)
  applications move blocks between storage types as storage policies are set
  on files/directories.
- Hadoop Submarine: enables data engineers to easily develop, train, and
  deploy deep learning models (in TensorFlow) on the same Hadoop YARN cluster.
- C++ HDFS client: provides async IO to HDFS, which helps downstream projects
  such as Apache ORC.
- Upgrades for long-running services: supports seamless in-place upgrades of
  long-running containers via the YARN Native Service API and CLI.

* For major changes included in the Hadoop 3.2 line, please refer to the
Hadoop 3.2.0 main page [1].
* For more details about fixes in the 3.2.0 release, please read the
CHANGELOG [2] and RELEASENOTES [3].

The release news is also posted on the Hadoop website; you can go directly to
the downloads section [4].

Many thanks to everyone who contributed to the release, and everyone in the
Apache Hadoop community! This release is a direct result of your great
contributions.
Many thanks to Wangda Tan, Vinod Kumar Vavilapalli, and Marton Elek, who
helped with this release process.

[1] https://hadoop.apache.org/docs/r3.2.0/
[2]
https://hadoop.apache.org/docs/r3.2.0/hadoop-project-dist/hadoop-common/release/3.2.0/CHANGELOG.3.2.0.html
[3]
https://hadoop.apache.org/docs/r3.2.0/hadoop-project-dist/hadoop-common/release/3.2.0/RELEASENOTES.3.2.0.html
[4] https://hadoop.apache.org/releases.html

Many Thanks,
Sunil Govindan


Re: [VOTE] Release Apache Hadoop 3.2.0 - RC1

2019-01-16 Thread Sunil G
Thanks, everyone, for helping to vote on this release!

With 7 binding votes, 10 non-binding votes, and no vetoes, this vote passes.
I'm going to work on staging the release.

Thanks,
Sunil

On Tue, Jan 15, 2019 at 9:59 PM Weiwei Yang  wrote:

> +1 (binding)
>
> - Setup a cluster, run teragen/terasort jobs
> - Verified general readability of documentation (titles/navigations)
> - Run some simple yarn commands: app/applicationattempt/container
> - Checked restful APIs: RM cluster/metrics/scheduler/nodes, NM
> node/apps/container
> - Verified simple failover scenario
> - Submitted distributed shell apps with affinity/anti-affinity constraints
> - Configured conf based node attribute provider, alter attribute values
> and verified the change
> - Verified CLI add/list/remove node-attributes, submitted app with simple
> node-attribute constraint
>
> --
> Weiwei
>


Re: [VOTE] Release Apache Hadoop 3.2.0 - RC1

2019-01-15 Thread Sunil G
Thanks, folks, for voting.

Regarding the point Zoltan raised, I re-ran the build from the same source and
deployed to Nexus to provide the missing source attachments:
https://repository.apache.org/content/repositories/orgapachehadoop-1186/

Please help cross-check the same.

Thanks & Regards
Sunil

On Tue, Jan 15, 2019 at 10:05 AM Wangda Tan  wrote:

> +1 (Binding).
>
> Deployed a local cluster from binary, and ran some sample sanity jobs.
>
> Thanks Sunil for driving the release.
>
> Best,
> Wangda
>
>
> On Mon, Jan 14, 2019 at 11:26 AM Virajith Jalaparti 
> wrote:
>
>> Thanks Sunil and others who have worked on making this release happen!
>>
>> +1 (non-binding)
>>
>> - Built from source
>> - Deployed a pseudo-distributed one node cluster
>> - Ran basic wordcount, sort, pi jobs
>> - Basic HDFS/WebHDFS commands
>> - Ran all the ABFS driver tests against an ADLS Gen 2 account in EAST US
>>
>> Non-blockers (AFAICT): The following tests in ABFS (HADOOP-15407) fail:
>> - For ACLs ({{ITestAzureBlobFilesystemAcl}}) -- However, I believe these
>> have been fixed in trunk.
>> - {{
>> ITestAzureBlobFileSystemE2EScale#testWriteHeavyBytesToFileAcrossThreads}}
>> fails with an OutOfMemoryError exception. I see the same failure on
>> trunk as well.
>>
>>
>> On Mon, Jan 14, 2019 at 6:21 AM Elek, Marton  wrote:
>>
>>> Thanks Sunil to manage this release.
>>>
>>> +1 (non-binding)
>>>
>>> 1. built from the source (with clean local maven repo)
>>> 2. verified signatures + checksum
>>> 3. deployed 3 node cluster to Google Kubernetes Engine with generated
>>> k8s resources [1]
>>> 4. Executed basic HDFS commands
>>> 5. Executed basic yarn example jobs
>>>
>>> Marton
>>>
>>> [1]: FTR: resources:
>>> https://github.com/flokkr/k8s/tree/master/examples/hadoop , generator:
>>> https://github.com/elek/flekszible
>>>
>>>
>>> On 1/8/19 12:42 PM, Sunil G wrote:
>>> > Hi folks,
>>> >
>>> >
>>> > Thanks to all of you who helped in this release [1] and for helping to
>>> vote
>>> > for RC0. I have created second release candidate (RC1) for Apache
>>> Hadoop
>>> > 3.2.0.
>>> >
>>> >
>>> > Artifacts for this RC are available here:
>>> >
>>> > http://home.apache.org/~sunilg/hadoop-3.2.0-RC1/
>>> >
>>> >
>>> > RC tag in git is release-3.2.0-RC1.
>>> >
>>> >
>>> >
>>> > The maven artifacts are available via repository.apache.org at
>>> >
>>> https://repository.apache.org/content/repositories/orgapachehadoop-1178/
>>> >
>>> >
>>> > This vote will run 7 days (5 weekdays), ending on 14th Jan at 11:59 pm
>>> PST.
>>> >
>>> >
>>> >
>>> > 3.2.0 contains 1092 [2] fixed JIRA issues since 3.1.0. Below feature
>>> > additions
>>> >
>>> > are the highlights of this release.
>>> >
>>> > 1. Node Attributes Support in YARN
>>> >
>>> > 2. Hadoop Submarine project for running Deep Learning workloads on YARN
>>> >
>>> > 3. Support service upgrade via YARN Service API and CLI
>>> >
>>> > 4. HDFS Storage Policy Satisfier
>>> >
>>> > 5. Support Windows Azure Storage - Blob file system in Hadoop
>>> >
>>> > 6. Phase 3 improvements for S3Guard and Phase 5 improvements S3a
>>> >
>>> > 7. Improvements in Router-based HDFS federation
>>> >
>>> >
>>> >
>>> > Thanks to Wangda, Vinod, Marton for helping me in preparing the
>>> release.
>>> >
>>> > I have done some testing with my pseudo cluster. My +1 to start.
>>> >
>>> >
>>> >
>>> > Regards,
>>> >
>>> > Sunil
>>> >
>>> >
>>> >
>>> > [1]
>>> >
>>> >
>>> https://lists.apache.org/thread.html/68c1745dcb65602aecce6f7e6b7f0af3d974b1bf0048e7823e58b06f@%3Cyarn-dev.hadoop.apache.org%3E
>>> >
>>> > [2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in
>>> (3.2.0)
>>> > AND fixVersion not in (3.1.0, 3.0.0, 3.0.0-beta1) AND status = Resolved
>>> > ORDER BY fixVersion ASC
>>> >
>>>
>>> -
>>> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
>>> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>>>
>>>


Re: [VOTE] Release Apache Hadoop 3.2.0 - RC1

2019-01-12 Thread Sunil G
Thanks Weiwei and Zoltan,

Regarding the content issue in the documentation: all of the latest features
are linked in the left panel. It seems that when new features went in, they
were not added to the summary. I will wait for the voting to surface any
other issues and make a call at the end.

@Zoltan, thanks for bringing this issue to my attention. I think if I deploy
with the new command Marton mentioned, we will not have this problem. As for
Nexus, we are not doing any checksum validation there. @Steve Loughran,
@Wangda Tan, @Vinod Kumar Vavilapalli: could you please advise whether I can
re-run the Nexus deployment and create a new link without creating another
RC?

Thanks
Sunil
On Fri, 11 Jan 2019 at 8:10 AM, Gabor Bota  wrote:

>   Thanks for the work Sunil!
>
>   +1 (non-binding)
>
>   checked out git tag release-3.2.0-RC1.
>   hadoop-aws integration (mvn verify) test run was successful on eu-west-1
> (a known issue is there, it's fixed in trunk)
>   built from source on Mac OS X 10.14.2, java version 8.0.181-oracle
>   deployed on a 3 node cluster
>   verified pi job, teragen, terasort and teravalidate
>
>   Regards,
>   Gabor Bota
>
> On Fri, Jan 11, 2019 at 1:11 PM Zoltan Haindrich  wrote:
>
>> Hello,
>>
>> I would like to note that the 3.2.0-RC1 release seems to miss some source
>> attachments (as all recent releases have). David Phillips just commented
>> on that jira yesterday, and I've just noticed that a release vote is
>> already going on, so I think now is the best time to talk about this -
>> because https://issues.apache.org/jira/browse/HADOOP-15205
>> has been open now for almost a year.
>>
>> This might be just a documentation related issue; but then the
>> HowToRelease doc should be updated.
>> Steve Loughran was able to publish the artifacts in question for 2.7.7 -
>> but releases before and after that are missing these source attachements.
>>
>> People working on downstream projects (or at least me) may find it harder
>> to work with Hadoop packages because of the missing source attachments.
>>
>> An example artifact that is missing the sources:
>>
>> https://repository.apache.org/content/repositories/orgapachehadoop-1178/org/apache/hadoop/hadoop-mapreduce-client-core/3.2.0/
>>
>> cheers,
>> Zoltan
>>
>> On 1/8/19 12:42 PM, Sunil G wrote:
>> > Hi folks,
>> >
>> >
>> > Thanks to all of you who helped in this release [1] and for helping to
>> vote
>> > for RC0. I have created second release candidate (RC1) for Apache Hadoop
>> > 3.2.0.
>> >
>> >
>> > Artifacts for this RC are available here:
>> >
>> > http://home.apache.org/~sunilg/hadoop-3.2.0-RC1/
>> >
>> >
>> > RC tag in git is release-3.2.0-RC1.
>> >
>> >
>> >
>> > The maven artifacts are available via repository.apache.org at
>> >
>> https://repository.apache.org/content/repositories/orgapachehadoop-1178/
>> >
>> >
>> > This vote will run 7 days (5 weekdays), ending on 14th Jan at 11:59 pm
>> PST.
>> >
>> >
>> >
>> > 3.2.0 contains 1092 [2] fixed JIRA issues since 3.1.0. Below feature
>> > additions
>> >
>> > are the highlights of this release.
>> >
>> > 1. Node Attributes Support in YARN
>> >
>> > 2. Hadoop Submarine project for running Deep Learning workloads on YARN
>> >
>> > 3. Support service upgrade via YARN Service API and CLI
>> >
>> > 4. HDFS Storage Policy Satisfier
>> >
>> > 5. Support Windows Azure Storage - Blob file system in Hadoop
>> >
>> > 6. Phase 3 improvements for S3Guard and Phase 5 improvements S3a
>> >
>> > 7. Improvements in Router-based HDFS federation
>> >
>> >
>> >
>> > Thanks to Wangda, Vinod, Marton for helping me in preparing the release.
>> >
>> > I have done some testing with my pseudo cluster. My +1 to start.
>> >
>> >
>> >
>> > Regards,
>> >
>> > Sunil
>> >
>> >
>> >
>> > [1]
>> >
>> >
>> https://lists.apache.org/thread.html/68c1745dcb65602aecce6f7e6b7f0af3d974b1bf0048e7823e58b06f@%3Cyarn-dev.hadoop.apache.org%3E
>> >
>> > [2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.2.0)
>> > AND fixVersion not in (3.1.0, 3.0.0, 3.0.0-beta1) AND status = Resolved
>> > ORDER BY fixVersion ASC
>> >
>>
>> -
>> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>>
>>


[VOTE] Release Apache Hadoop 3.2.0 - RC1

2019-01-08 Thread Sunil G
Hi folks,


Thanks to all of you who helped with this release [1] and voted on RC0. I
have created the second release candidate (RC1) for Apache Hadoop 3.2.0.


Artifacts for this RC are available here:

http://home.apache.org/~sunilg/hadoop-3.2.0-RC1/


RC tag in git is release-3.2.0-RC1.



The maven artifacts are available via repository.apache.org at
https://repository.apache.org/content/repositories/orgapachehadoop-1178/


This vote will run 7 days (5 weekdays), ending on 14th Jan at 11:59 pm PST.



3.2.0 contains 1092 [2] fixed JIRA issues since 3.1.0. The feature additions
below are the highlights of this release.

1. Node Attributes Support in YARN

2. Hadoop Submarine project for running Deep Learning workloads on YARN

3. Support service upgrade via YARN Service API and CLI

4. HDFS Storage Policy Satisfier

5. Support Windows Azure Storage - Blob file system in Hadoop

6. Phase 3 improvements for S3Guard and Phase 5 improvements S3a

7. Improvements in Router-based HDFS federation



Thanks to Wangda, Vinod, Marton for helping me in preparing the release.

I have done some testing with my pseudo cluster. My +1 to start.



Regards,

Sunil



[1]

https://lists.apache.org/thread.html/68c1745dcb65602aecce6f7e6b7f0af3d974b1bf0048e7823e58b06f@%3Cyarn-dev.hadoop.apache.org%3E

[2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.2.0)
AND fixVersion not in (3.1.0, 3.0.0, 3.0.0-beta1) AND status = Resolved
ORDER BY fixVersion ASC


Re: Hadoop 3.2 Release Plan proposal

2019-01-06 Thread Sunil G
Hi Folks,

Given that all blockers are now closed, I am planning to cut RC1 tomorrow.
branch-3.2.0 remains closed for commits. Please let me know if any issues
were missed.

Thanks,
Sunil G

On Tue, Nov 13, 2018 at 8:11 PM Sunil G  wrote:

> Hi Folks,
>
> All blockers were closed by last weekend, and the JIRAs were corrected as
> well. Preparing RC0 now. We hit an issue with the shaded jar file sizes,
> hence respinning.
> Planning to complete this by the end of this week.
>
> Thanks
> Sunil
>
> On Fri, Oct 26, 2018 at 2:34 AM Konstantin Shvachko 
> wrote:
>
>> Another thing is that I see a bunch of jiras under HDFS-8707, which don't
>> have the Fix Version field listing 3.2, some have it just empty.
>> This means they will not be populated into release notes.
>>
>> Thanks,
>> --Konstantin
>>
>> On Thu, Oct 25, 2018 at 7:59 AM Sunil G  wrote:
>>
>> > Thanks Konstantin for pointing out.
>> > As 3.2 is pretty much at the RC stage, it's better that we try to find a
>> > good solution to this issue.
>> >
>> > I ll follow up on this in the jira.
>> >
>> > - Sunil
>> >
>> > On Thu, Oct 25, 2018 at 11:35 AM Konstantin Shvachko <
>> shv.had...@gmail.com>
>> > wrote:
>> >
>> >> I've tried to attract attention to an incompatibility issue through the
>> >> jira, but it didn't work. So pitching in in this thread.
>> >> https://issues.apache.org/jira/browse/HDFS-12026
>> >> It introduced binary incompatibility, which will prevent people from
>> >> upgrading from 3.1 to 3.2.
>> >> I think it can get messy if we release anything with this feature.
>> >>
>> >> Thanks,
>> >> --Konstantin
>> >>
>> >>> On Mon, Oct 22, 2018 at 5:01 AM Steve Loughran wrote:
>> >>
>> >>> its in.
>> >>>
>> >>> good catch!
>> >>>
>> >>> On 20 Oct 2018, at 01:35, Wei-Chiu Chuang <weic...@cloudera.com> wrote:
>> >>>
>> >>> Thanks Sunil G for driving the release,
>> >>> I filed HADOOP-15866<
>> https://issues.apache.org/jira/browse/HADOOP-15866>
>> >>> for a compat fix. If any one has cycle please review it, as I think
>> it is
>> >>> needed for 3.2.0.
>> >>>
>> >>> On Thu, Oct 18, 2018 at 4:43 AM Sunil G <sun...@apache.org> wrote:
>> >>> Hi Folks,
>> >>>
>> >>> As we previously communicated for the 3.2.0 release, we have been
>> >>> delayed due to a few blockers in our gate.
>> >>>
>> >>> I just cut branch-3.2.0 for release purpose. branch-3.2 will be open
>> for
>> >>> all bug fixes.
>> >>>
>> >>> - Sunil
>> >>>
>> >>>
>> >>> On Tue, Oct 16, 2018 at 8:59 AM Sunil G <sun...@apache.org> wrote:
>> >>>
>> >>> > Hi Folks,
>> >>> >
>> >>> > We are now close to RC as other blocker issues are now merged to
>> trunk
>> >>> and
>> >>> > branch-3.2. Last 2 critical issues are closer to merge and will be
>> >>> > committed in few hours.
>> >>> > With this, I will be creating 3.2.0 branch today and will go ahead
>> >>> with RC
>> >>> > related process.
>> >>> >
>> >>> > - Sunil
>> >>> >
>> >>> > On Mon, Oct 15, 2018 at 11:43 PM Jonathan Bender <jonben...@stripe.com> wrote:
>> >>> >
>> >>> >> Hello, were there any updates around the 3.2.0 RC timing? All I
>> see in
>> >>> >> the current blockers are related to the new Submarine subproject,
>> >>> wasn't
>> >>> >> sure if that is what is holding things up.
>> >>> >>
>> >>> >> Cheers,
>> >>> >> Jon
>> >>> >>
>> >>> >> On Tue, Oct 2, 2018 at 7:13 PM, Sunil G <sun...@apache.org> wrote:
>> >>> >>
>> >>> >>> Thanks Robert and Haibo for quickly correcting same.
>> >>> >>> Sigh, I somehow missed one file while committing the change. Sorry
>> >>> for

Re: [VOTE] Release Apache Hadoop 3.2.0 - RC0

2018-11-29 Thread Sunil G
Thanks, @Eric Payne.
Due to another issue, we have to spin an RC1. In that case, could we revert
the patch that caused this problem?

Cancelling this RC0. Thanks to everyone who voted.
I will spin RC1 as soon as possible.

- Sunil
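Eric's report below concerns a yarn-site.xml that pulls scheduler settings from a second file via XInclude. A minimal sketch of that layout is shown here; file names and the property choice are illustrative, taken from the report, and the refresh command appears only as a comment since it needs a live ResourceManager:

```shell
set -e
conf=$(mktemp -d)

# Scheduler settings kept in a separate file...
cat > "$conf/yarn-scheduler.xml" <<'EOF'
<configuration>
  <property>
    <name>yarn.resourcemanager.scheduler.monitor.enable</name>
    <value>true</value>
  </property>
</configuration>
EOF

# ...and included from yarn-site.xml via XInclude:
cat > "$conf/yarn-site.xml" <<EOF
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <xi:include href="$conf/yarn-scheduler.xml"/>
</configuration>
EOF

grep -q 'xi:include' "$conf/yarn-site.xml" && echo "include wired"

# Per the report: the included file is read when the RM starts, but is
# apparently not re-read on
#   yarn rmadmin -refreshQueues
# so properties kept in the included file fall back to their defaults.
```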


On Fri, Nov 30, 2018 at 4:14 AM Eric Payne 
wrote:

> The problem is not with preemption. The yarn-site.xml that I use for my
> pseudo-cluster includes a second xml:
> xi:include href=".../yarn-scheduler.xml"
>
> The property for yarn.resourcemanager.scheduler.monitor.enable = true is
> in this yarn-scheduler.xml.
>
> This value IS READ when then RM starts.
>
> However, when the refreshQueues command is run, this value IS NOT READ.
>
> So, it looks like xml include files are not read on refresh. This will
> affect any property. I just happened to notice it on the preemption
> properties.
>
> I would like input from all of you to determine if this is a blocker for
> release. I'm on the fence.
>
> Thanks,
> -Eric
>
>
>
>
>
>
> On Wednesday, November 28, 2018, 4:58:50 PM CST, Eric Payne <
> erichadoo...@yahoo.com.INVALID> wrote:
>
>
>
>
>
> Sunil,
>
> So, the basic symptoms are that if preemption is enabled on any queue, the
> preemption is disabled after a 'yarn rmadmin -refreshQueues'. In addition, all
> of the preemption-specific properties are set back to the default values.
>
> This was introduced in branch-3.1, so it is NOT new behavior for release
> 3.2.0. I am still tracking down the cause. I will open a JIRA once I do
> further investigation if there is not one already.
>
> This will be a problem for installations which use preemption and which
> use the refreshQueues feature.
>
> Thanks,
> -Eric
>
>
> On Wednesday, November 28, 2018, 11:47:06 AM CST, Eric Payne <
> eric.payne1...@yahoo.com> wrote:
>
>
>
>
>
> Sunil, thanks for all of the hard work on this release.
>
> I have discovered that queue refresh doesn't work in some cases. For
> example, when I change
> yarn.scheduler.capacity.root.default.disable_preemption, it doesn't take
> effect unless I restart the RM.
>
> I am still investigating, but I thought I should bring this up asap.
>
> Thanks,
> -Eric
>
>
>
>
> On Friday, November 23, 2018, 6:07:04 AM CST, Sunil G 
> wrote:
>
>
>
>
>
> Hi folks,
>
>
>
> Thanks to all contributors who helped in this release [1]. I have created
>
> first release candidate (RC0) for Apache Hadoop 3.2.0.
>
>
> Artifacts for this RC are available here:
>
> http://home.apache.org/~sunilg/hadoop-3.2.0-RC0/
>
>
>
> RC tag in git is release-3.2.0-RC0.
>
>
>
> The maven artifacts are available via repository.apache.org at
>
> https://repository.apache.org/content/repositories/orgapachehadoop-1174/
>
>
> This vote will run 7 days (5 weekdays), ending on Nov 30 at 11:59 pm PST.
>
>
>
> 3.2.0 contains 1079 [2] fixed JIRA issues since 3.1.0. Below feature
> additions
>
> are the highlights of this release.
>
> 1. Node Attributes Support in YARN
>
> 2. Hadoop Submarine project for running Deep Learning workloads on YARN
>
> 3. Support service upgrade via YARN Service API and CLI
>
> 4. HDFS Storage Policy Satisfier
>
> 5. Support Windows Azure Storage - Blob file system in Hadoop
>
> 6. Phase 3 improvements for S3Guard and Phase 5 improvements S3a
>
> 7. Improvements in Router-based HDFS federation
>
>
>
> Thanks to Wangda, Vinod, Marton for helping me in preparing the release.
>
> I have done some testing with my pseudo cluster. My +1 to start.
>
>
>
> Regards,
>
> Sunil
>
>
>
> [1]
>
>
> https://lists.apache.org/thread.html/68c1745dcb65602aecce6f7e6b7f0af3d974b1bf0048e7823e58b06f@%3Cyarn-dev.hadoop.apache.org%3E
>
> [2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.2.0)
> AND fixVersion not in (3.1.0, 3.0.0, 3.0.0-beta1) AND status = Resolved
> ORDER BY fixVersion ASC
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] Release Apache Hadoop 3.2.0 - RC0

2018-11-28 Thread Sunil G
Hi Eric,

Thanks for helping to verify the release.

Post YARN-7370, preemption configs are refreshable. I tried to test this by
making some changes in capacity-scheduler.xml and invoking yarn rmadmin
-refreshQueues.
I can see the changes reflected in the logs after the refresh. Could you
please share some more scenarios so that I can try to reproduce the issue?
Meanwhile, I will try some other combinations as well and let you know.


reservationsContinueLooking = true
*preemptionDisabled = true*
defaultAppPriorityPerQueue = 0
priority = 0
maxLifetime = -1 seconds
defaultLifetime = -1 seconds
2018-11-29 06:25:53,792 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger:
USER=sunilgovindan IP=127.0.0.1 OPERATION=refreshQueues TARGET=AdminService
RESULT=SUCCESS
2018-11-29 06:25:55,900 INFO
org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy:
Capacity Scheduler configuration changed, updated preemption properties to:
max_ignored_over_capacity = 0.1
natural_termination_factor = 0.2
max_wait_before_kill = 15000
monitoring_interval = 3000
*total_preemption_per_round = 0.4*
observe_only = false
lazy-preemption-enabled = false
*intra-queue-preemption.enabled = false*
*intra-queue-preemption.max-allowable-limit = 0.4*
intra-queue-preemption.minimum-threshold = 0.5
intra-queue-preemption.preemption-order-policy = USERLIMIT_FIRST
priority-utilization.underutilized-preemption.enabled = false
select_based_on_reserved_containers = false
additional_res_balance_based_on_reserved_containers = false
Preemption-to-balance-queue-enabled = false

*Now I disabled preemption for the default queue and made some changes to the
intra-queue preemption params.*

reservationsContinueLooking = true
*preemptionDisabled = false*
defaultAppPriorityPerQueue = 0
priority = 0
maxLifetime = -1 seconds
defaultLifetime = -1 seconds
2018-11-29 06:29:32,620 INFO
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger:
USER=sunilgovindan IP=127.0.0.1 OPERATION=refreshQueues TARGET=AdminService
RESULT=SUCCESS
2018-11-29 06:29:34,893 INFO
org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy:
Capacity Scheduler configuration changed, updated preemption properties to:
max_ignored_over_capacity = 0.1
natural_termination_factor = 0.2
max_wait_before_kill = 15000
monitoring_interval = 3000
*total_preemption_per_round = 0.7*
observe_only = false
lazy-preemption-enabled = false
*intra-queue-preemption.enabled = true*
*intra-queue-preemption.max-allowable-limit = 0.5*
intra-queue-preemption.minimum-threshold = 0.5
intra-queue-preemption.preemption-order-policy = USERLIMIT_FIRST
priority-utilization.underutilized-preemption.enabled = false
select_based_on_reserved_containers = false
additional_res_balance_based_on_reserved_containers = false
Preemption-to-balance-queue-enabled = false
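The toggle-and-refresh cycle described above can be sketched as follows. Paths and the specific property edit are illustrative, and the refresh and log check appear only as comments since they need a live cluster:

```shell
set -e
conf=$(mktemp -d)

# Start with preemption disabled for root.default:
cat > "$conf/capacity-scheduler.xml" <<'EOF'
<configuration>
  <property>
    <name>yarn.scheduler.capacity.root.default.disable_preemption</name>
    <value>true</value>
  </property>
</configuration>
EOF

# Flip the flag, as one would before refreshing the queues:
sed 's#<value>true</value>#<value>false</value>#' \
    "$conf/capacity-scheduler.xml" > "$conf/tmp.xml" \
  && mv "$conf/tmp.xml" "$conf/capacity-scheduler.xml"

grep '<value>' "$conf/capacity-scheduler.xml"

# On a live cluster (post YARN-7370) the change is picked up with:
#   yarn rmadmin -refreshQueues
# and confirmed via the ProportionalCapacityPreemptionPolicy lines in the
# RM log, as in the output quoted above.
```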

On Thu, Nov 29, 2018 at 4:19 AM Eric Payne 
wrote:

> Sunil,
>
> So, the basic symptoms are that if preemption is enabled on any queue, the
> preemption is disabled after a 'yarn rmadmin -refreshQueues'. In addition, all
> of the preemption-specific properties are set back to the default values.
>
> This was introduced in branch-3.1, so it is NOT new behavior for release
> 3.2.0. I am still tracking down the cause. I will open a JIRA once I do
> further investigation if there is not one already.
>
> This will be a problem for installations which use preemption and which
> use the refreshQueues feature.
>
> Thanks,
> -Eric
>
>
> On Wednesday, November 28, 2018, 11:47:06 AM CST, Eric Payne <
> eric.payne1...@yahoo.com> wrote:
>
>
>
>
>
> Sunil, thanks for all of the hard work on this release.
>
> I have discovered that queue refresh doesn't work in some cases. For
> example, when I change
> yarn.scheduler.capacity.root.default.disable_preemption, it doesn't take
> effect unless I restart the RM.
>
> I am still investigating, but I thought I should bring this up asap.
>
> Thanks,
> -Eric
>
>
>
>
> On Friday, November 23, 2018, 6:07:04 AM CST, Sunil G 
> wrote:
>
>
>
>
>
> Hi folks,
>
>
>
> Thanks to all contributors who helped in this release [1]. I have created
>
> first release candidate (RC0) for Apache Hadoop 3.2.0.
>
>
> Artifacts for this RC are available here:
>
> http://home.apache.org/~sunilg/hadoop-3.2.0-RC0/
>
>
>
> RC tag in git is release-3.2.0-RC0.
>
>
>
> The maven artifacts are available via repository.apache.org at
>
> https://repository.apache.org/content/repositories/orgapachehadoop-1174/
>
>
> This vote will run 7 days (5 weekdays), ending on Nov 30 at 11:59 pm PST.
>
>
>
> 3.2.0 contains 1079 [2] fixed JIRA issues since 3.1.0. Below feature
> additions
>
> are the highlights of this release.
>
> 1. Node Attributes Support in YARN

Re: Hadoop 3.2 Release Plan proposal

2018-11-13 Thread Sunil G
Hi Folks,

All blockers were closed by last weekend, and the JIRAs were corrected as
well. Preparing RC0 now. We hit an issue with the shaded jar file sizes,
hence respinning.
Planning to complete this by the end of this week.

Thanks
Sunil

On Fri, Oct 26, 2018 at 2:34 AM Konstantin Shvachko 
wrote:

> Another thing is that I see a bunch of jiras under HDFS-8707, which don't
> have the Fix Version field listing 3.2, some have it just empty.
> This means they will not be populated into release notes.
>
> Thanks,
> --Konstantin
>
> On Thu, Oct 25, 2018 at 7:59 AM Sunil G  wrote:
>
> > Thanks Konstantin for pointing out.
> > As 3.2 is pretty much on RC level, its better we try to find a good
> > solution to this issue.
> >
> > I ll follow up on this in the jira.
> >
> > - Sunil
> >
> > On Thu, Oct 25, 2018 at 11:35 AM Konstantin Shvachko <
> shv.had...@gmail.com>
> > wrote:
> >
> >> I've tried to attract attention to an incompatibility issue through the
> >> jira, but it didn't work. So pitching in in this thread.
> >> https://issues.apache.org/jira/browse/HDFS-12026
> >> It introduced binary incompatibility, which will prevent people from
> >> upgrading from 3.1 to 3.2.
> >> I think it can get messy if we release anything with this feature.
> >>
> >> Thanks,
> >> --Konstantin
> >>
> >> On Mon, Oct 22, 2018 at 5:01 AM Steve Loughran 
> >> wrote:
> >>
> >>> its in.
> >>>
> >>> good catch!
> >>>
> >>> On 20 Oct 2018, at 01:35, Wei-Chiu Chuang <weic...@cloudera.com> wrote:
> >>>
> >>> Thanks Sunil G for driving the release,
> >>> I filed HADOOP-15866<
> https://issues.apache.org/jira/browse/HADOOP-15866>
> >>> for a compat fix. If any one has cycle please review it, as I think it
> is
> >>> needed for 3.2.0.
> >>>
> >>> On Thu, Oct 18, 2018 at 4:43 AM Sunil G <sun...@apache.org> wrote:
> >>> Hi Folks,
> >>>
> >>> As we previously communicated for 3.2.0 release, we have delayed due to
> >>> few
> >>> blockers in our gate.
> >>>
> >>> I just cut branch-3.2.0 for release purpose. branch-3.2 will be open
> for
> >>> all bug fixes.
> >>>
> >>> - Sunil
> >>>
> >>>
> >>> On Tue, Oct 16, 2018 at 8:59 AM Sunil G <sun...@apache.org> wrote:
> >>>
> >>> > Hi Folks,
> >>> >
> >>> > We are now close to RC as other blocker issues are now merged to
> trunk
> >>> and
> >>> > branch-3.2. Last 2 critical issues are closer to merge and will be
> >>> > committed in few hours.
> >>> > With this, I will be creating 3.2.0 branch today and will go ahead
> >>> with RC
> >>> > related process.
> >>> >
> >>> > - Sunil
> >>> >
> >>> > On Mon, Oct 15, 2018 at 11:43 PM Jonathan Bender <jonben...@stripe.com> wrote:
> >>> >
> >>> >> Hello, were there any updates around the 3.2.0 RC timing? All I see
> in
> >>> >> the current blockers are related to the new Submarine subproject,
> >>> wasn't
> >>> >> sure if that is what is holding things up.
> >>> >>
> >>> >> Cheers,
> >>> >> Jon
> >>> >>
> >>> >> On Tue, Oct 2, 2018 at 7:13 PM, Sunil G <sun...@apache.org> wrote:
> >>> >>
> >>> >>> Thanks Robert and Haibo for quickly correcting same.
> >>> >>> Sigh, I somehow missed one file while committing the change. Sorry
> >>> for
> >>> >>> the
> >>> >>> trouble.
> >>> >>>
> >>> >>> - Sunil
> >>> >>>
> >>> >>> On Wed, Oct 3, 2018 at 5:22 AM Robert Kanter <rkan...@cloudera.com> wrote:
> >>> >>>
> >>> >>> > Looks like there's two that weren't updated:
> >>> >>> > >> [115] 16:32 : hadoop-common (trunk) :: grep "3.2.0-SNAPSHOT" .
> >>> -r
> >>> >>> > --include=pom.xml
> >>> >>> > ./hadoop-pr

3.2.0 branch is closed for commits

2018-11-05 Thread Sunil G
Hi All,

All blockers for 3.2.0 are now closed and the RC is being prepared, so the
3.2.0 branch is closed for commits.
Please commit to branch-3.2 instead and set the fix version to 3.2.1.

Thanks,
Sunil
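The RC preparation mentioned above includes publishing a checksum file alongside each artifact, which voters then verify after downloading. A self-contained sketch of that verification step (the file names below are fabricated stand-ins for the real tarballs):

```sh
#!/bin/sh
set -e
# Work in a scratch directory with a stand-in artifact; for a real RC you
# would instead download the tarball and its .sha512 file from the staging area.
tmp=$(mktemp -d)
cd "$tmp"
printf 'release bits\n' > hadoop-3.2.0-demo.tar.gz
sha512sum hadoop-3.2.0-demo.tar.gz > hadoop-3.2.0-demo.tar.gz.sha512
# The verification step a voter runs; prints "hadoop-3.2.0-demo.tar.gz: OK".
sha512sum -c hadoop-3.2.0-demo.tar.gz.sha512
```

Signatures are checked similarly with `gpg --verify <tarball>.asc <tarball>` after importing the release manager's key from the project's KEYS file.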


Re: Hadoop 3.2 Release Plan proposal

2018-10-25 Thread Sunil G
Thanks, Konstantin, for pointing this out.
As 3.2 is essentially at the RC stage, we should try to find a good solution
to this issue.

I'll follow up on this in the jira.

- Sunil

On Thu, Oct 25, 2018 at 11:35 AM Konstantin Shvachko 
wrote:

> I've tried to attract attention to an incompatibility issue through the
> jira, but it didn't work. So pitching in in this thread.
> https://issues.apache.org/jira/browse/HDFS-12026
> It introduced binary incompatibility, which will prevent people from
> upgrading from 3.1 to 3.2.
> I think it can get messy if we release anything with this feature.
>
> Thanks,
> --Konstantin
>
> On Mon, Oct 22, 2018 at 5:01 AM Steve Loughran 
> wrote:
>
>> its in.
>>
>> good catch!
>>
>> On 20 Oct 2018, at 01:35, Wei-Chiu Chuang <weic...@cloudera.com> wrote:
>>
>> Thanks Sunil G for driving the release,
>> I filed HADOOP-15866<https://issues.apache.org/jira/browse/HADOOP-15866>
>> for a compat fix. If any one has cycle please review it, as I think it is
>> needed for 3.2.0.
>>
>> On Thu, Oct 18, 2018 at 4:43 AM Sunil G <sun...@apache.org> wrote:
>> Hi Folks,
>>
>> As we previously communicated for 3.2.0 release, we have delayed due to
>> few
>> blockers in our gate.
>>
>> I just cut branch-3.2.0 for release purpose. branch-3.2 will be open for
>> all bug fixes.
>>
>> - Sunil
>>
>>
>> On Tue, Oct 16, 2018 at 8:59 AM Sunil G <sun...@apache.org> wrote:
>>
>> > Hi Folks,
>> >
>> > We are now close to RC as other blocker issues are now merged to trunk
>> and
>> > branch-3.2. Last 2 critical issues are closer to merge and will be
>> > committed in few hours.
>> > With this, I will be creating 3.2.0 branch today and will go ahead with
>> RC
>> > related process.
>> >
>> > - Sunil
>> >
>> > On Mon, Oct 15, 2018 at 11:43 PM Jonathan Bender <jonben...@stripe.com> wrote:
>> >
>> >> Hello, were there any updates around the 3.2.0 RC timing? All I see in
>> >> the current blockers are related to the new Submarine subproject,
>> wasn't
>> >> sure if that is what is holding things up.
>> >>
>> >> Cheers,
>> >> Jon
>> >>
>> >> On Tue, Oct 2, 2018 at 7:13 PM, Sunil G <sun...@apache.org> wrote:
>> >>
>> >>> Thanks Robert and Haibo for quickly correcting same.
>> >>> Sigh, I somehow missed one file while committing the change. Sorry for
>> >>> the
>> >>> trouble.
>> >>>
>> >>> - Sunil
>> >>>
>> >>> On Wed, Oct 3, 2018 at 5:22 AM Robert Kanter <rkan...@cloudera.com> wrote:
>> >>>
>> >>> > Looks like there's two that weren't updated:
>> >>> > >> [115] 16:32 : hadoop-common (trunk) :: grep "3.2.0-SNAPSHOT" . -r
>> >>> > --include=pom.xml
>> >>> > ./hadoop-project/pom.xml:
>> >>> >
>> 3.2.0-SNAPSHOT
>> >>> > ./pom.xml:3.2.0-SNAPSHOT
>> >>> >
>> >>> > I've just pushed in an addendum commit to fix those.
>> >>> > In the future, please make sure to do a sanity compile when updating
>> >>> poms.
>> >>> >
>> >>> > thanks
>> >>> > - Robert
>> >>> >
>> >>> > On Tue, Oct 2, 2018 at 11:44 AM Aaron Fabbri <fab...@cloudera.com.invalid> wrote:
>> >>> >
>> >>> >> Trunk is not building for me.. Did you miss a 3.2.0-SNAPSHOT in the
>> >>> >> top-level pom.xml?
>> >>> >>
>> >>> >>
>> >>> >> On Tue, Oct 2, 2018 at 10:16 AM Sunil G <sun...@apache.org> wrote:
>> >>> >>
>> >>> >> > Hi All
>> >>> >> >
>> >>> >> > As mentioned in earlier mail, I have cut branch-3.2 and reset
>> trunk
>> >>> to
>> >>> >> > 3.3.0-SNAPSHOT. I will share the RC details sooner once all
>> >>> necessary
>> >>> >> > patches are pulled into branch-3.2.
>> >>> >> >
>> >>> >> > Thank You
>> >>> >> > - Sunil
>> >>> >> >
>>

Re: Hadoop 3.2 Release Plan proposal

2018-10-15 Thread Sunil G
Hi Folks,

We are now close to an RC, as the other blocker issues have been merged to
trunk and branch-3.2. The last 2 critical issues are close to merge and will
be committed in a few hours.
With this, I will create the 3.2.0 branch today and go ahead with the RC
process.

- Sunil

On Mon, Oct 15, 2018 at 11:43 PM Jonathan Bender 
wrote:

> Hello, were there any updates around the 3.2.0 RC timing? All I see in the
> current blockers are related to the new Submarine subproject, wasn't sure
> if that is what is holding things up.
>
> Cheers,
> Jon
>
> On Tue, Oct 2, 2018 at 7:13 PM, Sunil G  wrote:
>
>> Thanks Robert and Haibo for quickly correcting same.
>> Sigh, I somehow missed one file while committing the change. Sorry for the
>> trouble.
>>
>> - Sunil
>>
>> On Wed, Oct 3, 2018 at 5:22 AM Robert Kanter 
>> wrote:
>>
>> > Looks like there's two that weren't updated:
>> > >> [115] 16:32 : hadoop-common (trunk) :: grep "3.2.0-SNAPSHOT" . -r
>> > --include=pom.xml
>> > ./hadoop-project/pom.xml:
>> > 3.2.0-SNAPSHOT
>> > ./pom.xml:3.2.0-SNAPSHOT
>> >
>> > I've just pushed in an addendum commit to fix those.
>> > In the future, please make sure to do a sanity compile when updating
>> poms.
>> >
>> > thanks
>> > - Robert
>> >
>> > On Tue, Oct 2, 2018 at 11:44 AM Aaron Fabbri
>> 
>> > wrote:
>> >
>> >> Trunk is not building for me.. Did you miss a 3.2.0-SNAPSHOT in the
>> >> top-level pom.xml?
>> >>
>> >>
>> >> On Tue, Oct 2, 2018 at 10:16 AM Sunil G  wrote:
>> >>
>> >> > Hi All
>> >> >
>> >> > As mentioned in earlier mail, I have cut branch-3.2 and reset trunk
>> to
>> >> > 3.3.0-SNAPSHOT. I will share the RC details sooner once all necessary
>> >> > patches are pulled into branch-3.2.
>> >> >
>> >> > Thank You
>> >> > - Sunil
>> >> >
>> >> >
>> >> > On Mon, Sep 24, 2018 at 2:00 PM Sunil G  wrote:
>> >> >
>> >> > > Hi All
>> >> > >
>> >> > > We are now down to the last Blocker and HADOOP-15407 is merged to
>> >> trunk.
>> >> > > Thanks for the support.
>> >> > >
>> >> > > *Plan for RC*
>> >> > > 3.2 branch cut and reset trunk : *25th Tuesday*
>> >> > > RC0 for 3.2: *28th Friday*
>> >> > >
>> >> > > Thank You
>> >> > > Sunil
>> >> > >
>> >> > >
>> >> > > On Mon, Sep 17, 2018 at 3:21 PM Sunil G  wrote:
>> >> > >
>> >> > >> Hi All
>> >> > >>
>> >> > >> We are down to 3 Blockers and 4 Critical now. Thanks all of you
>> for
>> >> > >> helping in this. I am following up on these tickets, once its
>> closed
>> >> we
>> >> > >> will cut the 3.2 branch.
>> >> > >>
>> >> > >> Thanks
>> >> > >> Sunil Govindan
>> >> > >>
>> >> > >>
>> >> > >> On Wed, Sep 12, 2018 at 5:10 PM Sunil G 
>> wrote:
>> >> > >>
>> >> > >>> Hi All,
>> >> > >>>
>> >> > >>> Inline with the original 3.2 communication proposal dated 17th
>> July
>> >> > >>> 2018, I would like to provide more updates.
>> >> > >>>
>> >> > >>> We are approaching previously proposed code freeze date
>> (September
>> >> 14,
>> >> > >>> 2018). So I would like to cut 3.2 branch on 17th Sept and point
>> >> > existing
>> >> > >>> trunk to 3.3 if there are no issues.
>> >> > >>>
>> >> > >>> *Current Release Plan:*
>> >> > >>> Feature freeze date : all features to merge by September 7, 2018.
>> >> > >>> Code freeze date : blockers/critical only, no improvements and
>> >> > >>> blocker/critical bug-fixes September 14, 2018.
>> >> > >>> Release date: September 28, 2018
>> >> > >>>
>> >> > >>> If any critical/blocker tickets which are targeted to 3.2.0, we
>> >> need to
>> >> > &g

Re: Hadoop 3.2 Release Plan proposal

2018-10-02 Thread Sunil G
Thanks, Robert and Haibo, for quickly correcting this.
Sigh, I somehow missed one file while committing the change. Sorry for the
trouble.

- Sunil

On Wed, Oct 3, 2018 at 5:22 AM Robert Kanter  wrote:

> Looks like there's two that weren't updated:
> >> [115] 16:32 : hadoop-common (trunk) :: grep "3.2.0-SNAPSHOT" . -r
> --include=pom.xml
> ./hadoop-project/pom.xml: 3.2.0-SNAPSHOT
> ./pom.xml: 3.2.0-SNAPSHOT
>
> I've just pushed in an addendum commit to fix those.
> In the future, please make sure to do a sanity compile when updating poms.
>
> thanks
> - Robert
>
> On Tue, Oct 2, 2018 at 11:44 AM Aaron Fabbri 
> wrote:
>
>> Trunk is not building for me.. Did you miss a 3.2.0-SNAPSHOT in the
>> top-level pom.xml?
>>
>>
>> On Tue, Oct 2, 2018 at 10:16 AM Sunil G  wrote:
>>
>> > Hi All
>> >
>> > As mentioned in earlier mail, I have cut branch-3.2 and reset trunk to
>> > 3.3.0-SNAPSHOT. I will share the RC details sooner once all necessary
>> > patches are pulled into branch-3.2.
>> >
>> > Thank You
>> > - Sunil
>> >
>> >
>> > On Mon, Sep 24, 2018 at 2:00 PM Sunil G  wrote:
>> >
>> > > Hi All
>> > >
>> > > We are now down to the last Blocker and HADOOP-15407 is merged to
>> trunk.
>> > > Thanks for the support.
>> > >
>> > > *Plan for RC*
>> > > 3.2 branch cut and reset trunk : *25th Tuesday*
>> > > RC0 for 3.2: *28th Friday*
>> > >
>> > > Thank You
>> > > Sunil
>> > >
>> > >
>> > > On Mon, Sep 17, 2018 at 3:21 PM Sunil G  wrote:
>> > >
>> > >> Hi All
>> > >>
>> > >> We are down to 3 Blockers and 4 Critical now. Thanks all of you for
>> > >> helping in this. I am following up on these tickets, once its closed
>> we
>> > >> will cut the 3.2 branch.
>> > >>
>> > >> Thanks
>> > >> Sunil Govindan
>> > >>
>> > >>
>> > >> On Wed, Sep 12, 2018 at 5:10 PM Sunil G  wrote:
>> > >>
>> > >>> Hi All,
>> > >>>
>> > >>> Inline with the original 3.2 communication proposal dated 17th July
>> > >>> 2018, I would like to provide more updates.
>> > >>>
>> > >>> We are approaching previously proposed code freeze date (September
>> 14,
>> > >>> 2018). So I would like to cut 3.2 branch on 17th Sept and point
>> > existing
>> > >>> trunk to 3.3 if there are no issues.
>> > >>>
>> > >>> *Current Release Plan:*
>> > >>> Feature freeze date : all features to merge by September 7, 2018.
>> > >>> Code freeze date : blockers/critical only, no improvements and
>> > >>> blocker/critical bug-fixes September 14, 2018.
>> > >>> Release date: September 28, 2018
>> > >>>
>> > >>> If any critical/blocker tickets which are targeted to 3.2.0, we
>> need to
>> > >>> backport to 3.2 post branch cut.
>> > >>>
>> > >>> Here's an updated 3.2.0 feature status:
>> > >>>
>> > >>> 1. Merged & Completed features:
>> > >>>
>> > >>> - (Wangda) YARN-8561: Hadoop Submarine project for DeepLearning
>> > >>> workloads Initial cut.
>> > >>> - (Uma) HDFS-10285: HDFS Storage Policy Satisfier
>> > >>> - (Sunil) YARN-7494: Multi Node scheduling support in Capacity
>> > >>> Scheduler.
>> > >>> - (Chandni/Eric) YARN-7512: Support service upgrade via YARN Service
>> > API
>> > >>> and CLI.
>> > >>> - (Naga/Sunil) YARN-3409: Node Attributes support in YARN.
>> > >>> - (Inigo) HDFS-12615: Router-based HDFS federation. Improvement
>> works.
>> > >>>
>> > >>> 2. Features close to finish:
>> > >>>
>> > >>> - (Steve) S3Guard Phase III. Close to commit.
>> > >>> - (Steve) S3a phase V. Close to commit.
>> > >>> - (Steve) Support Windows Azure Storage. Close to commit.
>> > >>>
>> > >>> 3. Tentative/Cancelled features for 3.2:
>> > >>> - (Rohith) YARN-5742: Serve aggregated logs of historical apps from
>> > >>> ATSv2. Pat
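Robert's grep in the exchange above doubles as a quick pre-commit guard after a version bump. A self-contained sketch against a fabricated miniature tree (the real check would run from the Hadoop source root):

```sh
#!/bin/sh
set -e
# Fabricated layout: one module pom left on the old version after a bump.
tmp=$(mktemp -d)
mkdir -p "$tmp/hadoop-project"
printf '<version>3.3.0-SNAPSHOT</version>\n' > "$tmp/pom.xml"
printf '<version>3.2.0-SNAPSHOT</version>\n' > "$tmp/hadoop-project/pom.xml"
cd "$tmp"
# Any hit here is a pom.xml missed by the version bump;
# prints ./hadoop-project/pom.xml for this layout.
grep -rl "3.2.0-SNAPSHOT" . --include=pom.xml
```

A follow-up sanity compile (e.g. `mvn -q -DskipTests install`), as Robert suggests, catches anything the grep misses.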

Re: Hadoop 3.2 Release Plan proposal

2018-10-02 Thread Sunil G
Hi All

As mentioned in the earlier mail, I have cut branch-3.2 and reset trunk to
3.3.0-SNAPSHOT. I will share the RC details soon, once all necessary
patches are pulled into branch-3.2.

Thank You
- Sunil


On Mon, Sep 24, 2018 at 2:00 PM Sunil G  wrote:

> Hi All
>
> We are now down to the last Blocker and HADOOP-15407 is merged to trunk.
> Thanks for the support.
>
> *Plan for RC*
> 3.2 branch cut and reset trunk : *25th Tuesday*
> RC0 for 3.2: *28th Friday*
>
> Thank You
> Sunil
>
>
> On Mon, Sep 17, 2018 at 3:21 PM Sunil G  wrote:
>
>> Hi All
>>
>> We are down to 3 Blockers and 4 Critical now. Thanks all of you for
>> helping in this. I am following up on these tickets, once its closed we
>> will cut the 3.2 branch.
>>
>> Thanks
>> Sunil Govindan
>>
>>
>> On Wed, Sep 12, 2018 at 5:10 PM Sunil G  wrote:
>>
>>> Hi All,
>>>
>>> Inline with the original 3.2 communication proposal dated 17th July
>>> 2018, I would like to provide more updates.
>>>
>>> We are approaching previously proposed code freeze date (September 14,
>>> 2018). So I would like to cut 3.2 branch on 17th Sept and point existing
>>> trunk to 3.3 if there are no issues.
>>>
>>> *Current Release Plan:*
>>> Feature freeze date : all features to merge by September 7, 2018.
>>> Code freeze date : blockers/critical only, no improvements and
>>> blocker/critical bug-fixes September 14, 2018.
>>> Release date: September 28, 2018
>>>
>>> If any critical/blocker tickets which are targeted to 3.2.0, we need to
>>> backport to 3.2 post branch cut.
>>>
>>> Here's an updated 3.2.0 feature status:
>>>
>>> 1. Merged & Completed features:
>>>
>>> - (Wangda) YARN-8561: Hadoop Submarine project for DeepLearning
>>> workloads Initial cut.
>>> - (Uma) HDFS-10285: HDFS Storage Policy Satisfier
>>> - (Sunil) YARN-7494: Multi Node scheduling support in Capacity
>>> Scheduler.
>>> - (Chandni/Eric) YARN-7512: Support service upgrade via YARN Service API
>>> and CLI.
>>> - (Naga/Sunil) YARN-3409: Node Attributes support in YARN.
>>> - (Inigo) HDFS-12615: Router-based HDFS federation. Improvement works.
>>>
>>> 2. Features close to finish:
>>>
>>> - (Steve) S3Guard Phase III. Close to commit.
>>> - (Steve) S3a phase V. Close to commit.
>>> - (Steve) Support Windows Azure Storage. Close to commit.
>>>
>>> 3. Tentative/Cancelled features for 3.2:
>>> - (Rohith) YARN-5742: Serve aggregated logs of historical apps from
>>> ATSv2. Patch in progress.
>>> - (Haibo Chen) YARN-1011: Resource overcommitment. Looks challenging to
>>> be done before Aug 2018.
>>> - (Eric) YARN-7129: Application Catalog for YARN applications.
>>> Challenging as more discussions are on-going.
>>>
>>> *Summary of 3.2.0 issues status:*
>>> 19 Blocker and Critical issues [1] are open, I am following up with
>>> owners to get status on each of them to get in by Code Freeze date.
>>>
>>> [1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND priority in (Blocker,
>>> Critical) AND resolution = Unresolved AND "Target Version/s" = 3.2.0 ORDER
>>> BY priority DESC
>>>
>>> Thanks,
>>> Sunil
>>>
>>>
>>>
>>> On Thu, Aug 30, 2018 at 9:59 PM Sunil G  wrote:
>>>
>>>> Hi All,
>>>>
>>>> Inline with earlier communication dated 17th July 2018, I would like to
>>>> provide some updates.
>>>>
>>>> We are approaching previously proposed code freeze date (Aug 31).
>>>>
>>>> One of the critical feature Node Attributes feature merge
>>>> discussion/vote is ongoing. Also few other Blocker bugs need a bit more
>>>> time. With regard to this, suggesting to push the feature/code freeze for 2
>>>> more weeks to accommodate these jiras too.
>>>>
>>>> Proposing Updated changes in plan inline with this:
>>>> Feature freeze date : all features to merge by September 7, 2018.
>>>> Code freeze date : blockers/critical only, no improvements and
>>>>  blocker/critical bug-fixes September 14, 2018.
>>>> Release date: September 28, 2018
>>>>
>>>> If any features in branch which are targeted to 3.2.0, please reply to
>>>> this email thread.
>>>>
>>>> *Here's an updated 3.2.0 feature status:*
>>>>

Re: Hadoop 3.2 Release Plan proposal

2018-09-24 Thread Sunil G
Hi All

We are now down to the last Blocker and HADOOP-15407 is merged to trunk.
Thanks for the support.

*Plan for RC*
3.2 branch cut and reset trunk : *25th Tuesday*
RC0 for 3.2: *28th Friday*

Thank You
Sunil

On Mon, Sep 17, 2018 at 3:21 PM Sunil G  wrote:

> Hi All
>
> We are down to 3 Blockers and 4 Critical now. Thanks all of you for
> helping in this. I am following up on these tickets, once its closed we
> will cut the 3.2 branch.
>
> Thanks
> Sunil Govindan
>
>
> On Wed, Sep 12, 2018 at 5:10 PM Sunil G  wrote:
>
>> Hi All,
>>
>> Inline with the original 3.2 communication proposal dated 17th July 2018,
>> I would like to provide more updates.
>>
>> We are approaching previously proposed code freeze date (September 14,
>> 2018). So I would like to cut 3.2 branch on 17th Sept and point existing
>> trunk to 3.3 if there are no issues.
>>
>> *Current Release Plan:*
>> Feature freeze date : all features to merge by September 7, 2018.
>> Code freeze date : blockers/critical only, no improvements and
>> blocker/critical bug-fixes September 14, 2018.
>> Release date: September 28, 2018
>>
>> If any critical/blocker tickets which are targeted to 3.2.0, we need to
>> backport to 3.2 post branch cut.
>>
>> Here's an updated 3.2.0 feature status:
>>
>> 1. Merged & Completed features:
>>
>> - (Wangda) YARN-8561: Hadoop Submarine project for DeepLearning workloads
>> Initial cut.
>> - (Uma) HDFS-10285: HDFS Storage Policy Satisfier
>> - (Sunil) YARN-7494: Multi Node scheduling support in Capacity Scheduler.
>> - (Chandni/Eric) YARN-7512: Support service upgrade via YARN Service API
>> and CLI.
>> - (Naga/Sunil) YARN-3409: Node Attributes support in YARN.
>> - (Inigo) HDFS-12615: Router-based HDFS federation. Improvement works.
>>
>> 2. Features close to finish:
>>
>> - (Steve) S3Guard Phase III. Close to commit.
>> - (Steve) S3a phase V. Close to commit.
>> - (Steve) Support Windows Azure Storage. Close to commit.
>>
>> 3. Tentative/Cancelled features for 3.2:
>> - (Rohith) YARN-5742: Serve aggregated logs of historical apps from
>> ATSv2. Patch in progress.
>> - (Haibo Chen) YARN-1011: Resource overcommitment. Looks challenging to
>> be done before Aug 2018.
>> - (Eric) YARN-7129: Application Catalog for YARN applications.
>> Challenging as more discussions are on-going.
>>
>> *Summary of 3.2.0 issues status:*
>> 19 Blocker and Critical issues [1] are open, I am following up with
>> owners to get status on each of them to get in by Code Freeze date.
>>
>> [1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND priority in (Blocker,
>> Critical) AND resolution = Unresolved AND "Target Version/s" = 3.2.0 ORDER
>> BY priority DESC
>>
>> Thanks,
>> Sunil
>>
>>
>>
>> On Thu, Aug 30, 2018 at 9:59 PM Sunil G  wrote:
>>
>>> Hi All,
>>>
>>> Inline with earlier communication dated 17th July 2018, I would like to
>>> provide some updates.
>>>
>>> We are approaching previously proposed code freeze date (Aug 31).
>>>
>>> One of the critical feature Node Attributes feature merge
>>> discussion/vote is ongoing. Also few other Blocker bugs need a bit more
>>> time. With regard to this, suggesting to push the feature/code freeze for 2
>>> more weeks to accommodate these jiras too.
>>>
>>> Proposing Updated changes in plan inline with this:
>>> Feature freeze date : all features to merge by September 7, 2018.
>>> Code freeze date : blockers/critical only, no improvements and
>>>  blocker/critical bug-fixes September 14, 2018.
>>> Release date: September 28, 2018
>>>
>>> If any features in branch which are targeted to 3.2.0, please reply to
>>> this email thread.
>>>
>>> *Here's an updated 3.2.0 feature status:*
>>>
>>> 1. Merged & Completed features:
>>>
>>> - (Wangda) YARN-8561: Hadoop Submarine project for DeepLearning
>>> workloads Initial cut.
>>> - (Uma) HDFS-10285: HDFS Storage Policy Satisfier
>>> - (Sunil) YARN-7494: Multi Node scheduling support in Capacity
>>> Scheduler.
>>> - (Chandni/Eric) YARN-7512: Support service upgrade via YARN Service API
>>> and CLI.
>>>
>>> 2. Features close to finish:
>>>
>>> - (Naga/Sunil) YARN-3409: Node Attributes support in YARN. Merge/Vote
>>> Ongoing.
>>> - (Rohith) YARN-5742: Serve aggregated logs of historical apps from
>>> A

Re: Hadoop 3.2 Release Plan proposal

2018-09-17 Thread Sunil G
Hi All

We are down to 3 blocker and 4 critical issues now. Thanks to all of you for
helping with this. I am following up on these tickets; once they are closed,
we will cut the 3.2 branch.

Thanks
Sunil Govindan

On Wed, Sep 12, 2018 at 5:10 PM Sunil G  wrote:

> Hi All,
>
> Inline with the original 3.2 communication proposal dated 17th July 2018,
> I would like to provide more updates.
>
> We are approaching previously proposed code freeze date (September 14,
> 2018). So I would like to cut 3.2 branch on 17th Sept and point existing
> trunk to 3.3 if there are no issues.
>
> *Current Release Plan:*
> Feature freeze date : all features to merge by September 7, 2018.
> Code freeze date : blockers/critical only, no improvements and
> blocker/critical bug-fixes September 14, 2018.
> Release date: September 28, 2018
>
> If any critical/blocker tickets which are targeted to 3.2.0, we need to
> backport to 3.2 post branch cut.
>
> Here's an updated 3.2.0 feature status:
>
> 1. Merged & Completed features:
>
> - (Wangda) YARN-8561: Hadoop Submarine project for DeepLearning workloads
> Initial cut.
> - (Uma) HDFS-10285: HDFS Storage Policy Satisfier
> - (Sunil) YARN-7494: Multi Node scheduling support in Capacity Scheduler.
> - (Chandni/Eric) YARN-7512: Support service upgrade via YARN Service API
> and CLI.
> - (Naga/Sunil) YARN-3409: Node Attributes support in YARN.
> - (Inigo) HDFS-12615: Router-based HDFS federation. Improvement works.
>
> 2. Features close to finish:
>
> - (Steve) S3Guard Phase III. Close to commit.
> - (Steve) S3a phase V. Close to commit.
> - (Steve) Support Windows Azure Storage. Close to commit.
>
> 3. Tentative/Cancelled features for 3.2:
> - (Rohith) YARN-5742: Serve aggregated logs of historical apps from ATSv2.
> Patch in progress.
> - (Haibo Chen) YARN-1011: Resource overcommitment. Looks challenging to be
> done before Aug 2018.
> - (Eric) YARN-7129: Application Catalog for YARN applications. Challenging
> as more discussions are on-going.
>
> *Summary of 3.2.0 issues status:*
> 19 Blocker and Critical issues [1] are open, I am following up with owners
> to get status on each of them to get in by Code Freeze date.
>
> [1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND priority in (Blocker,
> Critical) AND resolution = Unresolved AND "Target Version/s" = 3.2.0 ORDER
> BY priority DESC
>
> Thanks,
> Sunil
>
>
>
> On Thu, Aug 30, 2018 at 9:59 PM Sunil G  wrote:
>
>> Hi All,
>>
>> Inline with earlier communication dated 17th July 2018, I would like to
>> provide some updates.
>>
>> We are approaching previously proposed code freeze date (Aug 31).
>>
>> One of the critical feature Node Attributes feature merge discussion/vote
>> is ongoing. Also few other Blocker bugs need a bit more time. With regard
>> to this, suggesting to push the feature/code freeze for 2 more weeks to
>> accommodate these jiras too.
>>
>> Proposing Updated changes in plan inline with this:
>> Feature freeze date : all features to merge by September 7, 2018.
>> Code freeze date : blockers/critical only, no improvements and
>>  blocker/critical bug-fixes September 14, 2018.
>> Release date: September 28, 2018
>>
>> If any features in branch which are targeted to 3.2.0, please reply to
>> this email thread.
>>
>> *Here's an updated 3.2.0 feature status:*
>>
>> 1. Merged & Completed features:
>>
>> - (Wangda) YARN-8561: Hadoop Submarine project for DeepLearning workloads
>> Initial cut.
>> - (Uma) HDFS-10285: HDFS Storage Policy Satisfier
>> - (Sunil) YARN-7494: Multi Node scheduling support in Capacity Scheduler.
>> - (Chandni/Eric) YARN-7512: Support service upgrade via YARN Service API
>> and CLI.
>>
>> 2. Features close to finish:
>>
>> - (Naga/Sunil) YARN-3409: Node Attributes support in YARN. Merge/Vote
>> Ongoing.
>> - (Rohith) YARN-5742: Serve aggregated logs of historical apps from
>> ATSv2. Patch in progress.
>> - (Virajit) HDFS-12615: Router-based HDFS federation. Improvement works.
>> - (Steve) S3Guard Phase III, S3a phase V, Support Windows Azure Storage.
>> In progress.
>>
>> 3. Tentative features:
>>
>> - (Haibo Chen) YARN-1011: Resource overcommitment. Looks challenging to
>> be done before Aug 2018.
>> - (Eric) YARN-7129: Application Catalog for YARN applications.
>> Challenging as more discussions are on-going.
>>
>> *Summary of 3.2.0 issues status:*
>>
>> 26 Blocker and Critical issues [1] are open, I am following up with
>> owners to get status on each of them to get in by Code Freeze date.
>>
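The branch cut described in this thread ("cut branch-3.2 and reset trunk to 3.3.0-SNAPSHOT") can be sketched mechanically. A minimal, self-contained illustration against a fabricated throwaway repository (the real process also involves Hadoop's dev-support release tooling):

```sh
#!/bin/sh
set -e
# Fabricated repository standing in for the hadoop tree.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=rm@example.invalid -c user.name=RM \
    commit -q --allow-empty -m 'trunk at 3.2.0-SNAPSHOT'
# Cut the release branch; trunk then moves on to the next version line.
git branch branch-3.2
git branch --list 'branch-*'   # lists the new branch-3.2
# The version reset on trunk would then be done with something like:
#   mvn versions:set -DnewVersion=3.3.0-SNAPSHOT
```

The `mvn versions:set` line is one conventional way to bump every module pom at once; as the later "missed one file" exchange in this thread shows, a grep for the old version string afterwards is a worthwhile check.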

Re: [Vote] Merge discussion for Node attribute support feature YARN-3409

2018-09-12 Thread Sunil G
Thanks, Sean.

YARN-8768 has been raised and we will get this in very soon.

- Sunil

On Wed, Sep 12, 2018 at 9:16 PM Sean Mackrory  wrote:

> `mvn install` fails on trunk right now due to javadoc error in yarn-api. Is
> that related? If so, can we revert or fix ASAP please?
>
> On Wed, Sep 12, 2018 at 6:05 AM Naganarasimha Garla <
> naganarasimha...@apache.org> wrote:
>
> > Hi All,
> >  Me and Sunil have successfully merged YARN-3409 to the trunk and
> > we have created new umbrella jira YARN-8766 to track the pending jiras
> and
> > the remaining planned improvement features.
> > Thanks all for the support !
> >
> > Regards,
> > + Naga
> >
> >
> > On Wed, Sep 12, 2018 at 7:46 AM Naganarasimha Garla <
> > naganarasimha...@apache.org> wrote:
> >
> > > Hi All,
> > >  Voting has been running since 6 days and adding my vote we
> have
> > 4
> > > binding and 2 non binding +1's with no -1's this voting passes and we
> > will
> > > be merging the branch shortly. Thanks for all who participated in the
> > > discussion and voting thread !
> > >
> > > Thanks and Regards,
> > > + Naga
> > >
> > > On Mon, Sep 10, 2018 at 2:50 PM Zian Chen 
> wrote:
> > >
> > >> +1 for merge.
> > >>
> > >> > On Sep 9, 2018, at 10:47 PM, Weiwei Yang 
> wrote:
> > >> >
> > >> > +1 for the merge
> > >> >
> > >> > On Mon, Sep 10, 2018 at 12:06 PM Rohith Sharma K S <
> > >> > rohithsharm...@apache.org> wrote:
> > >> >
> > >> >> +1 for merge
> > >> >>
> > >> >> -Rohith Sharma K S
> > >> >>
> > >> >> On Wed, 5 Sep 2018 at 18:01, Naganarasimha Garla <
> > >> >> naganarasimha...@apache.org> wrote:
> > >> >>
> > >> >>> Hi All,
> > >> >>>Thanks for feedback folks, based on the positive response
> > >> >> starting
> > >> >>> a Vote thread for merging YARN-3409 to master.
> > >> >>>
> > >> >>> Regards,
> > >> >>> + Naga & Sunil
> > >> >>>
> >> >>> On Wed, 5 Sep 2018 2:51 am Wangda Tan wrote:
> > >> >>>
> > >>  +1 for the merge, it gonna be a great addition to 3.2.0 release.
> > >> Thanks
> > >> >>> to
> > >>  everybody for pushing this feature to complete.
> > >> 
> > >>  Best,
> > >>  Wangda
> > >> 
> > >>  On Tue, Sep 4, 2018 at 8:25 AM Bibinchundatt <
> > >> >> bibin.chund...@huawei.com>
> > >>  wrote:
> > >> 
> > >> > +1 for merge. Fetaure would be a good addition to 3.2 release.
> > >> >
> > >> > --
> > >> > Bibin A Chundatt
> >> > M: +91-9742095715
> > >> > E: bibin.chund...@huawei.com
> > >> > 2012实验室-印研IT BU分部
> > >> > 2012 Laboratories-IT BU Branch Dept.
> > >> > From:Naganarasimha Garla
> > >> > To:common-dev@hadoop.apache.org,Hdfs-dev,
> > yarn-...@hadoop.apache.org
> > >> ,
> > >> > mapreduce-...@hadoop.apache.org,
> > >> > Date:2018-08-29 20:00:44
> > >> > Subject:[Discuss] Merge discussion for Node attribute support
> > >> feature
> > >> > YARN-3409
> > >> >
> > >> > Hi All,
> > >> >
> > >> > We would like to hear your thoughts on merging “Node Attributes
> > >> >> Support
> > >> >>> in
> > >> > YARN” branch (YARN-3409) [2] into trunk in a few weeks. The goal
> > is
> > >> to
> > >> >>> get
> > >> > it in for HADOOP 3.2.
> > >> >
> > >> > *Major work happened in this branch*
> > >> >
> > >> > YARN-6858. Attribute Manager to store and provide node
> attributes
> > in
> > >> >> RM
> > >> > YARN-7871. Support Node attributes reporting from NM to RM(
> > >> >> distributed
> > >> > node attributes)
> > >> > YARN-7863. Modify placement constraints to support node
> attributes
> > >> > YARN-7875. Node Attribute store for storing and recovering
> > >> attributes
> > >> >
> > >> > *Detailed Design:*
> > >> >
> > >> > Please refer [1] for detailed design document.
> > >> >
> > >> > *Testing Efforts:*
> > >> >
> > >> > We did detailed tests for the feature in the last few weeks.
> > >> > This feature will be enabled only when Node Attributes
> constraints
> > >> are
> > >> > specified through SchedulingRequest from AM.
> > >> > Manager implementation will help to store and recover Node
> > >> Attributes.
> > >> > This
> > >> > works with existing placement constraints.
> > >> >
> > >> > *Regarding to API stability:*
> > >> >
> > >> > All newly added @Public APIs are @Unstable.
> > >> >
> > >> > Documentation jira [3] could help to provide detailed
> > configuration
> > >> > details. This feature works from end-to-end and we tested this
> in
> > >> our
> > >> > local
> > >> > cluster. Branch code is run against trunk and tracked via [4].
> > >> >
> > >> > We would love to get your thoughts before opening a voting
> thread.
> > >> >
> > >> > Special thanks to a team of folks who worked hard and
> contributed
> > >> >>> towards
> > >> > this efforts including design 

Re: Hadoop 3.2 Release Plan proposal

2018-09-12 Thread Sunil G
Hi All,

In line with the original 3.2 communication proposal dated 17th July 2018, I
would like to provide further updates.

We are approaching the previously proposed code freeze date (September 14,
2018), so I would like to cut the 3.2 branch on 17th September and point the
existing trunk to 3.3 if there are no issues.

*Current Release Plan:*
Feature freeze date : all features to merge by September 7, 2018.
Code freeze date : blockers/critical only, no improvements and
blocker/critical bug-fixes September 14, 2018.
Release date: September 28, 2018

Any critical/blocker tickets targeted to 3.2.0 will need to be backported to
3.2 after the branch cut.

Here's an updated 3.2.0 feature status:

1. Merged & Completed features:

- (Wangda) YARN-8561: Hadoop Submarine project for DeepLearning workloads
Initial cut.
- (Uma) HDFS-10285: HDFS Storage Policy Satisfier
- (Sunil) YARN-7494: Multi Node scheduling support in Capacity Scheduler.
- (Chandni/Eric) YARN-7512: Support service upgrade via YARN Service API
and CLI.
- (Naga/Sunil) YARN-3409: Node Attributes support in YARN.
- (Inigo) HDFS-12615: Router-based HDFS federation. Improvement works.

2. Features close to finish:

- (Steve) S3Guard Phase III. Close to commit.
- (Steve) S3a phase V. Close to commit.
- (Steve) Support Windows Azure Storage. Close to commit.

3. Tentative/Cancelled features for 3.2:
- (Rohith) YARN-5742: Serve aggregated logs of historical apps from ATSv2.
Patch in progress.
- (Haibo Chen) YARN-1011: Resource overcommitment. Looks challenging to be
done before Aug 2018.
- (Eric) YARN-7129: Application Catalog for YARN applications. Challenging
as more discussions are on-going.

*Summary of 3.2.0 issues status:*
19 Blocker and Critical issues [1] are open; I am following up with the
owners of each to get them in by the code freeze date.

[1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND priority in (Blocker,
Critical) AND resolution = Unresolved AND "Target Version/s" = 3.2.0 ORDER
BY priority DESC

Thanks,
Sunil



On Thu, Aug 30, 2018 at 9:59 PM Sunil G  wrote:

> Hi All,
>
> In line with the earlier communication dated 17th July 2018, I would like
> to provide some updates.
>
> We are approaching previously proposed code freeze date (Aug 31).
>
> The merge discussion/vote for one critical feature, Node Attributes, is
> ongoing, and a few other blocker bugs need a bit more time. In view of
> this, I suggest pushing the feature/code freeze out by two more weeks to
> accommodate these JIRAs as well.
>
> The updated plan, in line with this, is:
> Feature freeze date : all features to merge by September 7, 2018.
> Code freeze date : blockers/critical only, no improvements and
>  blocker/critical bug-fixes September 14, 2018.
> Release date: September 28, 2018
>
> If you have any feature branches targeted to 3.2.0, please reply to this
> email thread.
>
> *Here's an updated 3.2.0 feature status:*
>
> 1. Merged & Completed features:
>
> - (Wangda) YARN-8561: Hadoop Submarine project for DeepLearning workloads
> Initial cut.
> - (Uma) HDFS-10285: HDFS Storage Policy Satisfier
> - (Sunil) YARN-7494: Multi Node scheduling support in Capacity Scheduler.
> - (Chandni/Eric) YARN-7512: Support service upgrade via YARN Service API
> and CLI.
>
> 2. Features close to finish:
>
> - (Naga/Sunil) YARN-3409: Node Attributes support in YARN. Merge/Vote
> Ongoing.
> - (Rohith) YARN-5742: Serve aggregated logs of historical apps from ATSv2.
> Patch in progress.
> - (Virajit) HDFS-12615: Router-based HDFS federation. Improvement works.
> - (Steve) S3Guard Phase III, S3a phase V, Support Windows Azure Storage.
> In progress.
>
> 3. Tentative features:
>
> - (Haibo Chen) YARN-1011: Resource overcommitment. Looks challenging to be
> done before Aug 2018.
> - (Eric) YARN-7129: Application Catalog for YARN applications. Challenging
> as more discussions are on-going.
>
> *Summary of 3.2.0 issues status:*
>
> 26 Blocker and Critical issues [1] are open; I am following up with the
> owners of each to get them in by the code freeze date.
>
> [1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND priority in (Blocker,
> Critical) AND resolution = Unresolved AND "Target Version/s" = 3.2.0 ORDER
> BY priority DESC
>
> Thanks,
> Sunil
>
> On Tue, Aug 14, 2018 at 10:30 PM Sunil G  wrote:
>
>> Hi All,
>>
>> Thanks for the feedback. In line with the earlier communication dated
>> 17th July 2018, I would like to provide some updates.
>>
>> We are approaching previously proposed feature freeze date (Aug 21, about
>> 7 days from today).
>> If you have any feature branches targeted to 3.2.0, please reply to
>> this email thread.
>> Steve has mentioned about the s3 features which will come close 

Re: [Vote] Merge discussion for Node attribute support feature YARN-3409

2018-09-06 Thread Sunil G
+1 for the merge.

- Sunil


On Wed, Sep 5, 2018 at 6:01 PM Naganarasimha Garla <
naganarasimha...@apache.org> wrote:

> Hi All,
>  Thanks for the feedback, folks. Based on the positive response, I am
> starting a vote thread for merging YARN-3409 to master.
>
> Regards,
> + Naga & Sunil
>
> On Wed, 5 Sep 2018 2:51 am Wangda Tan,  wrote:
>
> > +1 for the merge, it is going to be a great addition to the 3.2.0
> > release. Thanks to everybody for pushing this feature to completion.
> >
> > Best,
> > Wangda
> >
> > On Tue, Sep 4, 2018 at 8:25 AM Bibinchundatt 
> > wrote:
> >
> >> +1 for merge. The feature would be a good addition to the 3.2 release.
> >>
> >> --
> >> Bibin A Chundatt
> >> M: +91-9742095715
> >> E: bibin.chund...@huawei.com
> >> 2012 Laboratories - India Research IT BU Branch Dept.
> >> From:Naganarasimha Garla
> >> To:common-dev@hadoop.apache.org,Hdfs-dev,yarn-...@hadoop.apache.org,
> >> mapreduce-...@hadoop.apache.org,
> >> Date:2018-08-29 20:00:44
> >> Subject:[Discuss] Merge discussion for Node attribute support feature
> >> YARN-3409
> >>
> >> Hi All,
> >>
> >> We would like to hear your thoughts on merging “Node Attributes Support
> in
> >> YARN” branch (YARN-3409) [2] into trunk in a few weeks. The goal is to
> get
> >> it in for HADOOP 3.2.
> >>
> >> *Major work happened in this branch*
> >>
> >> YARN-6858. Attribute Manager to store and provide node attributes in RM
> >> YARN-7871. Support Node attributes reporting from NM to RM (distributed
> >> node attributes)
> >> YARN-7863. Modify placement constraints to support node attributes
> >> YARN-7875. Node Attribute store for storing and recovering attributes
> >>
> >> *Detailed Design:*
> >>
> >> Please refer [1] for detailed design document.
> >>
> >> *Testing Efforts:*
> >>
> >> We did detailed tests for the feature in the last few weeks.
> >> This feature will be enabled only when Node Attributes constraints are
> >> specified through SchedulingRequest from AM.
> >> Manager implementation will help to store and recover Node Attributes.
> >> This
> >> works with existing placement constraints.
> >>
> >> *Regarding API stability:*
> >>
> >> All newly added @Public APIs are @Unstable.
> >>
> >> The documentation JIRA [3] provides detailed configuration details. This
> >> feature works end-to-end, and we tested it in our local cluster. The
> >> branch code is run against trunk and tracked via [4].
> >>
> >> We would love to get your thoughts before opening a voting thread.
> >>
> >> Special thanks to the team of folks who worked hard and contributed
> >> towards this effort, including design discussions, patches, and
> >> reviews: Weiwei
> >> Yang, Bibin Chundatt, Wangda Tan, Vinod Kumar Vavilappali, Konstantinos
> >> Karanasos, Arun Suresh, Varun Saxena, Devaraj Kavali, Lei Guo, Chong
> Chen.
> >>
> >> [1] :
> >>
> >>
> https://issues.apache.org/jira/secure/attachment/12937633/Node-Attributes-Requirements-Design-doc_v2.pdf
> >> [2] : https://issues.apache.org/jira/browse/YARN-3409
> >> [3] : https://issues.apache.org/jira/browse/YARN-7865
> >> [4] : https://issues.apache.org/jira/browse/YARN-8718
> >>
> >> Thanks,
> >> + Naga & Sunil Govindan
> >>
> >
>


Re: [Discuss] Merge discussion for Node attribute support feature YARN-3409

2018-09-04 Thread Sunil G
+1 for merge.

I quickly checked all the basic test runs in the branch:
- Add/replace/remove of attributes works correctly.
- The scheduler can now handle attribute-based placement constraints.
- Tested the DS shell with various constructs like java=1.8, python!=3, and
AND/OR constraints.
- Documentation on the attributes also looks good.
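For reference, the shape of a distributed-shell invocation behind such constraint tests can be sketched as below. The jar path and the exact `-placement_spec` grammar shown here are illustrative assumptions, not commands verified against this branch, so the command is printed rather than executed:

```shell
# Hypothetical shape of a DS run carrying a node-attribute placement
# constraint. The jar name and -placement_spec grammar are assumptions
# for illustration only; the command is printed, not run.
DS_CMD='yarn org.apache.hadoop.yarn.applications.distributedshell.Client
  -jar hadoop-yarn-applications-distributedshell.jar
  -shell_command sleep
  -num_containers 2
  -placement_spec "java=1.8"'
printf '%s\n' "$DS_CMD"
```

A real run would point `-jar` at the distributed-shell jar shipped with the build and use whatever constraint expression the branch documentation specifies.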

Thanks Naga, Weiwei, Bibin and all the folks who supported in designing and
reviewing the branch.

- Sunil



On Wed, Aug 29, 2018 at 8:00 PM Naganarasimha Garla <
naganarasimha...@apache.org> wrote:

> Hi All,
>
> We would like to hear your thoughts on merging “Node Attributes Support in
> YARN” branch (YARN-3409) [2] into trunk in a few weeks. The goal is to get
> it in for HADOOP 3.2.
>
> *Major work happened in this branch*
>
> YARN-6858. Attribute Manager to store and provide node attributes in RM
> YARN-7871. Support Node attributes reporting from NM to RM (distributed
> node attributes)
> YARN-7863. Modify placement constraints to support node attributes
> YARN-7875. Node Attribute store for storing and recovering attributes
>
> *Detailed Design:*
>
> Please refer [1] for detailed design document.
>
> *Testing Efforts:*
>
> We did detailed tests for the feature in the last few weeks.
> This feature will be enabled only when Node Attributes constraints are
> specified through SchedulingRequest from AM.
> Manager implementation will help to store and recover Node Attributes. This
> works with existing placement constraints.
>
> *Regarding API stability:*
>
> All newly added @Public APIs are @Unstable.
>
> The documentation JIRA [3] provides detailed configuration details. This
> feature works end-to-end, and we tested it in our local cluster. The branch
> code is run against trunk and tracked via [4].
>
> We would love to get your thoughts before opening a voting thread.
>
> Special thanks to the team of folks who worked hard and contributed towards
> this effort, including design discussions, patches, and reviews: Weiwei
> Yang, Bibin Chundatt, Wangda Tan, Vinod Kumar Vavilappali, Konstantinos
> Karanasos, Arun Suresh, Varun Saxena, Devaraj Kavali, Lei Guo, Chong Chen.
>
> [1] :
>
> https://issues.apache.org/jira/secure/attachment/12937633/Node-Attributes-Requirements-Design-doc_v2.pdf
> [2] : https://issues.apache.org/jira/browse/YARN-3409
> [3] : https://issues.apache.org/jira/browse/YARN-7865
> [4] : https://issues.apache.org/jira/browse/YARN-8718
>
> Thanks,
> + Naga & Sunil Govindan
>


Re: HADOOP-14163 proposal for new hadoop.apache.org

2018-09-02 Thread Sunil G
+1. Looks really good.

- Sunil


On Mon, Sep 3, 2018 at 10:51 AM Vinayakumar B 
wrote:

> +1, New site looks great.
>
> Just one nit in the README: the '--refresh' flag for hugo server is no
> longer available.
>
> -Vinay
>
> On Mon, 3 Sep 2018, 10:21 am Shashikant Banerjee, <
> sbaner...@hortonworks.com>
> wrote:
>
> > +1
> >
> > Thanks
> > Shashi
> >
> > On 9/3/18, 9:23 AM, "Mukul Kumar Singh"  wrote:
> >
> > +1, Thanks for working on this Marton.
> >
> > -Mukul
> >
> > On 03/09/18, 9:02 AM, "John Zhuge"  wrote:
> >
> > +1 Like the new site.
> >
> > On Sun, Sep 2, 2018 at 7:02 PM Weiwei Yang 
> > wrote:
> >
> > > That's really nice, +1.
> > >
> > > --
> > > Weiwei
> > >
> > > On Sat, Sep 1, 2018 at 4:36 AM Wangda Tan  >
> > wrote:
> > >
> > > > +1, thanks for working on this, Marton!
> > > >
> > > > Best,
> > > > Wangda
> > > >
> > > > On Fri, Aug 31, 2018 at 11:24 AM Arpit Agarwal <
> > aagar...@hortonworks.com
> > > >
> > > > wrote:
> > > >
> > > > > +1
> > > > >
> > > > > Thanks for initiating this Marton.
> > > > >
> > > > >
> > > > > On 8/31/18, 1:07 AM, "Elek, Marton" 
> wrote:
> > > > >
> > > > > Bumping this thread at last time.
> > > > >
> > > > > I have the following proposal:
> > > > >
> > > > > 1. I will request a new git repository hadoop-site.git
> > and import
> > > the
> > > > > new site to there (which has exactly the same content
> as
> > the
> > > existing
> > > > > site).
> > > > >
> > > > > 2. I will ask infra to use the new repository as the
> > source of
> > > > > hadoop.apache.org
> > > > >
> > > > > 3. I will sync manually all of the changes in the next
> > two months
> > > > back
> > > > > to the svn site from the git (release announcements,
> new
> > > committers)
> > > > >
> > > > > IN CASE OF ANY PROBLEM we can switch back to the svn
> > without any
> > > > > problem.
> > > > >
> > > > > If no-one objects within three days, I'll assume lazy
> > consensus and
> > > > > start with this plan. Please comment if you have
> > objections.
> > > > >
> > > > > Again: it allows immediate fallback at any time as svn
> > repo will be
> > > > > kept
> > > > > as is (+ I will keep it up-to-date in the next 2
> months)
> > > > >
> > > > > Thanks,
> > > > > Marton
> > > > >
> > > > >
> > > > > On 06/21/2018 09:00 PM, Elek, Marton wrote:
> > > > > >
> > > > > > Thank you very much for bumping up this thread.
> > > > > >
> > > > > >
> > > > > > About [2]: (Just for the clarification) the content
> of
> > the
> > > proposed
> > > > > > website is exactly the same as the old one.
> > > > > >
> > > > > > About [1]. I believe that the "mvn site" is perfect
> > for the
> > > > > > documentation but for website creation there are more
> > simple and
> > > > > > powerful tools.
> > > > > >
> > > > > > Hugo is simpler than Jekyll: just one binary, without
> > > > > > dependencies, and it works everywhere (mac, linux, windows).
> > > > > >
> > > > > > Hugo is much more powerful than "mvn site": it is easier to
> > > > > > create/use a more modern layout/theme, and easier to handle the
> > > > > > content (for example, new release announcements could be
> > > > > > generated as part of the release process).
> > > > > >
> > > > > > I think it's very low risk to try out a new approach
> > for the site
> > > > > (and
> > > > > > easy to rollback in case of problems)
> > > > > >
> > > > > > Marton
> > > > > >
> > > > > > ps: I just updated the patch/preview site with the
> > recent
> > > releases:
> > > > > >
> > > > > > ***
> > > > > > * http://hadoop.anzix.net *
> > > > > > ***
> > > > > >
> > > > > > On 06/21/2018 01:27 AM, Vinod Kumar Vavilapalli
> wrote:
> > > > > >> Got pinged about this offline.
> > > > > >>
> > > > > >> Thanks for keeping at it, Marton!
> > > > > >>
> > > > > >> I think there are two road-blocks here
> > > > > >>   (1) Is the mechanism using which the website is
> > built good
> > > > 

Re: Hadoop 3.2 Release Plan proposal

2018-08-30 Thread Sunil G
Hi All,

In line with the earlier communication dated 17th July 2018, I would like to
provide some updates.

We are approaching the previously proposed code freeze date (Aug 31).

The merge discussion/vote for one critical feature, Node Attributes, is
ongoing, and a few other blocker bugs need a bit more time. In view of this,
I suggest pushing the feature/code freeze out by two more weeks to
accommodate these JIRAs as well.

The updated plan, in line with this, is:
Feature freeze date : all features to merge by September 7, 2018.
Code freeze date : blockers/critical only, no improvements and
 blocker/critical bug-fixes September 14, 2018.
Release date: September 28, 2018

If you have any feature branches targeted to 3.2.0, please reply to this
email thread.

*Here's an updated 3.2.0 feature status:*

1. Merged & Completed features:

- (Wangda) YARN-8561: Hadoop Submarine project for DeepLearning workloads
Initial cut.
- (Uma) HDFS-10285: HDFS Storage Policy Satisfier
- (Sunil) YARN-7494: Multi Node scheduling support in Capacity Scheduler.
- (Chandni/Eric) YARN-7512: Support service upgrade via YARN Service API
and CLI.

2. Features close to finish:

- (Naga/Sunil) YARN-3409: Node Attributes support in YARN. Merge/Vote
Ongoing.
- (Rohith) YARN-5742: Serve aggregated logs of historical apps from ATSv2.
Patch in progress.
- (Virajit) HDFS-12615: Router-based HDFS federation. Improvement works.
- (Steve) S3Guard Phase III, S3a phase V, Support Windows Azure Storage. In
progress.

3. Tentative features:

- (Haibo Chen) YARN-1011: Resource overcommitment. Looks challenging to be
done before Aug 2018.
- (Eric) YARN-7129: Application Catalog for YARN applications. Challenging
as more discussions are on-going.

*Summary of 3.2.0 issues status:*

26 Blocker and Critical issues [1] are open; I am following up with the
owners of each to get them in by the code freeze date.

[1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND priority in (Blocker,
Critical) AND resolution = Unresolved AND "Target Version/s" = 3.2.0 ORDER
BY priority DESC

Thanks,
Sunil

On Tue, Aug 14, 2018 at 10:30 PM Sunil G  wrote:

> Hi All,
>
> Thanks for the feedback. In line with the earlier communication dated 17th
> July 2018, I would like to provide some updates.
>
> We are approaching previously proposed feature freeze date (Aug 21, about
> 7 days from today).
> If you have any feature branches targeted to 3.2.0, please reply to this
> email thread.
> Steve has mentioned about the s3 features which will come close to Code
> Freeze Date (Aug 31st).
>
> *Here's an updated 3.2.0 feature status:*
>
> 1. Merged & Completed features:
>
> - (Wangda) YARN-8561: Hadoop Submarine project for DeepLearning workloads
> Initial cut.
> - (Uma) HDFS-10285: HDFS Storage Policy Satisfier
>
> 2. Features close to finish:
>
> - (Naga/Sunil) YARN-3409: Node Attributes support in YARN. Major patches
> are all in; only one last patch is in review.
> - (Sunil) YARN-7494: Multi Node scheduling support in Capacity Scheduler.
> Close to commit.
> - (Chandni/Eric) YARN-7512: Support service upgrade via YARN Service API
> and CLI. 2 patches are pending
> which will be closed by Feature freeze date.
> - (Rohith) YARN-5742: Serve aggregated logs of historical apps from ATSv2.
> Patch in progress.
> - (Virajit) HDFS-12615: Router-based HDFS federation. Improvement works.
> - (Steve) S3Guard Phase III, S3a phase V, Support Windows Azure Storage.
> In progress.
>
> 3. Tentative features:
>
> - (Haibo Chen) YARN-1011: Resource overcommitment. Looks challenging to be
> done before Aug 2018.
> - (Eric) YARN-7129: Application Catalog for YARN applications. Challenging
> as more discussions are on-going.
>
> *Summary of 3.2.0 issues status:*
>
> 39 Blocker and Critical issues [1] are open; I am checking with the owners
> of each to get them in by the code freeze date.
>
> [1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND priority in (Blocker,
> Critical) AND resolution = Unresolved AND "Target Version/s" = 3.2.0 ORDER
> BY priority DESC
>
> Thanks,
> Sunil
>
> On Fri, Jul 20, 2018 at 8:03 AM Sunil G  wrote:
>
>> Thanks Subru for the thoughts.
>> One of the main reasons for a major release is to push critical features
>> out to users at a faster cadence. If we pull more and more different types
>> of features into a minor release, that branch becomes destabilized, and it
>> may be tough to say that 3.1.2 is more stable than 3.1.1, for example. We
>> always tend to improve and stabilize features in subsequent minor
>> releases.
>> For some companies, it makes sense to push these new features out faster
>> to reach users sooner. On the point about the backporting
>> issues, I

Re: Hadoop 3.2 Release Plan proposal

2018-08-14 Thread Sunil G
Hi All,

Thanks for the feedback. In line with the earlier communication dated 17th
July 2018, I would like to provide some updates.

We are approaching the previously proposed feature freeze date (Aug 21,
about 7 days from today).
If you have any feature branches targeted to 3.2.0, please reply to this
email thread.
Steve has mentioned the S3 features, which will land close to the code
freeze date (Aug 31st).

*Here's an updated 3.2.0 feature status:*

1. Merged & Completed features:

- (Wangda) YARN-8561: Hadoop Submarine project for DeepLearning workloads
Initial cut.
- (Uma) HDFS-10285: HDFS Storage Policy Satisfier

2. Features close to finish:

- (Naga/Sunil) YARN-3409: Node Attributes support in YARN. Major patches
are all in; only one last patch is in review.
- (Sunil) YARN-7494: Multi Node scheduling support in Capacity Scheduler.
Close to commit.
- (Chandni/Eric) YARN-7512: Support service upgrade via YARN Service API
and CLI. Two patches are pending, which will be closed by the feature
freeze date.
- (Rohith) YARN-5742: Serve aggregated logs of historical apps from ATSv2.
Patch in progress.
- (Virajit) HDFS-12615: Router-based HDFS federation. Improvement works.
- (Steve) S3Guard Phase III, S3a phase V, Support Windows Azure Storage. In
progress.

3. Tentative features:

- (Haibo Chen) YARN-1011: Resource overcommitment. Looks challenging to be
done before Aug 2018.
- (Eric) YARN-7129: Application Catalog for YARN applications. Challenging
as more discussions are on-going.

*Summary of 3.2.0 issues status:*

39 Blocker and Critical issues [1] are open; I am checking with the owners
of each to get them in by the code freeze date.

[1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND priority in (Blocker,
Critical) AND resolution = Unresolved AND "Target Version/s" = 3.2.0 ORDER
BY priority DESC

Thanks,
Sunil

On Fri, Jul 20, 2018 at 8:03 AM Sunil G  wrote:

> Thanks Subru for the thoughts.
> One of the main reasons for a major release is to push critical features
> out to users at a faster cadence. If we pull more and more different types
> of features into a minor release, that branch becomes destabilized, and it
> may be tough to say that 3.1.2 is more stable than 3.1.1, for example. We
> always tend to improve and stabilize features in subsequent minor releases.
> For some companies, it makes sense to push these new features out faster to
> reach users sooner. On the point about backporting issues, I agree that it
> is a pain, and we can work around it with some git scripts. If we can make
> such scripts available to committers, backports will be seamless across
> branches and we can also achieve the faster release cadence.
>
> Thoughts?
>
> - Sunil
>
>
> On Fri, Jul 20, 2018 at 3:37 AM Subru Krishnan  wrote:
>
>> Thanks Sunil for volunteering to lead the release effort. I am generally
>> supportive of a release but -1 on a 3.2 (prefer a 3.1.x) as feel we
>> already
>> have too many branches to be maintained. I already see many commits are in
>> different branches with no apparent rationale, for e.g: 3.1 has commits
>> which are absent in 3.0 etc.
>>
>> Additionally AFAIK 3.x has not been deployed in any major production
>> setting so the cost of adding features should be minimal.
>>
>> Thoughts?
>>
>> -Subru
>>
>> On Thu, Jul 19, 2018 at 12:31 AM, Sunil G  wrote:
>>
>> > Thanks Steve, Aaron, Wangda for sharing thoughts.
>> >
>> > Yes, important changes and features are much needed, hence we will be
>> > keeping the door open for them as possible. Also considering few more
>> > offline requests from other folks, I think extending the timeframe by
>> > couple of weeks makes sense (including a second RC buffer) and this
>> should
>> > ideally help us to ship this by September itself.
>> >
>> > Revised dates (I will be updating same in Roadmap wiki as well)
>> >
>> > - Feature freeze date : all features to merge by August 21, 2018.
>> >
>> > - Code freeze date : blockers/critical only, no improvements and non
>> > blocker/critical
>> >
>> > bug-fixes  August 31, 2018.
>> >
>> > - Release date: September 15, 2018
>> >
>> > Thank Eric and Zian, I think Wangda has already answered your questions.
>> >
>> > Thanks
>> > Sunil
>> >
>> >
>> > On Thu, Jul 19, 2018 at 12:13 PM Wangda Tan 
>> wrote:
>> >
>> > > Thanks Sunil for volunteering to be RM of 3.2 release, +1 for that.
>> > >
>> > > To concerns from Steve,
>> > >
>> > > It is a good idea to keep the door open to get important chang

Re: YARN SLS improving idea

2018-08-13 Thread Sunil G
Hi Sichen

1. Add input support for the scheduling request format.
Yes; I assume this change is on the SLS end (the client generating requests).
2. Add support for scheduling request resource format in NMSim.
Makes sense.
3. Adding scheduling request support for the Capacity Scheduler(maybe it is
already done in current version).
This support is already there.

Adding to this, a major challenge is specifying constraints at the per-app
level and verifying the results. I also suggest cross-checking the
scheduler-invariants check support added in YARN-6547 and seeing how we can
incorporate the same to predict the output.

- Sunil


On Mon, Aug 13, 2018 at 10:39 PM Yufei Gu  wrote:

> +YANG WEIWEI 
>
> Make sense to me from SLS perspective, but I am not familiar with
> Placement Constraints. Add WeiWei.
>
> Best,
>
> Yufei
>
> `This is not a contribution`
>
>
> On Sat, Aug 11, 2018 at 8:57 AM Daniel Templeton 
> wrote:
>
>> Yufei, Wangda, Sunil, any comments?
>>
>> Daniel
>>
>> On 8/11/18 8:48 AM, Sichen Zhao wrote:
>> > Hi,
>> > Is there anyone who can reply my ideas?
>> >
>> > Best Regards
>> > Sichen Zhao
>> >
>> > 
>> > From: Sichen Zhao 
>> > Sent: Friday, August 10, 2018 11:10
>> > To: Hadoop Common
>> > Subject: YARN SLS improving idea
>> >
>> > Hi,
>> > I am a developer from Alibaba China. I recently used SLS for scheduling
>> simulation. SLS currently supports multidimensional resource input (CPU,
>> mem, other resources: disk), but SLS can't take a scheduling request,
>> which is currently widely used in YARN, as input, so placement constraints
>> and attributes are not supported.
>> >
>> > So what I want to improve in SLS is to add scheduling emulation for the
>> scheduling request resource format.
>> >
>> > The specific work is as follows:
>> > 1. Add input support for the scheduling request format.
>> > 2. Add support for scheduling request resource format in NMSim.
>> > 3. Adding scheduling request support for the Capacity Scheduler(maybe
>> it is already done in current version).
>> >
>> > What do you think about my ideas?
>> >
>> >
>> > Best Regards
>> > Sichen Zhao
>> >
>> > -
>> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>> >
>> >
>> >
>>
>>


Re: [VOTE] Release Apache Hadoop 3.1.1 - RC0

2018-08-07 Thread Sunil G
Thanks Wangda for the initiative.
+1 for this RC.
I have tested this RC, built from source.

   - Ran a few MR apps and verified both the new YARN UI and the old RM UI.
   - Tested application priority and timeout.
   - Verified inter-queue and intra-queue preemption cases.
   - Tested basic placement constraints with DS.
   - Tested NodeLabel scenarios.
   - Tested the new YARN UI with ATSv2.
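RC verification like the above usually starts with the published checksums and signatures. A minimal, self-contained sketch of the checksum step follows, run on a dummy file since real artifact names and digests are assumptions here; for a real RC you would run the same check against the published .sha512 files and `gpg --verify` against the .asc signatures:

```shell
# Demonstrates the sha512 check used when verifying release artifacts.
# The artifact here is a dummy file standing in for a real tarball.
printf 'release bits\n' > artifact.tar.gz
sha512sum artifact.tar.gz > artifact.tar.gz.sha512   # what the RM would publish
sha512sum -c artifact.tar.gz.sha512 && echo "checksum OK"
# For a real RC, additionally:
#   gpg --verify artifact.tar.gz.asc artifact.tar.gz
```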

- Sunil

On Fri, Aug 3, 2018 at 12:14 AM Wangda Tan  wrote:

> Hi folks,
>
> I've created RC0 for Apache Hadoop 3.1.1. The artifacts are available here:
>
> http://people.apache.org/~wangda/hadoop-3.1.1-RC0/
>
> The RC tag in git is release-3.1.1-RC0:
> https://github.com/apache/hadoop/commits/release-3.1.1-RC0
>
> The maven artifacts are available via repository.apache.org at
> https://repository.apache.org/content/repositories/orgapachehadoop-1139/
>
> You can find my public key at
> http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS
>
> This vote will run 5 days from now.
>
> 3.1.1 contains 435 [1] fixed JIRA issues since 3.1.0.
>
> I have done testing with a pseudo cluster and distributed shell job. My +1
> to start.
>
> Best,
> Wangda Tan
>
> [1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.1)
> ORDER BY priority DESC
>


Re: Hadoop 3.2 Release Plan proposal

2018-07-19 Thread Sunil G
Thanks Steve, Aaron, Wangda for sharing thoughts.

Yes, important changes and features are much needed, so we will keep the
door open for them as far as possible. Also, considering a few more offline
requests from other folks, I think extending the timeframe by a couple of
weeks makes sense (including a buffer for a second RC), and this should
still let us ship by September.

Revised dates (I will update the same in the Roadmap wiki as well):

- Feature freeze date : all features to merge by August 21, 2018.

- Code freeze date : blockers/criticals only; no improvements or
non-blocker/critical bug-fixes after August 31, 2018.

- Release date: September 15, 2018

Thanks Eric and Zian; I think Wangda has already answered your questions.

Thanks
Sunil


On Thu, Jul 19, 2018 at 12:13 PM Wangda Tan  wrote:

> Thanks Sunil for volunteering to be RM of 3.2 release, +1 for that.
>
> To concerns from Steve,
>
> It is a good idea to keep the door open to get important changes /
> features in before the cutoff. I would prefer to keep the proposed
> release date to make sure things happen early rather than at the last
> minute, and we all know that releases always get delayed :). I'm also
> fine if we want to take another few weeks.
>
> Regarding of 3.3 release, I would suggest doing that before thanksgiving.
> Do you think is it good or too early / late?
>
> Eric,
>
> YARN-8220 will be replaced by YARN-8135; if YARN-8135 can get merged
> in time, we probably will not need YARN-8220.
>
> Sunil,
>
> Could you update https://cwiki.apache.org/confluence/display/HADOOP/Roadmap
> with the proposed plan as well? We can fill in the feature list first,
> before getting consensus on the timing.
>
> Thanks,
> Wangda
>
> On Wed, Jul 18, 2018 at 6:20 PM Aaron Fabbri 
> wrote:
>
>> On Tue, Jul 17, 2018 at 7:21 PM Steve Loughran 
>> wrote:
>>
>> >
>> >
>> > On 16 Jul 2018, at 23:45, Sunil G  wrote:
>> >
>> > I would also like to take this opportunity to come up with a
>> detailed
>> > plan.
>> >
>> > - Feature freeze date : all features should be merged by August 10,
>> 2018.
>> >
>> >
>> >
>> > 
>>
>> >
>> > Please let me know if I missed any features targeted to 3.2 per this
>> >
>> >
>> > Well there these big todo lists for S3 & S3Guard.
>> >
>> > https://issues.apache.org/jira/browse/HADOOP-15226
>> > https://issues.apache.org/jira/browse/HADOOP-15220
>> >
>> >
>> > There's a bigger bit of work coming on for Azure Datalake Gen 2
>> > https://issues.apache.org/jira/browse/HADOOP-15407
>> >
>> > I don't think this is quite ready yet, I've been doing work on it, but
>> if
>> > we have a 3 week deadline, I'm going to expect some timely reviews on
>> > https://issues.apache.org/jira/browse/HADOOP-15546
>> >
>> > I've uprated that to a blocker feature; will review the S3 & S3Guard
>> JIRAs
>> > to see which of those are blocking. Then there are some pressing "guave,
>> > java 9 prep"
>> >
>> >
>>  I can help with this part if you like.
>>
>>
>>
>> >
>> >
>> >
>> > timeline. I would like to volunteer myself as release manager of 3.2.0
>> > release.
>> >
>> >
>> > well volunteered!
>> >
>> >
>> >
>> Yes, thank you for stepping up.
>>
>>
>> >
>> > I think this raises a good q: what timetable should we have for the
>> 3.2. &
>> > 3.3 releases; if we do want a faster cadence, then having the outline
>> time
>> > from the 3.2 to the 3.3 release means that there's less concern about
>> > things not making the 3.2 dealine
>> >
>> > -Steve
>> >
>> >
>> Good idea to mitigate the short deadline.
>>
>> -AF
>>
>


Hadoop 3.2 Release Plan proposal

2018-07-17 Thread Sunil G
Hi All,


To continue the faster cadence of releases and accommodate more features,
we could plan a Hadoop 3.2 release around the end of August.


To start the process sooner and to establish a timeline, I propose to
target the Hadoop 3.2.0 release by the end of August 2018 (about 1.5 months
from now).


I would also like to take this opportunity to come up with a detailed
plan.

- Feature freeze date : all features should be merged by August 10, 2018.

- Code freeze date : blockers/critical only; no improvements or
non-blocker/critical

bug fixes after August 24, 2018.

- Release date: August 31, 2018


I have tried to come up with a list of features on my radar which could be
candidates

for a 3.2 release:

- YARN-3409, Node Attributes support. (Owner: Naganarasimha/Sunil)

- YARN-8135, Hadoop Submarine project for DeepLearning workloads in YARN
(Owner: Wangda Tan)

- YARN Native Service / Docker feature hardening and stabilization works in
YARN



There are several other HDFS features targeted for release with 3.2 as well;
I am quoting a few here:

- HDFS-10285 Storage Policy Satisfier (Owner: Uma/Rakesh)

- Improvements to HDFS-12615 Router-based HDFS federation



Please let me know if I missed any features targeted to 3.2 per this

timeline. I would like to volunteer myself as release manager of 3.2.0
release.


Please let me know if you have any suggestions.



Thanks,

Sunil Govindan


Re: [VOTE] reset/force push to clean up inadvertent merge commit pushed to trunk

2018-07-06 Thread Sunil G
Thanks. These patches are now restored.

- Sunil


On Fri, Jul 6, 2018 at 11:14 AM Vinod Kumar Vavilapalli 
wrote:

> +1
>
> Thanks
> +Vinod
>
>
> On Jul 6, 2018, at 11:12 AM, Sunil G  wrote:
>
> I just checked. YARN-7556 and YARN-7451 can be cherry-picked.
> I cherry-picked them locally and compiled. Things are good.
>
> I can push this now, which will restore trunk to its original state.
> I can do this if there are no objections.
>
> - Sunil
>
> On Fri, Jul 6, 2018 at 11:10 AM Arpit Agarwal 
> wrote:
>
> afaict YARN-8435 is still in trunk. YARN-7556 and YARN-7451 are not.
>
>
> From: Giovanni Matteo Fumarola 
> Date: Friday, July 6, 2018 at 10:59 AM
> To: Vinod Kumar Vavilapalli 
> Cc: Anu Engineer , Arpit Agarwal <
> aagar...@hortonworks.com>, "su...@apache.org" , "
> yarn-...@hadoop.apache.org" , "
> hdfs-...@hadoop.apache.org" , "
> common-dev@hadoop.apache.org" , "
> mapreduce-...@hadoop.apache.org" 
> Subject: Re: [VOTE] reset/force push to clean up inadvertent merge commit
> pushed to trunk
>
> Everything seems OK except that the 3 commits YARN-8435, YARN-7556, and
> YARN-7451 are no longer in trunk due to the revert.
>
> Haibo/Robert if you can recommit your patches I will commit mine
> subsequently to preserve the original order.
>
> (My apologies for the mess I made with the merge commit)
>
> On Fri, Jul 6, 2018 at 10:42 AM, Vinod Kumar Vavilapalli <
> vino...@apache.org<mailto:vino...@apache.org >> wrote:
> I will add that the branch also successfully compiles.
>
> Let's just move forward as is, unblock commits and just fix things if
> anything is broken.
>
> +Vinod
>
> On Jul 6, 2018, at 10:30 AM, Anu Engineer 
> <mailto:aengin...@hortonworks.com >> wrote:
>
>
> Hi All,
>
> [ Thanks to Arpit for working offline and verifying that branch is
>
> indeed good.]
>
>
> I want to summarize what I know of this issue and also solicit other
>
> points of view.
>
>
> We reverted the commit(c163d1797) from the branch, as soon as we noticed
>
> it. That is, we have made no other commits after the merge commit.
>
>
> We used the following command to revert
> git revert -c c163d1797ade0f47d35b4a44381b8ef1dfec5b60 -m 1
>
> Giovanni's branch had three commits + merge. The JIRAs he had were
>
> YARN-7451, YARN-7556, YARN-8435.
>
>
> The issue seems to be the revert of merge has some diffs. I am not a
>
> YARN developer, so the only problem is to look at the revert and see if
> there were any spurious edits in Giovanni's original commit + merge.
>
> If there are none, we don't need a reset/force push.  But if we find an
>
> issue I am more than willing to go the force commit route.
>
>
> The revert takes the trunk back to the point of the first commit from
>
> Giovanni which is YARN-8435. His branch was also rewriting the order of
> commits which we have lost due to the revert.
>
>
> Based on what I know so far, I am -1 on the force push.
>
> In other words, I am trying to understand why we need the force push. I
>
> have left a similar comment in JIRA (
> https://issues.apache.org/jira/browse/INFRA-16727) too.
>
>
>
> Thanks
> Anu
>
>
> On 7/6/18, 10:24 AM, "Arpit Agarwal" 
> aagar...@hortonworks.com>> wrote:
>
>
>   -1 for the force push. Nothing is broken in trunk. The history looks
>
> ugly for two commits and we can live with it.
>
>
>   The revert restored the branch to Giovanni's intent. i.e. only
>
> YARN-8435 is applied. Verified there is no delta between hashes 0d9804d and
> 39ad989 (HEAD).
>
>
>   39ad989 2018-07-05 aengineer@ o {apache/trunk} Revert "Merge branch
>
> 't...
>
>   c163d17 2018-07-05 gifuma@apa M─┐ Merge branch 'trunk' of
>
> https://git-...
>
>   99febe7 2018-07-05 rkanter@ap │ o YARN-7451. Add missing tests to
>
> veri...
>
>   1726247 2018-07-05 haibochen@ │ o YARN-7556. Fair scheduler
>
> configurat...
>
>   0d9804d 2018-07-05 gifuma@apa o │ YARN-8435. Fix NPE when the same
>
> cli...
>
>   71df8c2 2018-07-05 nanda@apac o─┘ HDDS-212. Introduce
>
> NodeStateManager...
>
>
>   Regards,
>   Arpit
>
>
>   On 7/5/18, 2:37 PM, "Subru Krishnan" 
> su...@apache.org>> wrote:
>
>
>   Folks,
>
>   There was a merge commit accidentally pushed to trunk, you can
>
> find the
>
>   details in the mail thread [1].
>
>   I have raised an INFRA ticket [2] to reset/force push to clean up
>
> trunk.
>
>
>   Can we have a quick vote for INFRA sign-off to proceed as this is
>   blocking all commits?
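For readers unfamiliar with reverting a merge, the behavior under discussion can be sketched on a throwaway repo (hypothetical names, not the real Hadoop hashes; assumes git >= 2.28 for `init -b`): `git revert -m 1` keeps the first, i.e. mainline, parent, so the tree after the revert matches the mainline tip from before the merge — which is why comparing the pre-merge hash against HEAD shows no delta.

```shell
# Toy illustration (hypothetical repo) of reverting a merge commit with -m 1.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main repo && cd repo
git config user.email dev@example.com && git config user.name dev
echo base > base.txt && git add base.txt && git commit -qm "base"
git checkout -qb feature
echo feat > feat.txt && git add feat.txt && git commit -qm "feature work"
git checkout -q main
echo trunk > trunk.txt && git add trunk.txt && git commit -qm "trunk work"
pre_merge=$(git rev-parse HEAD)        # mainline tip before the merge
git merge -q --no-edit feature         # the (accidental) merge commit
git revert -m 1 --no-edit HEAD         # revert, keeping the mainline parent
git diff --quiet "$pre_merge" HEAD && echo "no delta"   # trees now match
```

Here `-m 1` selects parent 1 of the merge, which for a merge pushed to trunk is the pre-merge trunk tip.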

Re: [VOTE] reset/force push to clean up inadvertent merge commit pushed to trunk

2018-07-06 Thread Sunil G
I just checked. YARN-7556 and YARN-7451 can be cherry-picked.
I cherry-picked them locally and compiled. Things are good.

I can push this now, which will restore trunk to its original state.
I can do this if there are no objections.

- Sunil

On Fri, Jul 6, 2018 at 11:10 AM Arpit Agarwal 
wrote:

> afaict YARN-8435 is still in trunk. YARN-7556 and YARN-7451 are not.
>
>
> From: Giovanni Matteo Fumarola 
> Date: Friday, July 6, 2018 at 10:59 AM
> To: Vinod Kumar Vavilapalli 
> Cc: Anu Engineer , Arpit Agarwal <
> aagar...@hortonworks.com>, "su...@apache.org" , "
> yarn-...@hadoop.apache.org" , "
> hdfs-...@hadoop.apache.org" , "
> common-dev@hadoop.apache.org" , "
> mapreduce-...@hadoop.apache.org" 
> Subject: Re: [VOTE] reset/force push to clean up inadvertent merge commit
> pushed to trunk
>
> Everything seems OK except that the 3 commits YARN-8435, YARN-7556, and
> YARN-7451 are no longer in trunk due to the revert.
>
> Haibo/Robert if you can recommit your patches I will commit mine
> subsequently to preserve the original order.
>
> (My apologies for the mess I made with the merge commit)
>
> On Fri, Jul 6, 2018 at 10:42 AM, Vinod Kumar Vavilapalli <
> vino...@apache.org> wrote:
> I will add that the branch also successfully compiles.
>
> Let's just move forward as is, unblock commits and just fix things if
> anything is broken.
>
> +Vinod
>
> > On Jul 6, 2018, at 10:30 AM, Anu Engineer  > wrote:
> >
> > Hi All,
> >
> > [ Thanks to Arpit for working offline and verifying that branch is
> indeed good.]
> >
> > I want to summarize what I know of this issue and also solicit other
> points of view.
> >
> > We reverted the commit(c163d1797) from the branch, as soon as we noticed
> it. That is, we have made no other commits after the merge commit.
> >
> > We used the following command to revert
> > git revert -c c163d1797ade0f47d35b4a44381b8ef1dfec5b60 -m 1
> >
> > Giovanni's branch had three commits + merge. The JIRAs he had were
> YARN-7451, YARN-7556, YARN-8435.
> >
> > The issue seems to be the revert of merge has some diffs. I am not a
> YARN developer, so the only problem is to look at the revert and see if
> there were any spurious edits in Giovanni's original commit + merge.
> > If there are none, we don't need a reset/force push.  But if we find an
> issue I am more than willing to go the force commit route.
> >
> > The revert takes the trunk back to the point of the first commit from
> Giovanni which is YARN-8435. His branch was also rewriting the order of
> commits which we have lost due to the revert.
> >
> > Based on what I know so far, I am -1 on the force push.
> >
> > In other words, I am trying to understand why we need the force push. I
> have left a similar comment in JIRA (
> https://issues.apache.org/jira/browse/INFRA-16727) too.
> >
> >
> > Thanks
> > Anu
> >
> >
> > On 7/6/18, 10:24 AM, "Arpit Agarwal"  aagar...@hortonworks.com>> wrote:
> >
> >-1 for the force push. Nothing is broken in trunk. The history looks
> ugly for two commits and we can live with it.
> >
> >The revert restored the branch to Giovanni's intent. i.e. only
> YARN-8435 is applied. Verified there is no delta between hashes 0d9804d and
> 39ad989 (HEAD).
> >
> >39ad989 2018-07-05 aengineer@ o {apache/trunk} Revert "Merge branch
> 't...
> >c163d17 2018-07-05 gifuma@apa M─┐ Merge branch 'trunk' of
> https://git-...
> >99febe7 2018-07-05 rkanter@ap │ o YARN-7451. Add missing tests to
> veri...
> >1726247 2018-07-05 haibochen@ │ o YARN-7556. Fair scheduler
> configurat...
> >0d9804d 2018-07-05 gifuma@apa o │ YARN-8435. Fix NPE when the same
> cli...
> >71df8c2 2018-07-05 nanda@apac o─┘ HDDS-212. Introduce
> NodeStateManager...
> >
> >Regards,
> >Arpit
> >
> >
> >On 7/5/18, 2:37 PM, "Subru Krishnan"  su...@apache.org>> wrote:
> >
> >Folks,
> >
> >There was a merge commit accidentally pushed to trunk, you can
> find the
> >details in the mail thread [1].
> >
> >I have raised an INFRA ticket [2] to reset/force push to clean up
> trunk.
> >
> >Can we have a quick vote for INFRA sign-off to proceed as this is
> blocking
> >all commits?
> >
> >Thanks,
> >Subru
> >
> >[1]
> >
> http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201807.mbox/%3CCAHqguubKBqwfUMwhtJuSD7X1Bgfro_P6FV%2BhhFhMMYRaxFsF9Q%40mail.gmail.com%3E
> >[2] https://issues.apache.org/jira/browse/INFRA-16727
> >
> >
> >
> >-
> >To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> 
> >For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> 
> >
> >
> >
> > -
> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org 

Re: [VOTE] reset/force push to clean up inadvertent merge commit pushed to trunk

2018-07-06 Thread Sunil G
Yes, these patches can be cherry-picked.

git cherry-pick 17262470246232d0f0651d627a4961e55b1efe6a
git cherry-pick 99febe7fd50c31c0f5dd40fa7f376f2c1f64f8c3

I will try this now and compile.
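As a sketch of why this restore works (toy repo with hypothetical names, not the actual YARN commit hashes): commits whose changes were undone by a revert can be reapplied with `git cherry-pick`, which creates fresh commits carrying the same diffs.

```shell
# Toy illustration (hypothetical repo): restore a reverted change by hash.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main repo && cd repo
git config user.email dev@example.com && git config user.name dev
echo base > base.txt && git add base.txt && git commit -qm "base"
echo fix > fix.txt && git add fix.txt && git commit -qm "some fix"
fix_hash=$(git rev-parse HEAD)         # remember the original commit
git revert --no-edit HEAD              # the fix is gone from the tree
git cherry-pick "$fix_hash"            # reapply: new commit, same change
test -f fix.txt && echo "restored"
```

The cherry-picked commit has a new hash but the same change, which is why the original commit order can be preserved by reapplying in order.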

- Sunil


On Fri, Jul 6, 2018 at 10:59 AM Giovanni Matteo Fumarola <
giovanni.fumar...@gmail.com> wrote:

> Everything seems OK except that the 3 commits YARN-8435, YARN-7556, and
> YARN-7451 are no longer in trunk due to the revert.
>
> Haibo/Robert if you can recommit your patches I will commit mine
> subsequently to preserve the original order.
>
> (My apologies for the mess I made with the merge commit)
>
> On Fri, Jul 6, 2018 at 10:42 AM, Vinod Kumar Vavilapalli <
> vino...@apache.org
> > wrote:
>
> > I will add that the branch also successfully compiles.
> >
> > Let's just move forward as is, unblock commits and just fix things if
> > anything is broken.
> >
> > +Vinod
> >
> > > On Jul 6, 2018, at 10:30 AM, Anu Engineer 
> > wrote:
> > >
> > > Hi All,
> > >
> > > [ Thanks to Arpit for working offline and verifying that branch is
> > indeed good.]
> > >
> > > I want to summarize what I know of this issue and also solicit other
> > points of view.
> > >
> > > We reverted the commit(c163d1797) from the branch, as soon as we
> noticed
> > it. That is, we have made no other commits after the merge commit.
> > >
> > > We used the following command to revert
> > > git revert -c c163d1797ade0f47d35b4a44381b8ef1dfec5b60 -m 1
> > >
> > > Giovanni's branch had three commits + merge. The JIRAs he had were
> > YARN-7451, YARN-7556, YARN-8435.
> > >
> > > The issue seems to be the revert of merge has some diffs. I am not a
> > YARN developer, so the only problem is to look at the revert and see if
> > there were any spurious edits in Giovanni's original commit + merge.
> > > If there are none, we don't need a reset/force push.  But if we find an
> > issue I am more than willing to go the force commit route.
> > >
> > > The revert takes the trunk back to the point of the first commit from
> > Giovanni which is YARN-8435. His branch was also rewriting the order of
> > commits which we have lost due to the revert.
> > >
> > > Based on what I know so far, I am -1 on the force push.
> > >
> > > In other words, I am trying to understand why we need the force push. I
> > have left a similar comment in JIRA (https://issues.apache.org/
> > jira/browse/INFRA-16727) too.
> > >
> > >
> > > Thanks
> > > Anu
> > >
> > >
> > > On 7/6/18, 10:24 AM, "Arpit Agarwal"  wrote:
> > >
> > >-1 for the force push. Nothing is broken in trunk. The history looks
> > ugly for two commits and we can live with it.
> > >
> > >The revert restored the branch to Giovanni's intent. i.e. only
> > YARN-8435 is applied. Verified there is no delta between hashes 0d9804d
> and
> > 39ad989 (HEAD).
> > >
> > >39ad989 2018-07-05 aengineer@ o {apache/trunk} Revert "Merge branch
> > 't...
> > >c163d17 2018-07-05 gifuma@apa M─┐ Merge branch 'trunk' of
> > https://git-...
> > >99febe7 2018-07-05 rkanter@ap │ o YARN-7451. Add missing tests to
> > veri...
> > >1726247 2018-07-05 haibochen@ │ o YARN-7556. Fair scheduler
> > configurat...
> > >0d9804d 2018-07-05 gifuma@apa o │ YARN-8435. Fix NPE when the same
> > cli...
> > >71df8c2 2018-07-05 nanda@apac o─┘ HDDS-212. Introduce
> > NodeStateManager...
> > >
> > >Regards,
> > >Arpit
> > >
> > >
> > >On 7/5/18, 2:37 PM, "Subru Krishnan"  wrote:
> > >
> > >Folks,
> > >
> > >There was a merge commit accidentally pushed to trunk, you can
> > find the
> > >details in the mail thread [1].
> > >
> > >I have raised an INFRA ticket [2] to reset/force push to clean
> up
> > trunk.
> > >
> > >Can we have a quick vote for INFRA sign-off to proceed as this
> is
> > blocking
> > >all commits?
> > >
> > >Thanks,
> > >Subru
> > >
> > >[1]
> > >http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/
> > 201807.mbox/%3CCAHqguubKBqwfUMwhtJuSD7X1Bgfro_P6FV%2BhhFhMMYRaxFsF9Q%
> > 40mail.gmail.com%3E
> > >[2] https://issues.apache.org/jira/browse/INFRA-16727
> > >
> > >
> > >
> > >
> -
> > >To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > >For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> > >
> > >
> > >
> > > -
> > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >
> >
> > -
> > To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
> >
> >
>


Re: [VOTE] reset/force push to clean up inadvertent merge commit pushed to trunk

2018-07-05 Thread Sunil G
+1 for this.

- Sunil


On Thu, Jul 5, 2018 at 2:37 PM Subru Krishnan  wrote:

> Folks,
>
> There was a merge commit accidentally pushed to trunk, you can find the
> details in the mail thread [1].
>
> I have raised an INFRA ticket [2] to reset/force push to clean up trunk.
>
> Can we have a quick vote for INFRA sign-off to proceed as this is blocking
> all commits?
>
> Thanks,
> Subru
>
> [1]
>
> http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201807.mbox/%3CCAHqguubKBqwfUMwhtJuSD7X1Bgfro_P6FV%2BhhFhMMYRaxFsF9Q%40mail.gmail.com%3E
> [2] https://issues.apache.org/jira/browse/INFRA-16727
>


Re: Why are some Jenkins builds failing in Yarn-UI

2018-07-05 Thread Sunil G
Hi Steve

I was recently looking into this failure, and it is due to a repo hosting
problem. The Bower team has deprecated the old registry URL
https://bower.herokuapp.com, and the registry URL needs to change to
https://registry.bower.io. YARN-8457 addressed this problem and is backported
down to branch-2.9.
I think the branch also needs a rebase to get away from this problem.

  1.  What is Bower?
YARN UI2 is a single-page web application using the Ember JS framework.
Ember 2 uses Bower as its package manager. Bibin is doing some work in
YARN-8387 to move away from Bower to avoid such issues in the future.

  2.  Why is it breaking Jenkins builds
The Bower team has deprecated the old registry URL https://bower.herokuapp.com,
and the registry URL needs to change to https://registry.bower.io.

  3.  Can you nominate someone to provide the patch to fix this?
YARN-8457 addressed this problem. Could you please backport this patch alone to
the branch and help verify?

  4.  Will every active branch need a similar patch?
Yes. This is a bit unfortunate. If these branches can be rebased onto the top
of their respective main branches, or this patch alone is backported, we could
avoid this build error.

  5.  Have any releases stopped building (3.1.0?)
Since this patch is already in 3.1.0, this is unblocked.

  6.  Do we have any stability guarantees about Bower's build process in
future?
YARN-8387 will help with this. Bibin Chundatt has some offline patches
which he is still working on, with the help of other communities, to resolve
Bower dependencies and upgrade to Ember 3, which will help with offline
compilation.
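For anyone stuck on a branch before the backport, the registry switch boils down to pointing Bower at the new registry. A minimal sketch of the relevant entry (assuming a `.bowerrc` in the web app's working directory; the exact file YARN-8457 touches may differ):

```json
{
  "registry": "https://registry.bower.io"
}
```

Bower reads `.bowerrc` from the directory it runs in, so the `bower install` step driven by the frontend-maven-plugin should pick this up.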


- Sunil

On Thu, Jul 5, 2018 at 4:45 AM Steve Loughran 
wrote:

>
> Hi
>
> The HADOOP-15407 "abfs" branch has started failing
>
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/14861/artifact/out/patch-compile-root.txt
>
> [INFO] --- frontend-maven-plugin:1.5:bower (bower install) @
> hadoop-yarn-ui ---
> [INFO] Running 'bower install' in
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/webapp
> [ERROR] bower ember-cli-test-loader#0.2.1  EINVRES Request to
> https://bower.herokuapp.com/packages/ember-cli-test-loader failed with 502
> [INFO]
> 
> [INFO] Reactor Summary:
>
> And the named web page returns: This Bower version is deprecated. Please
> update it: npm install -g bower. The new registry address is
> https://registry.bower.io
>
>
> We haven't gone near the YARN code, and yet a branch which is only a few
> weeks old is now failing to build on Jenkins systems
>
> I have a few questions for the Yarn team
>
>
>   1.  What is Bower?
>   2.  Why is it breaking Jenkins builds
>   3.  Can you nominate someone to provide the patch to fix this?
>   4.  Will every active branch need a similar patch?
>   5.  Have any releases stopped building (3.1.0?)
>   6.  Do we have any stability guarantees about Bower's build process in
> future?
>
> Given a fork I'm worried about the long-term reproducibility of builds. Is
> this just a one off move of some build artifact repository, or is this a
> moving target which will stop source code releases from working. I can deal
> with cherry picking patches to WiP branches, tuning Jenkins, etc, but if we
> lose the ability to rebuild releases, the whole notion of "stable release"
> is dead. I know Maven exposes us to similar risks, but the maven central
> repo is stabile and available, so hasn't forced us into building an
> SCM-msnaged artifact tree the way I've done elsewhere (Ivy makes that
> straightforward; maven less so as you need to be online to boot). And we
> are now relying on docker for release builds. So we are already
> vulnerable...I'm just worried that Bower is making things worse.
>
> -Steve
>
> (ps: Slides from Olaf Febbe on the bigtop team on attacking builds through
> Maven https://oflebbe.de/presentations/2018/attackingiotdev.pdf)
>
>
>
>
>


Re: [VOTE] Release Apache Hadoop 2.8.4 (RC0)

2018-05-14 Thread Sunil G
+1 (binding)

1. Built the package from source
2. Ran a few MR jobs and verified App Priority cases
3. Node Label basic functions are OK.

Thanks
Sunil


On Tue, May 8, 2018 at 11:11 PM 俊平堵  wrote:

> Hi all,
>  I've created the first release candidate (RC0) for Apache Hadoop
> 2.8.4. This is our next maintenance release, following 2.8.3. It includes 77
> important fixes and improvements.
>
> The RC artifacts are available at:
> http://home.apache.org/~junping_du/hadoop-2.8.4-RC0
>
> The RC tag in git is: release-2.8.4-RC0
>
> The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1118
>
> Please try the release and vote; the vote will run for the usual 5
> working days, ending on 5/14/2018 PST time.
>
> Thanks,
>
> Junping
>


Re: Apache Hadoop 3.1.1 release plan

2018-05-10 Thread Sunil G
Thanks Brahma.
Yes, Billie is reviewing YARN-8265 and I am helping in YARN-8236.

- Sunil


On Thu, May 10, 2018 at 2:25 PM Brahma Reddy Battula <
brahmareddy.batt...@huawei.com> wrote:

> Thanks Wangda Tan for driving the 3.1.1 release. Yes, this can be a good
> addition to the 3.1 release line for improving quality.
>
> It looks like only the following two are pending, in review state. Hope you
> are monitoring these two.
>
> https://issues.apache.org/jira/browse/YARN-8265
> https://issues.apache.org/jira/browse/YARN-8236
>
>
>
> Note: https://issues.apache.org/jira/browse/YARN-8247 ==> committed to
> branch-3.1
>
>
> -Original Message-
> From: Wangda Tan [mailto:wheele...@gmail.com]
> Sent: 19 April 2018 17:49
> To: Hadoop Common ;
> mapreduce-...@hadoop.apache.org; Hdfs-dev ;
> yarn-...@hadoop.apache.org
> Subject: Apache Hadoop 3.1.1 release plan
>
> Hi, All
>
> We have released Apache Hadoop 3.1.0 on Apr 06. To further improve the
> quality of the release, we plan to release 3.1.1 at May 06. The focus of
> 3.1.1 will be fixing blockers / critical bugs and other enhancements. So
> far there are 100 JIRAs [1] have fix version marked to 3.1.1.
>
> We plan to cut branch-3.1.1 on May 01 and vote for RC on the same day.
>
> Please feel free to share your insights.
>
> Thanks,
> Wangda Tan
>
> [1] project in (YARN, "Hadoop HDFS", "Hadoop Common", "Hadoop Map/Reduce")
> AND fixVersion = 3.1.1
>


Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)

2018-04-02 Thread Sunil G
+1 (binding)

On Mon 2 Apr, 2018, 12:24 Sunil G, <sun...@apache.org> wrote:

> Thanks Wangda for initiating the release.
>
> I tested this RC built from source file.
>
>
>- Tested MR apps (sleep, wc) and verified both new YARN UI and old RM
>UI.
>- Below feature sanity is done
>   - Application priority
>   - Application timeout
>   - Intra Queue preemption with priority based
>   - DS based affinity tests to verify placement constraints.
>- Tested basic NodeLabel scenarios.
>   - Added a couple of labels to a few nodes and the behavior is
>   correct.
>   - Verified old UI  and new YARN UI for labels.
>   - Submitted apps to labelled cluster and it works fine.
>   - Also performed few cli commands related to nodelabel.
>- Tested basic HA cases and they seem correct.
>- Tested new YARN UI . All pages are getting loaded correctly.
>
>
> - Sunil
>
>
> On Fri, Mar 30, 2018 at 9:45 AM Wangda Tan <wheele...@gmail.com> wrote:
>
>> Hi folks,
>>
>> Thanks to the many who helped with this release since Dec 2017 [1]. We've
>> created RC1 for Apache Hadoop 3.1.0. The artifacts are available here:
>>
>> http://people.apache.org/~wangda/hadoop-3.1.0-RC1
>>
>> The RC tag in git is release-3.1.0-RC1. Last git commit SHA is
>> 16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d
>>
>> The maven artifacts are available via repository.apache.org at
>> https://repository.apache.org/content/repositories/orgapachehadoop-1090/
>> This vote will run 5 days, ending on Apr 3 at 11:59 pm Pacific.
>>
>> 3.1.0 contains 766 [2] fixed JIRA issues since 3.0.0. Notable additions
>> include the first class GPU/FPGA support on YARN, Native services, Support
>> rich placement constraints in YARN, S3-related enhancements, allow HDFS
>> block replicas to be provided by an external storage system, etc.
>>
>> For 3.1.0 RC0 vote discussion, please see [3].
>>
>> We’d like to use this as a starting release for 3.1.x [1], depending on
>> how
>> it goes, get it stabilized and potentially use a 3.1.1 in several weeks as
>> the stable release.
>>
>> We have done testing with a pseudo cluster:
>> - Ran distributed job.
>> - GPU scheduling/isolation.
>> - Placement constraints (intra-application anti-affinity) by using
>> distributed shell.
>>
>> My +1 to start.
>>
>> Best,
>> Wangda/Vinod
>>
>> [1]
>>
>> https://lists.apache.org/thread.html/b3fb3b6da8b6357a68513a6dfd104bc9e19e559aedc5ebedb4ca08c8@%3Cyarn-dev.hadoop.apache.org%3E
>> [2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.0)
>> AND fixVersion not in (3.0.0, 3.0.0-beta1) AND status = Resolved ORDER BY
>> fixVersion ASC
>> [3]
>>
>> https://lists.apache.org/thread.html/b3a7dc075b7329fd660f65b48237d72d4061f26f83547e41d0983ea6@%3Cyarn-dev.hadoop.apache.org%3E
>>
>


Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)

2018-04-02 Thread Sunil G
Thanks Wangda for initiating the release.

I tested this RC built from source file.


   - Tested MR apps (sleep, wc) and verified both new YARN UI and old RM UI.
   - Below feature sanity is done
  - Application priority
  - Application timeout
  - Intra Queue preemption with priority based
  - DS based affinity tests to verify placement constraints.
   - Tested basic NodeLabel scenarios.
   - Added a couple of labels to a few nodes and the behavior is
  correct.
  - Verified old UI  and new YARN UI for labels.
  - Submitted apps to labelled cluster and it works fine.
  - Also performed few cli commands related to nodelabel.
   - Tested basic HA cases and they seem correct.
   - Tested new YARN UI . All pages are getting loaded correctly.


- Sunil

On Fri, Mar 30, 2018 at 9:45 AM Wangda Tan  wrote:

> Hi folks,
>
> Thanks to the many who helped with this release since Dec 2017 [1]. We've
> created RC1 for Apache Hadoop 3.1.0. The artifacts are available here:
>
> http://people.apache.org/~wangda/hadoop-3.1.0-RC1
>
> The RC tag in git is release-3.1.0-RC1. Last git commit SHA is
> 16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d
>
> The maven artifacts are available via repository.apache.org at
> https://repository.apache.org/content/repositories/orgapachehadoop-1090/
> This vote will run 5 days, ending on Apr 3 at 11:59 pm Pacific.
>
> 3.1.0 contains 766 [2] fixed JIRA issues since 3.0.0. Notable additions
> include the first class GPU/FPGA support on YARN, Native services, Support
> rich placement constraints in YARN, S3-related enhancements, allow HDFS
> block replicas to be provided by an external storage system, etc.
>
> For 3.1.0 RC0 vote discussion, please see [3].
>
> We’d like to use this as a starting release for 3.1.x [1], depending on how
> it goes, get it stabilized and potentially use a 3.1.1 in several weeks as
> the stable release.
>
> We have done testing with a pseudo cluster:
> - Ran distributed job.
> - GPU scheduling/isolation.
> - Placement constraints (intra-application anti-affinity) by using
> distributed shell.
>
> My +1 to start.
>
> Best,
> Wangda/Vinod
>
> [1]
>
> https://lists.apache.org/thread.html/b3fb3b6da8b6357a68513a6dfd104bc9e19e559aedc5ebedb4ca08c8@%3Cyarn-dev.hadoop.apache.org%3E
> [2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.0)
> AND fixVersion not in (3.0.0, 3.0.0-beta1) AND status = Resolved ORDER BY
> fixVersion ASC
> [3]
>
> https://lists.apache.org/thread.html/b3a7dc075b7329fd660f65b48237d72d4061f26f83547e41d0983ea6@%3Cyarn-dev.hadoop.apache.org%3E
>


Re: [VOTE] Merge YARN-6592 feature branch to trunk

2018-01-26 Thread Sunil G
+1

- Sunil


On Fri, Jan 26, 2018 at 8:58 PM Arun Suresh  wrote:

> Hello yarn-dev@
>
> Based on the positive feedback from the DISCUSS thread [1], I'd like to
> start a formal vote to merge YARN-6592 [2] to trunk. The vote will run for
> 5 days, and will end Jan 31 7:30AM PDT.
>
> This feature adds support for placing containers in YARN using rich
> placement constraints. For example, this can be used by applications to
> co-locate containers on a node or rack (*affinity *constraint), spread
> containers across nodes or racks (*anti-affinity* constraint), or even
> specify the maximum number of containers on a node/rack (*cardinality *
> constraint).
>
> We have integrated this feature into the Distributed-Shell application for
> feature testing. We have performed end-to-end testing on moderately-sized
> clusters to verify that constraints work fine. Performance tests have been
> done via both SLS tests and Capacity Scheduler performance unit tests, and
> no regression was found. We have opened a JIRA to track Jenkins acceptance
> of the aggregated patch [3]. Documentation is in the process of being
> completed [4]. You can also check our design document for more details [5].
>
> Config flags are needed to enable this feature and it should not have any
> effect on YARN when turned off. Once merged, we plan to work on further
> improvements, which can be tracked in the umbrella YARN-7812 [6].
>
> Kindly do take a look at the branch and raise issues/concerns that need to
> be addressed before the merge.
>
> Many thanks to Konstantinos, Wangda, Panagiotis, Weiwei, and Sunil for
> their contributions to this effort, as well as Subru, Chris, Carlo, and
> Vinod for their inputs and discussions.
>
>
> Cheers
> -Arun
>
> [1]
> http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201801.mbox/%3CCAMreUaz%3DGnsjOLZ%3Dem2x%3DQS7qh27euCWNw6Bo_4Cu%2BfXnXhyNA%40mail.gmail.com%3E
> [2] https://issues.apache.org/jira/browse/YARN-6592
> [3] https://issues.apache.org/jira/browse/YARN-7792
> [4] https://issues.apache.org/jira/browse/YARN-7780
> [5]
> https://issues.apache.org/jira/secure/attachment/12867869/YARN-6592-Rich-Placement-Constraints-Design-V1.pdf
> [6] https://issues.apache.org/jira/browse/YARN-7812
>


Re: [DISCUSS] Merge YARN-6592 to trunk

2018-01-25 Thread Sunil G
+1.

Thanks Arun.

I did manual testing for check affinity and anti-affinity features with
placement allocator. Also checked SLS to see any performance regression,
and there are not much difference as Arun mentioned.

Thanks all the folks for working on this. Kudos!

- Sunil


On Fri, Jan 26, 2018 at 5:16 AM Arun Suresh  wrote:

> Hello yarn-dev@
>
> We feel that the YARN-6592 dev branch mostly in shape to be merged into
> trunk. This branch adds support for placing containers in YARN using rich
> placement constraints. For example, this can be used by applications to
> co-locate containers on a node or rack (*affinity *constraint), spread
> containers across nodes or racks (*anti-affinity* constraint), or even
> specify the maximum number of containers on a node/rack (*cardinality *
> constraint).
>
> We have integrated this feature into the Distributed-Shell application for
> feature testing. We have performed end-to-end testing on moderately-sized
> clusters to verify that constraints work fine. Performance tests have been
> done via both SLS tests and Capacity Scheduler performance unit tests, and
> no regression was found. We have opened a JIRA to track Jenkins acceptance
> of the aggregated patch [2]. Documentation is in the process of being
> completed [3]. You can also check our design document for more details [4].
>
> Config flags are needed to enable this feature and it should not have any
> effect on YARN when turned off. Once merged, we plan to work on further
> improvements, which can be tracked in the umbrella YARN-7812 [5].
>
> Kindly do take a look at the branch and raise issues/concerns that need to
> be addressed before the merge.
>
> Many thanks to Konstantinos, Wangda, Panagiotis, Weiwei, and Sunil for
> their contributions to this effort, as well as Subru, Chris, Carlo, and
> Vinod for their inputs and discussions.
>
> Cheers
> -Arun
>
>
> [1] https://issues.apache.org/jira/browse/YARN-6592
> [2] https://issues.apache.org/jira/browse/YARN-7792
> [3] https://issues.apache.org/jira/browse/YARN-7780
> [4]
> https://issues.apache.org/jira/secure/attachment/12867869/YARN-6592-Rich-Placement-Constraints-Design-V1.pdf
> [5] https://issues.apache.org/jira/browse/YARN-7812
>
>


Re: Missing some trunk commit history

2017-12-14 Thread Sunil G
Hi Eric.

A branch merge happened during that time, and hence you might have seen
some old commits from that branch. If you go down further, you can see
those commits.

Copied from my git log:

commit 40b0045ebe0752cd3d1d09be00acbabdea983799
Author: Weiwei Yang 
Date:   Wed Dec 6 17:52:41 2017 +0800

YARN-7610. Extend Distributed Shell to support launching job with
opportunistic containers. Contributed by Weiwei Yang.

commit 56b1ff80dd9fbcde8d21a604eff0babb3a16418f
Author: Xiao Chen 
Date:   Tue Dec 5 20:48:02 2017 -0800

HDFS-12872. EC Checksum broken when BlockAccessToken is enabled.

commit 05c347fe51c01494ed8110f8f116a01c90205f13
Author: Weiwei Yang 
Date:   Wed Dec 6 12:21:52 2017 +0800

YARN-7611. Node manager web UI should display container type in
containers page. Contributed by Weiwei Yang.

commit 73b86979d661f4ad56fcfc3a05a403dfcb2a860e
Author: Kai Zheng 
Date:   Wed Dec 6 12:01:36 2017 +0800

HADOOP-15039. Move SemaphoredDelegatingExecutor to hadoop-common.
Contributed by Genmao Yu

commit 44b06d34a537f8b558007cc92a5d1a8e59b5d86b
Author: Akira Ajisaka 
Date:   Wed Dec 6 11:40:33 2017 +0900

HDFS-12889. Router UI is missing robots.txt file. Contributed by Bharat
Viswanadham.

commit 0311cf05358cd75388f48f048c44fba52ec90f00
Author: Wangda Tan 
Date:   Tue Dec 5 13:09:49 2017 -0800

YARN-7381. Enable the configuration:
yarn.nodemanager.log-container-debug-info.enabled by default in
yarn-default.xml. (Xuan Gong via wangda)

Change-Id: I1ed58dafad5cc276eea5c0b0813cf04f57d73a87

commit 6555af81a26b0b72ec3bee7034e01f5bd84b1564
Author: Aaron Fabbri 
Date:   Tue Dec 5 11:06:32 2017 -0800

HADOOP-14475 Metrics of S3A don't print out when enabled. Contributed
by Younger and Sean Mackrory.


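The interleaving Sunil describes — branch commits appearing among trunk commits, ordered by date — can be inspected with `git log --first-parent`, which walks only the mainline and hides commits that arrived via a merge. A self-contained sketch (throwaway repo in a temp directory; names are placeholders):

```shell
# Reproduce the effect: a commit merged from a branch shows up in the
# default `git log`, but `--first-parent` hides it.
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
git config user.email dev@example.com && git config user.name dev
echo 1 > f && git add f && git commit -qm trunk-1
git checkout -qb feature
echo 2 >> f && git commit -qam feature-1
git checkout -q -                          # back to the initial branch
git merge -q --no-ff feature -m "merge feature branch"
git log --oneline | wc -l                  # 3: trunk-1, feature-1, merge
git log --first-parent --oneline | wc -l   # 2: trunk-1 and the merge commit
```

So the "missing" history is usually still reachable; it just sits behind a merge commit rather than directly on the first-parent line.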

- Sunil


On Fri, Dec 15, 2017 at 12:29 AM Eric Yang  wrote:

> Hi all,
>
> While troubleshooting a trunk build failure, I noticed that the commit
> history for trunk between Nov 30th and Dec 6th was squashed or disappeared
> for no reason.  This seems to have taken place in the last 24 hours.  I can see
> the commit logs from github UI.  When doing a new clone from Apache Git and
> Github, the commit histories between those dates are gone.  I usually
> maintain two git repositories, one for testing and one for development.
> Both repositories were sync up with github frequently, and only test
> repository was updated today and the missing history only reflect in test
> repository.  This is the reason that I have the impression that this might
> have happened in the last 24 hours.  I did some spot check to see if the
> missing commits are in trunk.  The code seems to be in place, and only
> commit history is gone.
>
> Is there any way to fix the commit history?  Hopefully this is not a git
> bug, but some peer review might find out the root cause that could help to
> understand the damage.  Thank you
>
> Regards,
> Eric
>
>


Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-13 Thread Sunil G
+1 (binding)

Thanks Andrew Wang for driving this effort and also thanks to all others
who helped in this release. Kudos!!!

I tested this RC by building it from source. I ran into a couple of issues
(not blockers), HADOOP-15116 and YARN-7650. These can be tracked separately.


   - Ran many MR apps and verified both new YARN UI and old RM UI.
   - Tested the sanity of the below features; results matched the expected behavior
  - Application priority (verified CLI/REST/UI etc)
  - Application timeout
  - Intra Queue preemption with priority based
  - Inter Queue preemption
   - Tested basic NodeLabel scenarios.
  - Added a couple of labels to a few nodes and the behavior is
  correct.
  - Verified the old UI and the new YARN UI for labels.
  - Submitted apps to the labelled cluster and it works fine.
  - Also performed a few CLI commands related to node labels.
   - Tested basic HA cases and they seem correct. However, I hit one issue
   and raised HADOOP-15116; it is not a blocker.
   - Also tested the new YARN UI. All pages load correctly.
   (Users must enable CORS to access NodeManager pages.)
   - *Performance test*: I ran a tight-loop perf test on CS
   TestCapacitySchedulerPerf#testUserLimitThroughputForTwoResources.
   Results are a bit off w.r.t. 2.8 (~5% less). I will open a ticket and
   investigate with more tests to see whether it needs to be addressed.
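For reference, the NodeLabel sanity checks above map to CLI commands along these lines (an illustrative sketch requiring a running RM; hostnames and label names are placeholders, command names per the YARN node labels documentation):

```shell
# Add labels to the cluster (exclusivity is optional)
yarn rmadmin -addToClusterNodeLabels "x(exclusive=true),y(exclusive=false)"
# Attach labels to specific nodes
yarn rmadmin -replaceLabelsOnNode "node1.example.com=x node2.example.com=y"
# Verify the labels are registered
yarn cluster --list-node-labels
# Remove an unused label
yarn rmadmin -removeFromClusterNodeLabels "y"
```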


- Sunil G



On Sat, Dec 9, 2017 at 2:01 AM Andrew Wang <andrew.w...@cloudera.com> wrote:

> Hi all,
>
> Let me start, as always, by thanking the efforts of all the contributors
> who contributed to this release, especially those who jumped on the issues
> found in RC0.
>
> I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates 302
> fixed JIRAs since the previous 3.0.0-beta1 release.
>
> You can find the artifacts here:
>
> http://home.apache.org/~wang/3.0.0-RC1/
>
> I've done the traditional testing of building from the source tarball and
> running a Pi job on a single node cluster. I also verified that the shaded
> jars are not empty.
>
> Found one issue that create-release (probably due to the mvn deploy change)
> didn't sign the artifacts, but I fixed that by calling mvn one more time.
> Available here:
>
> https://repository.apache.org/content/repositories/orgapachehadoop-1075/
>
> This release will run the standard 5 days, closing on Dec 13th at 12:31pm
> Pacific. My +1 to start.
>
> Best,
> Andrew
>


Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

2017-12-13 Thread Sunil G
+1 (binding)

Thanks Junping for the effort.
I have deployed a cluster built from source tar ball.


   - Ran a few MR apps and verified the UI. App-related CLI commands are
   also fine.
   - Tested the sanity of the below features
  - Application priority
  - Application timeout
   - Tested basic NodeLabel scenarios.
  - Added some labels to a couple of nodes
  - Verified the old UI for labels
  - Submitted apps to the labelled cluster and it works fine.
  - Also performed a few CLI commands related to node labels
   - Tested basic HA cases


Thanks
Sunil G


On Tue, Dec 5, 2017 at 3:28 PM Junping Du <j...@hortonworks.com> wrote:

> Hi all,
>  I've created the first release candidate (RC0) for Apache Hadoop
> 2.8.3. This is our next maint release to follow up 2.8.2. It includes 79
> important fixes and improvements.
>
>   The RC artifacts are available at:
> http://home.apache.org/~junping_du/hadoop-2.8.3-RC0
>
>   The RC tag in git is: release-2.8.3-RC0
>
>   The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1072
>
>   Please try the release and vote; the vote will run for the usual 5
> working days, ending on 12/12/2017 PST time.
>
> Thanks,
>
> Junping
>


Re: [VOTE] Merge Absolute resource configuration support in Capacity Scheduler (YARN-5881) to trunk

2017-12-07 Thread Sunil G
Thank You all.

We merged the branch to trunk and updated the JIRAs accordingly. Thanks to
everyone who helped with this feature.

- Sunil and Wangda


On Thu, Dec 7, 2017 at 10:13 PM Sunil G <sun...@apache.org> wrote:

> And lastly +1 (binding) from myself.
> Vote passes with 6 (+1) bindings by considering Weiwei's vote as binding
> itself.
>
> Thank you very much for all who voted. I’ll merge to trunk by the end of
> today.
>
>
> - Sunil
>
>
>
> On Thu, Dec 7, 2017 at 8:08 AM Subramaniam V K <subru...@gmail.com> wrote:
>
>> +1.
>>
>> Skimmed through the design doc and uber patch and seems to be reasonable.
>>
>> This is a welcome addition especially w.r.t. cloud deployments so thanks
>> to everyone who worked on this.
>>
>> On Mon, Dec 4, 2017 at 8:18 PM, Rohith Sharma K S <
>> rohithsharm...@apache.org> wrote:
>>
>>> +1
>>>
>>> On Nov 30, 2017 7:26 AM, "Sunil G" <sun...@apache.org> wrote:
>>>
>>> > Hi All,
>>> >
>>> >
>>> > Based on the discussion at [1], I'd like to start a vote to merge
>>> feature
>>> > branch
>>> >
>>> > YARN-5881 to trunk. Vote will run for 7 days, ending Wednesday Dec 6 at
>>> > 6:00PM PDT.
>>> >
>>> >
>>> > This branch adds support to configure queue capacity as absolute
>>> resource
>>> > in
>>> >
>>> > capacity scheduler. This will help admins who want fine control of
>>> > resources of queues.
>>> >
>>> >
>>> > Feature development is done at YARN-5881 [2], jenkins build is here
>>> > (YARN-7510 [3]).
>>> >
>>> > All required tasks for this feature are committed. This feature changes
>>> > RM’s Capacity Scheduler only,
>>> >
>>> > and we did extensive tests for the feature in the last couple of months
>>> > including performance tests.
>>> >
>>> >
>>> > Key points:
>>> >
>>> > - The feature is turned off by default, and have to configure absolute
>>> > resource to enable same.
>>> >
>>> > - Detailed documentation about how to use this feature is done as part
>>> of
>>> > [4].
>>> >
>>> > - No major performance degradation is observed with this branch work.
>>> SLS
>>> > and UT performance
>>> >
>>> > tests are done.
>>> >
>>> >
>>> > There were 11 subtasks completed for this feature.
>>> >
>>> >
>>> > Huge thanks to everyone who helped with reviews, commits, guidance, and
>>> >
>>> > technical discussion/design, including Wangda Tan, Vinod Vavilapalli,
>>> > Rohith Sharma K S, Eric Payne .
>>> >
>>> >
>>> > [1] :
>>> > http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201711.mbox/%3CCACYiTuhKhF1JCtR7ZFuZSEKQ4sBvN_n_tV5GHsbJ3YeyJP%2BP4Q%40mail.gmail.com%3E
>>> >
>>> > [2] : https://issues.apache.org/jira/browse/YARN-5881
>>> >
>>> > [3] : https://issues.apache.org/jira/browse/YARN-7510
>>> >
>>> > [4] : https://issues.apache.org/jira/browse/YARN-7533
>>> >
>>> >
>>> > Regards
>>> >
>>> > Sunil and Wangda
>>> >
>>>
>>
>>


Re: [VOTE] Merge Absolute resource configuration support in Capacity Scheduler (YARN-5881) to trunk

2017-12-07 Thread Sunil G
And lastly, +1 (binding) from myself.
The vote passes with six binding +1s, counting Weiwei's vote as binding.

Thank you very much to all who voted. I’ll merge to trunk by the end of
today.

- Sunil



On Thu, Dec 7, 2017 at 8:08 AM Subramaniam V K <subru...@gmail.com> wrote:

> +1.
>
> Skimmed through the design doc and uber patch and seems to be reasonable.
>
> This is a welcome addition especially w.r.t. cloud deployments so thanks
> to everyone who worked on this.
>
> On Mon, Dec 4, 2017 at 8:18 PM, Rohith Sharma K S <
> rohithsharm...@apache.org> wrote:
>
>> +1
>>
>> On Nov 30, 2017 7:26 AM, "Sunil G" <sun...@apache.org> wrote:
>>
>> > Hi All,
>> >
>> >
>> > Based on the discussion at [1], I'd like to start a vote to merge
>> feature
>> > branch
>> >
>> > YARN-5881 to trunk. Vote will run for 7 days, ending Wednesday Dec 6 at
>> > 6:00PM PDT.
>> >
>> >
>> > This branch adds support to configure queue capacity as absolute
>> resource
>> > in
>> >
>> > capacity scheduler. This will help admins who want fine control of
>> > resources of queues.
>> >
>> >
>> > Feature development is done at YARN-5881 [2], jenkins build is here
>> > (YARN-7510 [3]).
>> >
>> > All required tasks for this feature are committed. This feature changes
>> > RM’s Capacity Scheduler only,
>> >
>> > and we did extensive tests for the feature in the last couple of months
>> > including performance tests.
>> >
>> >
>> > Key points:
>> >
>> > - The feature is turned off by default, and have to configure absolute
>> > resource to enable same.
>> >
>> > - Detailed documentation about how to use this feature is done as part
>> of
>> > [4].
>> >
>> > - No major performance degradation is observed with this branch work.
>> SLS
>> > and UT performance
>> >
>> > tests are done.
>> >
>> >
>> > There were 11 subtasks completed for this feature.
>> >
>> >
>> > Huge thanks to everyone who helped with reviews, commits, guidance, and
>> >
>> > technical discussion/design, including Wangda Tan, Vinod Vavilapalli,
>> > Rohith Sharma K S, Eric Payne .
>> >
>> >
>> > [1] :
>> > http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201711.mbox/%3CCACYiTuhKhF1JCtR7ZFuZSEKQ4sBvN_n_tV5GHsbJ3YeyJP%2BP4Q%40mail.gmail.com%3E
>> >
>> > [2] : https://issues.apache.org/jira/browse/YARN-5881
>> >
>> > [3] : https://issues.apache.org/jira/browse/YARN-7510
>> >
>> > [4] : https://issues.apache.org/jira/browse/YARN-7533
>> >
>> >
>> > Regards
>> >
>> > Sunil and Wangda
>> >
>>
>
>


Re: [DISCUSS] Merge Absolute resource configuration support in Capacity Scheduler (YARN-5881) to trunk

2017-11-30 Thread Sunil G
Thanks everyone for the feedback!

Based on the positive feedback, we started the voting thread at
http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201711.mbox/%3CCACYiTuhzMrd_kFRT7_f4VBHejrajbCnVB1wmgHLMLXRr58y0MA%40mail.gmail.com%3E

@Carlo: Yes, this change should be straightforward except for some minor
conflicts.

- Sunil



On Thu, Nov 30, 2017 at 9:34 AM Carlo Aldo Curino <carlo.cur...@gmail.com>
wrote:

> I haven't tested this, but I support the merge as the patch is very much
> needed for MS usecases as well... Can this be cherry-picked on 2.9 easily?
>
> Thanks for this contribution!
>
> Cheers,
> Carlo
>
> On Nov 29, 2017 6:34 PM, "Weiwei Yang" <cheersy...@hotmail.com> wrote:
>
>> Hi Sunil
>>
>> +1 from my side.
>> Actually we have applied some of these patches to our production cluster
>> since Sep this year, on over 2000+ nodes and it works nicely. +1 for the
>> merge. I am pretty sure this feature will help a lot of users, especially
>> those on cloud. Thanks for getting this done, great job!
>>
>> --
>> Weiwei
>>
>> On 29 Nov 2017, 9:23 PM +0800, Rohith Sharma K S <
>> rohithsharm...@apache.org>, wrote:
>> +1, thanks Sunil for working on this feature!
>>
>> -Rohith Sharma K S
>>
>> On 24 November 2017 at 23:19, Sunil G <sun...@apache.org> wrote:
>>
>> Hi All,
>>
>> We would like to bring up the discussion of merging “absolute min/max
>> resources support in capacity scheduler” branch (YARN-5881) [2] into trunk
>> in a few weeks. The goal is to get it in for Hadoop 3.1.
>>
>> *Major work happened in this branch*
>>
>> - YARN-6471. Support to add min/max resource configuration for a queue
>> - YARN-7332. Compute effectiveCapacity per each resource vector
>> - YARN-7411. Inter-Queue preemption's computeFixpointAllocation need to
>> handle absolute resources.
>>
>> *Regarding design details*
>>
>> Please refer [1] for detailed design document.
>>
>> *Regarding to testing:*
>>
>> We did extensive tests for the feature in the last couple of months.
>> Comparing to latest trunk.
>>
>> - For SLS benchmark: We didn't see observable performance gap from
>> simulated test based on 8K nodes SLS traces (1 PB memory). We got 3k+
>> containers allocated per second.
>>
>> - For microbenchmark: We use performance test cases added by YARN 6775, it
>> did not show much performance regression comparing to trunk.
>>
>> *YARN-5881* <https://issues.apache.org/jira/browse/YARN-5881>
>>
>> #ResourceTypes = 2. Avg of fastest 20: 55294.52
>> #ResourceTypes = 2. Avg of fastest 20: 55401.66
>>
>> *trunk*
>> #ResourceTypes = 2. Avg of fastest 20: 55865.92
>> #ResourceTypes = 2. Avg of fastest 20: 55096.418
>>
>> *Regarding to API stability:*
>>
>> All newly added @Public APIs are @Unstable.
>>
>> Documentation jira [3] could help to provide detailed configuration
>> details. This feature works from end-to-end and we are running this in our
>> development cluster for last couple of months and undergone good amount of
>> testing. Branch code is run against trunk and tracked via [4].
>>
>> We would love to get your thoughts before opening a voting thread.
>>
>> Special thanks to a team of folks who worked hard and contributed towards
>> this efforts including design discussion / patch / reviews, etc.: Wangda
>> Tan, Vinod Kumar Vavilappali, Rohith Sharma K S.
>>
>> [1] :
>> https://issues.apache.org/jira/secure/attachment/12855984/YARN-5881.Support.Absolute.Min.Max.Resource.In.Capacity.Scheduler.design-doc.v1.pdf
>> [2] : https://issues.apache.org/jira/browse/YARN-5881
>>
>> [3] : https://issues.apache.org/jira/browse/YARN-7533
>>
>> [4] : https://issues.apache.org/jira/browse/YARN-7510
>>
>> Thanks,
>>
>> Sunil G and Wangda Tan
>>
>>


[VOTE] Merge Absolute resource configuration support in Capacity Scheduler (YARN-5881) to trunk

2017-11-29 Thread Sunil G
Hi All,


Based on the discussion at [1], I'd like to start a vote to merge feature
branch

YARN-5881 to trunk. Vote will run for 7 days, ending Wednesday Dec 6 at
6:00PM PDT.


This branch adds support to configure queue capacity as an absolute resource
in the capacity scheduler. This will help admins who want fine-grained
control of queue resources.


Feature development was done at YARN-5881 [2]; the Jenkins build is tracked
at YARN-7510 [3].

All required tasks for this feature are committed. This feature changes
RM’s Capacity Scheduler only, and we did extensive tests for the feature in
the last couple of months, including performance tests.


Key points:

- The feature is turned off by default; absolute resources must be
configured to enable it.

- Detailed documentation about how to use this feature is done as part of
[4].

- No major performance degradation was observed with this branch. SLS and UT
performance tests were done.
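For context, with this feature a queue's capacity can be given as an absolute resource vector instead of a percentage. A hedged capacity-scheduler.xml sketch (the queue name and values here are purely illustrative; the bracketed syntax is per the feature's documentation tracked in YARN-7533):

```xml
<!-- Absolute resources instead of percentage-based capacity -->
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>[memory=10240,vcores=12]</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
  <value>[memory=20480,vcores=24]</value>
</property>
```

Per the documentation, percentage-based and absolute configurations should not be mixed under the same queue hierarchy.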


There were 11 subtasks completed for this feature.


Huge thanks to everyone who helped with reviews, commits, guidance, and

technical discussion/design, including Wangda Tan, Vinod Vavilapalli,
Rohith Sharma K S, Eric Payne .


[1] :
http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201711.mbox/%3CCACYiTuhKhF1JCtR7ZFuZSEKQ4sBvN_n_tV5GHsbJ3YeyJP%2BP4Q%40mail.gmail.com%3E

[2] : https://issues.apache.org/jira/browse/YARN-5881

[3] : https://issues.apache.org/jira/browse/YARN-7510

[4] : https://issues.apache.org/jira/browse/YARN-7533


Regards

Sunil and Wangda


Re: [DISCUSS] Merge Absolute resource configuration support in Capacity Scheduler (YARN-5881) to trunk

2017-11-29 Thread Sunil G
Thanks Eric. Appreciate the support in verifying the feature.
YARN-7575 is closed now.

- Sunil


On Tue, Nov 28, 2017 at 11:15 PM Eric Payne
<eric.payne1...@yahoo.com.invalid> wrote:

> Thanks Sunil for the great work on this feature.
> I looked through the design document, reviewed the code, and tested out
> branch YARN-5881. The design makes sense and the code looks like it is
> implementing the design in a sensible way. However, I have encountered a
> couple of bugs. I opened https://issues.apache.org/jira/browse/YARN-7575
> to track my findings. Basically, here's a summary:
>
> The design document from YARN-5881 says that for max-capacity:
> 3)  For each queue, we require: a) if max-resource not set, it
> automatically set to parent.max-resource
>
> When I try not setting
> any yarn.scheduler.capacity.<queue-path>.maximum-capacity, the RM UI
> scheduler page refuses to render. It looks like it's in
> CapacitySchedulerPage$LeafQueueInfoBlock.
>
> Also... A job will run in the leaf queue with no max capacity set and it
> will grow to the max capacity of the cluster, but if I add resources to the
> node, the job won't grow any more even though it has pending resources.
>
> Thanks,Eric
>
>
>   From: Sunil G <sun...@apache.org>
>  To: "yarn-...@hadoop.apache.org" <yarn-...@hadoop.apache.org>; Hadoop
> Common <common-dev@hadoop.apache.org>; Hdfs-dev <
> hdfs-...@hadoop.apache.org>; "mapreduce-...@hadoop.apache.org" <
> mapreduce-...@hadoop.apache.org>
>  Sent: Friday, November 24, 2017 11:49 AM
>  Subject: [DISCUSS] Merge Absolute resource configuration support in
> Capacity Scheduler (YARN-5881) to trunk
>
> Hi All,
>
> We would like to bring up the discussion of merging “absolute min/max
> resources support in capacity scheduler” branch (YARN-5881) [2] into trunk
> in a few weeks. The goal is to get it in for Hadoop 3.1.
>
> *Major work happened in this branch*
>
>   - YARN-6471. Support to add min/max resource configuration for a queue
>   - YARN-7332. Compute effectiveCapacity per each resource vector
>   - YARN-7411. Inter-Queue preemption's computeFixpointAllocation need to
>   handle absolute resources.
>
> *Regarding design details*
>
> Please refer [1] for detailed design document.
>
> *Regarding to testing:*
>
> We did extensive tests for the feature in the last couple of months.
> Comparing to latest trunk.
>
> - For SLS benchmark: We didn't see observable performance gap from
> simulated test based on 8K nodes SLS traces (1 PB memory). We got 3k+
> containers allocated per second.
>
> - For microbenchmark: We use performance test cases added by YARN 6775, it
> did not show much performance regression comparing to trunk.
>
> *YARN-5881* <https://issues.apache.org/jira/browse/YARN-5881>
>
> #ResourceTypes = 2. Avg of fastest 20: 55294.52
> #ResourceTypes = 2. Avg of fastest 20: 55401.66
>
> *trunk*
> #ResourceTypes = 2. Avg of fastest 20: 55865.92
> #ResourceTypes = 2. Avg of fastest 20: 55096.418
>
> *Regarding to API stability:*
>
> All newly added @Public APIs are @Unstable.
>
> Documentation jira [3] could help to provide detailed configuration
> details. This feature works from end-to-end and we are running this in our
> development cluster for last couple of months and undergone good amount of
> testing. Branch code is run against trunk and tracked via [4].
>
> We would love to get your thoughts before opening a voting thread.
>
> Special thanks to a team of folks who worked hard and contributed towards
> this efforts including design discussion / patch / reviews, etc.: Wangda
> Tan, Vinod Kumar Vavilappali, Rohith Sharma K S.
>
> [1] :
>
> https://issues.apache.org/jira/secure/attachment/12855984/YARN-5881.Support.Absolute.Min.Max.Resource.In.Capacity.Scheduler.design-doc.v1.pdf
> [2] : https://issues.apache.org/jira/browse/YARN-5881
>
> [3] : https://issues.apache.org/jira/browse/YARN-7533
>
> [4] : https://issues.apache.org/jira/browse/YARN-7510
>
> Thanks,
>
> Sunil G and Wangda Tan
>
>


[DISCUSS] Merge Absolute resource configuration support in Capacity Scheduler (YARN-5881) to trunk

2017-11-24 Thread Sunil G
Hi All,

We would like to bring up the discussion of merging “absolute min/max
resources support in capacity scheduler” branch (YARN-5881) [2] into trunk
in a few weeks. The goal is to get it in for Hadoop 3.1.

*Major work happened in this branch*

   - YARN-6471. Support to add min/max resource configuration for a queue
   - YARN-7332. Compute effectiveCapacity per each resource vector
   - YARN-7411. Inter-Queue preemption's computeFixpointAllocation need to
   handle absolute resources.

*Regarding design details*

Please refer [1] for detailed design document.

*Regarding testing:*

We did extensive tests for the feature in the last couple of months,
comparing against the latest trunk.

- For the SLS benchmark: We didn't see an observable performance gap in
simulated tests based on 8K-node SLS traces (1 PB memory). We got 3k+
containers allocated per second.

- For microbenchmarks: We used the performance test cases added by YARN-6775;
they did not show much performance regression compared to trunk.

*YARN-5881* <https://issues.apache.org/jira/browse/YARN-5881>

#ResourceTypes = 2. Avg of fastest 20: 55294.52
#ResourceTypes = 2. Avg of fastest 20: 55401.66

*trunk*
#ResourceTypes = 2. Avg of fastest 20: 55865.92
#ResourceTypes = 2. Avg of fastest 20: 55096.418

*Regarding API stability:*

All newly added @Public APIs are @Unstable.

The documentation JIRA [3] provides detailed configuration information. This
feature works end-to-end; we have been running it in our development cluster
for the last couple of months and it has undergone a good amount of testing.
Branch code is run against trunk and tracked via [4].

We would love to get your thoughts before opening a voting thread.

Special thanks to a team of folks who worked hard and contributed towards
these efforts, including design discussions, patches, reviews, etc.: Wangda
Tan, Vinod Kumar Vavilapalli, Rohith Sharma K S.

[1] :
https://issues.apache.org/jira/secure/attachment/12855984/YARN-5881.Support.Absolute.Min.Max.Resource.In.Capacity.Scheduler.design-doc.v1.pdf
[2] : https://issues.apache.org/jira/browse/YARN-5881

[3] : https://issues.apache.org/jira/browse/YARN-7533

[4] : https://issues.apache.org/jira/browse/YARN-7510

Thanks,

Sunil G and Wangda Tan


Re: [VOTE] Release Apache Hadoop 2.9.0 (RC3)

2017-11-15 Thread Sunil G
+1 (binding)

Built from source.



   - Tested a few cases in an HA cluster and performed failover using
   rmadmin commands etc. This works fine, including submitting apps.
   - I also tested many MR apps and all are running fine w/o any issues.
   - Mainly tested the sanity of the below features too (works fine)
  - Application priority
  - Application timeout
   - Tested basic NodeLabel scenarios.
  - Added some labels to a couple of nodes
  - Verified the old UI for labels
  - Submitted apps to the labelled cluster and it works fine.
  - Also performed a few CLI commands related to node labels
   - Verified the new YARN UI and accessed various pages while the cluster
   was in use. It seems fine to me.
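The HA failover checks mentioned above can be driven with rmadmin commands like the following (a sketch; rm1/rm2 are the configured RM IDs for the cluster, and with automatic failover enabled these manual transitions additionally require --forcemanual):

```shell
# Check which RM is currently active
yarn rmadmin -getServiceState rm1
# Force a failover from rm1 to rm2
yarn rmadmin -transitionToStandby rm1
yarn rmadmin -transitionToActive rm2
```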

- Sunil


On Tue, Nov 14, 2017 at 5:40 AM Arun Suresh  wrote:

> Hi Folks,
>
> Apache Hadoop 2.9.0 is the first release of Hadoop 2.9 line and will be the
> starting release for Apache Hadoop 2.9.x line - it includes 30 New Features
> with 500+ subtasks, 407 Improvements, and 790 Bug fixes, all newly fixed
> issues since 2.8.2.
>
> More information about the 2.9.0 release plan can be found here:
> https://cwiki.apache.org/confluence/display/HADOOP/Roadmap#Roadmap-Version2.9
>
> New RC is available at:
> https://home.apache.org/~asuresh/hadoop-2.9.0-RC3/
>
> The RC tag in git is: release-2.9.0-RC3, and the latest commit id is:
> 756ebc8394e473ac25feac05fa493f6d612e6c50.
>
> The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1068/
>
> We are carrying over the votes from the previous RC given that the delta is
> the license fix.
>
> Given the above - we are also going to stick with the original deadline for
> the vote : ending on Friday 17th November 2017 2pm PT time.
>
> Thanks,
> -Arun/Subru
>


Re: [VOTE] Release Apache Hadoop 2.9.0 (RC2)

2017-11-14 Thread Sunil G
Hi Mukul

We have started an RC3 release thread, as an issue was reported against RC2.
Kindly help to check the same.

- Sunil

On Tue, Nov 14, 2017 at 3:00 PM Mukul Kumar Singh 
wrote:

> +1 (non-binding)
>
> I built from source on Mac OS X 10.13.1 Java 1.8.0_111
>
> - Deployed on a single node cluster.
> - Deployed a ViewFS cluster with two hdfs mount points.
> - Performed basic sanity checks.
> - Performed basic DFS operations.
>
> Thanks,
> Mukul
>
> > On 13-Nov-2017, at 3:01 AM, Subru Krishnan  wrote:
> >
> > Hi Folks,
> >
> > Apache Hadoop 2.9.0 is the first release of Hadoop 2.9 line and will be
> the
> > starting release for Apache Hadoop 2.9.x line - it includes 30 New
> Features
> > with 500+ subtasks, 407 Improvements, and 790 Bug fixes, all newly fixed
> > issues since 2.8.2.
> >
> > More information about the 2.9.0 release plan can be found here:
> > https://cwiki.apache.org/confluence/display/HADOOP/Roadmap#Roadmap-Version2.9
> >
> > New RC is available at:
> http://home.apache.org/~asuresh/hadoop-2.9.0-RC2/
> >
> > The RC tag in git is: release-2.9.0-RC2, and the latest commit id is:
> > 1eb05c1dd48fbc9e4b375a76f2046a59103bbeb1.
> >
> > The maven artifacts are available via repository.apache.org at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1067/
> >
> > Please try the release and vote; the vote will run for the usual 5 days,
> > ending on Friday 17th November 2017 2pm PT time.
> >
> > We want to give a big shout out to Sunil, Varun, Rohith, Wangda, Vrushali
> > and Inigo for the extensive testing/validation which helped prepare for
> > RC2. Do report your results in this vote as it'll be very useful to the
> > entire community.
> >
> > Thanks,
> > -Subru/Arun
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] Release Apache Hadoop 2.9.0 (RC2)

2017-11-13 Thread Sunil G
+1 (binding)

Deployed cluster built from source.



   - Tested a few cases in an HA cluster and performed failover using
   rmadmin commands etc. This works fine, including submitting apps.
   - I also tested many MR apps and all are running fine w/o any issues.
   - Mainly tested the sanity of the below features too (works fine)
  - Application priority
  - Application timeout
   - Tested basic NodeLabel scenarios.
  - Added some labels to a couple of nodes
  - Verified the old UI for labels
  - Submitted apps to the labelled cluster and it works fine.
  - Also performed a few CLI commands related to node labels
   - Verified the new YARN UI and accessed various pages while the cluster
   was in use. It seems fine to me.


Thanks all folks who participated in this release, appreciate the same!

- Sunil


On Mon, Nov 13, 2017 at 3:01 AM Subru Krishnan  wrote:

> Hi Folks,
>
> Apache Hadoop 2.9.0 is the first release of Hadoop 2.9 line and will be the
> starting release for Apache Hadoop 2.9.x line - it includes 30 New Features
> with 500+ subtasks, 407 Improvements, and 790 Bug fixes, all newly fixed
> issues since 2.8.2.
>
> More information about the 2.9.0 release plan can be found here:
> https://cwiki.apache.org/confluence/display/HADOOP/Roadmap#Roadmap-Version2.9
>
> New RC is available at: http://home.apache.org/~asuresh/hadoop-2.9.0-RC2/
>
> The RC tag in git is: release-2.9.0-RC2, and the latest commit id is:
> 1eb05c1dd48fbc9e4b375a76f2046a59103bbeb1.
>
> The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1067/
>
> Please try the release and vote; the vote will run for the usual 5 days,
> ending on Friday 17th November 2017 2pm PT time.
>
> We want to give a big shout out to Sunil, Varun, Rohith, Wangda, Vrushali
> and Inigo for the extensive testing/validation which helped prepare for
> RC2. Do report your results in this vote as it'll be very useful to the
> entire community.
>
> Thanks,
> -Subru/Arun
>


Re: [VOTE] Release Apache Hadoop 2.9.0 (RC0)

2017-11-07 Thread Sunil G
Hi Subru and Arun.

Thanks for driving 2.9 release. Great work!

I installed cluster built from source.
- Ran a few MR jobs with application priority enabled. They run fine.
- Accessed the new UI and it also seems fine.

However I am also getting same issue as Rohith reported.
- Started an HA cluster
- Pushed RM to standby
- Pushed RM back to active, then saw an exception.

org.apache.hadoop.ha.ServiceFailedException: RM could not transition to
Active
at org.apache.hadoop.yarn.server.resourcemanager.ActiveStandbyElectorBasedElectorService.becomeActive(ActiveStandbyElectorBasedElectorService.java:146)
at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:894)

Caused by: org.apache.zookeeper.KeeperException$NoAuthException:
KeeperErrorCode = NoAuth
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:949)

Will check and post more details,

- Sunil


On Tue, Nov 7, 2017 at 12:47 PM Rohith Sharma K S 
wrote:

> Thanks Subru/Arun for the great work!
>
> Downloaded source and built from it. Deployed RM HA non-secured cluster
> along with new YARN UI and ATSv2.
>
> I am facing a basic RM HA switch issue after the first successful start. *Is
> anyone else facing this issue?*
>
> When RM is switched from ACTIVE to STANDBY to ACTIVE, RM never switches to
> active successfully. The exception trace I see in the log is
>
> 2017-11-07 12:35:56,540 WARN org.apache.hadoop.ha.ActiveStandbyElector:
> Exception handling the winning of election
> org.apache.hadoop.ha.ServiceFailedException: RM could not transition to
> Active
> at
>
> org.apache.hadoop.yarn.server.resourcemanager.ActiveStandbyElectorBasedElectorService.becomeActive(ActiveStandbyElectorBasedElectorService.java:146)
> at
>
> org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:894)
> at
>
> org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:473)
> at
>
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:599)
> at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> Caused by: org.apache.hadoop.ha.ServiceFailedException: Error when
> transitioning to Active mode
> at
>
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:325)
> at
>
> org.apache.hadoop.yarn.server.resourcemanager.ActiveStandbyElectorBasedElectorService.becomeActive(ActiveStandbyElectorBasedElectorService.java:144)
> ... 4 more
> Caused by: org.apache.hadoop.service.ServiceStateException:
> org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode =
> NoAuth
> at
>
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
> at
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:205)
> at
>
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1131)
> at
>
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1171)
> at
>
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1167)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at
>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
> at
>
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1167)
> at
>
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:320)
> ... 5 more
> Caused by: org.apache.zookeeper.KeeperException$NoAuthException:
> KeeperErrorCode = NoAuth
> at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
> at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:949)
> at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:915)
> at
>
> org.apache.curator.framework.imps.CuratorTransactionImpl.doOperation(CuratorTransactionImpl.java:159)
> at
>
> org.apache.curator.framework.imps.CuratorTransactionImpl.access$200(CuratorTransactionImpl.java:44)
> at
>
> org.apache.curator.framework.imps.CuratorTransactionImpl$2.call(CuratorTransactionImpl.java:129)
> at
>
> org.apache.curator.framework.imps.CuratorTransactionImpl$2.call(CuratorTransactionImpl.java:125)
> at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107)
> at
>
> org.apache.curator.framework.imps.CuratorTransactionImpl.commit(CuratorTransactionImpl.java:122)
> at
>
> org.apache.hadoop.util.curator.ZKCuratorManager$SafeTransaction.commit(ZKCuratorManager.java:403)
> at
>
> org.apache.hadoop.util.curator.ZKCuratorManager.safeSetData(ZKCuratorManager.java:372)
> ...

Re: [VOTE] Merge yarn-native-services branch into trunk

2017-11-03 Thread Sunil G
+1 (binding)

Tested the branch code and brought up services like sleep and httpd. Also
verified the UI.

- Sunil


On Tue, Oct 31, 2017 at 1:36 AM Jian He <j...@hortonworks.com> wrote:

> Hi All,
>
> I would like to restart the vote for merging yarn-native-services to trunk.
> Since last vote, we have been working on several issues in documentation,
> DNS, CLI modifications etc. We believe now the feature is in a much better
> shape.
>
> Some background:
> At a high level, the following are the key features implemented.
> - YARN-5079[1]. A native YARN framework (ApplicationMaster) to orchestrate
> existing services onto YARN, either docker or non-docker based.
> - YARN-4793[2]. A REST API service embedded in RM (optional) for users to
> deploy a service via a simple JSON spec
> - YARN-4757[3]. Extending today's service registry with a simple DNS
> service to enable users to discover services deployed on YARN via standard
> DNS lookup
> - YARN-6419[4]. UI support for native-services on the new YARN UI
> All these new services are optional, sit outside of the existing
> system, and have no impact on the existing system if disabled.
>
> Special thanks to a team of folks who worked hard towards this: Billie
> Rinaldi, Gour Saha, Vinod Kumar Vavilapalli, Jonathan Maron, Rohith Sharma
> K S, Sunil G, Akhil PB, Eric Yang. This effort could not have been possible
> without their ideas and hard work.
> Also thanks Allen for some review and verifications.
>
> Thanks,
> Jian
>
> [1] https://issues.apache.org/jira/browse/YARN-5079
> [2] https://issues.apache.org/jira/browse/YARN-4793
> [3] https://issues.apache.org/jira/browse/YARN-4757
> [4] https://issues.apache.org/jira/browse/YARN-6419
>


Re: [VOTE] Merge YARN-3926 (resource profile) to trunk

2017-08-26 Thread Sunil G
Hi Daniel

Thank you very much for the support.

* When you say that the feature can be turned
off, do you mean resource types or resource profiles?  I know there's an
off-by-default property that governs resource profiles, but I didn't see
any way to turn off resource types.
Yes, *yarn.resourcemanager.resource-profiles.enabled* is false by default
and controls turning this feature off/on. Regarding new resource types, they
are loaded from "*resource-types.xml*", and by default this XML file is not
available in the package, which prevents any issues in the default case. Once
this file is added to a cluster, new resource types will be loaded from it.

* Even if only CPU and memory are configured, i.e. no additional resource
types, the code path is different than it was.
Earlier, primitive data types were used to represent vcores and memory. As
part of the resource profile work, every resource in YARN is represented as a
ResourceInformation entry inside the existing Resource object. So memory
and vcores remain accessible and operable through the same set of public APIs
from Resources or ResourceCalculator (DRC) even when the feature is off (the
code path is the same, but improved to use a unified ResourceInformation
class instead of the memory/vcores primitive types).
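As a rough illustration of that design shift, here is a simplified standalone model — not the actual YARN classes or API — showing how a map of named entries generalizes the old two-primitive-field Resource:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Simplified model (not the real YARN API) of the idea behind
 * ResourceInformation: instead of two primitive fields for memory and
 * vcores, a Resource holds named resource entries, so new resource
 * types (e.g. GPUs) can be added without changing the class.
 */
public class ResourceModel {
    static final class ResourceInformation {
        final String name;
        final long value;
        ResourceInformation(String name, long value) {
            this.name = name;
            this.value = value;
        }
    }

    static final class Resource {
        private final Map<String, ResourceInformation> resources = new LinkedHashMap<>();
        Resource(long memoryMb, long vcores) {
            // Memory and vcores remain mandatory entries, as before.
            set("memory-mb", memoryMb);
            set("vcores", vcores);
        }
        void set(String name, long value) {
            resources.put(name, new ResourceInformation(name, value));
        }
        long get(String name) {
            ResourceInformation ri = resources.get(name);
            return ri == null ? 0L : ri.value;
        }
    }

    public static void main(String[] args) {
        Resource r = new Resource(8192, 4);
        r.set("yarn.io/gpu", 2);  // a hypothetical type loaded from resource-types.xml
        System.out.println(r.get("memory-mb") + " " + r.get("vcores")
            + " " + r.get("yarn.io/gpu"));  // prints "8192 4 2"
    }
}
```

The point of the design is that memory and vcores keep working through the same lookup path whether or not any extra types are configured.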

Thanks
Sunil




On Sat, Aug 26, 2017 at 8:10 PM Daniel Templeton 
wrote:

> Quick question, Wangda.  When you say that the feature can be turned
> off, do you mean resource types or resource profiles?  I know there's an
> off-by-default property that governs resource profiles, but I didn't see
> any way to turn off resource types.  Even if only CPU and memory are
> configured, i.e. no additional resource types, the code path is
> different than it was.  Specifically, where CPU and memory were
> primitives before, they're now entries in an array whose indexes have to
> be looked up through the ResourceUtils class.  Did I miss something?
>
> For those who haven't followed the feature closely, there are really two
> features here.  Resource types allows for declarative extension of the
> resource system in YARN.  Resource profiles builds on top of resource
> types to allow a user to request a group of resources as a profile, much
> like EC2 instance types, e.g. "fast-compute" might mean 32GB RAM, 8
> vcores, and 2 GPUs.
>
> Daniel
>
> On 8/23/17 11:49 AM, Wangda Tan wrote:
> >   Hi folks,
> >
> > Per earlier discussion [1], I'd like to start a formal vote to merge
> > feature branch YARN-3926 (Resource profile) to trunk. The vote will run
> for
> > 7 days and will end August 30 10:00 AM PDT.
> >
> > Briefly, YARN-3926 can extend resource model of YARN to support resource
> > types other than CPU and memory, so it will be a cornerstone of features
> > like GPU support (YARN-6223), disk scheduling/isolation (YARN-2139), FPGA
> > support (YARN-5983), network IO scheduling/isolation (YARN-2140). In
> > addition to that, YARN-3926 allows admin to preconfigure resource profiles
> > in the cluster, for example, m3.large means <2 vcores, 8 GB memory, 64 GB
> > disk>, so applications can request "m3.large" profile instead of specifying
> > all resource types' values.
> >
> > There are 32 subtasks that were completed as part of this effort.
> >
> > This feature needs to be explicitly turned on before use. We paid close
> > attention to compatibility, performance, and scalability of this feature,
> > mentioned in [1], we didn't see observable performance regression in
> large
> > scale SLS (scheduler load simulator) executions and saw less than 5%
> > performance regression by using micro benchmark added by YARN-6775.
> >
> > This feature works from end-to-end (including UI/CLI/application/server),
> > we have set up a cluster with this feature turned on running for several
> > weeks,
> > we didn't see any issues by far.
> >
> > Merge JIRA: YARN-7013 (Jenkins gave +1 already).
> > Documentation: YARN-7056
> >
> > Special thanks to a team of folks who worked hard and contributed towards
> > this effort including design discussion/development/reviews, etc.: Varun
> > Vasudev, Sunil Govind, Daniel Templeton, Vinod Vavilapalli, Yufei Gu,
> > Karthik Kambatla, Jason Lowe, Arun Suresh.
> >
> > Regards,
> > Wangda Tan
> >
> > [1]
> >
> http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201708.mbox/%3CCAD%2B%2BeCnjEHU%3D-M33QdjnND0ZL73eKwxRua4%3DBbp4G8inQZmaMg%40mail.gmail.com%3E
> >
>
>
> -
> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] Merge YARN-3926 (resource profile) to trunk

2017-08-24 Thread Sunil G
Thank you very much Varun Vasudev, Wangda Tan, Daniel and all the folks who
helped in getting this feature to this level.

Starting with my +1 (binding).


# Tested a 5-node cluster with resource profiles enabled/disabled (the
feature is disabled by default)

# All apis added are marked as Unstable/Evolving (very few)

# There is no compatibility break with older versions (we have also added UT
cases to ensure the same)

# Performance tests were done using SLS and also with some tight-loop unit
tests. There is not much regression compared with current trunk.

# Latest jenkins +1 on YARN-7013 for whole branch code.

# Verified old RM UI and new YARN UI (newly added resources could be seen
easily)


Once again, thanks to all the folks who helped in getting this feature in. Kudos!


Thanks

- Sunil


On Thu, Aug 24, 2017 at 12:20 AM Wangda Tan  wrote:

>  Hi folks,
>
> Per earlier discussion [1], I'd like to start a formal vote to merge
> feature branch YARN-3926 (Resource profile) to trunk. The vote will run for
> 7 days and will end August 30 10:00 AM PDT.
>
> Briefly, YARN-3926 can extend resource model of YARN to support resource
> types other than CPU and memory, so it will be a cornerstone of features
> like GPU support (YARN-6223), disk scheduling/isolation (YARN-2139), FPGA
> support (YARN-5983), network IO scheduling/isolation (YARN-2140). In
> addition to that, YARN-3926 allows admin to preconfigure resource profiles
> in the cluster, for example, m3.large means <2 vcores, 8 GB memory, 64 GB
> disk>, so applications can request "m3.large" profile instead of specifying
> all resource types' values.
>
> There are 32 subtasks that were completed as part of this effort.
>
> This feature needs to be explicitly turned on before use. We paid close
> attention to compatibility, performance, and scalability of this feature,
> mentioned in [1], we didn't see observable performance regression in large
> scale SLS (scheduler load simulator) executions and saw less than 5%
> performance regression by using micro benchmark added by YARN-6775.
>
> This feature works from end-to-end (including UI/CLI/application/server),
> we have set up a cluster with this feature turned on running for several weeks,
> we didn't see any issues by far.
>
> Merge JIRA: YARN-7013 (Jenkins gave +1 already).
> Documentation: YARN-7056
>
> Special thanks to a team of folks who worked hard and contributed towards
> this effort including design discussion/development/reviews, etc.: Varun
> Vasudev, Sunil Govind, Daniel Templeton, Vinod Vavilapalli, Yufei Gu,
> Karthik Kambatla, Jason Lowe, Arun Suresh.
>
> Regards,
> Wangda Tan
>
> [1]
>
> http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201708.mbox/%3CCAD%2B%2BeCnjEHU%3D-M33QdjnND0ZL73eKwxRua4%3DBbp4G8inQZmaMg%40mail.gmail.com%3E
>


Re: [VOTE] Merge feature branch YARN-5355 (Timeline Service v2) to trunk

2017-08-24 Thread Sunil G
Thank you very much Vrushali, Rohith, Varun and other folks who made this
happen. Great work, much appreciated!!

+1 (binding) from my side:

# Tested ATSv2 in a secure cluster. Ran some basic jobs
# Accessed the new YARN UI, which shows various flows/flow activity etc. Seems
fine.
# Based on the code, it looks like all APIs are compatible.
# REST API docs look fine as well; I guess we could improve them a bit
more post-merge.
# Adding to the thoughts discussed here, native services could also publish
events to ATSv2. I think that work has also happened in the branch.

Looking forward to a much wider adoption of ATSv2 with more projects.

Thanks
Sunil


On Tue, Aug 22, 2017 at 12:02 PM Vrushali Channapattan <
vrushalic2...@gmail.com> wrote:

> Hi folks,
>
> Per earlier discussion [1], I'd like to start a formal vote to merge
> feature branch YARN-5355 [2] (Timeline Service v.2) to trunk. The vote will
> run for 7 days, and will end August 29 11:00 PM PDT.
>
> We have previously completed one merge onto trunk [3] and Timeline Service
> v2 has been part of Hadoop release 3.0.0-alpha1.
>
> Since then, we have been working on extending the capabilities of Timeline
> Service v2 in a feature branch [2] for a while, and we are reasonably
> confident that the state of the feature meets the criteria to be merged
> onto trunk and we'd love folks to get their hands on it in a test capacity
> and provide valuable feedback so that we can make it production-ready.
>
> In a nutshell, Timeline Service v.2 delivers significant scalability and
> usability improvements based on a new architecture. What we would like to
> merge to trunk is termed "alpha 2" (milestone 2). The feature has a
> complete end-to-end read/write flow with security and read level
> authorization via whitelists. You should be able to start setting it up and
> testing it.
>
> At a high level, the following are the key features that have been
> implemented since alpha1:
> - Security via Kerberos Authentication and delegation tokens
> - Read side simple authorization via whitelist
> - Client configurable entity sort ordering
> - Richer REST APIs for apps, app attempts, containers, fetching metrics by
> timerange, pagination, sub-app entities
> - Support for storing sub-application entities (entities that exist outside
> the scope of an application)
> - Configurable TTLs (time-to-live) for tables, configurable table prefixes,
> configurable hbase cluster
> - Flow level aggregations done as dynamic (table level) coprocessors
> - Uses latest stable HBase release 1.2.6
>
> There are a total of 82 subtasks that were completed as part of this
> effort.
>
> We paid close attention to ensure that Timeline Service v.2 does not impact
> existing functionality when disabled (the default).
>
> Special thanks to a team of folks who worked hard and contributed towards
> this effort with patches, reviews and guidance: Rohith Sharma K S, Varun
> Saxena, Haibo Chen, Sangjin Lee, Li Lu, Vinod Kumar Vavilapalli, Joep
> Rottinghuis, Jason Lowe, Jian He, Robert Kanter, Michael Stack.
>
> Regards,
> Vrushali
>
> [1] http://www.mail-archive.com/yarn-dev@hadoop.apache.org/msg27383.html
> [2] https://issues.apache.org/jira/browse/YARN-5355
> [3] https://issues.apache.org/jira/browse/YARN-2928
> [4] https://github.com/apache/hadoop/commits/YARN-5355
>


Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)

2017-08-03 Thread Sunil G
Thanks Konstantin

+1 (binding)

1. Built tar ball from source package
2. Ran basic MR jobs and verified UI.
3. Enabled node labels and ran sleep job. Works fine.
4. Verified CLI commands related to node labels and its working fine.
5. RM WorkPreserving restart cases are also verified, and looks fine

Thanks
Sunil



On Sun, Jul 30, 2017 at 4:59 AM Konstantin Shvachko 
wrote:

> Hi everybody,
>
> Here is the next release of Apache Hadoop 2.7 line. The previous stable
> release 2.7.3 was available since 25 August, 2016.
> Release 2.7.4 includes 264 issues fixed after release 2.7.3, which are
> critical bug fixes and major optimizations. See more details in Release
> Note:
> http://home.apache.org/~shv/hadoop-2.7.4-RC0/releasenotes.html
>
> The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.4-RC0/
>
> Please give it a try and vote on this thread. The vote will run for 5 days
> ending 08/04/2017.
>
> Please note that my up-to-date public key is available from:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> Please don't forget to refresh the page if you've been there recently.
> There are other places on Apache sites which may contain my outdated key.
>
> Thanks,
> --Konstantin
>


[jira] [Created] (HADOOP-14658) branch-2 compilation is broken in hadoop-azure

2017-07-13 Thread Sunil G (JIRA)
Sunil G created HADOOP-14658:


 Summary: branch-2 compilation is broken in hadoop-azure
 Key: HADOOP-14658
 URL: https://issues.apache.org/jira/browse/HADOOP-14658
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.9.0
Reporter: Sunil G


Compilation failure. 
[link|https://builds.apache.org/job/PreCommit-YARN-Build/16414/artifact/patchprocess/branch-mvninstall-root.txt]

{noformat}
[ERROR] 
/home/sunilg/hadoop/repo/branch-2/hadoop/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/WasbRemoteCallHelper.java:[194,19]
 cannot find symbol
[ERROR] symbol:   method join(java.lang.String,java.lang.String[])
[ERROR] location: class java.lang.String
{noformat}
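The symbol error arises because `java.lang.String.join` only exists since Java 8, while branch-2 still compiles against Java 7, so the compiler cannot resolve the method. A minimal reproduction of the Java-8-only call (hypothetical class name; a Java-7-compatible alternative would typically be something like Commons Lang's `StringUtils.join`, assuming that dependency is available):

```java
public class JoinDemo {
    public static void main(String[] args) {
        // java.lang.String.join(CharSequence, CharSequence...) exists only
        // since Java 8; compiling the same call against Java 7 fails with
        // "cannot find symbol", matching the build log above.
        String joined = String.join(",", new String[] {"a", "b", "c"});
        System.out.println(joined);  // prints "a,b,c"
    }
}
```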



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0

2017-07-07 Thread Sunil G
+1 (non-binding)

Tested by deploying the binary tar ball.
- Ran a few MR jobs with node labels
- Verified old and new YARN ui
- Checked preemption as well

Thanks
Sunil

On Fri, Jun 30, 2017 at 8:11 AM Andrew Wang 
wrote:

> Hi all,
>
> As always, thanks to the many, many contributors who helped with this
> release! I've prepared an RC0 for 3.0.0-alpha4:
>
> http://home.apache.org/~wang/3.0.0-alpha4-RC0/
>
> The standard 5-day vote would run until midnight on Tuesday, July 4th.
> Given that July 4th is a holiday in the US, I expect this vote might have
> to be extended, but I'd like to close the vote relatively soon after.
>
> I've done my traditional testing of a pseudo-distributed cluster with a
> single task pi job, which was successful.
>
> Normally my testing would end there, but I'm slightly more confident this
> time. At Cloudera, we've successfully packaged and deployed a snapshot from
> a few days ago, and run basic smoke tests. Some bugs found from this
> include HDFS-11956, which fixes backwards compat with Hadoop 2 clients, and
> the revert of HDFS-11696, which broke NN QJM HA setup.
>
> Vijay is working on a test run with a fuller test suite (the results of
> which we can hopefully post soon).
>
> My +1 to start,
>
> Best,
> Andrew
>


Upgrading minimum version of Maven to 3.1 from 3.0

2017-04-03 Thread Sunil G
Hi Folks,

Recently we were upgrading the build framework for the YARN UI. In order
to compile yarn-ui on various architectures, we were using
frontend-maven-plugin version 0.0.22.
However, the build is failing on *ppc64le*. Using a later version of
frontend-maven-plugin (such as 1.1) would resolve this error. But that
requires Maven 3.1 at minimum. YARN-6421 is tracking this issue, and we
would like to propose upgrading to Maven 3.1.
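If the minimum is raised, one way to make it explicit is a maven-enforcer-plugin rule in the root pom (a sketch — the plugin version and placement here are assumptions, not something decided in YARN-6421):

```xml
<!-- Sketch: declare the minimum Maven version via the enforcer plugin. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>1.4.1</version>
  <executions>
    <execution>
      <id>enforce-maven-version</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <requireMavenVersion>
            <version>[3.1.0,)</version>
          </requireMavenVersion>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With this in place, builds on older Maven fail fast with a clear message instead of breaking somewhere inside a plugin.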

Kindly share your thoughts.

Thanks
+ Sunil