Re: [VOTE] Merge HDFS-9806 to trunk

2017-12-13 Thread 郑锴(铁杰)
This is great work to evolve HDFS towards cloud storage. Thanks
Chris and folks!
I participated in the design review meeting and it looks good to me. I will
take some time to look at the code more closely.
+1 on the merge.

Regards,
Kai
------------------------------------------------------------------
From: Iñigo Goiri
Sent: Thursday, December 14, 2017 09:29
To: Sean Mackrory
Cc: Anu Engineer; Virajith Jalaparti; Chris Douglas; hdfs-dev@hadoop.apache.org; viraj...@apache.org; ehi...@apache.org; thdem...@apache.org
Subject: Re: [VOTE] Merge HDFS-9806 to trunk
+1
I have been reviewing some of the latest patches.
I skimmed through the patch in HDFS-9806 and it looks good.

In addition, we have ported it to 2.7.1 (minor differences to what would be
merged).
It has been running in our test cluster for a couple of months.
All the issues we have been finding are already resolved and committed to
the feature branch.
After this, we have recently deployed it to three production clusters and it is
working as expected so far.

Thanks for the work, Virajith and Chris; I'd like to see this merged into
trunk to make maintenance easier.


On Wed, Dec 13, 2017 at 12:01 PM, Sean Mackrory 
wrote:

> +1 from me. There are some unrelated errors building the branch right now
> due to annotations in some YARN code, etc. but I was able to generate an fs
> image from an S3 bucket and serve the content through HDFS on a
> pseudo-distributed HDFS node this morning. Seems like a good point for a
> merge.
>
> On Wed, Dec 13, 2017 at 11:55 AM, Anu Engineer 
> wrote:
>
> > Hi Virajith / Chris / Thomas / Ewan,
> >
> > Thanks for developing this feature and getting it to the merge state.
> > I would like to vote +1 for this merge. Thanks for all the hard work.
> >
> > Thanks
> > Anu
> >
> >
> > On 12/8/17, 7:11 PM, "Virajith Jalaparti"  wrote:
> >
> > Hi,
> >
> > We have tested the HDFS-9806 branch in two settings:
> >
> > (i) 26 node bare-metal cluster, with PROVIDED storage configured to
> > point
> > to another instance of HDFS (containing 468 files, total of ~400GB of
> > data). Half of the Datanodes are configured with only DISK volumes
> and
> > the other half have both DISK and PROVIDED volumes.
> > (ii) 8 VMs on Azure, with PROVIDED storage configured to point to a
> > WASB
> > account (containing 26,074 files and ~1.3TB of data). All Datanodes
> are
> > configured with DISK and PROVIDED volumes.
> >
> > (i) was tested using both the text-based alias map
> > (TextFileRegionAliasMap)
> > and the in-memory leveldb-based alias map (
> > InMemoryLevelDBAliasMapClient),
> > while (ii) was tested using the text-based alias map only.
> >
> > Steps followed:
> > (0) Build from apache/HDFS-9806. (Note that for the leveldb-based
> alias
> > map, the patch posted to HDFS-12912
> >  needs to be
> > applied; we
> > will commit this to apache/HDFS-9806 after review).
> > (1) Generate the FSImage using the image generation tool with the
> > appropriate remote location (hdfs:// in (i) and wasb:// in (ii)).
> > (2) Bring up the HDFS cluster.
> > (3) Verify that the remote namespace is reflected correctly and data
> on
> > remote store can be accessed. Commands ran: ls, copyToLocal, fsck,
> > getrep,
> > setrep, getStoragePolicy
> > (4) Run Sort and Gridmix jobs on the data in the remote location with
> > the
> > input paths pointing to the local HDFS.
> > (5) Increase replication of the PROVIDED files and verified that
> local
> > (DISK) replicas were created for the PROVIDED replicas, using fsck.
> > (6) Verify that Provided storage capacity is shown correctly on the
> NN
> > and
> > Datanode Web-UI.
> > (7) Bring down datanodes, one by one. When all are down, verify NN
> > reports
> > all PROVIDED files as missing. Bringing back up any one Datanode
> makes
> > all
> > the data available.
> > (8) Restart NN and verify data is still accessible.
> > (9) Verify that writes to local HDFS continue to work.
> > (10) Bring down all Datanodes except one. Start decommissioning the
> > remaining Datanode. Verify that the data in the PROVIDED storage is
> > still
> > accessible.
> >
> > Apart from the above, we ported the changes in HDFS-9806 to
> branch-2.7
> > and
> > deployed it on a ~800 node cluster as one of the sub-clusters in a
> > Router-based Federated HDFS of nearly 4000 nodes (with help from
> Inigo
> > Goiri). We mounted about 1000 files, 650TB of remote data
> (~2.6million
> > blocks with 256MB block size) in this cluster using the text-based
> > alias
> > map. 

Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-13 Thread 郑锴(铁杰)
Thanks Andrew for the hard work driving this!
I downloaded the source tarball and built it successfully on macOS. I haven't
had the chance to try it out yet, so my non-binding +1.

Regards,
Kai
------------------------------------------------------------------
From: John Zhuge
Sent: Thursday, December 14, 2017 11:06
To: Vinod Kumar Vavilapalli
Cc: Andrew Wang; Junping Du; Robert Kanter; Arun Suresh; Lei Xu; Wei-Chiu Chuang; Ajay Kumar; Xiao Chen; Aaron T. Myers; common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 3.0.0 RC1
Thanks Andrew for the great effort! Here is my late vote.


+1 (binding)

   - Verified checksums and signatures of tarballs
   - Built source with native, Oracle Java 1.8.0_152 on Mac OS X 10.13.2
   - Verified cloud connectors:
  - S3A integration tests (perf tests skipped)
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure and SSL mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount
  - KMS and HttpFS basic
  - Balancer start/stop


On Wed, Dec 13, 2017 at 6:12 PM, Vinod Kumar Vavilapalli  wrote:

> Yes, JIRAs will be filed, the wiki-page idea from YARN meetup is to record
> all combinations of testing that need to be done and correspondingly
> capture all the testing that someone in the community has already done and
> record it for future perusal.
>
> From what you are saying, I guess we haven't advertised to the public yet
> on rolling upgrades, but in our meetups etc. so far, you have been saying
> that rolling upgrades are supported - so I assumed we did put it in our
> messaging.
>
> The important question is if we are or are not allowed to make potentially
> incompatible changes to fix bugs in the process of supporting 2.x to 3.x
> upgrades whether rolling or not.
>
> +Vinod
>
> > On Dec 13, 2017, at 1:05 PM, Andrew Wang 
> wrote:
> >
> > I'm hoping we can address YARN-7588 and any remaining rolling upgrade
> issues in 3.0.x maintenance releases. Beyond a wiki page, it would be
> really great to get JIRAs filed and targeted for tracking as soon as
> possible.
> >
> > Vinod, what do you think we need to do regarding caveating rolling
> upgrade support? We haven't advertised rolling upgrade support between
> major releases outside of dev lists and JIRA. As a new major release, our
> compat guidelines allow us to break compatibility, so I don't think it's
> expected by users.
> >
>
>


-- 
John


Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-13 Thread John Zhuge
Thanks Andrew for the great effort! Here is my late vote.


+1 (binding)

   - Verified checksums and signatures of tarballs
   - Built source with native, Oracle Java 1.8.0_152 on Mac OS X 10.13.2
   - Verified cloud connectors:
  - S3A integration tests (perf tests skipped)
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure and SSL mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount
  - KMS and HttpFS basic
  - Balancer start/stop
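
For reference, the checksum and signature checks above boil down to roughly the
following (a sketch; file names are illustrative, and md5sum is md5 on macOS):

  curl -O https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
  gpg --import KEYS
  gpg --verify hadoop-3.0.0.tar.gz.asc hadoop-3.0.0.tar.gz
  md5sum hadoop-3.0.0.tar.gz   # compare against the published .md5 value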


On Wed, Dec 13, 2017 at 6:12 PM, Vinod Kumar Vavilapalli  wrote:

> Yes, JIRAs will be filed, the wiki-page idea from YARN meetup is to record
> all combinations of testing that need to be done and correspondingly
> capture all the testing that someone in the community has already done and
> record it for future perusal.
>
> From what you are saying, I guess we haven't advertised to the public yet
> on rolling upgrades, but in our meetups etc. so far, you have been saying
> that rolling upgrades are supported - so I assumed we did put it in our
> messaging.
>
> The important question is if we are or are not allowed to make potentially
> incompatible changes to fix bugs in the process of supporting 2.x to 3.x
> upgrades whether rolling or not.
>
> +Vinod
>
> > On Dec 13, 2017, at 1:05 PM, Andrew Wang 
> wrote:
> >
> > I'm hoping we can address YARN-7588 and any remaining rolling upgrade
> issues in 3.0.x maintenance releases. Beyond a wiki page, it would be
> really great to get JIRAs filed and targeted for tracking as soon as
> possible.
> >
> > Vinod, what do you think we need to do regarding caveating rolling
> upgrade support? We haven't advertised rolling upgrade support between
> major releases outside of dev lists and JIRA. As a new major release, our
> compat guidelines allow us to break compatibility, so I don't think it's
> expected by users.
> >
>
>


-- 
John


Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-13 Thread Vinod Kumar Vavilapalli
Yes, JIRAs will be filed, the wiki-page idea from YARN meetup is to record all 
combinations of testing that need to be done and correspondingly capture all 
the testing that someone in the community has already done and record it for 
future perusal.

From what you are saying, I guess we haven't advertised to the public yet on
rolling upgrades, but in our meetups etc. so far, you have been saying that
rolling upgrades are supported - so I assumed we did put it in our messaging.

The important question is if we are or are not allowed to make potentially 
incompatible changes to fix bugs in the process of supporting 2.x to 3.x 
upgrades whether rolling or not.

+Vinod

> On Dec 13, 2017, at 1:05 PM, Andrew Wang  wrote:
> 
> I'm hoping we can address YARN-7588 and any remaining rolling upgrade issues 
> in 3.0.x maintenance releases. Beyond a wiki page, it would be really great 
> to get JIRAs filed and targeted for tracking as soon as possible.
> 
> Vinod, what do you think we need to do regarding caveating rolling upgrade 
> support? We haven't advertised rolling upgrade support between major releases 
> outside of dev lists and JIRA. As a new major release, our compat guidelines 
> allow us to break compatibility, so I don't think it's expected by users.
> 



Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-13 Thread Vinod Kumar Vavilapalli
Good stuff Andrew, and thanks everyone!

+Vinod

> On Dec 13, 2017, at 1:05 PM, Andrew Wang  wrote:
> 
> To close this out, the vote passes successfully with 13 binding +1s, 5 
> non-binding +1s, and no -1s. Thanks everyone for voting! I'll work on staging.
> 



Re: [VOTE] Merge HDFS-9806 to trunk

2017-12-13 Thread Iñigo Goiri
+1
I have been reviewing some of the latest patches.
I skimmed through the patch in HDFS-9806 and it looks good.

In addition, we have ported it to 2.7.1 (minor differences to what would be
merged).
It has been running in our test cluster for a couple of months.
All the issues we have been finding are already resolved and committed to
the feature branch.
After this, we have recently deployed it to three production clusters and it is
working as expected so far.

Thanks for the work, Virajith and Chris; I'd like to see this merged into
trunk to make maintenance easier.


On Wed, Dec 13, 2017 at 12:01 PM, Sean Mackrory 
wrote:

> +1 from me. There are some unrelated errors building the branch right now
> due to annotations in some YARN code, etc. but I was able to generate an fs
> image from an S3 bucket and serve the content through HDFS on a
> pseudo-distributed HDFS node this morning. Seems like a good point for a
> merge.
>
> On Wed, Dec 13, 2017 at 11:55 AM, Anu Engineer 
> wrote:
>
> > Hi Virajith / Chris / Thomas / Ewan,
> >
> > Thanks for developing this feature and getting it to the merge state.
> > I would like to vote +1 for this merge. Thanks for all the hard work.
> >
> > Thanks
> > Anu
> >
> >
> > On 12/8/17, 7:11 PM, "Virajith Jalaparti"  wrote:
> >
> > Hi,
> >
> > We have tested the HDFS-9806 branch in two settings:
> >
> > (i) 26 node bare-metal cluster, with PROVIDED storage configured to
> > point
> > to another instance of HDFS (containing 468 files, total of ~400GB of
> > data). Half of the Datanodes are configured with only DISK volumes
> and
> > the other half have both DISK and PROVIDED volumes.
> > (ii) 8 VMs on Azure, with PROVIDED storage configured to point to a
> > WASB
> > account (containing 26,074 files and ~1.3TB of data). All Datanodes
> are
> > configured with DISK and PROVIDED volumes.
> >
> > (i) was tested using both the text-based alias map
> > (TextFileRegionAliasMap)
> > and the in-memory leveldb-based alias map (
> > InMemoryLevelDBAliasMapClient),
> > while (ii) was tested using the text-based alias map only.
> >
> > Steps followed:
> > (0) Build from apache/HDFS-9806. (Note that for the leveldb-based
> alias
> > map, the patch posted to HDFS-12912
> >  needs to be
> > applied; we
> > will commit this to apache/HDFS-9806 after review).
> > (1) Generate the FSImage using the image generation tool with the
> > appropriate remote location (hdfs:// in (i) and wasb:// in (ii)).
> > (2) Bring up the HDFS cluster.
> > (3) Verify that the remote namespace is reflected correctly and data
> on
> > remote store can be accessed. Commands ran: ls, copyToLocal, fsck,
> > getrep,
> > setrep, getStoragePolicy
> > (4) Run Sort and Gridmix jobs on the data in the remote location with
> > the
> > input paths pointing to the local HDFS.
> > (5) Increase replication of the PROVIDED files and verified that
> local
> > (DISK) replicas were created for the PROVIDED replicas, using fsck.
> > (6) Verify that Provided storage capacity is shown correctly on the
> NN
> > and
> > Datanode Web-UI.
> > (7) Bring down datanodes, one by one. When all are down, verify NN
> > reports
> > all PROVIDED files as missing. Bringing back up any one Datanode
> makes
> > all
> > the data available.
> > (8) Restart NN and verify data is still accessible.
> > (9) Verify that writes to local HDFS continue to work.
> > (10) Bring down all Datanodes except one. Start decommissioning the
> > remaining Datanode. Verify that the data in the PROVIDED storage is
> > still
> > accessible.
> >
> > Apart from the above, we ported the changes in HDFS-9806 to
> branch-2.7
> > and
> > deployed it on a ~800 node cluster as one of the sub-clusters in a
> > Router-based Federated HDFS of nearly 4000 nodes (with help from
> Inigo
> > Goiri). We mounted about 1000 files, 650TB of remote data
> (~2.6million
> > blocks with 256MB block size) in this cluster using the text-based
> > alias
> > map. We verified that the basic commands (ls, copyToLocal, setrep)
> > work.
> > We also ran spark jobs against this cluster.
> >
> > -Virajith
> >
> >
> > On Fri, Dec 8, 2017 at 3:44 PM, Chris Douglas 
> > wrote:
> >
> > > Discussion thread: https://s.apache.org/kxT1
> > >
> > > We're down to the last few issues and are preparing the branch to
> > > merge to trunk. We'll post merge patches to HDFS-9806 [1]. Minor,
> > > "cleanup" tasks (checkstyle, findbugs, naming, etc.) will be
> tracked
> > > in HDFS-12712 [2].
> > >
> > > We've tried to ensure that when this feature is disabled, HDFS is
> > > unaffected. For those reviewing this, please look for places where
> > > this might add overheads 

[RESULT] [VOTE] Release Apache Hadoop 2.8.3 (RC0)

2017-12-13 Thread Junping Du
Thanks to all who verified and voted!

I give my binding +1 to conclude the vote for 2.8.3 RC0, based on:
- Build from source and verify signatures
- Deploy a pseudo-distributed cluster and run some simple jobs, like pi, sleep,
etc.
- Verify the UIs of daemons, like NameNode, DataNode, ResourceManager, NodeManager,
etc.

- Verify rolling upgrade features from 2.7.4, including MR over the distributed
cache, NM restart with work preserving, etc.
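
The HDFS part of that rolling-upgrade verification follows the usual command
sequence, roughly (a sketch; NM work-preserving restart is configured separately):

  hdfs dfsadmin -rollingUpgrade prepare    # create the rollback fsimage first
  hdfs dfsadmin -rollingUpgrade query      # poll until the rollback image is ready
  # restart the NameNode/DataNodes with the 2.8.3 bits, re-run the checks, then:
  hdfs dfsadmin -rollingUpgrade finalize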

Now, we have:

8 binding +1s, from:
 Eric Payne, Jason Lowe, Jian He, Wangda Tan, Rohith Sharma K S, Sunil G, 
Naganarasimha Garla, Junping Du

5 non-binding +1s, from:
Brahma Reddy Battula, Kuhu Shukla, Ajay Kumar, Eric Badger, Chandni Singh

and no -1s.

So I am glad to announce that the vote of 2.8.3 RC0 passes.

Thanks to everyone listed above who tried the release candidate and voted, and to all
who helped with the 2.8.3 release effort in all kinds of ways.
I'll push the release bits and send out an announcement for 2.8.3 soon.

Thanks,

Junping


From: Naganarasimha Garla 
Sent: Wednesday, December 13, 2017 9:39 AM
To: Sunil G
Cc: Junping Du; common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

Thanks Junping for the release. +1 (binding)

On a single-node pseudo cluster, I performed the following tests:
- Downloaded the tars and verified the signatures, installed using the tar
- Successfully ran a few MR jobs
- Verified a few HDFS operations
- Verified the RM, NM and HDFS web UIs
- Configured labels and submitted some apps

Thanks and Regards,
+ Naga

On Wed, Dec 13, 2017 at 8:14 PM, Sunil G 
> wrote:
+1 (binding)

Thanks Junping for the effort.
I have deployed a cluster built from source tar ball.


   - Ran a few MR apps and verified the UI. App-related CLI commands are also
   fine.
   - Tested below feature sanity
  - Application priority
  - Application timeout
   - Tested basic NodeLabel scenarios.
  - Added some labels to a couple of nodes
  - Verified the old UI for labels
  - Submitted apps to the labelled cluster and it works fine.
  - Also performed a few CLI commands related to node labels
   - Test basic HA cases
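
The node-label checks above map onto CLI commands along these lines (a sketch;
the label and host names are made up):

  yarn rmadmin -addToClusterNodeLabels "labelx"
  yarn rmadmin -replaceLabelsOnNode "node1.example.com=labelx"
  yarn cluster --list-node-labels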


Thanks
Sunil G


On Tue, Dec 5, 2017 at 3:28 PM Junping Du 
> wrote:

> Hi all,
>  I've created the first release candidate (RC0) for Apache Hadoop
> 2.8.3. This is our next maintenance release, following up 2.8.2. It includes 79
> important fixes and improvements.
>
>   The RC artifacts are available at:
> http://home.apache.org/~junping_du/hadoop-2.8.3-RC0
>
>   The RC tag in git is: release-2.8.3-RC0
>
>   The maven artifacts are available via 
> repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1072
>
>   Please try the release and vote; the vote will run for the usual 5
> working days, ending on 12/12/2017 PST time.
>
> Thanks,
>
> Junping
>



Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-13 Thread Gangumalla, Uma
Here is my +1 (binding) too.
Sorry for the late vote.

- Verified signatures of the source tarball.
- Built from source.
- Set up a 2-node test cluster.
- Tested via HDFS commands and the Java API: wrote a bunch of files and read them back.
- Ran a basic MR job.

Thanks Andrew and others for the hard work for getting Hadoop 3.0 out.

Regards,
Uma

On 12/13/17, 1:05 PM, "Andrew Wang"  wrote:

Hi folks,

To close this out, the vote passes successfully with 13 binding +1s, 5
non-binding +1s, and no -1s. Thanks everyone for voting! I'll work on
staging.

I'm hoping we can address YARN-7588 and any remaining rolling upgrade
issues in 3.0.x maintenance releases. Beyond a wiki page, it would be
really great to get JIRAs filed and targeted for tracking as soon as
possible.

Vinod, what do you think we need to do regarding caveating rolling upgrade
support? We haven't advertised rolling upgrade support between major
releases outside of dev lists and JIRA. As a new major release, our compat
guidelines allow us to break compatibility, so I don't think it's expected
by users.

Best,
Andrew

On Wed, Dec 13, 2017 at 12:37 PM, Vinod Kumar Vavilapalli <
vino...@apache.org> wrote:

> I was waiting for Daniel to post the minutes from YARN meetup to talk
> about this. Anyway, in that discussion, we identified a bunch of key
> upgrade-related scenarios that no one seems to have validated - at least
> from the representation in the YARN meetup. I'm going to create a 
wiki-page
> listing all these scenarios.
>
> But back to the bug that Junping raised. At this point, we don't have a
> clear path towards running 2.x applications on 3.0.0 clusters. So, our
> claim of rolling-upgrades already working is not accurate.
>
> One of the two options that Junping proposed should be pursued before we
> close the release. I'm in favor of calling out rolling-upgrade support as
> withdrawn or caveated and pushing for progress instead of blocking the
> release.
>
> Thanks
> +Vinod
>
> > On Dec 12, 2017, at 5:44 PM, Junping Du  wrote:
> >
> > Thanks Andrew for pushing the new RC for 3.0.0. I was out last week, and just
> got a chance to validate the new RC now.
> >
> > Basically, I found two critical issues with the same rolling upgrade
> scenario where HADOOP-15059 was found previously:
> > HDFS-12920: we changed the value format for some hdfs configurations that
> old-version MR clients don't understand when fetching these
> configurations. A quick workaround is to add the old value (without the time
> unit) in hdfs-site.xml to override the new default values, but this will generate
> many annoying warnings. I provided my fix suggestions on the JIRA already
> for more discussion.
> > The other one is YARN-7646. After we work around HDFS-12920, we hit the
> issue that the old-version MR AppMaster cannot communicate with the new version of
> YARN RM - it could be related to resource profile changes on the YARN side, but
> the root cause is still under investigation.
> >
> > The first issue may not be a blocker given we can work around it
> without a code change. I am not sure if we can work around the 2nd issue so far.
> If not, we may have to fix it, or compromise by withdrawing support for
> rolling upgrade, or reconsider calling it a stable release.
> >
> >
> > Thanks,
> >
> > Junping
> >
> > 
> > From: Robert Kanter 
> > Sent: Tuesday, December 12, 2017 3:10 PM
> > To: Arun Suresh
> > Cc: Andrew Wang; Lei Xu; Wei-Chiu Chuang; Ajay Kumar; Xiao Chen; Aaron
> T. Myers; common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org;
> yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> > Subject: Re: [VOTE] Release Apache Hadoop 3.0.0 RC1
> >
> > +1 (binding)
> >
> > + Downloaded the binary release
> > + Deployed on a 3 node cluster on CentOS 7.3
> > + Ran some MR jobs, clicked around the UI, etc
> > + Ran some CLI commands (yarn logs, etc)
> >
> > Good job everyone on Hadoop 3!
> >
> >
> > - Robert
> >
> > On Tue, Dec 12, 2017 at 1:56 PM, Arun Suresh  wrote:
> >
> >> +1 (binding)
> >>
> >> - Verified signatures of the source tarball.
> >> - built from source - using the docker build environment.
> >> - set up a pseudo-distributed test cluster.
> >> - ran basic HDFS commands
> >> - ran some basic MR jobs
> >>
> >> Cheers
> >> -Arun
> >>
> >> On Tue, Dec 12, 2017 at 1:52 PM, Andrew Wang 
> >> wrote:
> >>
> >>> Hi everyone,
> >>>
> >>> As a reminder, this vote closes tomorrow at 12:31pm, so please give it
> a
> >>> whack if you 

[jira] [Resolved] (HDFS-12923) DFS.concat should throw exception if files have different EC policies.

2017-12-13 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu resolved HDFS-12923.
--
   Resolution: Won't Fix
Fix Version/s: 3.0.0

Resolved as not an issue.

> DFS.concat should throw exception if files have different EC policies. 
> ---
>
> Key: HDFS-12923
> URL: https://issues.apache.org/jira/browse/HDFS-12923
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Lei (Eddy) Xu
>Priority: Critical
> Fix For: 3.0.0
>
>
> {{DFS#concat}} appends blocks from different files to a single file. However,
> if these files have different EC policies, or mix replicated and EC
> files, the resulting file would be problematic to read, because the EC codec
> is defined in the INode instead of in a block.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-13 Thread Andrew Wang
Hi folks,

To close this out, the vote passes successfully with 13 binding +1s, 5
non-binding +1s, and no -1s. Thanks everyone for voting! I'll work on
staging.

I'm hoping we can address YARN-7588 and any remaining rolling upgrade
issues in 3.0.x maintenance releases. Beyond a wiki page, it would be
really great to get JIRAs filed and targeted for tracking as soon as
possible.

Vinod, what do you think we need to do regarding caveating rolling upgrade
support? We haven't advertised rolling upgrade support between major
releases outside of dev lists and JIRA. As a new major release, our compat
guidelines allow us to break compatibility, so I don't think it's expected
by users.

Best,
Andrew

On Wed, Dec 13, 2017 at 12:37 PM, Vinod Kumar Vavilapalli <
vino...@apache.org> wrote:

> I was waiting for Daniel to post the minutes from YARN meetup to talk
> about this. Anyway, in that discussion, we identified a bunch of key
> upgrade-related scenarios that no one seems to have validated - at least
> from the representation in the YARN meetup. I'm going to create a wiki-page
> listing all these scenarios.
>
> But back to the bug that Junping raised. At this point, we don't have a
> clear path towards running 2.x applications on 3.0.0 clusters. So, our
> claim of rolling-upgrades already working is not accurate.
>
> One of the two options that Junping proposed should be pursued before we
> close the release. I'm in favor of calling out rolling-upgrade support as
> withdrawn or caveated and pushing for progress instead of blocking the
> release.
>
> Thanks
> +Vinod
>
> > On Dec 12, 2017, at 5:44 PM, Junping Du  wrote:
> >
> > Thanks Andrew for pushing the new RC for 3.0.0. I was out last week, and just
> got a chance to validate the new RC now.
> >
> > Basically, I found two critical issues with the same rolling upgrade
> scenario where HADOOP-15059 was found previously:
> > HDFS-12920: we changed the value format for some hdfs configurations that
> old-version MR clients don't understand when fetching these
> configurations. A quick workaround is to add the old value (without the time
> unit) in hdfs-site.xml to override the new default values, but this will generate
> many annoying warnings. I provided my fix suggestions on the JIRA already
> for more discussion.
> > The other one is YARN-7646. After we work around HDFS-12920, we hit the
> issue that the old-version MR AppMaster cannot communicate with the new version of
> YARN RM - it could be related to resource profile changes on the YARN side, but
> the root cause is still under investigation.
> >
> > The first issue may not be a blocker given we can work around it
> without a code change. I am not sure if we can work around the 2nd issue so far.
> If not, we may have to fix it, or compromise by withdrawing support for
> rolling upgrade, or reconsider calling it a stable release.
> >
> >
> > Thanks,
> >
> > Junping
> >
> > 
> > From: Robert Kanter 
> > Sent: Tuesday, December 12, 2017 3:10 PM
> > To: Arun Suresh
> > Cc: Andrew Wang; Lei Xu; Wei-Chiu Chuang; Ajay Kumar; Xiao Chen; Aaron
> T. Myers; common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org;
> yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> > Subject: Re: [VOTE] Release Apache Hadoop 3.0.0 RC1
> >
> > +1 (binding)
> >
> > + Downloaded the binary release
> > + Deployed on a 3 node cluster on CentOS 7.3
> > + Ran some MR jobs, clicked around the UI, etc
> > + Ran some CLI commands (yarn logs, etc)
> >
> > Good job everyone on Hadoop 3!
> >
> >
> > - Robert
> >
> > On Tue, Dec 12, 2017 at 1:56 PM, Arun Suresh  wrote:
> >
> >> +1 (binding)
> >>
> >> - Verified signatures of the source tarball.
> >> - built from source - using the docker build environment.
> >> - set up a pseudo-distributed test cluster.
> >> - ran basic HDFS commands
> >> - ran some basic MR jobs
> >>
> >> Cheers
> >> -Arun
> >>
> >> On Tue, Dec 12, 2017 at 1:52 PM, Andrew Wang 
> >> wrote:
> >>
> >>> Hi everyone,
> >>>
> >>> As a reminder, this vote closes tomorrow at 12:31pm, so please give it
> a
> >>> whack if you have time. There are already enough binding +1s to pass
> this
> >>> vote, but it'd be great to get additional validation.
> >>>
> >>> Thanks to everyone who's voted thus far!
> >>>
> >>> Best,
> >>> Andrew
> >>>
> >>>
> >>>
> >>> On Tue, Dec 12, 2017 at 11:08 AM, Lei Xu  wrote:
> >>>
>  +1 (binding)
> 
>  * Verified src tarball and bin tarball, verified md5 of each.
>  * Build source with -Pdist,native
>  * Started a pseudo cluster
>  * Run ec -listPolicies / -getPolicy / -setPolicy on /  , and run hdfs
>  dfs put/get/cat on "/" with XOR-2-1 policy.
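
The erasure-coding checks quoted above correspond roughly to commands like the
following (a sketch; the policy name should match whatever -listPolicies reports,
e.g. XOR-2-1-1024k in 3.0.0):

  hdfs ec -listPolicies
  hdfs ec -getPolicy -path /
  hdfs ec -setPolicy -path / -policy XOR-2-1-1024k
  hdfs dfs -put /etc/hosts /hosts && hdfs dfs -cat /hosts
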
> 
>  Thanks Andrew for this great effort!
> 
>  Best,
> 
> 
>  On Tue, Dec 12, 2017 at 9:55 AM, Andrew Wang <
> andrew.w...@cloudera.com
> >>>
>  wrote:
> > Hi Wei-Chiu,
> >
> > The patchprocess 

Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-13 Thread Vinod Kumar Vavilapalli
I was waiting for Daniel to post the minutes from YARN meetup to talk about 
this. Anyway, in that discussion, we identified a bunch of key upgrade-related
scenarios that no one seems to have validated - at least from the representation
in the YARN meetup. I'm going to create a wiki page listing all these scenarios.

But back to the bug that Junping raised. At this point, we don't have a clear 
path towards running 2.x applications on 3.0.0 clusters. So, our claim of 
rolling-upgrades already working is not accurate.

One of the two options that Junping proposed should be pursued before we close
the release. I'm in favor of calling out rolling-upgrade support as withdrawn
or caveated and pushing for progress instead of blocking the release.

Thanks
+Vinod

> On Dec 12, 2017, at 5:44 PM, Junping Du  wrote:
> 
> Thanks Andrew for pushing the new RC for 3.0.0. I was out last week, and just got
> a chance to validate the new RC now.
> 
> Basically, I found two critical issues with the same rolling upgrade scenario
> where HADOOP-15059 was found previously:
> HDFS-12920: we changed the value format for some hdfs configurations that
> old-version MR clients don't understand when fetching these configurations. A
> quick workaround is to add the old value (without the time unit) in hdfs-site.xml
> to override the new default values, but this will generate many annoying warnings. I
> provided my fix suggestions on the JIRA already for more discussion.
> The other one is YARN-7646. After we work around HDFS-12920, we hit the
> issue that the old-version MR AppMaster cannot communicate with the new version of
> YARN RM - it could be related to resource profile changes on the YARN side, but
> the root cause is still under investigation.
> 
> The first issue may not be a blocker given we can work around it
> without a code change. I am not sure if we can work around the 2nd issue so far. If
> not, we may have to fix it, or compromise by withdrawing support for
> rolling upgrade, or reconsider calling it a stable release.
> 
> 
> Thanks,
> 
> Junping
> 
> 
> From: Robert Kanter 
> Sent: Tuesday, December 12, 2017 3:10 PM
> To: Arun Suresh
> Cc: Andrew Wang; Lei Xu; Wei-Chiu Chuang; Ajay Kumar; Xiao Chen; Aaron T. 
> Myers; common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; 
> yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> Subject: Re: [VOTE] Release Apache Hadoop 3.0.0 RC1
> 
> +1 (binding)
> 
> + Downloaded the binary release
> + Deployed on a 3 node cluster on CentOS 7.3
> + Ran some MR jobs, clicked around the UI, etc
> + Ran some CLI commands (yarn logs, etc)
> 
> Good job everyone on Hadoop 3!
> 
> 
> - Robert
> 
> On Tue, Dec 12, 2017 at 1:56 PM, Arun Suresh  wrote:
> 
>> +1 (binding)
>> 
>> - Verified signatures of the source tarball.
>> - built from source - using the docker build environment.
>> - set up a pseudo-distributed test cluster.
>> - ran basic HDFS commands
>> - ran some basic MR jobs
>> 
>> Cheers
>> -Arun
>> 
>> On Tue, Dec 12, 2017 at 1:52 PM, Andrew Wang 
>> wrote:
>> 
>>> Hi everyone,
>>> 
>>> As a reminder, this vote closes tomorrow at 12:31pm, so please give it a
>>> whack if you have time. There are already enough binding +1s to pass this
>>> vote, but it'd be great to get additional validation.
>>> 
>>> Thanks to everyone who's voted thus far!
>>> 
>>> Best,
>>> Andrew
>>> 
>>> 
>>> 
>>> On Tue, Dec 12, 2017 at 11:08 AM, Lei Xu  wrote:
>>> 
 +1 (binding)
 
 * Verified src tarball and bin tarball, verified md5 of each.
 * Build source with -Pdist,native
 * Started a pseudo cluster
 * Run ec -listPolicies / -getPolicy / -setPolicy on /  , and run hdfs
 dfs put/get/cat on "/" with XOR-2-1 policy.
 
 Thanks Andrew for this great effort!
 
 Best,
 
 
 On Tue, Dec 12, 2017 at 9:55 AM, Andrew Wang >> 
 wrote:
> Hi Wei-Chiu,
> 
> The patchprocess directory is left over from the create-release
>>> process,
> and it looks empty to me. We should still file a create-release JIRA
>> to
 fix
> this, but I think this is not a blocker. Would you agree?
> 
> Best,
> Andrew
> 
> On Tue, Dec 12, 2017 at 9:44 AM, Wei-Chiu Chuang <
>> weic...@cloudera.com
 
> wrote:
> 
> >> Hi Andrew, thanks for the tremendous effort.
>> I found an empty "patchprocess" directory in the source tarball,
>> that
>>> is
>> not there if you clone from github. Any chance you might have some
 leftover
>> trash when you made the tarball?
> >> Not wanting to nitpick, but you might want to double check so we
 don't
>> ship anything private to you in public :)
>> 
>> 
>> 
>> On Tue, Dec 12, 2017 at 7:48 AM, Ajay Kumar <
>>> ajay.ku...@hortonworks.com
> 
>> wrote:
>> 
>>> +1 (non-binding)
>>> Thanks for driving 

Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-13 Thread Vinod Kumar Vavilapalli
Looked at RC1. Went through my usual check-list. Here's my summary.

+1 (binding) overall

Verification
- [Check] Successful recompilation from source tar-ball
- [Check] Signature verification
- [Check] Generating dist tarballs from source tar-ball
- [Check] Validating the layout of the binary tar-ball
- [Check] Testing
   -- Start NN, DN, RM, NM, JHS, Timeline Service
   -- Ran dist-shell example, MR sleep, wordcount, randomwriter, sort, grep, pi
   -- Tested CLIs to print nodes, apps etc and also navigated UIs
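
For reference, bringing up those daemons and running a couple of the example jobs
with the 3.0.0 scripts looks roughly like this (a sketch; the examples jar path is
relative to the unpacked distribution):

  hdfs --daemon start namenode
  hdfs --daemon start datanode
  yarn --daemon start resourcemanager
  yarn --daemon start nodemanager
  yarn --daemon start timelineserver
  mapred --daemon start historyserver
  hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0.jar pi 10 100
  hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0.jar wordcount /in /out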

A few issues, as before, were found during testing, but they shouldn't be blockers:
 - The previously supported way of being able to use different tar-balls for
different sub-modules is completely broken - the common and HDFS tar.gz are
completely empty. Will file a ticket.
 - resourcemanager-metrics.out is going into the current directory instead of the log
directory. Will file a ticket.

One thing I want to make sure folks agree to: Allen filed a ticket to remove the
yarn-historyserver option per our previous discussion -
https://issues.apache.org/jira/browse/YARN-7588. It isn't done - I am hoping
this 'incompatible change' can be put in 3.0.1 and not 4.0.

Thanks
+Vinod


> On Dec 8, 2017, at 12:31 PM, Andrew Wang  wrote:
> 
> Hi all,
> 
> Let me start, as always, by thanking the efforts of all the contributors
> who contributed to this release, especially those who jumped on the issues
> found in RC0.
> 
> I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates 302
> fixed JIRAs since the previous 3.0.0-beta1 release.
> 
> You can find the artifacts here:
> 
> http://home.apache.org/~wang/3.0.0-RC1/
> 
> I've done the traditional testing of building from the source tarball and
> running a Pi job on a single node cluster. I also verified that the shaded
> jars are not empty.
> 
> Found one issue that create-release (probably due to the mvn deploy change)
> didn't sign the artifacts, but I fixed that by calling mvn one more time.
> Available here:
> 
> https://repository.apache.org/content/repositories/orgapachehadoop-1075/
> 
> This release will run the standard 5 days, closing on Dec 13th at 12:31pm
> Pacific. My +1 to start.
> 
> Best,
> Andrew



Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-13 Thread Subramaniam V K
Thanks to everyone who pushed on this release, really great to see a
release coming off trunk.

+1 (binding).

Deployed the RC on a federated YARN cluster consisting of 8 sub-clusters on
CentOS 7.4 running Java 1.8.0_144.

Hit YARN-7652[1] which is not a blocker as I was able to run multiple jobs
successfully once I cleaned up the StateStore.

[1] https://issues.apache.org/jira/browse/YARN-7652

On Wed, Dec 13, 2017 at 11:25 AM, Wei-Chiu Chuang 
wrote:

> Thanks Andrew again,
>
> +1 (binding)
>
> * Downloaded source tarball, and compiled with native libs
> successfully: -Pdist,native -Drequire.isal -Drequire.snappy
> -Drequire.openssl -Drequire.zstd.
> * hadoop checknative ran successfully.
> * Upgraded a fsimage that was previously used in a Hadoop 2.6-based CDH
> production cluster (fsimage size: 7.3 GB) to Hadoop 3.0.0 RC1 successfully.
> (Command used: hdfs namenode -upgrade)
> Interestingly, after the upgrade, the fsimage grew from 7.3 GB to 7.5 GB.
> I'm not really sure what went into fsimage between 2.6 and 3.0, but this is
> definitely not a big problem.
>
> In addition,
> * Built HBase 2.0.0 alpha4 against Hadoop 3.0.0 RC1 successfully.
> * Built Hive master branch against Hadoop 3.0.0 RC1 successfully.
>
>
> On Tue, Dec 12, 2017 at 9:55 AM, Andrew Wang 
> wrote:
>
> > Hi Wei-Chiu,
> >
> > The patchprocess directory is left over from the create-release process,
> > and it looks empty to me. We should still file a create-release JIRA to
> fix
> > this, but I think this is not a blocker. Would you agree?
> >
> > Best,
> > Andrew
> >
> > On Tue, Dec 12, 2017 at 9:44 AM, Wei-Chiu Chuang 
> > wrote:
> >
> >> Hi Andrew, thanks for the tremendous effort.
> >> I found an empty "patchprocess" directory in the source tarball, that is
> >> not there if you clone from github. Any chance you might have some
> leftover
> >> trash when you made the tarball?
> >> Not wanting to nitpick, but you might want to double check so we
> don't
> >> ship anything private to you in public :)
> >>
> >>
> >>
> >> On Tue, Dec 12, 2017 at 7:48 AM, Ajay Kumar  >
> >> wrote:
> >>
> >>> +1 (non-binding)
> >>> Thanks for driving this, Andrew Wang!!
> >>>
> >>> - downloaded the src tarball and verified md5 checksum
> >>> - built from source with jdk 1.8.0_111-b14
> >>> - brought up a pseudo distributed cluster
> >>> - did basic file system operations (mkdir, list, put, cat) and
> >>> confirmed that everything was working
> >>> - Run word count, pi and DFSIOTest
> >>> - run hdfs and yarn, confirmed that the NN, RM web UI worked
> >>>
> >>> Cheers,
> >>> Ajay
> >>>
> >>> On 12/11/17, 9:35 PM, "Xiao Chen"  wrote:
> >>>
> >>> +1 (binding)
> >>>
> >>> - downloaded src tarball, verified md5
> >>> - built from source with jdk1.8.0_112
> >>> - started a pseudo cluster with hdfs and kms
> >>> - sanity checked encryption related operations working
> >>> - sanity checked webui and logs.
> >>>
> >>> -Xiao
> >>>
> >>> On Mon, Dec 11, 2017 at 6:10 PM, Aaron T. Myers 
> >>> wrote:
> >>>
> >>> > +1 (binding)
> >>> >
> >>> > - downloaded the src tarball and built the source (-Pdist
> -Pnative)
> >>> > - verified the checksum
> >>> > - brought up a secure pseudo distributed cluster
> >>> > - did some basic file system operations (mkdir, list, put, cat)
> and
> >>> > confirmed that everything was working
> >>> > - confirmed that the web UI worked
> >>> >
> >>> > Best,
> >>> > Aaron
> >>> >
> >>> > On Fri, Dec 8, 2017 at 12:31 PM, Andrew Wang <
> >>> andrew.w...@cloudera.com>
> >>> > wrote:
> >>> >
> >>> > > Hi all,
> >>> > >
> >>> > > Let me start, as always, by thanking the efforts of all the
> >>> contributors
> >>> > > who contributed to this release, especially those who jumped on
> >>> the
> >>> > issues
> >>> > > found in RC0.
> >>> > >
> >>> > > I've prepared RC1 for Apache Hadoop 3.0.0. This release
> >>> incorporates 302
> >>> > > fixed JIRAs since the previous 3.0.0-beta1 release.
> >>> > >
> >>> > > You can find the artifacts here:
> >>> > >
> >>> > > http://home.apache.org/~wang/3.0.0-RC1/
> >>> > >
> >>> > > I've done the traditional testing of building from the source
> >>> tarball and
> >>> > > running a Pi job on a single node cluster. I also verified that
> >>> the
> >>> > shaded
> >>> > > jars are not empty.
> >>> > >
> >>> > > Found one issue that create-release (probably due to the mvn
> >>> deploy
> >>> > change)
> >>> > > didn't sign the artifacts, but I fixed that by calling mvn one
> >>> more time.
> >>> > > Available here:
> >>> > >
> >>> > > https://repository.apache.org/content/repositories/orgapache
> >>> hadoop-1075/
> >>> > >
> >>> > > This release will run the standard 5 days, closing on Dec 

Re: [VOTE] Release Apache Hadoop 2.7.5 (RC1)

2017-12-13 Thread Jonathan Hung
Thanks Konstantin for working on this.

+1 (non-binding)
- Downloaded binary and verified md5
- Deployed RM HA and tested failover
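
A minimal sketch of that RM HA failover check (rm1/rm2 are the conventional
yarn.resourcemanager.ha.rm-ids; adjust to the cluster's configuration):

  yarn rmadmin -getServiceState rm1
  yarn rmadmin -getServiceState rm2
  # stop the active RM (or force a manual transition) and confirm the standby goes active
  yarn rmadmin -transitionToStandby --forcemanual rm1
  yarn rmadmin -getServiceState rm2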




Jonathan Hung

On Wed, Dec 13, 2017 at 11:02 AM, Eric Payne  wrote:

> Thanks for the hard work on this release, Konstantin.
> +1 (binding)
> - Built from source
> - Verified that refreshing of queues works as expected.
>
> - Verified can run multiple users in a single queue
> - Ran terasort test
> - Verified that cross-queue preemption works as expected
> Thanks. Eric Payne
>
>   From: Konstantin Shvachko 
>  To: "common-...@hadoop.apache.org" ; "
> hdfs-dev@hadoop.apache.org" ; "
> mapreduce-...@hadoop.apache.org" ; "
> yarn-...@hadoop.apache.org" 
>  Sent: Thursday, December 7, 2017 9:22 PM
>  Subject: [VOTE] Release Apache Hadoop 2.7.5 (RC1)
>
> Hi everybody,
>
> I updated CHANGES.txt and fixed documentation links.
> Also committed  MAPREDUCE-6165, which fixes a consistently failing test.
>
> This is RC1 for the next dot release of the Apache Hadoop 2.7 line. The
> previous one, 2.7.4, was released August 4, 2017.
> Release 2.7.5 includes critical bug fixes and optimizations. See more
> details in Release Note:
> http://home.apache.org/~shv/hadoop-2.7.5-RC1/releasenotes.html
>
> The RC1 is available at: http://home.apache.org/~shv/hadoop-2.7.5-RC1/
>
> Please give it a try and vote on this thread. The vote will run for 5 days
> ending 12/13/2017.
>
> My up to date public key is available from:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Thanks,
> --Konstantin
>
>
>
>


Re: [VOTE] Merge HDFS-9806 to trunk

2017-12-13 Thread Sean Mackrory
+1 from me. There are some unrelated errors building the branch right now
due to annotations in some YARN code, etc. but I was able to generate an fs
image from an S3 bucket and serve the content through HDFS on a
pseudo-distributed HDFS node this morning. Seems like a good point for a
merge.

On Wed, Dec 13, 2017 at 11:55 AM, Anu Engineer 
wrote:

> Hi Virajith / Chris / Thomas / Ewan,
>
> Thanks for developing this feature and getting it to the merge state.
> I would like to vote +1 for this merge. Thanks for all the hard work.
>
> Thanks
> Anu
>
>
> On 12/8/17, 7:11 PM, "Virajith Jalaparti"  wrote:
>
> Hi,
>
> We have tested the HDFS-9806 branch in two settings:
>
> (i) 26 node bare-metal cluster, with PROVIDED storage configured to
> point
> to another instance of HDFS (containing 468 files, total of ~400GB of
> data). Half of the Datanodes are configured with only DISK volumes and
> the other half have both DISK and PROVIDED volumes.
> (ii) 8 VMs on Azure, with PROVIDED storage configured to point to a
> WASB
> account (containing 26,074 files and ~1.3TB of data). All Datanodes are
> configured with DISK and PROVIDED volumes.
>
> (i) was tested using both the text-based alias map
> (TextFileRegionAliasMap)
> and the in-memory leveldb-based alias map (
> InMemoryLevelDBAliasMapClient),
> while (ii) was tested using the text-based alias map only.
>
> Steps followed:
> (0) Build from apache/HDFS-9806. (Note that for the leveldb-based alias
> map, the patch posted to HDFS-12912
>  needs to be
> applied; we
> will commit this to apache/HDFS-9806 after review).
> (1) Generate the FSImage using the image generation tool with the
> appropriate remote location (hdfs:// in (i) and wasb:// in (ii)).
> (2) Bring up the HDFS cluster.
> (3) Verify that the remote namespace is reflected correctly and data on
> remote store can be accessed. Commands ran: ls, copyToLocal, fsck,
> getrep,
> setrep, getStoragePolicy
> (4) Run Sort and Gridmix jobs on the data in the remote location with
> the
> input paths pointing to the local HDFS.
> (5) Increase replication of the PROVIDED files and verified that local
> (DISK) replicas were created for the PROVIDED replicas, using fsck.
> (6) Verify that Provided storage capacity is shown correctly on the NN
> and
> Datanode Web-UI.
> (7) Bring down datanodes, one by one. When all are down, verify NN
> reports
> all PROVIDED files as missing. Bringing back up any one Datanode makes
> all
> the data available.
> (8) Restart NN and verify data is still accessible.
> (9) Verify that writes to local HDFS continue to work.
> (10) Bring down all Datanodes except one. Start decommissioning the
> remaining Datanode. Verify that the data in the PROVIDED storage is
> still
> accessible.
>
> Apart from the above, we ported the changes in HDFS-9806 to branch-2.7
> and
> deployed it on a ~800 node cluster as one of the sub-clusters in a
> Router-based Federated HDFS of nearly 4000 nodes (with help from Inigo
> Goiri). We mounted about 1000 files, 650TB of remote data (~2.6million
> blocks with 256MB block size) in this cluster using the text-based
> alias
> map. We verified that the basic commands (ls, copyToLocal, setrep)
> work.
> We also ran spark jobs against this cluster.
>
> -Virajith
>
>
> On Fri, Dec 8, 2017 at 3:44 PM, Chris Douglas 
> wrote:
>
> > Discussion thread: https://s.apache.org/kxT1
> >
> > We're down to the last few issues and are preparing the branch to
> > merge to trunk. We'll post merge patches to HDFS-9806 [1]. Minor,
> > "cleanup" tasks (checkstyle, findbugs, naming, etc.) will be tracked
> > in HDFS-12712 [2].
> >
> > We've tried to ensure that when this feature is disabled, HDFS is
> > unaffected. For those reviewing this, please look for places where
> > this might add overheads and we'll address them before the merge. The
> > site documentation [3] and design doc [4] should be up to date and
> > sufficient to try this out. Again, please point out where it is
> > unclear and we can address it.
> >
> > This has been a long effort and we're grateful for the support we've
> > received from the community. In particular, thanks to Íñigo Goiri,
> > Andrew Wang, Anu Engineer, Steve Loughran, Sean Mackrory, Lukas
> > Majercak, Uma Gunuganti, Kai Zheng, Rakesh Radhakrishnan, Sriram Rao,
> > Lei Xu, Zhe Zhang, Jing Zhao, Bharat Viswanadham, ATM, Chris Nauroth,
> > Sanjay Radia, Atul Sikaria, and Peng Li for all your input into the
> > design, testing, and review of this feature.
> >
> > The vote will close no earlier than one week from today, 12/15. -C
> >
> > [1]: 

Re: [VOTE] Release Apache Hadoop 2.7.5 (RC1)

2017-12-13 Thread Eric Payne
Thanks for the hard work on this release, Konstantin.
+1 (binding)
- Built from source
- Verified that refreshing of queues works as expected.

- Verified can run multiple users in a single queue
- Ran terasort test
- Verified that cross-queue preemption works as expected
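
The queue-refresh check above is essentially (a sketch):

  # edit capacity-scheduler.xml, then reload the queue definitions without restarting the RM
  yarn rmadmin -refreshQueues
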
Thanks. Eric Payne

  From: Konstantin Shvachko 
 To: "common-...@hadoop.apache.org" ; 
"hdfs-dev@hadoop.apache.org" ; 
"mapreduce-...@hadoop.apache.org" ; 
"yarn-...@hadoop.apache.org"  
 Sent: Thursday, December 7, 2017 9:22 PM
 Subject: [VOTE] Release Apache Hadoop 2.7.5 (RC1)
   
Hi everybody,

I updated CHANGES.txt and fixed documentation links.
Also committed  MAPREDUCE-6165, which fixes a consistently failing test.

This is RC1 for the next dot release of the Apache Hadoop 2.7 line. The
previous one, 2.7.4, was released August 4, 2017.
Release 2.7.5 includes critical bug fixes and optimizations. See more
details in Release Note:
http://home.apache.org/~shv/hadoop-2.7.5-RC1/releasenotes.html

The RC1 is available at: http://home.apache.org/~shv/hadoop-2.7.5-RC1/

Please give it a try and vote on this thread. The vote will run for 5 days
ending 12/13/2017.

My up to date public key is available from:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

Thanks,
--Konstantin


   

Re: [VOTE] Merge HDFS-9806 to trunk

2017-12-13 Thread Anu Engineer
Hi Virajith / Chris / Thomas / Ewan,

Thanks for developing this feature and getting it to the merge state.
I would like to vote +1 for this merge. Thanks for all the hard work.

Thanks
Anu


On 12/8/17, 7:11 PM, "Virajith Jalaparti"  wrote:

Hi,

We have tested the HDFS-9806 branch in two settings:

(i) 26 node bare-metal cluster, with PROVIDED storage configured to point
to another instance of HDFS (containing 468 files, total of ~400GB of
data). Half of the Datanodes are configured with only DISK volumes and
the other half have both DISK and PROVIDED volumes.
(ii) 8 VMs on Azure, with PROVIDED storage configured to point to a WASB
account (containing 26,074 files and ~1.3TB of data). All Datanodes are
configured with DISK and PROVIDED volumes.

(i) was tested using both the text-based alias map (TextFileRegionAliasMap)
and the in-memory leveldb-based alias map (InMemoryLevelDBAliasMapClient),
while (ii) was tested using the text-based alias map only.

Steps followed:
(0) Build from apache/HDFS-9806. (Note that for the leveldb-based alias
map, the patch posted to HDFS-12912
 needs to be applied; we
will commit this to apache/HDFS-9806 after review).
(1) Generate the FSImage using the image generation tool with the
appropriate remote location (hdfs:// in (i) and wasb:// in (ii)).
(2) Bring up the HDFS cluster.
(3) Verify that the remote namespace is reflected correctly and data on
remote store can be accessed. Commands ran: ls, copyToLocal, fsck, getrep,
setrep, getStoragePolicy
(4) Run Sort and Gridmix jobs on the data in the remote location with the
input paths pointing to the local HDFS.
(5) Increase replication of the PROVIDED files and verified that local
(DISK) replicas were created for the PROVIDED replicas, using fsck.
(6) Verify that Provided storage capacity is shown correctly on the NN and
Datanode Web-UI.
(7) Bring down datanodes, one by one. When all are down, verify NN reports
all PROVIDED files as missing. Bringing back up any one Datanode makes all
the data available.
(8) Restart NN and verify data is still accessible.
(9) Verify that writes to local HDFS continue to work.
(10) Bring down all Datanodes except one. Start decommissioning the
remaining Datanode. Verify that the data in the PROVIDED storage is still
accessible.
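
Steps (3) and (5) above correspond roughly to commands like the following (a
sketch; /remote and file1 stand in for whatever paths the image generation tool
mounted):

  hdfs dfs -ls /remote
  hdfs dfs -copyToLocal /remote/file1 /tmp/file1
  hdfs fsck /remote -files -blocks -locations
  hdfs dfs -setrep -w 3 /remote/file1   # should create local DISK replicas for PROVIDED blocks
  hdfs storagepolicies -getStoragePolicy -path /remote/file1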

Apart from the above, we ported the changes in HDFS-9806 to branch-2.7 and
deployed it on a ~800 node cluster as one of the sub-clusters in a
Router-based Federated HDFS of nearly 4000 nodes (with help from Inigo
Goiri). We mounted about 1000 files, 650TB of remote data (~2.6million
blocks with 256MB block size) in this cluster using the text-based alias
map. We verified that the basic commands (ls, copyToLocal, setrep)  work.
We also ran spark jobs against this cluster.

-Virajith


On Fri, Dec 8, 2017 at 3:44 PM, Chris Douglas  wrote:

> Discussion thread: https://s.apache.org/kxT1
>
> We're down to the last few issues and are preparing the branch to
> merge to trunk. We'll post merge patches to HDFS-9806 [1]. Minor,
> "cleanup" tasks (checkstyle, findbugs, naming, etc.) will be tracked
> in HDFS-12712 [2].
>
> We've tried to ensure that when this feature is disabled, HDFS is
> unaffected. For those reviewing this, please look for places where
> this might add overheads and we'll address them before the merge. The
> site documentation [3] and design doc [4] should be up to date and
> sufficient to try this out. Again, please point out where it is
> unclear and we can address it.
>
> This has been a long effort and we're grateful for the support we've
> received from the community. In particular, thanks to Íñigo Goiri,
> Andrew Wang, Anu Engineer, Steve Loughran, Sean Mackrory, Lukas
> Majercak, Uma Gunuganti, Kai Zheng, Rakesh Radhakrishnan, Sriram Rao,
> Lei Xu, Zhe Zhang, Jing Zhao, Bharat Viswanadham, ATM, Chris Nauroth,
> Sanjay Radia, Atul Sikaria, and Peng Li for all your input into the
> design, testing, and review of this feature.
>
> The vote will close no earlier than one week from today, 12/15. -C
>
> [1]: https://issues.apache.org/jira/browse/HDFS-9806
> [2]: https://issues.apache.org/jira/browse/HDFS-12712
> [3]: https://github.com/apache/hadoop/blob/HDFS-9806/hadoop-
> hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
> [4]: https://issues.apache.org/jira/secure/attachment/
> 12875791/HDFS-9806-design.002.pdf
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: 

[jira] [Resolved] (HDFS-12265) Ozone : better handling of operation fail due to chill mode

2017-12-13 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang resolved HDFS-12265.
---
  Resolution: Fixed
Release Note: Looks like this has been handled as part of HDFS-12387; closing
this JIRA.

> Ozone : better handling of operation fail due to chill mode
> ---
>
> Key: HDFS-12265
> URL: https://issues.apache.org/jira/browse/HDFS-12265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chen Liang
>Priority: Minor
>  Labels: OzonePostMerge
>
> Currently, if someone tries to create a container while SCM is in chill mode,
> there will be an INTERNAL_ERROR exception, which is not very informative and
> can be confusing to debug.
> We should make it easier to identify problems caused by chill mode. For
> example, we may detect if SCM is in chill mode and report back to the client in
> some way, such that the client can back off and try again later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

2017-12-13 Thread Naganarasimha Garla
Thanks Junping for the release. +1 (binding)

On a single-node pseudo cluster, I performed the following tests:
- Downloaded the tars and verified the signatures, installed using the tar
- Successfully ran a few MR jobs
- Verified a few HDFS operations
- Verified the RM, NM and HDFS web UIs
- Configured labels and submitted some apps

Thanks and Regards,
+ Naga

On Wed, Dec 13, 2017 at 8:14 PM, Sunil G  wrote:

> +1 (binding)
>
> Thanks Junping for the effort.
> I have deployed a cluster built from source tar ball.
>
>
>    - Ran a few MR apps and verified the UI. App-related CLI commands are also
>    fine.
>- Tested below feature sanity
>   - Application priority
>   - Application timeout
>- Tested basic NodeLabel scenarios.
>   - Added some labels to a couple of nodes
>   - Verified the old UI for labels
>   - Submitted apps to the labelled cluster and it works fine.
>   - Also performed a few CLI commands related to node labels
>- Test basic HA cases
>
>
> Thanks
> Sunil G
>
>
> On Tue, Dec 5, 2017 at 3:28 PM Junping Du  wrote:
>
> > Hi all,
> >  I've created the first release candidate (RC0) for Apache Hadoop
> > 2.8.3. This is our next maintenance release, following up 2.8.2. It includes 79
> > important fixes and improvements.
> >
> >   The RC artifacts are available at:
> > http://home.apache.org/~junping_du/hadoop-2.8.3-RC0
> >
> >   The RC tag in git is: release-2.8.3-RC0
> >
> >   The maven artifacts are available via repository.apache.org at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1072
> >
> >   Please try the release and vote; the vote will run for the usual 5
> > working days, ending on 12/12/2017 PST time.
> >
> > Thanks,
> >
> > Junping
> >
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-12-13 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/621/

[Dec 12, 2017 6:56:26 PM] (jlowe) YARN-7625. Expose NM node/containers resource 
utilization in JVM
[Dec 12, 2017 9:35:56 PM] (jianhe) YARN-7565. Yarn service pre-maturely 
releases the container after AM
[Dec 12, 2017 10:04:15 PM] (jlowe) YARN-7595. Container launching code 
suppresses close exceptions after
[Dec 13, 2017 5:11:41 AM] (wwei) YARN-7647. NM print inappropriate error log 
when node-labels is enabled.
[Dec 13, 2017 10:21:49 AM] (sunilg) YARN-7641. Allow searchable filter for 
Application page log viewer in
[Dec 13, 2017 11:16:12 AM] (sunilg) YARN-7536. em-table improvement for better 
filtering in new YARN UI.
[Dec 13, 2017 4:30:07 PM] (sunilg) YARN-7383. Node resource is not parsed 
correctly for resource names

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-13 Thread Sean Mackrory
+1 (non-binding)

* Verified md5 of all artifacts
* Built with -Pdist
* Ran several s3a shell commands
* Started a pseudo-distributed HDFS and YARN cluster
* Ran grep and pi MR examples
* Sanity-checked contents of release notes, rat report, and changes

On Fri, Dec 8, 2017 at 1:31 PM, Andrew Wang 
wrote:

> Hi all,
>
> Let me start, as always, by thanking the efforts of all the contributors
> who contributed to this release, especially those who jumped on the issues
> found in RC0.
>
> I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates 302
> fixed JIRAs since the previous 3.0.0-beta1 release.
>
> You can find the artifacts here:
>
> http://home.apache.org/~wang/3.0.0-RC1/
>
> I've done the traditional testing of building from the source tarball and
> running a Pi job on a single node cluster. I also verified that the shaded
> jars are not empty.
>
> Found one issue that create-release (probably due to the mvn deploy change)
> didn't sign the artifacts, but I fixed that by calling mvn one more time.
> Available here:
>
> https://repository.apache.org/content/repositories/orgapachehadoop-1075/
>
> This release will run the standard 5 days, closing on Dec 13th at 12:31pm
> Pacific. My +1 to start.
>
> Best,
> Andrew
>


Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-13 Thread Wangda Tan
Thanks Andrew for driving this.

+1 (Binding)

Ran SLS and the CapacityScheduler perf unit test and saw performance similar
to trunk.

Compiled from source, deployed a single-node cluster and ran several jobs.

Best,
Wangda




On Wed, Dec 13, 2017 at 7:29 AM, Sunil G  wrote:

> +1 (binding)
>
> Thanks Andrew Wang for driving this effort and also thanks to all others
> who helped in this release. Kudos!!!
>
> I tested this RC by building it from source. I met with couple of issues
> (not blocker) HADOOP-15116 and YARN-7650. This could be tracked separately.
>
>
>- Ran many MR apps and verified both new YARN UI and old RM UI.
>- Tested below feature sanity and got results as per the behavior
>   - Application priority (verified CLI/REST/UI etc)
>   - Application timeout
>   - Intra Queue preemption with priority based
>   - Inter Queue preemption
>- Tested basic NodeLabel scenarios.
>   - Added couple of labels to few of nodes and behavior is coming
>   correct.
>   - Verified old UI  and new YARN UI for labels.
>   - Submitted apps to labelled cluster and it works fine.
>   - Also performed few cli commands related to nodelabel.
>- Test basic HA cases and seems correct. However I got one issue.
>Raised HADOOP-15116 as its not a blocker.
>- Also tested new YARN UI . All pages are getting loaded correctly.
>(User must enable CORS to access NodeManager pages)
>- *Performance test*: I ran a tight loop perf test on CS
>TestCapacitySchedulerPerf#testUserLimitThroughputForTwoResources.
>Results are a bit off w.r.t 2.8  (~5% less). I will open a ticket and
>investigate by doing more tests to see if its to be addressed or not.
>
>
> - Sunil G
>
>
>
> On Sat, Dec 9, 2017 at 2:01 AM Andrew Wang 
> wrote:
>
> > Hi all,
> >
> > Let me start, as always, by thanking the efforts of all the contributors
> > who contributed to this release, especially those who jumped on the
> issues
> > found in RC0.
> >
> > I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates 302
> > fixed JIRAs since the previous 3.0.0-beta1 release.
> >
> > You can find the artifacts here:
> >
> > http://home.apache.org/~wang/3.0.0-RC1/
> >
> > I've done the traditional testing of building from the source tarball and
> > running a Pi job on a single node cluster. I also verified that the
> shaded
> > jars are not empty.
> >
> > Found one issue that create-release (probably due to the mvn deploy
> change)
> > didn't sign the artifacts, but I fixed that by calling mvn one more time.
> > Available here:
> >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1075/
> >
> > This release will run the standard 5 days, closing on Dec 13th at 12:31pm
> > Pacific. My +1 to start.
> >
> > Best,
> > Andrew
> >
>


Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-13 Thread Sunil G
+1 (binding)

Thanks Andrew Wang for driving this effort and also thanks to all others
who helped in this release. Kudos!!!

I tested this RC by building it from source. I ran into a couple of issues
(not blockers), HADOOP-15116 and YARN-7650; these can be tracked separately.


   - Ran many MR apps and verified both the new YARN UI and the old RM UI.
   - Tested the sanity of the features below and saw the expected behavior:
      - Application priority (verified via CLI/REST/UI etc.)
      - Application timeout
      - Intra-queue preemption based on priority
      - Inter-queue preemption
   - Tested basic NodeLabel scenarios:
      - Added a couple of labels to a few nodes and the behavior is correct.
      - Verified the old UI and the new YARN UI for labels.
      - Submitted apps to a labelled cluster and it works fine.
      - Also performed a few CLI commands related to node labels.
   - Tested basic HA cases and they seem correct. However, I hit one issue and
   raised HADOOP-15116, as it is not a blocker.
   - Also tested the new YARN UI; all pages load correctly.
   (Users must enable CORS to access NodeManager pages.)
   - *Performance test*: I ran a tight-loop perf test on CS
   TestCapacitySchedulerPerf#testUserLimitThroughputForTwoResources.
   Results are a bit off w.r.t. 2.8 (~5% lower). I will open a ticket and
   investigate with more tests to see whether it needs to be addressed.


- Sunil G



On Sat, Dec 9, 2017 at 2:01 AM Andrew Wang  wrote:

> Hi all,
>
> Let me start, as always, by thanking the efforts of all the contributors
> who contributed to this release, especially those who jumped on the issues
> found in RC0.
>
> I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates 302
> fixed JIRAs since the previous 3.0.0-beta1 release.
>
> You can find the artifacts here:
>
> http://home.apache.org/~wang/3.0.0-RC1/
>
> I've done the traditional testing of building from the source tarball and
> running a Pi job on a single node cluster. I also verified that the shaded
> jars are not empty.
>
> Found one issue that create-release (probably due to the mvn deploy change)
> didn't sign the artifacts, but I fixed that by calling mvn one more time.
> Available here:
>
> https://repository.apache.org/content/repositories/orgapachehadoop-1075/
>
> This release will run the standard 5 days, closing on Dec 13th at 12:31pm
> Pacific. My +1 to start.
>
> Best,
> Andrew
>


RE: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-13 Thread Brahma Reddy Battula
+1 (non-binding). Thanks Andrew Wang for driving this.

- Built from the source
- Installed a 3-node HA cluster
- Verified basic shell commands
- Ran sample jobs like pi and wordcount
- Verified the UIs



--Brahma Reddy Battula


-Original Message-
From: Andrew Wang [mailto:andrew.w...@cloudera.com] 
Sent: 09 December 2017 02:01
To: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; 
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
Subject: [VOTE] Release Apache Hadoop 3.0.0 RC1

Hi all,

Let me start, as always, by thanking the efforts of all the contributors who 
contributed to this release, especially those who jumped on the issues found in 
RC0.

I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates 302 fixed 
JIRAs since the previous 3.0.0-beta1 release.

You can find the artifacts here:

http://home.apache.org/~wang/3.0.0-RC1/

I've done the traditional testing of building from the source tarball and 
running a Pi job on a single node cluster. I also verified that the shaded jars 
are not empty.

Found one issue that create-release (probably due to the mvn deploy change) 
didn't sign the artifacts, but I fixed that by calling mvn one more time.
Available here:

https://repository.apache.org/content/repositories/orgapachehadoop-1075/

This release will run the standard 5 days, closing on Dec 13th at 12:31pm 
Pacific. My +1 to start.

Best,
Andrew


Re: [VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-13 Thread Rohith Sharma K S
+1 (binding)

- Built from source and deployed a 3-node cluster
- Installed an RM HA cluster with ATSv2 enabled and the new YARN UI
- Verified:
-- RM HA switch / RM restart / RM work-preserving restart
-- NM work-preserving restart
-- Ran sample MR jobs and Distributed Shell along with multiple RM and NM
switches
- Verified ATSv2 entity data and REST API validation, with HBase 1.2.6 as the
back end
- Verified the priority and timeout features of RM
- Verified the new YARN UI and its pages, along with ATSv2 integration

Thanks & Regards
Rohith Sharma K S


On 9 December 2017 at 02:01, Andrew Wang  wrote:

> Hi all,
>
> Let me start, as always, by thanking the efforts of all the contributors
> who contributed to this release, especially those who jumped on the issues
> found in RC0.
>
> I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates 302
> fixed JIRAs since the previous 3.0.0-beta1 release.
>
> You can find the artifacts here:
>
> http://home.apache.org/~wang/3.0.0-RC1/
>
> I've done the traditional testing of building from the source tarball and
> running a Pi job on a single node cluster. I also verified that the shaded
> jars are not empty.
>
> Found one issue that create-release (probably due to the mvn deploy change)
> didn't sign the artifacts, but I fixed that by calling mvn one more time.
> Available here:
>
> https://repository.apache.org/content/repositories/orgapachehadoop-1075/
>
> This release will run the standard 5 days, closing on Dec 13th at 12:31pm
> Pacific. My +1 to start.
>
> Best,
> Andrew
>


Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

2017-12-13 Thread Sunil G
+1 (binding)

Thanks Junping for the effort.
I have deployed a cluster built from the source tarball.


   - Ran a few MR apps and verified the UI. App-related CLI commands are also
   fine.
   - Tested the sanity of the features below:
      - Application priority
      - Application timeout
   - Tested basic NodeLabel scenarios:
      - Added some labels to a couple of nodes
      - Verified the old UI for labels
      - Submitted apps to a labelled cluster and it works fine
      - Also performed a few CLI commands related to node labels
   - Tested basic HA cases


Thanks
Sunil G


On Tue, Dec 5, 2017 at 3:28 PM Junping Du  wrote:

> Hi all,
>  I've created the first release candidate (RC0) for Apache Hadoop
> 2.8.3. This is our next maint release to follow up 2.8.2. It includes 79
> important fixes and improvements.
>
>   The RC artifacts are available at:
> http://home.apache.org/~junping_du/hadoop-2.8.3-RC0
>
>   The RC tag in git is: release-2.8.3-RC0
>
>   The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1072
>
>   Please try the release and vote; the vote will run for the usual 5
> working days, ending on 12/12/2017 PST time.
>
> Thanks,
>
> Junping
>


Re: [VOTE] Release Apache Hadoop 2.7.5 (RC1)

2017-12-13 Thread Rohith Sharma K S
+1 (binding)

Built from source and deployed a non-secure 3-node cluster.
Verified:
- RM HA / RM restart / RM work-preserving restart
- Ran sample MR and Distributed Shell jobs

Thanks & Regards
Rohith Sharma K S


On 8 December 2017 at 08:52, Konstantin Shvachko 
wrote:

> Hi everybody,
>
> I updated CHANGES.txt and fixed documentation links.
> Also committed  MAPREDUCE-6165, which fixes a consistently failing test.
>
> This is RC1 for the next dot release of the Apache Hadoop 2.7 line. The
> previous one, 2.7.4, was released on August 4, 2017.
> Release 2.7.5 includes critical bug fixes and optimizations. See more
> details in Release Note:
> http://home.apache.org/~shv/hadoop-2.7.5-RC1/releasenotes.html
>
> The RC1 is available at: http://home.apache.org/~shv/hadoop-2.7.5-RC1/
>
> Please give it a try and vote on this thread. The vote will run for 5 days
> ending 12/13/2017.
>
> My up to date public key is available from:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Thanks,
> --Konstantin
>


Re: [VOTE] Release Apache Hadoop 2.8.3 (RC0)

2017-12-13 Thread Rohith Sharma K S
+1 (Binding)

Built from source and deployed a non-secure 3-node cluster.
Verified:
- RM HA / RM restart / RM work-preserving restart
- NM work-preserving restart
- Ran sample MR and Distributed Shell jobs

Thanks & Regards
Rohith Sharma K S

On 5 December 2017 at 15:28, Junping Du  wrote:

> Hi all,
>  I've created the first release candidate (RC0) for Apache Hadoop
> 2.8.3. This is our next maint release to follow up 2.8.2. It includes 79
> important fixes and improvements.
>
>   The RC artifacts are available at: http://home.apache.org/~
> junping_du/hadoop-2.8.3-RC0
>
>   The RC tag in git is: release-2.8.3-RC0
>
>   The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1072
>
>   Please try the release and vote; the vote will run for the usual 5
> working days, ending on 12/12/2017 PST time.
>
> Thanks,
>
> Junping
>