Re: [VOTE] Release Apache Hadoop 3.0.0 RC0

2017-11-17 Thread Yung-An He
+1 (non-binding)

All the testing I've done is listed below:

Tested all the download links: OK
Checked MD5 checksums of all download files: OK
Started a cluster from the binary tarball via Docker containers: OK
Executed basic commands via `hadoop fs`: OK
Executed a Word Count job: OK
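
For reference, the basic commands exercised were of this form (illustrative
paths, not the exact ones used):

    hadoop fs -mkdir -p /tmp/rc0-smoke
    hadoop fs -put README.txt /tmp/rc0-smoke
    hadoop fs -ls /tmp/rc0-smoke
    hadoop fs -cat /tmp/rc0-smoke/README.txt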




2017-11-15 5:34 GMT+08:00 Andrew Wang :

> Hi folks,
>
> Thanks as always to the many, many contributors who helped with this
> release. I've created RC0 for Apache Hadoop 3.0.0. The artifacts are
> available here:
>
> http://people.apache.org/~wang/3.0.0-RC0/
>
> This vote will run 5 days, ending on Nov 19th at 1:30pm Pacific.
>
> 3.0.0 GA contains 291 fixed JIRA issues since 3.0.0-beta1. Notable
> additions include the merge of YARN resource types, API-based configuration
> of the CapacityScheduler, and HDFS router-based federation.
>
> I've done my traditional testing with a pseudo cluster and a Pi job. My +1
> to start.
>
> Best,
> Andrew
>


Re: [VOTE] Release Apache Hadoop 3.0.0 RC0

2017-11-17 Thread Andrew Wang
Hi Arpit,

I agree the timing is not great here, but extending it to meaningfully
avoid the holidays would mean extending it an extra week (e.g. to the
29th). We've been coordinating with ASF PR for that Tuesday, so I'd really,
really like to get the RC out before then.

In terms of downstream testing, we've done extensive integration testing
with downstreams via the alphas and betas, and we have continuous
integration running at Cloudera against branch-3.0. Because of this, I have
more confidence in our integration for 3.0.0 than most Hadoop releases.

Is it meaningful to extend to, say, the 21st, which provides for a full week
of voting?

Best,
Andrew

On Fri, Nov 17, 2017 at 1:27 PM, Arpit Agarwal 
wrote:

> Hi Andrew,
>
> Thank you for your hard work in getting us to this step. This is our first
> major GA release in many years.
>
> I feel a 5-day vote window ending over the weekend before Thanksgiving may
> not provide sufficient time to evaluate this RC, especially for downstream
> components.
>
> Would you please consider extending the voting deadline until a few days
> after the Thanksgiving holiday? It would be a courtesy to our broader
> community and I see no harm in giving everyone a few days to evaluate it
> more thoroughly.
>
> On a lighter note, your deadline is also 4 minutes short of the required 5
> days. :)
>
> Regards,
> Arpit
>
>
>
> On 11/14/17, 1:34 PM, "Andrew Wang"  wrote:
>
> Hi folks,
>
> Thanks as always to the many, many contributors who helped with this
> release. I've created RC0 for Apache Hadoop 3.0.0. The artifacts are
> available here:
>
> http://people.apache.org/~wang/3.0.0-RC0/
>
> This vote will run 5 days, ending on Nov 19th at 1:30pm Pacific.
>
> 3.0.0 GA contains 291 fixed JIRA issues since 3.0.0-beta1. Notable
> additions include the merge of YARN resource types, API-based
> configuration
> of the CapacityScheduler, and HDFS router-based federation.
>
> I've done my traditional testing with a pseudo cluster and a Pi job.
> My +1
> to start.
>
> Best,
> Andrew
>
>
>


Re: [DISCUSS] Merge Storage Policy Satisfier (SPS) [HDFS-10285] feature branch to trunk

2017-11-17 Thread Uma Maheswara Rao G
Update: We worked on the review comments and the additional JIRAs mentioned
above.

>1. After the feedback from Andrew, Eddy, and Xiao in JIRA reviews, we
planned to take up recursive API support. HDFS-12291

We have now provided the recursive API support.

>2. Xattr optimizations HDFS-12225
Improved this portion as well.

>3. Few other review comments already fixed and committed: HDFS-12214
<https://issues.apache.org/jira/browse/HDFS-12214>
Fixed the comments.

We are continuing to test the feature and it is working well so far. We also
uploaded a combined patch and got a good QA report.

If there are no further objections, we would like to start the merge vote
tomorrow. Please note that this feature will be disabled by default.
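
For context, a minimal sketch of the HBase-style flow discussed in the thread
below (illustrative paths and setup, not from an actual deployment):

    // Rename a file into a directory carrying a different storage policy,
    // then ask SPS to move its blocks to match that policy (HDFS-10285 API).
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
    dfs.rename(new Path("/hot/region-file"), new Path("/cold/region-file"));
    dfs.satisfyStoragePolicy(new Path("/cold/region-file"));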

Regards,
Uma

On Fri, Aug 18, 2017 at 11:27 PM, Gangumalla, Uma 
wrote:

> Hi Andrew,
>
> >Great to hear. It'd be nice to define which use cases are met by the
> current version of SPS, and which will be handled after the merge.
> After the discussions in JIRA, we planned to support the recursive API as
> well. The primary use case we planned for was HBase. Please check the next
> point for use case details.
>
> >A bit more detail in the design doc on how HBase would use this feature
> would also be helpful. Is there an HBase JIRA already?
> Please find the use case details at this comment in JIRA:
> https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16120227&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16120227
>
> >I also spent some more time with the design doc and posted a few
> questions on the JIRA.
> Thank you for the reviews.
>
> To summarize the discussions in JIRA:
> 1. After the feedback from Andrew, Eddy, and Xiao in JIRA reviews, we
> planned to take up recursive API support. HDFS-12291
> <https://issues.apache.org/jira/browse/HDFS-12291> (Rakesh started the
> work on it)
> 2. Xattr optimizations HDFS-12225
> <https://issues.apache.org/jira/browse/HDFS-12225> (Patch available)
> 3. Few other review comments already fixed and committed: HDFS-12214
> <https://issues.apache.org/jira/browse/HDFS-12214>
>
> For tracking the follow-up tasks we filed HDFS-12226; they should not be
> critical for the merge.
>
> Regards,
> Uma
>
> From: Andrew Wang
> Date: Friday, July 28, 2017 at 11:33 AM
> To: Uma Gangumalla
> Cc: "hdfs-dev@hadoop.apache.org" <hdfs-dev@hadoop.apache.org>
> Subject: Re: [DISCUSS] Merge Storage Policy Satisfier (SPS) [HDFS-10285]
> feature branch to trunk
>
> Hi Uma,
>
> > If there are still plans to make changes that affect compatibility (the
> hybrid RPC and bulk DN work mentioned sound like they would), then we can
> cut branch-3 first, or wait to merge until after these tasks are finished.
> [Uma] We don't see those 2 items as high priority for the feature. Users
> would be able to use the feature with the current code base and API. So, we
> would consider them only after branch-3. That should be perfectly fine IMO.
> The current API is very useful for the HBase scenario. In HBase's case,
> they will rename files into a directory with a different policy. They will
> not always set the policies. So, when files are renamed into a different
> policy directory, they can simply call satisfyStoragePolicy; they don't
> need any hybrid API.
>
> Great to hear. It'd be nice to define which use cases are met by the
> current version of SPS, and which will be handled after the merge.
>
> A bit more detail in the design doc on how HBase would use this feature
> would also be helpful. Is there an HBase JIRA already?
>
> I also spent some more time with the design doc and posted a few questions
> on the JIRA.
>
> Best,
> Andrew
>


Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2017-11-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/44/

[Nov 16, 2017 5:12:52 PM] (sunilg) YARN-7469. Capacity Scheduler Intra-queue 
preemption: User can starve if
[Nov 16, 2017 7:12:01 PM] (rkanter) HADOOP-14982. Clients using 
FailoverOnNetworkExceptionRetry can go into
[Nov 17, 2017 6:35:35 AM] (vvasudev) YARN-7430. Enable user re-mapping for 
Docker containers by default.


[Error replacing 'FILE' - Workspace is not accessible]

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

Re: [VOTE] Release Apache Hadoop 3.0.0 RC0

2017-11-17 Thread Eric Payne
Thanks for all of the work it took to finally get here.
+1 (binding)
Built from source and stood up a pseudo cluster with 4 NMs.
Tested the following:
o Cross queue preemption
o Restarting the RM preserves work
o User limits are honored during in-queue preemption
o Priorities are honored during in-queue preemption
o Users with different weights are assigned resources proportional to their
weights.
o User weights are refreshable, and in-queue preemption works to honor the
post-refresh weights
Thanks,
-Eric Payne

 From: Andrew Wang
 To: common-...@hadoop.apache.org; yarn-...@hadoop.apache.org;
mapreduce-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org
 Sent: Tuesday, November 14, 2017 3:34 PM
 Subject: [VOTE] Release Apache Hadoop 3.0.0 RC0
Hi folks,

Thanks as always to the many, many contributors who helped with this
release. I've created RC0 for Apache Hadoop 3.0.0. The artifacts are
available here:

http://people.apache.org/~wang/3.0.0-RC0/

This vote will run 5 days, ending on Nov 19th at 1:30pm Pacific.

3.0.0 GA contains 291 fixed JIRA issues since 3.0.0-beta1. Notable
additions include the merge of YARN resource types, API-based configuration
of the CapacityScheduler, and HDFS router-based federation.

I've done my traditional testing with a pseudo cluster and a Pi job. My +1
to start.

Best,
Andrew


Re: [VOTE] Release Apache Hadoop 2.9.0 (RC3)

2017-11-17 Thread Subru Krishnan
Wrapping up the vote with my +1.

Deployed RC3 on a federated YARN cluster with 6 sub-clusters:
- ran multiple sample jobs
- enabled opportunistic containers and submitted more samples
- configured HDFS federation and reran jobs.


With 13 binding +1s, 7 non-binding +1s, and no -1s, I am pleased to announce
that the vote has passed successfully.

Thanks to the many of you who contributed to the release and made this
possible and to everyone in this thread who took the time/effort to
validate and vote!

We’ll push the release bits and send out an announcement for 2.9.0 soon.

Cheers,
Subru

On Fri, Nov 17, 2017 at 12:41 PM Eric Payne 
wrote:

> Thanks Arun and Subru for the hard work on this release.
>
> +1 (binding)
>
> Built from source and stood up a pseudo cluster with 4 NMs
>
> Tested the following:
>
> o User limits are honored during In-queue preemption
>
> o Priorities are honored during In-queue preemption
>
> o Can kill applications from the command line
>
> o Users with different weights are assigned resources proportional to
> their weights.
>
> Thanks,
> -Eric Payne
>
>
> --
> *From:* Arun Suresh 
> *To:* yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org; Hadoop
> Common ; Hdfs-dev <
> hdfs-dev@hadoop.apache.org>
> *Cc:* Subramaniam Krishnan 
> *Sent:* Monday, November 13, 2017 6:10 PM
>
> *Subject:* [VOTE] Release Apache Hadoop 2.9.0 (RC3)
>
> Hi Folks,
>
> Apache Hadoop 2.9.0 is the first release of the Hadoop 2.9 line and will be
> the starting release for the Apache Hadoop 2.9.x line - it includes 30 New
> Features with 500+ subtasks, 407 Improvements, and 790 Bug fixes among the
> new fixed issues since 2.8.2.
>
> More information about the 2.9.0 release plan can be found here:
> https://cwiki.apache.org/confluence/display/HADOOP/Roadmap#Roadmap-Version2.9
>
> New RC is available at:
> https://home.apache.org/~asuresh/hadoop-2.9.0-RC3/
>
> The RC tag in git is: release-2.9.0-RC3, and the latest commit id is:
> 756ebc8394e473ac25feac05fa493f6d612e6c50.
>
> The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1068/
>
> We are carrying over the votes from the previous RC given that the delta is
> the license fix.
>
> Given the above - we are also going to stick with the original deadline for
> the vote : ending on Friday 17th November 2017 2pm PT time.
>
> Thanks,
> -Arun/Subru
>
>
>


Re: [VOTE] Release Apache Hadoop 3.0.0 RC0

2017-11-17 Thread Arpit Agarwal
Hi Andrew,

Thank you for your hard work in getting us to this step. This is our first 
major GA release in many years.

I feel a 5-day vote window ending over the weekend before Thanksgiving may not
provide sufficient time to evaluate this RC, especially for downstream
components.

Would you please consider extending the voting deadline until a few days after
the Thanksgiving holiday? It would be a courtesy to our broader community and I
see no harm in giving everyone a few days to evaluate it more thoroughly.

On a lighter note, your deadline is also 4 minutes short of the required 5 
days. :)

Regards,
Arpit



On 11/14/17, 1:34 PM, "Andrew Wang"  wrote:

Hi folks,

Thanks as always to the many, many contributors who helped with this
release. I've created RC0 for Apache Hadoop 3.0.0. The artifacts are
available here:

http://people.apache.org/~wang/3.0.0-RC0/

This vote will run 5 days, ending on Nov 19th at 1:30pm Pacific.

3.0.0 GA contains 291 fixed JIRA issues since 3.0.0-beta1. Notable
additions include the merge of YARN resource types, API-based configuration
of the CapacityScheduler, and HDFS router-based federation.

I've done my traditional testing with a pseudo cluster and a Pi job. My +1
to start.

Best,
Andrew




Re: [VOTE] Release Apache Hadoop 2.9.0 (RC3)

2017-11-17 Thread Eric Payne
Thanks Arun and Subru for the hard work on this release.

+1 (binding)

Built from source and stood up a pseudo cluster with 4 NMs
Tested the following:
o User limits are honored during in-queue preemption
o Priorities are honored during in-queue preemption
o Can kill applications from the command line
o Users with different weights are assigned resources proportional to their
weights.

Thanks,
-Eric Payne


 From: Arun Suresh
 To: yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org; Hadoop
Common; Hdfs-dev
 Cc: Subramaniam Krishnan
 Sent: Monday, November 13, 2017 6:10 PM
 Subject: [VOTE] Release Apache Hadoop 2.9.0 (RC3)
Hi Folks,

Apache Hadoop 2.9.0 is the first release of the Hadoop 2.9 line and will be the
starting release for the Apache Hadoop 2.9.x line - it includes 30 New Features
with 500+ subtasks, 407 Improvements, and 790 Bug fixes among the new fixed
issues since 2.8.2.

More information about the 2.9.0 release plan can be found here:
https://cwiki.apache.org/confluence/display/HADOOP/Roadmap#Roadmap-Version2.9

New RC is available at: https://home.apache.org/~asuresh/hadoop-2.9.0-RC3/

The RC tag in git is: release-2.9.0-RC3, and the latest commit id is:
756ebc8394e473ac25feac05fa493f6d612e6c50.

The maven artifacts are available via repository.apache.org at:
https://repository.apache.org/content/repositories/orgapachehadoop-1068/

We are carrying over the votes from the previous RC given that the delta is
the license fix.

Given the above - we are also going to stick with the original deadline for
the vote : ending on Friday 17th November 2017 2pm PT time.

Thanks,
-Arun/Subru


[jira] [Created] (HDFS-12837) Intermittent failure TestReencryptionWithKMS#testReencryptionKMSDown

2017-11-17 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HDFS-12837:
-

 Summary: Intermittent failure 
TestReencryptionWithKMS#testReencryptionKMSDown
 Key: HDFS-12837
 URL: https://issues.apache.org/jira/browse/HDFS-12837
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: kms, namenode
Affects Versions: 3.0.0-beta1
Reporter: Surendra Singh Lilhore



https://builds.apache.org/job/PreCommit-HDFS-Build/22112/testReport/org.apache.hadoop.hdfs.server.namenode/TestReencryptionWithKMS/testReencryptionKMSDown/





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12836) startTxId could be greater than endTxId when tailing in-progress edit log

2017-11-17 Thread Chao Sun (JIRA)
Chao Sun created HDFS-12836:
---

 Summary: startTxId could be greater than endTxId when tailing 
in-progress edit log
 Key: HDFS-12836
 URL: https://issues.apache.org/jira/browse/HDFS-12836
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: Chao Sun
Assignee: Chao Sun


When {{dfs.ha.tail-edits.in-progress}} is true, the edit log tailer will also
tail in-progress edit log segments. However, in the following code:

{code}
if (onlyDurableTxns && inProgressOk) {
  endTxId = Math.min(endTxId, committedTxnId);
}

EditLogInputStream elis = EditLogFileInputStream.fromUrl(
connectionFactory, url, remoteLog.getStartTxId(),
endTxId, remoteLog.isInProgress());
{code}

it is possible for {{remoteLog.getStartTxId()}} to be greater than
{{endTxId}}, which causes the following error:

{code}
2017-11-17 19:55:41,165 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: 
Error replaying edit log at offset 1048576.  Expected transaction ID was 87
Recent opcode offsets: 1048576
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream$PrematureEOFException:
 got premature end-of-file at txid 86; expected file to go up to 85
at 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:197)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:189)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:205)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:882)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:863)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:293)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:427)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:380)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:397)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:481)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:393)
2017-11-17 19:55:41,165 WARN 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Error while reading 
edits from disk. Will try again.
org.apache.hadoop.hdfs.server.namenode.EditLogInputException: Error replaying 
edit log at offset 1048576.  Expected transaction ID was 87
Recent opcode offsets: 1048576
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:218)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:882)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:863)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:293)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:427)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:380)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:397)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:481)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:393)
Caused by: 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream$PrematureEOFException:
 got premature end-of-file at txid 86; expected file to go up to 85
at 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:197)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:189)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:205)
... 9 
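
A minimal sketch of a possible guard, assuming this code runs inside the
segment-selection loop (an illustration, not the committed fix): skip a remote
segment whose start transaction id exceeds the durable end transaction id
before constructing the stream.

{code}
if (onlyDurableTxns && inProgressOk) {
  endTxId = Math.min(endTxId, committedTxnId);
  if (remoteLog.getStartTxId() > endTxId) {
    // Nothing durable to read from this segment yet; skip it rather than
    // building a stream with startTxId > endTxId.
    continue;
  }
}

EditLogInputStream elis = EditLogFileInputStream.fromUrl(
    connectionFactory, url, remoteLog.getStartTxId(),
    endTxId, remoteLog.isInProgress());
{code}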

Re: [VOTE] Release Apache Hadoop 3.0.0 RC0

2017-11-17 Thread Andrew Wang
Thanks for the spot; normally create-release spits those out. I uploaded the
asc and mds files for the release artifacts.

Best,
Andrew

On Thu, Nov 16, 2017 at 11:33 PM, Akira Ajisaka  wrote:

> Hi Andrew,
>
> Signatures are missing. Would you upload them?
>
> Thanks,
> Akira
>
>
> On 2017/11/15 6:34, Andrew Wang wrote:
>
>> Hi folks,
>>
>> Thanks as always to the many, many contributors who helped with this
>> release. I've created RC0 for Apache Hadoop 3.0.0. The artifacts are
>> available here:
>>
>> http://people.apache.org/~wang/3.0.0-RC0/
>>
>> This vote will run 5 days, ending on Nov 19th at 1:30pm Pacific.
>>
>> 3.0.0 GA contains 291 fixed JIRA issues since 3.0.0-beta1. Notable
>> additions include the merge of YARN resource types, API-based
>> configuration
>> of the CapacityScheduler, and HDFS router-based federation.
>>
>> I've done my traditional testing with a pseudo cluster and a Pi job. My +1
>> to start.
>>
>> Best,
>> Andrew
>>
>>


Re: [VOTE] Release Apache Hadoop 2.9.0 (RC3)

2017-11-17 Thread Jason Lowe
Thanks for putting this release together!

+1 (binding)

- Verified signatures and digests
- Successfully built from source including native
- Deployed to single-node cluster and ran some test jobs

Jason


On Mon, Nov 13, 2017 at 6:10 PM, Arun Suresh  wrote:

> Hi Folks,
>
> Apache Hadoop 2.9.0 is the first release of the Hadoop 2.9 line and will be
> the starting release for the Apache Hadoop 2.9.x line - it includes 30 New
> Features with 500+ subtasks, 407 Improvements, and 790 Bug fixes among the
> new fixed issues since 2.8.2.
>
> More information about the 2.9.0 release plan can be found here:
> https://cwiki.apache.org/confluence/display/HADOOP/Roadmap#Roadmap-Version2.9
>
> New RC is available at:
> https://home.apache.org/~asuresh/hadoop-2.9.0-RC3/
>
> The RC tag in git is: release-2.9.0-RC3, and the latest commit id is:
> 756ebc8394e473ac25feac05fa493f6d612e6c50.
>
> The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1068/
>
> We are carrying over the votes from the previous RC given that the delta is
> the license fix.
>
> Given the above - we are also going to stick with the original deadline for
> the vote : ending on Friday 17th November 2017 2pm PT time.
>
> Thanks,
> -Arun/Subru
>


[jira] [Created] (HDFS-12835) RBF: Fix Javadoc parameter errors

2017-11-17 Thread Wei Yan (JIRA)
Wei Yan created HDFS-12835:
--

 Summary: RBF: Fix Javadoc parameter errors
 Key: HDFS-12835
 URL: https://issues.apache.org/jira/browse/HDFS-12835
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [DISCUSS] Apache Hadoop 2.7.5 Release Plan

2017-11-17 Thread Wei-Chiu Chuang
Hi Konstantin,
Thanks for initiating the release effort.

I am marking HDFS-12641 <https://issues.apache.org/jira/browse/HDFS-12641> as
a blocker for Hadoop 2.7.5 because during our internal testing for CDH we
found a regression in HDFS-11445 that was fixed by HDFS-11755 (technically
not a real regression, since HDFS-11755 was committed before HDFS-11445).
The regression results in bogus corrupt block reports. It is not clear to me
if the same behavior is in Apache Hadoop, but since the latter (HDFS-11755)
is currently only in Hadoop 2.8.x and above, I would want to be more cautious
about it.

On Thu, Nov 16, 2017 at 5:20 PM, Konstantin Shvachko 
wrote:

> Hi developers,
>
> We have accumulated about 30 commits on branch-2.7. Those are mostly
> valuable bug fixes, minor optimizations and test corrections. I would like
> to propose to make a quick maintenance release 2.7.5.
>
> If there are no objections I'll start preparations.
>
> Thanks,
> --Konstantin
>



-- 
A very happy Clouderan


[jira] [Created] (HDFS-12834) DFSZKFailoverController on error exits with 0 error code

2017-11-17 Thread Zbigniew Kostrzewa (JIRA)
Zbigniew Kostrzewa created HDFS-12834:
-

 Summary: DFSZKFailoverController on error exits with 0 error code
 Key: HDFS-12834
 URL: https://issues.apache.org/jira/browse/HDFS-12834
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha
Affects Versions: 3.0.0-alpha4, 2.7.3
Reporter: Zbigniew Kostrzewa


On error, {{DFSZKFailoverController}} exits with return code 0, which leads to
problems when integrating it with scripts and monitoring tools, e.g. systemd:
when configured to restart a service only on failure, systemd does not restart
the ZKFC service because it exited with 0.
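
A minimal sketch of the intended behavior (hypothetical method and constant
names, not the actual patch): the fatal-error path should exit with a nonzero
status so that supervisors such as systemd treat the termination as a failure.

{code}
// Hypothetical sketch: report fatal errors with a nonzero exit status so a
// supervisor configured to restart only on failure restarts the daemon.
private static final int EXIT_CODE_FATAL = 1;  // assumed constant name

private void onFatalError(String errorMessage) {
  LOG.fatal("Fatal error occurred: " + errorMessage);
  System.exit(EXIT_CODE_FATAL);  // exiting with 0 here reads as success
}
{code}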

For example, in my case, systemd reported that zkfc exited successfully, but
in the logs I found this:
{noformat}
2017-11-14 05:33:55,075 INFO org.apache.zookeeper.ClientCnxn: Client session 
timed out, have not heard from server in 3334ms for sessionid 
0x15fb794bd240001, closing socket connection and attempting reconnect
2017-11-14 05:33:55,178 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session 
disconnected. Entering neutral mode...
2017-11-14 05:33:55,564 INFO org.apache.zookeeper.ClientCnxn: Opening socket 
connection to server 10.9.4.73/10.9.4.73:2182. Will not attempt to authenticate 
using SASL (unknown error)
2017-11-14 05:33:55,566 INFO org.apache.zookeeper.ClientCnxn: Socket connection 
established to 10.9.4.73/10.9.4.73:2182, initiating session
2017-11-14 05:33:55,569 INFO org.apache.zookeeper.ClientCnxn: Session 
establishment complete on server 10.9.4.73/10.9.4.73:2182, sessionid = 
0x15fb794bd240001, negotiated timeout = 5000
2017-11-14 05:33:55,570 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session 
connected.
2017-11-14 05:33:58,230 INFO org.apache.zookeeper.ClientCnxn: Unable to read 
additional data from server sessionid 0x15fb794bd240001, likely server has 
closed socket, closing socket connection and attempting reconnect
2017-11-14 05:33:58,335 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session 
disconnected. Entering neutral mode...
2017-11-14 05:33:58,402 INFO org.apache.zookeeper.ClientCnxn: Opening socket 
connection to server 10.9.4.138/10.9.4.138:2181. Will not attempt to 
authenticate using SASL (unknown error)
2017-11-14 05:33:58,403 INFO org.apache.zookeeper.ClientCnxn: Socket connection 
established to 10.9.4.138/10.9.4.138:2181, initiating session
2017-11-14 05:33:58,406 INFO org.apache.zookeeper.ClientCnxn: Unable to read 
additional data from server sessionid 0x15fb794bd240001, likely server has 
closed socket, closing socket connection and attempting reconnect
2017-11-14 05:33:59,218 INFO org.apache.zookeeper.ClientCnxn: Opening socket 
connection to server 10.9.4.228/10.9.4.228:2183. Will not attempt to 
authenticate using SASL (unknown error)
2017-11-14 05:33:59,219 INFO org.apache.zookeeper.ClientCnxn: Socket connection 
established to 10.9.4.228/10.9.4.228:2183, initiating session
2017-11-14 05:33:59,221 INFO org.apache.zookeeper.ClientCnxn: Unable to read 
additional data from server sessionid 0x15fb794bd240001, likely server has 
closed socket, closing socket connection and attempting reconnect
2017-11-14 05:34:01,094 INFO org.apache.zookeeper.ClientCnxn: Opening socket 
connection to server 10.9.4.73/10.9.4.73:2182. Will not attempt to authenticate 
using SASL (unknown error)
2017-11-14 05:34:01,094 INFO org.apache.zookeeper.ClientCnxn: Client session 
timed out, have not heard from server in 1773ms for sessionid 
0x15fb794bd240001, closing socket connection and attempting reconnect
2017-11-14 05:34:01,196 FATAL org.apache.hadoop.ha.ActiveStandbyElector: 
Received stat error from Zookeeper. code:CONNECTIONLOSS. Not retrying further 
znode monitoring connection errors.
2017-11-14 05:34:02,153 INFO org.apache.zookeeper.ZooKeeper: Session: 
0x15fb794bd240001 closed
2017-11-14 05:34:02,154 FATAL org.apache.hadoop.ha.ZKFailoverController: Fatal 
error occurred:Received stat error from Zookeeper. code:CONNECTIONLOSS. Not 
retrying further znode monitoring connection errors.
2017-11-14 05:34:02,154 INFO org.apache.zookeeper.ClientCnxn: EventThread shut 
down
2017-11-14 05:34:05,208 INFO org.apache.hadoop.ipc.Server: Stopping server on 
8019
2017-11-14 05:34:05,487 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server 
listener on 8019
2017-11-14 05:34:05,488 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server 
Responder
2017-11-14 05:34:05,487 INFO org.apache.hadoop.ha.ActiveStandbyElector: 
Yielding from election
2017-11-14 05:34:05,488 INFO org.apache.hadoop.ha.HealthMonitor: Stopping 
HealthMonitor thread
2017-11-14 05:34:05,490 FATAL 
org.apache.hadoop.hdfs.tools.DFSZKFailoverController: Got a fatal error, 
exiting now
java.lang.RuntimeException: ZK Failover Controller failed: Received stat error 
from Zookeeper. code:CONNECTIONLOSS. Not retrying further znode monitoring 
connection errors.
at 

Re: [VOTE] Release Apache Hadoop 2.9.0 (RC3)

2017-11-17 Thread Arun Suresh
+1 (binding)

* Built from source
* Set up a 4 node cluster with RM HA enabled
* Performed some basic HDFS commands
* Ran a bunch of pi & sleep jobs - with and without opportunistic containers
* Performed some basic RM failover testing.

Cheers
-Arun

On Thu, Nov 16, 2017 at 11:12 PM, Akira Ajisaka  wrote:

> +1
>
> * Downloaded source tarball and verified checksum and signature
> * Compiled the source with OpenJDK 1.8.0_151 and CentOS 7.4
> * Deployed a pseudo cluster and ran some simple MR jobs
>
> I noticed ISA-L build options are documented in BUILDING.txt
> but the options do not exist in 2.x releases.
> Filed HADOOP-15045 to fix this issue.
> I think this issue doesn't block the release.
>
> Thanks and regards,
> Akira
>
> On 2017/11/14 9:10, Arun Suresh wrote:
>
>> Hi Folks,
>>
>> Apache Hadoop 2.9.0 is the first release of the Hadoop 2.9 line and will be
>> the starting release for the Apache Hadoop 2.9.x line - it includes 30 New
>> Features with 500+ subtasks, 407 Improvements, and 790 Bug fixes among the
>> new fixed issues since 2.8.2.
>>
>> More information about the 2.9.0 release plan can be found here:
>> https://cwiki.apache.org/confluence/display/HADOOP/Roadmap#Roadmap-Version2.9
>>
>> New RC is available at:
>> https://home.apache.org/~asuresh/hadoop-2.9.0-RC3/
>>
>> The RC tag in git is: release-2.9.0-RC3, and the latest commit id is:
>> 756ebc8394e473ac25feac05fa493f6d612e6c50.
>>
>> The maven artifacts are available via repository.apache.org at:
>> https://repository.apache.org/content/repositories/orgapachehadoop-1068/
>>
>> We are carrying over the votes from the previous RC given that the delta
>> is
>> the license fix.
>>
>> Given the above - we are also going to stick with the original deadline
>> for
>> the vote : ending on Friday 17th November 2017 2pm PT time.
>>
>> Thanks,
>> -Arun/Subru
>>
>>


[jira] [Resolved] (HDFS-12820) Decommissioned datanode is counted in service cause datanode allcating failure

2017-11-17 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-12820.

Resolution: Duplicate

Thanks for reporting the issue, [~xiegang112].
Hadoop 2.4.0 is an old release and no longer supported. The issue reported in 
this jira is fixed by HDFS-9279.

I am going to resolve this jira as a dup of HDFS-9279. Please reopen if this is 
not the case.

> Decommissioned datanode is counted in service cause datanode allcating failure
> --
>
> Key: HDFS-12820
> URL: https://issues.apache.org/jira/browse/HDFS-12820
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: block placement
>Affects Versions: 2.4.0
>Reporter: Gang Xie
>
> When allocating a datanode for a DFSClient write with load consideration
> enabled, the allocator checks whether the datanode is overloaded by
> computing the average xceiver count of all in-service datanodes. But if a
> datanode is decommissioned and becomes dead, it is still treated as in
> service, which makes the computed average load much higher than the real
> one, especially when the number of decommissioned datanodes is large. In
> our cluster of 180 datanodes, 100 were decommissioned, and the average load
> was 17. This failed all datanode allocations.
> private void subtract(final DatanodeDescriptor node) {
>   capacityUsed -= node.getDfsUsed();
>   blockPoolUsed -= node.getBlockPoolUsed();
>   xceiverCount -= node.getXceiverCount();
>   if (!(node.isDecommissionInProgress() || node.isDecommissioned())) {
>     nodesInService--;
>     nodesInServiceXceiverCount -= node.getXceiverCount();
>     capacityTotal -= node.getCapacity();
>     capacityRemaining -= node.getRemaining();
>   } else {
>     capacityTotal -= node.getDfsUsed();
>   }
>   cacheCapacity -= node.getCacheCapacity();
>   cacheUsed -= node.getCacheUsed();
> }



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12833) In Distcp, Delete option not having the proper usage message.

2017-11-17 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-12833:


 Summary: In Distcp, Delete option not having the proper usage 
message.
 Key: HDFS-12833
 URL: https://issues.apache.org/jira/browse/HDFS-12833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: distcp, hdfs
Affects Versions: 3.0.0-alpha1
Reporter: Harshakiran Reddy
Priority: Minor


Basically, the -delete option is applicable only together with the -update or
-overwrite options. When I tried it as per the usage message, I got the
exception below.

{noformat}
bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5
2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments:
java.lang.IllegalArgumentException: Delete missing is applicable only with 
update or overwrite options
at 
org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528)
at 
org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487)
at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:141)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
Invalid arguments: Delete missing is applicable only with update or overwrite 
options
usage: distcp OPTIONS [source_path...] <target_path>
  OPTIONS
 -append                Reuse existing data in target files and
                        append new data to them if possible
 -async                 Should distcp execution be blocking
 -atomic                Commit all changes or none
 -bandwidth <arg>       Specify bandwidth per map in MB, accepts
                        bandwidth as a fraction.
 -blocksperchunk <arg>  If set to a positive value, files with more
                        blocks than this value will be split into
                        chunks of <arg> blocks to be
                        transferred in parallel, and reassembled on
                        the destination. By default, <arg> is 0 and
                        the files will be transmitted in their
                        entirety without splitting. This switch is
                        only applicable when the source file system
                        implements the getBlockLocations method and
                        the target file system implements the concat
                        method
 -copybuffersize <arg>  Size of the copy buffer to use. By default,
                        <arg> is 8192B.
 -delete                Delete from target, files missing in source
 -diff <arg>            Use snapshot diff report to identify the
                        difference between source and target
{noformat}

The documentation also does not describe the proper usage.
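
For reference, a sketch of an invocation that passes validation (same
hypothetical paths as above):

{noformat}
bin:> ./hadoop distcp -update -delete /Dir1/distcpdir /Dir/distcpdir5
{noformat}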



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12832) INode.getFullPathName may throw ArrayIndexOutOfBoundsException lead to NameNode exit

2017-11-17 Thread DENG FEI (JIRA)
DENG FEI created HDFS-12832:
---

 Summary: INode.getFullPathName may throw 
ArrayIndexOutOfBoundsException lead to NameNode exit
 Key: HDFS-12832
 URL: https://issues.apache.org/jira/browse/HDFS-12832
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0-beta1, 2.7.4
Reporter: DENG FEI
Priority: Critical


{code:title=INode.java|borderStyle=solid}
public String getFullPathName() {
// Get the full path name of this inode.
if (isRoot()) {
  return Path.SEPARATOR;
}
// compute size of needed bytes for the path
int idx = 0;
for (INode inode = this; inode != null; inode = inode.getParent()) {
  // add component + delimiter (if not tail component)
  idx += inode.getLocalNameBytes().length + (inode != this ? 1 : 0);
}
byte[] path = new byte[idx];
for (INode inode = this; inode != null; inode = inode.getParent()) {
  if (inode != this) {
path[--idx] = Path.SEPARATOR_CHAR;
  }
  byte[] name = inode.getLocalNameBytes();
  idx -= name.length;
  System.arraycopy(name, 0, path, idx, name.length);
}
return DFSUtil.bytes2String(path);
  }
{code}
We found an ArrayIndexOutOfBoundsException at
{{System.arraycopy(name, 0, path, idx, name.length)}} while ReplicaMonitor was
working, and the NameNode quit.

It seems the two loops are not synchronized: the path's length changed between
them.
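
A minimal sketch of one possible defensive fix (hypothetical, not a committed
patch): snapshot the inode chain once, then size and build the path from that
snapshot, so a concurrent move cannot change the total length between the two
passes.

{code}
// Hypothetical sketch: both passes read the same snapshot of components.
List<byte[]> names = new ArrayList<>();
for (INode inode = this; inode != null; inode = inode.getParent()) {
  names.add(inode.getLocalNameBytes());
}
int idx = 0;
for (int i = 0; i < names.size(); i++) {
  idx += names.get(i).length + (i > 0 ? 1 : 0); // component + delimiter
}
byte[] path = new byte[idx];
for (int i = 0; i < names.size(); i++) {
  if (i > 0) {
    path[--idx] = Path.SEPARATOR_CHAR;
  }
  byte[] name = names.get(i);
  idx -= name.length;
  System.arraycopy(name, 0, path, idx, name.length);
}
return DFSUtil.bytes2String(path);
{code}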



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12831) HDFS throws FileNotFoundException on getFileBlockLocations(path-to-directory)

2017-11-17 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-12831:
-

 Summary: HDFS throws FileNotFoundException on 
getFileBlockLocations(path-to-directory)
 Key: HDFS-12831
 URL: https://issues.apache.org/jira/browse/HDFS-12831
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.8.1
Reporter: Steve Loughran


The HDFS implementation of {{getFileBlockLocations(path, offset, len)}} throws 
an exception if the path references a directory. 

The base implementation (and all other filesystems) just returns an empty
array, as implemented in {{getFileBlockLocations(fileStatus, offset, len)}};
this is written up in filesystem.md as the correct behaviour.

# has been shown to break things: SPARK-14959
# there are no contract tests for these APIs; this shows up in HADOOP-15044.
# even if this is considered a wontfix, it should raise something like
{{PathIsDirectoryException}} rather than FNFE
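
For illustration, a small sketch of the expected contract (hypothetical path,
based on the base FileSystem behaviour described above):

{code}
// Sketch: per filesystem.md, a directory should yield an empty array.
FileSystem fs = FileSystem.get(conf);
FileStatus st = fs.getFileStatus(new Path("/some/dir"));  // a directory
BlockLocation[] locs = fs.getFileBlockLocations(st, 0, 1);
// Expected: locs.length == 0. HDFS's path-based overload instead throws
// FileNotFoundException for the same directory (the bug described here).
{code}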



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2017-11-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/43/

[Nov 15, 2017 6:33:48 PM] (jlowe) YARN-7361. Improve the docker container 
runtime documentation.
[Nov 16, 2017 12:45:06 AM] (xiao) HADOOP-15023. ValueQueue should also validate 
(int) (lowWatermark *




-1 overall


The following subsystems voted -1:
asflicense unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Unreaped Processes :

   hadoop-hdfs:17 
   bkjournal:7 
   hadoop-mapreduce-client-jobclient:9 
   hadoop-archive-logs:1 
   hadoop-archives:1 
   hadoop-distcp:5 
   hadoop-extras:1 
   hadoop-sls:1 
   hadoop-yarn-applications-distributedshell:1 
   hadoop-yarn-client:5 
   hadoop-yarn-server-tests:2 
   hadoop-yarn-server-timelineservice:1 

Failed junit tests :

   hadoop.net.TestDNS 
   hadoop.hdfs.TestAppendSnapshotTruncate 
   hadoop.hdfs.TestDFSAddressConfig 
   hadoop.mapred.TestSpecialCharactersInOutputPath 
   hadoop.tools.TestIntegration 
   hadoop.tools.TestDistCpViewFs 
   hadoop.tools.TestDistCpSystem 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.yarn.sls.nodemanager.TestNMSimulator 
   
hadoop.yarn.applications.distributedshell.TestDistributedShellWithNodeLabels 

Timed out junit tests :

   org.apache.hadoop.hdfs.TestReservedRawPaths 
   org.apache.hadoop.hdfs.TestAclsEndToEnd 
   org.apache.hadoop.hdfs.TestFileCreation 
   org.apache.hadoop.hdfs.TestDatanodeDeath 
   org.apache.hadoop.hdfs.TestDFSClientRetries 
   org.apache.hadoop.hdfs.TestFileAppend2 
   org.apache.hadoop.hdfs.TestFileCorruption 
   org.apache.hadoop.hdfs.TestFileCreationDelete 
   org.apache.hadoop.hdfs.TestSeekBug 
   org.apache.hadoop.hdfs.TestRestartDFS 
   org.apache.hadoop.hdfs.TestDFSClientSocketSize 
   org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead 
   org.apache.hadoop.hdfs.TestDFSRollback 
   org.apache.hadoop.hdfs.TestDFSClientExcludedNodes 
   org.apache.hadoop.hdfs.TestAbandonBlock 
   org.apache.hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperAsHASharedDir 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperEditLogStreams 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperSpeculativeRead 
   org.apache.hadoop.contrib.bkjournal.TestCurrentInprogress 
   org.apache.hadoop.mapred.TestClusterMapReduceTestCase 
   org.apache.hadoop.mapred.TestMRIntermediateDataEncryption 
   org.apache.hadoop.mapred.TestJobSysDirWithDFS 
   org.apache.hadoop.mapred.TestMRTimelineEventHandling 
   org.apache.hadoop.mapred.join.TestDatamerge 
   org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers 
   org.apache.hadoop.mapred.TestMiniMRClientCluster 
   org.apache.hadoop.mapred.TestReduceFetchFromPartialMem 
   org.apache.hadoop.mapred.TestMROpportunisticMaps 
   org.apache.hadoop.tools.TestHadoopArchiveLogsRunner 
   org.apache.hadoop.tools.TestHadoopArchives 
   org.apache.hadoop.tools.TestDistCpWithAcls 
   org.apache.hadoop.tools.TestDistCpSync 
   org.apache.hadoop.tools.TestDistCpWithXAttrs 
   org.apache.hadoop.tools.TestDistCpSyncReverseFromTarget 
   org.apache.hadoop.tools.TestDistCpSyncReverseFromSource 
   org.apache.hadoop.tools.TestCopyFiles 
   org.apache.hadoop.yarn.sls.TestSLSRunner 
   
org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell 
   org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy 
   org.apache.hadoop.yarn.client.cli.TestYarnCLI 
   org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA 
   org.apache.hadoop.yarn.client.api.impl.TestAMRMClient 
   org.apache.hadoop.yarn.client.api.impl.TestYarnClient 
   org.apache.hadoop.yarn.server.TestMiniYarnCluster 
   org.apache.hadoop.yarn.server.TestContainerManagerSecurity 
   
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServices
 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/43/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/43/artifact/out/diff-compile-javac-root.txt
  [324K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/43/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint: