[jira] [Commented] (HDFS-8914) Document HA support in the HDFS HdfsDesign.md

2015-11-22 Thread Ravindra Babu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021695#comment-15021695
 ] 

Ravindra Babu commented on HDFS-8914:
-

Hi,

I have seen one SUCCESS and multiple FAILURE notifications. 

Has the fix been committed successfully?

> Document HA support in the HDFS HdfsDesign.md
> -
>
> Key: HDFS-8914
> URL: https://issues.apache.org/jira/browse/HDFS-8914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
> Environment: Documentation page in live
>Reporter: Ravindra Babu
>Assignee: Lars Francke
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-8914.1.patch, HDFS-8914.2.patch
>
>
> Please refer to these two links and correct one of them.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> The NameNode machine is a single point of failure for an HDFS cluster. If the 
> NameNode machine fails, manual intervention is necessary. Currently, 
> automatic restart and failover of the NameNode software to another machine is 
> not supported.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
> The HDFS High Availability feature addresses the above problems by providing 
> the option of running two redundant NameNodes in the same cluster in an 
> Active/Passive configuration with a hot standby. This allows a fast failover 
> to a new NameNode in the case that a machine crashes, or a graceful 
> administrator-initiated failover for the purpose of planned maintenance.
> Please update the HdfsDesign article with the same facts to avoid confusion 
> in the reader's mind.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9435) TestBlockRecovery#testRBWReplicas is failing intermittently

2015-11-22 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-9435:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~rakeshr] for the 
contribution. Also thanks [~iwasakims] for the good advice.

> TestBlockRecovery#testRBWReplicas is failing intermittently
> ---
>
> Key: HDFS-9435
> URL: https://issues.apache.org/jira/browse/HDFS-9435
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Rakesh R
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-9435-00.patch, HDFS-9435-01.patch, 
> HDFS-9435-02.patch, HDFS-9435-03.patch, testRBWReplicas.log
>
>
> TestBlockRecovery#testRBWReplicas is failing in the [build 
> 13536|https://builds.apache.org/job/PreCommit-HDFS-Build/13536/testReport/org.apache.hadoop.hdfs.server.datanode/TestBlockRecovery/testRBWReplicas/].
>  It looks like a bug in the test due to a race condition.
> Note: Logs taken from the build are attached to this jira.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9435) TestBlockRecovery#testRBWReplicas is failing intermittently

2015-11-22 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021689#comment-15021689
 ] 

Walter Su commented on HDFS-9435:
-

+1

> TestBlockRecovery#testRBWReplicas is failing intermittently
> ---
>
> Key: HDFS-9435
> URL: https://issues.apache.org/jira/browse/HDFS-9435
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-9435-00.patch, HDFS-9435-01.patch, 
> HDFS-9435-02.patch, HDFS-9435-03.patch, testRBWReplicas.log
>
>
> TestBlockRecovery#testRBWReplicas is failing in the [build 
> 13536|https://builds.apache.org/job/PreCommit-HDFS-Build/13536/testReport/org.apache.hadoop.hdfs.server.datanode/TestBlockRecovery/testRBWReplicas/].
>  It looks like a bug in the test due to a race condition.
> Note: Logs taken from the build are attached to this jira.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9428) Fix intermittent failure of TestDNFencing.testQueueingWithAppend

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021687#comment-15021687
 ] 

Hudson commented on HDFS-9428:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8859 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8859/])
HDFS-9428. Fix intermittent failure of (waltersu4549: rev 
5aba093361dcf6bb642e533700f772b9a94154ad)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix intermittent failure of TestDNFencing.testQueueingWithAppend
> 
>
> Key: HDFS-9428
> URL: https://issues.apache.org/jira/browse/HDFS-9428
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9428.001.patch, HDFS-9428.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7988) Replace usage of ExactSizeInputStream with LimitInputStream.

2015-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021686#comment-15021686
 ] 

Hadoop QA commented on HDFS-7988:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 33s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 27s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_85. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
35s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 38s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12773778/HDFS-7988.002.patch |
| JIRA Issue | HDFS-7988 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux eac8d9a00812 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8228697 |
| findbugs | v3.0.0 |
| JDK v1.7.0_85  Test Results | 
https://build

[jira] [Updated] (HDFS-9428) Fix intermittent failure of TestDNFencing.testQueueingWithAppend

2015-11-22 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-9428:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~iwasakims] for the 
contribution.

> Fix intermittent failure of TestDNFencing.testQueueingWithAppend
> 
>
> Key: HDFS-9428
> URL: https://issues.apache.org/jira/browse/HDFS-9428
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9428.001.patch, HDFS-9428.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9356) Decommissioning node does not have Last Contact value in the UI

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021678#comment-15021678
 ] 

Hudson commented on HDFS-9356:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #701 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/701/])
HDFS-9356. Decommissioning node does not have Last Contact value in the 
(wheat9: rev 04c14b5dc45696951eddbb5f5c15db2ff0e3ce16)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Decommissioning node does not have Last Contact value in the UI
> ---
>
> Key: HDFS-9356
> URL: https://issues.apache.org/jira/browse/HDFS-9356
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9356.patch, decomm.png
>
>
> While a DN is in decommissioning state, the Last Contact value is empty in 
> the Datanode Information tab of the Namenode UI.
> Attaching a snapshot of the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9024) Deprecate the TotalFiles metric

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021680#comment-15021680
 ] 

Hudson commented on HDFS-9024:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #701 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/701/])
HDFS-9024. Deprecate the TotalFiles metric. Contributed by Akira (wheat9: rev 
822869785707b5665962ec0c699cd383dc767345)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java
* hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Deprecate the TotalFiles metric
> ---
>
> Key: HDFS-9024
> URL: https://issues.apache.org/jira/browse/HDFS-9024
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: metrics
> Fix For: 2.8.0
>
> Attachments: HDFS-9024.001.patch, HDFS-9024.002.patch
>
>
> There are two metrics (TotalFiles and FilesTotal) which are the same. In 
> HDFS-5165, we decided to remove TotalFiles but we need to deprecate the  
> metric before removing it. This issue is to deprecate the metric.
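
For illustration, deprecating a duplicated metric before removal typically 
looks like the sketch below; the accessor names are assumed for this example, 
and this is not necessarily the exact HDFS-9024 patch.
{code}
// Hypothetical sketch: keep the old accessor working but flag it for removal.
@Deprecated // TotalFiles duplicates FilesTotal; use FilesTotal instead.
public long getTotalFiles() {
  return getFilesTotal(); // delegate to the surviving metric
}
{code}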



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9153) Pretty-format the output for DFSIO

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021651#comment-15021651
 ] 

Hudson commented on HDFS-9153:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #629 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/629/])
HDFS-9153. Pretty-format the output for DFSIO. Contributed by Kai Zheng. 
(wheat9: rev 000e12f6fa114dfa45377df23acf552e66410838)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java


> Pretty-format the output for DFSIO
> --
>
> Key: HDFS-9153
> URL: https://issues.apache.org/jira/browse/HDFS-9153
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 2.8.0
>
> Attachments: HDFS-9153-v1.patch
>
>
> Referring to the following DFSIO output, I was surprised the test throughput 
> was only {{17}} MB/s, which doesn't make sense for a real cluster. Maybe it's 
> used for another purpose? For users, it may make more sense to report the 
> throughput as 1610 MB/s (1228800/763), calculated as *Total MBytes processed 
> / Test exec time*.
> {noformat}
> 15/09/28 11:42:23 INFO fs.TestDFSIO: - TestDFSIO - : write
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Date & time: Mon Sep 28 
> 11:42:23 CST 2015
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Number of files: 100
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Total MBytes processed: 1228800.0
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  Throughput mb/sec: 
> 17.457387239456878
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Average IO rate mb/sec: 17.57563018798828
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  IO rate std deviation: 
> 1.7076328985378455
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Test exec time sec: 762.697
> 15/09/28 11:42:23 INFO fs.TestDFSIO: 
> {noformat}
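
For reference, the aggregate-throughput arithmetic proposed above amounts to 
the following snippet; the variable names are illustrative, not from TestDFSIO.
{code}
double totalMBytes = 1228800.0;  // "Total MBytes processed" above
double execTimeSec = 762.697;    // "Test exec time sec" above
double aggregateMBps = totalMBytes / execTimeSec;  // ~1611 MB/s for the whole run
{code}
The 17 MB/s figure above appears to be closer to a per-task average, which is 
why it looks implausibly low for the cluster as a whole.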



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7897) Shutdown metrics when stopping JournalNode

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021654#comment-15021654
 ] 

Hudson commented on HDFS-7897:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #629 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/629/])
HDFS-7897. Shutdown metrics when stopping JournalNode. Contributed by (wheat9: 
rev a4bd54f9d776f39080b41913afa455f8c0f6e46d)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Shutdown metrics when stopping JournalNode
> --
>
> Key: HDFS-7897
> URL: https://issues.apache.org/jira/browse/HDFS-7897
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: zhouyingchao
>Assignee: zhouyingchao
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7897-001.patch
>
>
> In JournalNode.stop(), the metrics system is not shut down. The issue was 
> found while reading the code.
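
A minimal sketch of the kind of fix implied by the description, assuming the 
usual static metrics entry point; the actual HDFS-7897 patch may differ.
{code}
// Hypothetical sketch (uses org.apache.hadoop.metrics2.lib.DefaultMetricsSystem):
public void stop(int rc) {
  this.resultCode = rc;
  // ... existing teardown of RPC and HTTP servers omitted ...
  DefaultMetricsSystem.shutdown(); // the shutdown step the description says is missing
}
{code}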



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9402) Switch DataNode.LOG to use slf4j

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021650#comment-15021650
 ] 

Hudson commented on HDFS-9402:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #629 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/629/])
HDFS-9402. Switch DataNode.LOG to use slf4j. Contributed by Walter Su. (wheat9: 
rev 176ff5ce90f2cbcd8342016d0f5570337d2ff79f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-tools/hadoop-extras/src/test/java/org/apache/hadoop/tools/TestDistCh.java


> Switch DataNode.LOG to use slf4j
> 
>
> Key: HDFS-9402
> URL: https://issues.apache.org/jira/browse/HDFS-9402
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9402-branch-2.01.patch, HDFS-9402.01.patch
>
>
> Similar to HDFS-8971, HDFS-7712.
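
For context, the commons-logging-to-slf4j switch referenced here is 
essentially a declaration change of the following shape (a sketch, not the 
committed patch):
{code}
// Before (commons-logging):
//   public static final Log LOG = LogFactory.getLog(DataNode.class);
// After (org.slf4j.Logger / org.slf4j.LoggerFactory):
public static final Logger LOG = LoggerFactory.getLogger(DataNode.class);
// Call sites can then use parameterized logging, e.g.
//   LOG.info("Starting DataNode {}", id);
// instead of string concatenation.
{code}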



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6885) Fix wrong use of BytesWritable in FSEditLogOp#RenameOp

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021662#comment-15021662
 ] 

Hudson commented on HDFS-6885:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #629 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/629/])
HDFS-6885. Fix wrong use of BytesWritable in FSEditLogOp#RenameOp. (wheat9: rev 
bfbcfe73644db6c047d774c4a461da27915eef84)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix wrong use of BytesWritable in FSEditLogOp#RenameOp
> --
>
> Key: HDFS-6885
> URL: https://issues.apache.org/jira/browse/HDFS-6885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-6885.001.patch
>
>
> After readFields using BytesWritable, the data length should be 
> {{writable.getLength()}} instead of {{writable.getBytes().length}}, which is 
> the buffer length. 
> This causes the returned {{Rename[]}} to be longer than expected, and it may 
> include some incorrect values (currently they are Rename#NONE and have not 
> caused problems, but the code logic is incorrect). 
> {code}
> BytesWritable writable = new BytesWritable();
> writable.readFields(in);
> byte[] bytes = writable.getBytes();
> Rename[] options = new Rename[bytes.length];
> for (int i = 0; i < bytes.length; i++) {
>   options[i] = Rename.valueOf(bytes[i]);
> }
> {code}
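
A corrected sketch of that loop, sized by the valid length rather than the 
backing buffer; this mirrors the fix the description asks for, though the 
committed patch may differ in detail.
{code}
BytesWritable writable = new BytesWritable();
writable.readFields(in);
byte[] bytes = writable.getBytes();   // backing buffer; may be longer than the data
int length = writable.getLength();    // number of valid bytes actually read
Rename[] options = new Rename[length];
for (int i = 0; i < length; i++) {
  options[i] = Rename.valueOf(bytes[i]);
}
{code}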



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3302) Review and improve HDFS trash documentation

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021656#comment-15021656
 ] 

Hudson commented on HDFS-3302:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #629 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/629/])
HDFS-3302. Review and improve HDFS trash documentation. Contributed by (wheat9: 
rev 2326171ea84b9ccea9df9fef137d6041df540d36)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md


> Review and improve HDFS trash documentation
> ---
>
> Key: HDFS-3302
> URL: https://issues.apache.org/jira/browse/HDFS-3302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: docs
> Fix For: 2.8.0
>
> Attachments: HDFS-3302.patch
>
>
> Improve Trash documentation for users.
> (0.23 published release docs are missing the original HDFS docs, btw...)
> A set of FAQ-like questions can be found on HDFS-2740.
> I'll update the ticket shortly with the areas to cover in the docs, as 
> enabling trash by default (HDFS-2740) would be considered a wide behavior 
> change per its follow-ups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7796) Include X-editable for slick contenteditable fields in the web UI

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021649#comment-15021649
 ] 

Hudson commented on HDFS-7796:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #629 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/629/])
HDFS-7796. Include X-editable for slick contenteditable fields in the (wheat9: 
rev 38146a6cdbd3788d247f77dfc3248cd7f76d01f4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/img/clear.png
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/img/loading.gif
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/js/bootstrap-editable.min.js
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/css/bootstrap-editable.css


> Include X-editable for slick contenteditable fields in the web UI
> -
>
> Key: HDFS-7796
> URL: https://issues.apache.org/jira/browse/HDFS-7796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7796.01.patch
>
>
> This JIRA is for including X-editable (https://vitalets.github.io/x-editable/) 
> in the Hadoop UI. It is released under the MIT license, so it's fine. We need 
> it to make the owner / group / replication and possibly other fields in the 
> UI easily editable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8914) Document HA support in the HDFS HdfsDesign.md

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021659#comment-15021659
 ] 

Hudson commented on HDFS-8914:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #629 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/629/])
HDFS-8914. Document HA support in the HDFS HdfsDesign.md. Contributed by 
(wheat9: rev 0c7340f377f6663052be097ef58d60eee25f7334)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Document HA support in the HDFS HdfsDesign.md
> -
>
> Key: HDFS-8914
> URL: https://issues.apache.org/jira/browse/HDFS-8914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
> Environment: Documentation page in live
>Reporter: Ravindra Babu
>Assignee: Lars Francke
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-8914.1.patch, HDFS-8914.2.patch
>
>
> Please refer to these two links and correct one of them.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> The NameNode machine is a single point of failure for an HDFS cluster. If the 
> NameNode machine fails, manual intervention is necessary. Currently, 
> automatic restart and failover of the NameNode software to another machine is 
> not supported.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
> The HDFS High Availability feature addresses the above problems by providing 
> the option of running two redundant NameNodes in the same cluster in an 
> Active/Passive configuration with a hot standby. This allows a fast failover 
> to a new NameNode in the case that a machine crashes, or a graceful 
> administrator-initiated failover for the purpose of planned maintenance.
> Please update the HdfsDesign article with the same facts to avoid confusion 
> in the reader's mind.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9024) Deprecate the TotalFiles metric

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021646#comment-15021646
 ] 

Hudson commented on HDFS-9024:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8858 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8858/])
HDFS-9024. Deprecate the TotalFiles metric. Contributed by Akira (wheat9: rev 
822869785707b5665962ec0c699cd383dc767345)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java


> Deprecate the TotalFiles metric
> ---
>
> Key: HDFS-9024
> URL: https://issues.apache.org/jira/browse/HDFS-9024
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: metrics
> Fix For: 2.8.0
>
> Attachments: HDFS-9024.001.patch, HDFS-9024.002.patch
>
>
> There are two metrics (TotalFiles and FilesTotal) which are the same. In 
> HDFS-5165, we decided to remove TotalFiles but we need to deprecate the  
> metric before removing it. This issue is to deprecate the metric.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5165) FSNameSystem TotalFiles and FilesTotal metrics are the same

2015-11-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021634#comment-15021634
 ] 

Haohui Mai commented on HDFS-5165:
--

Hi [~ajisakaa], can you please remove the documentation for the deprecated 
metric?

> FSNameSystem TotalFiles and FilesTotal metrics are the same
> ---
>
> Key: HDFS-5165
> URL: https://issues.apache.org/jira/browse/HDFS-5165
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.1.0-beta
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: BB2015-05-TBR, metrics, newbie
> Attachments: HDFS-5165.2.patch, HDFS-5165.patch
>
>
> Both FSNameSystem TotalFiles and FilesTotal metrics mean total files/dirs in 
> the cluster. One of these metrics should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9024) Deprecate the TotalFiles metric

2015-11-22 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9024:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~ajisakaa] for the 
contribution.

> Deprecate the TotalFiles metric
> ---
>
> Key: HDFS-9024
> URL: https://issues.apache.org/jira/browse/HDFS-9024
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: metrics
> Fix For: 2.8.0
>
> Attachments: HDFS-9024.001.patch, HDFS-9024.002.patch
>
>
> There are two metrics (TotalFiles and FilesTotal) which are the same. In 
> HDFS-5165, we decided to remove TotalFiles but we need to deprecate the  
> metric before removing it. This issue is to deprecate the metric.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7988) Replace usage of ExactSizeInputStream with LimitInputStream.

2015-11-22 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7988:

Attachment: HDFS-7988.002.patch

rebased.

> Replace usage of ExactSizeInputStream with LimitInputStream.
> 
>
> Key: HDFS-7988
> URL: https://issues.apache.org/jira/browse/HDFS-7988
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chris Nauroth
>Assignee: Walter Su
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7988.001.patch, HDFS-7988.002.patch
>
>
> HDFS has a class named {{ExactSizeInputStream}} used in the protobuf 
> translation layer.  This class wraps another {{InputStream}}, but constraints 
> the readable bytes to a specified length.  The functionality is nearly 
> identical to {{LimitInputStream}} in Hadoop Common, with some differences in 
> semantics regarding premature EOF.  This issue proposes to eliminate 
> {{ExactSizeInputStream}} in favor of {{LimitInputStream}} to reduce the size 
> of the codebase.
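
As a hedged illustration of the proposed replacement (LimitInputStream lives 
in org.apache.hadoop.util; the protobuf type in the parse call is a made-up 
placeholder):
{code}
// Constrain reads to `size` bytes. Unlike ExactSizeInputStream, a
// LimitInputStream does not itself fail on premature EOF, so callers that
// require exactly `size` bytes must verify that themselves.
InputStream limited = new LimitInputStream(in, size);
SomeProto msg = SomeProto.parseFrom(limited);  // hypothetical protobuf parse
{code}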



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9356) Decommissioning node does not have Last Contact value in the UI

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021626#comment-15021626
 ] 

Hudson commented on HDFS-9356:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8857 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8857/])
HDFS-9356. Decommissioning node does not have Last Contact value in the 
(wheat9: rev 04c14b5dc45696951eddbb5f5c15db2ff0e3ce16)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html


> Decommissioning node does not have Last Contact value in the UI
> ---
>
> Key: HDFS-9356
> URL: https://issues.apache.org/jira/browse/HDFS-9356
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9356.patch, decomm.png
>
>
> While a DN is in decommissioning state, the Last Contact value is empty in 
> the Datanode Information tab of the Namenode UI.
> Attaching a snapshot of the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9024) Deprecate the TotalFiles metric

2015-11-22 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9024:
-
Summary: Deprecate the TotalFiles metric  (was: Deprecate TotalFiles metric)

> Deprecate the TotalFiles metric
> ---
>
> Key: HDFS-9024
> URL: https://issues.apache.org/jira/browse/HDFS-9024
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: metrics
> Attachments: HDFS-9024.001.patch, HDFS-9024.002.patch
>
>
> There are two metrics (TotalFiles and FilesTotal) which are the same. In 
> HDFS-5165, we decided to remove TotalFiles but we need to deprecate the  
> metric before removing it. This issue is to deprecate the metric.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9024) Deprecate TotalFiles metric

2015-11-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021622#comment-15021622
 ] 

Haohui Mai commented on HDFS-9024:
--

+1

> Deprecate TotalFiles metric
> ---
>
> Key: HDFS-9024
> URL: https://issues.apache.org/jira/browse/HDFS-9024
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: metrics
> Attachments: HDFS-9024.001.patch, HDFS-9024.002.patch
>
>
> There are two metrics (TotalFiles and FilesTotal) which are the same. In 
> HDFS-5165, we decided to remove TotalFiles but we need to deprecate the  
> metric before removing it. This issue is to deprecate the metric.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9121) Remove unnecessary "+" symbol from BlockManager log.

2015-11-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021621#comment-15021621
 ] 

Haohui Mai commented on HDFS-9121:
--

[~singar.ranga] can you please rebase? Thanks

> Remove unnecessary "+" symbol from BlockManager log.
> -
>
> Key: HDFS-9121
> URL: https://issues.apache.org/jira/browse/HDFS-9121
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Ranga Swamy
>Assignee: Ranga Swamy
>Priority: Minor
> Attachments: HDFS-9121.01.patch, HDFS-9121.patch
>
>
> Remove unnecessary "+" symbol from BlockManager log.
> {code}
> 2015-08-18 15:34:14,016 | INFO | IPC Server handler 12 on 25000 | BLOCK* 
> processOverReplicatedBlock: Postponing processing of over-replicated 
> blk_1075396202_1655682 since storage + 
> [DISK]DS-41c1b969-a3f9-48ff-8c76-6fea0152950c:NORMAL:160.149.0.113:25009datanode
>  160.149.0.113:25009 does not yet have up-to-date block information. | 
> BlockManager.java:2906
> {code}
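
The stray "+" appears to come from a concatenation operator that ended up 
inside the string literal; schematically (variable names assumed, not the 
exact BlockManager code):
{code}
// Buggy shape: the "+" sits inside the quoted text, so it is printed verbatim.
LOG.info("BLOCK* processOverReplicatedBlock: Postponing processing of"
    + " over-replicated " + block + " since storage + " + storage
    + "datanode " + dn + " does not yet have up-to-date block information.");

// Fixed shape: drop the literal "+" and restore the missing spaces.
LOG.info("BLOCK* processOverReplicatedBlock: Postponing processing of"
    + " over-replicated " + block + " since storage " + storage
    + " datanode " + dn + " does not yet have up-to-date block information.");
{code}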



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9356) Decommissioning node does not have Last Contact value in the UI

2015-11-22 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9356:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~surendrasingh] for the 
contribution.

> Decommissioning node does not have Last Contact value in the UI
> ---
>
> Key: HDFS-9356
> URL: https://issues.apache.org/jira/browse/HDFS-9356
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9356.patch, decomm.png
>
>
> While a DN is in decommissioning state, the Last Contact value is empty in 
> the Datanode Information tab of the Namenode UI.
> Attaching a snapshot of the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9356) Decommissioning node does not have Last Contact value in the UI

2015-11-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021620#comment-15021620
 ] 

Haohui Mai commented on HDFS-9356:
--

+1

> Decommissioning node does not have Last Contact value in the UI
> ---
>
> Key: HDFS-9356
> URL: https://issues.apache.org/jira/browse/HDFS-9356
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9356.patch, decomm.png
>
>
> While a DN is in decommissioning state, the Last Contact value is empty in 
> the Datanode Information tab of the Namenode UI.
> Attaching a snapshot of the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9356) Decommissioning node does not have Last Contact value in the UI

2015-11-22 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9356:
-
Summary: Decommissioning node does not have Last Contact value in the UI  
(was: Last Contact value is empty in Datanode Info tab while Decommissioning )

> Decommissioning node does not have Last Contact value in the UI
> ---
>
> Key: HDFS-9356
> URL: https://issues.apache.org/jira/browse/HDFS-9356
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9356.patch, decomm.png
>
>
> While a DN is in decommissioning state, the Last Contact value is empty in 
> the Datanode Information tab of the Namenode UI.
> Attaching a snapshot of the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9445) Deadlock in datanode

2015-11-22 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-9445:

Attachment: HDFS-9445.00.patch

I think the locks should be acquired in this order: bposLock --> FSDatasetLock. 
If FSDatasetLock is held, the thread shouldn't acquire bposLock any more.
Uploading an initial patch. If anyone is already working on this and has a 
better patch, feel free to upload yours.
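
Schematically, the proposed invariant looks like this (lock names follow the 
comment above; the code is an illustration, not the patch):
{code}
// Allowed: acquire in the order bposLock -> FSDatasetImpl monitor.
synchronized (bposLock) {
  synchronized (dataset) {
    // ... update replica / block-pool state ...
  }
}
// Forbidden: while holding the dataset monitor, do not block on bposLock;
// that is the cycle visible in the thread dump below.
{code}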

> Deadlock in datanode
> 
>
> Key: HDFS-9445
> URL: https://issues.apache.org/jira/browse/HDFS-9445
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Kihwal Lee
>Priority: Blocker
> Attachments: HDFS-9445.00.patch
>
>
> {noformat}
> Found one Java-level deadlock:
> =
> "DataXceiver for client DFSClient_attempt_xxx at /1.2.3.4:100 [Sending block 
> BP-x:blk_123_456]":
>   waiting to lock monitor 0x7f77d0731768 (object 0xd60d9930, a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl),
>   which is held by "Thread-565"
> "Thread-565":
>   waiting for ownable synchronizer 0xd55613c8, (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync),
>   which is held by "DataNode: heartbeating to my-nn:8020"
> "DataNode: heartbeating to my-nn:8020":
>   waiting to lock monitor 0x7f77d0731768 (object 0xd60d9930, a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl),
>   which is held by "Thread-565"
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7796) Include X-editable for slick contenteditable fields in the web UI

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021594#comment-15021594
 ] 

Hudson commented on HDFS-7796:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2567 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2567/])
HDFS-7796. Include X-editable for slick contenteditable fields in the (wheat9: 
rev 38146a6cdbd3788d247f77dfc3248cd7f76d01f4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/js/bootstrap-editable.min.js
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/img/clear.png
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/css/bootstrap-editable.css
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/img/loading.gif


> Include X-editable for slick contenteditable fields in the web UI
> -
>
> Key: HDFS-7796
> URL: https://issues.apache.org/jira/browse/HDFS-7796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7796.01.patch
>
>
> This JIRA is for including X-editable (https://vitalets.github.io/x-editable/) 
> in the Hadoop UI. It is released under the MIT license, so it's fine. We need 
> it to make the owner / group / replication and possibly other fields in the 
> UI easily editable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6885) Fix wrong use of BytesWritable in FSEditLogOp#RenameOp

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021599#comment-15021599
 ] 

Hudson commented on HDFS-6885:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2567 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2567/])
HDFS-6885. Fix wrong use of BytesWritable in FSEditLogOp#RenameOp. (wheat9: rev 
bfbcfe73644db6c047d774c4a461da27915eef84)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java


> Fix wrong use of BytesWritable in FSEditLogOp#RenameOp
> --
>
> Key: HDFS-6885
> URL: https://issues.apache.org/jira/browse/HDFS-6885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-6885.001.patch
>
>
> After readFields using BytesWritable, the data length should be 
> {{writable.getLength()}} instead of {{writable.getBytes().length}}, which is 
> the buffer length. 
> This causes the returned {{Rename[]}} to be longer than expected, and it may 
> include some incorrect values (currently they are Rename#NONE and have not 
> caused problems, but the code logic is incorrect). 
> {code}
> BytesWritable writable = new BytesWritable();
> writable.readFields(in);
> byte[] bytes = writable.getBytes();
> Rename[] options = new Rename[bytes.length];
> for (int i = 0; i < bytes.length; i++) {
>   options[i] = Rename.valueOf(bytes[i]);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9153) Pretty-format the output for DFSIO

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021595#comment-15021595
 ] 

Hudson commented on HDFS-9153:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2567 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2567/])
HDFS-9153. Pretty-format the output for DFSIO. Contributed by Kai Zheng. 
(wheat9: rev 000e12f6fa114dfa45377df23acf552e66410838)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Pretty-format the output for DFSIO
> --
>
> Key: HDFS-9153
> URL: https://issues.apache.org/jira/browse/HDFS-9153
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 2.8.0
>
> Attachments: HDFS-9153-v1.patch
>
>
> Referring to the following DFSIO output, I was surprised the test throughput 
> was only {{17}} MB/s, which doesn't make sense for a real cluster. Maybe it's 
> used for another purpose? For users, it may make more sense to report the 
> throughput as 1610 MB/s (1228800/763), calculated as *Total MBytes processed 
> / Test exec time*.
> {noformat}
> 15/09/28 11:42:23 INFO fs.TestDFSIO: - TestDFSIO - : write
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Date & time: Mon Sep 28 
> 11:42:23 CST 2015
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Number of files: 100
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Total MBytes processed: 1228800.0
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  Throughput mb/sec: 
> 17.457387239456878
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Average IO rate mb/sec: 17.57563018798828
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  IO rate std deviation: 
> 1.7076328985378455
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Test exec time sec: 762.697
> 15/09/28 11:42:23 INFO fs.TestDFSIO: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9402) Switch DataNode.LOG to use slf4j

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021597#comment-15021597
 ] 

Hudson commented on HDFS-9402:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2567 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2567/])
HDFS-9402. Switch DataNode.LOG to use slf4j. Contributed by Walter Su. (wheat9: 
rev 176ff5ce90f2cbcd8342016d0f5570337d2ff79f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* 
hadoop-tools/hadoop-extras/src/test/java/org/apache/hadoop/tools/TestDistCh.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java


> Switch DataNode.LOG to use slf4j
> 
>
> Key: HDFS-9402
> URL: https://issues.apache.org/jira/browse/HDFS-9402
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9402-branch-2.01.patch, HDFS-9402.01.patch
>
>
> Similar to HDFS-8971, HDFS-7712.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8914) Document HA support in the HDFS HdfsDesign.md

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021596#comment-15021596
 ] 

Hudson commented on HDFS-8914:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2567 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2567/])
HDFS-8914. Document HA support in the HDFS HdfsDesign.md. Contributed by 
(wheat9: rev 0c7340f377f6663052be097ef58d60eee25f7334)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Document HA support in the HDFS HdfsDesign.md
> -
>
> Key: HDFS-8914
> URL: https://issues.apache.org/jira/browse/HDFS-8914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
> Environment: Documentation page in live
>Reporter: Ravindra Babu
>Assignee: Lars Francke
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-8914.1.patch, HDFS-8914.2.patch
>
>
> Please refer to these two links and correct one of them.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> The NameNode machine is a single point of failure for an HDFS cluster. If the 
> NameNode machine fails, manual intervention is necessary. Currently, 
> automatic restart and failover of the NameNode software to another machine is 
> not supported.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
> The HDFS High Availability feature addresses the above problems by providing 
> the option of running two redundant NameNodes in the same cluster in an 
> Active/Passive configuration with a hot standby. This allows a fast failover 
> to a new NameNode in the case that a machine crashes, or a graceful 
> administrator-initiated failover for the purpose of planned maintenance.
> Please update the HdfsDesign article with the same facts to avoid confusion 
> in the reader's mind.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3302) Review and improve HDFS trash documentation

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021600#comment-15021600
 ] 

Hudson commented on HDFS-3302:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2567 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2567/])
HDFS-3302. Review and improve HDFS trash documentation. Contributed by (wheat9: 
rev 2326171ea84b9ccea9df9fef137d6041df540d36)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md


> Review and improve HDFS trash documentation
> ---
>
> Key: HDFS-3302
> URL: https://issues.apache.org/jira/browse/HDFS-3302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: docs
> Fix For: 2.8.0
>
> Attachments: HDFS-3302.patch
>
>
> Improve Trash documentation for users.
> (0.23 published release docs are missing the original HDFS docs, btw...)
> A set of FAQ-like questions can be found on HDFS-2740.
> I'll update the ticket shortly with the areas to cover in the docs, as 
> enabling trash by default (HDFS-2740) would be considered a wide behavior 
> change per its follow-ups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9314) Improve BlockPlacementPolicyDefault's picking of excess replicas

2015-11-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021569#comment-15021569
 ] 

Xiao Chen commented on HDFS-9314:
-

Sorry, typo above: attached is patch 6, not 7.

> Improve BlockPlacementPolicyDefault's picking of excess replicas
> 
>
> Key: HDFS-9314
> URL: https://issues.apache.org/jira/browse/HDFS-9314
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Xiao Chen
> Attachments: HDFS-9314.001.patch, HDFS-9314.002.patch, 
> HDFS-9314.003.patch, HDFS-9314.004.patch, HDFS-9314.005.patch, 
> HDFS-9314.006.patch
>
>
> The test case used in HDFS-9313 identified a NullPointerException as well as 
> a limitation in excess replica picking. If the current replicas are on 
> {SSD(rack r1), DISK(rack 2), DISK(rack 3), DISK(rack 3)} and the storage 
> policy changes to HOT_STORAGE_POLICY_ID, BlockPlacementPolicyDefault won't 
> be able to delete the SSD replica.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9314) Improve BlockPlacementPolicyDefault's picking of excess replicas

2015-11-22 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9314:

Attachment: HDFS-9314.006.patch

> Improve BlockPlacementPolicyDefault's picking of excess replicas
> 
>
> Key: HDFS-9314
> URL: https://issues.apache.org/jira/browse/HDFS-9314
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Xiao Chen
> Attachments: HDFS-9314.001.patch, HDFS-9314.002.patch, 
> HDFS-9314.003.patch, HDFS-9314.004.patch, HDFS-9314.005.patch, 
> HDFS-9314.006.patch
>
>
> The test case used in HDFS-9313 identified a NullPointerException as well as 
> a limitation in excess replica picking. If the current replicas are on 
> {SSD(rack r1), DISK(rack 2), DISK(rack 3), DISK(rack 3)} and the storage 
> policy changes to HOT_STORAGE_POLICY_ID, BlockPlacementPolicyDefault won't 
> be able to delete the SSD replica.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9314) Improve BlockPlacementPolicyDefault's picking of excess replicas

2015-11-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021568#comment-15021568
 ] 

Xiao Chen commented on HDFS-9314:
-

Thanks Walter.
From patch 3, the implementation is no longer a fallback strategy, but a 
strategy that guarantees the number of remaining racks doesn't drop below 2. 
See [comments 
above|https://issues.apache.org/jira/browse/HDFS-9314?focusedCommentId=15012152&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15012152] 
between Ming and me for details about this decision.
So after the changes, the default policy is:
{code}
  /* If only 1 rack, pick all. If 2 racks, pick all that have more than
   * 1 replicas on the same rack; if no such replicas, pick all.
   * If 3 or more racks, pick all.
   */
{code}
That said, currently the node-group policy favors {{first}} with node-group 
specific logic as long as {{first}} is not empty. Then, when choosing between 
{{moreThanOne}} and {{exactlyOne}}, we could apply the default logic here, but 
pass in {{nodeGroupMap}} instead of {{rackMap}}. I'm not sure this is 
acceptable from a requirements perspective, but it would be more consistent 
logically. Does that make sense? Also asking [~mingma] for advice.

Attached patch 7 implements this idea. FYI - the only difference between patch 
6 and 7 is the following:
{code}
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java
@@ -367,7 +367,7 @@ private int 
addDependentNodesToExcludedNodes(DatanodeDescriptor chosenNode,
-return moreThanOne.isEmpty()? exactlyOne : moreThanOne;
+return super.pickupReplicaSet(moreThanOne, exactlyOne, nodeGroupMap);
   }
{code}
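
For readability, here is an illustrative expansion of the rule quoted in the 
comment block above; the signature follows the diff, but the body is a sketch 
rather than the committed code.
{code}
// moreThanOne: replicas on racks holding more than one replica of the block.
// exactlyOne:  replicas that are alone on their rack.
protected Collection<DatanodeStorageInfo> pickupReplicaSet(
    Collection<DatanodeStorageInfo> moreThanOne,
    Collection<DatanodeStorageInfo> exactlyOne,
    Map<String, List<DatanodeStorageInfo>> rackMap) {
  if (rackMap.size() == 2 && !moreThanOne.isEmpty()) {
    // Two racks: deleting from a doubled-up rack keeps both racks covered.
    return moreThanOne;
  }
  // One rack, three or more racks, or two racks with no doubled-up rack:
  // every replica is a candidate ("pick all").
  List<DatanodeStorageInfo> all = new ArrayList<>(moreThanOne);
  all.addAll(exactlyOne);
  return all;
}
{code}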

> Improve BlockPlacementPolicyDefault's picking of excess replicas
> 
>
> Key: HDFS-9314
> URL: https://issues.apache.org/jira/browse/HDFS-9314
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Xiao Chen
> Attachments: HDFS-9314.001.patch, HDFS-9314.002.patch, 
> HDFS-9314.003.patch, HDFS-9314.004.patch, HDFS-9314.005.patch
>
>
> The test case used in HDFS-9313 identified a NullPointerException as well as 
> a limitation in excess replica picking. If the current replicas are on 
> {SSD(rack r1), DISK(rack 2), DISK(rack 3), DISK(rack 3)} and the storage 
> policy changes to HOT_STORAGE_POLICY_ID, BlockPlacementPolicyDefault won't 
> be able to delete the SSD replica.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7897) Shutdown metrics when stopping JournalNode

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021542#comment-15021542
 ] 

Hudson commented on HDFS-7897:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1435 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1435/])
HDFS-7897. Shutdown metrics when stopping JournalNode. Contributed by (wheat9: 
rev a4bd54f9d776f39080b41913afa455f8c0f6e46d)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Shutdown metrics when stopping JournalNode
> --
>
> Key: HDFS-7897
> URL: https://issues.apache.org/jira/browse/HDFS-7897
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: zhouyingchao
>Assignee: zhouyingchao
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7897-001.patch
>
>
> In JournalNode.stop(), the metrics system is never shut down. The issue 
> was found while reading the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9446) tSize of libhdfs in hadoop-2.7.1 is still int32_t

2015-11-22 Thread Glen Cao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Glen Cao updated HDFS-9446:
---
Description: 
Issue HDFS-466 (https://issues.apache.org/jira/browse/HDFS-466) says that what 
I mentioned in the title is fixed. However, in the source 
(hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h) of 
hadoop-2.7.1, tSize is still typedef-ed as int32_t, and I don't find any 
compilation option to change that.

In hdfs.h:
75 typedef int32_t   tSize; /// size of data for read/write io ops

> tSize of libhdfs in hadoop-2.7.1 is still int32_t
> -
>
> Key: HDFS-9446
> URL: https://issues.apache.org/jira/browse/HDFS-9446
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Glen Cao
>
> Issue HDFS-466 (https://issues.apache.org/jira/browse/HDFS-466) says that what 
> I mentioned in the title is fixed. However, in the source 
> (hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h) of 
> hadoop-2.7.1, tSize is still typedef-ed as int32_t, and I don't find any 
> compilation option to change that.
> In hdfs.h:
> 75 typedef int32_t   tSize; /// size of data for read/write io ops



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9429) Tests in TestDFSAdminWithHA intermittently fail with EOFException

2015-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021525#comment-15021525
 ] 

Hadoop QA commented on HDFS-9429:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 17s 
{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 13s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 44s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_85. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 163m 33s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.TestDatanodeDeath |
|   | hadoop.hdfs.qjournal.TestNNWithQJM |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.TestReplication |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
|   | hadoop.h

[jira] [Created] (HDFS-9446) tSize of libhdfs in hadoop-2.7.1 is still int32_t

2015-11-22 Thread Glen Cao (JIRA)
Glen Cao created HDFS-9446:
--

 Summary: tSize of libhdfs in hadoop-2.7.1 is still int32_t
 Key: HDFS-9446
 URL: https://issues.apache.org/jira/browse/HDFS-9446
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Glen Cao






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7897) Shutdown metrics when stopping JournalNode

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021507#comment-15021507
 ] 

Hudson commented on HDFS-7897:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2640 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2640/])
HDFS-7897. Shutdown metrics when stopping JournalNode. Contributed by (wheat9: 
rev a4bd54f9d776f39080b41913afa455f8c0f6e46d)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNode.java


> Shutdown metrics when stopping JournalNode
> --
>
> Key: HDFS-7897
> URL: https://issues.apache.org/jira/browse/HDFS-7897
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: zhouyingchao
>Assignee: zhouyingchao
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7897-001.patch
>
>
> In JournalNode.stop(), the metrics system is never shut down. The issue 
> was found while reading the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6885) Fix wrong use of BytesWritable in FSEditLogOp#RenameOp

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021493#comment-15021493
 ] 

Hudson commented on HDFS-6885:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1434 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1434/])
HDFS-6885. Fix wrong use of BytesWritable in FSEditLogOp#RenameOp. (wheat9: rev 
bfbcfe73644db6c047d774c4a461da27915eef84)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java


> Fix wrong use of BytesWritable in FSEditLogOp#RenameOp
> --
>
> Key: HDFS-6885
> URL: https://issues.apache.org/jira/browse/HDFS-6885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-6885.001.patch
>
>
> After readFields using BytesWritable, the data length should be 
> {{writable.getLength()}}, not {{writable.getBytes().length}}, which is the 
> buffer length. 
> This causes the returned {{Rename[]}} to be longer than expected, and it may 
> include incorrect values (currently they are Rename#NONE and have not caused 
> problems, but the code logic is incorrect). 
> {code}
> BytesWritable writable = new BytesWritable();
> writable.readFields(in);
> byte[] bytes = writable.getBytes();
> Rename[] options = new Rename[bytes.length];
> for (int i = 0; i < bytes.length; i++) {
>   options[i] = Rename.valueOf(bytes[i]);
> }
> {code}
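
For reference, a corrected sketch along the lines the description suggests 
({{in}} and {{Rename}} come from the surrounding code; this is not the 
committed patch verbatim):
{code}
BytesWritable writable = new BytesWritable();
writable.readFields(in);
byte[] bytes = writable.getBytes();
int length = writable.getLength();  // actual data length, not buffer length
Rename[] options = new Rename[length];
for (int i = 0; i < length; i++) {
  options[i] = Rename.valueOf(bytes[i]);
}
{code}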



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9443) Disabling HDFS client socket cache causes logging message printed to console for CLI commands.

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021472#comment-15021472
 ] 

Hudson commented on HDFS-9443:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #628 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/628/])
HDFS-9443. Disabling HDFS client socket cache causes logging message (wheat9: 
rev 6039059c37626d3d1d231986440623a593e2726b)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/PeerCache.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Disabling HDFS client socket cache causes logging message printed to console 
> for CLI commands.
> --
>
> Key: HDFS-9443
> URL: https://issues.apache.org/jira/browse/HDFS-9443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-9443.001.patch
>
>
> The HDFS client's socket cache can be disabled by setting 
> {{dfs.client.socketcache.capacity}} to {{0}}.  When this is done, the 
> {{PeerCache}} class logs an info-level message stating that the cache is 
> disabled.  This message is getting printed to the console for CLI commands, 
> which disrupts CLI output.  This issue proposes to downgrade to debug-level 
> logging for this message.
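
A minimal sketch of the proposed downgrade (hypothetical class and message; 
the actual PeerCache code may differ):
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class PeerCacheSketch {
  private static final Logger LOG = LoggerFactory.getLogger(PeerCacheSketch.class);

  PeerCacheSketch(int capacity) {
    if (capacity <= 0) {
      // Downgraded from LOG.info to LOG.debug so CLI output stays clean
      // when dfs.client.socketcache.capacity is set to 0.
      LOG.debug("SocketCache disabled (capacity = {}).", capacity);
    }
  }
}
{code}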



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7897) Shutdown metrics when stopping JournalNode

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021468#comment-15021468
 ] 

Hudson commented on HDFS-7897:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #710 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/710/])
HDFS-7897. Shutdown metrics when stopping JournalNode. Contributed by (wheat9: 
rev a4bd54f9d776f39080b41913afa455f8c0f6e46d)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNode.java


> Shutdown metrics when stopping JournalNode
> --
>
> Key: HDFS-7897
> URL: https://issues.apache.org/jira/browse/HDFS-7897
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: zhouyingchao
>Assignee: zhouyingchao
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7897-001.patch
>
>
> In JournalNode.stop(), the metrics system is never shut down. The issue 
> was found while reading the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7897) Shutdown metrics when stopping JournalNode

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021462#comment-15021462
 ] 

Hudson commented on HDFS-7897:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #699 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/699/])
HDFS-7897. Shutdown metrics when stopping JournalNode. Contributed by (wheat9: 
rev a4bd54f9d776f39080b41913afa455f8c0f6e46d)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Shutdown metrics when stopping JournalNode
> --
>
> Key: HDFS-7897
> URL: https://issues.apache.org/jira/browse/HDFS-7897
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: zhouyingchao
>Assignee: zhouyingchao
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7897-001.patch
>
>
> In JournalNode.stop(), the metrics system is never shut down. The issue 
> was found while reading the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7592) A bug in BlocksMap that cause NameNode memory leak.

2015-11-22 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7592:
-
Resolution: Cannot Reproduce
Status: Resolved  (was: Patch Available)

This is no longer an issue, as BlockUnderConstruction has become a feature 
class after the merge of the EC branch.

> A bug in BlocksMap that  cause NameNode  memory leak.
> -
>
> Key: HDFS-7592
> URL: https://issues.apache.org/jira/browse/HDFS-7592
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.21.0
> Environment: HDFS-0.21.0
>Reporter: JichengSong
>Assignee: JichengSong
>  Labels: BB2015-05-TBR, BlocksMap, leak, memory
> Attachments: HDFS-7592.patch
>
>
> In our HDFS production environment, the NameNode goes into frequent full GC 
> after running for 2 months, and we have to restart it manually.
> We dumped the NameNode's heap for object statistics.
> Before restarting NameNode:
> num #instances #bytes class name
> --
>     1: 59262275 3613989480 [Ljava.lang.Object;
>     ...
>     10: 8549361 615553992 
> org.apache.hadoop.hdfs.server.namenode.BlockInfoUnderConstruction
>     11: 5941511 427788792 
> org.apache.hadoop.hdfs.server.namenode.INodeFileUnderConstruction
> After restarting NameNode:
> num #instances #bytes class name
> --
>      1: 44188391 2934099616 [Ljava.lang.Object;
>   ...
>     23: 721763 51966936 
> org.apache.hadoop.hdfs.server.namenode.BlockInfoUnderConstruction
>     24: 620028 44642016 
> org.apache.hadoop.hdfs.server.namenode.INodeFileUnderConstruction
> We found that the number of BlockInfoUnderConstruction objects was abnormally 
> large before restarting the NameNode.
> As we know, BlockInfoUnderConstruction keeps block state while a file is 
> being written, but the write pressure on our cluster is far below a million 
> writes/sec. We think there is a memory leak in the NameNode.
> We fixed the bug with the following patch.
> diff --git 
> a/hdfs/src/java/org/apache/hadoop/hdfs/server/namenode/BlocksMap.java 
> b/hdfs/src/java/org/apache/hadoop/hdfs/server/namenode/BlocksMap.java
> index 7a40522..857d340 100644
> --- a/hdfs/src/java/org/apache/hadoop/hdfs/server/namenode/BlocksMap.java
> +++ b/hdfs/src/java/org/apache/hadoop/hdfs/server/namenode/BlocksMap.java
> @@ -205,6 +205,8 @@ class BlocksMap {
>DatanodeDescriptor dn = currentBlock.getDatanode(idx);
>dn.replaceBlock(currentBlock, newBlock);
>  }
> +// change to fix bug about memory leak of NameNode
> +map.remove(newBlock);
>  // replace block in the map itself
>  map.put(newBlock, newBlock);
>  return newBlock;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7897) Shutdown metrics when stopping JournalNode

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021452#comment-15021452
 ] 

Hudson commented on HDFS-7897:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8854 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8854/])
HDFS-7897. Shutdown metrics when stopping JournalNode. Contributed by (wheat9: 
rev a4bd54f9d776f39080b41913afa455f8c0f6e46d)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNode.java


> Shutdown metrics when stopping JournalNode
> --
>
> Key: HDFS-7897
> URL: https://issues.apache.org/jira/browse/HDFS-7897
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: zhouyingchao
>Assignee: zhouyingchao
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7897-001.patch
>
>
> In JournalNode.stop(), the metrics system is never shut down. The issue 
> was found while reading the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7988) Replace usage of ExactSizeInputStream with LimitInputStream.

2015-11-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021445#comment-15021445
 ] 

Haohui Mai commented on HDFS-7988:
--

Hi [~walter.k.su] can you please rebase the patch? Thanks.

> Replace usage of ExactSizeInputStream with LimitInputStream.
> 
>
> Key: HDFS-7988
> URL: https://issues.apache.org/jira/browse/HDFS-7988
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chris Nauroth
>Assignee: Walter Su
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7988.001.patch
>
>
> HDFS has a class named {{ExactSizeInputStream}} used in the protobuf 
> translation layer.  This class wraps another {{InputStream}}, but constraints 
> the readable bytes to a specified length.  The functionality is nearly 
> identical to {{LimitInputStream}} in Hadoop Common, with some differences in 
> semantics regarding premature EOF.  This issue proposes to eliminate 
> {{ExactSizeInputStream}} in favor of {{LimitInputStream}} to reduce the size 
> of the codebase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6885) Fix wrong use of BytesWritable in FSEditLogOp#RenameOp

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021440#comment-15021440
 ] 

Hudson commented on HDFS-6885:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #709 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/709/])
HDFS-6885. Fix wrong use of BytesWritable in FSEditLogOp#RenameOp. (wheat9: rev 
bfbcfe73644db6c047d774c4a461da27915eef84)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix wrong use of BytesWritable in FSEditLogOp#RenameOp
> --
>
> Key: HDFS-6885
> URL: https://issues.apache.org/jira/browse/HDFS-6885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-6885.001.patch
>
>
> After readField using BytesWritable, the data length should be 
> {{writable.getLength()}}, instead of {{writable.getBytes().length}} which is 
> the buffer length. 
> This will cause returned {{Rename[]}} is longer than expected and may include 
> some incorrect values (Currently they are Rename#NONE, and have not caused 
> problem but code logic is incorrect). 
> {code}
> BytesWritable writable = new BytesWritable();
> writable.readFields(in);
> byte[] bytes = writable.getBytes();
> Rename[] options = new Rename[bytes.length];
> for (int i = 0; i < bytes.length; i++) {
>   options[i] = Rename.valueOf(bytes[i]);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7897) Shutdown metrics when stopping JournalNode

2015-11-22 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7897:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~sinago] for the 
contribution.
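
For readers following along, a minimal sketch of the kind of change involved 
(assuming the fix calls {{DefaultMetricsSystem.shutdown()}}; see rev a4bd54f 
for the committed code):
{code}
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

class JournalNodeStopSketch {
  // Sketch: JournalNode.stop() should release metrics resources too.
  void stop(int resultCode) {
    // ... stop the RPC server, HTTP server, and journals (elided) ...
    DefaultMetricsSystem.shutdown(); // the call this issue reports as missing
  }
}
{code}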

> Shutdown metrics when stopping JournalNode
> --
>
> Key: HDFS-7897
> URL: https://issues.apache.org/jira/browse/HDFS-7897
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: zhouyingchao
>Assignee: zhouyingchao
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7897-001.patch
>
>
> In JournalNode.stop(), the metrics system is never shut down. The issue 
> was found while reading the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7897) Shutdown metrics when stopping JournalNode

2015-11-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021435#comment-15021435
 ] 

Haohui Mai commented on HDFS-7897:
--

+1. Ran the tests locally and they passed. Committing.

> Shutdown metrics when stopping JournalNode
> --
>
> Key: HDFS-7897
> URL: https://issues.apache.org/jira/browse/HDFS-7897
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: zhouyingchao
>Assignee: zhouyingchao
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7897-001.patch
>
>
> In JournalNode.stop(), the metrics system is never shut down. The issue 
> was found while reading the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5263) Delegation token is not created generateNodeDataHeader method of NamenodeJspHelper$NodeListJsp

2015-11-22 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5263:
-
Resolution: Cannot Reproduce
Status: Resolved  (was: Patch Available)

This is no longer an issue, as the JSP-based UI was removed in 2.7.

> Delegation token is not created generateNodeDataHeader method of 
> NamenodeJspHelper$NodeListJsp
> --
>
> Key: HDFS-5263
> URL: https://issues.apache.org/jira/browse/HDFS-5263
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, webhdfs
>Reporter: Vasu Mariyala
>  Labels: BB2015-05-TBR
> Attachments: HDFS-5263-rev1.patch, HDFS-5263.patch
>
>
> When Kerberos authentication is enabled, we are unable to browse to the data 
> nodes using (name node web page --> Live Nodes --> select any of the data 
> nodes). The reason is that the delegation token is not provided as part of 
> the URL in the generateNodeDataHeader method of NodeListJsp:
> {code}
>   String url = HttpConfig.getSchemePrefix() + d.getHostName() + ":"
>   + d.getInfoPort()
>   + "/browseDirectory.jsp?namenodeInfoPort=" + nnHttpPort + "&dir="
>   + URLEncoder.encode("/", "UTF-8")
>   + JspHelper.getUrlParam(JspHelper.NAMENODE_ADDRESS, nnaddr);
> {code}
> But browsing the file system using the name node web page --> Browse the file 
> system is working fine, as the redirectToRandomDataNode 
> method of NamenodeJspHelper creates the delegation token:
> {code}
> redirectLocation = HttpConfig.getSchemePrefix() + fqdn + ":" + 
> redirectPort
> + "/browseDirectory.jsp?namenodeInfoPort="
> + nn.getHttpAddress().getPort() + "&dir=/"
> + (tokenString == null ? "" :
>JspHelper.getDelegationTokenUrlParam(tokenString))
> + JspHelper.getUrlParam(JspHelper.NAMENODE_ADDRESS, addr);
> {code}
> I will work on providing a patch for this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7764) DirectoryScanner shouldn't abort the scan if one directory had an error

2015-11-22 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-7764:
---
Target Version/s: 2.8.0  (was: 3.0.0)

> DirectoryScanner shouldn't abort the scan if one directory had an error
> ---
>
> Key: HDFS-7764
> URL: https://issues.apache.org/jira/browse/HDFS-7764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.0
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-7764-01.patch, HDFS-7764.patch
>
>
> If there is an exception while preparing the ScanInfo for the blocks in a 
> directory, DirectoryScanner immediately throws the exception and aborts the 
> current scan cycle. The idea of this jira is to discuss and improve the 
> exception handling mechanism.
> DirectoryScanner.java
> {code}
> for (Entry<Integer, Future<ScanInfoPerBlockPool>> report :
>     compilersInProgress.entrySet()) {
>   try {
>     dirReports[report.getKey()] = report.getValue().get();
>   } catch (Exception ex) {
>     LOG.error("Error compiling report", ex);
>     // Propagate ex to DataBlockScanner to deal with
>     throw new RuntimeException(ex);
>   }
> }
> {code}
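
One possible direction, sketched only for discussion (hypothetical handling 
built on the snippet above; deciding the right behavior is exactly the point 
of this jira):
{code}
for (Entry<Integer, Future<ScanInfoPerBlockPool>> report :
    compilersInProgress.entrySet()) {
  try {
    dirReports[report.getKey()] = report.getValue().get();
  } catch (Exception ex) {
    // Log and skip the failing directory instead of aborting the whole scan.
    LOG.error("Error compiling report for volume " + report.getKey()
        + "; skipping it for this scan cycle", ex);
    dirReports[report.getKey()] = null;
  }
}
{code}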



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6885) Fix wrong use of BytesWritable in FSEditLogOp#RenameOp

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021416#comment-15021416
 ] 

Hudson commented on HDFS-6885:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #698 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/698/])
HDFS-6885. Fix wrong use of BytesWritable in FSEditLogOp#RenameOp. (wheat9: rev 
bfbcfe73644db6c047d774c4a461da27915eef84)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java


> Fix wrong use of BytesWritable in FSEditLogOp#RenameOp
> --
>
> Key: HDFS-6885
> URL: https://issues.apache.org/jira/browse/HDFS-6885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-6885.001.patch
>
>
> After readFields using BytesWritable, the data length should be 
> {{writable.getLength()}}, not {{writable.getBytes().length}}, which is the 
> buffer length. 
> This causes the returned {{Rename[]}} to be longer than expected, and it may 
> include incorrect values (currently they are Rename#NONE and have not caused 
> problems, but the code logic is incorrect). 
> {code}
> BytesWritable writable = new BytesWritable();
> writable.readFields(in);
> byte[] bytes = writable.getBytes();
> Rename[] options = new Rename[bytes.length];
> for (int i = 0; i < bytes.length; i++) {
>   options[i] = Rename.valueOf(bytes[i]);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6980) TestWebHdfsFileSystemContract fails in trunk

2015-11-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021411#comment-15021411
 ] 

Haohui Mai commented on HDFS-6980:
--

[~ozawa], can you please rebase your patch and use Java 7's try-with-resources 
statement? Thanks!
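
For reference, a minimal sketch of the requested pattern (hypothetical test 
helper, not the actual test code):
{code}
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class TryWithResourcesSketch {
  // The stream is closed even if an assertion throws, so repeated test runs
  // no longer exhaust file descriptors ("too many open files").
  static void readFully(FileSystem fs, Path p, byte[] buf) throws Exception {
    try (FSDataInputStream in = fs.open(p)) {
      in.readFully(0, buf);
    }
  }
}
{code}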

> TestWebHdfsFileSystemContract fails in trunk
> 
>
> Key: HDFS-6980
> URL: https://issues.apache.org/jira/browse/HDFS-6980
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Akira AJISAKA
>Assignee: Tsuyoshi Ozawa
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6980.1-2.patch, HDFS-6980.1.patch
>
>
> Many tests in TestWebHdfsFileSystemContract fail with a "too many open files" 
> error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9153) Pretty-format the output for DFSIO

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021388#comment-15021388
 ] 

Hudson commented on HDFS-9153:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1433 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1433/])
HDFS-9153. Pretty-format the output for DFSIO. Contributed by Kai Zheng. 
(wheat9: rev 000e12f6fa114dfa45377df23acf552e66410838)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java


> Pretty-format the output for DFSIO
> --
>
> Key: HDFS-9153
> URL: https://issues.apache.org/jira/browse/HDFS-9153
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 2.8.0
>
> Attachments: HDFS-9153-v1.patch
>
>
> Referring to the following DFSIO output: I was surprised the test throughput 
> was only {{17}} MB/s, which doesn't make sense for a real cluster. Maybe it's 
> used for another purpose? For users, it may make more sense to report the 
> throughput as 1610 MB/s (1228800/763), calculated as 
> *Total MBytes processed / Test exec time*.
> {noformat}
> 15/09/28 11:42:23 INFO fs.TestDFSIO: - TestDFSIO - : write
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Date & time: Mon Sep 28 
> 11:42:23 CST 2015
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Number of files: 100
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Total MBytes processed: 1228800.0
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  Throughput mb/sec: 
> 17.457387239456878
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Average IO rate mb/sec: 17.57563018798828
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  IO rate std deviation: 
> 1.7076328985378455
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Test exec time sec: 762.697
> 15/09/28 11:42:23 INFO fs.TestDFSIO: 
> {noformat}
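
To make the arithmetic explicit, a minimal sketch using the numbers from the 
output above:
{code}
class AggregateThroughputSketch {
  public static void main(String[] args) {
    double totalMBytes = 1228800.0;   // Total MBytes processed
    double execTimeSec = 762.697;     // Test exec time sec
    // Aggregate throughput across all 100 files: ~1611 MB/s,
    // versus the reported per-task figure of ~17 MB/s.
    System.out.println(totalMBytes / execTimeSec);
  }
}
{code}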



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9402) Switch DataNode.LOG to use slf4j

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021390#comment-15021390
 ] 

Hudson commented on HDFS-9402:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1433 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1433/])
HDFS-9402. Switch DataNode.LOG to use slf4j. Contributed by Walter Su. (wheat9: 
rev 176ff5ce90f2cbcd8342016d0f5570337d2ff79f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
hadoop-tools/hadoop-extras/src/test/java/org/apache/hadoop/tools/TestDistCh.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java


> Switch DataNode.LOG to use slf4j
> 
>
> Key: HDFS-9402
> URL: https://issues.apache.org/jira/browse/HDFS-9402
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9402-branch-2.01.patch, HDFS-9402.01.patch
>
>
> Similar to HDFS-8971, HDFS-7712.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7796) Include X-editable for slick contenteditable fields in the web UI

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021387#comment-15021387
 ] 

Hudson commented on HDFS-7796:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1433 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1433/])
HDFS-7796. Include X-editable for slick contenteditable fields in the (wheat9: 
rev 38146a6cdbd3788d247f77dfc3248cd7f76d01f4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/img/loading.gif
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/css/bootstrap-editable.css
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/js/bootstrap-editable.min.js
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/img/clear.png


> Include X-editable for slick contenteditable fields in the web UI
> -
>
> Key: HDFS-7796
> URL: https://issues.apache.org/jira/browse/HDFS-7796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7796.01.patch
>
>
> This JIRA is for including X-editable (https://vitalets.github.io/x-editable/) 
> in the Hadoop UI. It is released under the MIT license, so it's fine. We need 
> it to make the owner / group / replication and possibly other fields in the 
> UI easily editable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8914) Document HA support in the HDFS HdfsDesign.md

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021389#comment-15021389
 ] 

Hudson commented on HDFS-8914:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1433 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1433/])
HDFS-8914. Document HA support in the HDFS HdfsDesign.md. Contributed by 
(wheat9: rev 0c7340f377f6663052be097ef58d60eee25f7334)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Document HA support in the HDFS HdfsDesign.md
> -
>
> Key: HDFS-8914
> URL: https://issues.apache.org/jira/browse/HDFS-8914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
> Environment: Documentation page in live
>Reporter: Ravindra Babu
>Assignee: Lars Francke
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-8914.1.patch, HDFS-8914.2.patch
>
>
> Please refer to these two links and correct one of them.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> The NameNode machine is a single point of failure for an HDFS cluster. If the 
> NameNode machine fails, manual intervention is necessary. Currently, 
> automatic restart and failover of the NameNode software to another machine is 
> not supported.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
> The HDFS High Availability feature addresses the above problems by providing 
> the option of running two redundant NameNodes in the same cluster in an 
> Active/Passive configuration with a hot standby. This allows a fast failover 
> to a new NameNode in the case that a machine crashes, or a graceful 
> administrator-initiated failover for the purpose of planned maintenance.
> Please update the HdfsDesign article with the same facts to avoid confusion 
> in the reader's mind.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3302) Review and improve HDFS trash documentation

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021391#comment-15021391
 ] 

Hudson commented on HDFS-3302:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1433 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1433/])
HDFS-3302. Review and improve HDFS trash documentation. Contributed by (wheat9: 
rev 2326171ea84b9ccea9df9fef137d6041df540d36)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md


> Review and improve HDFS trash documentation
> ---
>
> Key: HDFS-3302
> URL: https://issues.apache.org/jira/browse/HDFS-3302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: docs
> Fix For: 2.8.0
>
> Attachments: HDFS-3302.patch
>
>
> Improve Trash documentation for users.
> (0.23 published release docs are missing the original HDFS docs, btw...)
> A set of FAQ-like questions can be found on HDFS-2740.
> I'll update the ticket shortly with the areas to cover in the docs, as 
> enabling trash by default (HDFS-2740) would be considered a wide behavior 
> change per its follow-ups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9429) Tests in TestDFSAdminWithHA intermittently fail with EOFException

2015-11-22 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021383#comment-15021383
 ] 

Xiao Chen commented on HDFS-9429:
-

Attached a patch that reproduces the failure with the same stack trace but a 
different type of exception. As mentioned above, the timing must be very exact 
to reproduce the EOFException. I think this repro patch is sufficient to prove 
that a {{waitActive}}-ish method is needed.

The reproduced failure is caused by the JN RPC server starting later than the 
RPC call in the stack trace. Un-commenting the 
{{journalCluster.waitActive();}} in {{MiniQJMHACluster#MiniQJMHACluster}} at 
line 101 makes the unit test pass, thanks to the introduced {{waitActive}}.
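
A minimal sketch of the idea (assuming a {{waitActive()}} method on 
{{MiniJournalCluster}}, as introduced by the patch; other names are 
illustrative):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.qjournal.MiniJournalCluster;

class WaitActiveSketch {
  static MiniJournalCluster startJournalCluster(Configuration conf) throws Exception {
    MiniJournalCluster jc = new MiniJournalCluster.Builder(conf).build();
    // Block until every JournalNode RPC server is up, so the NameNode's
    // format() call cannot race ahead of the JNs.
    jc.waitActive();
    return jc;
  }
}
{code}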

Below is a sample failure stack trace using the attached patch.
{noformat}
java.io.IOException: Timed out waiting for response from loggers
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:229)
at 
org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:916)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:180)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1067)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:370)
at 
org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:228)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1005)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
at 
org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:111)
at 
org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:37)
at 
org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster$Builder.build(MiniQJMHACluster.java:65)
at 
org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.setUpHaCluster(TestDFSAdminWithHA.java:84)
at 
org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testMetaSave(TestDFSAdminWithHA.java:205)
{noformat}

Please kindly review patch 1. Thanks.

> Tests in TestDFSAdminWithHA intermittently fail with EOFException
> -
>
> Key: HDFS-9429
> URL: https://issues.apache.org/jira/browse/HDFS-9429
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9429.001.patch, HDFS-9429.reproduce
>
>
> I have seen this fail a handful of times for {{testMetaSave}}, but from my 
> understanding the failure comes from {{setUpHaCluster}}, so theoretically it 
> could fail for any case in the class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6885) Fix wrong use of BytesWritable in FSEditLogOp#RenameOp

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021373#comment-15021373
 ] 

Hudson commented on HDFS-6885:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2638 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2638/])
HDFS-6885. Fix wrong use of BytesWritable in FSEditLogOp#RenameOp. (wheat9: rev 
bfbcfe73644db6c047d774c4a461da27915eef84)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix wrong use of BytesWritable in FSEditLogOp#RenameOp
> --
>
> Key: HDFS-6885
> URL: https://issues.apache.org/jira/browse/HDFS-6885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-6885.001.patch
>
>
> After readFields using BytesWritable, the data length should be 
> {{writable.getLength()}}, not {{writable.getBytes().length}}, which is the 
> buffer length. 
> This causes the returned {{Rename[]}} to be longer than expected, and it may 
> include incorrect values (currently they are Rename#NONE and have not caused 
> problems, but the code logic is incorrect). 
> {code}
> BytesWritable writable = new BytesWritable();
> writable.readFields(in);
> byte[] bytes = writable.getBytes();
> Rename[] options = new Rename[bytes.length];
> for (int i = 0; i < bytes.length; i++) {
>   options[i] = Rename.valueOf(bytes[i]);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9429) Tests in TestDFSAdminWithHA intermittently fail with EOFException

2015-11-22 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9429:

Attachment: HDFS-9429.reproduce

> Tests in TestDFSAdminWithHA intermittently fail with EOFException
> -
>
> Key: HDFS-9429
> URL: https://issues.apache.org/jira/browse/HDFS-9429
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9429.001.patch, HDFS-9429.reproduce
>
>
> I have seen this fail a handful of times for {{testMetaSave}}, but from my 
> understanding the failure comes from {{setUpHaCluster}}, so theoretically it 
> could fail for any case in the class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7796) Include X-editable for slick contenteditable fields in the web UI

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021354#comment-15021354
 ] 

Hudson commented on HDFS-7796:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #708 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/708/])
HDFS-7796. Include X-editable for slick contenteditable fields in the (wheat9: 
rev 38146a6cdbd3788d247f77dfc3248cd7f76d01f4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/img/clear.png
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/img/loading.gif
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/js/bootstrap-editable.min.js
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/css/bootstrap-editable.css


> Include X-editable for slick contenteditable fields in the web UI
> -
>
> Key: HDFS-7796
> URL: https://issues.apache.org/jira/browse/HDFS-7796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7796.01.patch
>
>
> This JIRA is for including X-editable (https://vitalets.github.io/x-editable/) 
> in the Hadoop UI. It is released under the MIT license, so it's fine. We need 
> it to make the owner / group / replication and possibly other fields in the 
> UI easily editable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8914) Document HA support in the HDFS HdfsDesign.md

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021356#comment-15021356
 ] 

Hudson commented on HDFS-8914:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #708 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/708/])
HDFS-8914. Document HA support in the HDFS HdfsDesign.md. Contributed by 
(wheat9: rev 0c7340f377f6663052be097ef58d60eee25f7334)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Document HA support in the HDFS HdfsDesign.md
> -
>
> Key: HDFS-8914
> URL: https://issues.apache.org/jira/browse/HDFS-8914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
> Environment: Documentation page in live
>Reporter: Ravindra Babu
>Assignee: Lars Francke
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-8914.1.patch, HDFS-8914.2.patch
>
>
> Please refer to these two links and correct one of them.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> The NameNode machine is a single point of failure for an HDFS cluster. If the 
> NameNode machine fails, manual intervention is necessary. Currently, 
> automatic restart and failover of the NameNode software to another machine is 
> not supported.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
> The HDFS High Availability feature addresses the above problems by providing 
> the option of running two redundant NameNodes in the same cluster in an 
> Active/Passive configuration with a hot standby. This allows a fast failover 
> to a new NameNode in the case that a machine crashes, or a graceful 
> administrator-initiated failover for the purpose of planned maintenance.
> Please update the HdfsDesign article with the same facts to avoid confusion 
> in the reader's mind.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9402) Switch DataNode.LOG to use slf4j

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021357#comment-15021357
 ] 

Hudson commented on HDFS-9402:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #708 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/708/])
HDFS-9402. Switch DataNode.LOG to use slf4j. Contributed by Walter Su. (wheat9: 
rev 176ff5ce90f2cbcd8342016d0f5570337d2ff79f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
* 
hadoop-tools/hadoop-extras/src/test/java/org/apache/hadoop/tools/TestDistCh.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java


> Switch DataNode.LOG to use slf4j
> 
>
> Key: HDFS-9402
> URL: https://issues.apache.org/jira/browse/HDFS-9402
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9402-branch-2.01.patch, HDFS-9402.01.patch
>
>
> Similar to HDFS-8971, HDFS-7712.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9153) Pretty-format the output for DFSIO

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021355#comment-15021355
 ] 

Hudson commented on HDFS-9153:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #708 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/708/])
HDFS-9153. Pretty-format the output for DFSIO. Contributed by Kai Zheng. 
(wheat9: rev 000e12f6fa114dfa45377df23acf552e66410838)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Pretty-format the output for DFSIO
> --
>
> Key: HDFS-9153
> URL: https://issues.apache.org/jira/browse/HDFS-9153
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 2.8.0
>
> Attachments: HDFS-9153-v1.patch
>
>
> Referring to the following DFSIO output: I was surprised the test throughput 
> was only {{17}} MB/s, which doesn't make sense for a real cluster. Maybe it's 
> used for another purpose? For users, it may make more sense to report the 
> throughput as 1610 MB/s (1228800/763), calculated as 
> *Total MBytes processed / Test exec time*.
> {noformat}
> 15/09/28 11:42:23 INFO fs.TestDFSIO: - TestDFSIO - : write
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Date & time: Mon Sep 28 
> 11:42:23 CST 2015
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Number of files: 100
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Total MBytes processed: 1228800.0
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  Throughput mb/sec: 
> 17.457387239456878
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Average IO rate mb/sec: 17.57563018798828
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  IO rate std deviation: 
> 1.7076328985378455
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Test exec time sec: 762.697
> 15/09/28 11:42:23 INFO fs.TestDFSIO: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3302) Review and improve HDFS trash documentation

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021359#comment-15021359
 ] 

Hudson commented on HDFS-3302:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #708 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/708/])
HDFS-3302. Review and improve HDFS trash documentation. Contributed by (wheat9: 
rev 2326171ea84b9ccea9df9fef137d6041df540d36)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md


> Review and improve HDFS trash documentation
> ---
>
> Key: HDFS-3302
> URL: https://issues.apache.org/jira/browse/HDFS-3302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: docs
> Fix For: 2.8.0
>
> Attachments: HDFS-3302.patch
>
>
> Improve Trash documentation for users.
> (0.23 published release docs are missing the original HDFS docs, btw...)
> A set of FAQ-like questions can be found on HDFS-2740.
> I'll update the ticket shortly with the areas to cover in the docs, as 
> enabling trash by default (HDFS-2740) would be considered a wide behavior 
> change per its follow-ups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6363) Improve concurrency while checking inclusion and exclusion of datanodes

2015-11-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021353#comment-15021353
 ] 

Haohui Mai commented on HDFS-6363:
--

The approach looks good to me overall. Can you please change the volatile to 
an AtomicReference?
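
For concreteness, a minimal sketch of the publication pattern under discussion 
(hypothetical container; HostFileManager's actual fields may differ):
{code}
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.atomic.AtomicReference;

class HostListsSketch {
  // Immutable snapshot of both lists, always swapped as a unit.
  static final class Snapshot {
    final Set<String> includes;
    final Set<String> excludes;
    Snapshot(Set<String> includes, Set<String> excludes) {
      this.includes = Collections.unmodifiableSet(includes);
      this.excludes = Collections.unmodifiableSet(excludes);
    }
  }

  private final AtomicReference<Snapshot> current = new AtomicReference<>(
      new Snapshot(Collections.<String>emptySet(), Collections.<String>emptySet()));

  // Mutator: one atomic swap publishes both lists together, no locking.
  void refresh(Set<String> includes, Set<String> excludes) {
    current.set(new Snapshot(includes, excludes));
  }

  // Accessor: reads one consistent snapshot of includes and excludes.
  boolean isIncluded(String host) {
    return current.get().includes.contains(host);
  }
}
{code}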

> Improve concurrency while checking inclusion and exclusion of datanodes
> ---
>
> Key: HDFS-6363
> URL: https://issues.apache.org/jira/browse/HDFS-6363
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6363.patch
>
>
> HostFileManager holds two effectively immutable objects - includes and 
> excludes. These two objects can be safely published together using a volatile 
> container instead of synchronizing all mutators and accessors.
> This improves concurrency when using HostFileManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5396) FSImage.getFsImageName should check whether fsimage exists

2015-11-22 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5396:
-
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

Closing this as Won't Fix, since the 1.x branch is no longer under active 
development.

> FSImage.getFsImageName should check whether fsimage exists
> --
>
> Key: HDFS-5396
> URL: https://issues.apache.org/jira/browse/HDFS-5396
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 1.2.1
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
>  Labels: BB2015-05-TBR
> Fix For: 1.3.0
>
> Attachments: HDFS-5396-branch-1.2.patch, HDFS-5396-branch-1.2.patch
>
>
> As described in https://issues.apache.org/jira/browse/HDFS-5367, the fsimage 
> may not be written to all IMAGE dirs, so we need to check whether the fsimage 
> exists before FSImage.getFsImageName returns.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9314) Improve BlockPlacementPolicyDefault's picking of excess replicas

2015-11-22 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021350#comment-15021350
 ] 

Walter Su commented on HDFS-9314:
-

Suppose {{first}} and {{second}} are not empty.

Before the changes to the default policy:
1. Choose from {{first}}.

After the changes to the default policy:
1. Choose from {{first}}.
2. If step 1 returns null, fall back to choosing from {{second}}.

Now suppose {{first}} and {{second}} are not empty, and {{first}} contains 
nodes in the same node-group.
Before the changes to the node-group policy:
1. Choose from the nodes belonging to the same node-group within {{first}}.

After the changes to the node-group policy (what I thought it should be):
1. Choose from the nodes belonging to the same node-group within {{first}}.
2. If step 1 returns null, fall back to all of {{first}}.
3. If step 2 returns null, fall back to choosing from {{second}}.
(Steps 2-3 are the same as in the default policy; a rough sketch of this chain 
follows below.)
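
A rough sketch of that chain (hypothetical method shape, not the actual patch):
{code}
import java.util.Collection;

class NodeGroupFallbackSketch {
  // sameNodeGroup: nodes in first that share a node-group with another replica.
  static <T> Collection<T> pick(Collection<T> sameNodeGroup,
                                Collection<T> first,
                                Collection<T> second) {
    if (!sameNodeGroup.isEmpty()) {
      return sameNodeGroup;  // step 1: node-group-specific choice from first
    }
    if (!first.isEmpty()) {
      return first;          // step 2: fall back to all of first
    }
    return second;           // step 3: fall back to second
  }
}
{code}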

_Satisfying the placement policy is the prerequisite for the fallback strategy._

What I meant is: we support fallback in the default policy, so why not in the 
node-group policy?
Besides, the 004 patch simply returns {{all}}; there is no preference for 
{{first}} or {{second}}, so we can't call it a fallback strategy.

> Improve BlockPlacementPolicyDefault's picking of excess replicas
> 
>
> Key: HDFS-9314
> URL: https://issues.apache.org/jira/browse/HDFS-9314
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Xiao Chen
> Attachments: HDFS-9314.001.patch, HDFS-9314.002.patch, 
> HDFS-9314.003.patch, HDFS-9314.004.patch, HDFS-9314.005.patch
>
>
> The test case used in HDFS-9313 identified a NullPointerException as well as 
> a limitation in excess replica picking. If the current replicas are on 
> {SSD(rack r1), DISK(rack 2), DISK(rack 3), DISK(rack 3)} and the storage 
> policy changes to HOT_STORAGE_POLICY_ID, BlockPlacementPolicyDefault won't 
> be able to delete the SSD replica.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3302) Review and improve HDFS trash documentation

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021342#comment-15021342
 ] 

Hudson commented on HDFS-3302:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #697 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/697/])
HDFS-3302. Review and improve HDFS trash documentation. Contributed by (wheat9: 
rev 2326171ea84b9ccea9df9fef137d6041df540d36)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md


> Review and improve HDFS trash documentation
> ---
>
> Key: HDFS-3302
> URL: https://issues.apache.org/jira/browse/HDFS-3302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: docs
> Fix For: 2.8.0
>
> Attachments: HDFS-3302.patch
>
>
> Improve Trash documentation for users.
> (0.23 published release docs are missing original HDFS docs btw...)
> A set of FAQ-like questions can be found on HDFS-2740
> I'll update the ticket shortly with the areas to cover in the docs, as 
> enabling trash by default (HDFS-2740) would be considered as a wide behavior 
> change per its follow ups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9153) Pretty-format the output for DFSIO

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021341#comment-15021341
 ] 

Hudson commented on HDFS-9153:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #697 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/697/])
HDFS-9153. Pretty-format the output for DFSIO. Contributed by Kai Zheng. 
(wheat9: rev 000e12f6fa114dfa45377df23acf552e66410838)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java


> Pretty-format the output for DFSIO
> --
>
> Key: HDFS-9153
> URL: https://issues.apache.org/jira/browse/HDFS-9153
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 2.8.0
>
> Attachments: HDFS-9153-v1.patch
>
>
> Referring to the following DFSIO output: I was surprised the test throughput was 
> only {{17}} MB/s, which doesn't make sense for a real cluster. Maybe it's used 
> for another purpose? For users, it may make more sense to report the throughput 
> as 1610 MB/s (1228800/763), calculated as *Total MBytes processed / Test exec time*.
> {noformat}
> 15/09/28 11:42:23 INFO fs.TestDFSIO: - TestDFSIO - : write
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Date & time: Mon Sep 28 
> 11:42:23 CST 2015
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Number of files: 100
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Total MBytes processed: 1228800.0
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  Throughput mb/sec: 
> 17.457387239456878
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Average IO rate mb/sec: 17.57563018798828
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  IO rate std deviation: 
> 1.7076328985378455
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Test exec time sec: 762.697
> 15/09/28 11:42:23 INFO fs.TestDFSIO: 
> {noformat}
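> A tiny illustrative calculation of the suggested aggregate figure (not part of 
> TestDFSIO; numbers copied from the report above):
> {code:java}
> public class DfsioAggregateSketch {
>   public static void main(String[] args) {
>     double totalMBytes = 1228800.0; // "Total MBytes processed" from the report
>     double execTimeSec = 762.697;   // "Test exec time sec" from the report
>     // Aggregate throughput = Total MBytes processed / Test exec time,
>     // roughly 1611 MB/s vs. the per-task "Throughput mb/sec" of ~17.46.
>     System.out.printf("Aggregate throughput: %.0f MB/s%n",
>         totalMBytes / execTimeSec);
>   }
> }
> {code}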



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7796) Include X-editable for slick contenteditable fields in the web UI

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021340#comment-15021340
 ] 

Hudson commented on HDFS-7796:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #697 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/697/])
HDFS-7796. Include X-editable for slick contenteditable fields in the (wheat9: 
rev 38146a6cdbd3788d247f77dfc3248cd7f76d01f4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/img/clear.png
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/css/bootstrap-editable.css
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/img/loading.gif
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/js/bootstrap-editable.min.js


> Include X-editable for slick contenteditable fields in the web UI
> -
>
> Key: HDFS-7796
> URL: https://issues.apache.org/jira/browse/HDFS-7796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7796.01.patch
>
>
> This JIRA is for including X-editable (https://vitalets.github.io/x-editable/) 
> in the Hadoop UI. It is released under the MIT license, so it's fine. We need 
> it to make the owner / group / replication and possibly other fields in the 
> UI easily editable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9443) Disabling HDFS client socket cache causes logging message printed to console for CLI commands.

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021339#comment-15021339
 ] 

Hudson commented on HDFS-9443:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2566 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2566/])
HDFS-9443. Disabling HDFS client socket cache causes logging message (wheat9: 
rev 6039059c37626d3d1d231986440623a593e2726b)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/PeerCache.java


> Disabling HDFS client socket cache causes logging message printed to console 
> for CLI commands.
> --
>
> Key: HDFS-9443
> URL: https://issues.apache.org/jira/browse/HDFS-9443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-9443.001.patch
>
>
> The HDFS client's socket cache can be disabled by setting 
> {{dfs.client.socketcache.capacity}} to {{0}}.  When this is done, the 
> {{PeerCache}} class logs an info-level message stating that the cache is 
> disabled.  This message is getting printed to the console for CLI commands, 
> which disrupts CLI output.  This issue proposes to downgrade to debug-level 
> logging for this message.
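> A sketch of the proposed downgrade (class and logger wiring are illustrative, 
> not the committed patch):
> {code:java}
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
>
> class PeerCacheLoggingSketch {
>   private static final Logger LOG =
>       LoggerFactory.getLogger(PeerCacheLoggingSketch.class);
>
>   static void checkCapacity(int capacity) {
>     if (capacity == 0) {
>       // Debug instead of info, so plain CLI commands are not disrupted.
>       LOG.debug("SocketCache disabled.");
>     }
>   }
> }
> {code}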



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8251) Move the synthetic load generator into its own package

2015-11-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021335#comment-15021335
 ] 

Haohui Mai commented on HDFS-8251:
--

Maybe just hadoop-tools? Things like DFSIO / Terasort are occasionally used as 
benchmarks, but it seems they can be more than the tests that a name like 
{{hadoop-test-tool}} would imply?

> Move the synthetic load generator into its own package
> --
>
> Key: HDFS-8251
> URL: https://issues.apache.org/jira/browse/HDFS-8251
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: J.Andreina
>  Labels: BB2015-05-RFC
> Attachments: HDFS-8251.1.patch
>
>
> It doesn't really make sense for the HDFS load generator to be a part of the 
> (extremely large) mapreduce jobclient package. It should be pulled out and 
> put in its own package, probably in hadoop-tools.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6885) Fix wrong use of BytesWritable in FSEditLogOp#RenameOp

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021328#comment-15021328
 ] 

Hudson commented on HDFS-6885:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8848 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8848/])
HDFS-6885. Fix wrong use of BytesWritable in FSEditLogOp#RenameOp. (wheat9: rev 
bfbcfe73644db6c047d774c4a461da27915eef84)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix wrong use of BytesWritable in FSEditLogOp#RenameOp
> --
>
> Key: HDFS-6885
> URL: https://issues.apache.org/jira/browse/HDFS-6885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-6885.001.patch
>
>
> After readFields using BytesWritable, the data length should be 
> {{writable.getLength()}}, instead of {{writable.getBytes().length}}, which is 
> the buffer length. 
> This causes the returned {{Rename[]}} to be longer than expected, and it may 
> include some incorrect values (currently they are Rename#NONE and have not 
> caused problems, but the code logic is incorrect). 
> {code}
> BytesWritable writable = new BytesWritable();
> writable.readFields(in);
> byte[] bytes = writable.getBytes();
> Rename[] options = new Rename[bytes.length];
> for (int i = 0; i < bytes.length; i++) {
>   options[i] = Rename.valueOf(bytes[i]);
> }
> {code}
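> As a sketch, the fix described above bounds the loop by {{writable.getLength()}}:
> {code}
> BytesWritable writable = new BytesWritable();
> writable.readFields(in);
> byte[] bytes = writable.getBytes(); // backing buffer, may be longer than the data
> int len = writable.getLength();     // number of valid bytes
> Rename[] options = new Rename[len];
> for (int i = 0; i < len; i++) {
>   options[i] = Rename.valueOf(bytes[i]);
> }
> {code}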



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9314) Improve BlockPlacementPolicyDefault's picking of excess replicas

2015-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021326#comment-15021326
 ] 

Hadoop QA commented on HDFS-9314:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 31s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 110m 1s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 5s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_85. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 29s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 244m 53s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencing |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.mover.TestStorageMover |
|   | hadoop.hdfs.server.datanode.TestBlockReplacement |
|   | hadoop.hdfs.server.namenode.ha.TestSe

[jira] [Updated] (HDFS-6885) Fix wrong use of BytesWritable in FSEditLogOp#RenameOp

2015-11-22 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6885:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~hitliuyi] for the 
contribution.

> Fix wrong use of BytesWritable in FSEditLogOp#RenameOp
> --
>
> Key: HDFS-6885
> URL: https://issues.apache.org/jira/browse/HDFS-6885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-6885.001.patch
>
>
> After readFields using BytesWritable, the data length should be 
> {{writable.getLength()}}, instead of {{writable.getBytes().length}}, which is 
> the buffer length. 
> This causes the returned {{Rename[]}} to be longer than expected, and it may 
> include some incorrect values (currently they are Rename#NONE and have not 
> caused problems, but the code logic is incorrect). 
> {code}
> BytesWritable writable = new BytesWritable();
> writable.readFields(in);
> byte[] bytes = writable.getBytes();
> Rename[] options = new Rename[bytes.length];
> for (int i = 0; i < bytes.length; i++) {
>   options[i] = Rename.valueOf(bytes[i]);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6885) Fix wrong use of BytesWritable in FSEditLogOp#RenameOp

2015-11-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021320#comment-15021320
 ] 

Haohui Mai commented on HDFS-6885:
--

+1. Committing.

> Fix wrong use of BytesWritable in FSEditLogOp#RenameOp
> --
>
> Key: HDFS-6885
> URL: https://issues.apache.org/jira/browse/HDFS-6885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6885.001.patch
>
>
> After readFields using BytesWritable, the data length should be 
> {{writable.getLength()}}, instead of {{writable.getBytes().length}}, which is 
> the buffer length. 
> This causes the returned {{Rename[]}} to be longer than expected, and it may 
> include some incorrect values (currently they are Rename#NONE and have not 
> caused problems, but the code logic is incorrect). 
> {code}
> BytesWritable writable = new BytesWritable();
> writable.readFields(in);
> byte[] bytes = writable.getBytes();
> Rename[] options = new Rename[bytes.length];
> for (int i = 0; i < bytes.length; i++) {
>   options[i] = Rename.valueOf(bytes[i]);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6853) MiniDFSCluster.isClusterUp() should not check if null NameNodes are up

2015-11-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021318#comment-15021318
 ] 

Haohui Mai commented on HDFS-6853:
--

It might make sense to fix {{isNameNodeUp()}} instead.

> MiniDFSCluster.isClusterUp() should not check if null NameNodes are up
> --
>
> Key: HDFS-6853
> URL: https://issues.apache.org/jira/browse/HDFS-6853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: James Thomas
>Assignee: James Thomas
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6853.2.patch, HDFS-6853.patch
>
>
> Suppose we have a two-NN cluster and then shut down one of the NN's (NN0). 
> When we try to restart the other NN (NN1), we wait for isClusterUp() to 
> return true, but this will never happen because NN0 is null and isNameNodeUp 
> returns false for a null NN.
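> One possible reading of fixing {{isNameNodeUp()}}, as a sketch (structure is 
> illustrative, not the actual MiniDFSCluster code): treat a deliberately 
> shut-down (null) NameNode as not blocking cluster-up.
> {code:java}
> import java.util.List;
> import java.util.function.Predicate;
>
> class ClusterUpSketch {
>   static <N> boolean isClusterUp(List<N> namenodes, Predicate<N> isNameNodeUp) {
>     for (N nn : namenodes) {
>       if (nn != null && !isNameNodeUp.test(nn)) {
>         return false; // only running (non-null) NameNodes must be up
>       }
>     }
>     return true;
>   }
> }
> {code}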



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9402) Switch DataNode.LOG to use slf4j

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021312#comment-15021312
 ] 

Hudson commented on HDFS-9402:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2637 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2637/])
HDFS-9402. Switch DataNode.LOG to use slf4j. Contributed by Walter Su. (wheat9: 
rev 176ff5ce90f2cbcd8342016d0f5570337d2ff79f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
* 
hadoop-tools/hadoop-extras/src/test/java/org/apache/hadoop/tools/TestDistCh.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Switch DataNode.LOG to use slf4j
> 
>
> Key: HDFS-9402
> URL: https://issues.apache.org/jira/browse/HDFS-9402
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9402-branch-2.01.patch, HDFS-9402.01.patch
>
>
> Similar to HDFS-8971, HDFS-7712.
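> As a sketch, the switch replaces the commons-logging declaration with an slf4j 
> one and enables parameterized logging:
> {code:java}
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
>
> class DataNodeLoggingSketch {
>   // was (commons-logging): LogFactory.getLog(DataNode.class)
>   static final Logger LOG = LoggerFactory.getLogger(DataNodeLoggingSketch.class);
>
>   void receive(String block, String peer) {
>     // Parameterized logging: no string concatenation when the level is off.
>     LOG.info("Receiving block {} from {}", block, peer);
>   }
> }
> {code}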



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3302) Review and improve HDFS trash documentation

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021314#comment-15021314
 ] 

Hudson commented on HDFS-3302:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2637 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2637/])
HDFS-3302. Review and improve HDFS trash documentation. Contributed by (wheat9: 
rev 2326171ea84b9ccea9df9fef137d6041df540d36)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md


> Review and improve HDFS trash documentation
> ---
>
> Key: HDFS-3302
> URL: https://issues.apache.org/jira/browse/HDFS-3302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: docs
> Fix For: 2.8.0
>
> Attachments: HDFS-3302.patch
>
>
> Improve Trash documentation for users.
> (0.23 published release docs are missing original HDFS docs btw...)
> A set of FAQ-like questions can be found on HDFS-2740
> I'll update the ticket shortly with the areas to cover in the docs, as 
> enabling trash by default (HDFS-2740) would be considered as a wide behavior 
> change per its follow ups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7796) Include X-editable for slick contenteditable fields in the web UI

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021309#comment-15021309
 ] 

Hudson commented on HDFS-7796:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2637 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2637/])
HDFS-7796. Include X-editable for slick contenteditable fields in the (wheat9: 
rev 38146a6cdbd3788d247f77dfc3248cd7f76d01f4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/img/clear.png
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/img/loading.gif
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/css/bootstrap-editable.css
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/js/bootstrap-editable.min.js


> Include X-editable for slick contenteditable fields in the web UI
> -
>
> Key: HDFS-7796
> URL: https://issues.apache.org/jira/browse/HDFS-7796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7796.01.patch
>
>
> This JIRA is for including X-editable (https://vitalets.github.io/x-editable/) 
> in the Hadoop UI. It is released under the MIT license, so it's fine. We need 
> it to make the owner / group / replication and possibly other fields in the 
> UI easily editable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8914) Document HA support in the HDFS HdfsDesign.md

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021311#comment-15021311
 ] 

Hudson commented on HDFS-8914:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2637 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2637/])
HDFS-8914. Document HA support in the HDFS HdfsDesign.md. Contributed by 
(wheat9: rev 0c7340f377f6663052be097ef58d60eee25f7334)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md


> Document HA support in the HDFS HdfsDesign.md
> -
>
> Key: HDFS-8914
> URL: https://issues.apache.org/jira/browse/HDFS-8914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
> Environment: Documentation page in live
>Reporter: Ravindra Babu
>Assignee: Lars Francke
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-8914.1.patch, HDFS-8914.2.patch
>
>
> Please refer to these two links and correct one of them.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> The NameNode machine is a single point of failure for an HDFS cluster. If the 
> NameNode machine fails, manual intervention is necessary. Currently, 
> automatic restart and failover of the NameNode software to another machine is 
> not supported.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
> The HDFS High Availability feature addresses the above problems by providing 
> the option of running two redundant NameNodes in the same cluster in an 
> Active/Passive configuration with a hot standby. This allows a fast failover 
> to a new NameNode in the case that a machine crashes, or a graceful 
> administrator-initiated failover for the purpose of planned maintenance.
> Please update the HdfsDesign article with the same facts to avoid confusion in 
> the reader's mind.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9153) Pretty-format the output for DFSIO

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021310#comment-15021310
 ] 

Hudson commented on HDFS-9153:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2637 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2637/])
HDFS-9153. Pretty-format the output for DFSIO. Contributed by Kai Zheng. 
(wheat9: rev 000e12f6fa114dfa45377df23acf552e66410838)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java


> Pretty-format the output for DFSIO
> --
>
> Key: HDFS-9153
> URL: https://issues.apache.org/jira/browse/HDFS-9153
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 2.8.0
>
> Attachments: HDFS-9153-v1.patch
>
>
> Referring to the following DFSIO output: I was surprised the test throughput was 
> only {{17}} MB/s, which doesn't make sense for a real cluster. Maybe it's used 
> for another purpose? For users, it may make more sense to report the throughput 
> as 1610 MB/s (1228800/763), calculated as *Total MBytes processed / Test exec time*.
> {noformat}
> 15/09/28 11:42:23 INFO fs.TestDFSIO: - TestDFSIO - : write
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Date & time: Mon Sep 28 
> 11:42:23 CST 2015
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Number of files: 100
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Total MBytes processed: 1228800.0
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  Throughput mb/sec: 
> 17.457387239456878
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Average IO rate mb/sec: 17.57563018798828
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  IO rate std deviation: 
> 1.7076328985378455
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Test exec time sec: 762.697
> 15/09/28 11:42:23 INFO fs.TestDFSIO: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6436) WebHdfsFileSystem execute get, renew and cancel delegationtoken operation should use spnego to authenticate

2015-11-22 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6436:
-
Resolution: Cannot Reproduce
Status: Resolved  (was: Patch Available)

Based on [~daryn]'s comments, I'm closing this jira as cannot reproduce for 
now. Please feel free to reopen it if the issue still exists in trunk / branch-2.

> WebHdfsFileSystem execute get, renew and cancel delegationtoken operation 
> should use spnego to authenticate
> ---
>
> Key: HDFS-6436
> URL: https://issues.apache.org/jira/browse/HDFS-6436
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.4.0
> Environment: Kerberos
>Reporter: Bangtao Zhou
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6436.patch
>
>
> In Kerberos secure mode, using WebHdfsFileSystem to access HDFS always gets an 
> *org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Unauthorized*. For example, calling WebHdfsFileSystem.listStatus executes a 
> LISTSTATUS Op, and this Op should authenticate via *delegation token*, so it 
> executes a GETDELEGATIONTOKEN Op to get a delegation token (GETDELEGATIONTOKEN 
> actually authenticates via *SPNEGO*), but it still uses a delegation token to 
> authenticate, so it always gets an Unauthorized exception.
> Exception is like this:
> {code:java}
> 19:05:11.758 [main] DEBUG o.a.h.hdfs.web.URLConnectionFactory - open URL 
> connection
> java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Unauthorized
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:287)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:82)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:538)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:406)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:434)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:430)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:1058)
> 19:05:11.766 [main] DEBUG o.a.h.security.UserGroupInformation - 
> PrivilegedActionException as:bang...@cyhadoop.com (auth:KERBEROS) 
> cause:java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Unauthorized
>   at 
> org.apache.hadoop.hdfs.web.TokenAspect.ensureTokenInitialized(TokenAspect.java:134)
> 19:05:11.767 [main] DEBUG o.a.h.security.UserGroupInformation - 
> PrivilegedActionException as:bang...@cyhadoop.com (auth:KERBEROS) 
> cause:java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Unauthorized
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:213)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getAuthParameters(WebHdfsFileSystem.java:371)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toUrl(WebHdfsFileSystem.java:392)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractFsPathRunner.getUrl(WebHdfsFileSystem.java:602)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:533)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:406)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:434)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:430)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.listStatus(WebHdfsFileSystem.java:1037)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1483)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1523)
>   at org.apache.hadoop.fs.FileSystem$4.<init>(FileSystem.java:1679)
>   at 
> org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1678)
>   at 
> org.apache.hadoop.fs.FileSystem.listLoca

[jira] [Updated] (HDFS-6540) TestOfflineImageViewer.outputOfLSVisitor fails for certain usernames

2015-11-22 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6540:
-
Resolution: Cannot Reproduce
Status: Resolved  (was: Patch Available)

Looks like the issue has been addressed in 2.5. Closing it as cannot reproduce.

> TestOfflineImageViewer.outputOfLSVisitor fails for certain usernames
> 
>
> Key: HDFS-6540
> URL: https://issues.apache.org/jira/browse/HDFS-6540
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6540-branch-2.4.patch, HDFS-6540.patch
>
>
> TestOfflineImageViewer.outputOfLSVisitor() fails if the username contains "-" 
> (dash). A dash is a valid character in a username.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8335) FSNamesystem/FSDirStatAndListingOp getFileInfo and getListingInt construct FSPermissionChecker regardless of isPermissionEnabled()

2015-11-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021290#comment-15021290
 ] 

Haohui Mai commented on HDFS-8335:
--

Hi [~gliptak], can you please rebase the patch? Thanks.

> FSNamesystem/FSDirStatAndListingOp getFileInfo and getListingInt construct 
> FSPermissionChecker regardless of isPermissionEnabled()
> --
>
> Key: HDFS-8335
> URL: https://issues.apache.org/jira/browse/HDFS-8335
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.5.0, 2.6.0, 2.7.0, 2.8.0
>Reporter: David Bryson
>Assignee: Gabor Liptak
> Attachments: HDFS-8335.2.patch, HDFS-8335.patch
>
>
> FSNamesystem (2.5.x)/FSDirStatAndListingOp(current trunk) getFileInfo and 
> getListingInt methods call getPermissionChecker() to construct a 
> FSPermissionChecker regardless of isPermissionEnabled(). When permission 
> checking is disabled, this leads to an unnecessary performance hit 
> constructing a UserGroupInformation object that is never used.
> For example, a stack dump taken while driving concurrent requests shows the 
> handler threads all blocking.
> Here's the thread holding the lock:
> "IPC Server handler 9 on 9000" daemon prio=10 tid=0x7f78d8b9e800 
> nid=0x142f3 runnable [0x7f78c2ddc000]
>java.lang.Thread.State: RUNNABLE
> at java.io.FileInputStream.readBytes(Native Method)
> at java.io.FileInputStream.read(FileInputStream.java:272)
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
> - locked <0x0007d9b105c0> (a java.lang.UNIXProcess$ProcessPipeInputStream)
> at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
> at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
> at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
> - locked <0x0007d9b1a888> (a java.io.InputStreamReader)
> at java.io.InputStreamReader.read(InputStreamReader.java:184)
> at java.io.BufferedReader.fill(BufferedReader.java:154)
> at java.io.BufferedReader.read1(BufferedReader.java:205)
> at java.io.BufferedReader.read(BufferedReader.java:279)
> - locked <0x0007d9b1a888> (a java.io.InputStreamReader)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.parseExecResult(Shell.java:715)
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:524)
> at org.apache.hadoop.util.Shell.run(Shell.java:455)
> at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:774)
> at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:84)
> at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52)
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
> at org.apache.hadoop.security.Groups.getGroups(Groups.java:139)
> at 
> org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1474)
> - locked <0x0007a6df75f8> (a 
> org.apache.hadoop.security.UserGroupInformation)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:82)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3534)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4489)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4478)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:898)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:602)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> Here is (one of the many) threads waiting on the lock:
> "IPC Server handler 2 on 9000" daemon prio=10 tid=0x7f78d8c48800 
> nid=0x142ec waiti
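> A guard along these lines would avoid the construction when permission checking 
> is off (a sketch, not the actual patch; names are illustrative):
> {code:java}
> // Sketch: construct the checker, and pay for the UGI/group lookup, only when
> // permission checking is actually enabled.
> FSPermissionChecker pc = isPermissionEnabled ? getPermissionChecker() : null;
> if (pc != null) {
>   checkPermission(pc, src);
> }
> {code}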

[jira] [Commented] (HDFS-3296) Running libhdfs tests in mac fails

2015-11-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021289#comment-15021289
 ] 

Haohui Mai commented on HDFS-3296:
--

Hi [~cnauroth], can you please rebase the latest patch? Thanks.

> Running libhdfs tests in mac fails
> --
>
> Key: HDFS-3296
> URL: https://issues.apache.org/jira/browse/HDFS-3296
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: Amareshwari Sriramadasu
>Assignee: Chris Nauroth
> Attachments: HDFS-3296.001.patch
>
>
> Running "ant -Dcompile.c++=true -Dlibhdfs=true test-c++-libhdfs" on Mac fails 
> with following error:
> {noformat}
>  [exec] dyld: lazy symbol binding failed: Symbol not found: 
> _JNI_GetCreatedJavaVMs
>  [exec]   Referenced from: 
> /Users/amareshwari.sr/workspace/hadoop/build/c++/Mac_OS_X-x86_64-64/lib/libhdfs.0.dylib
>  [exec]   Expected in: flat namespace
>  [exec] 
>  [exec] dyld: Symbol not found: _JNI_GetCreatedJavaVMs
>  [exec]   Referenced from: 
> /Users/amareshwari.sr/workspace/hadoop/build/c++/Mac_OS_X-x86_64-64/lib/libhdfs.0.dylib
>  [exec]   Expected in: flat namespace
>  [exec] 
>  [exec] 
> /Users/amareshwari.sr/workspace/hadoop/src/c++/libhdfs/tests/test-libhdfs.sh: 
> line 122: 39485 Trace/BPT trap: 5   CLASSPATH=$HADOOP_CONF_DIR:$CLASSPATH 
> LD_PRELOAD="$LIB_JVM_DIR/libjvm.so:$LIBHDFS_INSTALL_DIR/libhdfs.so:" 
> $LIBHDFS_BUILD_DIR/$HDFS_TEST
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3302) Review and improve HDFS trash documentation

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021287#comment-15021287
 ] 

Hudson commented on HDFS-3302:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8845 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8845/])
HDFS-3302. Review and improve HDFS trash documentation. Contributed by (wheat9: 
rev 2326171ea84b9ccea9df9fef137d6041df540d36)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md


> Review and improve HDFS trash documentation
> ---
>
> Key: HDFS-3302
> URL: https://issues.apache.org/jira/browse/HDFS-3302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: docs
> Fix For: 2.8.0
>
> Attachments: HDFS-3302.patch
>
>
> Improve Trash documentation for users.
> (0.23 published release docs are missing original HDFS docs btw...)
> A set of FAQ-like questions can be found on HDFS-2740
> I'll update the ticket shortly with the areas to cover in the docs, as 
> enabling trash by default (HDFS-2740) would be considered as a wide behavior 
> change per its follow ups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9153) Pretty-format the output for DFSIO

2015-11-22 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021284#comment-15021284
 ] 

Kai Zheng commented on HDFS-9153:
-

Thanks [~wheat9] for committing this!

> Pretty-format the output for DFSIO
> --
>
> Key: HDFS-9153
> URL: https://issues.apache.org/jira/browse/HDFS-9153
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 2.8.0
>
> Attachments: HDFS-9153-v1.patch
>
>
> Referring to the following DFSIO output: I was surprised the test throughput was 
> only {{17}} MB/s, which doesn't make sense for a real cluster. Maybe it's used 
> for another purpose? For users, it may make more sense to report the throughput 
> as 1610 MB/s (1228800/763), calculated as *Total MBytes processed / Test exec time*.
> {noformat}
> 15/09/28 11:42:23 INFO fs.TestDFSIO: - TestDFSIO - : write
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Date & time: Mon Sep 28 
> 11:42:23 CST 2015
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Number of files: 100
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Total MBytes processed: 1228800.0
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  Throughput mb/sec: 
> 17.457387239456878
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Average IO rate mb/sec: 17.57563018798828
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  IO rate std deviation: 
> 1.7076328985378455
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Test exec time sec: 762.697
> 15/09/28 11:42:23 INFO fs.TestDFSIO: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7796) Include X-editable for slick contenteditable fields in the web UI

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021282#comment-15021282
 ] 

Hudson commented on HDFS-7796:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8844 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8844/])
HDFS-7796. Include X-editable for slick contenteditable fields in the (wheat9: 
rev 38146a6cdbd3788d247f77dfc3248cd7f76d01f4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/css/bootstrap-editable.css
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/js/bootstrap-editable.min.js
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/img/loading.gif
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.0.2/img/clear.png


> Include X-editable for slick contenteditable fields in the web UI
> -
>
> Key: HDFS-7796
> URL: https://issues.apache.org/jira/browse/HDFS-7796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7796.01.patch
>
>
> This JIRA is for including X-editable (https://vitalets.github.io/x-editable/) 
> in the Hadoop UI. It is released under the MIT license, so it's fine. We need 
> it to make the owner / group / replication and possibly other fields in the 
> UI easily editable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8914) Document HA support in the HDFS HdfsDesign.md

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021277#comment-15021277
 ] 

Hudson commented on HDFS-8914:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #696 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/696/])
HDFS-8914. Document HA support in the HDFS HdfsDesign.md. Contributed by 
(wheat9: rev 0c7340f377f6663052be097ef58d60eee25f7334)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Document HA support in the HDFS HdfsDesign.md
> -
>
> Key: HDFS-8914
> URL: https://issues.apache.org/jira/browse/HDFS-8914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
> Environment: Documentation page in live
>Reporter: Ravindra Babu
>Assignee: Lars Francke
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-8914.1.patch, HDFS-8914.2.patch
>
>
> Please refer to these two links and correct one of them.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> The NameNode machine is a single point of failure for an HDFS cluster. If the 
> NameNode machine fails, manual intervention is necessary. Currently, 
> automatic restart and failover of the NameNode software to another machine is 
> not supported.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
> The HDFS High Availability feature addresses the above problems by providing 
> the option of running two redundant NameNodes in the same cluster in an 
> Active/Passive configuration with a hot standby. This allows a fast failover 
> to a new NameNode in the case that a machine crashes, or a graceful 
> administrator-initiated failover for the purpose of planned maintenance.
> Please update the HdfsDesign article with the same facts to avoid confusion in 
> the reader's mind.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9402) Switch DataNode.LOG to use slf4j

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021278#comment-15021278
 ] 

Hudson commented on HDFS-9402:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #696 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/696/])
HDFS-9402. Switch DataNode.LOG to use slf4j. Contributed by Walter Su. (wheat9: 
rev 176ff5ce90f2cbcd8342016d0f5570337d2ff79f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* 
hadoop-tools/hadoop-extras/src/test/java/org/apache/hadoop/tools/TestDistCh.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java


> Switch DataNode.LOG to use slf4j
> 
>
> Key: HDFS-9402
> URL: https://issues.apache.org/jira/browse/HDFS-9402
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9402-branch-2.01.patch, HDFS-9402.01.patch
>
>
> Similar to HDFS-8971, HDFS-7712.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8958) Clean up the checkstyle warinings about NameNodeMXBean

2015-11-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021276#comment-15021276
 ] 

Haohui Mai commented on HDFS-8958:
--

[~surendrasingh], can you please update your patch to the latest trunk? Thanks

> Clean up the checkstyle warinings about NameNodeMXBean
> --
>
> Key: HDFS-8958
> URL: https://issues.apache.org/jira/browse/HDFS-8958
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8958-001.patch
>
>
> These warnings were generated in HDFS-8388:
> https://issues.apache.org/jira/browse/HDFS-8388?focusedCommentId=14708960&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14708960



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3302) Review and improve HDFS trash documentation

2015-11-22 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-3302:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~qwertymaniac] for the 
contribution.

> Review and improve HDFS trash documentation
> ---
>
> Key: HDFS-3302
> URL: https://issues.apache.org/jira/browse/HDFS-3302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: docs
> Fix For: 2.8.0
>
> Attachments: HDFS-3302.patch
>
>
> Improve Trash documentation for users.
> (0.23 published release docs are missing original HDFS docs btw...)
> A set of FAQ-like questions can be found on HDFS-2740
> I'll update the ticket shortly with the areas to cover in the docs, as 
> enabling trash by default (HDFS-2740) would be considered as a wide behavior 
> change per its follow ups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3302) Review and improve HDFS trash documentation

2015-11-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021274#comment-15021274
 ] 

Haohui Mai commented on HDFS-3302:
--

+1. Committing.

> Review and improve HDFS trash documentation
> ---
>
> Key: HDFS-3302
> URL: https://issues.apache.org/jira/browse/HDFS-3302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: docs
> Fix For: 2.8.0
>
> Attachments: HDFS-3302.patch
>
>
> Improve Trash documentation for users.
> (0.23 published release docs are missing original HDFS docs btw...)
> A set of FAQ-like questions can be found on HDFS-2740
> I'll update the ticket shortly with the areas to cover in the docs, as 
> enabling trash by default (HDFS-2740) would be considered as a wide behavior 
> change per its follow ups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9153) Pretty-format the output for DFSIO

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021273#comment-15021273
 ] 

Hudson commented on HDFS-9153:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #8843 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8843/])
HDFS-9153. Pretty-format the output for DFSIO. Contributed by Kai Zheng. 
(wheat9: rev 000e12f6fa114dfa45377df23acf552e66410838)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java


> Pretty-format the output for DFSIO
> --
>
> Key: HDFS-9153
> URL: https://issues.apache.org/jira/browse/HDFS-9153
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 2.8.0
>
> Attachments: HDFS-9153-v1.patch
>
>
> Referring to the following DFSIO output: I was surprised the test throughput was 
> only {{17}} MB/s, which doesn't make sense for a real cluster. Maybe it's used 
> for another purpose? For users, it may make more sense to report the throughput 
> as 1610 MB/s (1228800/763), calculated as *Total MBytes processed / Test exec time*.
> {noformat}
> 15/09/28 11:42:23 INFO fs.TestDFSIO: - TestDFSIO - : write
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Date & time: Mon Sep 28 
> 11:42:23 CST 2015
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Number of files: 100
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Total MBytes processed: 1228800.0
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  Throughput mb/sec: 
> 17.457387239456878
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Average IO rate mb/sec: 17.57563018798828
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  IO rate std deviation: 
> 1.7076328985378455
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Test exec time sec: 762.697
> 15/09/28 11:42:23 INFO fs.TestDFSIO: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9443) Disabling HDFS client socket cache causes logging message printed to console for CLI commands.

2015-11-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021272#comment-15021272
 ] 

Hudson commented on HDFS-9443:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2636 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2636/])
HDFS-9443. Disabling HDFS client socket cache causes logging message (wheat9: 
rev 6039059c37626d3d1d231986440623a593e2726b)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/PeerCache.java


> Disabling HDFS client socket cache causes logging message printed to console 
> for CLI commands.
> --
>
> Key: HDFS-9443
> URL: https://issues.apache.org/jira/browse/HDFS-9443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-9443.001.patch
>
>
> The HDFS client's socket cache can be disabled by setting 
> {{dfs.client.socketcache.capacity}} to {{0}}.  When this is done, the 
> {{PeerCache}} class logs an info-level message stating that the cache is 
> disabled.  This message is getting printed to the console for CLI commands, 
> which disrupts CLI output.  This issue proposes to downgrade to debug-level 
> logging for this message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

