[jira] [Commented] (HDFS-6080) rtmax and wtmax for NFS-hdfs-gateway should be configurable

2014-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925151#comment-13925151
 ] 

Hadoop QA commented on HDFS-6080:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12633583/HDFS-6080.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-nfs hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-nfs:

  org.apache.hadoop.fs.TestHdfsNativeCodeLoader
  org.apache.hadoop.hdfs.TestFileAppend4

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6360//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6360//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs-nfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6360//console

This message is automatically generated.

 rtmax and wtmax for NFS-hdfs-gateway should be configurable
 ---

 Key: HDFS-6080
 URL: https://issues.apache.org/jira/browse/HDFS-6080
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Abin Shahab
Assignee: Abin Shahab
 Fix For: 2.2.0, 2.3.0

 Attachments: HDFS-6080.patch


 Right now rtmax and wtmax are hardcoded in RpcProgramNFS3. These dictate the 
 maximum read and write transfer sizes of the server, and therefore affect read 
 and write performance.
 We ran performance tests with 1 MB, 100 MB, and 1 GB files. We noticed a 
 significant performance decline as the file size increased, compared to FUSE. 
 We traced the issue to the hardcoded rtmax size (64 KB). 
 When we increased rtmax to 1 MB, we got a 10x improvement in performance.
 NFS reads:
 +---------------+------------+---------------+---------------+---------------+----------------+----------------------+
 | File          | Size       | Run 1         | Run 2         | Run 3         | Average        | Std. Dev.            |
 | testFile100Mb | 104857600  | 23.131158137  | 19.24552955   | 19.793332866  | 20.72334018435 | 1.7172094782219731   |
 | testFile1Gb   | 1073741824 | 219.108776636 | 201.064032255 | 217.433909843 | 212.5355729113 | 8.14037175506561     |
 | testFile1Mb   | 1048576    | 0.330546906   | 0.256391808   | 0.28730168    | 0.291413464667 | 0.030412987573361663 |
 +---------------+------------+---------------+---------------+---------------+----------------+----------------------+
 Fuse reads:
 +---------------+------------+--------------+--------------+--------------+----------------+-----------------------+
 | File          | Size       | Run 1        | Run 2        | Run 3        | Average        | Std. Dev.             |
 | testFile100Mb | 104857600  | 2.394459443  | 2.695265191  | 2.50046517   | 2.530063267997 | 0.12457410127142007   |
 | testFile1Gb   | 1073741824 | 25.03324924  | 24.155102554 | 24.901525525 | 24.69662577297 | 0.386672412437576     |
 | testFile1Mb   | 1048576    | 0.271615094  | 0.270835986  | 0.271796438  | 0.271415839333 | 0.0004166483951065848 |
 +---------------+------------+--------------+--------------+--------------+----------------+-----------------------+
 (NFS read after rtmax = 1MB)
 +---------------+------------+--------------+--------------+--------------+-----------------+----------------------+
 | File          | Size       | Run 1        | Run 2        | Run 3        | Average         | Std. Dev.            |
 | testFile100Mb | 104857600  | 3.655261869  | 3.438676067  | 3.557464787  | 3.550467574336  | 0.0885591069882058   |
 | testFile1Gb   | 1073741824 | 34.663612417 | 37.32089122  | 37.997718857 | 36.66074083135  | 1.4389615098060426   |
 | testFile1Mb   | 1048576    | 0.115602858  | 0.106826253  | 0.125229976  | 0.1158863623334 | 0.007515962395481867 |
 +---------------+------------+--------------+--------------+--------------+-----------------+----------------------+

[jira] [Commented] (HDFS-3405) Checkpointing should use HTTP POST or PUT instead of GET-GET to send merged fsimages

2014-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925177#comment-13925177
 ] 

Hudson commented on HDFS-3405:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #504 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/504/])
HDFS-3405. Checkpointing should use HTTP POST or PUT instead of GET-GET to send 
merged fsimages. Contributed by Vinayakumar B. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1575611)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/GetJournalEditServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Checkpointer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/StandbyCheckpointer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestTransferFsImage.java
Revert HDFS-3405 for recommit with correct renamed files (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1575610)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/GetJournalEditServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Checkpointer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/StandbyCheckpointer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestTransferFsImage.java
Remove extra file from HDFS-3405. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1575609)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java


 Checkpointing should use HTTP POST or PUT instead of GET-GET to send merged 
 fsimages
 

 Key: HDFS-3405
 URL: https://issues.apache.org/jira/browse/HDFS-3405
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 1.0.0, 3.0.0, 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Vinayakumar B
 Fix For: 3.0.0

 Attachments: HDFS-3405.patch, HDFS-3405.patch, HDFS-3405.patch, 
 HDFS-3405.patch, HDFS-3405.patch, HDFS-3405.patch, 

[jira] [Commented] (HDFS-6078) TestIncrementalBlockReports is flaky

2014-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925178#comment-13925178
 ] 

Hudson commented on HDFS-6078:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #504 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/504/])
HDFS-6078. TestIncrementalBlockReports is flaky. (Arpit Agarwal) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1575559)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBlockReports.java


 TestIncrementalBlockReports is flaky
 

 Key: HDFS-6078
 URL: https://issues.apache.org/jira/browse/HDFS-6078
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.4.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
Priority: Minor
 Fix For: 3.0.0, 2.4.0

 Attachments: HDFS-6078.01.patch


 {{TestIncrementalBlockReports#testReplaceReceivedBlock}} can fail if a 
 report is generated between the two calls to {{injectBlockReceived()}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-3405) Checkpointing should use HTTP POST or PUT instead of GET-GET to send merged fsimages

2014-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925197#comment-13925197
 ] 

Hudson commented on HDFS-3405:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1696 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1696/])
HDFS-3405. Checkpointing should use HTTP POST or PUT instead of GET-GET to send 
merged fsimages. Contributed by Vinayakumar B. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1575611)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/GetJournalEditServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Checkpointer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/StandbyCheckpointer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestTransferFsImage.java
Revert HDFS-3405 for recommit with correct renamed files (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1575610)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/GetJournalEditServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Checkpointer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/StandbyCheckpointer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestTransferFsImage.java
Remove extra file from HDFS-3405. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1575609)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java


 Checkpointing should use HTTP POST or PUT instead of GET-GET to send merged 
 fsimages
 

 Key: HDFS-3405
 URL: https://issues.apache.org/jira/browse/HDFS-3405
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 1.0.0, 3.0.0, 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Vinayakumar B
 Fix For: 3.0.0

 Attachments: HDFS-3405.patch, HDFS-3405.patch, HDFS-3405.patch, 
 HDFS-3405.patch, HDFS-3405.patch, HDFS-3405.patch, 

[jira] [Commented] (HDFS-6078) TestIncrementalBlockReports is flaky

2014-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925198#comment-13925198
 ] 

Hudson commented on HDFS-6078:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1696 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1696/])
HDFS-6078. TestIncrementalBlockReports is flaky. (Arpit Agarwal) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1575559)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBlockReports.java


 TestIncrementalBlockReports is flaky
 

 Key: HDFS-6078
 URL: https://issues.apache.org/jira/browse/HDFS-6078
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.4.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
Priority: Minor
 Fix For: 3.0.0, 2.4.0

 Attachments: HDFS-6078.01.patch


 {{TestIncrementalBlockReports#testReplaceReceivedBlock}} can fail if a 
 report is generated between the two calls to {{injectBlockReceived()}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-3405) Checkpointing should use HTTP POST or PUT instead of GET-GET to send merged fsimages

2014-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925209#comment-13925209
 ] 

Hudson commented on HDFS-3405:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1721 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1721/])
HDFS-3405. Checkpointing should use HTTP POST or PUT instead of GET-GET to send 
merged fsimages. Contributed by Vinayakumar B. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1575611)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/GetJournalEditServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Checkpointer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/StandbyCheckpointer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestTransferFsImage.java
Revert HDFS-3405 for recommit with correct renamed files (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1575610)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/GetJournalEditServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Checkpointer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/StandbyCheckpointer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestTransferFsImage.java
Remove extra file from HDFS-3405. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1575609)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java


 Checkpointing should use HTTP POST or PUT instead of GET-GET to send merged 
 fsimages
 

 Key: HDFS-3405
 URL: https://issues.apache.org/jira/browse/HDFS-3405
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 1.0.0, 3.0.0, 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Vinayakumar B
 Fix For: 3.0.0

 Attachments: HDFS-3405.patch, HDFS-3405.patch, HDFS-3405.patch, 
 HDFS-3405.patch, HDFS-3405.patch, 

[jira] [Commented] (HDFS-6078) TestIncrementalBlockReports is flaky

2014-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925210#comment-13925210
 ] 

Hudson commented on HDFS-6078:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1721 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1721/])
HDFS-6078. TestIncrementalBlockReports is flaky. (Arpit Agarwal) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1575559)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBlockReports.java


 TestIncrementalBlockReports is flaky
 

 Key: HDFS-6078
 URL: https://issues.apache.org/jira/browse/HDFS-6078
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.4.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
Priority: Minor
 Fix For: 3.0.0, 2.4.0

 Attachments: HDFS-6078.01.patch


 {{TestIncrementalBlockReports#testReplaceReceivedBlock}} can fail if a 
 report is generated between the two calls to {{injectBlockReceived()}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6081) TestRetryCacheWithHA#testCreateSymlink occasionally fails in trunk

2014-03-09 Thread Ted Yu (JIRA)
Ted Yu created HDFS-6081:


 Summary: TestRetryCacheWithHA#testCreateSymlink occasionally fails 
in trunk
 Key: HDFS-6081
 URL: https://issues.apache.org/jira/browse/HDFS-6081
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Ted Yu


From 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/1696/testReport/junit/org.apache.hadoop.hdfs.server.namenode.ha/TestRetryCacheWithHA/testCreateSymlink/
 :
{code}
2014-03-09 13:18:47,515 WARN  security.UserGroupInformation 
(UserGroupInformation.java:doAs(1600)) - PriviledgedActionException as:jenkins 
(auth:SIMPLE) cause:java.io.IOException: failed to create link /testlink either 
because the filename is invalid or the file exists
2014-03-09 13:18:47,515 INFO  ipc.Server (Server.java:run(2093)) - IPC Server 
handler 0 on 39303, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.createSymlink from 
127.0.0.1:32909 Call#682 Retry#1: error: java.io.IOException: failed to create 
link /testlink either because the filename is invalid or the file exists
java.io.IOException: failed to create link /testlink either because the 
filename is invalid or the file exists
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.createSymlinkInt(FSNamesystem.java:2053)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.createSymlink(FSNamesystem.java:2023)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.createSymlink(NameNodeRpcServer.java:965)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.createSymlink(ClientNamenodeProtocolServerSideTranslatorPB.java:844)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2071)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2067)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1597)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2065)
2014-03-09 13:18:47,522 INFO  blockmanagement.BlockManager 
(BlockManager.java:processMisReplicatesAsync(2475)) - Total number of blocks
= 1
2014-03-09 13:18:47,523 INFO  blockmanagement.BlockManager 
(BlockManager.java:processMisReplicatesAsync(2476)) - Number of invalid blocks  
= 0
2014-03-09 13:18:47,523 INFO  blockmanagement.BlockManager 
(BlockManager.java:processMisReplicatesAsync(2477)) - Number of 
under-replicated blocks = 0
2014-03-09 13:18:47,523 INFO  ha.TestRetryCacheWithHA 
(TestRetryCacheWithHA.java:run(1162)) - Got Exception while calling 
createSymlink
org.apache.hadoop.ipc.RemoteException(java.io.IOException): failed to create 
link /testlink either because the filename is invalid or the file exists
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.createSymlinkInt(FSNamesystem.java:2053)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.createSymlink(FSNamesystem.java:2023)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.createSymlink(NameNodeRpcServer.java:965)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.createSymlink(ClientNamenodeProtocolServerSideTranslatorPB.java:844)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2071)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2067)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1597)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2065)

at org.apache.hadoop.ipc.Client.call(Client.java:1409)
at org.apache.hadoop.ipc.Client.call(Client.java:1362)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at $Proxy17.createSymlink(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.createSymlink(ClientNamenodeProtocolTranslatorPB.java:794)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 

[jira] [Updated] (HDFS-6055) Change default configuration to limit file name length in HDFS

2014-03-09 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-6055:


Attachment: HDFS-6055.2.patch

That's a good idea, Nicholas.  Here is v2 of the patch to update the 
documentation.

 Change default configuration to limit file name length in HDFS
 --

 Key: HDFS-6055
 URL: https://issues.apache.org/jira/browse/HDFS-6055
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.4.0
Reporter: Suresh Srinivas
Assignee: Chris Nauroth
 Attachments: HDFS-6055.1.patch, HDFS-6055.2.patch


 Currently, the configuration dfs.namenode.fs-limits.max-component-length is set 
 to 0, so HDFS file names have no length limit. However, we see more users run 
 into issues where they copy files from HDFS to another file system and the copy 
 fails because a file name is too long.
 I propose changing the default value of 
 dfs.namenode.fs-limits.max-component-length to a reasonable value. This 
 will be an incompatible change. However, users who need long file names can 
 override this configuration to turn off the length limit.
 What do folks think?
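
For illustration, a minimal sketch (the value shown is arbitrary) of overriding the key quoted above through the Configuration API; in a real deployment the property belongs in the NameNode's hdfs-site.xml:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class MaxComponentLengthExample {
  public static void main(String[] args) {
    Configuration conf = new HdfsConfiguration();
    // 0 keeps the current unlimited behavior; a positive value caps the length
    // of each path component. The key is quoted from this issue's description.
    conf.setInt("dfs.namenode.fs-limits.max-component-length", 0);
    System.out.println(conf.getInt("dfs.namenode.fs-limits.max-component-length", -1));
  }
}
{code}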



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6080) rtmax and wtmax for NFS-hdfs-gateway should be configurable

2014-03-09 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-6080:
-

Fix Version/s: (was: 2.3.0)
   (was: 2.2.0)

 rtmax and wtmax for NFS-hdfs-gateway should be configurable
 ---

 Key: HDFS-6080
 URL: https://issues.apache.org/jira/browse/HDFS-6080
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Abin Shahab
Assignee: Abin Shahab
 Attachments: HDFS-6080.patch


 Right now rtmax and wtmax are hardcoded in RpcProgramNFS3. These dictate the 
 maximum read and write transfer sizes of the server, and therefore affect read 
 and write performance.
 We ran performance tests with 1 MB, 100 MB, and 1 GB files. We noticed a 
 significant performance decline as the file size increased, compared to FUSE. 
 We traced the issue to the hardcoded rtmax size (64 KB). 
 When we increased rtmax to 1 MB, we got a 10x improvement in performance.
 NFS reads:
 +---------------+------------+---------------+---------------+---------------+----------------+----------------------+
 | File          | Size       | Run 1         | Run 2         | Run 3         | Average        | Std. Dev.            |
 | testFile100Mb | 104857600  | 23.131158137  | 19.24552955   | 19.793332866  | 20.72334018435 | 1.7172094782219731   |
 | testFile1Gb   | 1073741824 | 219.108776636 | 201.064032255 | 217.433909843 | 212.5355729113 | 8.14037175506561     |
 | testFile1Mb   | 1048576    | 0.330546906   | 0.256391808   | 0.28730168    | 0.291413464667 | 0.030412987573361663 |
 +---------------+------------+---------------+---------------+---------------+----------------+----------------------+
 Fuse reads:
 +---------------+------------+--------------+--------------+--------------+----------------+-----------------------+
 | File          | Size       | Run 1        | Run 2        | Run 3        | Average        | Std. Dev.             |
 | testFile100Mb | 104857600  | 2.394459443  | 2.695265191  | 2.50046517   | 2.530063267997 | 0.12457410127142007   |
 | testFile1Gb   | 1073741824 | 25.03324924  | 24.155102554 | 24.901525525 | 24.69662577297 | 0.386672412437576     |
 | testFile1Mb   | 1048576    | 0.271615094  | 0.270835986  | 0.271796438  | 0.271415839333 | 0.0004166483951065848 |
 +---------------+------------+--------------+--------------+--------------+----------------+-----------------------+
 (NFS read after rtmax = 1MB)
 +---------------+------------+--------------+--------------+--------------+-----------------+----------------------+
 | File          | Size       | Run 1        | Run 2        | Run 3        | Average         | Std. Dev.            |
 | testFile100Mb | 104857600  | 3.655261869  | 3.438676067  | 3.557464787  | 3.550467574336  | 0.0885591069882058   |
 | testFile1Gb   | 1073741824 | 34.663612417 | 37.32089122  | 37.997718857 | 36.66074083135  | 1.4389615098060426   |
 | testFile1Mb   | 1048576    | 0.115602858  | 0.106826253  | 0.125229976  | 0.1158863623334 | 0.007515962395481867 |
 +---------------+------------+--------------+--------------+--------------+-----------------+----------------------+
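
For illustration, a minimal sketch of making these limits configurable; the property names below are hypothetical placeholders, not necessarily the keys defined in the attached patch:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class NfsTransferSizeExample {
  public static void main(String[] args) {
    Configuration conf = new HdfsConfiguration();
    // Hypothetical keys for illustration only; the patch defines the actual names.
    conf.setInt("dfs.nfs.rtmax", 1024 * 1024);  // max NFS READ transfer size, 1 MB
    conf.setInt("dfs.nfs.wtmax", 1024 * 1024);  // max NFS WRITE transfer size, 1 MB
    // RpcProgramNFS3 would read these at startup instead of its hardcoded constants.
    System.out.println(conf.getInt("dfs.nfs.rtmax", 64 * 1024));
  }
}
{code}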



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6080) rtmax and wtmax for NFS-hdfs-gateway should be configurable

2014-03-09 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-6080:
-

Issue Type: Improvement  (was: Bug)

 rtmax and wtmax for NFS-hdfs-gateway should be configurable
 ---

 Key: HDFS-6080
 URL: https://issues.apache.org/jira/browse/HDFS-6080
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Abin Shahab
Assignee: Abin Shahab
 Attachments: HDFS-6080.patch


 Right now rtmax and wtmax are hardcoded in RpcProgramNFS3. These dictate the 
 maximum read and write transfer sizes of the server, and therefore affect read 
 and write performance.
 We ran performance tests with 1 MB, 100 MB, and 1 GB files. We noticed a 
 significant performance decline as the file size increased, compared to FUSE. 
 We traced the issue to the hardcoded rtmax size (64 KB). 
 When we increased rtmax to 1 MB, we got a 10x improvement in performance.
 NFS reads:
 +---------------+------------+---------------+---------------+---------------+----------------+----------------------+
 | File          | Size       | Run 1         | Run 2         | Run 3         | Average        | Std. Dev.            |
 | testFile100Mb | 104857600  | 23.131158137  | 19.24552955   | 19.793332866  | 20.72334018435 | 1.7172094782219731   |
 | testFile1Gb   | 1073741824 | 219.108776636 | 201.064032255 | 217.433909843 | 212.5355729113 | 8.14037175506561     |
 | testFile1Mb   | 1048576    | 0.330546906   | 0.256391808   | 0.28730168    | 0.291413464667 | 0.030412987573361663 |
 +---------------+------------+---------------+---------------+---------------+----------------+----------------------+
 Fuse reads:
 +---------------+------------+--------------+--------------+--------------+----------------+-----------------------+
 | File          | Size       | Run 1        | Run 2        | Run 3        | Average        | Std. Dev.             |
 | testFile100Mb | 104857600  | 2.394459443  | 2.695265191  | 2.50046517   | 2.530063267997 | 0.12457410127142007   |
 | testFile1Gb   | 1073741824 | 25.03324924  | 24.155102554 | 24.901525525 | 24.69662577297 | 0.386672412437576     |
 | testFile1Mb   | 1048576    | 0.271615094  | 0.270835986  | 0.271796438  | 0.271415839333 | 0.0004166483951065848 |
 +---------------+------------+--------------+--------------+--------------+----------------+-----------------------+
 (NFS read after rtmax = 1MB)
 +---------------+------------+--------------+--------------+--------------+-----------------+----------------------+
 | File          | Size       | Run 1        | Run 2        | Run 3        | Average         | Std. Dev.            |
 | testFile100Mb | 104857600  | 3.655261869  | 3.438676067  | 3.557464787  | 3.550467574336  | 0.0885591069882058   |
 | testFile1Gb   | 1073741824 | 34.663612417 | 37.32089122  | 37.997718857 | 36.66074083135  | 1.4389615098060426   |
 | testFile1Mb   | 1048576    | 0.115602858  | 0.106826253  | 0.125229976  | 0.1158863623334 | 0.007515962395481867 |
 +---------------+------------+--------------+--------------+--------------+-----------------+----------------------+



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6080) rtmax and wtmax for NFS-hdfs-gateway should be configurable

2014-03-09 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-6080:
-

Component/s: nfs

 rtmax and wtmax for NFS-hdfs-gateway should be configurable
 ---

 Key: HDFS-6080
 URL: https://issues.apache.org/jira/browse/HDFS-6080
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Abin Shahab
Assignee: Abin Shahab
 Attachments: HDFS-6080.patch


 Right now rtmax and wtmax are hardcoded in RpcProgramNFS3. These dictate the 
 maximum read and write transfer sizes of the server, and therefore affect read 
 and write performance.
 We ran performance tests with 1 MB, 100 MB, and 1 GB files. We noticed a 
 significant performance decline as the file size increased, compared to FUSE. 
 We traced the issue to the hardcoded rtmax size (64 KB). 
 When we increased rtmax to 1 MB, we got a 10x improvement in performance.
 NFS reads:
 +---------------+------------+---------------+---------------+---------------+----------------+----------------------+
 | File          | Size       | Run 1         | Run 2         | Run 3         | Average        | Std. Dev.            |
 | testFile100Mb | 104857600  | 23.131158137  | 19.24552955   | 19.793332866  | 20.72334018435 | 1.7172094782219731   |
 | testFile1Gb   | 1073741824 | 219.108776636 | 201.064032255 | 217.433909843 | 212.5355729113 | 8.14037175506561     |
 | testFile1Mb   | 1048576    | 0.330546906   | 0.256391808   | 0.28730168    | 0.291413464667 | 0.030412987573361663 |
 +---------------+------------+---------------+---------------+---------------+----------------+----------------------+
 Fuse reads:
 +---------------+------------+--------------+--------------+--------------+----------------+-----------------------+
 | File          | Size       | Run 1        | Run 2        | Run 3        | Average        | Std. Dev.             |
 | testFile100Mb | 104857600  | 2.394459443  | 2.695265191  | 2.50046517   | 2.530063267997 | 0.12457410127142007   |
 | testFile1Gb   | 1073741824 | 25.03324924  | 24.155102554 | 24.901525525 | 24.69662577297 | 0.386672412437576     |
 | testFile1Mb   | 1048576    | 0.271615094  | 0.270835986  | 0.271796438  | 0.271415839333 | 0.0004166483951065848 |
 +---------------+------------+--------------+--------------+--------------+----------------+-----------------------+
 (NFS read after rtmax = 1MB)
 +---------------+------------+--------------+--------------+--------------+-----------------+----------------------+
 | File          | Size       | Run 1        | Run 2        | Run 3        | Average         | Std. Dev.            |
 | testFile100Mb | 104857600  | 3.655261869  | 3.438676067  | 3.557464787  | 3.550467574336  | 0.0885591069882058   |
 | testFile1Gb   | 1073741824 | 34.663612417 | 37.32089122  | 37.997718857 | 36.66074083135  | 1.4389615098060426   |
 | testFile1Mb   | 1048576    | 0.115602858  | 0.106826253  | 0.125229976  | 0.1158863623334 | 0.007515962395481867 |
 +---------------+------------+--------------+--------------+--------------+-----------------+----------------------+



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6075) Introducing non-replication mode

2014-03-09 Thread Charles Wimmer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925274#comment-13925274
 ] 

Charles Wimmer commented on HDFS-6075:
--

dfs.datanode.balance.bandwidthPerSec may be set dynamically while the cluster 
is running.  We requested this feature for exactly the type of operational 
situation you describe.  You may not be able to eliminate replication, but you 
can minimize the impact by temporarily setting the bandwidth extremely low.

From hdfs dfsadmin -help:
{noformat}
-setBalancerBandwidth <bandwidth>:
Changes the network bandwidth used by each datanode during
HDFS block balancing.

<bandwidth> is the maximum number of bytes per second
that will be used by each datanode. This value overrides
the dfs.balance.bandwidthPerSec parameter.

--- NOTE: The new value is not persistent on the DataNode.---
{noformat}
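
For reference, a minimal sketch (assuming fs.defaultFS points at an HDFS cluster; the 1 MB/s value is purely illustrative) of issuing the same throttle programmatically:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class SetBalancerBandwidthExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    FileSystem fs = FileSystem.get(conf);
    if (fs instanceof DistributedFileSystem) {
      // Equivalent of "hdfs dfsadmin -setBalancerBandwidth 1048576"; as noted above,
      // the new value is not persisted across DataNode restarts.
      ((DistributedFileSystem) fs).setBalancerBandwidth(1024L * 1024L);
    }
  }
}
{code}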

 Introducing non-replication mode
 --

 Key: HDFS-6075
 URL: https://issues.apache.org/jira/browse/HDFS-6075
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Adam Kawa
Priority: Minor

 Afaik, HDFS does not provide an easy way to temporarily disable the 
 replication of missing blocks.
 If you would like to temporarily disable the replication, you would have to
 * set dfs.namenode.replication.interval (_The periodicity in seconds with 
 which the namenode computes replication work for datanodes_, default 3) to 
 something very high. *Disadvantage*: you have to restart the NN
 * go into the safe-mode. *Disadvantage*: all write operations will fail
 We have the situation that we need to replace our top-of-rack switches for 
 each rack. Replacing a switch should take around 30 minutes. Each rack has 
 around 0.6 PB of data. We would like to avoid an expensive replication, since 
 we know that we will put this rack online quickly. To avoid any downtime, or 
 excessive network transfer, we think that temporarily disabling the 
 replication could fit us.
 The default block placement policy puts blocks into two racks, so when one 
 rack temporarily goes offline, we still have access to at least one replica of 
 each block. Of course, if we lose this replica, then we would have to wait 
 until the rack goes back online. This is what the administrator should be 
 aware of.
 This feature could disable the replication
 * globally - for a whole cluster
 * partially - e.g. only for missing blocks that come from a specified set of 
 DataNodes. So a file like we_will_be_back_soon :) could be introduced, 
 similar to include and exclude.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6055) Change default configuration to limit file name length in HDFS

2014-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925282#comment-13925282
 ] 

Hadoop QA commented on HDFS-6055:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12633596/HDFS-6055.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives
  org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6361//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6361//console

This message is automatically generated.

 Change default configuration to limit file name length in HDFS
 --

 Key: HDFS-6055
 URL: https://issues.apache.org/jira/browse/HDFS-6055
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.4.0
Reporter: Suresh Srinivas
Assignee: Chris Nauroth
 Attachments: HDFS-6055.1.patch, HDFS-6055.2.patch


 Currently, the configuration dfs.namenode.fs-limits.max-component-length is set 
 to 0, so HDFS file names have no length limit. However, we see more users run 
 into issues where they copy files from HDFS to another file system and the copy 
 fails because a file name is too long.
 I propose changing the default value of 
 dfs.namenode.fs-limits.max-component-length to a reasonable value. This 
 will be an incompatible change. However, users who need long file names can 
 override this configuration to turn off the length limit.
 What do folks think?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6082) List all NNs with state

2014-03-09 Thread Rafal Wojdyla (JIRA)
Rafal Wojdyla created HDFS-6082:
---

 Summary: List all NNs with state
 Key: HDFS-6082
 URL: https://issues.apache.org/jira/browse/HDFS-6082
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha, namenode
Reporter: Rafal Wojdyla


HAAdmin lets you determine the state of a *given* service. It would be nice to have 
a call to determine the states of all Namenodes (services?), something like:

hdfs haadmin -getServicesState

And the output would be:
hostname | state

This can be implemented at HAAdmin level - for all HA services or at DFSHAAdmin 
level - for Namenode HA only.
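
For illustration, a minimal sketch at the DFSHAAdmin level; the nameservice foobar and NameNode IDs nn1/nn2 are hypothetical, and a real implementation would enumerate them from the configuration rather than hardcoding them:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ha.HAServiceProtocol;
import org.apache.hadoop.ha.HAServiceTarget;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.tools.NNHAServiceTarget;

public class ListNameNodeStates {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    for (String nnId : new String[] { "nn1", "nn2" }) {   // hypothetical NN IDs
      HAServiceTarget target = new NNHAServiceTarget(conf, "foobar", nnId);
      HAServiceProtocol proxy = target.getProxy(conf, 5000);
      // Prints "hostname | state", e.g. "nn-host-1 | active"
      System.out.println(target.getAddress().getHostName() + " | "
          + proxy.getServiceStatus().getState());
    }
  }
}
{code}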



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6075) Introducing non-replication mode

2014-03-09 Thread Adam Kawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925310#comment-13925310
 ] 

Adam Kawa commented on HDFS-6075:
-

I thought that dfs.datanode.balance.bandwidthPerSec is used only when balancing 
blocks, not when replicating missing replicas.
Can you confirm that dfs.datanode.balance.bandwidthPerSec is used also when 
replicating blocks?

 Introducing non-replication mode
 --

 Key: HDFS-6075
 URL: https://issues.apache.org/jira/browse/HDFS-6075
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Adam Kawa
Priority: Minor

 Afaik, HDFS does not provide an easy way to temporarily disable the 
 replication of missing blocks.
 If you would like to temporarily disable the replication, you would have to
 * set dfs.namenode.replication.interval (_The periodicity in seconds with 
 which the namenode computes replication work for datanodes_, default 3) to 
 something very high. *Disadvantage*: you have to restart the NN
 * go into the safe-mode. *Disadvantage*: all write operations will fail
 We have the situation that we need to replace our top-of-rack switches for 
 each rack. Replacing a switch should take around 30 minutes. Each rack has 
 around 0.6 PB of data. We would like to avoid an expensive replication, since 
 we know that we will put this rack online quickly. To avoid any downtime, or 
 excessive network transfer, we think that temporarily disabling the 
 replication could fit us.
 The default block placement policy puts blocks into two racks, so when one 
 rack temporarily goes offline, we still have access to at least one replica of 
 each block. Of course, if we lose this replica, then we would have to wait 
 until the rack goes back online. This is what the administrator should be 
 aware of.
 This feature could disable the replication
 * globally - for a whole cluster
 * partially - e.g. only for missing blocks that come from a specified set of 
 DataNodes. So a file like we_will_be_back_soon :) could be introduced, 
 similar to include and exclude.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6055) Change default configuration to limit file name length in HDFS

2014-03-09 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925340#comment-13925340
 ] 

Tsz Wo Nicholas Sze commented on HDFS-6055:
---

+1 the new patch looks good.  Thanks, Chris.

 Change default configuration to limit file name length in HDFS
 --

 Key: HDFS-6055
 URL: https://issues.apache.org/jira/browse/HDFS-6055
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.4.0
Reporter: Suresh Srinivas
Assignee: Chris Nauroth
 Attachments: HDFS-6055.1.patch, HDFS-6055.2.patch


 Currently, the configuration dfs.namenode.fs-limits.max-component-length is set 
 to 0, so HDFS file names have no length limit. However, we see more users run 
 into issues where they copy files from HDFS to another file system and the copy 
 fails because a file name is too long.
 I propose changing the default value of 
 dfs.namenode.fs-limits.max-component-length to a reasonable value. This 
 will be an incompatible change. However, users who need long file names can 
 override this configuration to turn off the length limit.
 What do folks think?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6082) List all NNs with state

2014-03-09 Thread Rafal Wojdyla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rafal Wojdyla updated HDFS-6082:


Description: 
HAAdmin lets you determine the state of a *given* service. It would be nice to have 
a call to determine the states of all Namenodes (services?), something like:

hdfs haadmin -ns foobar -getServicesState

And the output would be:
hostname | state

This can be implemented at HAAdmin level - for all HA services or at DFSHAAdmin 
level - for Namenode HA only.

  was:
HAAdmin lets you determine the state of a *given* service. It would be nice to have 
a call to determine the states of all Namenodes (services?), something like:

hdfs haadmin -getServicesState

And the output would be:
hostname | state

This can be implemented at HAAdmin level - for all HA services or at DFSHAAdmin 
level - for Namenode HA only.


 List all NNs with state
 ---

 Key: HDFS-6082
 URL: https://issues.apache.org/jira/browse/HDFS-6082
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha, namenode
Reporter: Rafal Wojdyla

 HAAdmin lets you determine the state of a *given* service. It would be nice to 
 have a call to determine the states of all Namenodes (services?), something like:
 hdfs haadmin -ns foobar -getServicesState
 And the output would be:
 hostname | state
 This can be implemented at HAAdmin level - for all HA services or at 
 DFSHAAdmin level - for Namenode HA only.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6071) BlockReaderLocal doesn't return -1 on EOF when doing a zero-length read on a short file

2014-03-09 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925411#comment-13925411
 ] 

Colin Patrick McCabe commented on HDFS-6071:


The test failure is an unrelated flake.  It passes for me locally.  Will commit 
shortly.

 BlockReaderLocal doesn't return -1 on EOF when doing a zero-length read on a 
 short file
 ---

 Key: HDFS-6071
 URL: https://issues.apache.org/jira/browse/HDFS-6071
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-6071.001.patch, HDFS-6071.002.patch


 BlockReaderLocal doesn't return -1 on EOF when doing a zero-length read on a 
 short file.  Specifically, if the file is shorter than the readahead buffer, 
 or if the position is nearer to the end than the length of the readahead 
 buffer, this may happen.  This is mainly a concern because libhdfs relies on 
 this to determine whether it should use direct reads.
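
For illustration, a minimal sketch of the contract being fixed (the path is hypothetical, and short-circuit local reads are assumed to be enabled so that BlockReaderLocal is exercised): at EOF, even a zero-length read should return -1, which is what libhdfs uses to probe for end of file.

{code}
import java.nio.ByteBuffer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ZeroLengthReadAtEof {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path("/tmp/shortfile");        // hypothetical small file
    FSDataInputStream in = fs.open(p);
    in.seek(fs.getFileStatus(p).getLen());      // position the stream at end of file
    int n = in.read(ByteBuffer.allocate(0));    // zero-length read
    System.out.println(n);                      // expected: -1, not 0
    in.close();
  }
}
{code}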



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6071) BlockReaderLocal doesn't return -1 on EOF when doing a zero-length read on a short file

2014-03-09 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-6071:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 BlockReaderLocal doesn't return -1 on EOF when doing a zero-length read on a 
 short file
 ---

 Key: HDFS-6071
 URL: https://issues.apache.org/jira/browse/HDFS-6071
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-6071.001.patch, HDFS-6071.002.patch


 BlockReaderLocal doesn't return -1 on EOF when doing a zero-length read on a 
 short file.  Specifically, if the file is shorter than the readahead buffer, 
 or if the position is nearer to the end than the length of the readahead 
 buffer, this may happen.  This is mainly a concern because libhdfs relies on 
 this to determine whether it should use direct reads.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6071) BlockReaderLocal doesn't return -1 on EOF when doing a zero-length read on a short file

2014-03-09 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-6071:
---

Fix Version/s: 2.4.0

 BlockReaderLocal doesn't return -1 on EOF when doing a zero-length read on a 
 short file
 ---

 Key: HDFS-6071
 URL: https://issues.apache.org/jira/browse/HDFS-6071
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.4.0

 Attachments: HDFS-6071.001.patch, HDFS-6071.002.patch


 BlockReaderLocal doesn't return -1 on EOF when doing a zero-length read on a 
 short file.  Specifically, if the file is shorter than the readahead buffer, 
 or if the position is nearer to the end than the length of the readahead 
 buffer, this may happen.  This is mainly a concern because libhdfs relies on 
 this to determine whether it should use direct reads.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6071) BlockReaderLocal doesn't return -1 on EOF when doing a zero-length read on a short file

2014-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925415#comment-13925415
 ] 

Hudson commented on HDFS-6071:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5294 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5294/])
HDFS-6071. BlockReaderLocal does not return -1 on EOF when doing a zero-length 
read on a short file. (cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1575797)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRead.java


 BlockReaderLocal doesn't return -1 on EOF when doing a zero-length read on a 
 short file
 ---

 Key: HDFS-6071
 URL: https://issues.apache.org/jira/browse/HDFS-6071
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.4.0

 Attachments: HDFS-6071.001.patch, HDFS-6071.002.patch


 BlockReaderLocal doesn't return -1 on EOF when doing a zero-length read on a 
 short file.  Specifically, if the file is shorter than the readahead buffer, 
 or if the position is nearer to the end than the length of the readahead 
 buffer, this may happen.  This is mainly a concern because libhdfs relies on 
 this to determine whether it should use direct reads.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6010) Make balancer able to balance data among specified servers

2014-03-09 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925426#comment-13925426
 ] 

Yu Li commented on HDFS-6010:
-

Hi [~szetszwo],

What do you think about the use case? Does it make sense to you? If so, is it OK 
for me to submit the patch for Hadoop QA to test? Thanks. :-)

 Make balancer able to balance data among specified servers
 --

 Key: HDFS-6010
 URL: https://issues.apache.org/jira/browse/HDFS-6010
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer
Affects Versions: 2.3.0
Reporter: Yu Li
Assignee: Yu Li
Priority: Minor
 Attachments: HDFS-6010-trunk.patch


 Currently, the balancer tool balances data among all datanodes. However, in 
 some particular cases, we need to balance data only among a specified set of 
 nodes rather than across the whole cluster.
 In this JIRA, a new -servers option would be introduced to implement this.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6083) TestQuorumJournalManager#testChangeWritersLogsOutOfSync2 occasionally fails

2014-03-09 Thread Ted Yu (JIRA)
Ted Yu created HDFS-6083:


 Summary: TestQuorumJournalManager#testChangeWritersLogsOutOfSync2 
occasionally fails
 Key: HDFS-6083
 URL: https://issues.apache.org/jira/browse/HDFS-6083
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor


From 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/1695/testReport/junit/org.apache.hadoop.hdfs.qjournal.client/TestQuorumJournalManager/testChangeWritersLogsOutOfSync2/
 :
{code}
Leaked thread: IPC Client (26533782) connection to /127.0.0.1:57898 from 
jenkins Id=590 RUNNABLE
 at java.lang.System.arraycopy(Native Method)
 at java.lang.ThreadGroup.remove(ThreadGroup.java:885)
 at java.lang.Thread.exit(Thread.java:672)
{code}
The following check should give the threads more time to shut down:
{code}
// Should not leak clients between tests -- this can cause flaky tests.
// (See HDFS-4643)
GenericTestUtils.assertNoThreadsMatching(".*IPC Client.*");
{code}
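
For illustration, a minimal sketch (not a committed fix; the timeout and polling interval are arbitrary) of how the test could wait for lingering IPC client threads before asserting, written as lines inside a JUnit test method declared with throws Exception:

{code}
// Poll until no "IPC Client" thread remains, or a deadline passes, then assert.
long deadline = System.currentTimeMillis() + 10000;
boolean leaked;
do {
  leaked = false;
  for (Thread t : Thread.getAllStackTraces().keySet()) {
    if (t.getName().matches(".*IPC Client.*")) {
      leaked = true;
      break;
    }
  }
  if (leaked) {
    Thread.sleep(100);
  }
} while (leaked && System.currentTimeMillis() < deadline);
GenericTestUtils.assertNoThreadsMatching(".*IPC Client.*");
{code}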



--
This message was sent by Atlassian JIRA
(v6.2#6252)