[jira] [Comment Edited] (HDFS-7423) various typos and message formatting fixes in nfs daemon and doc

2015-01-28 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295120#comment-14295120
 ] 

Yi Liu edited comment on HDFS-7423 at 1/28/15 1:11 PM:
---

Hi Charles, could you rebase the patch for trunk, and make a patch for 
branch-2? I found some conflicts while committing.


was (Author: hitliuyi):
Hi Charles, could you rebase the patch for trunk, and or make a patch for 
branch-2? I find some conflicts when committing.

 various typos and message formatting fixes in nfs daemon and doc
 

 Key: HDFS-7423
 URL: https://issues.apache.org/jira/browse/HDFS-7423
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial
 Attachments: HDFS-7423.001.patch, HDFS-7423.002.patch, 
 HDFS-7423.003.patch


 These are accumulated fixes for log messages, formatting, typos, etc. in the 
 nfs3 daemon that I came across while working on a customer issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7423) various typos and message formatting fixes in nfs daemon and doc

2015-01-28 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-7423:
---
Attachment: HDFS-7423.004.patch

Hi [~hitliuyi],

The .004 is rebased for trunk. Let's wait for the Jenkins run. Once that 
completes, I'll upload the branch-2 rebase diffs.


 various typos and message formatting fixes in nfs daemon and doc
 

 Key: HDFS-7423
 URL: https://issues.apache.org/jira/browse/HDFS-7423
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial
 Attachments: HDFS-7423.001.patch, HDFS-7423.002.patch, 
 HDFS-7423.003.patch, HDFS-7423.004.patch


 These are accumulated fixes for log messages, formatting, typos, etc. in the 
 nfs3 daemon that I came across while working on a customer issue.





[jira] [Commented] (HDFS-7677) DistributedFileSystem#truncate should resolve symlinks

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295182#comment-14295182
 ] 

Hudson commented on HDFS-7677:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #84 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/84/])
HDFS-7677. DistributedFileSystem#truncate should resolve symlinks. (yliu) 
(yliu: rev 9ca565e9704d236ce839c0138d82d54453d793fb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 DistributedFileSystem#truncate should resolve symlinks
 --

 Key: HDFS-7677
 URL: https://issues.apache.org/jira/browse/HDFS-7677
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7677.001.patch, HDFS-7677.002.patch


 We should resolve the symlinks in DistributedFileSystem#truncate as we do for 
 {{create}}, {{open}}, {{append}}, and so on; I don't see any reason not to 
 support it.
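A minimal sketch of the resolve-and-retry pattern being proposed, using a toy in-memory symlink table instead of the real NameNode and Hadoop's FileSystemLinkResolver machinery; all names below are hypothetical illustrations, not the actual patch:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the pattern create/open/append already follow: attempt the
// operation on the given path; when the path turns out to be a symlink,
// resolve it to its target and retry there.
public class SymlinkResolveSketch {
    // toy symlink table: link path -> target path (stands in for NameNode state)
    static final Map<String, String> LINKS = new HashMap<>();

    static String truncateResolvingLinks(String path) {
        // follow links until we reach a concrete path, bounded to avoid cycles
        for (int hops = 0; hops < 32; hops++) {
            String target = LINKS.get(path);
            if (target == null) {
                return path; // concrete path: the real truncate would run here
            }
            path = target;   // the "unresolved link" case: retry on the target
        }
        throw new IllegalStateException("too many symlink hops: possible cycle");
    }

    public static void main(String[] args) {
        LINKS.put("/user/link", "/user/real");
        System.out.println(truncateResolvingLinks("/user/link"));
        System.out.println(truncateResolvingLinks("/user/plain"));
    }
}
```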





[jira] [Commented] (HDFS-3689) Add support for variable length block

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295176#comment-14295176
 ] 

Hudson commented on HDFS-3689:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #84 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/84/])
HDFS-3689. Add support for variable length block. Contributed by Jing Zhao. 
(jing9: rev 2848db814a98b83e7546f65a2751e56fb5b2dbe0)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CreateFlag.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppendRestart.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/InotifyFSEditLogOpTranslator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java


 Add support for variable length block
 -

 Key: HDFS-3689
 URL: https://issues.apache.org/jira/browse/HDFS-3689
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Affects Versions: 3.0.0
 

[jira] [Commented] (HDFS-7675) Remove unused member DFSClient#spanReceiverHost

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295194#comment-14295194
 ] 

Hudson commented on HDFS-7675:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2019 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2019/])
HDFS-7675. Remove unused member DFSClient.spanReceiverHost (cmccabe) (cmccabe: 
rev d12dd47f4516fe125221ae073f1bc88b702b122f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java


 Remove unused member DFSClient#spanReceiverHost
 ---

 Key: HDFS-7675
 URL: https://issues.apache.org/jira/browse/HDFS-7675
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: dfsclient
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Colin Patrick McCabe
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HDFS-7675.001.patch


 {{DFSClient#spanReceiverHost}} is initialised but never used.





[jira] [Commented] (HDFS-7566) Remove obsolete entries from hdfs-default.xml

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295186#comment-14295186
 ] 

Hudson commented on HDFS-7566:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2019 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2019/])
HDFS-7566. Remove obsolete entries from hdfs-default.xml (Ray Chiang via aw) 
(aw: rev 0a05ae1782488597cbf8667866f98f0df341abc0)
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/resources/job_1329348432655_0001_conf.xml
* hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json


 Remove obsolete entries from hdfs-default.xml
 -

 Key: HDFS-7566
 URL: https://issues.apache.org/jira/browse/HDFS-7566
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Ray Chiang
Assignee: Ray Chiang
  Labels: supportability
 Fix For: 2.7.0

 Attachments: HDFS-7566.001.patch


 So far, I've found these five properties which may be obsolete in 
 hdfs-default.xml:
 - dfs.https.enable
 - dfs.namenode.edits.journal-plugin.qjournal
 - dfs.namenode.logging.level
 - dfs.ha.namenodes.EXAMPLENAMESERVICE
   + Should this be kept in the .xml file?
 - dfs.support.append
   + Removed with HDFS-6246
 I'd like to get feedback about the state of any of the above properties.
 This is the HDFS equivalent of MAPREDUCE-6057 and YARN-2460.





[jira] [Commented] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2015-01-28 Thread Ayappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295201#comment-14295201
 ] 

Ayappan commented on HDFS-4681:
---

Can any maintainer commit this patch?

 TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
 using IBM java
 -

 Key: HDFS-4681
 URL: https://issues.apache.org/jira/browse/HDFS-4681
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.5.2
 Environment: PowerPC Big Endian architecture
Reporter: Tian Hong Wang
Assignee: Suresh Srinivas
 Attachments: HDFS-4681-v1.patch, HDFS-4681-v2.patch, HDFS-4681.patch


 TestBlocksWithNotEnoughRacks unit test fails with the following error message:
 
 testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
   Time elapsed: 8997 sec   FAILURE!
 org.junit.ComparisonFailure: Corrupt replica 
 expected:...��^EI�u�[�{���[$�\hF�[�R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�02���:)$�{|�^@�-���|GvW��7g
  �/M��[U!eF�^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^Oc���   
 oKtE�*�^\3u��]Ē:mŭ^^y�^H��_^T�^ZS4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
 ��6b�S�G�^?��m4FW#^@
 D5��}�^Z�^]���mfR^G#T-�N��̋�p���`�~��`�^F;�^C] but 
 was:...��^EI�u�[�{���[$�\hF�[R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�02�:)$�{|�^@�-���|GvW��7g
  �/M�[U!eF�^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^Oc���  
 oKtE�*�^\3u��]Ē:mŭ^^y���^H��_^T�^ZS���4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
��6b�S�G�^?��m4FW#^@
 D5��}�^Z�^]���mfR^G#T-�N�̋�p���`�~��`�^F;�]
 at org.junit.Assert.assertEquals(Assert.java:123)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)





[jira] [Updated] (HDFS-7376) Upgrade jsch lib to jsch-0.1.51 to avoid problems running on java7

2015-01-28 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-7376:
-
Assignee: Tsuyoshi OZAWA
  Status: Patch Available  (was: Open)

 Upgrade jsch lib to jsch-0.1.51 to avoid problems running on java7
 --

 Key: HDFS-7376
 URL: https://issues.apache.org/jira/browse/HDFS-7376
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Reporter: Johannes Zillmann
Assignee: Tsuyoshi OZAWA
 Attachments: HDFS-7376.1.patch


 We had an application sitting on top of Hadoop and ran into problems with jsch 
 once we switched to Java 7. We got this exception:
 {noformat}
  com.jcraft.jsch.JSchException: verify: false
   at com.jcraft.jsch.Session.connect(Session.java:330)
   at com.jcraft.jsch.Session.connect(Session.java:183)
 {noformat}
 Upgrading from jsch-0.1.49 to jsch-0.1.51 fixed the issue for us, but it then 
 conflicted with Hadoop's jsch version (we worked around this by 
 jarjar'ing our jsch version).
 I think jsch was introduced by NameNode HA (HDFS-1623). So you should 
 check whether the ssh part works properly on Java 7, or preventively upgrade 
 the jsch lib to jsch-0.1.51.
 Some references to problems reported:
 - 
 http://sourceforge.net/p/jsch/mailman/jsch-users/thread/loom.20131009t211650-...@post.gmane.org/
 - https://issues.apache.org/bugzilla/show_bug.cgi?id=53437
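The requested upgrade amounts to a one-line version bump in the Maven build. A hypothetical sketch of the dependency entry, assuming the jsch coordinates `com.jcraft:jsch`; the actual Hadoop patch may instead pin the version in a shared property in the top-level pom.xml:

```xml
<!-- Illustrative only: bump jsch to the Java 7-compatible release. -->
<dependency>
  <groupId>com.jcraft</groupId>
  <artifactId>jsch</artifactId>
  <version>0.1.51</version>
</dependency>
```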





[jira] [Updated] (HDFS-7376) Upgrade jsch lib to jsch-0.1.51 to avoid problems running on java7

2015-01-28 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-7376:
-
Attachment: HDFS-7376.1.patch

Attaching first patch.

 Upgrade jsch lib to jsch-0.1.51 to avoid problems running on java7
 --

 Key: HDFS-7376
 URL: https://issues.apache.org/jira/browse/HDFS-7376
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Reporter: Johannes Zillmann
 Attachments: HDFS-7376.1.patch


 We had an application sitting on top of Hadoop and ran into problems with jsch 
 once we switched to Java 7. We got this exception:
 {noformat}
  com.jcraft.jsch.JSchException: verify: false
   at com.jcraft.jsch.Session.connect(Session.java:330)
   at com.jcraft.jsch.Session.connect(Session.java:183)
 {noformat}
 Upgrading from jsch-0.1.49 to jsch-0.1.51 fixed the issue for us, but it then 
 conflicted with Hadoop's jsch version (we worked around this by 
 jarjar'ing our jsch version).
 I think jsch was introduced by NameNode HA (HDFS-1623). So you should 
 check whether the ssh part works properly on Java 7, or preventively upgrade 
 the jsch lib to jsch-0.1.51.
 Some references to problems reported:
 - 
 http://sourceforge.net/p/jsch/mailman/jsch-users/thread/loom.20131009t211650-...@post.gmane.org/
 - https://issues.apache.org/bugzilla/show_bug.cgi?id=53437





[jira] [Commented] (HDFS-3689) Add support for variable length block

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295189#comment-14295189
 ] 

Hudson commented on HDFS-3689:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2019 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2019/])
HDFS-3689. Add support for variable length block. Contributed by Jing Zhao. 
(jing9: rev 2848db814a98b83e7546f65a2751e56fb5b2dbe0)
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppendRestart.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/InotifyFSEditLogOpTranslator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CreateFlag.java


 Add support for variable length block
 -

 Key: HDFS-3689
 URL: https://issues.apache.org/jira/browse/HDFS-3689
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Affects Versions: 3.0.0

[jira] [Commented] (HDFS-7677) DistributedFileSystem#truncate should resolve symlinks

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295195#comment-14295195
 ] 

Hudson commented on HDFS-7677:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2019 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2019/])
HDFS-7677. DistributedFileSystem#truncate should resolve symlinks. (yliu) 
(yliu: rev 9ca565e9704d236ce839c0138d82d54453d793fb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java


 DistributedFileSystem#truncate should resolve symlinks
 --

 Key: HDFS-7677
 URL: https://issues.apache.org/jira/browse/HDFS-7677
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7677.001.patch, HDFS-7677.002.patch


 We should resolve the symlinks in DistributedFileSystem#truncate as we do for 
 {{create}}, {{open}}, {{append}}, and so on; I don't see any reason not to 
 support it.





[jira] [Commented] (HDFS-7683) Combine usages and percent stats in NameNode UI

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295179#comment-14295179
 ] 

Hudson commented on HDFS-7683:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #84 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/84/])
HDFS-7683. Combine usages and percent stats in NameNode UI. Contributed by 
Vinayakumar B. (wheat9: rev 1e2d98a394d98f9f1b6791cbe9cef474c19b8ceb)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Combine usages and percent stats in NameNode UI
 ---

 Key: HDFS-7683
 URL: https://issues.apache.org/jira/browse/HDFS-7683
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor
 Fix For: 2.7.0

 Attachments: 7683-snapshot.jpg, HDFS-7683-001.patch, 
 HDFS-7683-001.patch


 In the NameNode UI, there are separate rows displaying cluster usage: one in 
 bytes, another as a percentage.
 We can combine these two rows and display the percent usage in brackets.





[jira] [Commented] (HDFS-7566) Remove obsolete entries from hdfs-default.xml

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295173#comment-14295173
 ] 

Hudson commented on HDFS-7566:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #84 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/84/])
HDFS-7566. Remove obsolete entries from hdfs-default.xml (Ray Chiang via aw) 
(aw: rev 0a05ae1782488597cbf8667866f98f0df341abc0)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/resources/job_1329348432655_0001_conf.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json


 Remove obsolete entries from hdfs-default.xml
 -

 Key: HDFS-7566
 URL: https://issues.apache.org/jira/browse/HDFS-7566
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Ray Chiang
Assignee: Ray Chiang
  Labels: supportability
 Fix For: 2.7.0

 Attachments: HDFS-7566.001.patch


 So far, I've found these five properties which may be obsolete in 
 hdfs-default.xml:
 - dfs.https.enable
 - dfs.namenode.edits.journal-plugin.qjournal
 - dfs.namenode.logging.level
 - dfs.ha.namenodes.EXAMPLENAMESERVICE
   + Should this be kept in the .xml file?
 - dfs.support.append
   + Removed with HDFS-6246
 I'd like to get feedback about the state of any of the above properties.
 This is the HDFS equivalent of MAPREDUCE-6057 and YARN-2460.





[jira] [Commented] (HDFS-7675) Remove unused member DFSClient#spanReceiverHost

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295181#comment-14295181
 ] 

Hudson commented on HDFS-7675:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #84 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/84/])
HDFS-7675. Remove unused member DFSClient.spanReceiverHost (cmccabe) (cmccabe: 
rev d12dd47f4516fe125221ae073f1bc88b702b122f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java


 Remove unused member DFSClient#spanReceiverHost
 ---

 Key: HDFS-7675
 URL: https://issues.apache.org/jira/browse/HDFS-7675
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: dfsclient
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Colin Patrick McCabe
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HDFS-7675.001.patch


 {{DFSClient#spanReceiverHost}} is initialised but never used.





[jira] [Commented] (HDFS-7683) Combine usages and percent stats in NameNode UI

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295192#comment-14295192
 ] 

Hudson commented on HDFS-7683:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2019 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2019/])
HDFS-7683. Combine usages and percent stats in NameNode UI. Contributed by 
Vinayakumar B. (wheat9: rev 1e2d98a394d98f9f1b6791cbe9cef474c19b8ceb)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html


 Combine usages and percent stats in NameNode UI
 ---

 Key: HDFS-7683
 URL: https://issues.apache.org/jira/browse/HDFS-7683
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor
 Fix For: 2.7.0

 Attachments: 7683-snapshot.jpg, HDFS-7683-001.patch, 
 HDFS-7683-001.patch


 In the NameNode UI, there are separate rows displaying cluster usage: one in 
 bytes, another as a percentage.
 We can combine these two rows and display the percent usage in brackets.





[jira] [Commented] (HDFS-7630) TestConnCache hardcode block size without considering native OS

2015-01-28 Thread sam liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14294898#comment-14294898
 ] 

sam liu commented on HDFS-7630:
---

Arpit,

Not all such hard-coding causes failures, and the patch is mainly about removing 
the hard-coding. But sometimes the hard-coding does cause failures. For 
example, without patch HDFS-7585, the test TestEnhancedByteBufferAccess fails 
on the POWER platform. In the tests, BLOCK_SIZE is usually set to 4096, which 
happens to equal the default page size of the x86 Linux operating system, but 
on POWER Linux the default page size is 65536. Since HDFS runs on top of the 
operating system, it would be better if the unit tests accounted for the 
differences between operating systems.

Thanks!
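A sketch of how a test could derive its block size from the OS page size instead of hardcoding 4096. This is illustration only, not the actual patch: it reaches the page size through sun.misc.Unsafe (an internal JDK API) via reflection, and the class and method names below are hypothetical:

```java
import java.lang.reflect.Field;

// Pick a test block size that is a multiple of the OS page size,
// so the same test passes on x86 (4096-byte pages) and POWER (65536).
public class PageSizeAwareBlockSize {
    static int osPageSize() throws Exception {
        // sun.misc.Unsafe.pageSize(), accessed reflectively for illustration
        Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
        Field f = unsafeClass.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Object unsafe = f.get(null);
        return (int) unsafeClass.getMethod("pageSize").invoke(unsafe);
    }

    // round `desired` up to the nearest multiple of the page size
    static int blockSizeFor(int desired, int pageSize) {
        return ((desired + pageSize - 1) / pageSize) * pageSize;
    }

    public static void main(String[] args) throws Exception {
        int page = osPageSize();
        // by construction, the chosen block size is page-aligned on any platform
        System.out.println(blockSizeFor(4096, page) % page == 0);
        // on a 65536-byte-page system, a desired 4096 rounds up to 65536
        System.out.println(blockSizeFor(4096, 65536));
    }
}
```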

 TestConnCache hardcode block size without considering native OS
 ---

 Key: HDFS-7630
 URL: https://issues.apache.org/jira/browse/HDFS-7630
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: sam liu
Assignee: sam liu
 Attachments: HDFS-7630.001.patch, HDFS-7630.002.patch


 TestConnCache hardcodes the block size with 'BLOCK_SIZE = 4096'; however, 
 this is incorrect on some platforms. For example, on the POWER platform, the 
 correct value is 65536.





[jira] [Commented] (HDFS-3689) Add support for variable length block

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295032#comment-14295032
 ] 

Hudson commented on HDFS-3689:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #87 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/87/])
HDFS-3689. Add support for variable length block. Contributed by Jing Zhao. 
(jing9: rev 2848db814a98b83e7546f65a2751e56fb5b2dbe0)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CreateFlag.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppendRestart.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/InotifyFSEditLogOpTranslator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java


 Add support for variable length block
 -

 Key: HDFS-3689
 URL: https://issues.apache.org/jira/browse/HDFS-3689
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Affects Versions: 3.0.0
 

[jira] [Commented] (HDFS-7675) Remove unused member DFSClient#spanReceiverHost

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14295042#comment-14295042
 ] 

Hudson commented on HDFS-7675:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #87 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/87/])
HDFS-7675. Remove unused member DFSClient.spanReceiverHost (cmccabe) (cmccabe: 
rev d12dd47f4516fe125221ae073f1bc88b702b122f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove unused member DFSClient#spanReceiverHost
 ---

 Key: HDFS-7675
 URL: https://issues.apache.org/jira/browse/HDFS-7675
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: dfsclient
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Colin Patrick McCabe
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HDFS-7675.001.patch


 {{DFSClient#spanReceiverHost}} is initialised but never used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-49) MiniDFSCluster.stopDataNode will always shut down a node in the cluster if a matching name is not found

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-49?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14295036#comment-14295036
 ] 

Hudson commented on HDFS-49:


FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #87 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/87/])
HDFS-49. MiniDFSCluster.stopDataNode will always shut down a node in the 
cluster if a matching name is not found. (stevel) (stevel: rev 
0da53a37ec46b887f441df98c6986b31fa7671a2)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java


 MiniDFSCluster.stopDataNode will always shut down a node in the cluster if a 
 matching name is not found
 ---

 Key: HDFS-49
 URL: https://issues.apache.org/jira/browse/HDFS-49
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.20.204.0, 0.20.205.0, 1.1.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor
  Labels: codereview, newbie
 Fix For: 2.7.0

 Attachments: HDFS-49-002.patch, hdfs-49.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 The stopDataNode method will shut down the last node in the list of nodes if 
 one matching a specific name is not found.
 This is possibly not what was intended. It would be better to return false or 
 fail in some other manner if the named node was not located.
  synchronized boolean stopDataNode(String name) {
 int i;
 for (i = 0; i < dataNodes.size(); i++) {
   DataNode dn = dataNodes.get(i).datanode;
   if (dn.dnRegistration.getName().equals(name)) {
 break;
   }
 }
 return stopDataNode(i);
   }
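
The fix the issue asks for can be signalled explicitly instead of silently stopping the last node. A minimal, self-contained sketch of that behavior (the class and node names below are illustrative, not MiniDFSCluster's actual API):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: stopDataNode that fails cleanly (returns -1) when no
// node matches, instead of falling through and stopping the last node.
public class StopDataNodeSketch {
    private final List<String> nodeNames; // stand-in for registered DataNodes

    public StopDataNodeSketch(List<String> nodeNames) {
        this.nodeNames = nodeNames;
    }

    /** Returns the index of the stopped node, or -1 if no name matched. */
    public int stopDataNode(String name) {
        for (int i = 0; i < nodeNames.size(); i++) {
            if (nodeNames.get(i).equals(name)) {
                // In MiniDFSCluster this is where the node would be shut down.
                return i;
            }
        }
        return -1; // signal "not found" rather than stopping an arbitrary node
    }

    public static void main(String[] args) {
        StopDataNodeSketch cluster =
            new StopDataNodeSketch(Arrays.asList("dn1:50010", "dn2:50010"));
        System.out.println(cluster.stopDataNode("dn2:50010")); // prints 1
        System.out.println(cluster.stopDataNode("missing"));   // prints -1
    }
}
```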





[jira] [Commented] (HDFS-7683) Combine usages and percent stats in NameNode UI

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14295035#comment-14295035
 ] 

Hudson commented on HDFS-7683:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #87 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/87/])
HDFS-7683. Combine usages and percent stats in NameNode UI. Contributed by 
Vinayakumar B. (wheat9: rev 1e2d98a394d98f9f1b6791cbe9cef474c19b8ceb)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html


 Combine usages and percent stats in NameNode UI
 ---

 Key: HDFS-7683
 URL: https://issues.apache.org/jira/browse/HDFS-7683
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor
 Fix For: 2.7.0

 Attachments: 7683-snapshot.jpg, HDFS-7683-001.patch, 
 HDFS-7683-001.patch


 In the NameNode UI, there are separate rows displaying cluster usage: one in 
 bytes and another as a percentage.
 We can combine these two rows and display the percent usage in brackets.





[jira] [Commented] (HDFS-7677) DistributedFileSystem#truncate should resolve symlinks

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14295044#comment-14295044
 ] 

Hudson commented on HDFS-7677:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #87 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/87/])
HDFS-7677. DistributedFileSystem#truncate should resolve symlinks. (yliu) 
(yliu: rev 9ca565e9704d236ce839c0138d82d54453d793fb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java


 DistributedFileSystem#truncate should resolve symlinks
 --

 Key: HDFS-7677
 URL: https://issues.apache.org/jira/browse/HDFS-7677
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7677.001.patch, HDFS-7677.002.patch


 We should resolve symlinks in DistributedFileSystem#truncate as we do for 
 {{create}}, {{open}}, {{append}} and so on; I don't see any reason not to 
 support it.
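
The resolve-and-retry pattern the issue asks for can be illustrated in isolation. The sketch below is not Hadoop's actual API (DistributedFileSystem uses its own link-resolver machinery); the symlink table, hop limit, and method names are assumptions for illustration only:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of symlink resolution before an operation such as
// truncate: follow link targets until a real file is reached, then operate
// on the resolved path. All names here are hypothetical.
public class TruncateWithLinkResolution {
    // Hypothetical symlink table: path -> link target (absent means a real file).
    static final Map<String, String> LINKS = new HashMap<>();

    static String resolve(String path) {
        int hops = 0;
        while (LINKS.containsKey(path)) {
            if (++hops > 32) { // guard against symlink cycles
                throw new IllegalStateException("too many levels of symlinks: " + path);
            }
            path = LINKS.get(path);
        }
        return path;
    }

    static String truncate(String path, long newLength) {
        String target = resolve(path);
        // A real implementation would now invoke the underlying truncate call.
        return "truncate(" + target + ", " + newLength + ")";
    }

    public static void main(String[] args) {
        LINKS.put("/link", "/file");
        System.out.println(truncate("/link", 100)); // operates on /file, not /link
    }
}
```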





[jira] [Commented] (HDFS-7692) BlockPoolSliceStorage#loadBpStorageDirectories(...) should support MultiThread to speedup the upgrade of block pool at multi storage directories.

2015-01-28 Thread Leitao Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14295013#comment-14295013
 ] 

Leitao Guo commented on HDFS-7692:
--

When we upgraded HDFS from 2.0.0 to 2.5.0 with datanodes that have 12 dataDirs, 
each holding 2.5TB of data, upgrading the block pool at each dataDir cost us 
about 25 minutes and the total time was 5 hours. This is really time consuming, 
especially when datanodes have more dataDirs and more data.

I will submit the patch later after some tests.



 BlockPoolSliceStorage#loadBpStorageDirectories(...) should support 
 MultiThread to speedup the upgrade of block pool at multi storage directories.
 -

 Key: HDFS-7692
 URL: https://issues.apache.org/jira/browse/HDFS-7692
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.5.2
Reporter: Leitao Guo

 {code:title=BlockPoolSliceStorage#loadBpStorageDirectories(...)|borderStyle=solid}
 for (File dataDir : dataDirs) {
 if (containsStorageDir(dataDir)) {
   throw new IOException(
 "BlockPoolSliceStorage.recoverTransitionRead: " +
 "attempt to load an used block storage: " + dataDir);
 }
 StorageDirectory sd =
 loadStorageDirectory(datanode, nsInfo, dataDir, startOpt);
 succeedDirs.add(sd);
   }
 {code}
 In the above code the storage directories are analyzed one by one, which 
 is really time consuming when upgrading HDFS on datanodes that have dozens of 
 large volumes. Multi-threaded analysis of the dataDirs should be supported 
 here to speed up the upgrade.
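
The per-directory work described above can be parallelized with a thread pool. A hedged, self-contained sketch (the analyzeDir body is a placeholder for loadStorageDirectory, and the pool size of 8 is an assumption, not Hadoop's actual implementation):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of submitting each dataDir's analysis to an ExecutorService and
// collecting the results in submission order.
public class ParallelDirLoadSketch {
    static String analyzeDir(String dataDir) {
        // Stand-in for the expensive per-directory upgrade/analysis step.
        return "loaded:" + dataDir;
    }

    public static List<String> loadAll(List<String> dataDirs) throws Exception {
        ExecutorService pool =
            Executors.newFixedThreadPool(Math.min(dataDirs.size(), 8));
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String dir : dataDirs) {
                futures.add(pool.submit(() -> analyzeDir(dir)));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get()); // propagates any per-directory failure
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // prints [loaded:/data1, loaded:/data2, loaded:/data3]
        System.out.println(loadAll(Arrays.asList("/data1", "/data2", "/data3")));
    }
}
```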





[jira] [Commented] (HDFS-7566) Remove obsolete entries from hdfs-default.xml

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14295039#comment-14295039
 ] 

Hudson commented on HDFS-7566:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #87 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/87/])
HDFS-7566. Remove obsolete entries from hdfs-default.xml (Ray Chiang via aw) 
(aw: rev 0a05ae1782488597cbf8667866f98f0df341abc0)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/resources/job_1329348432655_0001_conf.xml
* hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Remove obsolete entries from hdfs-default.xml
 -

 Key: HDFS-7566
 URL: https://issues.apache.org/jira/browse/HDFS-7566
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Ray Chiang
Assignee: Ray Chiang
  Labels: supportability
 Fix For: 2.7.0

 Attachments: HDFS-7566.001.patch


 So far, I've found these five properties which may be obsolete in 
 hdfs-default.xml:
 - dfs.https.enable
 - dfs.namenode.edits.journal-plugin.qjournal
 - dfs.namenode.logging.level
 - dfs.ha.namenodes.EXAMPLENAMESERVICE
   + Should this be kept in the .xml file?
 - dfs.support.append
   + Removed with HDFS-6246
 I'd like to get feedback about the state of any of the above properties.
 This is the HDFS equivalent of MAPREDUCE-6057 and YARN-2460.





[jira] [Commented] (HDFS-3689) Add support for variable length block

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14295050#comment-14295050
 ] 

Hudson commented on HDFS-3689:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #821 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/821/])
HDFS-3689. Add support for variable length block. Contributed by Jing Zhao. 
(jing9: rev 2848db814a98b83e7546f65a2751e56fb5b2dbe0)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/InotifyFSEditLogOpTranslator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CreateFlag.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppendRestart.java


 Add support for variable length block
 -

 Key: HDFS-3689
 URL: https://issues.apache.org/jira/browse/HDFS-3689
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Affects Versions: 3.0.0

[jira] [Commented] (HDFS-49) MiniDFSCluster.stopDataNode will always shut down a node in the cluster if a matching name is not found

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-49?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14295054#comment-14295054
 ] 

Hudson commented on HDFS-49:


FAILURE: Integrated in Hadoop-Yarn-trunk #821 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/821/])
HDFS-49. MiniDFSCluster.stopDataNode will always shut down a node in the 
cluster if a matching name is not found. (stevel) (stevel: rev 
0da53a37ec46b887f441df98c6986b31fa7671a2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 MiniDFSCluster.stopDataNode will always shut down a node in the cluster if a 
 matching name is not found
 ---

 Key: HDFS-49
 URL: https://issues.apache.org/jira/browse/HDFS-49
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.20.204.0, 0.20.205.0, 1.1.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor
  Labels: codereview, newbie
 Fix For: 2.7.0

 Attachments: HDFS-49-002.patch, hdfs-49.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 The stopDataNode method will shut down the last node in the list of nodes if 
 one matching a specific name is not found.
 This is possibly not what was intended. It would be better to return false or 
 fail in some other manner if the named node was not located.
  synchronized boolean stopDataNode(String name) {
 int i;
 for (i = 0; i < dataNodes.size(); i++) {
   DataNode dn = dataNodes.get(i).datanode;
   if (dn.dnRegistration.getName().equals(name)) {
 break;
   }
 }
 return stopDataNode(i);
   }





[jira] [Commented] (HDFS-7675) Remove unused member DFSClient#spanReceiverHost

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14295060#comment-14295060
 ] 

Hudson commented on HDFS-7675:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #821 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/821/])
HDFS-7675. Remove unused member DFSClient.spanReceiverHost (cmccabe) (cmccabe: 
rev d12dd47f4516fe125221ae073f1bc88b702b122f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java


 Remove unused member DFSClient#spanReceiverHost
 ---

 Key: HDFS-7675
 URL: https://issues.apache.org/jira/browse/HDFS-7675
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: dfsclient
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Colin Patrick McCabe
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HDFS-7675.001.patch


 {{DFSClient#spanReceiverHost}} is initialised but never used.





[jira] [Commented] (HDFS-7683) Combine usages and percent stats in NameNode UI

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14295053#comment-14295053
 ] 

Hudson commented on HDFS-7683:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #821 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/821/])
HDFS-7683. Combine usages and percent stats in NameNode UI. Contributed by 
Vinayakumar B. (wheat9: rev 1e2d98a394d98f9f1b6791cbe9cef474c19b8ceb)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Combine usages and percent stats in NameNode UI
 ---

 Key: HDFS-7683
 URL: https://issues.apache.org/jira/browse/HDFS-7683
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor
 Fix For: 2.7.0

 Attachments: 7683-snapshot.jpg, HDFS-7683-001.patch, 
 HDFS-7683-001.patch


 In the NameNode UI, there are separate rows displaying cluster usage: one in 
 bytes and another as a percentage.
 We can combine these two rows and display the percent usage in brackets.





[jira] [Commented] (HDFS-7677) DistributedFileSystem#truncate should resolve symlinks

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14295062#comment-14295062
 ] 

Hudson commented on HDFS-7677:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #821 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/821/])
HDFS-7677. DistributedFileSystem#truncate should resolve symlinks. (yliu) 
(yliu: rev 9ca565e9704d236ce839c0138d82d54453d793fb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java


 DistributedFileSystem#truncate should resolve symlinks
 --

 Key: HDFS-7677
 URL: https://issues.apache.org/jira/browse/HDFS-7677
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7677.001.patch, HDFS-7677.002.patch


 We should resolve symlinks in DistributedFileSystem#truncate as we do for 
 {{create}}, {{open}}, {{append}} and so on; I don't see any reason not to 
 support it.





[jira] [Updated] (HDFS-7376) Upgrade jsch lib to jsch-0.1.51 to avoid problems running on java7

2015-01-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-7376:
-
Affects Version/s: 2.6.0

 Upgrade jsch lib to jsch-0.1.51 to avoid problems running on java7
 --

 Key: HDFS-7376
 URL: https://issues.apache.org/jira/browse/HDFS-7376
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 2.6.0
Reporter: Johannes Zillmann
Assignee: Tsuyoshi OZAWA
 Attachments: HDFS-7376.1.patch


 We had an application sitting on top of Hadoop and ran into problems using 
 jsch once we switched to Java 7. We got this exception:
 {noformat}
  com.jcraft.jsch.JSchException: verify: false
   at com.jcraft.jsch.Session.connect(Session.java:330)
   at com.jcraft.jsch.Session.connect(Session.java:183)
 {noformat}
 Upgrading from jsch-0.1.49 to jsch-0.1.51 fixed the issue for us, but then it 
 conflicted with Hadoop's jsch version (we fixed this on our side by 
 jarjar'ing our jsch version).
 I think jsch was introduced by NameNode HA (HDFS-1623). So you should check 
 that the ssh part works properly on Java 7, or preventively upgrade the jsch 
 lib to jsch-0.1.51!
 Some references to problems reported:
 - 
 http://sourceforge.net/p/jsch/mailman/jsch-users/thread/loom.20131009t211650-...@post.gmane.org/
 - https://issues.apache.org/bugzilla/show_bug.cgi?id=53437





[jira] [Created] (HDFS-7692) BlockPoolSliceStorage#loadBpStorageDirectories(...) should support MultiThread to speedup the upgrade of block pool at multi storage directories.

2015-01-28 Thread Leitao Guo (JIRA)
Leitao Guo created HDFS-7692:


 Summary: BlockPoolSliceStorage#loadBpStorageDirectories(...) 
should support MultiThread to speedup the upgrade of block pool at multi 
storage directories.
 Key: HDFS-7692
 URL: https://issues.apache.org/jira/browse/HDFS-7692
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.5.2
Reporter: Leitao Guo


{code:title=BlockPoolSliceStorage#loadBpStorageDirectories(...)|borderStyle=solid}
for (File dataDir : dataDirs) {
if (containsStorageDir(dataDir)) {
  throw new IOException(
  "BlockPoolSliceStorage.recoverTransitionRead: " +
  "attempt to load an used block storage: " + dataDir);
}
StorageDirectory sd =
loadStorageDirectory(datanode, nsInfo, dataDir, startOpt);
succeedDirs.add(sd);
  }
{code}

In the above code the storage directories are analyzed one by one, which is 
really time consuming when upgrading HDFS on datanodes that have dozens of large 
volumes.





[jira] [Updated] (HDFS-7692) BlockPoolSliceStorage#loadBpStorageDirectories(...) should support MultiThread to speedup the upgrade of block pool at multi storage directories.

2015-01-28 Thread Leitao Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leitao Guo updated HDFS-7692:
-
Description: 
{code:title=BlockPoolSliceStorage#loadBpStorageDirectories(...)|borderStyle=solid}
for (File dataDir : dataDirs) {
if (containsStorageDir(dataDir)) {
  throw new IOException(
  "BlockPoolSliceStorage.recoverTransitionRead: " +
  "attempt to load an used block storage: " + dataDir);
}
StorageDirectory sd =
loadStorageDirectory(datanode, nsInfo, dataDir, startOpt);
succeedDirs.add(sd);
  }
{code}

In the above code the storage directories are analyzed one by one, which is 
really time consuming when upgrading HDFS on datanodes that have dozens of large 
volumes. Multi-threaded analysis of the dataDirs should be supported here to 
speed up the upgrade.

  was:
{code:title=BlockPoolSliceStorage#loadBpStorageDirectories(...)|borderStyle=solid}
for (File dataDir : dataDirs) {
if (containsStorageDir(dataDir)) {
  throw new IOException(
  "BlockPoolSliceStorage.recoverTransitionRead: " +
  "attempt to load an used block storage: " + dataDir);
}
StorageDirectory sd =
loadStorageDirectory(datanode, nsInfo, dataDir, startOpt);
succeedDirs.add(sd);
  }
{code}

In the above code the storage directories are analyzed one by one, which is 
really time consuming when upgrading HDFS on datanodes that have dozens of large 
volumes.


 BlockPoolSliceStorage#loadBpStorageDirectories(...) should support 
 MultiThread to speedup the upgrade of block pool at multi storage directories.
 -

 Key: HDFS-7692
 URL: https://issues.apache.org/jira/browse/HDFS-7692
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.5.2
Reporter: Leitao Guo

 {code:title=BlockPoolSliceStorage#loadBpStorageDirectories(...)|borderStyle=solid}
 for (File dataDir : dataDirs) {
 if (containsStorageDir(dataDir)) {
   throw new IOException(
  "BlockPoolSliceStorage.recoverTransitionRead: " +
  "attempt to load an used block storage: " + dataDir);
 }
 StorageDirectory sd =
 loadStorageDirectory(datanode, nsInfo, dataDir, startOpt);
 succeedDirs.add(sd);
   }
 {code}
 In the above code the storage directories are analyzed one by one, which 
 is really time consuming when upgrading HDFS on datanodes that have dozens of 
 large volumes. Multi-threaded analysis of the dataDirs should be supported 
 here to speed up the upgrade.





[jira] [Commented] (HDFS-7566) Remove obsolete entries from hdfs-default.xml

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14295057#comment-14295057
 ] 

Hudson commented on HDFS-7566:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #821 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/821/])
HDFS-7566. Remove obsolete entries from hdfs-default.xml (Ray Chiang via aw) 
(aw: rev 0a05ae1782488597cbf8667866f98f0df341abc0)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/resources/job_1329348432655_0001_conf.xml
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json


 Remove obsolete entries from hdfs-default.xml
 -

 Key: HDFS-7566
 URL: https://issues.apache.org/jira/browse/HDFS-7566
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Ray Chiang
Assignee: Ray Chiang
  Labels: supportability
 Fix For: 2.7.0

 Attachments: HDFS-7566.001.patch


 So far, I've found these five properties which may be obsolete in 
 hdfs-default.xml:
 - dfs.https.enable
 - dfs.namenode.edits.journal-plugin.qjournal
 - dfs.namenode.logging.level
 - dfs.ha.namenodes.EXAMPLENAMESERVICE
   + Should this be kept in the .xml file?
 - dfs.support.append
   + Removed with HDFS-6246
 I'd like to get feedback about the state of any of the above properties.
 This is the HDFS equivalent of MAPREDUCE-6057 and YARN-2460.





[jira] [Commented] (HDFS-7423) various typos and message formatting fixes in nfs daemon and doc

2015-01-28 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14295120#comment-14295120
 ] 

Yi Liu commented on HDFS-7423:
--

Hi Charles, could you rebase the patch for trunk, and make a patch for 
branch-2? I found some conflicts when committing.

 various typos and message formatting fixes in nfs daemon and doc
 

 Key: HDFS-7423
 URL: https://issues.apache.org/jira/browse/HDFS-7423
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial
 Attachments: HDFS-7423.001.patch, HDFS-7423.002.patch, 
 HDFS-7423.003.patch


 These are accumulated fixes for log messages, formatting, typos, etc. in the 
 nfs3 daemon that I came across while working on a customer issue.





[jira] [Commented] (HDFS-7423) various typos and message formatting fixes in nfs daemon and doc

2015-01-28 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14295475#comment-14295475
 ] 

Charles Lamb commented on HDFS-7423:


The test failures are unrelated.

 various typos and message formatting fixes in nfs daemon and doc
 

 Key: HDFS-7423
 URL: https://issues.apache.org/jira/browse/HDFS-7423
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial
 Attachments: HDFS-7423-branch-2.004.patch, HDFS-7423.001.patch, 
 HDFS-7423.002.patch, HDFS-7423.003.patch, HDFS-7423.004.patch


 These are accumulated fixes for log messages, formatting, typos, etc. in the 
 nfs3 daemon that I came across while working on a customer issue.





[jira] [Updated] (HDFS-7423) various typos and message formatting fixes in nfs daemon and doc

2015-01-28 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-7423:
---
Attachment: HDFS-7423-branch-2.004.patch

branch-2 diffs attached.

 various typos and message formatting fixes in nfs daemon and doc
 

 Key: HDFS-7423
 URL: https://issues.apache.org/jira/browse/HDFS-7423
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial
 Attachments: HDFS-7423-branch-2.004.patch, HDFS-7423.001.patch, 
 HDFS-7423.002.patch, HDFS-7423.003.patch, HDFS-7423.004.patch


 These are accumulated fixes for log messages, formatting, typos, etc. in the 
 nfs3 daemon that I came across while working on a customer issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-6571) NameNode should delete intermediate fsimage.ckpt when checkpoint fails

2015-01-28 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HDFS-6571.
-
Resolution: Duplicate

 NameNode should delete intermediate fsimage.ckpt when checkpoint fails
 --

 Key: HDFS-6571
 URL: https://issues.apache.org/jira/browse/HDFS-6571
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Charles Lamb

 When checkpoint fails in getting a new fsimage from the standby NameNode or 
 SecondaryNameNode, the intermediate fsimage (fsimage.ckpt_txid) is left behind 
 and never cleaned up.
 If the fsimage is large and checkpointing fails many times, the accumulating 
 intermediate fsimages may cause the node to run out of disk space.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7423) various typos and message formatting fixes in nfs daemon and doc

2015-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295395#comment-14295395
 ] 

Hadoop QA commented on HDFS-7423:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12694983/HDFS-7423.004.patch
  against trunk revision 9850e15.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-nfs:

  
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager
  org.apache.hadoop.hdfs.server.balancer.TestBalancer
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9357//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9357//console

This message is automatically generated.

 various typos and message formatting fixes in nfs daemon and doc
 

 Key: HDFS-7423
 URL: https://issues.apache.org/jira/browse/HDFS-7423
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial
 Attachments: HDFS-7423.001.patch, HDFS-7423.002.patch, 
 HDFS-7423.003.patch, HDFS-7423.004.patch


 These are accumulated fixes for log messages, formatting, typos, etc. in the 
 nfs3 daemon that I came across while working on a customer issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7551) Fix the new findbugs warning from TransferFsImage

2015-01-28 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-7551:

  Resolution: Duplicate
Hadoop Flags:   (was: Reviewed)
  Status: Resolved  (was: Patch Available)

 Fix the new findbugs warning from TransferFsImage
 -

 Key: HDFS-7551
 URL: https://issues.apache.org/jira/browse/HDFS-7551
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.7.0
Reporter: Liang Xie
Assignee: Liang Xie
Priority: Minor
 Attachments: HDFS-7551-001.txt, HDFS-7551-002.txt


 There is a findbug warning in 
 https://builds.apache.org/job/PreCommit-HDFS-Build/9080//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
  , 
 {code}
 Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE
 In class org.apache.hadoop.hdfs.server.namenode.TransferFsImage
 In method 
 org.apache.hadoop.hdfs.server.namenode.TransferFsImage.deleteTmpFiles(List)
 Called method java.io.File.delete()
 At TransferFsImage.java:[line 577]
 {code}
 seems to me it came from https://issues.apache.org/jira/browse/HDFS-7373 's 
 change
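
 A minimal way to clear this findbugs warning is to check {{File.delete()}}'s 
 boolean result instead of discarding it. The sketch below is illustrative 
 only; the class and method names are hypothetical, not the actual 
 TransferFsImage code:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: checking delete()'s return value is what clears
// RV_RETURN_VALUE_IGNORED_BAD_PRACTICE.
class TmpFileCleaner {
    /** Attempts to delete each file; returns the files that survived. */
    static List<File> deleteTmpFiles(List<File> files) {
        List<File> failed = new ArrayList<>();
        for (File f : files) {
            // delete() returning false for a still-existing file means the
            // deletion genuinely failed; collect it for logging/retry.
            if (!f.delete() && f.exists()) {
                failed.add(f);
            }
        }
        return failed;
    }
}
```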



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7411) Refactor and improve decommissioning logic into DecommissionManager

2015-01-28 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295795#comment-14295795
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7411:
---

 Those methods you mentioned are pretty small, and are far from the bulk of 
 the patch. ...

The GenericHdfsTestUtils change also adds a significant amount of code to the 
patch.  The code refactoring does seem to occupy half of the patch.

 ... Two other reviewers have also made it through this patch successfully, so 
 I don't think it's so bad to review.

I did not say that it is impossible to review the patch.  It is just 
unnecessarily complicated, so reviewers need to spend extra time reviewing it.

 Regarding the removed config property, this is something discussed above. ...

It seems the discussion above did not consider the incompatibility.  I guess 
the unnecessarily large and complicated patch hid the important details.  
We need to revisit it.

 ... I don't see a way of deprecating this gracefully, since the units of the 
 old and new config properties are incompatible. ...

One way is to keep the original code.  Use the old code if the old conf is set 
and the new code if the new conf is set.  Isn't it simple enough?
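
The fallback described here could be sketched as below. The key names mirror 
the old nodes-based and new blocks-based decommission settings, but 
{{Properties}} standing in for Hadoop's Configuration and the unit conversion 
are assumptions for illustration, not the real decommissioning logic:

```java
import java.util.Properties;

// Sketch of "use the old code if the old conf is set, the new code otherwise".
// Properties stands in for Hadoop's Configuration; the conversion from the
// legacy nodes-per-interval unit is illustrative only.
class DecomConf {
    static final String OLD_KEY = "dfs.namenode.decommission.nodes.per.interval";
    static final String NEW_KEY = "dfs.namenode.decommission.blocks.per.interval";

    /** Returns blocks-per-interval, honoring the deprecated nodes-based key. */
    static int blocksPerInterval(Properties conf, int avgBlocksPerNode, int dflt) {
        String old = conf.getProperty(OLD_KEY);
        if (old != null) {
            // Old code path: legacy unit is nodes per interval; convert.
            return Integer.parseInt(old) * avgBlocksPerNode;
        }
        return Integer.parseInt(conf.getProperty(NEW_KEY, String.valueOf(dflt)));
    }
}
```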

Since this involves an incompatible change, please move the refactoring code 
out of the improvement.

 Refactor and improve decommissioning logic into DecommissionManager
 ---

 Key: HDFS-7411
 URL: https://issues.apache.org/jira/browse/HDFS-7411
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.5.1
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, 
 hdfs-7411.003.patch, hdfs-7411.004.patch, hdfs-7411.005.patch, 
 hdfs-7411.006.patch, hdfs-7411.007.patch, hdfs-7411.008.patch, 
 hdfs-7411.009.patch


 Would be nice to split out decommission logic from DatanodeManager to 
 DecommissionManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6673) Add delimited format support to PB OIV tool

2015-01-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-6673:
--
   Resolution: Fixed
Fix Version/s: 2.7.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2; thanks for the contribution, Eddy. Haohui, 
thanks for your comments as well; if we missed anything, let's do it in a 
follow-on.

 Add delimited format support to PB OIV tool
 ---

 Key: HDFS-6673
 URL: https://issues.apache.org/jira/browse/HDFS-6673
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-6673.000.patch, HDFS-6673.001.patch, 
 HDFS-6673.002.patch, HDFS-6673.003.patch, HDFS-6673.004.patch, 
 HDFS-6673.005.patch, HDFS-6673.006.patch


 The new oiv tool, which is designed for the protobuf fsimage, lacks a few 
 features supported in the old {{oiv}} tool. 
 This task adds support for the _Delimited_ processor to the oiv tool. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7584) Enable Quota Support for Storage Types

2015-01-28 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7584:
-
Attachment: HDFS-7584.6.patch

Update the patch with the missing new file.

 Enable Quota Support for Storage Types
 --

 Key: HDFS-7584
 URL: https://issues.apache.org/jira/browse/HDFS-7584
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-7584 Quota by Storage Type - 01202015.pdf, 
 HDFS-7584.0.patch, HDFS-7584.1.patch, HDFS-7584.2.patch, HDFS-7584.3.patch, 
 HDFS-7584.4.patch, HDFS-7584.5.patch, HDFS-7584.6.patch, editsStored


 Phase II of the Heterogeneous Storage feature was completed by HDFS-6584. 
 This JIRA is opened to enable quota support for different storage types in 
 terms of storage space usage. This is more important for certain storage 
 types, such as SSD, since it is scarce and more performant. 
 As described in the design doc of HDFS-5682, we plan to add a new 
 quotaByStorageType command and a new NameNode RPC protocol for it. The quota 
 by storage type feature is applied at the HDFS directory level, similar to 
 the traditional HDFS space quota. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7584) Enable Quota Support for Storage Types

2015-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295897#comment-14295897
 ] 

Hadoop QA commented on HDFS-7584:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12695090/HDFS-7584.7.patch
  against trunk revision caf7298.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9361//console

This message is automatically generated.

 Enable Quota Support for Storage Types
 --

 Key: HDFS-7584
 URL: https://issues.apache.org/jira/browse/HDFS-7584
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-7584 Quota by Storage Type - 01202015.pdf, 
 HDFS-7584.0.patch, HDFS-7584.1.patch, HDFS-7584.2.patch, HDFS-7584.3.patch, 
 HDFS-7584.4.patch, HDFS-7584.5.patch, HDFS-7584.6.patch, HDFS-7584.7.patch, 
 editsStored


 Phase II of the Heterogeneous Storage feature was completed by HDFS-6584. 
 This JIRA is opened to enable quota support for different storage types in 
 terms of storage space usage. This is more important for certain storage 
 types, such as SSD, since it is scarce and more performant. 
 As described in the design doc of HDFS-5682, we plan to add a new 
 quotaByStorageType command and a new NameNode RPC protocol for it. The quota 
 by storage type feature is applied at the HDFS directory level, similar to 
 the traditional HDFS space quota. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7559) Create unit test to automatically compare HDFS related classes and hdfs-default.xml

2015-01-28 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295777#comment-14295777
 ] 

Ray Chiang commented on HDFS-7559:
--

RE: failing unit tests

Both tests pass fine in my tree.

 Create unit test to automatically compare HDFS related classes and 
 hdfs-default.xml
 ---

 Key: HDFS-7559
 URL: https://issues.apache.org/jira/browse/HDFS-7559
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
  Labels: supportability
 Attachments: HDFS-7559.001.patch, HDFS-7559.002.patch


 Create a unit test that will automatically compare the fields in the various 
 HDFS related classes and hdfs-default.xml. It should throw an error if a 
 property is missing in either the class or the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7175) Client-side SocketTimeoutException during Fsck

2015-01-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295782#comment-14295782
 ] 

Allen Wittenauer commented on HDFS-7175:


What are the chances this is a JDK7 vs. JDK8 change in behavior?

 Client-side SocketTimeoutException during Fsck
 --

 Key: HDFS-7175
 URL: https://issues.apache.org/jira/browse/HDFS-7175
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Carl Steinbach
Assignee: Akira AJISAKA
 Attachments: HDFS-7175.2.patch, HDFS-7175.3.patch, HDFS-7175.patch, 
 HDFS-7175.patch


 HDFS-2538 disabled status reporting for the fsck command (it can optionally 
 be enabled with the -showprogress option). We have observed that without 
 status reporting the client will abort with a read timeout:
 {noformat}
 [hdfs@lva1-hcl0030 ~]$ hdfs fsck / 
 Connecting to namenode via http://lva1-tarocknn01.grid.linkedin.com:50070
 14/09/30 06:03:41 WARN security.UserGroupInformation: 
 PriviledgedActionException as:h...@grid.linkedin.com (auth:KERBEROS) 
 cause:java.net.SocketTimeoutException: Read timed out
 Exception in thread main java.net.SocketTimeoutException: Read timed out
   at java.net.SocketInputStream.socketRead0(Native Method)
   at java.net.SocketInputStream.read(SocketInputStream.java:152)
   at java.net.SocketInputStream.read(SocketInputStream.java:122)
   at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
   at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
   at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
   at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687)
   at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
   at 
 sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
   at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:312)
   at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
   at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:149)
   at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:146)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
   at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:145)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:346)
 {noformat}
 Since there's nothing for the client to read, it will abort if the time 
 required to complete the fsck operation is longer than the client's read 
 timeout setting.
 I can think of a couple ways to fix this:
 # Set an infinite read timeout on the client side (not a good idea!).
 # Have the server-side write (and flush) zeros to the wire and instruct the 
 client to ignore these characters instead of echoing them.
 # It's possible that flushing an empty buffer on the server-side will trigger 
 an HTTP response with a zero length payload. This may be enough to keep the 
 client from hanging up.
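
Option 2 in the list above could be sketched roughly as follows. The class and 
method names are hypothetical, not the actual DFSck/NamenodeFsck code:

```java
import java.io.PrintWriter;

// Hypothetical keep-alive: the server periodically writes and flushes a filler
// character so the client's socket sees traffic before its read timeout fires;
// the client strips the filler before echoing the report.
class FsckKeepAlive {
    static final char FILLER = '\0';

    /** Server side: emit one filler byte and flush it onto the wire. */
    static void tick(PrintWriter out) {
        out.print(FILLER);
        out.flush();
    }

    /** Client side: drop filler characters from the raw response. */
    static String strip(String raw) {
        return raw.replace(String.valueOf(FILLER), "");
    }
}
```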



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7175) Client-side SocketTimeoutException during Fsck

2015-01-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-7175:
---
Status: Open  (was: Patch Available)

 Client-side SocketTimeoutException during Fsck
 --

 Key: HDFS-7175
 URL: https://issues.apache.org/jira/browse/HDFS-7175
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Carl Steinbach
Assignee: Akira AJISAKA
 Attachments: HDFS-7175.2.patch, HDFS-7175.3.patch, HDFS-7175.patch, 
 HDFS-7175.patch


 HDFS-2538 disabled status reporting for the fsck command (it can optionally 
 be enabled with the -showprogress option). We have observed that without 
 status reporting the client will abort with a read timeout:
 {noformat}
 [hdfs@lva1-hcl0030 ~]$ hdfs fsck / 
 Connecting to namenode via http://lva1-tarocknn01.grid.linkedin.com:50070
 14/09/30 06:03:41 WARN security.UserGroupInformation: 
 PriviledgedActionException as:h...@grid.linkedin.com (auth:KERBEROS) 
 cause:java.net.SocketTimeoutException: Read timed out
 Exception in thread main java.net.SocketTimeoutException: Read timed out
   at java.net.SocketInputStream.socketRead0(Native Method)
   at java.net.SocketInputStream.read(SocketInputStream.java:152)
   at java.net.SocketInputStream.read(SocketInputStream.java:122)
   at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
   at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
   at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
   at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687)
   at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
   at 
 sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
   at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:312)
   at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
   at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:149)
   at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:146)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
   at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:145)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:346)
 {noformat}
 Since there's nothing for the client to read, it will abort if the time 
 required to complete the fsck operation is longer than the client's read 
 timeout setting.
 I can think of a couple ways to fix this:
 # Set an infinite read timeout on the client side (not a good idea!).
 # Have the server-side write (and flush) zeros to the wire and instruct the 
 client to ignore these characters instead of echoing them.
 # It's possible that flushing an empty buffer on the server-side will trigger 
 an HTTP response with a zero length payload. This may be enough to keep the 
 client from hanging up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7584) Enable Quota Support for Storage Types

2015-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295864#comment-14295864
 ] 

Hadoop QA commented on HDFS-7584:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12695086/HDFS-7584.6.patch
  against trunk revision caf7298.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9360//console

This message is automatically generated.

 Enable Quota Support for Storage Types
 --

 Key: HDFS-7584
 URL: https://issues.apache.org/jira/browse/HDFS-7584
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-7584 Quota by Storage Type - 01202015.pdf, 
 HDFS-7584.0.patch, HDFS-7584.1.patch, HDFS-7584.2.patch, HDFS-7584.3.patch, 
 HDFS-7584.4.patch, HDFS-7584.5.patch, HDFS-7584.6.patch, editsStored


 Phase II of the Heterogeneous Storage feature was completed by HDFS-6584. 
 This JIRA is opened to enable quota support for different storage types in 
 terms of storage space usage. This is more important for certain storage 
 types, such as SSD, since it is scarce and more performant. 
 As described in the design doc of HDFS-5682, we plan to add a new 
 quotaByStorageType command and a new NameNode RPC protocol for it. The quota 
 by storage type feature is applied at the HDFS directory level, similar to 
 the traditional HDFS space quota. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6673) Add delimited format support to PB OIV tool

2015-01-28 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295830#comment-14295830
 ] 

Haohui Mai commented on HDFS-6673:
--

bq. user can leverage laptop's SSD to get decent performance ...

That's a bold assumption. Even if SSD can deliver 50x more IOPS, it still 
requires 20 hours to process the fsimage.

bq. In summary, we suggest to use the PB OIV tool as following:

As pointed out, the tool does not handle all existing use cases today. It is 
insufficient to just support them in the implementation; it is important to 
document the use cases and make explicit to which extent the tool is applicable.

Please file a follow up jira to fix the documentation issue.

 Add delimited format support to PB OIV tool
 ---

 Key: HDFS-6673
 URL: https://issues.apache.org/jira/browse/HDFS-6673
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-6673.000.patch, HDFS-6673.001.patch, 
 HDFS-6673.002.patch, HDFS-6673.003.patch, HDFS-6673.004.patch, 
 HDFS-6673.005.patch, HDFS-6673.006.patch


 The new oiv tool, which is designed for the protobuf fsimage, lacks a few 
 features supported in the old {{oiv}} tool. 
 This task adds support for the _Delimited_ processor to the oiv tool. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7693) libhdfs: add hdfsFile cache

2015-01-28 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-7693:
--

 Summary: libhdfs: add hdfsFile cache
 Key: HDFS-7693
 URL: https://issues.apache.org/jira/browse/HDFS-7693
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Add an hdfsFile cache inside libhdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-6673) Add delimited format support to PB OIV tool

2015-01-28 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295830#comment-14295830
 ] 

Haohui Mai edited comment on HDFS-6673 at 1/28/15 8:51 PM:
---

bq. user can leverage laptop's SSD to get decent performance ...

That's a bold assumption. Even if SSD can deliver 50x more IOPS, it still 
requires 20 hours to process the fsimage.

bq. In summary, we suggest to use the PB OIV tool as following:

After the offline discussion with [~andrew.wang] and [~eddyxu], the tool is 
intended to be a debugging tool for developers. For this particular purpose 
I'm willing to change my -0 to +0. However, as the tool cannot gracefully 
handle the bigger fsimages in production today, I think it is necessary to 
document the tool's applicability. Let's file a follow-up jira to fix the 
documentation issue before claiming the tool is ready.


was (Author: wheat9):
bq. user can leverage laptop's SSD to get decent performance ...

That's a bold assumption. Even if SSD can deliver 50x more IOPS, it still 
requires 20 hours to process the fsimage.

bq. In summary, we suggest to use the PB OIV tool as following:

As pointed out, the tool does not handle all existing use cases today. It is 
insufficient to just support them in the implementation; it is important to 
document the use cases and make explicit to which extent the tool is applicable.

Please file a follow up jira to fix the documentation issue.

 Add delimited format support to PB OIV tool
 ---

 Key: HDFS-6673
 URL: https://issues.apache.org/jira/browse/HDFS-6673
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-6673.000.patch, HDFS-6673.001.patch, 
 HDFS-6673.002.patch, HDFS-6673.003.patch, HDFS-6673.004.patch, 
 HDFS-6673.005.patch, HDFS-6673.006.patch


 The new oiv tool, which is designed for the protobuf fsimage, lacks a few 
 features supported in the old {{oiv}} tool. 
 This task adds support for the _Delimited_ processor to the oiv tool. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7332) Fix TestMultipleNNDataBlockScanner failures in jenkin slaves.

2015-01-28 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-7332:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

 Fix TestMultipleNNDataBlockScanner failures in jenkin slaves.
 -

 Key: HDFS-7332
 URL: https://issues.apache.org/jira/browse/HDFS-7332
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.5.1
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Attachments: HDFS-7332.000.patch


 {{BlockPoolSliceScanner}} sorts {{BlockScanInfo}} by its {{lastScanTime}}. 
 Each time, the scanner picks the {{BlockScanInfo}} with the smallest 
 {{lastScanTime}} to scan and then updates its {{lastScanTime = 
 Time.monotonicNow()}}. 
 Since the Jenkins test slave VM is usually rebooted for each job, 
 {{Time.monotonicNow()}} in the VM returns a small number, smaller than 
 the initial values of {{BlockScanInfo}}. Thus {{BlockPoolSliceScanner}} stops 
 at the first block that has been scanned and cannot finish scanning all 
 blocks. As a result, 
 {{TestMultipleNNDataBlockScanner#testDataBlockScanner}} times out due to 
 the unfinished scanning job.
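
 The wedge can be demonstrated with a toy simulation of the 
 pick-smallest-lastScanTime loop (hypothetical code, not the actual 
 BlockPoolSliceScanner):

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of the scanner's selection loop: repeatedly scan the block with
// the smallest lastScanTime, then stamp it with "now". If "now" (monotonic
// time after a VM reboot) is below every initial timestamp, the same block
// stays the minimum forever and the other blocks are never scanned.
class ScanSim {
    /** Returns how many distinct blocks get scanned over `rounds` picks. */
    static int distinctScanned(long[] initialTimes, long now, int rounds) {
        long[] t = initialTimes.clone();
        Set<Integer> scanned = new HashSet<>();
        for (int r = 0; r < rounds; r++) {
            int min = 0;
            for (int i = 1; i < t.length; i++) {
                if (t[i] < t[min]) min = i;
            }
            scanned.add(min);
            t[min] = now;  // stand-in for Time.monotonicNow()
        }
        return scanned.size();
    }
}
```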



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7584) Enable Quota Support for Storage Types

2015-01-28 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7584:
-
Attachment: HDFS-7584.7.patch

 Enable Quota Support for Storage Types
 --

 Key: HDFS-7584
 URL: https://issues.apache.org/jira/browse/HDFS-7584
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-7584 Quota by Storage Type - 01202015.pdf, 
 HDFS-7584.0.patch, HDFS-7584.1.patch, HDFS-7584.2.patch, HDFS-7584.3.patch, 
 HDFS-7584.4.patch, HDFS-7584.5.patch, HDFS-7584.6.patch, HDFS-7584.7.patch, 
 editsStored


 Phase II of the Heterogeneous storage features have completed by HDFS-6584. 
 This JIRA is opened to enable Quota support of different storage types in 
 terms of storage space usage. This is more important for certain storage 
 types such as SSD as it is precious and more performant. 
 As described in the design doc of HDFS-5682, we plan to add new 
 quotaByStorageType command and new name node RPC protocol for it. The quota 
 by storage type feature is applied to HDFS directory level similar to 
 traditional HDFS space quota. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7611) deleteSnapshot and delete of a file can leave orphaned blocks in the blocksMap on NameNode restart.

2015-01-28 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao reassigned HDFS-7611:
---

Assignee: Jing Zhao  (was: Byron Wong)

 deleteSnapshot and delete of a file can leave orphaned blocks in the 
 blocksMap on NameNode restart.
 ---

 Key: HDFS-7611
 URL: https://issues.apache.org/jira/browse/HDFS-7611
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko
Assignee: Jing Zhao
Priority: Critical
 Attachments: HDFS-7611.000.patch, HDFS-7611.001.patch, 
 HDFS-7611.002.patch, blocksNotDeletedTest.patch, testTruncateEditLogLoad.log


 If quotas are enabled, a combination of the operations *deleteSnapshot* and 
 *delete* of a file can leave orphaned blocks in the blocksMap on NameNode 
 restart. They are counted as missing on the NameNode, can prevent the 
 NameNode from coming out of safeMode, and could cause a memory leak during 
 startup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7611) deleteSnapshot and delete of a file can leave orphaned blocks in the blocksMap on NameNode restart.

2015-01-28 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14296035#comment-14296035
 ] 

Jing Zhao commented on HDFS-7611:
-

Thanks again for the review, [~shv]. I will commit the patch shortly.

 deleteSnapshot and delete of a file can leave orphaned blocks in the 
 blocksMap on NameNode restart.
 ---

 Key: HDFS-7611
 URL: https://issues.apache.org/jira/browse/HDFS-7611
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko
Assignee: Byron Wong
Priority: Critical
 Attachments: HDFS-7611.000.patch, HDFS-7611.001.patch, 
 HDFS-7611.002.patch, blocksNotDeletedTest.patch, testTruncateEditLogLoad.log


 If quotas are enabled, a combination of the operations *deleteSnapshot* and 
 *delete* of a file can leave orphaned blocks in the blocksMap on NameNode 
 restart. They are counted as missing on the NameNode, can prevent the 
 NameNode from coming out of safeMode, and could cause a memory leak during 
 startup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7681) Fix ReplicaInputStream constructor to take InputStreams

2015-01-28 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14296090#comment-14296090
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7681:
---

The failure of TestBalancer is not related to this.

While reviewing the patch, I found that getTmpInputStreams(..) may leak file 
descriptors in case of exceptions.  Will file a JIRA to follow up.

+1 patch looks good.

 Fix ReplicaInputStream constructor to take InputStreams
 ---

 Key: HDFS-7681
 URL: https://issues.apache.org/jira/browse/HDFS-7681
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 3.0.0
Reporter: Joe Pallas
Assignee: Joe Pallas
 Attachments: HDFS-7681.patch


 As noted in HDFS-5194, the constructor for {{ReplicaInputStream}} takes 
 {{FileDescriptor}} s that are immediately turned into {{InputStream}} s, 
 while the callers already have {{FileInputStream}} s from which they extract 
 {{FileDescriptor}} s.  This seems to have been done as part of a large set of 
 changes to appease findbugs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7696) FsDatasetImpl.getTmpInputStreams(..) may lead file descriptors

2015-01-28 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-7696:
-

 Summary: FsDatasetImpl.getTmpInputStreams(..) may lead file 
descriptors
 Key: HDFS-7696
 URL: https://issues.apache.org/jira/browse/HDFS-7696
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


getTmpInputStreams(..) opens a block file and a meta file, and then returns them 
as ReplicaInputStreams.  The caller is responsible for closing those streams.  
In case of errors, an exception is thrown without closing the files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7696) FsDatasetImpl.getTmpInputStreams(..) may leak file descriptors

2015-01-28 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-7696:
--
Status: Patch Available  (was: Open)

 FsDatasetImpl.getTmpInputStreams(..) may leak file descriptors
 --

 Key: HDFS-7696
 URL: https://issues.apache.org/jira/browse/HDFS-7696
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: h7696_20150128.patch


 getTmpInputStreams(..) opens a block file and a meta file, and then returns 
 them as ReplicaInputStreams.  The caller is responsible for closing those 
 streams.  In case of errors, an exception is thrown without closing the files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7696) FsDatasetImpl.getTmpInputStreams(..) may leak file descriptors

2015-01-28 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-7696:
--
Attachment: h7696_20150128.patch

h7696_20150128.patch: close files in case of exceptions.
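The close-on-exception pattern the patch applies can be sketched as follows. This is an illustrative, self-contained simulation, not the actual FsDatasetImpl code: the class, method, and counter names are hypothetical, and a counter stands in for open file descriptors. The point is that if opening the second stream throws, the first must be closed before rethrowing, or its descriptor leaks.

```java
import java.io.Closeable;
import java.io.IOException;

public class TmpStreamsSketch {
    // Stands in for the number of open file descriptors.
    static int openCount = 0;

    // Simulated open(): bumps the counter; closing the returned stream
    // decrements it again.
    static Closeable open(boolean fail) throws IOException {
        if (fail) throw new IOException("open failed");
        openCount++;
        return () -> openCount--;
    }

    // Sketch of the fix: if opening the meta stream throws, close the
    // already-open block stream before rethrowing.
    static void getTmpStreams(boolean failSecond) throws IOException {
        Closeable blockIn = open(false);
        Closeable metaIn;
        try {
            metaIn = open(failSecond);
        } catch (IOException e) {
            blockIn.close();  // do not leak the first descriptor
            throw e;
        }
        // Normally both streams would be handed to the caller here;
        // for the sketch we just close them.
        metaIn.close();
        blockIn.close();
    }

    public static void main(String[] args) throws IOException {
        try {
            getTmpStreams(true);
        } catch (IOException expected) {
            // expected: the second open fails
        }
        System.out.println("open streams left: " + openCount);  // prints 0
    }
}
```
Without the catch block, the failed second open would leave `openCount` at 1, mirroring the leaked descriptor described in this issue.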

 FsDatasetImpl.getTmpInputStreams(..) may leak file descriptors
 --

 Key: HDFS-7696
 URL: https://issues.apache.org/jira/browse/HDFS-7696
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: h7696_20150128.patch


 getTmpInputStreams(..) opens a block file and a meta file, and then returns 
 them as ReplicaInputStreams.  The caller is responsible for closing those 
 streams.  In case of errors, an exception is thrown without closing the files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6673) Add delimited format support to PB OIV tool

2015-01-28 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14296159#comment-14296159
 ] 

Haohui Mai commented on HDFS-6673:
--

Thanks Andrew and Eddy for the work. Filed HDFS-7697 to track the documentation 
issue.

 Add delimited format support to PB OIV tool
 ---

 Key: HDFS-6673
 URL: https://issues.apache.org/jira/browse/HDFS-6673
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-6673.000.patch, HDFS-6673.001.patch, 
 HDFS-6673.002.patch, HDFS-6673.003.patch, HDFS-6673.004.patch, 
 HDFS-6673.005.patch, HDFS-6673.006.patch


 The new oiv tool, which is designed for the protobuf fsimage, lacks a few 
 features supported by the old {{oiv}} tool. 
 This task adds support for the _Delimited_ processor to the new oiv tool. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6673) Add delimited format support to PB OIV tool

2015-01-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-6673:
--
Summary: Add delimited format support to PB OIV tool  (was: Add Delimited 
format supports for PB OIV tool)

 Add delimited format support to PB OIV tool
 ---

 Key: HDFS-6673
 URL: https://issues.apache.org/jira/browse/HDFS-6673
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Attachments: HDFS-6673.000.patch, HDFS-6673.001.patch, 
 HDFS-6673.002.patch, HDFS-6673.003.patch, HDFS-6673.004.patch, 
 HDFS-6673.005.patch, HDFS-6673.006.patch


 The new oiv tool, which is designed for the protobuf fsimage, lacks a few 
 features supported by the old {{oiv}} tool. 
 This task adds support for the _Delimited_ processor to the new oiv tool. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6673) Add delimited format support to PB OIV tool

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295828#comment-14295828
 ] 

Hudson commented on HDFS-6673:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6952 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6952/])
HDFS-6673. Add delimited format support to PB OIV tool. Contributed by Eddy Xu. 
(wang: rev caf7298e49f646a40023af999f62d61846fde5e2)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageTextWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageDelimitedTextWriter.java


 Add delimited format support to PB OIV tool
 ---

 Key: HDFS-6673
 URL: https://issues.apache.org/jira/browse/HDFS-6673
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-6673.000.patch, HDFS-6673.001.patch, 
 HDFS-6673.002.patch, HDFS-6673.003.patch, HDFS-6673.004.patch, 
 HDFS-6673.005.patch, HDFS-6673.006.patch


 The new oiv tool, which is designed for the protobuf fsimage, lacks a few 
 features supported by the old {{oiv}} tool. 
 This task adds support for the _Delimited_ processor to the new oiv tool. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7611) deleteSnapshot and delete of a file can leave orphaned blocks in the blocksMap on NameNode restart.

2015-01-28 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7611:

   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks for digging into the 
issue, figuring out the cause, and providing the test case, [~Byron Wong]! That 
is actually the hardest part. I have also listed you as a patch contributor.

 deleteSnapshot and delete of a file can leave orphaned blocks in the 
 blocksMap on NameNode restart.
 ---

 Key: HDFS-7611
 URL: https://issues.apache.org/jira/browse/HDFS-7611
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko
Assignee: Jing Zhao
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7611.000.patch, HDFS-7611.001.patch, 
 HDFS-7611.002.patch, blocksNotDeletedTest.patch, testTruncateEditLogLoad.log


 If quotas are enabled, a combination of the operations *deleteSnapshot* and 
 *delete* of a file can leave orphaned blocks in the blocksMap on NameNode 
 restart. They are counted as missing on the NameNode, can prevent the 
 NameNode from coming out of safeMode, and could cause a memory leak during 
 startup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7611) deleteSnapshot and delete of a file can leave orphaned blocks in the blocksMap on NameNode restart.

2015-01-28 Thread Byron Wong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14296063#comment-14296063
 ] 

Byron Wong commented on HDFS-7611:
--

Thanks for taking over the work for this JIRA, [~jingzhao]!

 deleteSnapshot and delete of a file can leave orphaned blocks in the 
 blocksMap on NameNode restart.
 ---

 Key: HDFS-7611
 URL: https://issues.apache.org/jira/browse/HDFS-7611
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko
Assignee: Jing Zhao
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7611.000.patch, HDFS-7611.001.patch, 
 HDFS-7611.002.patch, blocksNotDeletedTest.patch, testTruncateEditLogLoad.log


 If quotas are enabled, a combination of the operations *deleteSnapshot* and 
 *delete* of a file can leave orphaned blocks in the blocksMap on NameNode 
 restart. They are counted as missing on the NameNode, can prevent the 
 NameNode from coming out of safeMode, and could cause a memory leak during 
 startup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7681) Fix ReplicaInputStream constructor to take InputStreams

2015-01-28 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-7681:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Joe!
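The committed change can be illustrated with a minimal sketch. This is not the actual Hadoop `ReplicaInputStreams` class; the class, field, and accessor names below are hypothetical. It only shows the shape of the API change: the constructor takes the `InputStream`s the caller already holds, instead of `FileDescriptor`s that would be re-wrapped in new `FileInputStream`s internally.

```java
import java.io.ByteArrayInputStream;
import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;

// Illustrative stand-in for the post-patch constructor shape:
// accept InputStreams directly, no FileDescriptor round-trip.
class ReplicaStreamsSketch implements Closeable {
    private final InputStream dataIn;      // block data stream
    private final InputStream checksumIn;  // meta/checksum stream

    ReplicaStreamsSketch(InputStream dataIn, InputStream checksumIn) {
        this.dataIn = dataIn;
        this.checksumIn = checksumIn;
    }

    InputStream getDataIn() { return dataIn; }
    InputStream getChecksumIn() { return checksumIn; }

    @Override
    public void close() throws IOException {
        dataIn.close();
        checksumIn.close();
    }
}

public class ReplicaStreamsDemo {
    public static void main(String[] args) throws IOException {
        // The caller already has InputStreams; it passes them straight in.
        try (ReplicaStreamsSketch rs = new ReplicaStreamsSketch(
                new ByteArrayInputStream(new byte[]{42}),
                new ByteArrayInputStream(new byte[]{7}))) {
            System.out.println(rs.getDataIn().read());      // 42
            System.out.println(rs.getChecksumIn().read());  // 7
        }
    }
}
```
Before the patch, the caller extracted a `FileDescriptor` from its `FileInputStream` only for the constructor to wrap it in a new `FileInputStream` again; taking the stream directly removes that needless round-trip.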

 Fix ReplicaInputStream constructor to take InputStreams
 ---

 Key: HDFS-7681
 URL: https://issues.apache.org/jira/browse/HDFS-7681
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 3.0.0
Reporter: Joe Pallas
Assignee: Joe Pallas
 Fix For: 3.0.0

 Attachments: HDFS-7681.patch


 As noted in HDFS-5194, the constructor for {{ReplicaInputStream}} takes 
 {{FileDescriptor}} s that are immediately turned into {{InputStream}} s, 
 while the callers already have {{FileInputStream}} s from which they extract 
 {{FileDescriptor}} s.  This seems to have been done as part of a large set of 
 changes to appease findbugs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7681) Fix ReplicaInputStream constructor to take InputStreams

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14296120#comment-14296120
 ] 

Hudson commented on HDFS-7681:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6955 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6955/])
HDFS-7681. Change ReplicaInputStreams constructor to take InputStream(s) 
instead of FileDescriptor(s).  Contributed by Joe Pallas (szetszwo: rev 
5a0051f4da6e102846d795a7965a6a18216d74f7)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/ReplicaInputStreams.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


 Fix ReplicaInputStream constructor to take InputStreams
 ---

 Key: HDFS-7681
 URL: https://issues.apache.org/jira/browse/HDFS-7681
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 3.0.0
Reporter: Joe Pallas
Assignee: Joe Pallas
 Fix For: 3.0.0

 Attachments: HDFS-7681.patch


 As noted in HDFS-5194, the constructor for {{ReplicaInputStream}} takes 
 {{FileDescriptor}} s that are immediately turned into {{InputStream}} s, 
 while the callers already have {{FileInputStream}} s from which they extract 
 {{FileDescriptor}} s.  This seems to have been done as part of a large set of 
 changes to appease findbugs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7696) FsDatasetImpl.getTmpInputStreams(..) may leak file descriptors

2015-01-28 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14296135#comment-14296135
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7696:
---

getBlockInputStream(..) also has a similar bug.

 FsDatasetImpl.getTmpInputStreams(..) may leak file descriptors
 --

 Key: HDFS-7696
 URL: https://issues.apache.org/jira/browse/HDFS-7696
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze

 getTmpInputStreams(..) opens a block file and a meta file, and then returns 
 them as ReplicaInputStreams.  The caller is responsible for closing those 
 streams.  In case of errors, an exception is thrown without closing the files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7681) Fix ReplicaInputStream constructor to take InputStreams

2015-01-28 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14296139#comment-14296139
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7681:
---

 While reviewing the patch, I found that getTmpInputStreams(..) may leak file 
 descriptors in case of exceptions. ...

Filed HDFS-7696.

 Fix ReplicaInputStream constructor to take InputStreams
 ---

 Key: HDFS-7681
 URL: https://issues.apache.org/jira/browse/HDFS-7681
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 3.0.0
Reporter: Joe Pallas
Assignee: Joe Pallas
 Fix For: 3.0.0

 Attachments: HDFS-7681.patch


 As noted in HDFS-5194, the constructor for {{ReplicaInputStream}} takes 
 {{FileDescriptor}} s that are immediately turned into {{InputStream}} s, 
 while the callers already have {{FileInputStream}} s from which they extract 
 {{FileDescriptor}} s.  This seems to have been done as part of a large set of 
 changes to appease findbugs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7697) Document the scope of the PB OIV tool

2015-01-28 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-7697:


 Summary: Document the scope of the PB OIV tool
 Key: HDFS-7697
 URL: https://issues.apache.org/jira/browse/HDFS-7697
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai


As per HDFS-6673, we need to document the applicable scope of the new PB OIV 
tool so that it won't catch users by surprise.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7611) deleteSnapshot and delete of a file can leave orphaned blocks in the blocksMap on NameNode restart.

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14296055#comment-14296055
 ] 

Hudson commented on HDFS-7611:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6953 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6953/])
HDFS-7611. deleteSnapshot and delete of a file can leave orphaned blocks in the 
blocksMap on NameNode restart. Contributed by Jing Zhao and Byron Wong. (jing9: 
rev d244574d03903b0514b0308da85d2f06c2384524)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 deleteSnapshot and delete of a file can leave orphaned blocks in the 
 blocksMap on NameNode restart.
 ---

 Key: HDFS-7611
 URL: https://issues.apache.org/jira/browse/HDFS-7611
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko
Assignee: Jing Zhao
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7611.000.patch, HDFS-7611.001.patch, 
 HDFS-7611.002.patch, blocksNotDeletedTest.patch, testTruncateEditLogLoad.log


 If quotas are enabled, a combination of the operations *deleteSnapshot* and 
 *delete* of a file can leave orphaned blocks in the blocksMap on NameNode 
 restart. They are counted as missing on the NameNode, can prevent the 
 NameNode from coming out of safeMode, and could cause a memory leak during 
 startup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6651) Deletion failure can leak inodes permanently.

2015-01-28 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-6651:

Attachment: HDFS-6651.002.patch

Rebased the patch after HDFS-7611.

 Deletion failure can leak inodes permanently.
 -

 Key: HDFS-6651
 URL: https://issues.apache.org/jira/browse/HDFS-6651
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Jing Zhao
Priority: Critical
 Attachments: HDFS-6651.000.patch, HDFS-6651.001.patch, 
 HDFS-6651.002.patch


 As discussed in HDFS-6618, if deletion of a tree fails in the middle, any 
 collected inodes and blocks will not be removed from {{INodeMap}} and 
 {{BlocksMap}}. 
 Since the fsimage is saved by iterating over {{INodeMap}}, the leak will 
 persist across namenode restarts. Although blanked-out inodes will not have 
 references to blocks, blocks will still refer to the inode as their 
 {{BlockCollection}}. As long as that reference is not null, the blocks will 
 live on. The leaked blocks from blanked-out inodes will go away after a restart.
 Options (when delete fails in the middle)
 - Complete the partial delete: edit log the partial delete and remove inodes 
 and blocks. 
 - Somehow undo the partial delete.
 - Check quota for snapshot diff beforehand for the whole subtree.
 - Ignore quota check during delete even if snapshot is present.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7696) FsDatasetImpl.getTmpInputStreams(..) may leak file descriptors

2015-01-28 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-7696:
--
Summary: FsDatasetImpl.getTmpInputStreams(..) may leak file descriptors  
(was: FsDatasetImpl.getTmpInputStreams(..) may lead file descriptors)

 FsDatasetImpl.getTmpInputStreams(..) may leak file descriptors
 --

 Key: HDFS-7696
 URL: https://issues.apache.org/jira/browse/HDFS-7696
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze

 getTmpInputStreams(..) opens a block file and a meta file, and then returns 
 them as ReplicaInputStreams.  The caller is responsible for closing those 
 streams.  In case of errors, an exception is thrown without closing the files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7683) Combine usages and percent stats in NameNode UI

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295274#comment-14295274
 ] 

Hudson commented on HDFS-7683:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2038 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2038/])
HDFS-7683. Combine usages and percent stats in NameNode UI. Contributed by 
Vinayakumar B. (wheat9: rev 1e2d98a394d98f9f1b6791cbe9cef474c19b8ceb)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Combine usages and percent stats in NameNode UI
 ---

 Key: HDFS-7683
 URL: https://issues.apache.org/jira/browse/HDFS-7683
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor
 Fix For: 2.7.0

 Attachments: 7683-snapshot.jpg, HDFS-7683-001.patch, 
 HDFS-7683-001.patch


 In the NameNode UI, there are separate rows displaying cluster usage: one in 
 bytes, the other as a percentage.
 We can combine these two rows and just display the percent usage in brackets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7675) Remove unused member DFSClient#spanReceiverHost

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295280#comment-14295280
 ] 

Hudson commented on HDFS-7675:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2038 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2038/])
HDFS-7675. Remove unused member DFSClient.spanReceiverHost (cmccabe) (cmccabe: 
rev d12dd47f4516fe125221ae073f1bc88b702b122f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove unused member DFSClient#spanReceiverHost
 ---

 Key: HDFS-7675
 URL: https://issues.apache.org/jira/browse/HDFS-7675
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: dfsclient
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Colin Patrick McCabe
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HDFS-7675.001.patch


 {{DFSClient#spanReceiverHost}} is initialised but never used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3689) Add support for variable length block

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295270#comment-14295270
 ] 

Hudson commented on HDFS-3689:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2038 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2038/])
HDFS-3689. Add support for variable length block. Contributed by Jing Zhao. 
(jing9: rev 2848db814a98b83e7546f65a2751e56fb5b2dbe0)
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/InotifyFSEditLogOpTranslator.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppendRestart.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CreateFlag.java


 Add support for variable length block
 -

 Key: HDFS-3689
 URL: https://issues.apache.org/jira/browse/HDFS-3689
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Affects Versions: 3.0.0
   

[jira] [Commented] (HDFS-7566) Remove obsolete entries from hdfs-default.xml

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295276#comment-14295276
 ] 

Hudson commented on HDFS-7566:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2038 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2038/])
HDFS-7566. Remove obsolete entries from hdfs-default.xml (Ray Chiang via aw) 
(aw: rev 0a05ae1782488597cbf8667866f98f0df341abc0)
* hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/resources/job_1329348432655_0001_conf.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove obsolete entries from hdfs-default.xml
 -

 Key: HDFS-7566
 URL: https://issues.apache.org/jira/browse/HDFS-7566
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Ray Chiang
Assignee: Ray Chiang
  Labels: supportability
 Fix For: 2.7.0

 Attachments: HDFS-7566.001.patch


 So far, I've found these five properties which may be obsolete in 
 hdfs-default.xml:
 - dfs.https.enable
 - dfs.namenode.edits.journal-plugin.qjournal
 - dfs.namenode.logging.level
 - dfs.ha.namenodes.EXAMPLENAMESERVICE
   + Should this be kept in the .xml file?
 - dfs.support.append
   + Removed with HDFS-6246
 I'd like to get feedback about the state of any of the above properties.
 This is the HDFS equivalent of MAPREDUCE-6057 and YARN-2460.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7677) DistributedFileSystem#truncate should resolve symlinks

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295282#comment-14295282
 ] 

Hudson commented on HDFS-7677:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2038 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2038/])
HDFS-7677. DistributedFileSystem#truncate should resolve symlinks. (yliu) 
(yliu: rev 9ca565e9704d236ce839c0138d82d54453d793fb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 DistributedFileSystem#truncate should resolve symlinks
 --

 Key: HDFS-7677
 URL: https://issues.apache.org/jira/browse/HDFS-7677
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7677.001.patch, HDFS-7677.002.patch


 We should resolve symlinks in DistributedFileSystem#truncate as we do for 
 {{create}}, {{open}}, {{append}}, and so on. I don't see any reason not to 
 support it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3689) Add support for variable length block

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295240#comment-14295240
 ] 

Hudson commented on HDFS-3689:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #88 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/88/])
HDFS-3689. Add support for variable length block. Contributed by Jing Zhao. 
(jing9: rev 2848db814a98b83e7546f65a2751e56fb5b2dbe0)
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppendRestart.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CreateFlag.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/InotifyFSEditLogOpTranslator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHFlush.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java


 Add support for variable length block
 -

 Key: HDFS-3689
 URL: https://issues.apache.org/jira/browse/HDFS-3689
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Affects Versions: 

[jira] [Commented] (HDFS-7683) Combine usages and percent stats in NameNode UI

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295244#comment-14295244
 ] 

Hudson commented on HDFS-7683:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #88 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/88/])
HDFS-7683. Combine usages and percent stats in NameNode UI. Contributed by 
Vinayakumar B. (wheat9: rev 1e2d98a394d98f9f1b6791cbe9cef474c19b8ceb)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Combine usages and percent stats in NameNode UI
 ---

 Key: HDFS-7683
 URL: https://issues.apache.org/jira/browse/HDFS-7683
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor
 Fix For: 2.7.0

 Attachments: 7683-snapshot.jpg, HDFS-7683-001.patch, 
 HDFS-7683-001.patch


 In the NameNode UI, there are separate rows displaying cluster usage: one in 
 bytes, the other as a percentage.
 We can combine these two rows into one, showing the percent usage in brackets.





[jira] [Commented] (HDFS-7677) DistributedFileSystem#truncate should resolve symlinks

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295252#comment-14295252
 ] 

Hudson commented on HDFS-7677:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #88 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/88/])
HDFS-7677. DistributedFileSystem#truncate should resolve symlinks. (yliu) 
(yliu: rev 9ca565e9704d236ce839c0138d82d54453d793fb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java


 DistributedFileSystem#truncate should resolve symlinks
 --

 Key: HDFS-7677
 URL: https://issues.apache.org/jira/browse/HDFS-7677
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7677.001.patch, HDFS-7677.002.patch


 We should resolve the symlinks in DistributedFileSystem#truncate as we do for 
 {{create}}, {{open}}, {{append}} and so on; I don't see any reason not to 
 support it.





[jira] [Commented] (HDFS-7675) Remove unused member DFSClient#spanReceiverHost

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295250#comment-14295250
 ] 

Hudson commented on HDFS-7675:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #88 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/88/])
HDFS-7675. Remove unused member DFSClient.spanReceiverHost (cmccabe) (cmccabe: 
rev d12dd47f4516fe125221ae073f1bc88b702b122f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java


 Remove unused member DFSClient#spanReceiverHost
 ---

 Key: HDFS-7675
 URL: https://issues.apache.org/jira/browse/HDFS-7675
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: dfsclient
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Colin Patrick McCabe
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HDFS-7675.001.patch


 {{DFSClient#spanReceiverHost}} is initialised but never used.





[jira] [Commented] (HDFS-7566) Remove obsolete entries from hdfs-default.xml

2015-01-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295246#comment-14295246
 ] 

Hudson commented on HDFS-7566:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #88 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/88/])
HDFS-7566. Remove obsolete entries from hdfs-default.xml (Ray Chiang via aw) 
(aw: rev 0a05ae1782488597cbf8667866f98f0df341abc0)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/resources/job_1329348432655_0001_conf.xml
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json


 Remove obsolete entries from hdfs-default.xml
 -

 Key: HDFS-7566
 URL: https://issues.apache.org/jira/browse/HDFS-7566
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Ray Chiang
Assignee: Ray Chiang
  Labels: supportability
 Fix For: 2.7.0

 Attachments: HDFS-7566.001.patch


 So far, I've found these five properties which may be obsolete in 
 hdfs-default.xml:
 - dfs.https.enable
 - dfs.namenode.edits.journal-plugin.qjournal
 - dfs.namenode.logging.level
 - dfs.ha.namenodes.EXAMPLENAMESERVICE
   + Should this be kept in the .xml file?
 - dfs.support.append
   + Removed with HDFS-6246
 I'd like to get feedback about the state of any of the above properties.
 This is the HDFS equivalent of MAPREDUCE-6057 and YARN-2460.
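One quick way to check which of these candidate keys still appear in hdfs-default.xml is to extract the property names and intersect them with the suspect list. The helper below is an illustrative sketch by the editor, not part of the patch; it uses a simple regex rather than a full XML parser.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Scan a hdfs-default.xml-style document for <name>...</name> entries and
// report which of the suspect (possibly obsolete) keys are still present.
final class ObsoleteKeyCheck {
    static final Pattern NAME = Pattern.compile("<name>([^<]+)</name>");

    static List<String> findPresent(String xml, List<String> suspects) {
        List<String> present = new ArrayList<>();
        Matcher m = NAME.matcher(xml);
        while (m.find()) {
            if (suspects.contains(m.group(1))) {
                present.add(m.group(1));
            }
        }
        return present;
    }
}
```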





[jira] [Commented] (HDFS-7611) deleteSnapshot and delete of a file can leave orphaned blocks in the blocksMap on NameNode restart.

2015-01-28 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295956#comment-14295956
 ] 

Konstantin Shvachko commented on HDFS-7611:
---

Jing, I looked closer. That patch just adds more logging. I think something is 
going on with delete itself, not snapshot delete.
On the positive side, I ran this with the pre-HDFS-7676 version of 
TestFileTruncate and it never failed.
So I think this particular problem is fixed. Let's file another JIRA for 
TestOpenFilesWithSnapshot, if there isn't one already.
+1 for your patch.

 deleteSnapshot and delete of a file can leave orphaned blocks in the 
 blocksMap on NameNode restart.
 ---

 Key: HDFS-7611
 URL: https://issues.apache.org/jira/browse/HDFS-7611
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko
Assignee: Byron Wong
Priority: Critical
 Attachments: HDFS-7611.000.patch, HDFS-7611.001.patch, 
 HDFS-7611.002.patch, blocksNotDeletedTest.patch, testTruncateEditLogLoad.log


 If quotas are enabled a combination of operations *deleteSnapshot* and 
 *delete* of a file can leave  orphaned  blocks in the blocksMap on NameNode 
 restart. They are counted as missing on the NameNode, and can prevent 
 NameNode from coming out of safeMode and could cause memory leak during 
 startup.





[jira] [Updated] (HDFS-7694) FSDataInputStream should support unbuffer

2015-01-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7694:
---
Status: Patch Available  (was: Open)

 FSDataInputStream should support unbuffer
 ---

 Key: HDFS-7694
 URL: https://issues.apache.org/jira/browse/HDFS-7694
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7694.001.patch


 For applications that have many open HDFS (or other Hadoop filesystem) files, 
 it would be useful to have an API to clear readahead buffers and sockets.  
 This could be added to the existing APIs as an optional interface, in much 
 the same way as we added setReadahead / setDropBehind / etc.





[jira] [Commented] (HDFS-7332) Fix TestMultipleNNDataBlockScanner failures in jenkin slaves.

2015-01-28 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295982#comment-14295982
 ] 

Lei (Eddy) Xu commented on HDFS-7332:
-

[~yzhangal] It was addressed in HDFS-7430.


 Fix TestMultipleNNDataBlockScanner failures in jenkin slaves.
 -

 Key: HDFS-7332
 URL: https://issues.apache.org/jira/browse/HDFS-7332
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.5.1
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Attachments: HDFS-7332.000.patch


 {{BlockPoolSliceScanner}} sorts {{BlockScanInfo}} by its {{lastScanTime}}. 
 Each time, the scanner picks the {{BlockScanInfo}} with the smallest 
 {{lastScanTime}} to scan, then updates its {{lastScanTime = 
 Time.monotonicNow()}}. 
 Since the Jenkins test slave VM is usually rebooted for each job, 
 {{Time.monotonicNow()}} in the VM returns a small number, smaller than the 
 initial values of {{BlockScanInfo}}. Thus {{BlockPoolSliceScanner}} stops 
 at the first block that has been scanned and cannot finish scanning all 
 blocks. As a result, 
 {{TestMultipleNNDataBlockScanner#testDataBlockScanner}} times out due to the 
 unfinished scanning job.
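The stuck-scanner behavior described above can be modeled with a toy sorted set: once a freshly scanned block gets a monotonic timestamp *smaller* than the unscanned blocks' initial values, the scanner keeps re-picking the same block. Class and field names below mirror the description, but this is an illustrative sketch, not the Hadoop implementation.

```java
import java.util.Comparator;
import java.util.TreeSet;

// Toy model: scan order is driven by lastScanTime (smallest first).
final class ScanOrderDemo {
    static final class BlockScanInfo {
        final long blockId;
        long lastScanTime;
        BlockScanInfo(long id, long t) { blockId = id; lastScanTime = t; }
    }

    // Scan one block, stamp it with the (possibly rebooted) monotonic clock,
    // and return which block would be picked next.
    static long nextAfterOneScan(long initialTime, long monotonicNowAfterScan) {
        TreeSet<BlockScanInfo> queue = new TreeSet<>(
                Comparator.comparingLong((BlockScanInfo b) -> b.lastScanTime)
                          .thenComparingLong(b -> b.blockId));
        queue.add(new BlockScanInfo(1, initialTime));
        queue.add(new BlockScanInfo(2, initialTime));

        BlockScanInfo scanned = queue.pollFirst();   // picks block 1
        scanned.lastScanTime = monotonicNowAfterScan; // "mark as scanned"
        queue.add(scanned);
        return queue.first().blockId;                // who gets scanned next?
    }
}
```

With a healthy clock the next pick is block 2; with a post-reboot clock smaller than the initial values, block 1 is picked again and the scan never advances.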





[jira] [Updated] (HDFS-7348) Process erasure decoding work in datanode

2015-01-28 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-7348:
--
Summary: Process erasure decoding work in datanode  (was: Process erasure 
decoding work)

 Process erasure decoding work in datanode
 -

 Key: HDFS-7348
 URL: https://issues.apache.org/jira/browse/HDFS-7348
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Kai Zheng
Assignee: Li Bo

 As one of the tasks for HDFS-7344, this is to process decoding work, 
 recovering data blocks according to the block groups and codec schema.





[jira] [Updated] (HDFS-7348) Process erasure decoding work in datanode

2015-01-28 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-7348:
--
Component/s: datanode

 Process erasure decoding work in datanode
 -

 Key: HDFS-7348
 URL: https://issues.apache.org/jira/browse/HDFS-7348
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Kai Zheng
Assignee: Li Bo

 As one of the tasks for HDFS-7344, this is to process decoding work, 
 recovering data blocks according to the block groups and codec schema.





[jira] [Commented] (HDFS-7332) Fix TestMultipleNNDataBlockScanner failures in jenkin slaves.

2015-01-28 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14296020#comment-14296020
 ] 

Yongjun Zhang commented on HDFS-7332:
-

Thanks Lei.


 Fix TestMultipleNNDataBlockScanner failures in jenkin slaves.
 -

 Key: HDFS-7332
 URL: https://issues.apache.org/jira/browse/HDFS-7332
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.5.1
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Attachments: HDFS-7332.000.patch


 {{BlockPoolSliceScanner}} sorts {{BlockScanInfo}} by its {{lastScanTime}}. 
 Each time, the scanner picks the {{BlockScanInfo}} with the smallest 
 {{lastScanTime}} to scan, then updates its {{lastScanTime = 
 Time.monotonicNow()}}. 
 Since the Jenkins test slave VM is usually rebooted for each job, 
 {{Time.monotonicNow()}} in the VM returns a small number, smaller than the 
 initial values of {{BlockScanInfo}}. Thus {{BlockPoolSliceScanner}} stops 
 at the first block that has been scanned and cannot finish scanning all 
 blocks. As a result, 
 {{TestMultipleNNDataBlockScanner#testDataBlockScanner}} times out due to the 
 unfinished scanning job.





[jira] [Updated] (HDFS-7693) libhdfs: add hdfsFile cache

2015-01-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7693:
---
Attachment: HDFS-7693.001.patch

 libhdfs: add hdfsFile cache
 ---

 Key: HDFS-7693
 URL: https://issues.apache.org/jira/browse/HDFS-7693
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7693.001.patch


 Add an hdfsFile cache inside libhdfs.





[jira] [Created] (HDFS-7694) FSDataInputStream should support unbuffer

2015-01-28 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-7694:
--

 Summary: FSDataInputStream should support unbuffer
 Key: HDFS-7694
 URL: https://issues.apache.org/jira/browse/HDFS-7694
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


For applications that have many open HDFS (or other Hadoop filesystem) files, 
it would be useful to have an API to clear readahead buffers and sockets.  This 
could be added to the existing APIs as an optional interface, in much the same 
way as we added setReadahead / setDropBehind / etc.
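A minimal sketch of the optional-interface pattern this describes: streams that can drop their readahead state implement a marker interface, and a wrapper forwards the call only when the underlying stream supports it. The interface and method names below are assumptions for illustration; the committed API may differ.

```java
// Streams implementing this can release readahead buffers and idle
// sockets while remaining usable for further reads.
interface CanUnbuffer {
    void unbuffer();
}

final class BufferedHdfsLikeStream implements CanUnbuffer {
    private byte[] readahead = new byte[64 * 1024];

    @Override
    public void unbuffer() {
        readahead = null;   // drop cached data; a real stream would also
                            // close idle sockets to the DataNode
    }

    boolean isBuffered() {
        return readahead != null;
    }

    // Wrapper logic, as a decorating stream might do it: forward the call
    // only if the wrapped object opts in via the interface.
    static void tryUnbuffer(Object wrapped) {
        if (wrapped instanceof CanUnbuffer) {
            ((CanUnbuffer) wrapped).unbuffer();
        }
        // else: silently ignore (or throw, depending on the chosen contract)
    }
}
```

This mirrors how setReadahead / setDropBehind were layered on as optional capabilities without changing the base stream contract.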





[jira] [Updated] (HDFS-7694) FSDataInputStream should support unbuffer

2015-01-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7694:
---
Attachment: HDFS-7694.001.patch

 FSDataInputStream should support unbuffer
 ---

 Key: HDFS-7694
 URL: https://issues.apache.org/jira/browse/HDFS-7694
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7694.001.patch


 For applications that have many open HDFS (or other Hadoop filesystem) files, 
 it would be useful to have an API to clear readahead buffers and sockets.  
 This could be added to the existing APIs as an optional interface, in much 
 the same way as we added setReadahead / setDropBehind / etc.





[jira] [Updated] (HDFS-7693) libhdfs: add hdfsFile cache

2015-01-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7693:
---
Status: Patch Available  (was: Open)

 libhdfs: add hdfsFile cache
 ---

 Key: HDFS-7693
 URL: https://issues.apache.org/jira/browse/HDFS-7693
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7693.001.patch


 Add an hdfsFile cache inside libhdfs.





[jira] [Created] (HDFS-7695) Intermittent failures in TestOpenFilesWithSnapshot

2015-01-28 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-7695:
-

 Summary: Intermittent failures in TestOpenFilesWithSnapshot
 Key: HDFS-7695
 URL: https://issues.apache.org/jira/browse/HDFS-7695
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko


This is to investigate intermittent failures of {{TestOpenFilesWithSnapshot}}, 
which is timing out on the NameNode restart as it is unable to leave SafeMode.





[jira] [Updated] (HDFS-7346) Process erasure encoding work in datanode

2015-01-28 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-7346:
--
Summary: Process erasure encoding work in datanode  (was: Process erasure 
encoding work)

 Process erasure encoding work in datanode
 -

 Key: HDFS-7346
 URL: https://issues.apache.org/jira/browse/HDFS-7346
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Kai Zheng
Assignee: Li Bo

 As one of the tasks for HDFS-7344, this is to process encoding work, 
 calculating parity blocks as specified in block groups and codec schema.





[jira] [Commented] (HDFS-7695) Intermittent failures in TestOpenFilesWithSnapshot

2015-01-28 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295998#comment-14295998
 ] 

Konstantin Shvachko commented on HDFS-7695:
---

This was partly investigated under HDFS-7611. The symptoms looked similar to 
the bug described there.
Different test cases fail there on different runs, with the same exception:
{code}
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1200)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1825)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1786)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testParentDirWithUCFileDeleteWithSnapShot(TestOpenFilesWithSnapshot.java:89)
{code}
The test
- creates a file and starts adding data
- then aborts the stream
- creates a snapshot while file is not closed
- deletes the file without deleting the snapshot and
- restarts NameNode

The behavior I see from the logs (with extended logging added) is that on 
restart the NN replays the edits according to the steps above. The blocks are 
then reported by the DNs, but they still have 0 replicas, and therefore the NN 
cannot leave SafeMode.
The missing blocks are supposed to be present, because even though the file was 
deleted, its snapshot was not. I do not understand why the replicas are not 
added to the locations when they are reported.

 Intermittent failures in TestOpenFilesWithSnapshot
 --

 Key: HDFS-7695
 URL: https://issues.apache.org/jira/browse/HDFS-7695
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko

 This is to investigate intermittent failures of 
 {{TestOpenFilesWithSnapshot}}, which is timing out on the NameNode restart as 
 it is unable to leave SafeMode.





[jira] [Commented] (HDFS-7332) Fix TestMultipleNNDataBlockScanner failures in jenkin slaves.

2015-01-28 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295969#comment-14295969
 ] 

Yongjun Zhang commented on HDFS-7332:
-

Hi [~eddyxu], I saw you marked this as a duplicate, but I don't see the 
duplicate JIRA indicated; would you please add it? Thanks.


 Fix TestMultipleNNDataBlockScanner failures in jenkin slaves.
 -

 Key: HDFS-7332
 URL: https://issues.apache.org/jira/browse/HDFS-7332
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.5.1
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Attachments: HDFS-7332.000.patch


 {{BlockPoolSliceScanner}} sorts {{BlockScanInfo}} by its {{lastScanTime}}. 
 Each time, the scanner picks the {{BlockScanInfo}} with the smallest 
 {{lastScanTime}} to scan, then updates its {{lastScanTime = 
 Time.monotonicNow()}}. 
 Since the Jenkins test slave VM is usually rebooted for each job, 
 {{Time.monotonicNow()}} in the VM returns a small number, smaller than the 
 initial values of {{BlockScanInfo}}. Thus {{BlockPoolSliceScanner}} stops 
 at the first block that has been scanned and cannot finish scanning all 
 blocks. As a result, 
 {{TestMultipleNNDataBlockScanner#testDataBlockScanner}} times out due to the 
 unfinished scanning job.





[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2015-01-28 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14296210#comment-14296210
 ] 

Zhe Zhang commented on HDFS-7285:
-

We had a very productive meetup today. Please find a summary below:
*Attendees*: [~szetszwo], [~zhz], [~jingzhao]

*NameNode handling of block groups* (HDFS-7339):
# Under the striping layout, it's viable to use the first block to represent 
the entire block group.
# A separate map for block groups is not necessary; {{blocksMap}} can be used 
for both regular blocks and striped block groups.
# Block ID allocation: we will use the following protocol, which partitions the 
entire ID space with a binary flag
{code}
Contiguous: {reserved block IDs | flag | block ID}
Striped: {reserved block IDs | flag | reserved block group IDs | block group ID 
| index in group}
{code}
# When the cluster has randomly generated block IDs (from legacy code), the 
block group ID generator needs to check for ID conflicts in the entire range of 
IDs generated. We should file a follow-on JIRA to investigate possible 
optimizations for efficient conflict detection.
# To make HDFS-7339 more trackable, we should shrink its scope and remove the 
client RPC code. It should be limited to block management and INode handling.
# Existing block states are sufficient to represent a block group. A client 
should {{COMMIT}} a block group just as it would a block. The {{COMPLETE}} 
state needs to collect acks from all participating DNs in the group.
# We should subclass {{BlockInfo}} to remember the block group layout. This is 
an optimization to avoid frequently retrieving the info from file INode.
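The flag-bit partitioning of the ID space in item 3 can be sketched with plain bit arithmetic. The names, the flag position, and the index width below are illustrative assumptions, not the layout eventually committed.

```java
// Hypothetical sketch of the ID-space split: one flag bit distinguishes
// contiguous block IDs from striped block-group IDs, and striped IDs
// reserve their low-order bits for the index within the group.
final class BlockIdLayout {
    // Assume the top bit is the contiguous/striped flag and the low
    // 4 bits of a striped ID hold the index in the block group.
    static final long STRIPED_FLAG = 1L << 63;
    static final int INDEX_BITS = 4;
    static final long INDEX_MASK = (1L << INDEX_BITS) - 1;

    static boolean isStriped(long id) {
        return (id & STRIPED_FLAG) != 0;
    }

    static long blockGroupId(long id) {
        // Clear the per-block index; every block in a group shares this value,
        // which is what lets the first block represent the whole group.
        return id & ~INDEX_MASK;
    }

    static int indexInGroup(long id) {
        return (int) (id & INDEX_MASK);
    }
}
```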

*EC and storage policy*:
# We agreed that _EC vs. replication_ is another configuration dimension, 
orthogonal to the current storage-type-based policies (HOT, WARM, COLD). Adding 
EC in the storage policy space will require too many combinations to be 
explicitly listed and chosen from.
# On-going development can still use HDFS-7347, which embeds EC as one of the 
storage policies (it has already been committed to HDFS-EC). HDFS-7337 should 
take the EC policy out of the file header and store it as an XAttr. Other EC 
parameters, including codec algorithm and schema, should also be stored in 
XAttrs.
# HDFS-7343 fundamentally addresses the issue of complex storage policy space. 
It's a hard problem and should be kept separate from the HDFS-EC project.

*Client and DataNode*:
# At this point the design of HDFS-7545 -- which wraps around the 
{{DataStreamer}} logic -- looks reasonable. In the future we can consider 
adding a simpler and more efficient output class for the _one replica_ scenario.

We also went over the *list of subtasks*. Several high level comments:
# The list is already pretty long. We should reorder the items to have better 
grouping and more appropriate priorities. I will make a first pass.
# It seems HDFS-7689 should extend the {{ReplicationMonitor}} rather than 
creating another checker.
# We agreed the best way to support hflush/hsync is to write temporary parity 
data and update later, when a complete stripe is accumulated.
# We need another JIRA for truncate/append support.

 Erasure Coding Support inside HDFS
 --

 Key: HDFS-7285
 URL: https://issues.apache.org/jira/browse/HDFS-7285
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Weihua Jiang
Assignee: Zhe Zhang
 Attachments: ECAnalyzer.py, ECParser.py, 
 HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
 fsimage-analysis-20150105.pdf


 Erasure Coding (EC) can greatly reduce the storage overhead without 
 sacrificing data reliability, compared to the existing HDFS 3-replica 
 approach. For example, if we use a 10+4 Reed-Solomon coding, we can tolerate 
 the loss of 4 blocks, with a storage overhead of only 40%. This makes EC a 
 quite attractive alternative for big data storage, particularly for cold data. 
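The overhead figures above follow directly from the block counts; a tiny helper makes the comparison explicit (an editor's illustration, not part of the JIRA):

```java
// Storage overhead = extra bytes stored / data bytes.
final class EcOverhead {
    // n-way replication stores (n - 1) extra copies per data byte.
    static double replicationOverhead(int replicas) {
        return replicas - 1;                      // 3 replicas -> 2.0 (200%)
    }

    // A (k + m) erasure code stores m parity blocks per k data blocks.
    static double ecOverhead(int dataBlocks, int parityBlocks) {
        return (double) parityBlocks / dataBlocks; // 10+4 RS -> 0.4 (40%)
    }
}
```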
 Facebook had a related open source project called HDFS-RAID. It used to be 
 one of the contrib packages in HDFS but was removed in Hadoop 2.0 for 
 maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS and depends 
 on MapReduce to do encoding and decoding tasks; 2) it can only be used for 
 cold files that are not intended to be appended anymore; 3) the pure-Java EC 
 coding implementation is extremely slow in practical use. Due to these, it 
 might not be a good idea to just bring HDFS-RAID back.
 We (Intel and Cloudera) are working on a design to build EC into HDFS that 
 gets rid of any external dependencies, making it self-contained and 
 independently maintained. This design lays the EC feature on top of the 
 storage type support and aims to be compatible with existing HDFS features 
 like caching, snapshots, encryption, and high availability. This design will 
 also support 

[jira] [Commented] (HDFS-7694) FSDataInputStream should support unbuffer

2015-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14296248#comment-14296248
 ] 

Hadoop QA commented on HDFS-7694:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12695104/HDFS-7694.001.patch
  against trunk revision caf7298.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9362//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9362//console

This message is automatically generated.

 FSDataInputStream should support unbuffer
 ---

 Key: HDFS-7694
 URL: https://issues.apache.org/jira/browse/HDFS-7694
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7694.001.patch


 For applications that have many open HDFS (or other Hadoop filesystem) files, 
 it would be useful to have an API to clear readahead buffers and sockets.  
 This could be added to the existing APIs as an optional interface, in much 
 the same way as we added setReadahead / setDropBehind / etc.





[jira] [Resolved] (HDFS-7690) Avoid Block movement in Balancer and Mover for the erasure encoded blocks

2015-01-28 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HDFS-7690.
-
Resolution: Duplicate

Thanks [~szetszwo] for the pointer. I missed it.
Resolving as duplicate

 Avoid Block movement in Balancer and Mover for the erasure encoded blocks
 -

 Key: HDFS-7690
 URL: https://issues.apache.org/jira/browse/HDFS-7690
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B

 As the striped design says, it would be more fault tolerant if the striped 
 blocks reside on different nodes in different racks. But the Balancer and 
 Mover may break this by moving the encoded blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7699) Erasure Codec API to possibly consider all the essential aspects for an erasure code

2015-01-28 Thread Kai Zheng (JIRA)
Kai Zheng created HDFS-7699:
---

 Summary: Erasure Codec API to possibly consider all the essential 
aspects for an erasure code
 Key: HDFS-7699
 URL: https://issues.apache.org/jira/browse/HDFS-7699
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng


This is to define the even higher-level API *ErasureCodec* to possibly 
consider all the essential aspects of an erasure code, as discussed in detail 
in HDFS-7337. Generally, it will cover the necessary configuration 
about which *RawErasureCoder* to use for the code scheme, how to form and 
lay out the BlockGroup, etc. It will also discuss how an *ErasureCodec* will 
be used in both the client and the DataNode, in all the supported EC-related modes.
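The layering described above can be sketched as follows. The type names *ErasureCodec*, *RawErasureCoder*, and the block-group notion follow the JIRA description, but every method signature below is an illustrative assumption, not the proposed API.

```java
// Illustrative sketch of the layering described in HDFS-7699: a codec
// selects the raw coder for a scheme and describes block-group layout.
// All signatures are assumptions for illustration only.
interface RawErasureCoder {
    String schemeName();
}

class RSRawCoder implements RawErasureCoder {
    public String schemeName() { return "RS(6,3)"; }
}

class BlockGroupLayout {
    final int dataBlocks;
    final int parityBlocks;
    BlockGroupLayout(int dataBlocks, int parityBlocks) {
        this.dataBlocks = dataBlocks;
        this.parityBlocks = parityBlocks;
    }
}

class ErasureCodec {
    private final RawErasureCoder coder;
    private final BlockGroupLayout layout;
    ErasureCodec(RawErasureCoder coder, BlockGroupLayout layout) {
        this.coder = coder;
        this.layout = layout;
    }
    // The codec ties scheme and layout together for client and DataNode use.
    String describe() {
        return coder.schemeName() + ": "
                + layout.dataBlocks + "+" + layout.parityBlocks;
    }
}

public class CodecSketch {
    public static void main(String[] args) {
        ErasureCodec codec =
                new ErasureCodec(new RSRawCoder(), new BlockGroupLayout(6, 3));
        System.out.println(codec.describe()); // prints "RS(6,3): 6+3"
    }
}
```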



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7693) libhdfs: add hdfsFile cache

2015-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14296237#comment-14296237
 ] 

Hadoop QA commented on HDFS-7693:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12695098/HDFS-7693.001.patch
  against trunk revision caf7298.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9363//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9363//console

This message is automatically generated.

 libhdfs: add hdfsFile cache
 ---

 Key: HDFS-7693
 URL: https://issues.apache.org/jira/browse/HDFS-7693
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7693.001.patch


 Add an hdfsFile cache inside libhdfs.
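The one-line description above leaves the design open. A file-handle cache typically keys on path and evicts least-recently-used entries when full. libhdfs itself is C, so the Java sketch below only illustrates the LRU eviction policy, not the actual patch; all names are assumptions.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative LRU handle cache. libhdfs is written in C; this sketch
// only shows the eviction policy a handle cache might use.
public class HandleCacheSketch {
    static class FileHandle {
        final String path;
        FileHandle(String path) { this.path = path; }
    }

    // An access-ordered LinkedHashMap gives LRU behavior for free.
    static class HandleCache extends LinkedHashMap<String, FileHandle> {
        private final int capacity;
        HandleCache(int capacity) {
            super(16, 0.75f, true); // true = access order, not insertion order
            this.capacity = capacity;
        }
        @Override
        protected boolean removeEldestEntry(Map.Entry<String, FileHandle> e) {
            // A real cache would close the evicted handle here.
            return size() > capacity;
        }
    }

    public static void main(String[] args) {
        HandleCache cache = new HandleCache(2);
        cache.put("/a", new FileHandle("/a"));
        cache.put("/b", new FileHandle("/b"));
        cache.get("/a");                       // touch /a, so /b is now eldest
        cache.put("/c", new FileHandle("/c")); // over capacity: evicts /b
        System.out.println(cache.containsKey("/b")); // prints "false"
        System.out.println(cache.containsKey("/a")); // prints "true"
    }
}
```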



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7698) Fix locking on HDFS read statistics and add a method for clearing them.

2015-01-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14296332#comment-14296332
 ] 

Hadoop QA commented on HDFS-7698:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12695172/HDFS-7698.002.patch
  against trunk revision 5a0051f.

{color:red}-1 @author{color}.  The patch appears to contain  @author tags 
which the Hadoop community has agreed to not allow in code contributions.

{color:green}+1 tests included{color}.  The patch appears to include  new 
or modified test files.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9368//console

This message is automatically generated.

 Fix locking on HDFS read statistics and add a method for clearing them.
 ---

 Key: HDFS-7698
 URL: https://issues.apache.org/jira/browse/HDFS-7698
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7698.002.patch


 Fix locking on HDFS read statistics and add a method for clearing them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

