[jira] [Commented] (HDFS-8726) Move protobuf files that define the client-sever protocols to hdfs-client

2015-07-08 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618188#comment-14618188
 ] 

Yi Liu commented on HDFS-8726:
--

+1, pending Jenkins.

 Move protobuf files that define the client-sever protocols to hdfs-client
 -

 Key: HDFS-8726
 URL: https://issues.apache.org/jira/browse/HDFS-8726
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-8726.000.patch, HDFS-8726.001.patch


 The protobuf files that define the RPC protocols between the HDFS clients 
 and servers currently sit in the hdfs package. They should be moved to the 
 hdfs-client package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8712) Remove public and abstract modifiers in FsVolumeSpi and FsDatasetSpi

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618428#comment-14618428
 ] 

Hudson commented on HDFS-8712:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8130 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8130/])
HDFS-8712. Remove 'public' and 'abstract' modifiers in FsVolumeSpi and 
FsDatasetSpi (Contributed by Lei (Eddy) Xu) (vinayakumarb: rev 
bd4e10900cc53a2768c31cc29fdb3698684bc2a0)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java


 Remove public and abstract modifiers in FsVolumeSpi and FsDatasetSpi
 

 Key: HDFS-8712
 URL: https://issues.apache.org/jira/browse/HDFS-8712
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8712.000.patch, HDFS-8712.001.patch


 In [Java Language Specification 
 9.4|http://docs.oracle.com/javase/specs/jls/se7/html/jls-9.html#jls-9.4]:
 bq. It is permitted, but discouraged as a matter of style, to redundantly 
 specify the public and/or abstract modifier for a method declared in an 
 interface.
 {{FsDatasetSpi}} and {{FsVolumeSpi}} mark methods as public, which causes many 
 warnings in IDEs and {{checkstyle}}.
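 For illustration, a minimal sketch of the cleanup being described (the interface and 
 method names below are hypothetical, not the actual patch):
 {code}
 // Before: redundant modifiers, flagged by checkstyle and IDEs.
 public interface VolumeExample {
   public abstract String getStorageLocation();
 }

 // After: interface methods are implicitly public and abstract (JLS 9.4),
 // so the modifiers can simply be dropped.
 interface VolumeExampleCleaned {
   String getStorageLocation();
 }
 {code}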



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8726) Move protobuf files that define the client-sever protocols to hdfs-client

2015-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618429#comment-14618429
 ] 

Hadoop QA commented on HDFS-8726:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  19m  5s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 37s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 40s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 49s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   5m  7s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 17s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 158m 39s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 28s | Tests passed in 
hadoop-hdfs-client. |
| {color:green}+1{color} | hdfs tests |   3m 56s | Tests passed in bkjournal. |
| | | 213m 14s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestAppendSnapshotTruncate |
|   | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyIsHot |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12744159/HDFS-8726.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / d632574 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11618/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11618/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11618/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| bkjournal test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11618/artifact/patchprocess/testrun_bkjournal.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11618/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11618/console |


This message was automatically generated.

 Move protobuf files that define the client-sever protocols to hdfs-client
 -

 Key: HDFS-8726
 URL: https://issues.apache.org/jira/browse/HDFS-8726
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-8726.000.patch, HDFS-8726.001.patch


 The protobuf files that define the RPC protocols between the HDFS clients 
 and servers currently sit in the hdfs package. They should be moved to the 
 hdfs-client package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8259) Erasure Coding: System Test of reading EC file

2015-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618434#comment-14618434
 ] 

Hadoop QA commented on HDFS-8259:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |   5m 47s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 4 new or modified test files. |
| {color:green}+1{color} | javac |   7m 42s | There were no new javac warning 
messages. |
| {color:red}-1{color} | release audit |   0m 13s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 38s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 31s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 31s | The patch appears to introduce 4 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m 20s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  68m 33s | Tests failed in hadoop-hdfs. |
| | |  89m 58s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.TestFileAppend4 |
|   | hadoop.hdfs.TestRead |
|   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshot |
|   | hadoop.hdfs.server.namenode.TestSecondaryWebUi |
|   | hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional |
|   | hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd |
|   | hadoop.hdfs.server.namenode.ha.TestHAStateTransitions |
|   | hadoop.hdfs.server.datanode.TestRefreshNamenodes |
|   | hadoop.hdfs.TestHdfsAdmin |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength |
|   | hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler |
|   | hadoop.hdfs.TestClientReportBadBlock |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.hdfs.server.namenode.TestAuditLogger |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy |
|   | hadoop.hdfs.server.blockmanagement.TestDatanodeManager |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.TestAppendSnapshotTruncate |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
|   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
|   | hadoop.hdfs.server.namenode.TestNameNodeRpcServer |
|   | hadoop.hdfs.TestSafeModeWithStripedFile |
|   | hadoop.hdfs.TestFileAppendRestart |
|   | hadoop.hdfs.server.namenode.TestSecondaryNameNodeUpgrade |
|   | hadoop.cli.TestErasureCodingCLI |
|   | hadoop.hdfs.server.namenode.snapshot.TestSetQuotaWithSnapshot |
|   | hadoop.hdfs.server.namenode.ha.TestHAFsck |
|   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
|   | hadoop.hdfs.protocol.TestBlockListAsLongs |
|   | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot |
|   | hadoop.hdfs.server.namenode.TestHDFSConcat |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.TestRefreshCallQueue |
|   | hadoop.hdfs.TestListFilesInDFS |
|   | hadoop.hdfs.server.namenode.TestAddBlock |
|   | hadoop.hdfs.server.datanode.TestDnRespectsBlockReportSplitThreshold |
|   | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.server.namenode.TestNameEditsConfigs |
|   | hadoop.hdfs.TestMiniDFSCluster |
|   | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.security.TestPermissionSymlinks |
|   | hadoop.hdfs.TestDFSRollback |
|   | hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots |
|   | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica |
|   | hadoop.hdfs.TestFileConcurrentReader |
|   | hadoop.hdfs.TestFileAppend2 |
|   | hadoop.hdfs.server.datanode.TestDataNodeExit |
|   | hadoop.hdfs.server.namenode.ha.TestXAttrsWithHA |
|   | hadoop.hdfs.TestGetFileChecksum |
|   | hadoop.hdfs.server.namenode.TestSnapshotPathINodes |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.security.TestRefreshUserMappings |
|   | 

[jira] [Commented] (HDFS-8719) Erasure Coding: client generates too many small packets when writing parity data

2015-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618445#comment-14618445
 ] 

Hadoop QA commented on HDFS-8719:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m 20s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 43s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 38s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 27s | The patch appears to introduce 4 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 16s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 173m 23s | Tests failed in hadoop-hdfs. |
| | | 215m 50s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestEncryptedTransfer |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12744163/HDFS-8719-HDFS-7285-001.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 2c494a8 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11619/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11619/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11619/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11619/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11619/console |


This message was automatically generated.

 Erasure Coding: client generates too many small packets when writing parity 
 data
 

 Key: HDFS-8719
 URL: https://issues.apache.org/jira/browse/HDFS-8719
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8719-001.patch, HDFS-8719-HDFS-7285-001.patch, 
 HDFS-8719-HDFS-7285-002.patch


 Typically a packet is about 64K, but when writing parity data, many small 
 packets with size 512 bytes are generated. This may slow the write speed and 
 increase the network IO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8642) Improve TestFileTruncate#setup by deleting the snapshots

2015-07-08 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8642:
---
Attachment: (was: HDFS-8642-03.patch)

 Improve TestFileTruncate#setup by deleting the snapshots
 

 Key: HDFS-8642
 URL: https://issues.apache.org/jira/browse/HDFS-8642
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor
 Attachments: HDFS-8642-00.patch, HDFS-8642-01.patch, 
 HDFS-8642-02.patch


 I've observed that the {{TestFileTruncate#setup()}} function has to be improved by 
 making it more independent. Presently, a failure in any of the snapshot-related tests 
 will affect all the subsequent unit test cases. One such error has 
 been observed in the 
 [Hadoop-Hdfs-trunk-2163|https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart]
 {code}
 https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart/
 org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted 
 since /test is snapshottable and already has snapshots
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3046)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:939)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2172)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2168)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2166)
   at org.apache.hadoop.ipc.Client.call(Client.java:1440)
   at org.apache.hadoop.ipc.Client.call(Client.java:1371)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
   at com.sun.proxy.$Proxy22.delete(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
   at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
   at com.sun.proxy.$Proxy23.delete(Unknown Source)
   at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1711)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)
 {code}
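 A minimal sketch of the kind of setup cleanup being discussed (the path, snapshot 
 name, and helper below are hypothetical, not the actual patch):
 {code}
 import java.io.IOException;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.DistributedFileSystem;

 class TruncateTestCleanup {
   /** Remove a leftover snapshot so the test directory can be deleted. */
   static void cleanTestDir(DistributedFileSystem fs, Path parent,
       String snapshotName) throws IOException {
     if (fs.exists(parent)) {
       try {
         fs.deleteSnapshot(parent, snapshotName);
       } catch (IOException ignored) {
         // No snapshot with that name; nothing to clean up.
       }
       fs.delete(parent, true); // delete can succeed once the snapshot is gone
     }
   }
 }
 {code}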



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8712) Remove public and abstract modifiers in FsVolumeSpi and FsDatasetSpi

2015-07-08 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8712:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.
Thanks [~eddyxu] for the contribution.
Thanks [~andrew.wang] for review.

 Remove public and abstract modifiers in FsVolumeSpi and FsDatasetSpi
 

 Key: HDFS-8712
 URL: https://issues.apache.org/jira/browse/HDFS-8712
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8712.000.patch, HDFS-8712.001.patch


 In [Java Language Specification 
 9.4|http://docs.oracle.com/javase/specs/jls/se7/html/jls-9.html#jls-9.4]:
 bq. It is permitted, but discouraged as a matter of style, to redundantly 
 specify the public and/or abstract modifier for a method declared in an 
 interface.
 {{FsDatasetSpi}} and {{FsVolumeSpi}} mark methods as public, which causes many 
 warnings in IDEs and {{checkstyle}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8642) Improve TestFileTruncate#setup by deleting the snapshots

2015-07-08 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8642:
---
Attachment: HDFS-8642-03.patch

 Improve TestFileTruncate#setup by deleting the snapshots
 

 Key: HDFS-8642
 URL: https://issues.apache.org/jira/browse/HDFS-8642
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor
 Attachments: HDFS-8642-00.patch, HDFS-8642-01.patch, 
 HDFS-8642-02.patch, HDFS-8642-03.patch


 I've observed that the {{TestFileTruncate#setup()}} function has to be improved by 
 making it more independent. Presently, a failure in any of the snapshot-related tests 
 will affect all the subsequent unit test cases. One such error has 
 been observed in the 
 [Hadoop-Hdfs-trunk-2163|https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart]
 {code}
 https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart/
 org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted 
 since /test is snapshottable and already has snapshots
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3046)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:939)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2172)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2168)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2166)
   at org.apache.hadoop.ipc.Client.call(Client.java:1440)
   at org.apache.hadoop.ipc.Client.call(Client.java:1371)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
   at com.sun.proxy.$Proxy22.delete(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
   at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
   at com.sun.proxy.$Proxy23.delete(Unknown Source)
   at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1711)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8563) Erasure Coding: fsck handles file smaller than a full stripe

2015-07-08 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618405#comment-14618405
 ] 

Walter Su commented on HDFS-8563:
-

Uploaded the 02 patch to address Jing's comments.

 Erasure Coding: fsck handles file smaller than a full stripe
 

 Key: HDFS-8563
 URL: https://issues.apache.org/jira/browse/HDFS-8563
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-8563-HDFS-7285.01.patch


 Uploaded a small file. Fsck shows it as UNRECOVERABLE, which is not correct.
 {noformat}
 Erasure Coded Block Groups:
  Total size:1366 B
  Total files:   1
  Total block groups (validated):1 (avg. block group size 1366 B)
   
   UNRECOVERABLE BLOCK GROUPS:   1 (100.0 %)
   MIN REQUIRED EC BLOCK:6
   
  Minimally erasure-coded block groups:  0 (0.0 %)
  Over-erasure-coded block groups:   0 (0.0 %)
  Under-erasure-coded block groups:  1 (100.0 %)
  Unsatisfactory placement block groups: 0 (0.0 %)
  Default schema:RS-6-3
  Average block group size:  4.0
  Missing block groups:  0
  Corrupt block groups:  0
  Missing ec-blocks: 5 (55.57 %)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8711) setSpaceQuota command should print the available storage type when input storage type is wrong

2015-07-08 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8711:

Issue Type: Improvement  (was: Bug)

 setSpaceQuota command should print the available storage type when input 
 storage type is wrong
 --

 Key: HDFS-8711
 URL: https://issues.apache.org/jira/browse/HDFS-8711
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore
  Labels: reviewed
 Attachments: HDFS-8711-01.patch, HDFS-8711.patch


 If the input storage type is wrong, *setSpaceQuota* currently gives an exception 
 like this.
 {code}
 ./hdfs dfsadmin -setSpaceQuota 1000 -storageType COLD /testDir
  setSpaceQuota: No enum constant org.apache.hadoop.fs.StorageType.COLD
 {code}
 It should be 
 {code}
 setSpaceQuota: Storage type COLD not available. Available storage type are 
 [SSD, DISK, ARCHIVE]
 {code}
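 One hedged way to produce such a message (a sketch under assumptions, not 
 necessarily how the attached patch does it) is to catch the enum-parsing failure 
 and list the valid constants:
 {code}
 import java.util.Arrays;
 import org.apache.hadoop.fs.StorageType;

 class StorageTypeArg {
   /** Parse a -storageType argument, turning an invalid value into a clear error. */
   static StorageType parse(String name) {
     try {
       return StorageType.valueOf(name.toUpperCase());
     } catch (IllegalArgumentException e) {
       throw new IllegalArgumentException("Storage type " + name
           + " not available. Available storage types are "
           + Arrays.toString(StorageType.values()));
     }
   }
 }
 {code}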



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8711) setSpaceQuota command should print the available storage type when input storage type is wrong

2015-07-08 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618409#comment-14618409
 ] 

Vinayakumar B commented on HDFS-8711:
-

Cherry-picked this to branch-2 as well, to keep the diff between trunk and 
branch-2 minimal.

 setSpaceQuota command should print the available storage type when input 
 storage type is wrong
 --

 Key: HDFS-8711
 URL: https://issues.apache.org/jira/browse/HDFS-8711
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore
  Labels: reviewed
 Attachments: HDFS-8711-01.patch, HDFS-8711.patch


 If the input storage type is wrong, *setSpaceQuota* currently gives an exception 
 like this.
 {code}
 ./hdfs dfsadmin -setSpaceQuota 1000 -storageType COLD /testDir
  setSpaceQuota: No enum constant org.apache.hadoop.fs.StorageType.COLD
 {code}
 It should be 
 {code}
 setSpaceQuota: Storage type COLD not available. Available storage type are 
 [SSD, DISK, ARCHIVE]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8711) setSpaceQuota command should print the available storage type when input storage type is wrong

2015-07-08 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8711:

Fix Version/s: 2.8.0

 setSpaceQuota command should print the available storage type when input 
 storage type is wrong
 --

 Key: HDFS-8711
 URL: https://issues.apache.org/jira/browse/HDFS-8711
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore
  Labels: reviewed
 Fix For: 2.8.0

 Attachments: HDFS-8711-01.patch, HDFS-8711.patch


 If the input storage type is wrong, *setSpaceQuota* currently gives an exception 
 like this.
 {code}
 ./hdfs dfsadmin -setSpaceQuota 1000 -storageType COLD /testDir
  setSpaceQuota: No enum constant org.apache.hadoop.fs.StorageType.COLD
 {code}
 It should be 
 {code}
 setSpaceQuota: Storage type COLD not available. Available storage type are 
 [SSD, DISK, ARCHIVE]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8735) Inotify : All events classes should implement toString() API.

2015-07-08 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-8735:
-
Issue Type: Improvement  (was: Bug)

 Inotify : All events classes should implement toString() API.
 -

 Key: HDFS-8735
 URL: https://issues.apache.org/jira/browse/HDFS-8735
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore

 Event classes are used by clients, so it's good to implement the toString() API.
 {code}
 for(Event event : events){
   System.out.println(event.toString());
 }
 {code}
 This will give output like this
 {code}
 org.apache.hadoop.hdfs.inotify.Event$CreateEvent@6916d97d
 {code}
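 A hedged sketch of the kind of {{toString()}} being requested (the class and 
 fields below are illustrative, not the real inotify Event API):
 {code}
 // Sketch only: give an event class a readable toString() instead of the
 // default ClassName@hashCode form shown above.
 class ExampleCreateEvent {
   private final String path;
   private final long ctime;

   ExampleCreateEvent(String path, long ctime) {
     this.path = path;
     this.ctime = ctime;
   }

   @Override
   public String toString() {
     return "CreateEvent [path=" + path + ", ctime=" + ctime + "]";
   }
 }
 {code}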



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8642) Improve TestFileTruncate#setup by deleting the snapshots

2015-07-08 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618426#comment-14618426
 ] 

Rakesh R commented on HDFS-8642:


OK, I got it. Thanks [~arpitagarwal] for the explanation. Attached another patch 
that starts and shuts down the cluster for each test case.

 Improve TestFileTruncate#setup by deleting the snapshots
 

 Key: HDFS-8642
 URL: https://issues.apache.org/jira/browse/HDFS-8642
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor
 Attachments: HDFS-8642-00.patch, HDFS-8642-01.patch, 
 HDFS-8642-02.patch


 I've observed that the {{TestFileTruncate#setup()}} function has to be improved by 
 making it more independent. Presently, a failure in any of the snapshot-related tests 
 will affect all the subsequent unit test cases. One such error has 
 been observed in the 
 [Hadoop-Hdfs-trunk-2163|https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart]
 {code}
 https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart/
 org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted 
 since /test is snapshottable and already has snapshots
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3046)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:939)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2172)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2168)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2166)
   at org.apache.hadoop.ipc.Client.call(Client.java:1440)
   at org.apache.hadoop.ipc.Client.call(Client.java:1371)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
   at com.sun.proxy.$Proxy22.delete(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
   at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
   at com.sun.proxy.$Proxy23.delete(Unknown Source)
   at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1711)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8719) Erasure Coding: client generates too many small packets when writing parity data

2015-07-08 Thread Li Bo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618249#comment-14618249
 ] 

Li Bo commented on HDFS-8719:
-

Yes, when starting a parity cell, the {{streamer}} used in 
{{adjustChunkBoundary}} is the previous one. I think we need to update 
{{chunksPerPacket}} after the streamer is updated. Please see my new patch.

 Erasure Coding: client generates too many small packets when writing parity 
 data
 

 Key: HDFS-8719
 URL: https://issues.apache.org/jira/browse/HDFS-8719
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8719-001.patch, HDFS-8719-HDFS-7285-001.patch


 Typically a packet is about 64K, but when writing parity data, many small 
 packets with size 512 bytes are generated. This may slow the write speed and 
 increase the network IO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8719) Erasure Coding: client generates too many small packets when writing parity data

2015-07-08 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-8719:

Attachment: HDFS-8719-HDFS-7285-002.patch

 Erasure Coding: client generates too many small packets when writing parity 
 data
 

 Key: HDFS-8719
 URL: https://issues.apache.org/jira/browse/HDFS-8719
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8719-001.patch, HDFS-8719-HDFS-7285-001.patch, 
 HDFS-8719-HDFS-7285-002.patch


 Typically a packet is about 64K, but when writing parity data, many small 
 packets with size 512 bytes are generated. This may slow the write speed and 
 increase the network IO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8734) Erasure Coding: one cell need two packets

2015-07-08 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8734:

Attachment: HDFS-8734.01.patch

The solution is simple. I take streamer#0 (index=0) as an example:
(top 64512 of cell_0) -- packet_0
(last 1024 of cell_0) + (top 63488 of cell_1) -- packet_1
and so on.

 Erasure Coding: one cell need two packets
 -

 Key: HDFS-8734
 URL: https://issues.apache.org/jira/browse/HDFS-8734
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-8734.01.patch


 The default WritePacketSize is 64k, and the current default cellSize is also 64k.
 We hope one cell consumes one packet; in fact it doesn't.
 By default,
 chunkSize = 516 (512 data + 4 checksum)
 packetSize = 64k
 chunksPerPacket = 126 (see DFSOutputStream#computePacketChunkSize for 
 details)
 numBytes of data in one packet = 64512
 cellSize = 65536
 When the first packet is full (with 64512 bytes of data), there are still 65536 - 64512 = 
 1024 bytes of the cell left.
 {code}
 super.writeChunk(bytes, offset, len, checksum, ckoff, cklen);
 // cell is full and current packet has not been enqueued,
 if (cellFull && currentPacket != null) {
   enqueueCurrentPacketFull();
 }   
 {code}
 When the last 1024 bytes of the cell are written, we hit {{cellFull}} and 
 create another packet.
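 The arithmetic above can be sanity-checked with a small sketch (the 33-byte packet 
 header overhead is an assumption standing in for PacketHeader.PKT_MAX_HEADER_LEN; 
 the other constants come from the description):
 {code}
 public class CellPacketMath {
   public static void main(String[] args) {
     int bytesPerChecksum = 512;                 // data bytes per checksum chunk
     int checksumSize = 4;
     int chunkSize = bytesPerChecksum + checksumSize;      // 516
     int packetSize = 64 * 1024;                 // 64k write packet size
     int cellSize = 64 * 1024;                   // 64k EC cell size
     int packetHeaderLen = 33;                   // assumed header overhead

     int chunksPerPacket = (packetSize - packetHeaderLen) / chunkSize;   // 126
     int dataBytesPerPacket = chunksPerPacket * bytesPerChecksum;        // 64512
     int leftover = cellSize - dataBytesPerPacket;                       // 1024

     System.out.println(chunksPerPacket + " chunks/packet, "
         + dataBytesPerPacket + " data bytes/packet, "
         + leftover + " bytes of each cell spill into a second packet");
   }
 }
 {code}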



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8726) Move protobuf files that define the client-sever protocols to hdfs-client

2015-07-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8726:
-
Attachment: HDFS-8726.001.patch

 Move protobuf files that define the client-sever protocols to hdfs-client
 -

 Key: HDFS-8726
 URL: https://issues.apache.org/jira/browse/HDFS-8726
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-8726.000.patch, HDFS-8726.001.patch


 The protobuf files that define the RPC protocols between the HDFS clients 
 and servers currently sit in the hdfs package. They should be moved to the 
 hdfs-client package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8719) Erasure Coding: client generates too many small packets when writing parity data

2015-07-08 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618363#comment-14618363
 ] 

Walter Su commented on HDFS-8719:
-

The 02 patch does adjustChunkBoundary() every time before switching streamers. It's 
a simple but smart approach.

{code}
+adjustChunkBoundary();
{code}
This line matters. Please remove the overridden function 
{{adjustChunkBoundary()}}. It's no longer needed.

 Erasure Coding: client generates too many small packets when writing parity 
 data
 

 Key: HDFS-8719
 URL: https://issues.apache.org/jira/browse/HDFS-8719
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8719-001.patch, HDFS-8719-HDFS-7285-001.patch, 
 HDFS-8719-HDFS-7285-002.patch


 Typically a packet is about 64K, but when writing parity data, many small 
 packets with size 512 bytes are generated. This may slow the write speed and 
 increase the network IO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8733) Keep server related definition in hdfs.proto on server side

2015-07-08 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu reassigned HDFS-8733:


Assignee: Yi Liu

 Keep server related definition in hdfs.proto on server side
 ---

 Key: HDFS-8733
 URL: https://issues.apache.org/jira/browse/HDFS-8733
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Yi Liu
Assignee: Yi Liu





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8733) Keep server related definition in hdfs.proto on server side

2015-07-08 Thread Yi Liu (JIRA)
Yi Liu created HDFS-8733:


 Summary: Keep server related definition in hdfs.proto on server 
side
 Key: HDFS-8733
 URL: https://issues.apache.org/jira/browse/HDFS-8733
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Yi Liu






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8642) Improve TestFileTruncate#setup by deleting the snapshots

2015-07-08 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8642:
---
Attachment: HDFS-8642-02.patch

 Improve TestFileTruncate#setup by deleting the snapshots
 

 Key: HDFS-8642
 URL: https://issues.apache.org/jira/browse/HDFS-8642
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor
 Attachments: HDFS-8642-00.patch, HDFS-8642-01.patch, 
 HDFS-8642-02.patch


 I've observed that the {{TestFileTruncate#setup()}} function has to be improved by 
 making it more independent. Presently, a failure in any of the snapshot-related tests 
 will affect all the subsequent unit test cases. One such error has 
 been observed in the 
 [Hadoop-Hdfs-trunk-2163|https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart]
 {code}
 https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart/
 org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted 
 since /test is snapshottable and already has snapshots
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3046)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:939)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2172)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2168)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2166)
   at org.apache.hadoop.ipc.Client.call(Client.java:1440)
   at org.apache.hadoop.ipc.Client.call(Client.java:1371)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
   at com.sun.proxy.$Proxy22.delete(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
   at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
   at com.sun.proxy.$Proxy23.delete(Unknown Source)
   at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1711)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8711) setSpaceQuota command should print the available storage type when input storage type is wrong

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618382#comment-14618382
 ] 

Hudson commented on HDFS-8711:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #250 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/250/])
HDFS-8711. setSpaceQuota command should print the available storage type when 
input storage type is wrong. Contributed by Brahma Reddy Battula. (xyao: rev 
b68701b7b2a9597b4183e0ba19b1551680d543a1)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java


 setSpaceQuota command should print the available storage type when input 
 storage type is wrong
 --

 Key: HDFS-8711
 URL: https://issues.apache.org/jira/browse/HDFS-8711
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore
  Labels: reviewed
 Attachments: HDFS-8711-01.patch, HDFS-8711.patch


 If the input storage type is wrong, *setSpaceQuota* currently gives an exception 
 like this.
 {code}
 ./hdfs dfsadmin -setSpaceQuota 1000 -storageType COLD /testDir
  setSpaceQuota: No enum constant org.apache.hadoop.fs.StorageType.COLD
 {code}
 It should be 
 {code}
 setSpaceQuota: Storage type COLD not available. Available storage type are 
 [SSD, DISK, ARCHIVE]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8620) Clean up the checkstyle warinings about ClientProtocol

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618374#comment-14618374
 ] 

Hudson commented on HDFS-8620:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #250 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/250/])
HDFS-8620. Clean up the checkstyle warinings about ClientProtocol. Contributed 
by Takanobu Asanuma. (wheat9: rev c0b8e4e5b5083631ed22d8d36c8992df7d34303c)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Clean up the checkstyle warinings about ClientProtocol
 --

 Key: HDFS-8620
 URL: https://issues.apache.org/jira/browse/HDFS-8620
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma
 Fix For: 2.8.0

 Attachments: HDFS-8620.1.patch, HDFS-8620.2.patch, HDFS-8620.3.patch, 
 HDFS-8620.4.patch


 These warnings were generated in HDFS-8238.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8652) Track BlockInfo instead of Block in CorruptReplicasMap

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618377#comment-14618377
 ] 

Hudson commented on HDFS-8652:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #250 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/250/])
Revert HDFS-8652. Track BlockInfo instead of Block in CorruptReplicasMap. 
Contributed by Jing Zhao. (jing9: rev bc99aaffe7b0ed13b1efc37b6a32cdbd344c2d75)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestCorruptReplicaInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ContiguousBlockStorageOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CorruptReplicasMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstructionContiguous.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java


 Track BlockInfo instead of Block in CorruptReplicasMap
 --

 Key: HDFS-8652
 URL: https://issues.apache.org/jira/browse/HDFS-8652
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 2.8.0

 Attachments: HDFS-8652.000.patch, HDFS-8652.001.patch, 
 HDFS-8652.002.patch


 Currently {{CorruptReplicasMap}} uses {{Block}} as its key and records the 
 list of DataNodes with corrupted replicas. For Erasure Coding since a striped 
 block group contains multiple internal blocks with different block ID, we 
 should use {{BlockInfo}} as the key.
 HDFS-8619 is the jira to fix this for EC. To ease merging we will use jira to 
 first make changes in trunk/branch-2.
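 A hedged illustration of the key-type change being described (a generic sketch, 
 not the actual CorruptReplicasMap internals):
 {code}
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Map;
 import java.util.Set;

 // Sketch: key the corrupt-replica bookkeeping by the BlockInfo object rather
 // than the raw Block, so a striped block group (whose internal blocks carry
 // different IDs) maps to a single entry. Type parameters are placeholders.
 class CorruptReplicasSketch<B, D> {
   private final Map<B, Set<D>> corruptReplicas = new HashMap<B, Set<D>>();

   void addCorruptReplica(B blockInfo, D datanode) {
     Set<D> nodes = corruptReplicas.get(blockInfo);
     if (nodes == null) {
       nodes = new HashSet<D>();
       corruptReplicas.put(blockInfo, nodes);
     }
     nodes.add(datanode);
   }
 }
 {code}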



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8735) Inotify : All events classes should implement toString() API.

2015-07-08 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HDFS-8735:


 Summary: Inotify : All events classes should implement toString() 
API.
 Key: HDFS-8735
 URL: https://issues.apache.org/jira/browse/HDFS-8735
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore


Event classes are used by clients, so it's good to implement the toString() API.
{code}
for(Event event : events){
System.out.println(event.toString());
}
{code}

This will give output like this

{code}
org.apache.hadoop.hdfs.inotify.Event$CreateEvent@6916d97d
{code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8726) Move protobuf files that define the client-sever protocols to hdfs-client

2015-07-08 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618149#comment-14618149
 ] 

Haohui Mai commented on HDFS-8726:
--

Thanks for the review. Yes I think it is a good idea to file a follow-up jira 
to clean up hdfs.proto and keep relevant definitions on the server side.

 Move protobuf files that define the client-sever protocols to hdfs-client
 -

 Key: HDFS-8726
 URL: https://issues.apache.org/jira/browse/HDFS-8726
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-8726.000.patch, HDFS-8726.001.patch


 The protobuf files that define the RPC protocols between the HDFS clients 
 and servers currently sit in the hdfs package. They should be moved to the 
 hdfs-client package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8732) Erasure Coding: Fail to read a file with corrupted blocks

2015-07-08 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618194#comment-14618194
 ] 

Walter Su commented on HDFS-8732:
-

It's different from HDFS-8602. HDFS-8602 corrupts the block by changing the block size. 
This jira overwrites some bytes of the block to cause a checksum exception.

 Erasure Coding: Fail to read a file with corrupted blocks
 -

 Key: HDFS-8732
 URL: https://issues.apache.org/jira/browse/HDFS-8732
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xinwei Qin 

 In the system test of reading an EC file (HDFS-8259), the methods 
 {{testReadCorruptedData*()}} failed to read an EC file with corrupted 
 blocks (some data in several blocks is overwritten, which makes the client get a 
 checksum exception). 
 Exception logs:
 {code}
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream$StatefulStripeReader.readChunk(DFSStripedInputStream.java:771)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readStripe(DFSStripedInputStream.java:623)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:335)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:465)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:946)
 at java.io.DataInputStream.read(DataInputStream.java:149)
 at 
 org.apache.hadoop.hdfs.StripedFileTestUtil.verifyStatefulRead(StripedFileTestUtil.java:98)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.verifyRead(TestReadStripedFileWithDecoding.java:196)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testOneFileWithBlockCorrupted(TestReadStripedFileWithDecoding.java:246)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testReadCorruptedData11(TestReadStripedFileWithDecoding.java:114)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8732) Erasure Coding: Fail to read a file with corrupted blocks

2015-07-08 Thread Xinwei Qin (JIRA)
Xinwei Qin  created HDFS-8732:
-

 Summary: Erasure Coding: Fail to read a file with corrupted blocks
 Key: HDFS-8732
 URL: https://issues.apache.org/jira/browse/HDFS-8732
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xinwei Qin 


In the system test of reading EC files (HDFS-8259), the methods 
{{testReadCorruptedData*()}} failed to read an EC file with corrupted 
blocks (some data in several blocks was overwritten, which makes the client get a 
checksum exception). 

Exception logs:
{code}
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.DFSStripedInputStream$StatefulStripeReader.readChunk(DFSStripedInputStream.java:771)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readStripe(DFSStripedInputStream.java:623)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:335)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:465)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:946)
at java.io.DataInputStream.read(DataInputStream.java:149)
at 
org.apache.hadoop.hdfs.StripedFileTestUtil.verifyStatefulRead(StripedFileTestUtil.java:98)
at 
org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.verifyRead(TestReadStripedFileWithDecoding.java:196)
at 
org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testOneFileWithBlockCorrupted(TestReadStripedFileWithDecoding.java:246)
at 
org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testReadCorruptedData11(TestReadStripedFileWithDecoding.java:114)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8732) Erasure Coding: Fail to read a file with corrupted blocks

2015-07-08 Thread Xinwei Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618199#comment-14618199
 ] 

Xinwei Qin  commented on HDFS-8732:
---

Hi, [~hitliuyi], I noticed HDFS-8602 resolved a similar problem, but it does not fix the 
issue in this jira.
Thanks [~walter.k.su] for clarifying.

The error log in HDFS-8602:
{code}
2015-07-08 16:19:04,742 ERROR datanode.DataNode 
(BlockSender.java:sendPacket(615)) - BlockSender.sendChunks() exception: 
java.io.EOFException: EOF Reached. file size is 10 and 65526 more bytes left to 
be transfered.
at 
org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:228)
at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:585)
at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:765)
at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:712)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:556)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:256)
at java.lang.Thread.run(Thread.java:722)
{code}

and the error and warning logs of this jira:
{code}
2015-07-08 15:05:13,455 WARN hdfs.DFSClient 
(DFSInputStream.java:actualGetFromOneDataNode(1203)) - fetchBlockByteRange(). 
Got a checksum exception for /partially_corrupted_1_0 at 
BP-1928182115-9.96.1.31-1436339108502:blk_-9223372036854775792_1001:13824 from 
DatanodeInfoWithStorage[127.0.0.1:36871,DS-cfab070a-8983-4c61-8647-eb0526df31c9,DISK]
2015-07-08 15:05:13,457 WARN hdfs.DFSClient 
(StripedBlockUtil.java:getNextCompletedStripedRead(215)) - ExecutionException 
java.util.concurrent.ExecutionException: java.io.IOException: 
fetchBlockByteRange(). Got a checksum exception for /partially_corrupted_1_0 at 
BP-1928182115-9.96.1.31-1436339108502:blk_-9223372036854775792_1001:13824 from 
DatanodeInfoWithStorage[127.0.0.1:36871,DS-cfab070a-8983-4c61-8647-eb0526df31c9,DISK]
2015-07-08 15:05:13,560 INFO hdfs.StateChange 
(FSNamesystem.java:reportBadBlocks(5783)) - *DIR* reportBadBlocks
2015-07-08 15:05:13,561 INFO BlockStateChange 
(CorruptReplicasMap.java:addToCorruptReplicasMap(76)) - BLOCK 
NameSystem.addToCorruptReplicasMap: blk_-9223372036854775792 added as corrupt 
on 127.0.0.1:36871 by /127.0.0.1 because client machine reported it
2015-07-08 15:05:13,690 WARN hdfs.DFSClient 
(DFSInputStream.java:actualGetFromOneDataNode(1203)) - fetchBlockByteRange(). 
Got a checksum exception for /partially_corrupted_1_0 at 
BP-1928182115-9.96.1.31-1436339108502:blk_-9223372036854775792_1001:13824 from 
DatanodeInfoWithStorage[127.0.0.1:36871,DS-cfab070a-8983-4c61-8647-eb0526df31c9,DISK]
2015-07-08 15:05:13,693 WARN hdfs.DFSClient 
(StripedBlockUtil.java:getNextCompletedStripedRead(215)) - ExecutionException 
java.util.concurrent.ExecutionException: java.io.IOException: 
fetchBlockByteRange(). Got a checksum exception for /partially_corrupted_1_0 at 
BP-1928182115-9.96.1.31-1436339108502:blk_-9223372036854775792_1001:13824 from 
DatanodeInfoWithStorage[127.0.0.1:36871,DS-cfab070a-8983-4c61-8647-eb0526df31c9,DISK]
2015-07-08 15:05:13,705 INFO hdfs.StateChange 
(FSNamesystem.java:reportBadBlocks(5783)) - *DIR* reportBadBlocks
2015-07-08 15:05:13,706 INFO BlockStateChange 
(CorruptReplicasMap.java:addToCorruptReplicasMap(81)) - BLOCK 
NameSystem.addToCorruptReplicasMap: duplicate requested for 
blk_-9223372036854775792 to add as corrupt on 127.0.0.1:36871 by /127.0.0.1 
because client machine reported it
2015-07-08 15:05:14,033 INFO FSNamesystem.audit 
(FSNamesystem.java:logAuditMessage(7816)) - allowed=true   ugi=root 
(auth:SIMPLE)  ip=/127.0.0.1   cmd=open   src=/partially_corrupted_1_0
dst=null   perm=null   proto=rpc
2015-07-08 15:05:14,049 INFO hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdown(1728)) - Shutting down the Mini HDFS Cluster
{code}

 Erasure Coding: Fail to read a file with corrupted blocks
 -

 Key: HDFS-8732
 URL: https://issues.apache.org/jira/browse/HDFS-8732
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xinwei Qin 
Assignee: Walter Su

 In the system test of reading EC files (HDFS-8259), the methods 
 {{testReadCorruptedData*()}} failed to read an EC file with corrupted 
 blocks (some data in several blocks was overwritten, which makes the client get a 
 checksum exception). 
 Exception logs:
 {code}
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream$StatefulStripeReader.readChunk(DFSStripedInputStream.java:771)
 at 
 

[jira] [Updated] (HDFS-8259) Erasure Coding: System Test of reading EC file

2015-07-08 Thread Xinwei Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinwei Qin  updated HDFS-8259:
--
Status: Patch Available  (was: Open)

 Erasure Coding: System Test of reading EC file
 --

 Key: HDFS-8259
 URL: https://issues.apache.org/jira/browse/HDFS-8259
 Project: Hadoop HDFS
  Issue Type: Test
Affects Versions: HDFS-7285
Reporter: GAO Rui
Assignee: Xinwei Qin 
 Attachments: HDFS-8259-HDFS-7285.001.patch


 1. Normally reading EC file(reading without datanote failure and no need of 
 recovery)
 2. Reading EC file with datanode failure.
 3. Reading EC file with data block recovery by decoding from parity blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8734) Erasure Coding: one cell need two packets

2015-07-08 Thread Walter Su (JIRA)
Walter Su created HDFS-8734:
---

 Summary: Erasure Coding: one cell need two packets
 Key: HDFS-8734
 URL: https://issues.apache.org/jira/browse/HDFS-8734
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su


The default WritePacketSize is 64k.
Currently the default cellSize is also 64k.

We hope one cell consumes one packet. In fact it does not.

By default,
chunkSize = 516 (512 data + 4 checksum)
packetSize = 64k
chunksPerPacket = 126 (see DFSOutputStream#computePacketChunkSize for details)
numBytes of data in one packet = 64512
cellSize = 65536

When the first packet is full (with 64512 bytes of data), there are still 65536 - 64512 = 
1024 bytes left.
{code}
super.writeChunk(bytes, offset, len, checksum, ckoff, cklen);

// cell is full and current packet has not been enqueued,
if (cellFull && currentPacket != null) {
  enqueueCurrentPacketFull();
}
{code}
When the last 1024 bytes of the cell are written, we hit {{cellFull}} and create another packet.
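
A quick sanity check of the arithmetic (illustration only, not HDFS code; the constants are just the defaults quoted above):
{code}
public class CellPacketMath {
  public static void main(String[] args) {
    final int bytesPerChunk = 512;      // data bytes per chunk (checksum excluded)
    final int chunksPerPacket = 126;    // per DFSOutputStream#computePacketChunkSize
    final int cellSize = 64 * 1024;     // 65536, the default EC cell size

    int dataBytesPerPacket = bytesPerChunk * chunksPerPacket;  // 64512
    int leftover = cellSize - dataBytesPerPacket;              // 1024

    // The 1024 leftover bytes force a second, nearly empty packet for every cell.
    System.out.println("data bytes per packet = " + dataBytesPerPacket);
    System.out.println("leftover bytes per cell = " + leftover);
  }
}
{code}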



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8719) Erasure Coding: client generates too many small packets when writing parity data

2015-07-08 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618170#comment-14618170
 ] 

Walter Su commented on HDFS-8719:
-

bq. If the last packet is not full, it will be enqueued in closeImpl()
I debugged TestDFSStripedOutputStream with filesize = blockSize*dataBlocks*3 - 1.

You're right that it will be enqueued in closeImpl(). But the stack trace shows:
closeImpl()
-- flushBuffer()
-- writeChunk()
-- if (currentPacket.getNumChunks() == currentPacket.getMaxChunks()) is true
-- enqueueCurrentPacketFull()
-- remainingBytes=1

 Erasure Coding: client generates too many small packets when writing parity 
 data
 

 Key: HDFS-8719
 URL: https://issues.apache.org/jira/browse/HDFS-8719
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8719-001.patch, HDFS-8719-HDFS-7285-001.patch


 Typically a packet is about 64K, but when writing parity data, many small 
 packets with size 512 bytes are generated. This may slow the write speed and 
 increase the network IO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8732) Erasure Coding: Fail to read a file with corrupted blocks

2015-07-08 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su reassigned HDFS-8732:
---

Assignee: Walter Su

 Erasure Coding: Fail to read a file with corrupted blocks
 -

 Key: HDFS-8732
 URL: https://issues.apache.org/jira/browse/HDFS-8732
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xinwei Qin 
Assignee: Walter Su

 In the system test of reading EC files (HDFS-8259), the methods 
 {{testReadCorruptedData*()}} failed to read an EC file with corrupted 
 blocks (some data in several blocks was overwritten, which makes the client get a 
 checksum exception). 
 Exception logs:
 {code}
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream$StatefulStripeReader.readChunk(DFSStripedInputStream.java:771)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readStripe(DFSStripedInputStream.java:623)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:335)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:465)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:946)
 at java.io.DataInputStream.read(DataInputStream.java:149)
 at 
 org.apache.hadoop.hdfs.StripedFileTestUtil.verifyStatefulRead(StripedFileTestUtil.java:98)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.verifyRead(TestReadStripedFileWithDecoding.java:196)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testOneFileWithBlockCorrupted(TestReadStripedFileWithDecoding.java:246)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testReadCorruptedData11(TestReadStripedFileWithDecoding.java:114)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8732) Erasure Coding: Fail to read a file with corrupted blocks

2015-07-08 Thread Xinwei Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618246#comment-14618246
 ] 

Xinwei Qin  commented on HDFS-8732:
---

a simple patch

 Erasure Coding: Fail to read a file with corrupted blocks
 -

 Key: HDFS-8732
 URL: https://issues.apache.org/jira/browse/HDFS-8732
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xinwei Qin 
Assignee: Walter Su
 Attachments: testReadCorruptedData.patch


 In the system test of reading EC files (HDFS-8259), the methods 
 {{testReadCorruptedData*()}} failed to read an EC file with corrupted 
 blocks (some data in several blocks was overwritten, which makes the client get a 
 checksum exception). 
 Exception logs:
 {code}
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream$StatefulStripeReader.readChunk(DFSStripedInputStream.java:771)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readStripe(DFSStripedInputStream.java:623)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:335)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:465)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:946)
 at java.io.DataInputStream.read(DataInputStream.java:149)
 at 
 org.apache.hadoop.hdfs.StripedFileTestUtil.verifyStatefulRead(StripedFileTestUtil.java:98)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.verifyRead(TestReadStripedFileWithDecoding.java:196)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testOneFileWithBlockCorrupted(TestReadStripedFileWithDecoding.java:246)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testReadCorruptedData11(TestReadStripedFileWithDecoding.java:114)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8734) Erasure Coding: fix one cell need two packets

2015-07-08 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8734:

Summary: Erasure Coding: fix one cell need two packets  (was: Erasure 
Coding: one cell need two packets)

 Erasure Coding: fix one cell need two packets
 -

 Key: HDFS-8734
 URL: https://issues.apache.org/jira/browse/HDFS-8734
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-8734.01.patch


 The default WritePacketSize is 64k.
 Currently the default cellSize is also 64k.
 We hope one cell consumes one packet. In fact it does not.
 By default,
 chunkSize = 516 (512 data + 4 checksum)
 packetSize = 64k
 chunksPerPacket = 126 (see DFSOutputStream#computePacketChunkSize for 
 details)
 numBytes of data in one packet = 64512
 cellSize = 65536
 When the first packet is full (with 64512 bytes of data), there are still 65536 - 64512 = 
 1024 bytes left.
 {code}
 super.writeChunk(bytes, offset, len, checksum, ckoff, cklen);
 // cell is full and current packet has not been enqueued,
 if (cellFull && currentPacket != null) {
   enqueueCurrentPacketFull();
 }
 {code}
 When the last 1024 bytes of the cell are written, we hit {{cellFull}} and 
 create another packet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8719) Erasure Coding: client generates too many small packets when writing parity data

2015-07-08 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618364#comment-14618364
 ] 

Walter Su commented on HDFS-8719:
-

Revision of my previous comment:
02 patch does adjustChunkBoundary() every time -before- switching streamer.
02 patch does adjustChunkBoundary() every time *after* switching streamer.

Btw, I found another bug: [one cell need two 
packets|https://issues.apache.org/jira/browse/HDFS-8734]. Any advice is welcome.

 Erasure Coding: client generates too many small packets when writing parity 
 data
 

 Key: HDFS-8719
 URL: https://issues.apache.org/jira/browse/HDFS-8719
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8719-001.patch, HDFS-8719-HDFS-7285-001.patch, 
 HDFS-8719-HDFS-7285-002.patch


 Typically a packet is about 64K, but when writing parity data, many small 
 packets with size 512 bytes are generated. This may slow the write speed and 
 increase the network IO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8652) Track BlockInfo instead of Block in CorruptReplicasMap

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618391#comment-14618391
 ] 

Hudson commented on HDFS-8652:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #980 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/980/])
Revert HDFS-8652. Track BlockInfo instead of Block in CorruptReplicasMap. 
Contributed by Jing Zhao. (jing9: rev bc99aaffe7b0ed13b1efc37b6a32cdbd344c2d75)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CorruptReplicasMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstructionContiguous.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestCorruptReplicaInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ContiguousBlockStorageOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java


 Track BlockInfo instead of Block in CorruptReplicasMap
 --

 Key: HDFS-8652
 URL: https://issues.apache.org/jira/browse/HDFS-8652
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 2.8.0

 Attachments: HDFS-8652.000.patch, HDFS-8652.001.patch, 
 HDFS-8652.002.patch


 Currently {{CorruptReplicasMap}} uses {{Block}} as its key and records the 
 list of DataNodes with corrupted replicas. For Erasure Coding since a striped 
 block group contains multiple internal blocks with different block ID, we 
 should use {{BlockInfo}} as the key.
 HDFS-8619 is the jira to fix this for EC. To ease merging we will use jira to 
 first make changes in trunk/branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8711) setSpaceQuota command should print the available storage type when input storage type is wrong

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618396#comment-14618396
 ] 

Hudson commented on HDFS-8711:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #980 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/980/])
HDFS-8711. setSpaceQuota command should print the available storage type when 
input storage type is wrong. Contributed by Brahma Reddy Battula. (xyao: rev 
b68701b7b2a9597b4183e0ba19b1551680d543a1)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java


 setSpaceQuota command should print the available storage type when input 
 storage type is wrong
 --

 Key: HDFS-8711
 URL: https://issues.apache.org/jira/browse/HDFS-8711
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore
  Labels: reviewed
 Attachments: HDFS-8711-01.patch, HDFS-8711.patch


 If the input storage type is wrong, then currently *setSpaceQuota* gives an exception 
 like this.
 {code}
 ./hdfs dfsadmin -setSpaceQuota 1000 -storageType COLD /testDir
  setSpaceQuota: No enum constant org.apache.hadoop.fs.StorageType.COLD
 {code}
 It should be 
 {code}
 setSpaceQuota: Storage type COLD not available. Available storage type are 
 [SSD, DISK, ARCHIVE]
 {code}
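
One possible shape of the fix, sketched here for illustration only (this is not the attached patch; the real DFSAdmin change may filter which storage types it lists):
{code}
import java.util.Arrays;
import org.apache.hadoop.fs.StorageType;

public class StorageTypeArgSketch {
  // Parse the -storageType argument and, on failure, list the known types
  // instead of surfacing the raw "No enum constant" message.
  static StorageType parse(String name) {
    try {
      return StorageType.valueOf(name.toUpperCase());
    } catch (IllegalArgumentException e) {
      throw new IllegalArgumentException("Storage type " + name
          + " not available. Available storage types are "
          + Arrays.toString(StorageType.values()));
    }
  }

  public static void main(String[] args) {
    // e.g. passing "COLD" would now produce the friendlier message above
    System.out.println(parse(args.length > 0 ? args[0] : "SSD"));
  }
}
{code}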



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8620) Clean up the checkstyle warinings about ClientProtocol

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618388#comment-14618388
 ] 

Hudson commented on HDFS-8620:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #980 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/980/])
HDFS-8620. Clean up the checkstyle warinings about ClientProtocol. Contributed 
by Takanobu Asanuma. (wheat9: rev c0b8e4e5b5083631ed22d8d36c8992df7d34303c)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java


 Clean up the checkstyle warinings about ClientProtocol
 --

 Key: HDFS-8620
 URL: https://issues.apache.org/jira/browse/HDFS-8620
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma
 Fix For: 2.8.0

 Attachments: HDFS-8620.1.patch, HDFS-8620.2.patch, HDFS-8620.3.patch, 
 HDFS-8620.4.patch


 These warnings were generated in HDFS-8238.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8732) Erasure Coding: Fail to read a file with corrupted blocks

2015-07-08 Thread Xinwei Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinwei Qin  updated HDFS-8732:
--
Attachment: testReadCorruptedData.patch

Attached a simple patch including a test method to reproduce the exception and to 
verify the solution in the next step. The detailed and comprehensive tests are in 
HDFS-8259.

 Erasure Coding: Fail to read a file with corrupted blocks
 -

 Key: HDFS-8732
 URL: https://issues.apache.org/jira/browse/HDFS-8732
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xinwei Qin 
Assignee: Walter Su
 Attachments: testReadCorruptedData.patch


 In the system test of reading EC files (HDFS-8259), the methods 
 {{testReadCorruptedData*()}} failed to read an EC file with corrupted 
 blocks (some data in several blocks was overwritten, which makes the client get a 
 checksum exception). 
 Exception logs:
 {code}
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream$StatefulStripeReader.readChunk(DFSStripedInputStream.java:771)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readStripe(DFSStripedInputStream.java:623)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:335)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:465)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:946)
 at java.io.DataInputStream.read(DataInputStream.java:149)
 at 
 org.apache.hadoop.hdfs.StripedFileTestUtil.verifyStatefulRead(StripedFileTestUtil.java:98)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.verifyRead(TestReadStripedFileWithDecoding.java:196)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testOneFileWithBlockCorrupted(TestReadStripedFileWithDecoding.java:246)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testReadCorruptedData11(TestReadStripedFileWithDecoding.java:114)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8721) Add a metric for number of encryption zones

2015-07-08 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8721:
---
Attachment: HDFS-8721-01.patch

 Add a metric for number of encryption zones
 ---

 Key: HDFS-8721
 URL: https://issues.apache.org/jira/browse/HDFS-8721
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: encryption
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8721-00.patch, HDFS-8721-01.patch


 Would be good to expose the number of encryption zones.
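
For illustration only, such a gauge could be exposed through the metrics2 annotations along these lines (class and method names here are assumptions, not the attached patch):
{code}
import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;

@Metrics(context = "dfs")
class EncryptionZoneMetricsSketch {
  // stand-in for the namesystem's internal map of encryption zones
  private final ConcurrentHashMap<Long, String> encryptionZones = new ConcurrentHashMap<>();

  @Metric({"NumEncryptionZones", "Number of encryption zones"})
  public int getNumEncryptionZones() {
    return encryptionZones.size();
  }
}
{code}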



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8712) Remove public and abstract modifiers in FsVolumeSpi and FsDatasetSpi

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618558#comment-14618558
 ] 

Hudson commented on HDFS-8712:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #248 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/248/])
HDFS-8712. Remove 'public' and 'abstract' modifiers in FsVolumeSpi and 
FsDatasetSpi (Contributed by Lei (Eddy) Xu) (vinayakumarb: rev 
bd4e10900cc53a2768c31cc29fdb3698684bc2a0)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java


 Remove public and abstract modifiers in FsVolumeSpi and FsDatasetSpi
 

 Key: HDFS-8712
 URL: https://issues.apache.org/jira/browse/HDFS-8712
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8712.000.patch, HDFS-8712.001.patch


 In [Java Language Specification 
 9.4|http://docs.oracle.com/javase/specs/jls/se7/html/jls-9.html#jls-9.4]:
 bq. It is permitted, but discouraged as a matter of style, to redundantly 
 specify the public and/or abstract modifier for a method declared in an 
 interface.
 {{FsDatasetSpi}} and {{FsVolumeSpi}} mark methods as public, which cause many 
 warnings in IDEs and {{checkstyle}}.
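
For illustration, the style point the JLS is making (not the actual interfaces):
{code}
interface Example {
  public abstract void doWork();   // legal, but the redundant modifiers trip checkstyle
  void doWorkPreferred();          // equivalent declaration in the preferred form
}
{code}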



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8620) Clean up the checkstyle warinings about ClientProtocol

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618556#comment-14618556
 ] 

Hudson commented on HDFS-8620:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #248 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/248/])
HDFS-8620. Clean up the checkstyle warinings about ClientProtocol. Contributed 
by Takanobu Asanuma. (wheat9: rev c0b8e4e5b5083631ed22d8d36c8992df7d34303c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java


 Clean up the checkstyle warinings about ClientProtocol
 --

 Key: HDFS-8620
 URL: https://issues.apache.org/jira/browse/HDFS-8620
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma
 Fix For: 2.8.0

 Attachments: HDFS-8620.1.patch, HDFS-8620.2.patch, HDFS-8620.3.patch, 
 HDFS-8620.4.patch


 These warnings were generated in HDFS-8238.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8711) setSpaceQuota command should print the available storage type when input storage type is wrong

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618559#comment-14618559
 ] 

Hudson commented on HDFS-8711:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #248 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/248/])
HDFS-8711. setSpaceQuota command should print the available storage type when 
input storage type is wrong. Contributed by Brahma Reddy Battula. (xyao: rev 
b68701b7b2a9597b4183e0ba19b1551680d543a1)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java


 setSpaceQuota command should print the available storage type when input 
 storage type is wrong
 --

 Key: HDFS-8711
 URL: https://issues.apache.org/jira/browse/HDFS-8711
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore
  Labels: reviewed
 Fix For: 2.8.0

 Attachments: HDFS-8711-01.patch, HDFS-8711.patch


 If the input storage type is wrong, then currently *setSpaceQuota* gives an exception 
 like this.
 {code}
 ./hdfs dfsadmin -setSpaceQuota 1000 -storageType COLD /testDir
  setSpaceQuota: No enum constant org.apache.hadoop.fs.StorageType.COLD
 {code}
 It should be 
 {code}
 setSpaceQuota: Storage type COLD not available. Available storage type are 
 [SSD, DISK, ARCHIVE]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8642) Improve TestFileTruncate#setup by deleting the snapshots

2015-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618576#comment-14618576
 ] 

Hadoop QA commented on HDFS-8642:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |   5m 38s | Findbugs (version ) appears to 
be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 19s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 47s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 20s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m 19s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 158m 33s | Tests passed in hadoop-hdfs. 
|
| | | 179m 36s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12744187/HDFS-8642-02.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / d632574 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11623/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11623/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11623/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11623/console |


This message was automatically generated.

 Improve TestFileTruncate#setup by deleting the snapshots
 

 Key: HDFS-8642
 URL: https://issues.apache.org/jira/browse/HDFS-8642
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor
 Attachments: HDFS-8642-00.patch, HDFS-8642-01.patch, 
 HDFS-8642-02.patch


 I've observed that the {{TestFileTruncate#setup()}} function has to be improved by 
 making it more independent. Presently, a failure in any of the snapshot-related tests 
 will affect all the subsequent unit test cases. One such error has 
 been observed in the 
 [Hadoop-Hdfs-trunk-2163|https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart]
 {code}
 https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart/
 org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted 
 since /test is snapshottable and already has snapshots
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3046)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:939)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2172)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2168)
   at java.security.AccessController.doPrivileged(Native Method)
   at 

[jira] [Updated] (HDFS-8716) introduce a new config specifically for safe mode block count

2015-07-08 Thread Chang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang Li updated HDFS-8716:
---
Attachment: HDFS-8716.6.patch

[~vinayrpet] thanks for the review! Just updated my patch with the description of 
the new config in hdfs-default.xml.

 introduce a new config specifically for safe mode block count
 -

 Key: HDFS-8716
 URL: https://issues.apache.org/jira/browse/HDFS-8716
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Chang Li
Assignee: Chang Li
 Attachments: HDFS-8716.1.patch, HDFS-8716.2.patch, HDFS-8716.3.patch, 
 HDFS-8716.4.patch, HDFS-8716.5.patch, HDFS-8716.6.patch


 During startup, the namenode waits for n replicas of each block to be 
 reported by datanodes before exiting safe mode. Currently n is tied to 
 the min-replicas config. We could set min replicas to more than one, but we 
 might want to exit safe mode as soon as each block has one replica reported. 
 This can be worked out by introducing a new config variable for the safe mode 
 block count.
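
A sketch of the idea for illustration (the property name below is hypothetical, not necessarily the final key): the safe-mode threshold reads its own key and falls back to the existing min-replication setting when unset.
{code}
import org.apache.hadoop.conf.Configuration;

public class SafeModeReplicationSketch {
  // hypothetical new key for the safe mode block count threshold
  static final String SAFEMODE_REPLICATION_MIN_KEY = "dfs.namenode.safemode.replication.min";
  static final String REPLICATION_MIN_KEY = "dfs.namenode.replication.min";

  static int safeModeReplication(Configuration conf) {
    int minReplication = conf.getInt(REPLICATION_MIN_KEY, 1);
    // exit safe mode once each block has this many replicas reported,
    // independent of the general min-replication setting
    return conf.getInt(SAFEMODE_REPLICATION_MIN_KEY, minReplication);
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    System.out.println("safe mode waits for " + safeModeReplication(conf) + " replica(s) per block");
  }
}
{code}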



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8652) Track BlockInfo instead of Block in CorruptReplicasMap

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618554#comment-14618554
 ] 

Hudson commented on HDFS-8652:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #248 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/248/])
Revert HDFS-8652. Track BlockInfo instead of Block in CorruptReplicasMap. 
Contributed by Jing Zhao. (jing9: rev bc99aaffe7b0ed13b1efc37b6a32cdbd344c2d75)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestCorruptReplicaInfo.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstructionContiguous.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ContiguousBlockStorageOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CorruptReplicasMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java


 Track BlockInfo instead of Block in CorruptReplicasMap
 --

 Key: HDFS-8652
 URL: https://issues.apache.org/jira/browse/HDFS-8652
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 2.8.0

 Attachments: HDFS-8652.000.patch, HDFS-8652.001.patch, 
 HDFS-8652.002.patch


 Currently {{CorruptReplicasMap}} uses {{Block}} as its key and records the 
 list of DataNodes with corrupted replicas. For Erasure Coding since a striped 
 block group contains multiple internal blocks with different block ID, we 
 should use {{BlockInfo}} as the key.
 HDFS-8619 is the jira to fix this for EC. To ease merging we will use jira to 
 first make changes in trunk/branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8711) setSpaceQuota command should print the available storage type when input storage type is wrong

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618585#comment-14618585
 ] 

Hudson commented on HDFS-8711:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2196 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2196/])
HDFS-8711. setSpaceQuota command should print the available storage type when 
input storage type is wrong. Contributed by Brahma Reddy Battula. (xyao: rev 
b68701b7b2a9597b4183e0ba19b1551680d543a1)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java


 setSpaceQuota command should print the available storage type when input 
 storage type is wrong
 --

 Key: HDFS-8711
 URL: https://issues.apache.org/jira/browse/HDFS-8711
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore
  Labels: reviewed
 Fix For: 2.8.0

 Attachments: HDFS-8711-01.patch, HDFS-8711.patch


 If the input storage type is wrong, then currently *setSpaceQuota* gives an exception 
 like this.
 {code}
 ./hdfs dfsadmin -setSpaceQuota 1000 -storageType COLD /testDir
  setSpaceQuota: No enum constant org.apache.hadoop.fs.StorageType.COLD
 {code}
 It should be 
 {code}
 setSpaceQuota: Storage type COLD not available. Available storage type are 
 [SSD, DISK, ARCHIVE]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8652) Track BlockInfo instead of Block in CorruptReplicasMap

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618580#comment-14618580
 ] 

Hudson commented on HDFS-8652:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2196 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2196/])
Revert HDFS-8652. Track BlockInfo instead of Block in CorruptReplicasMap. 
Contributed by Jing Zhao. (jing9: rev bc99aaffe7b0ed13b1efc37b6a32cdbd344c2d75)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ContiguousBlockStorageOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestCorruptReplicaInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CorruptReplicasMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstructionContiguous.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java


 Track BlockInfo instead of Block in CorruptReplicasMap
 --

 Key: HDFS-8652
 URL: https://issues.apache.org/jira/browse/HDFS-8652
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 2.8.0

 Attachments: HDFS-8652.000.patch, HDFS-8652.001.patch, 
 HDFS-8652.002.patch


 Currently {{CorruptReplicasMap}} uses {{Block}} as its key and records the 
 list of DataNodes with corrupted replicas. For Erasure Coding since a striped 
 block group contains multiple internal blocks with different block ID, we 
 should use {{BlockInfo}} as the key.
 HDFS-8619 is the jira to fix this for EC. To ease merging we will use jira to 
 first make changes in trunk/branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8620) Clean up the checkstyle warinings about ClientProtocol

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618582#comment-14618582
 ] 

Hudson commented on HDFS-8620:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2196 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2196/])
HDFS-8620. Clean up the checkstyle warinings about ClientProtocol. Contributed 
by Takanobu Asanuma. (wheat9: rev c0b8e4e5b5083631ed22d8d36c8992df7d34303c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java


 Clean up the checkstyle warinings about ClientProtocol
 --

 Key: HDFS-8620
 URL: https://issues.apache.org/jira/browse/HDFS-8620
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma
 Fix For: 2.8.0

 Attachments: HDFS-8620.1.patch, HDFS-8620.2.patch, HDFS-8620.3.patch, 
 HDFS-8620.4.patch


 These warnings were generated in HDFS-8238.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8712) Remove public and abstract modifiers in FsVolumeSpi and FsDatasetSpi

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618584#comment-14618584
 ] 

Hudson commented on HDFS-8712:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2196 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2196/])
HDFS-8712. Remove 'public' and 'abstract' modifiers in FsVolumeSpi and 
FsDatasetSpi (Contributed by Lei (Eddy) Xu) (vinayakumarb: rev 
bd4e10900cc53a2768c31cc29fdb3698684bc2a0)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java


 Remove public and abstract modifiers in FsVolumeSpi and FsDatasetSpi
 

 Key: HDFS-8712
 URL: https://issues.apache.org/jira/browse/HDFS-8712
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8712.000.patch, HDFS-8712.001.patch


 In [Java Language Specification 
 9.4|http://docs.oracle.com/javase/specs/jls/se7/html/jls-9.html#jls-9.4]:
 bq. It is permitted, but discouraged as a matter of style, to redundantly 
 specify the public and/or abstract modifier for a method declared in an 
 interface.
 {{FsDatasetSpi}} and {{FsVolumeSpi}} mark methods as public, which cause many 
 warnings in IDEs and {{checkstyle}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7582) Enforce maximum number of ACL entries separately per access and default.

2015-07-08 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618571#comment-14618571
 ] 

Yi Liu commented on HDFS-7582:
--

I'm +1 on the latest patch based on [~cnauroth]'s comment. Chris, would you like 
to check it too? Thanks.

 Enforce maximum number of ACL entries separately per access and default.
 

 Key: HDFS-7582
 URL: https://issues.apache.org/jira/browse/HDFS-7582
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.0
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-7582-001.patch, HDFS-7582-01.patch


 Current ACL limits are only on the total number of entries.
 But there can be a situation where the number of default entries for a directory 
 is more than half of the maximum entries, i.e. 16.
 In such a case, under this parent directory only files can be created, which 
 will have ACLs inherited from the parent's default entries.
 But when directories are created, the total number of entries will be more than 
 the maximum allowed, because sub-directories copy both the inherited ACLs as 
 well as the default entries.
 Since there is currently no check while copying ACLs from the default ACLs, 
 directory creation succeeds, but any modification (even of the permission on a single 
 entry) of the same ACL will fail.
 It would be better to enforce the maximum of 32 entries separately per access 
 and default. This would be consistent with our observations testing ACLs on 
 other file systems, such as XFS and ext3.
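
For illustration, the proposed check could look roughly like this (a sketch only, not the attached patch):
{code}
import java.util.List;
import org.apache.hadoop.HadoopIllegalArgumentException;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;

class AclLimitSketch {
  static final int MAX_ENTRIES = 32;

  // Count access and default entries separately and enforce the limit on each,
  // instead of enforcing it only on the combined total.
  static void checkMaxEntries(List<AclEntry> entries) {
    int access = 0;
    int def = 0;
    for (AclEntry e : entries) {
      if (e.getScope() == AclEntryScope.ACCESS) {
        access++;
      } else {
        def++;
      }
    }
    if (access > MAX_ENTRIES || def > MAX_ENTRIES) {
      throw new HadoopIllegalArgumentException("Invalid ACL: " + access
          + " access and " + def + " default entries exceed the per-scope maximum of "
          + MAX_ENTRIES + ".");
    }
  }
}
{code}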



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8721) Add a metric for number of encryption zones

2015-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618533#comment-14618533
 ] 

Hadoop QA commented on HDFS-8721:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  23m 31s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 50s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 14s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  6s | Site still builds. |
| {color:red}-1{color} | checkstyle |   3m 27s | The applied patch generated  1 
new checkstyle issues (total was 300, now 300). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   5m 10s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 47s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 157m 44s | Tests failed in hadoop-hdfs. |
| | | 236m 23s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.TestDistributedFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12744172/HDFS-8721-01.patch |
| Optional Tests | site javadoc javac unit findbugs checkstyle |
| git revision | trunk / d632574 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11620/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11620/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11620/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11620/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11620/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11620/console |


This message was automatically generated.

 Add a metric for number of encryption zones
 ---

 Key: HDFS-8721
 URL: https://issues.apache.org/jira/browse/HDFS-8721
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: encryption
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8721-00.patch, HDFS-8721-01.patch


 Would be good to expose the number of encryption zones.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8719) Erasure Coding: client generates too many small packets when writing parity data

2015-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618543#comment-14618543
 ] 

Hadoop QA commented on HDFS-8719:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m 14s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   9m 23s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  12m 13s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 16s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 45s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   2m  4s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   1m  5s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   4m  9s | The patch appears to introduce 4 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   4m 33s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 181m 53s | Tests failed in hadoop-hdfs. |
| | | 234m 40s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestEncryptedTransfer |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12744174/HDFS-8719-HDFS-7285-002.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 2c494a8 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11621/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11621/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11621/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11621/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf908.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11621/console |


This message was automatically generated.

 Erasure Coding: client generates too many small packets when writing parity 
 data
 

 Key: HDFS-8719
 URL: https://issues.apache.org/jira/browse/HDFS-8719
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8719-001.patch, HDFS-8719-HDFS-7285-001.patch, 
 HDFS-8719-HDFS-7285-002.patch


 Typically a packet is about 64K, but when writing parity data, many small 
 packets of 512 bytes are generated. This may slow down the write speed and 
 increase the network IO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8620) Clean up the checkstyle warnings about ClientProtocol

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618652#comment-14618652
 ] 

Hudson commented on HDFS-8620:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2177 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2177/])
HDFS-8620. Clean up the checkstyle warnings about ClientProtocol. Contributed 
by Takanobu Asanuma. (wheat9: rev c0b8e4e5b5083631ed22d8d36c8992df7d34303c)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Clean up the checkstyle warnings about ClientProtocol
 --

 Key: HDFS-8620
 URL: https://issues.apache.org/jira/browse/HDFS-8620
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma
 Fix For: 2.8.0

 Attachments: HDFS-8620.1.patch, HDFS-8620.2.patch, HDFS-8620.3.patch, 
 HDFS-8620.4.patch


 These warnings were generated in HDFS-8238.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8712) Remove public and abstract modifiers in FsVolumeSpi and FsDatasetSpi

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618655#comment-14618655
 ] 

Hudson commented on HDFS-8712:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2177 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2177/])
HDFS-8712. Remove 'public' and 'abstract' modifiers in FsVolumeSpi and 
FsDatasetSpi (Contributed by Lei (Eddy) Xu) (vinayakumarb: rev 
bd4e10900cc53a2768c31cc29fdb3698684bc2a0)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove public and abstract modifiers in FsVolumeSpi and FsDatasetSpi
 

 Key: HDFS-8712
 URL: https://issues.apache.org/jira/browse/HDFS-8712
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8712.000.patch, HDFS-8712.001.patch


 In [Java Language Specification 
 9.4|http://docs.oracle.com/javase/specs/jls/se7/html/jls-9.html#jls-9.4]:
 bq. It is permitted, but discouraged as a matter of style, to redundantly 
 specify the public and/or abstract modifier for a method declared in an 
 interface.
 {{FsDatasetSpi}} and {{FsVolumeSpi}} mark methods as public, which causes many 
 warnings in IDEs and {{checkstyle}}.
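For illustration, a minimal sketch of the style change the JLS text above describes, using a hypothetical interface rather than the actual {{FsVolumeSpi}}/{{FsDatasetSpi}} sources:
{code}
// Hypothetical example only -- not the real FsVolumeSpi/FsDatasetSpi code.
// Before: redundant modifiers that trigger the IDE/checkstyle warnings.
interface VolumeExample {
  public abstract String getStorageId();   // 'public' and 'abstract' are implicit here
}

// After: identical semantics, no redundant modifiers, no warnings.
interface VolumeExampleCleaned {
  String getStorageId();
}
{code}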



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8711) setSpaceQuota command should print the available storage type when input storage type is wrong

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618656#comment-14618656
 ] 

Hudson commented on HDFS-8711:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2177 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2177/])
HDFS-8711. setSpaceQuota command should print the available storage type when 
input storage type is wrong. Contributed by Brahma Reddy Battula. (xyao: rev 
b68701b7b2a9597b4183e0ba19b1551680d543a1)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java


 setSpaceQuota command should print the available storage type when input 
 storage type is wrong
 --

 Key: HDFS-8711
 URL: https://issues.apache.org/jira/browse/HDFS-8711
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore
  Labels: reviewed
 Fix For: 2.8.0

 Attachments: HDFS-8711-01.patch, HDFS-8711.patch


 If the input storage type is wrong, then currently *setSpaceQuota* gives an 
 exception like this.
 {code}
 ./hdfs dfsadmin -setSpaceQuota 1000 -storageType COLD /testDir
  setSpaceQuota: No enum constant org.apache.hadoop.fs.StorageType.COLD
 {code}
 It should be 
 {code}
 setSpaceQuota: Storage type COLD not available. Available storage type are 
 [SSD, DISK, ARCHIVE]
 {code}
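One possible way to produce such a message is sketched below; this is only an illustration (the helper class name is hypothetical), not necessarily how the attached patches implement it:
{code}
import java.util.Arrays;
import org.apache.hadoop.fs.StorageType;

// Hypothetical helper: turn an invalid -storageType argument into a friendly error.
public class StorageTypeArg {
  static StorageType parse(String name) {
    try {
      return StorageType.valueOf(name.toUpperCase());
    } catch (IllegalArgumentException e) {
      throw new IllegalArgumentException("Storage type " + name
          + " not available. Available storage types are "
          + Arrays.toString(StorageType.values()), e);
    }
  }
}
{code}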



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8652) Track BlockInfo instead of Block in CorruptReplicasMap

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618650#comment-14618650
 ] 

Hudson commented on HDFS-8652:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2177 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2177/])
Revert HDFS-8652. Track BlockInfo instead of Block in CorruptReplicasMap. 
Contributed by Jing Zhao. (jing9: rev bc99aaffe7b0ed13b1efc37b6a32cdbd344c2d75)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CorruptReplicasMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ContiguousBlockStorageOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstructionContiguous.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestCorruptReplicaInfo.java


 Track BlockInfo instead of Block in CorruptReplicasMap
 --

 Key: HDFS-8652
 URL: https://issues.apache.org/jira/browse/HDFS-8652
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 2.8.0

 Attachments: HDFS-8652.000.patch, HDFS-8652.001.patch, 
 HDFS-8652.002.patch


 Currently {{CorruptReplicasMap}} uses {{Block}} as its key and records the 
 list of DataNodes with corrupted replicas. For Erasure Coding, since a striped 
 block group contains multiple internal blocks with different block IDs, we 
 should use {{BlockInfo}} as the key.
 HDFS-8619 is the jira to fix this for EC. To ease merging, we will use this 
 jira to first make the changes in trunk/branch-2.
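A minimal sketch of the keying change, with deliberately simplified types (this is not the actual {{CorruptReplicasMap}} implementation):
{code}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch: key the corruption map by a BlockInfo-like object (B) so that a striped
// block group -- one BlockInfo, many internal block IDs -- maps to a single entry.
// D stands in for a DatanodeDescriptor-like type.
class CorruptReplicasSketch<B, D> {
  private final Map<B, Set<D>> corruptReplicas = new HashMap<B, Set<D>>();

  void addCorruptReplica(B blockInfo, D datanode) {
    Set<D> nodes = corruptReplicas.get(blockInfo);
    if (nodes == null) {
      nodes = new HashSet<D>();
      corruptReplicas.put(blockInfo, nodes);
    }
    nodes.add(datanode);
  }

  int numCorruptReplicas(B blockInfo) {
    Set<D> nodes = corruptReplicas.get(blockInfo);
    return nodes == null ? 0 : nodes.size();
  }
}
{code}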



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8727) Allow using path style addressing for accessing the s3 endpoint

2015-07-08 Thread Andrew Baptist (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Baptist updated HDFS-8727:
-
Attachment: (was: hdfs-8728.patch)

 Allow using path style addressing for accessing the s3 endpoint
 ---

 Key: HDFS-8727
 URL: https://issues.apache.org/jira/browse/HDFS-8727
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: HDFS
Affects Versions: 2.7.1
Reporter: Andrew Baptist
Assignee: Andrew Baptist
  Labels: features
 Fix For: 2.7.2


 There is no way to specify path style access for the s3 endpoint. 
 There are numerous non-Amazon storage implementations, such as Cleversafe and 
 Ceph, that support the Amazon APIs but only support path style access. 
 Additionally, in many environments it is difficult to configure DNS correctly 
 to get virtual host style addressing to work.
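For context, the two addressing styles differ only in where the bucket name appears; a small sketch (hypothetical endpoint and bucket names) of why path style avoids the DNS problem:
{code}
// Illustration only: the two S3 addressing styles for a bucket "mybucket"
// against a custom endpoint "s3.example.com".
public class S3AddressingStyles {
  public static void main(String[] args) {
    String bucket = "mybucket";
    String endpoint = "s3.example.com";
    String key = "dir/file.txt";

    // Virtual-hosted style: the bucket is part of the host name, so DNS must
    // resolve <bucket>.<endpoint> (often impractical for private deployments).
    String virtualHosted = "https://" + bucket + "." + endpoint + "/" + key;

    // Path style: the bucket is part of the path; only <endpoint> needs DNS.
    String pathStyle = "https://" + endpoint + "/" + bucket + "/" + key;

    System.out.println(virtualHosted);
    System.out.println(pathStyle);
  }
}
{code}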



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8727) Allow using path style addressing for accessing the s3 endpoint

2015-07-08 Thread Andrew Baptist (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Baptist updated HDFS-8727:
-
Attachment: hdfs-8728.patch.2

There is no easy way to add unit tests for this; it requires integration with an 
external system to test.

 Allow using path style addressing for accessing the s3 endpoint
 ---

 Key: HDFS-8727
 URL: https://issues.apache.org/jira/browse/HDFS-8727
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: HDFS
Affects Versions: 2.7.1
Reporter: Andrew Baptist
Assignee: Andrew Baptist
  Labels: features
 Fix For: 2.7.2

 Attachments: hdfs-8728.patch.2


 There is no way to specify path style access for the s3 endpoint. 
 There are numerous non-Amazon storage implementations, such as Cleversafe and 
 Ceph, that support the Amazon APIs but only support path style access. 
 Additionally, in many environments it is difficult to configure DNS correctly 
 to get virtual host style addressing to work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8712) Remove public and abstract modifiers in FsVolumeSpi and FsDatasetSpi

2015-07-08 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618732#comment-14618732
 ] 

Lei (Eddy) Xu commented on HDFS-8712:
-

Thanks much for committing this, [~vinayrpet].  And thanks for the reviews, 
[~andrew.wang].

 Remove public and abstract modifiers in FsVolumeSpi and FsDatasetSpi
 

 Key: HDFS-8712
 URL: https://issues.apache.org/jira/browse/HDFS-8712
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8712.000.patch, HDFS-8712.001.patch


 In [Java Language Specification 
 9.4|http://docs.oracle.com/javase/specs/jls/se7/html/jls-9.html#jls-9.4]:
 bq. It is permitted, but discouraged as a matter of style, to redundantly 
 specify the public and/or abstract modifier for a method declared in an 
 interface.
 {{FsDatasetSpi}} and {{FsVolumeSpi}} mark methods as public, which causes many 
 warnings in IDEs and {{checkstyle}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8718) Block replicating cannot work after upgrading to 2.7

2015-07-08 Thread kanaka kumar avvaru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618740#comment-14618740
 ] 

kanaka kumar avvaru commented on HDFS-8718:
---

Hi [~jianbginglover], 

I think this log must be preceded by some other log message which looks like 
{code} Failed to place enough replicas, still in need of X to reach Y 
(unavailableStorages=[DISK, ARCHIVE] , storagePolicy={HOT:7, 
storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
newBlock=true/false) {code}

If possible, please share the NN logs, which may give a clue on the root cause.

Also, please confirm whether both machines {{172.22.49.3 and 172.22.49.5}} are in 
the same rack or not.

 Block replicating cannot work after upgrading to 2.7 
 -

 Key: HDFS-8718
 URL: https://issues.apache.org/jira/browse/HDFS-8718
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Bing Jiang

 After decommissioning a datanode from hadoop, hdfs calculates the correct 
 number of blocks to be replicated, as shown in the web UI. 
 {code}
 Decomissioning
 Node  Last contactUnder replicated blocks Blocks with no live replicas
 Under Replicated Blocks 
 In files under construction
 TS-BHTEST-03:50010 (172.22.49.3:50010)25641   0   0
 {code}
 From the NN's log, the block replication work cannot proceed due to an 
 inconsistent expected storage type.
 {code}
 Node /default/rack_02/172.22.49.5:50010 [
   Storage 
 [DISK]DS-3915533b-4ae4-4806-bf83caf1446f1e2f:NORMAL:172.22.49.5:50010 is not 
 chosen since storage types do not match, where the required storage type is 
 ARCHIVE.
   Storage 
 [DISK]DS-3e54c331-3eaf-4447-b5e4-9bf91bc71b17:NORMAL:172.22.49.5:50010 is not 
 chosen since storage types do not match, where the required storage type is 
 ARCHIVE.
   Storage 
 [DISK]DS-d44fa611-aa73-4415-a2de-7e73c9c5ea68:NORMAL:172.22.49.5:50010 is not 
 chosen since storage types do not match, where the required storage type is 
 ARCHIVE.
   Storage 
 [DISK]DS-cebbf410-06a0-4171-a9bd-d0db55dad6d3:NORMAL:172.22.49.5:50010 is not 
 chosen since storage types do not match, where the required storage type is 
 ARCHIVE.
   Storage 
 [DISK]DS-4c50b1c7-eaad-4858-b476-99dec17d68b5:NORMAL:172.22.49.5:50010 is not 
 chosen since storage types do not match, where the required storage type is 
 ARCHIVE.
   Storage 
 [DISK]DS-f6cf9123-4125-4234-8e21-34b12170e576:NORMAL:172.22.49.5:50010 is not 
 chosen since storage types do not match, where the required storage type is 
 ARCHIVE.
   Storage 
 [DISK]DS-7601b634-1761-45cc-9ffd-73ee8687c2a7:NORMAL:172.22.49.5:50010 is not 
 chosen since storage types do not match, where the required storage type is 
 ARCHIVE.
   Storage 
 [DISK]DS-1d4b91ab-fe2f-4d5f-bd0a-57e9a0714654:NORMAL:172.22.49.5:50010 is not 
 chosen since storage types do not match, where the required storage type is 
 ARCHIVE.
   Storage 
 [DISK]DS-cd2279cf-9c5a-4380-8c41-7681fa688eaf:NORMAL:172.22.49.5:50010 is not 
 chosen since storage types do not match, where the required storage type is 
 ARCHIVE.
   Storage 
 [DISK]DS-630c734f-334a-466d-9649-4818d6e91181:NORMAL:172.22.49.5:50010 is not 
 chosen since storage types do not match, where the required storage type is 
 ARCHIVE.
   Storage 
 [DISK]DS-31cd0d68-5f7c-4a0a-91e6-afa53c4df820:NORMAL:172.22.49.5:50010 is not 
 chosen since storage types do not match, where the required storage type is 
 ARCHIVE.
 ]
 2015-07-07 16:00:22,032 WARN 
 org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough 
 replicas: expected size is 1 but onl
 y 0 storage types can be selected (replication=3, selected=[], 
 unavailable=[DISK, ARCHIVE], removed=[DISK], policy=BlockStoragePolicy{HOT:7,
  storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
 2015-07-07 16:00:22,032 WARN 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
 place enough replicas, still in n
 eed of 1 to reach 3 (unavailableStorages=[DISK, ARCHIVE], 
 storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
 creationFallbacks=[],
  replicationFallbacks=[ARCHIVE]}, newBlock=false) All required storage types 
 are unavailable:  unavailableStorages=[DISK, ARCHIVE], storageP
 olicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], 
 replicationFallbacks=[ARCHIVE]}
 {code}
 We previously upgraded the hadoop cluster from 2.5 to 2.7.0. I believe 
 the ARCHIVE STORAGE feature has been enforced, but what about the blocks' 
 storage type after upgrading?
 The default BlockStoragePolicy is HOT, and I guess those blocks do not carry 
 the correct BlockStoragePolicy information bit, so they cannot be handled 
 well.
 After I shut down the datanode, the under-replicated blocks can be asked to 
 copy. So the workaround is to 

[jira] [Commented] (HDFS-8620) Clean up the checkstyle warnings about ClientProtocol

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618703#comment-14618703
 ] 

Hudson commented on HDFS-8620:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #238 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/238/])
HDFS-8620. Clean up the checkstyle warnings about ClientProtocol. Contributed 
by Takanobu Asanuma. (wheat9: rev c0b8e4e5b5083631ed22d8d36c8992df7d34303c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java


 Clean up the checkstyle warnings about ClientProtocol
 --

 Key: HDFS-8620
 URL: https://issues.apache.org/jira/browse/HDFS-8620
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma
 Fix For: 2.8.0

 Attachments: HDFS-8620.1.patch, HDFS-8620.2.patch, HDFS-8620.3.patch, 
 HDFS-8620.4.patch


 These warnings were generated in HDFS-8238.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8712) Remove public and abstract modifiers in FsVolumeSpi and FsDatasetSpi

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618706#comment-14618706
 ] 

Hudson commented on HDFS-8712:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #238 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/238/])
HDFS-8712. Remove 'public' and 'abstract' modifiers in FsVolumeSpi and 
FsDatasetSpi (Contributed by Lei (Eddy) Xu) (vinayakumarb: rev 
bd4e10900cc53a2768c31cc29fdb3698684bc2a0)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove public and abstract modifiers in FsVolumeSpi and FsDatasetSpi
 

 Key: HDFS-8712
 URL: https://issues.apache.org/jira/browse/HDFS-8712
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8712.000.patch, HDFS-8712.001.patch


 In [Java Language Specification 
 9.4|http://docs.oracle.com/javase/specs/jls/se7/html/jls-9.html#jls-9.4]:
 bq. It is permitted, but discouraged as a matter of style, to redundantly 
 specify the public and/or abstract modifier for a method declared in an 
 interface.
 {{FsDatasetSpi}} and {{FsVolumeSpi}} mark methods as public, which causes many 
 warnings in IDEs and {{checkstyle}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8711) setSpaceQuota command should print the available storage type when input storage type is wrong

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618707#comment-14618707
 ] 

Hudson commented on HDFS-8711:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #238 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/238/])
HDFS-8711. setSpaceQuota command should print the available storage type when 
input storage type is wrong. Contributed by Brahma Reddy Battula. (xyao: rev 
b68701b7b2a9597b4183e0ba19b1551680d543a1)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 setSpaceQuota command should print the available storage type when input 
 storage type is wrong
 --

 Key: HDFS-8711
 URL: https://issues.apache.org/jira/browse/HDFS-8711
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore
  Labels: reviewed
 Fix For: 2.8.0

 Attachments: HDFS-8711-01.patch, HDFS-8711.patch


 If the input storage type is wrong, then currently *setSpaceQuota* gives an 
 exception like this.
 {code}
 ./hdfs dfsadmin -setSpaceQuota 1000 -storageType COLD /testDir
  setSpaceQuota: No enum constant org.apache.hadoop.fs.StorageType.COLD
 {code}
 It should be 
 {code}
 setSpaceQuota: Storage type COLD not available. Available storage type are 
 [SSD, DISK, ARCHIVE]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8652) Track BlockInfo instead of Block in CorruptReplicasMap

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618701#comment-14618701
 ] 

Hudson commented on HDFS-8652:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #238 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/238/])
Revert HDFS-8652. Track BlockInfo instead of Block in CorruptReplicasMap. 
Contributed by Jing Zhao. (jing9: rev bc99aaffe7b0ed13b1efc37b6a32cdbd344c2d75)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CorruptReplicasMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestCorruptReplicaInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ContiguousBlockStorageOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstructionContiguous.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Track BlockInfo instead of Block in CorruptReplicasMap
 --

 Key: HDFS-8652
 URL: https://issues.apache.org/jira/browse/HDFS-8652
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 2.8.0

 Attachments: HDFS-8652.000.patch, HDFS-8652.001.patch, 
 HDFS-8652.002.patch


 Currently {{CorruptReplicasMap}} uses {{Block}} as its key and records the 
 list of DataNodes with corrupted replicas. For Erasure Coding, since a striped 
 block group contains multiple internal blocks with different block IDs, we 
 should use {{BlockInfo}} as the key.
 HDFS-8619 is the jira to fix this for EC. To ease merging, we will use this 
 jira to first make the changes in trunk/branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8642) Make TestFileTruncate more reliable

2015-07-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8642:

Summary: Make TestFileTruncate more reliable  (was: Improve 
TestFileTruncate#setup by deleting the snapshots)

 Make TestFileTruncate more reliable
 ---

 Key: HDFS-8642
 URL: https://issues.apache.org/jira/browse/HDFS-8642
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor
 Attachments: HDFS-8642-00.patch, HDFS-8642-01.patch, 
 HDFS-8642-02.patch


 I've observed that the {{TestFileTruncate#setup()}} function has to be improved 
 by making it more independent. Presently, a failure in any of the 
 snapshot-related tests will affect all the subsequent unit test cases. One such 
 error has been observed in 
 [Hadoop-Hdfs-trunk-2163|https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart]
 {code}
 https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart/
 org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted 
 since /test is snapshottable and already has snapshots
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3046)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:939)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2172)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2168)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2166)
   at org.apache.hadoop.ipc.Client.call(Client.java:1440)
   at org.apache.hadoop.ipc.Client.call(Client.java:1371)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
   at com.sun.proxy.$Proxy22.delete(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
   at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
   at com.sun.proxy.$Proxy23.delete(Unknown Source)
   at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1711)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)
 {code}
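A sketch of the setup hardening this implies (hypothetical helper; the committed change lives in {{TestFileTruncate.java}}): delete any leftover snapshots before deleting the test directory, so one failed snapshot test cannot break every later {{setup()}}.
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Hypothetical sketch, not the actual patch.
class TruncateTestSetupSketch {
  static void cleanTestDir(DistributedFileSystem fs, Path parent) throws IOException {
    Path snapshotDir = new Path(parent, ".snapshot");
    if (fs.exists(snapshotDir)) {
      for (FileStatus s : fs.listStatus(snapshotDir)) {
        // Remove each snapshot so the snapshottable directory becomes deletable.
        fs.deleteSnapshot(parent, s.getPath().getName());
      }
    }
    fs.delete(parent, true);   // now /test can be deleted even if snapshots existed
  }
}
{code}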



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8642) Make TestFileTruncate more reliable

2015-07-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8642:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.8.0
Target Version/s:   (was: 2.8.0)
  Status: Resolved  (was: Patch Available)

Committed for 2.8.0.

Thank you for the contribution [~rakeshr].

 Make TestFileTruncate more reliable
 ---

 Key: HDFS-8642
 URL: https://issues.apache.org/jira/browse/HDFS-8642
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8642-00.patch, HDFS-8642-01.patch, 
 HDFS-8642-02.patch


 I've observed that the {{TestFileTruncate#setup()}} function has to be improved 
 by making it more independent. Presently, a failure in any of the 
 snapshot-related tests will affect all the subsequent unit test cases. One such 
 error has been observed in 
 [Hadoop-Hdfs-trunk-2163|https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart]
 {code}
 https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart/
 org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted 
 since /test is snapshottable and already has snapshots
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3046)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:939)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2172)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2168)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2166)
   at org.apache.hadoop.ipc.Client.call(Client.java:1440)
   at org.apache.hadoop.ipc.Client.call(Client.java:1371)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
   at com.sun.proxy.$Proxy22.delete(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
   at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
   at com.sun.proxy.$Proxy23.delete(Unknown Source)
   at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1711)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8642) Improve TestFileTruncate#setup by deleting the snapshots

2015-07-08 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618871#comment-14618871
 ] 

Arpit Agarwal commented on HDFS-8642:
-

+1 for the .02 patch, thanks for updating it [~rakeshr].

The findbugs warning looks bogus. I will commit it shortly.

 Improve TestFileTruncate#setup by deleting the snapshots
 

 Key: HDFS-8642
 URL: https://issues.apache.org/jira/browse/HDFS-8642
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor
 Attachments: HDFS-8642-00.patch, HDFS-8642-01.patch, 
 HDFS-8642-02.patch


 I've observed that the {{TestFileTruncate#setup()}} function has to be improved 
 by making it more independent. Presently, a failure in any of the 
 snapshot-related tests will affect all the subsequent unit test cases. One such 
 error has been observed in 
 [Hadoop-Hdfs-trunk-2163|https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart]
 {code}
 https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart/
 org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted 
 since /test is snapshottable and already has snapshots
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3046)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:939)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2172)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2168)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2166)
   at org.apache.hadoop.ipc.Client.call(Client.java:1440)
   at org.apache.hadoop.ipc.Client.call(Client.java:1371)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
   at com.sun.proxy.$Proxy22.delete(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
   at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
   at com.sun.proxy.$Proxy23.delete(Unknown Source)
   at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1711)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8727) Allow using path style addressing for accessing the s3 endpoint

2015-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618850#comment-14618850
 ] 

Hadoop QA commented on HDFS-8727:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | patch |   0m  1s | The patch file was not named 
according to hadoop's naming conventions. Please see 
https://wiki.apache.org/hadoop/HowToContribute for instructions. |
| {color:blue}0{color} | pre-patch |  19m 38s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  1s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 50s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 54s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  2s | Site still builds. |
| {color:red}-1{color} | checkstyle |   0m 19s | The applied patch generated  1 
new checkstyle issues (total was 62, now 62). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   0m 47s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | tools/hadoop tests |   0m 13s | Tests passed in 
hadoop-aws. |
| | |  44m 16s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12744236/hdfs-8728.patch.2 |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / 98e5926 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11625/artifact/patchprocess/diffcheckstylehadoop-aws.txt
 |
| hadoop-aws test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11625/artifact/patchprocess/testrun_hadoop-aws.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11625/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11625/console |


This message was automatically generated.

 Allow using path style addressing for accessing the s3 endpoint
 ---

 Key: HDFS-8727
 URL: https://issues.apache.org/jira/browse/HDFS-8727
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: HDFS
Affects Versions: 2.7.1
Reporter: Andrew Baptist
Assignee: Andrew Baptist
  Labels: features
 Fix For: 2.7.2

 Attachments: hdfs-8728.patch.2


 There is no way to specify path style access for the s3 endpoint. 
 There are numerous non-Amazon storage implementations, such as Cleversafe and 
 Ceph, that support the Amazon APIs but only support path style access. 
 Additionally, in many environments it is difficult to configure DNS correctly 
 to get virtual host style addressing to work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8719) Erasure Coding: client generates too many small packets when writing parity data

2015-07-08 Thread Li Bo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618144#comment-14618144
 ] 

Li Bo commented on HDFS-8719:
-

Thanks Walter for the review.
{{remainingBytes}} will not be 1 because the streamer writes data chunk by 
chunk (512 bytes). If the last packet is not full, it will be enqueued in 
{{closeImpl()}}, and {{enqueueCurrentPacket()}} is called, not 
{{enqueueCurrentPacketFull}}. I will update the patch later.
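For readers following along, a minimal sketch of the buffering idea (hypothetical class, not the actual DFSStripedOutputStream code): accumulate the 512-byte parity chunks into a normal-sized packet instead of enqueueing one tiny packet per chunk.
{code}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch only.
class ParityPacketBuffer {
  private static final int PACKET_SIZE = 64 * 1024;   // typical packet payload size
  private final ByteArrayOutputStream current = new ByteArrayOutputStream();
  private final List<byte[]> queuedPackets = new ArrayList<byte[]>();

  // Called once per 512-byte parity chunk.
  void writeParityChunk(byte[] chunk) throws IOException {
    current.write(chunk);
    if (current.size() >= PACKET_SIZE) {
      enqueueCurrentPacket();           // only enqueue full-sized packets
    }
  }

  // Called on hflush/close to push out the final, possibly partial, packet.
  void flush() {
    if (current.size() > 0) {
      enqueueCurrentPacket();
    }
  }

  private void enqueueCurrentPacket() {
    queuedPackets.add(current.toByteArray());
    current.reset();
  }
}
{code}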

 Erasure Coding: client generates too many small packets when writing parity 
 data
 

 Key: HDFS-8719
 URL: https://issues.apache.org/jira/browse/HDFS-8719
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8719-001.patch


 Typically a packet is about 64K, but when writing parity data, many small 
 packets of 512 bytes are generated. This may slow down the write speed and 
 increase the network IO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8719) Erasure Coding: client generates too many small packets when writing parity data

2015-07-08 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-8719:

Attachment: HDFS-8719-HDFS-7285-001.patch

 Erasure Coding: client generates too many small packets when writing parity 
 data
 

 Key: HDFS-8719
 URL: https://issues.apache.org/jira/browse/HDFS-8719
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8719-001.patch, HDFS-8719-HDFS-7285-001.patch


 Typically a packet is about 64K, but when writing parity data, many small 
 packets of 512 bytes are generated. This may slow down the write speed and 
 increase the network IO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8732) Erasure Coding: Fail to read a file with corrupted blocks

2015-07-08 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618180#comment-14618180
 ] 

Yi Liu commented on HDFS-8732:
--

fixed in HDFS-8602?

 Erasure Coding: Fail to read a file with corrupted blocks
 -

 Key: HDFS-8732
 URL: https://issues.apache.org/jira/browse/HDFS-8732
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xinwei Qin 

 In the system test of reading an EC file (HDFS-8259), the 
 {{testReadCorruptedData*()}} methods failed to read an EC file with corrupted 
 blocks (overwriting some data in several blocks makes the client get a 
 checksum exception). 
 Exception logs:
 {code}
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream$StatefulStripeReader.readChunk(DFSStripedInputStream.java:771)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readStripe(DFSStripedInputStream.java:623)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:335)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:465)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:946)
 at java.io.DataInputStream.read(DataInputStream.java:149)
 at 
 org.apache.hadoop.hdfs.StripedFileTestUtil.verifyStatefulRead(StripedFileTestUtil.java:98)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.verifyRead(TestReadStripedFileWithDecoding.java:196)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testOneFileWithBlockCorrupted(TestReadStripedFileWithDecoding.java:246)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testReadCorruptedData11(TestReadStripedFileWithDecoding.java:114)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8259) Erasure Coding: System Test of reading EC file

2015-07-08 Thread Xinwei Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618181#comment-14618181
 ] 

Xinwei Qin  commented on HDFS-8259:
---

In this patch, the test methods {{testReadCorruptedData*()}} will fail to read 
an EC file with corrupted blocks; I have created HDFS-8732 to address it.

 Erasure Coding: System Test of reading EC file
 --

 Key: HDFS-8259
 URL: https://issues.apache.org/jira/browse/HDFS-8259
 Project: Hadoop HDFS
  Issue Type: Test
Affects Versions: HDFS-7285
Reporter: GAO Rui
Assignee: Xinwei Qin 
 Attachments: HDFS-8259-HDFS-7285.001.patch


 1. Normal reading of an EC file (reading without datanode failure and no need 
 for recovery).
 2. Reading an EC file with datanode failure.
 3. Reading an EC file with data block recovery by decoding from parity blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8722) Optimize datanode writes for small writes and flushes

2015-07-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619104#comment-14619104
 ] 

Kihwal Lee commented on HDFS-8722:
--

Forgot to remove one line in the patch.

 Optimize datanode writes for small writes and flushes
 -

 Key: HDFS-8722
 URL: https://issues.apache.org/jira/browse/HDFS-8722
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Attachments: HDFS-8722.patch


 After the data corruption fix in HDFS-4660, the CRC recalculation for a partial 
 chunk is executed more frequently if the client repeatedly writes a few bytes 
 and calls hflush/hsync. This is because the generic logic forces CRC 
 recalculation if the on-disk data is not CRC chunk aligned. Prior to HDFS-4660, 
 the datanode blindly accepted whatever CRC the client provided if the incoming 
 data was chunk-aligned. This was the source of the corruption.
 We can still optimize for the most common case, where a client repeatedly 
 writes a small number of bytes followed by hflush/hsync with no pipeline 
 recovery or append, by allowing the previous behavior for this specific case. 
 If the incoming data has a duplicate portion and that portion starts at the 
 last chunk boundary before the partial chunk on disk, the datanode can use the 
 checksum supplied by the client without redoing the checksum on its own. 
 This reduces disk reads as well as CPU load for the checksum calculation.
 If the incoming packet data goes back further than the last on-disk chunk 
 boundary, the datanode will still do a recalculation, but this happens rarely, 
 during pipeline recoveries. Thus the optimization for this specific case 
 should be sufficient to speed up the vast majority of cases.
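A minimal sketch of the decision described above (hypothetical helper, not the actual BlockReceiver logic), assuming the usual 512-byte checksum chunk:
{code}
// Hypothetical sketch only.
class PartialChunkCrcDecision {
  /**
   * @param onDiskLen   bytes already on disk for this replica
   * @param packetStart block offset at which the incoming packet's data begins
   * @param bytesPerCrc checksum chunk size (e.g. 512)
   * @return true if the datanode may reuse the client-supplied checksum for the
   *         partial chunk instead of re-reading and recomputing it
   */
  static boolean canReuseClientChecksum(long onDiskLen, long packetStart, int bytesPerCrc) {
    // Last chunk boundary at or before the current on-disk length.
    long lastChunkBoundary = (onDiskLen / bytesPerCrc) * bytesPerCrc;
    // The duplicate (already on disk) portion must start exactly at that boundary;
    // if the packet reaches back further (e.g. pipeline recovery), recompute the CRC.
    return packetStart == lastChunkBoundary;
  }
}
{code}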



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8728) Erasure coding: revisit and simplify BlockInfoStriped and INodeFile

2015-07-08 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8728:

Summary: Erasure coding: revisit and simplify BlockInfoStriped and 
INodeFile  (was: Erasure coding: revisit BlockInfoStriped based on code 
hierarchy from HDFS-8499)

 Erasure coding: revisit and simplify BlockInfoStriped and INodeFile
 ---

 Key: HDFS-8728
 URL: https://issues.apache.org/jira/browse/HDFS-8728
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2015-07-08 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618936#comment-14618936
 ] 

Zhe Zhang commented on HDFS-7285:
-

Thanks Jing for the helpful comments! I've created HDFS-8728 focusing on the 
{{BlockInfo}} and {{INodeFile}} changes. 

 Erasure Coding Support inside HDFS
 --

 Key: HDFS-7285
 URL: https://issues.apache.org/jira/browse/HDFS-7285
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Weihua Jiang
Assignee: Zhe Zhang
 Attachments: Consolidated-20150707.patch, ECAnalyzer.py, ECParser.py, 
 HDFS-7285-initial-PoC.patch, HDFS-7285-merge-consolidated-01.patch, 
 HDFS-7285-merge-consolidated-trunk-01.patch, 
 HDFS-7285-merge-consolidated.trunk.03.patch, 
 HDFS-7285-merge-consolidated.trunk.04.patch, 
 HDFS-EC-Merge-PoC-20150624.patch, HDFS-EC-merge-consolidated-01.patch, 
 HDFS-bistriped.patch, HDFSErasureCodingDesign-20141028.pdf, 
 HDFSErasureCodingDesign-20141217.pdf, HDFSErasureCodingDesign-20150204.pdf, 
 HDFSErasureCodingDesign-20150206.pdf, HDFSErasureCodingPhaseITestPlan.pdf, 
 fsimage-analysis-20150105.pdf


 Erasure Coding (EC) can greatly reduce the storage overhead without sacrificing 
 data reliability, compared to the existing HDFS 3-replica approach. For 
 example, if we use a 10+4 Reed-Solomon coding, we can tolerate the loss of 4 
 blocks with a storage overhead of only 40%. This makes EC quite an attractive 
 alternative for big data storage, particularly for cold data. 
 Facebook had a related open source project called HDFS-RAID. It used to be 
 one of the contrib packages in HDFS but was removed in Hadoop 2.0 
 for maintenance reasons. The drawbacks are: 1) it sits on top of HDFS and 
 depends on MapReduce to do encoding and decoding tasks; 2) it can only be used 
 for cold files that are not intended to be appended anymore; 3) the pure Java 
 EC coding implementation is extremely slow in practical use. Due to these, it 
 might not be a good idea to just bring HDFS-RAID back.
 We (Intel and Cloudera) are working on a design to build EC into HDFS that 
 gets rid of any external dependencies, making it self-contained and 
 independently maintained. This design builds the EC feature on storage type 
 support and is intended to be compatible with existing HDFS features like 
 caching, snapshots, encryption, and high availability. This design will also 
 support different EC coding schemes, implementations, and policies for 
 different deployment scenarios. By utilizing advanced libraries (e.g. the Intel 
 ISA-L library), an implementation can greatly improve the performance of EC 
 encoding/decoding and make the EC solution even more attractive. We will 
 post the design document soon. 
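As a quick sanity check of the numbers quoted above (trivial arithmetic, not taken from the design document):
{code}
public class EcOverhead {
  public static void main(String[] args) {
    int dataBlocks = 10, parityBlocks = 4;                  // 10+4 Reed-Solomon
    double ecOverhead = (double) parityBlocks / dataBlocks; // 0.4 -> 40% extra storage
    double replicaOverhead = 2.0;                           // 3 replicas -> 200% extra
    System.out.printf("EC: %.0f%% overhead, tolerates loss of any %d blocks%n",
        ecOverhead * 100, parityBlocks);
    System.out.printf("3-replica: %.0f%% overhead, tolerates loss of any 2 replicas%n",
        replicaOverhead * 100);
  }
}
{code}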



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8736) ability to deny access to different filesystems

2015-07-08 Thread Purvesh Patel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Purvesh Patel updated HDFS-8736:

Attachment: Patch.pdf

 ability to deny access to different filesystems
 ---

 Key: HDFS-8736
 URL: https://issues.apache.org/jira/browse/HDFS-8736
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: security
Affects Versions: 2.5.0
Reporter: Purvesh Patel
Priority: Minor
  Labels: security
 Attachments: Patch.pdf


 In order to run in a secure context with the ability to deny non-trusted code 
 access to different filesystems (specifically the local file system), this 
 patch adds a new SecurityPermission class (AccessFileSystemPermission) and 
 checks the permission in FileSystem#get before returning a cached file system 
 or creating a new one. Please see the attached patch.
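A rough sketch of what such a check could look like (the permission class name comes from the description; the base class and integration point are assumptions, not the attached patch):
{code}
import java.security.BasicPermission;

// Assumed shape of the new permission; the real patch may differ.
class AccessFileSystemPermission extends BasicPermission {
  AccessFileSystemPermission(String scheme) {
    super(scheme);                      // e.g. "file", "hdfs"
  }
}

class FileSystemGuard {
  // Would be called from FileSystem#get before handing out a cached or new instance.
  static void checkAccess(String scheme) {
    SecurityManager sm = System.getSecurityManager();
    if (sm != null) {
      sm.checkPermission(new AccessFileSystemPermission(scheme));
    }
  }
}
{code}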



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8736) ability to deny access to different filesystems

2015-07-08 Thread Purvesh Patel (JIRA)
Purvesh Patel created HDFS-8736:
---

 Summary: ability to deny access to different filesystems
 Key: HDFS-8736
 URL: https://issues.apache.org/jira/browse/HDFS-8736
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: security
Affects Versions: 2.5.0
Reporter: Purvesh Patel
Priority: Minor


In order to run in a secure context with the ability to deny non-trusted code 
access to different filesystems (specifically the local file system), this patch 
adds a new SecurityPermission class (AccessFileSystemPermission) and checks the 
permission in FileSystem#get before returning a cached file system or creating 
a new one. Please see the attached patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8642) Make TestFileTruncate more reliable

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618898#comment-14618898
 ] 

Hudson commented on HDFS-8642:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8132 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8132/])
HDFS-8642. Make TestFileTruncate more reliable. (Contributed by Rakesh R) (arp: 
rev 4119ad3112dcfb7286ca68288489bbcb6235cf53)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java


 Make TestFileTruncate more reliable
 ---

 Key: HDFS-8642
 URL: https://issues.apache.org/jira/browse/HDFS-8642
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8642-00.patch, HDFS-8642-01.patch, 
 HDFS-8642-02.patch


 I've observed that the {{TestFileTruncate#setup()}} function has to be improved 
 by making it more independent. Presently, a failure in any of the 
 snapshot-related tests will affect all the subsequent unit test cases. One such 
 error has been observed in 
 [Hadoop-Hdfs-trunk-2163|https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart]
 {code}
 https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart/
 org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted 
 since /test is snapshottable and already has snapshots
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3046)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:939)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2172)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2168)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2166)
   at org.apache.hadoop.ipc.Client.call(Client.java:1440)
   at org.apache.hadoop.ipc.Client.call(Client.java:1371)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
   at com.sun.proxy.$Proxy22.delete(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
   at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
   at com.sun.proxy.$Proxy23.delete(Unknown Source)
   at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1711)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)
 {code}
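
As an aside, one possible way to make {{setup()}} independent of leftover snapshots (a sketch only, not necessarily the approach in the attached patches; it assumes the {{parent}} path field is made reassignable and reuses the test's existing {{fs}} field and imports) is to give each test its own working directory:

{code}
  @Rule
  public final org.junit.rules.TestName testName = new org.junit.rules.TestName();

  @Before
  public void setup() throws IOException {
    // Per-test directory: a snapshot left behind by one test can no longer
    // make the shared delete of /test fail for all subsequent tests.
    parent = new Path("/test-" + testName.getMethodName());
    fs.delete(parent, true);
    fs.mkdirs(parent);
  }
{code}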



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6672) Regression with hdfs oiv tool

2015-07-08 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14618927#comment-14618927
 ] 

Chris Nauroth commented on HDFS-6672:
-

Hi [~eddyxu].  This is a master jira with a single sub-task that has been 
resolved for several months.  Is it time to close this too, or do you expect 
additional work later?  Thanks!

 Regression with hdfs oiv tool
 -

 Key: HDFS-6672
 URL: https://issues.apache.org/jira/browse/HDFS-6672
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
  Labels: patch, regression, tools

 Because the fsimage format changed from Writable encoding to ProtocolBuffer, 
 a new {{OIV}} tool was written. However, it lacks a few features that existed in 
 the old {{OIV}} tool, such as a _Delimited_ processor. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-6672) Regression with hdfs oiv tool

2015-07-08 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu resolved HDFS-6672.
-
   Resolution: Fixed
Fix Version/s: 2.7.0

Hi, [~cnauroth]. Thanks for bringing this up.

I have a few other {{oiv}}-related JIRAs under the umbrella JIRA HDFS-8061. I think 
we can close this JIRA for now.

 Regression with hdfs oiv tool
 -

 Key: HDFS-6672
 URL: https://issues.apache.org/jira/browse/HDFS-6672
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
  Labels: patch, regression, tools
 Fix For: 2.7.0


 Because the fsimage format changed from Writable encoding to ProtocolBuffer, 
 a new {{OIV}} tool was written. However, it lacks a few features that existed in 
 the old {{OIV}} tool, such as a _Delimited_ processor. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8716) introduce a new config specifically for safe mode block count

2015-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619009#comment-14619009
 ] 

Hadoop QA commented on HDFS-8716:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  17m 59s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 37s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 43s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 15s | The applied patch generated  1 
new checkstyle issues (total was 676, now 676). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 16s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 16s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 158m  4s | Tests passed in hadoop-hdfs. 
|
| | | 204m 42s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12744216/HDFS-8716.6.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / bd4e109 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11624/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11624/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11624/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11624/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11624/console |


This message was automatically generated.

 introduce a new config specifically for safe mode block count
 -

 Key: HDFS-8716
 URL: https://issues.apache.org/jira/browse/HDFS-8716
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Chang Li
Assignee: Chang Li
 Attachments: HDFS-8716.1.patch, HDFS-8716.2.patch, HDFS-8716.3.patch, 
 HDFS-8716.4.patch, HDFS-8716.5.patch, HDFS-8716.6.patch


 During startup, the namenode waits for n replicas of each block to be 
 reported by datanodes before exiting safe mode. Currently n is tied to 
 the min replicas config. We could set min replicas to more than one, but we 
 might want to exit safe mode as soon as each block has one replica reported. 
 This can be worked out by introducing a new config variable for the safe mode 
 block count.
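
A rough sketch of how such a decoupling could look on the NameNode side (the key name and placement are assumptions for illustration; they are not taken from the attached patches):

{code}
// Hypothetical new key, falling back to the existing min-replication setting
// so default behaviour is unchanged when the key is not configured.
public static final String DFS_NAMENODE_SAFEMODE_REPLICATION_MIN_KEY =
    "dfs.namenode.safemode.replication.min";

// Roughly, where SafeModeInfo currently reads the min-replication config:
this.safeReplication = conf.getInt(
    DFS_NAMENODE_SAFEMODE_REPLICATION_MIN_KEY,
    conf.getInt(DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY,
        DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_DEFAULT));
{code}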



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8726) Move protobuf files that define the client-sever protocols to hdfs-client

2015-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619033#comment-14619033
 ] 

Hudson commented on HDFS-8726:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8133 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8133/])
HDFS-8726. Move protobuf files that define the client-sever protocols to 
hdfs-client. Contributed by Haohui Mai. (wheat9: rev 
fc6182d5ed92ac70de1f4633edd5265b7be1a8dc)
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientDatanodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/xattr.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/editlog.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/datatransfer.proto
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/inotify.proto
* hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/encryption.proto
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/acl.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientDatanodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/encryption.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/acl.proto
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/xattr.proto


 Move protobuf files that define the client-sever protocols to hdfs-client
 -

 Key: HDFS-8726
 URL: https://issues.apache.org/jira/browse/HDFS-8726
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.8.0

 Attachments: HDFS-8726.000.patch, HDFS-8726.001.patch


 The protobuf files that define the RPC protocols between the HDFS clients 
 and servers currently sit in the hdfs package. They should be moved to the 
 hdfs-client package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8729) Fix testTruncateWithDataNodesRestartImmediately occasionally failed

2015-07-08 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619035#comment-14619035
 ] 

Jing Zhao commented on HDFS-8729:
-

Whether or not a block report is triggered before restarting the DataNodes may 
exercise different code paths: if the DNs send reports to the NN before restarting, 
it is very possible that the truncate completes before the restart. Otherwise the 
recovery process may happen after the DNs restart. In these two scenarios the block 
replicas reported by the DNs, and the block info stored in the NN, can be in 
different states when the restarted DNs send their first block reports to the NN.

In my test it looks like the reason for the timeout is a race in the block 
recovery process: the second DN sends its block report after the block truncation 
is finished, so its replica is marked as corrupt. However, the replication 
monitor cannot schedule an extra replica because there are only 3 datanodes in 
the test. So maybe a quick fix is to change the total number of DNs from 3 to 4 
(see the sketch below). What do you think, Walter?
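
A minimal sketch of that quick fix, assuming the DataNode count is set through the MiniDFSCluster builder in the test setup:

{code}
// was 3: the replication monitor had no spare DN on which to place a
// replacement replica once one replica was marked corrupt after the
// truncate recovery.
final int numDataNodes = 4;
cluster = new MiniDFSCluster.Builder(conf)
    .numDataNodes(numDataNodes)
    .build();
cluster.waitActive();
{code}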

 Fix testTruncateWithDataNodesRestartImmediately occasionally failed
 ---

 Key: HDFS-8729
 URL: https://issues.apache.org/jira/browse/HDFS-8729
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Walter Su
Assignee: Walter Su
Priority: Minor
 Attachments: HDFS-8729.01.patch


 https://builds.apache.org/job/PreCommit-HDFS-Build/11449/testReport/
 https://builds.apache.org/job/PreCommit-HDFS-Build/11593/testReport/
 https://builds.apache.org/job/PreCommit-HDFS-Build/11596/testReport/
 https://builds.apache.org/job/PreCommit-HDFS-Build/11599/testReport/
 {noformat}
 java.util.concurrent.TimeoutException: Timed out waiting for 
 /test/testTruncateWithDataNodesRestartImmediately to reach 3 replicas
   at 
 org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:761)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateWithDataNodesRestartImmediately(TestFileTruncate.java:814)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8728) Erasure coding: revisit and simplify BlockInfoStriped and INodeFile

2015-07-08 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8728:

Attachment: Merge-8-inodeFile.patch
Merge-7-replicationMonitor.patch
Merge-6-locatedStripedBlock.patch
Merge-5-blockPlacementPolicies.patch
Merge-4-blockmanagement.patch
Merge-3-blockInfo.patch
Merge-2-ecZones.patch
Merge-1-codec.patch

Attaching sub-patches 1~8 from the consolidated HDFS-7285 patch as background 
of this proposed change. Note that they are trunk-based. 

The root of this proposed change is the new {{BlockInfo}} hierarchy introduced 
by HDFS-8499. Based on that, we are able to simplify striped block handling in 
several ways:
# {{BlockUCContiguous}} and {{BlockInfoUCStriped}} can now share the UC logic. 
This can be seen in {{Merge-3-blockInfo.patch}}. That sub-patch also separates 
out a new {{StripedBlockStorageOp}} class to handle striping-specific logic.
# By keeping {{BlockInfoUC}} as a subclass of {{BlockInfo}} (instead of a 
separate interface as in the branch), we can avoid changing trunk code that 
relies on this relationship. This can be seen in 
{{Merge-4-blockmanagement.patch}} (Allocate and manage striped blocks in 
NameNode blockmanagement module.). With this change and other pre-merge 
efforts (HDFS-8487, HDFS-8608, HDFS-8623), the [rebased consolidated patch | 
https://issues.apache.org/jira/secure/attachment/12744101/Consolidated-20150707.patch]
 has far fewer intrusive changes to {{blockmanagement}} and {{namenode}} 
modules than the current HDFS-7285 consolidated patch:
{code}
Current HDFS-7285: 
2532 insertions(+), 1156 deletions(-) in blockmanagement
1826 insertions(+), 444 deletions(-) in namenode

Rebased (including this proposed simplification):
1251 insertions(+), 201 deletions(-) in blockmanagement
1324 insertions(+), 168 deletions(-) in namenode
{code}
# As we [discussed | 
https://issues.apache.org/jira/browse/HDFS-7285?focusedCommentId=14600362page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14600362]
 under HDFS-7285, this new {{BlockInfo}} hierarchy also requires us to unify 
contiguous and striped blocks handling along the direction of HDFS-8058. I 
think we should also take this chance to reach a conclusion on this issue. My 
current thought is that we can take the HDFS-8058 approach to avoid duplicating 
code, and thoroughly address type safety as a follow-on (some thoughts can be 
found on HDFS-8655).

The attached sub-patches 1, 2, 5, 6, 7 are mostly the same as the branch; 3, 4, 
8 contain the proposed simplification and should be more thoroughly reviewed 
([~andrew.wang]: we went over these ideas in offline discussions; could you 
take a look at the patches?). Per Jing's suggestion I'm also working on 
generating a patch against the current HDFS-7285 branch.
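
For readers following along, a standalone skeleton of the hierarchy described above (class names mirror the discussion, bodies and details are omitted, so this is not the patch itself):

{code}
// Common bookkeeping shared by all block types.
abstract class BlockInfo { /* replication, storage triplets, ... */ }

class BlockInfoContiguous extends BlockInfo { }

class BlockInfoStriped extends BlockInfo {
  // Striping-specific logic factored out so it can be shared with the UC variant.
  final StripedBlockStorageOp stripedOp = new StripedBlockStorageOp();
}

// Under-construction blocks stay subclasses of BlockInfo (not a separate
// interface), so trunk code relying on that relationship is untouched.
abstract class BlockInfoUnderConstruction extends BlockInfo { /* shared UC logic */ }

class BlockInfoUCContiguous extends BlockInfoUnderConstruction { }

class BlockInfoUCStriped extends BlockInfoUnderConstruction {
  final StripedBlockStorageOp stripedOp = new StripedBlockStorageOp();
}

class StripedBlockStorageOp { /* cell size, data/parity counts, index math */ }
{code}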

 Erasure coding: revisit and simplify BlockInfoStriped and INodeFile
 ---

 Key: HDFS-8728
 URL: https://issues.apache.org/jira/browse/HDFS-8728
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: Merge-1-codec.patch, Merge-2-ecZones.patch, 
 Merge-3-blockInfo.patch, Merge-4-blockmanagement.patch, 
 Merge-5-blockPlacementPolicies.patch, Merge-6-locatedStripedBlock.patch, 
 Merge-7-replicationMonitor.patch, Merge-8-inodeFile.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8728) Erasure coding: revisit and simplify BlockInfoStriped and INodeFile

2015-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619066#comment-14619066
 ] 

Hadoop QA commented on HDFS-8728:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12744285/HDFS-8728.00.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle shellcheck |
| git revision | trunk / fc6182d |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11627/console |


This message was automatically generated.

 Erasure coding: revisit and simplify BlockInfoStriped and INodeFile
 ---

 Key: HDFS-8728
 URL: https://issues.apache.org/jira/browse/HDFS-8728
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8728.00.patch, Merge-1-codec.patch, 
 Merge-2-ecZones.patch, Merge-3-blockInfo.patch, 
 Merge-4-blockmanagement.patch, Merge-5-blockPlacementPolicies.patch, 
 Merge-6-locatedStripedBlock.patch, Merge-7-replicationMonitor.patch, 
 Merge-8-inodeFile.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8726) Move protobuf files that define the client-sever protocols to hdfs-client

2015-07-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8726:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks Yi for the reviews.

 Move protobuf files that define the client-sever protocols to hdfs-client
 -

 Key: HDFS-8726
 URL: https://issues.apache.org/jira/browse/HDFS-8726
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.8.0

 Attachments: HDFS-8726.000.patch, HDFS-8726.001.patch


 The protobuf files that define the RPC protocols between the HDFS clients 
 and servers currently sit in the hdfs package. They should be moved to the 
 hdfs-client package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8563) Erasure Coding: fsck handles file smaller than a full stripe

2015-07-08 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619056#comment-14619056
 ] 

Jing Zhao commented on HDFS-8563:
-

Forgot to include the attachment?

 Erasure Coding: fsck handles file smaller than a full stripe
 

 Key: HDFS-8563
 URL: https://issues.apache.org/jira/browse/HDFS-8563
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-8563-HDFS-7285.01.patch


 Uploaded a small file. Fsck shows it as UNRECOVERABLE, which is not correct.
 {noformat}
 Erasure Coded Block Groups:
  Total size:1366 B
  Total files:   1
  Total block groups (validated):1 (avg. block group size 1366 B)
   
   UNRECOVERABLE BLOCK GROUPS:   1 (100.0 %)
   MIN REQUIRED EC BLOCK:6
   
  Minimally erasure-coded block groups:  0 (0.0 %)
  Over-erasure-coded block groups:   0 (0.0 %)
  Under-erasure-coded block groups:  1 (100.0 %)
  Unsatisfactory placement block groups: 0 (0.0 %)
  Default schema:RS-6-3
  Average block group size:  4.0
  Missing block groups:  0
  Corrupt block groups:  0
  Missing ec-blocks: 5 (55.57 %)
 {noformat}
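
For context, the numbers above are consistent with fsck comparing the reported internal blocks against the full RS-6-3 width (9) instead of the width a file this small can actually have. A sketch of the expected-count arithmetic (where this belongs in fsck is an assumption, not the patch itself):

{code}
// RS-6-3, 64 KB cells, 1366 B file (the report above).
int numDataBlocks = 6, numParityBlocks = 3;
long cellSize = 64 * 1024, fileSize = 1366;

// Only cells that actually hold data need an internal data block.
int dataBlocksWithData = (int) Math.min(numDataBlocks,
    (fileSize + cellSize - 1) / cellSize);                          // = 1
int expectedInternalBlocks = dataBlocksWithData + numParityBlocks;  // = 4

// fsck should treat 4 as the required count here; using numDataBlocks (6) as
// the minimum is what makes the group show up as UNRECOVERABLE with
// "Missing ec-blocks: 5" (9 expected - 4 present).
{code}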



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8728) Erasure coding: revisit and simplify BlockInfoStriped and INodeFile

2015-07-08 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8728:

Status: Patch Available  (was: Open)

Submitting consolidated patches 1~8 to trigger Jenkins.

 Erasure coding: revisit and simplify BlockInfoStriped and INodeFile
 ---

 Key: HDFS-8728
 URL: https://issues.apache.org/jira/browse/HDFS-8728
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: Merge-1-codec.patch, Merge-2-ecZones.patch, 
 Merge-3-blockInfo.patch, Merge-4-blockmanagement.patch, 
 Merge-5-blockPlacementPolicies.patch, Merge-6-locatedStripedBlock.patch, 
 Merge-7-replicationMonitor.patch, Merge-8-inodeFile.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8728) Erasure coding: revisit and simplify BlockInfoStriped and INodeFile

2015-07-08 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8728:

Attachment: HDFS-8728.00.patch

 Erasure coding: revisit and simplify BlockInfoStriped and INodeFile
 ---

 Key: HDFS-8728
 URL: https://issues.apache.org/jira/browse/HDFS-8728
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8728.00.patch, Merge-1-codec.patch, 
 Merge-2-ecZones.patch, Merge-3-blockInfo.patch, 
 Merge-4-blockmanagement.patch, Merge-5-blockPlacementPolicies.patch, 
 Merge-6-locatedStripedBlock.patch, Merge-7-replicationMonitor.patch, 
 Merge-8-inodeFile.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8728) Erasure coding: revisit and simplify BlockInfoStriped and INodeFile

2015-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619057#comment-14619057
 ] 

Hadoop QA commented on HDFS-8728:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12744285/HDFS-8728.00.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle shellcheck |
| git revision | trunk / fc6182d |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11626/console |


This message was automatically generated.

 Erasure coding: revisit and simplify BlockInfoStriped and INodeFile
 ---

 Key: HDFS-8728
 URL: https://issues.apache.org/jira/browse/HDFS-8728
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8728.00.patch, Merge-1-codec.patch, 
 Merge-2-ecZones.patch, Merge-3-blockInfo.patch, 
 Merge-4-blockmanagement.patch, Merge-5-blockPlacementPolicies.patch, 
 Merge-6-locatedStripedBlock.patch, Merge-7-replicationMonitor.patch, 
 Merge-8-inodeFile.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8728) Erasure coding: revisit and simplify BlockInfoStriped and INodeFile

2015-07-08 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8728:

Attachment: HDFS-8728.01.patch

Rebased patch.

 Erasure coding: revisit and simplify BlockInfoStriped and INodeFile
 ---

 Key: HDFS-8728
 URL: https://issues.apache.org/jira/browse/HDFS-8728
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8728.00.patch, HDFS-8728.01.patch, 
 Merge-1-codec.patch, Merge-2-ecZones.patch, Merge-3-blockInfo.patch, 
 Merge-4-blockmanagement.patch, Merge-5-blockPlacementPolicies.patch, 
 Merge-6-locatedStripedBlock.patch, Merge-7-replicationMonitor.patch, 
 Merge-8-inodeFile.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8734) Erasure Coding: fix one cell need two packets

2015-07-08 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619467#comment-14619467
 ] 

Jing Zhao commented on HDFS-8734:
-

The analysis makes sense to me, but it looks like we cannot fix the issue in this 
way since the currentPacket variable is shared by all the streamers. BTW, we 
may need to have a streamer[] and a packet[] for DFSStripedOutputStream instead of 
sharing the same variables and repeatedly refreshing their values (see the rough 
sketch below). But that also requires a lot of code refactoring in DFSOutputStream.
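
A very rough sketch of that idea (field and method names are illustrative only, and this is a fragment of DFSStripedOutputStream, not a complete patch):

{code}
// One in-flight packet per streamer instead of a single shared currentPacket
// that is overwritten on every streamer switch.
private final StripedDataStreamer[] streamers;   // one per internal block
private final DFSPacket[] currentPackets;        // parallel to streamers
private int curIdx;

private void setCurrentStreamer(int newIdx) {
  // The partially filled packet of the previous streamer simply stays in
  // currentPackets[curIdx]; nothing has to be flushed or re-created here.
  curIdx = newIdx;
}

private DFSPacket currentPacket() {
  return currentPackets[curIdx];
}
{code}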

 Erasure Coding: fix one cell need two packets
 -

 Key: HDFS-8734
 URL: https://issues.apache.org/jira/browse/HDFS-8734
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-8734.01.patch


 The default WritePacketSize is 64k.
 Currently the default cellSize is 64k.
 We hope one cell consumes exactly one packet. In fact it does not.
 By default,
 chunkSize = 516 (512 data + 4 checksum)
 packetSize = 64k
 chunksPerPacket = 126 (see DFSOutputStream#computePacketChunkSize for 
 details)
 numBytes of data in one packet = 64512
 cellSize = 65536
 When the first packet is full (with 64512 bytes of data), there are still 
 65536 - 64512 = 1024 bytes left.
 {code}
 super.writeChunk(bytes, offset, len, checksum, ckoff, cklen);
 // cell is full and current packet has not been enqueued,
 if (cellFull && currentPacket != null) {
   enqueueCurrentPacketFull();
 }
 {code}
 When the last 1024 bytes of the cell are written, we hit {{cellFull}} and 
 create another packet.
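
The mismatch in compact form (PKT_MAX_HEADER_LEN is assumed to be the usual 33 bytes; the other values come from the description above):

{code}
int bytesPerChecksum = 512, checksumSize = 4;
int chunkSize = bytesPerChecksum + checksumSize;             // 516
int writePacketSize = 64 * 1024;                             // 65536
int maxHeaderLen = 33;                                       // PacketHeader.PKT_MAX_HEADER_LEN (assumed)

int chunksPerPacket = (writePacketSize - maxHeaderLen) / chunkSize;  // 126
int dataBytesPerPacket = chunksPerPacket * bytesPerChecksum;         // 64512

int cellSize = 64 * 1024;                                            // 65536
int leftover = cellSize - dataBytesPerPacket;                // 1024 -> forces a 2nd, tiny packet per cell
{code}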



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7101) Potential null dereference in DFSck#doWork()

2015-07-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7101:
-
Description: 
{code}
String lastLine = null;
int errCode = -1;
try {
  while ((line = input.readLine()) != null) {
...
if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
  errCode = 0;
{code}
If readLine() throws an exception, lastLine may be null, leading to an NPE.

  was:
{code}
String lastLine = null;
int errCode = -1;
try {
  while ((line = input.readLine()) != null) {
...
if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
  errCode = 0;
{code}

If readLine() throws exception, lastLine may be null, leading to NPE.
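
One possible fix, sketched only (not the attached patch): remember the last line actually read and null-check it before the endsWith() calls. It assumes {{out}} is the tool's output stream and that the CORRUPT_STATUS branch mirrors the existing code.

{code}
String line;
String lastLine = null;
int errCode = -1;
try {
  while ((line = input.readLine()) != null) {
    out.println(line);
    lastLine = line;          // only ever holds a line that was actually read
  }
} finally {
  input.close();
}
if (lastLine != null && lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
  errCode = 0;
} else if (lastLine != null && lastLine.endsWith(NamenodeFsck.CORRUPT_STATUS)) {
  errCode = 1;
}
{code}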


 Potential null dereference in DFSck#doWork()
 

 Key: HDFS-7101
 URL: https://issues.apache.org/jira/browse/HDFS-7101
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.1
Reporter: Ted Yu
Assignee: skrho
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-7101_001.patch


 {code}
 String lastLine = null;
 int errCode = -1;
 try {
   while ((line = input.readLine()) != null) {
 ...
 if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
   errCode = 0;
 {code}
 If readLine() throws an exception, lastLine may be null, leading to an NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8737) Implement the Hadoop RPC v9 protocol

2015-07-08 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619325#comment-14619325
 ] 

Haohui Mai commented on HDFS-8737:
--

The v0 patch only includes the RPC implementation. I plan to add gmock-based 
tests in a separate jira.
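
For readers new to the protocol, this is roughly the connection preamble a v9 client writes before its first call, shown here as the Java client does it; the native client in the patch may organize this differently:

{code}
static void writeConnectionHeader(java.io.DataOutputStream out)
    throws java.io.IOException {
  out.write(new byte[] {'h', 'r', 'p', 'c'}); // magic (RpcConstants.HEADER)
  out.write(9);                               // RPC version (RpcConstants.CURRENT_VERSION)
  out.write(0);                               // service class
  out.write(0);                               // auth protocol: 0 = NONE, -33 = SASL
  // Each call that follows is a 4-byte length, then the varint-delimited
  // RpcRequestHeaderProto plus the protobuf-encoded request message.
}
{code}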

 Implement the Hadoop RPC v9 protocol
 

 Key: HDFS-8737
 URL: https://issues.apache.org/jira/browse/HDFS-8737
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-8737.000.patch


 This jira tracks the effort of implementing the Hadoop RPC v9 protocol.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7314) Aborted DFSClient's impact on long running service like YARN

2015-07-08 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-7314:
--
Attachment: HDFS-7314-8.patch

Here is a slightly different version we have deployed on our production 
clusters. It doesn't address all the possible race conditions discussed above, 
but it should take care of the immediate issue.

The question is whether we should use this jira to address these race conditions 
systematically. Getting rid of the LeaseRenewer expiry is one way to tackle that: 
we can just keep LeaseRenewer objects and their threads around once they have 
been created. Thoughts?

 Aborted DFSClient's impact on long running service like YARN
 

 Key: HDFS-7314
 URL: https://issues.apache.org/jira/browse/HDFS-7314
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
Assignee: Ming Ma
  Labels: BB2015-05-TBR
 Attachments: HDFS-7314-2.patch, HDFS-7314-3.patch, HDFS-7314-4.patch, 
 HDFS-7314-5.patch, HDFS-7314-6.patch, HDFS-7314-7.patch, HDFS-7314-8.patch, 
 HDFS-7314.patch


 It happened in a YARN nodemanager scenario, but it could happen to any long 
 running service that uses a cached instance of DistributedFileSystem.
 1. The active NN is under heavy load, so it became unavailable for 10 minutes; 
 any DFSClient request will get a ConnectTimeoutException.
 2. The YARN nodemanager uses DFSClient for certain write operations such as the 
 log aggregator or the shared cache in YARN-1492. The DFSClient used by the YARN 
 NM's renewLease RPC got a ConnectTimeoutException.
 {noformat}
 2014-10-29 01:36:19,559 WARN org.apache.hadoop.hdfs.LeaseRenewer: Failed to 
 renew lease for [DFSClient_NONMAPREDUCE_-550838118_1] for 372 seconds.  
 Aborting ...
 {noformat}
 3. After the DFSClient is in the Aborted state, the YARN NM can't use that cached 
 instance of DistributedFileSystem.
 {noformat}
 2014-10-29 20:26:23,991 INFO 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
  Failed to download rsrc...
 java.io.IOException: Filesystem closed
 at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:727)
 at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1780)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
 at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
 at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:237)
 at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:340)
 at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:57)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 We can make YARN or DFSClient more tolerant of temporary NN unavailability. 
 Given the call stack is YARN -> DistributedFileSystem -> DFSClient, this can 
 be addressed at different layers.
 * YARN closes the DistributedFileSystem object when it receives some well-defined 
 exception. Then the next HDFS call will create a new instance of 
 DistributedFileSystem. We have to fix all the places in YARN, and other HDFS 
 applications need to address this as well (see the sketch after this list).
 * DistributedFileSystem detects an aborted DFSClient and creates a new instance 
 of DFSClient. We will need to fix all the places where DistributedFileSystem 
 calls DFSClient.
 * After DFSClient gets into the Aborted state, it doesn't have to reject all 
 requests; instead it can retry. If the NN is available again, it can transition 
 back to the healthy state.
 Comments?
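
A sketch of the first option as it might look in application code (the exception-message check is an assumption used for illustration; a dedicated exception type would be cleaner):

{code}
FileStatus status;
FileSystem fs = FileSystem.get(conf);
try {
  status = fs.getFileStatus(path);
} catch (IOException e) {
  if ("Filesystem closed".equals(e.getMessage())) {
    // The cached DFSClient was aborted: bypass the cache and retry once.
    fs = FileSystem.newInstance(path.toUri(), conf);
    status = fs.getFileStatus(path);
  } else {
    throw e;
  }
}
{code}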



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8737) Implement the Hadoop RPC v9 protocol

2015-07-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8737:
-
Attachment: HDFS-8737.000.patch

 Implement the Hadoop RPC v9 protocol
 

 Key: HDFS-8737
 URL: https://issues.apache.org/jira/browse/HDFS-8737
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-8737.000.patch


 This jira tracks the effort of implementing the Hadoop RPC v9 protocol.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8719) Erasure Coding: client generates too many small packets when writing parity data

2015-07-08 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619376#comment-14619376
 ] 

Jing Zhao commented on HDFS-8719:
-

Looks like switching the streamer and refreshing {{chunksPerPacket}}/{{packetSize}} 
should always happen together. Do we also need to update the current 
{{writeChunk}} function? Also, shall we put these two ops into the same function 
and always call the combined function (see the sketch after the snippet below)?
{code}
if (cellFull) {
  int next = index + 1;
  //When all data cells in a stripe are ready, we need to encode
  //them and generate some parity cells. These cells will be
  //converted to packets and put to their DataStreamer's queue.
  if (next == numDataBlocks) {
cellBuffers.flipDataBuffers();
writeParityCells();
next = 0;
  }
  setCurrentStreamer(next);
}
{code}
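
A rough sketch of such a combined helper (the method name is illustrative, and the arguments to computePacketChunkSize are assumed to be whatever the existing writeChunk path already uses):

{code}
private void switchStreamer(int newIdx) {
  setCurrentStreamer(newIdx);
  // Recompute packet sizing for the streamer we just switched to, so parity
  // streamers build full-size packets instead of one 512-byte chunk per packet.
  computePacketChunkSize(writePacketSize, bytesPerChecksum);
}
{code}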

 Erasure Coding: client generates too many small packets when writing parity 
 data
 

 Key: HDFS-8719
 URL: https://issues.apache.org/jira/browse/HDFS-8719
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8719-001.patch, HDFS-8719-HDFS-7285-001.patch, 
 HDFS-8719-HDFS-7285-002.patch


 Typically a packet is about 64K, but when writing parity data, many small 
 packets of 512 bytes are generated. This may slow the write speed and 
 increase the network IO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

