[jira] [Commented] (HDFS-8722) Optimize datanode writes for small writes and flushes

2015-07-09 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620533#comment-14620533
 ] 

Kihwal Lee commented on HDFS-8722:
--

The latest patch restores small write/hflush performance to a level comparable 
to pre-HDFS-4660, minus the corruption bug.

Checkstyle:
{noformat}
BlockReceiver.java:470:3: Method length is 300 lines (max allowed is 150)
{noformat}
The method was already longer than the limit. Excluding comments, I removed 5 
lines and added 7 lines.

No new test case is needed. Existing test cases fully exercise the code path 
by performing small writes/flushes or small appends.

 Optimize datanode writes for small writes and flushes
 -

 Key: HDFS-8722
 URL: https://issues.apache.org/jira/browse/HDFS-8722
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Attachments: HDFS-8722.patch, HDFS-8722.v1.patch


 After the data corruption fix in HDFS-4660, the CRC recalculation for a partial 
 chunk is executed more frequently if the client repeatedly writes a few bytes 
 and calls hflush/hsync.  This is because the generic logic forces a CRC 
 recalculation whenever the on-disk data is not CRC chunk-aligned. Prior to 
 HDFS-4660, the datanode blindly accepted whatever CRC the client provided as 
 long as the incoming data was chunk-aligned; this was the source of the 
 corruption.
 We can still optimize for the most common case, where a client repeatedly 
 writes a small number of bytes followed by hflush/hsync with no pipeline 
 recovery or append, by allowing the previous behavior for this specific case. 
 If the incoming data has a duplicate portion that starts at the last 
 chunk boundary before the partial chunk on disk, the datanode can use the 
 checksum supplied by the client without redoing the checksum on its own. 
 This reduces disk reads as well as CPU load for the checksum calculation.
 If the incoming packet data goes back further than the last on-disk chunk 
 boundary, the datanode will still do a recalculation, but this happens only 
 rarely, during pipeline recoveries. Thus the optimization for this specific 
 case should be sufficient to speed up the vast majority of cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8726) Move protobuf files that define the client-sever protocols to hdfs-client

2015-07-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620595#comment-14620595
 ] 

Hudson commented on HDFS-8726:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2178 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2178/])
HDFS-8726. Move protobuf files that define the client-sever protocols to 
hdfs-client. Contributed by Haohui Mai. (wheat9: rev 
fc6182d5ed92ac70de1f4633edd5265b7be1a8dc)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/inotify.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/acl.proto
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientDatanodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/xattr.proto
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientDatanodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/acl.proto
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/datatransfer.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/editlog.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/encryption.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/encryption.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/xattr.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto


 Move protobuf files that define the client-sever protocols to hdfs-client
 -

 Key: HDFS-8726
 URL: https://issues.apache.org/jira/browse/HDFS-8726
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.8.0

 Attachments: HDFS-8726.000.patch, HDFS-8726.001.patch


 The protobuf files that define the RPC protocols between the HDFS clients 
 and servers currently sit in the hdfs package. They should be moved to the 
 hdfs-client package.





[jira] [Commented] (HDFS-8642) Make TestFileTruncate more reliable

2015-07-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620599#comment-14620599
 ] 

Hudson commented on HDFS-8642:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2178 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2178/])
HDFS-8642. Make TestFileTruncate more reliable. (Contributed by Rakesh R) (arp: 
rev 4119ad3112dcfb7286ca68288489bbcb6235cf53)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Make TestFileTruncate more reliable
 ---

 Key: HDFS-8642
 URL: https://issues.apache.org/jira/browse/HDFS-8642
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8642-00.patch, HDFS-8642-01.patch, 
 HDFS-8642-02.patch


 I've observed that the {{TestFileTruncate#setup()}} function has to be improved 
 by making it more independent. Presently, any snapshot-related test failure 
 will affect all subsequent unit test cases. One such error was observed in the 
 [Hadoop-Hdfs-trunk-2163|https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart]
 {code}
 https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart/
 org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted 
 since /test is snapshottable and already has snapshots
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3046)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:939)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2172)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2168)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2166)
   at org.apache.hadoop.ipc.Client.call(Client.java:1440)
   at org.apache.hadoop.ipc.Client.call(Client.java:1371)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
   at com.sun.proxy.$Proxy22.delete(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
   at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
   at com.sun.proxy.$Proxy23.delete(Unknown Source)
   at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1711)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)
 {code}





[jira] [Commented] (HDFS-2956) calling fetchdt without a --renewer argument throws NPE

2015-07-09 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620619#comment-14620619
 ] 

Vinayakumar B commented on HDFS-2956:
-

The findbugs warning is not actually present.
The trunk test failures are not related.
The branch-2 checkstyle issues are also not related.

 calling fetchdt without a --renewer argument throws NPE
 ---

 Key: HDFS-2956
 URL: https://issues.apache.org/jira/browse/HDFS-2956
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Todd Lipcon
Assignee: Vinayakumar B
  Labels: BB2015-05-TBR
 Attachments: HDFS-2956-01.patch, HDFS-2956-02.patch, 
 HDFS-2956.03.patch, HDFS-2956.branch-2.03.patch, HDFS-2956.patch


 If I call {{bin/hdfs fetchdt /tmp/mytoken}} without a {{--renewer foo}} argument, 
 then it will throw a NullPointerException:
 Exception in thread "main" java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:830)
 This is because {{getDelegationToken}} is being called with a null renewer.
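The failure mode is the protobuf translator rejecting a null renewer. A toy illustration of why defaulting the argument avoids the NPE (the class and method names are hypothetical; this is not the actual fetchdt or translator code):

```java
// Toy illustration -- not the actual HDFS code.
public class RenewerGuard {
    /** Mimics the PB translator: protobuf builder setters throw NPE on null. */
    static String buildRequest(String renewer) {
        if (renewer == null) {
            throw new NullPointerException("renewer must not be null");
        }
        return "getDelegationToken(renewer=" + renewer + ")";
    }

    /** Null-safe wrapper: default an absent --renewer to the empty string. */
    static String buildRequestSafe(String renewer) {
        return buildRequest(renewer == null ? "" : renewer);
    }

    public static void main(String[] args) {
        System.out.println(buildRequestSafe(null)); // succeeds instead of throwing
    }
}
```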





[jira] [Commented] (HDFS-8642) Make TestFileTruncate more reliable

2015-07-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620705#comment-14620705
 ] 

Hudson commented on HDFS-8642:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2197 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2197/])
HDFS-8642. Make TestFileTruncate more reliable. (Contributed by Rakesh R) (arp: 
rev 4119ad3112dcfb7286ca68288489bbcb6235cf53)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt







[jira] [Commented] (HDFS-8726) Move protobuf files that define the client-sever protocols to hdfs-client

2015-07-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620701#comment-14620701
 ] 

Hudson commented on HDFS-8726:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2197 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2197/])
HDFS-8726. Move protobuf files that define the client-sever protocols to 
hdfs-client. Contributed by Haohui Mai. (wheat9: rev 
fc6182d5ed92ac70de1f4633edd5265b7be1a8dc)
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/encryption.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/encryption.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/acl.proto
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/inotify.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/editlog.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/xattr.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/xattr.proto
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientDatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientDatanodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/datatransfer.proto
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/acl.proto







[jira] [Commented] (HDFS-8743) Update document for hdfs fetchdt

2015-07-09 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620614#comment-14620614
 ] 

Brahma Reddy Battula commented on HDFS-8743:


[~ajisakaa] This is a dupe of HDFS-8628; it's not merged to branch-2.7.

 Update document for hdfs fetchdt
 

 Key: HDFS-8743
 URL: https://issues.apache.org/jira/browse/HDFS-8743
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula

 Now the {{hdfs fetchdt}} command accepts the following options:
 * --webservice
 * --renewer
 * --cancel
 * --renew
 * --print
 However, only the --webservice option is documented: 
 http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#fetchdt





[jira] [Commented] (HDFS-8726) Move protobuf files that define the client-sever protocols to hdfs-client

2015-07-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620576#comment-14620576
 ] 

Hudson commented on HDFS-8726:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #239 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/239/])
HDFS-8726. Move protobuf files that define the client-sever protocols to 
hdfs-client. Contributed by Haohui Mai. (wheat9: rev 
fc6182d5ed92ac70de1f4633edd5265b7be1a8dc)
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/acl.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/xattr.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/encryption.proto
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/encryption.proto
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientDatanodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/inotify.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientDatanodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/editlog.proto
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/xattr.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/acl.proto
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/datatransfer.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java







[jira] [Commented] (HDFS-8642) Make TestFileTruncate more reliable

2015-07-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620580#comment-14620580
 ] 

Hudson commented on HDFS-8642:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #239 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/239/])
HDFS-8642. Make TestFileTruncate more reliable. (Contributed by Rakesh R) (arp: 
rev 4119ad3112dcfb7286ca68288489bbcb6235cf53)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt







[jira] [Commented] (HDFS-8642) Make TestFileTruncate more reliable

2015-07-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620682#comment-14620682
 ] 

Hudson commented on HDFS-8642:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #249 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/249/])
HDFS-8642. Make TestFileTruncate more reliable. (Contributed by Rakesh R) (arp: 
rev 4119ad3112dcfb7286ca68288489bbcb6235cf53)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt







[jira] [Commented] (HDFS-8726) Move protobuf files that define the client-sever protocols to hdfs-client

2015-07-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620678#comment-14620678
 ] 

Hudson commented on HDFS-8726:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #249 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/249/])
HDFS-8726. Move protobuf files that define the client-server protocols to 
hdfs-client. Contributed by Haohui Mai. (wheat9: rev 
fc6182d5ed92ac70de1f4633edd5265b7be1a8dc)
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/inotify.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/acl.proto
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientDatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/xattr.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientDatanodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/encryption.proto
* hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/datatransfer.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/encryption.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/editlog.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/xattr.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/acl.proto
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml


 Move protobuf files that define the client-server protocols to hdfs-client
 -

 Key: HDFS-8726
 URL: https://issues.apache.org/jira/browse/HDFS-8726
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.8.0

 Attachments: HDFS-8726.000.patch, HDFS-8726.001.patch


 The protobuf files that define the RPC protocols between the HDFS clients 
 and servers currently sit in the hdfs package. They should be moved to the 
 hdfs-client package.





[jira] [Updated] (HDFS-8745) Use Doxygen to generate documents for libhdfspp

2015-07-09 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8745:
-
Summary: Use Doxygen to generate documents for libhdfspp  (was: Use Doxygen 
to generate documents)

 Use Doxygen to generate documents for libhdfspp
 ---

 Key: HDFS-8745
 URL: https://issues.apache.org/jira/browse/HDFS-8745
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor

 This jira proposes to add Doxygen hooks to generate documentation for the 
 library.





[jira] [Commented] (HDFS-8723) Integrate the build infrastructure with hdfs-client

2015-07-09 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620852#comment-14620852
 ] 

Alan Burlison commented on HDFS-8723:
-

Ah OK, thanks. Note I recently made some fairly major changes to the CMake 
infrastructure, see HADOOP-12036. If you have questions about what I did and 
why, please feel free to ask ;-)

 Integrate the build infrastructure with hdfs-client
 ---

 Key: HDFS-8723
 URL: https://issues.apache.org/jira/browse/HDFS-8723
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: HDFS-8707

 Attachments: HDFS-8723.000.patch


 This jira proposes to integrate the build infrastructures of libhdfspp with 
 the one in hdfs-client.





[jira] [Commented] (HDFS-8723) Integrate the build infrastructure with hdfs-client

2015-07-09 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620832#comment-14620832
 ] 

Alan Burlison commented on HDFS-8723:
-

Why is this adding an empty CMakeLists.txt to a directory that apparently 
doesn't contain any native code?

 Integrate the build infrastructure with hdfs-client
 ---

 Key: HDFS-8723
 URL: https://issues.apache.org/jira/browse/HDFS-8723
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: HDFS-8707

 Attachments: HDFS-8723.000.patch


 This jira proposes to integrate the build infrastructures of libhdfspp with 
 the one in hdfs-client.





[jira] [Commented] (HDFS-8723) Integrate the build infrastructure with hdfs-client

2015-07-09 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620840#comment-14620840
 ] 

Haohui Mai commented on HDFS-8723:
--

The empty CMakeLists.txt is a placeholder which minimizes the changesets of 
subsequent patches.

 Integrate the build infrastructure with hdfs-client
 ---

 Key: HDFS-8723
 URL: https://issues.apache.org/jira/browse/HDFS-8723
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: HDFS-8707

 Attachments: HDFS-8723.000.patch


 This jira proposes to integrate the build infrastructures of libhdfspp with 
 the one in hdfs-client.





[jira] [Created] (HDFS-8745) Use Doxygen to generate documents

2015-07-09 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-8745:


 Summary: Use Doxygen to generate documents
 Key: HDFS-8745
 URL: https://issues.apache.org/jira/browse/HDFS-8745
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor


This jira proposes to add Doxygen hooks to generate documentation for the 
library.





[jira] [Commented] (HDFS-8745) Use Doxygen to generate documents for libhdfspp

2015-07-09 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620881#comment-14620881
 ] 

Allen Wittenauer commented on HDFS-8745:


This should probably be a full-blown JIRA rather than hidden as a subtask.

 Use Doxygen to generate documents for libhdfspp
 ---

 Key: HDFS-8745
 URL: https://issues.apache.org/jira/browse/HDFS-8745
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor

 This jira proposes to add Doxygen hooks to generate documentation for the 
 library.





[jira] [Commented] (HDFS-8745) Use Doxygen to generate documents for libhdfspp

2015-07-09 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620885#comment-14620885
 ] 

Allen Wittenauer commented on HDFS-8745:


Also: why?  We just moved almost all of our docs to markdown.  What's the use 
case here?

 Use Doxygen to generate documents for libhdfspp
 ---

 Key: HDFS-8745
 URL: https://issues.apache.org/jira/browse/HDFS-8745
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor

 This jira proposes to add Doxygen hooks to generate documentation for the 
 library.





[jira] [Updated] (HDFS-8563) Erasure Coding: fsck handles file smaller than a full stripe

2015-07-09 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-8563:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7285
   Status: Resolved  (was: Patch Available)

+1. I've committed this to the feature branch. Thanks Walter for the 
contribution!

 Erasure Coding: fsck handles file smaller than a full stripe
 

 Key: HDFS-8563
 URL: https://issues.apache.org/jira/browse/HDFS-8563
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su
 Fix For: HDFS-7285

 Attachments: HDFS-8563-HDFS-7285.01.patch, 
 HDFS-8563-HDFS-7285.02.patch


 Uploaded a small file. Fsck shows it's UNRECOVERABLE. It's not correct.
 {noformat}
 Erasure Coded Block Groups:
  Total size:1366 B
  Total files:   1
  Total block groups (validated):1 (avg. block group size 1366 B)
   
   UNRECOVERABLE BLOCK GROUPS:   1 (100.0 %)
   MIN REQUIRED EC BLOCK:6
   
  Minimally erasure-coded block groups:  0 (0.0 %)
  Over-erasure-coded block groups:   0 (0.0 %)
  Under-erasure-coded block groups:  1 (100.0 %)
  Unsatisfactory placement block groups: 0 (0.0 %)
  Default schema:RS-6-3
  Average block group size:  4.0
  Missing block groups:  0
  Corrupt block groups:  0
  Missing ec-blocks: 5 (55.57 %)
 {noformat}





[jira] [Commented] (HDFS-8732) Erasure Coding: Fail to read a file with corrupted blocks

2015-07-09 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621060#comment-14621060
 ] 

Jing Zhao commented on HDFS-8732:
-

HDFS-8669 can fix this I think.

 Erasure Coding: Fail to read a file with corrupted blocks
 -

 Key: HDFS-8732
 URL: https://issues.apache.org/jira/browse/HDFS-8732
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xinwei Qin 
Assignee: Walter Su
 Attachments: testReadCorruptedData.patch


 In the system test of reading EC files (HDFS-8259), the methods 
 {{testReadCorruptedData*()}} failed to read an EC file with corrupted 
 blocks (some data in several blocks is overwritten, which makes the client 
 get a checksum exception). 
 Exception logs:
 {code}
 java.lang.NullPointerException
 at org.apache.hadoop.hdfs.DFSStripedInputStream$StatefulStripeReader.readChunk(DFSStripedInputStream.java:771)
 at org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readStripe(DFSStripedInputStream.java:623)
 at org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:335)
 at org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:465)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:946)
 at java.io.DataInputStream.read(DataInputStream.java:149)
 at org.apache.hadoop.hdfs.StripedFileTestUtil.verifyStatefulRead(StripedFileTestUtil.java:98)
 at org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.verifyRead(TestReadStripedFileWithDecoding.java:196)
 at org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testOneFileWithBlockCorrupted(TestReadStripedFileWithDecoding.java:246)
 at org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testReadCorruptedData11(TestReadStripedFileWithDecoding.java:114)
 {code}





[jira] [Commented] (HDFS-8679) Move DatasetSpi to new package

2015-07-09 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620952#comment-14620952
 ] 

Arpit Agarwal commented on HDFS-8679:
-

Hi [~anu], thanks for taking a look.

Yes, the intention was for the caller to synchronize multiple invocations. 
{{DataNode#allocateFsDataset}} handles the synchronization. I will add a 
comment to make that clearer.
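
The caller-side synchronization described above can be sketched as follows. The class and member names here are hypothetical stand-ins for illustration only, not the actual {{DataNode}}/{{FsDatasetFactory}} code:

```java
// Sketch: a factory method that is NOT itself synchronized, with the
// caller providing the synchronization (as DataNode#allocateFsDataset
// is said to do). Names below are hypothetical.
public class DatasetFactorySketch {
    private static Object dataset;                 // lazily created instance
    private static final Object lock = new Object();

    // Stand-in for an unsynchronized FsDatasetFactory#newInstance.
    private static Object newInstance() {
        return new Object();
    }

    // Caller-side synchronization: only one thread can allocate at a time,
    // so newInstance() itself does not need to be synchronized.
    static Object allocate() {
        synchronized (lock) {
            if (dataset == null) {
                dataset = newInstance();
            }
            return dataset;
        }
    }

    public static void main(String[] args) {
        // Repeated calls return the same instance.
        System.out.println(allocate() == allocate());
    }
}
```

The design choice is that pushing synchronization to the single call site keeps the factory simple while still guaranteeing a single allocation.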

 Move DatasetSpi to new package
 --

 Key: HDFS-8679
 URL: https://issues.apache.org/jira/browse/HDFS-8679
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HDFS-8679-HDFS-7240.01.patch, 
 HDFS-8679-HDFS-7240.02.patch


 The DatasetSpi and VolumeSpi interfaces are currently in 
 {{org.apache.hadoop.hdfs.server.datanode.fsdataset}}. They can be moved to a 
 new package {{org.apache.hadoop.hdfs.server.datanode.dataset}}.





[jira] [Commented] (HDFS-8679) Move DatasetSpi to new package

2015-07-09 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620955#comment-14620955
 ] 

Anu Engineer commented on HDFS-8679:


[~arpitagarwal] Thanks for the explanation. +1 (non-binding)

 Move DatasetSpi to new package
 --

 Key: HDFS-8679
 URL: https://issues.apache.org/jira/browse/HDFS-8679
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HDFS-8679-HDFS-7240.01.patch, 
 HDFS-8679-HDFS-7240.02.patch


 The DatasetSpi and VolumeSpi interfaces are currently in 
 {{org.apache.hadoop.hdfs.server.datanode.fsdataset}}. They can be moved to a 
 new package {{org.apache.hadoop.hdfs.server.datanode.dataset}}.





[jira] [Commented] (HDFS-8497) ErasureCodingWorker fails to do decode work

2015-07-09 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621012#comment-14621012
 ] 

Jing Zhao commented on HDFS-8497:
-

Looks like we can resolve this jira now?

 ErasureCodingWorker fails to do decode work
 ---

 Key: HDFS-8497
 URL: https://issues.apache.org/jira/browse/HDFS-8497
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8497-HDFS-7285-01.patch


 When I run the unit test in HDFS-8449, it fails due to the decode error in 
 ErasureCodingWorker.





[jira] [Resolved] (HDFS-8497) ErasureCodingWorker fails to do decode work

2015-07-09 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HDFS-8497.
-
Resolution: Duplicate

Thanks for the comment Jing. Closing this as a duplicate of HDFS-8328.

 ErasureCodingWorker fails to do decode work
 ---

 Key: HDFS-8497
 URL: https://issues.apache.org/jira/browse/HDFS-8497
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8497-HDFS-7285-01.patch


 When I run the unit test in HDFS-8449, it fails due to the decode error in 
 ErasureCodingWorker.





[jira] [Commented] (HDFS-8679) Move DatasetSpi to new package

2015-07-09 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620926#comment-14620926
 ] 

Anu Engineer commented on HDFS-8679:


One question: in {{FsDatasetFactory#newInstance}} you seem to have removed the 
{{synchronized}}; just wanted to make sure that is because you are certain it 
is never called from multiple threads. 


 Move DatasetSpi to new package
 --

 Key: HDFS-8679
 URL: https://issues.apache.org/jira/browse/HDFS-8679
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HDFS-8679-HDFS-7240.01.patch, 
 HDFS-8679-HDFS-7240.02.patch


 The DatasetSpi and VolumeSpi interfaces are currently in 
 {{org.apache.hadoop.hdfs.server.datanode.fsdataset}}. They can be moved to a 
 new package {{org.apache.hadoop.hdfs.server.datanode.dataset}}.





[jira] [Commented] (HDFS-8732) Erasure Coding: Fail to read a file with corrupted blocks

2015-07-09 Thread Xinwei Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621593#comment-14621593
 ] 

Xinwei Qin  commented on HDFS-8732:
---

Yes, [~jingzhao], the test passed with the patch in HDFS-8669. I think this 
jira can be closed now.

 Erasure Coding: Fail to read a file with corrupted blocks
 -

 Key: HDFS-8732
 URL: https://issues.apache.org/jira/browse/HDFS-8732
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xinwei Qin 
Assignee: Walter Su
 Attachments: testReadCorruptedData.patch


 In the system test of reading EC files (HDFS-8259), the methods 
 {{testReadCorruptedData*()}} failed to read an EC file with corrupted 
 blocks (some data in several blocks is overwritten, which makes the client 
 get a checksum exception). 
 Exception logs:
 {code}
 java.lang.NullPointerException
 at org.apache.hadoop.hdfs.DFSStripedInputStream$StatefulStripeReader.readChunk(DFSStripedInputStream.java:771)
 at org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readStripe(DFSStripedInputStream.java:623)
 at org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:335)
 at org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:465)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:946)
 at java.io.DataInputStream.read(DataInputStream.java:149)
 at org.apache.hadoop.hdfs.StripedFileTestUtil.verifyStatefulRead(StripedFileTestUtil.java:98)
 at org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.verifyRead(TestReadStripedFileWithDecoding.java:196)
 at org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testOneFileWithBlockCorrupted(TestReadStripedFileWithDecoding.java:246)
 at org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testReadCorruptedData11(TestReadStripedFileWithDecoding.java:114)
 {code}





[jira] [Resolved] (HDFS-8732) Erasure Coding: Fail to read a file with corrupted blocks

2015-07-09 Thread Xinwei Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinwei Qin  resolved HDFS-8732.
---
Resolution: Fixed

 Erasure Coding: Fail to read a file with corrupted blocks
 -

 Key: HDFS-8732
 URL: https://issues.apache.org/jira/browse/HDFS-8732
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xinwei Qin 
Assignee: Walter Su
 Attachments: testReadCorruptedData.patch


 In the system test of reading EC files (HDFS-8259), the methods 
 {{testReadCorruptedData*()}} failed to read an EC file with corrupted 
 blocks (some data in several blocks is overwritten, which makes the client 
 get a checksum exception). 
 Exception logs:
 {code}
 java.lang.NullPointerException
 at org.apache.hadoop.hdfs.DFSStripedInputStream$StatefulStripeReader.readChunk(DFSStripedInputStream.java:771)
 at org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readStripe(DFSStripedInputStream.java:623)
 at org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:335)
 at org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:465)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:946)
 at java.io.DataInputStream.read(DataInputStream.java:149)
 at org.apache.hadoop.hdfs.StripedFileTestUtil.verifyStatefulRead(StripedFileTestUtil.java:98)
 at org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.verifyRead(TestReadStripedFileWithDecoding.java:196)
 at org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testOneFileWithBlockCorrupted(TestReadStripedFileWithDecoding.java:246)
 at org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testReadCorruptedData11(TestReadStripedFileWithDecoding.java:114)
 {code}





[jira] [Resolved] (HDFS-8743) Update document for hdfs fetchdt

2015-07-09 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HDFS-8743.
-
Resolution: Duplicate

 Update document for hdfs fetchdt
 

 Key: HDFS-8743
 URL: https://issues.apache.org/jira/browse/HDFS-8743
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula

 Now hdfs fetchdt command accepts the following options:
 * --webservice
 * --renewer
 * --cancel
 * --renew
 * --print
 However, only --webservice option is documented. 
 http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#fetchdt





[jira] [Commented] (HDFS-8702) Erasure coding: update BlockManager.blockHasEnoughRacks(..) logic for striped block

2015-07-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621736#comment-14621736
 ] 

Hadoop QA commented on HDFS-8702:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m 39s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 38s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 41s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 37s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 23s | The patch appears to introduce 4 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 16s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 173m 16s | Tests failed in hadoop-hdfs. |
| | | 216m  1s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12744365/HDFS-8702-HDFS-7285.00.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / e692c7d |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11649/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11649/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11649/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11649/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11649/console |


This message was automatically generated.

 Erasure coding: update BlockManager.blockHasEnoughRacks(..) logic for striped 
 block
 ---

 Key: HDFS-8702
 URL: https://issues.apache.org/jira/browse/HDFS-8702
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Kai Sasaki
 Attachments: HDFS-8702-HDFS-7285.00.patch


 Currently blockHasEnoughRacks(..) only guarantees 2 racks. The logic needs to 
 be updated for striped blocks.





[jira] [Updated] (HDFS-8677) Ozone: Introduce KeyValueContainerDatasetSpi

2015-07-09 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8677:

Status: Patch Available  (was: Open)

 Ozone: Introduce KeyValueContainerDatasetSpi
 

 Key: HDFS-8677
 URL: https://issues.apache.org/jira/browse/HDFS-8677
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HDFS-8677-HDFS-7240.01.patch


 KeyValueContainerDatasetSpi will be a new interface for Ozone containers, 
 just as FsDatasetSpi is an interface for manipulating HDFS block files.
 The interface will have support for both key-value containers for storing 
 Ozone metadata and blobs for storing user data.





[jira] [Updated] (HDFS-8677) Ozone: Introduce KeyValueContainerDatasetSpi

2015-07-09 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8677:

Attachment: HDFS-8677-HDFS-7240.01.patch

 Ozone: Introduce KeyValueContainerDatasetSpi
 

 Key: HDFS-8677
 URL: https://issues.apache.org/jira/browse/HDFS-8677
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HDFS-8677-HDFS-7240.01.patch


 KeyValueContainerDatasetSpi will be a new interface for Ozone containers, 
 just as FsDatasetSpi is an interface for manipulating HDFS block files.
 The interface will have support for both key-value containers for storing 
 Ozone metadata and blobs for storing user data.





[jira] [Commented] (HDFS-8578) On upgrade, Datanode should process all storage/data dirs in parallel

2015-07-09 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621783#comment-14621783
 ] 

Vinayakumar B commented on HDFS-8578:
-

Thanks [~raju.bairishetti] for posting the metrics.
Good to see the improvements.
Will post an updated patch soon with the configuration for the number of 
parallel threads.

 On upgrade, Datanode should process all storage/data dirs in parallel
 -

 Key: HDFS-8578
 URL: https://issues.apache.org/jira/browse/HDFS-8578
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Raju Bairishetti
Priority: Critical
 Attachments: HDFS-8578-01.patch, HDFS-8578-02.patch, 
 HDFS-8578-branch-2.6.0.patch


 Right now, during upgrades the datanode processes all the storage dirs 
 sequentially. Assuming it takes ~20 minutes to process a single storage dir, a 
 datanode with ~10 disks will take around 3 hours to come up.
 *BlockPoolSliceStorage.java*
 {code}
for (int idx = 0; idx < getNumStorageDirs(); idx++) {
   doTransition(datanode, getStorageDir(idx), nsInfo, startOpt);
   assert getCTime() == nsInfo.getCTime()
       : "Data-node and name-node CTimes must be the same.";
}
 {code}
 It would save a lot of time during major upgrades if the datanode processed 
 all storage dirs/disks in parallel.
 Can we make the datanode process all storage dirs in parallel?
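
 A minimal sketch of the proposed parallelization, assuming a fixed-size 
 thread pool; {{doTransition}} below is a hypothetical stand-in for the real 
 per-directory upgrade work, not the actual BlockPoolSliceStorage method:
 {code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelUpgradeSketch {
    // Hypothetical stand-in for upgrading one storage directory.
    static String doTransition(int dirIndex) {
        return "upgraded-dir-" + dirIndex;
    }

    // Submit every storage dir to a bounded pool and wait for all of them,
    // instead of iterating sequentially.
    static List<String> upgradeAll(int numStorageDirs, int numThreads)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(numThreads);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (int idx = 0; idx < numStorageDirs; idx++) {
                final int dir = idx;
                futures.add(pool.submit(() -> doTransition(dir)));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get()); // wait for each dir; propagate failures
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // 10 disks, 4 worker threads: wall-clock time is bounded by the
        // slowest batch rather than the sum of all dirs.
        System.out.println(upgradeAll(10, 4).size());
    }
}
 {code}
 With disk-bound work the pool size would presumably come from the new 
 configuration key mentioned above rather than being hard-coded.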





[jira] [Commented] (HDFS-8749) Fix findbugs warning in BlockManager.java

2015-07-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621787#comment-14621787
 ] 

Hadoop QA commented on HDFS-8749:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  17m 32s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 49s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 37s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 20s | The applied patch generated  1 
new checkstyle issues (total was 171, now 171). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 19s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings, and fixes 1 pre-existing warnings. |
| {color:green}+1{color} | native |   3m  1s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 166m 10s | Tests failed in hadoop-hdfs. |
| | | 210m 18s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestStandbyIsHot |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12744621/HDFS-8749.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 1a0752d |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11650/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11650/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11650/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11650/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf908.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11650/console |



 Fix findbugs warning in BlockManager.java
 -

 Key: HDFS-8749
 URL: https://issues.apache.org/jira/browse/HDFS-8749
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
Priority: Minor
  Labels: newbie
 Attachments: HDFS-8749.patch, findbugs.png


 {code:title=BlockManager#checkBlocksProperlyReplicated}
 final BlockInfoUnderConstruction uc =
 (BlockInfoUnderConstruction)b;
 {code}
 {{uc}} is never used, and this dead store causes a findbugs warning.
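
 For illustration, a minimal self-contained example of the same dead-store 
 pattern Findbugs flags (the names here are hypothetical, not BlockManager 
 code); the fix is simply to delete the unused assignment:
 {code}
public class DeadStoreExample {
    static int compute(Object b) {
        // Dead store: a local assigned from a cast but never read is flagged
        // by Findbugs (e.g. as a dead local store). The fix is to remove it:
        //   final String s = (String) b;   // <-- deleted, as in the patch
        return b.hashCode();
    }

    public static void main(String[] args) {
        // Behavior is unchanged after removing the dead local.
        System.out.println(compute("x") == "x".hashCode());
    }
}
 {code}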





[jira] [Commented] (HDFS-8388) Time and Date format need to be in sync in Namenode UI page

2015-07-09 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621700#comment-14621700
 ] 

Surendra Singh Lilhore commented on HDFS-8388:
--

Thanks [~ajisakaa] for reviewing...

Attached updated patch.. Please review.

 Time and Date format need to be in sync in Namenode UI page
 ---

 Key: HDFS-8388
 URL: https://issues.apache.org/jira/browse/HDFS-8388
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: Surendra Singh Lilhore
Priority: Minor
 Attachments: HDFS-8388-002.patch, HDFS-8388.patch, HDFS-8388_1.patch


 In NameNode UI Page, Date and Time FORMAT  displayed on the page are not in 
 sync currently.
 Started:Wed May 13 12:28:02 IST 2015
 Compiled:23 Apr 2015 12:22:59 
 Block Deletion Start Time   13 May 2015 12:28:02
 We can keep a common format in all the above places.





[jira] [Updated] (HDFS-8677) Ozone: Introduce KeyValueContainerDatasetSpi

2015-07-09 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8677:

Summary: Ozone: Introduce KeyValueContainerDatasetSpi  (was: Ozone: 
Introduce StorageContainerDatasetSpi)

 Ozone: Introduce KeyValueContainerDatasetSpi
 

 Key: HDFS-8677
 URL: https://issues.apache.org/jira/browse/HDFS-8677
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal

 StorageContainerDatasetSpi will be a new interface for Ozone containers, just 
 as FsDatasetSpi is an interface for manipulating HDFS block files.
 The interface will have support for both key-value containers for storing 
 Ozone metadata and blobs for storing user data.





[jira] [Updated] (HDFS-8749) Fix findbugs warning in BlockManager.java

2015-07-09 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8749:

Attachment: findbugs.png

Attaching the screenshot of {{mvn findbugs:gui}}.

 Fix findbugs warning in BlockManager.java
 -

 Key: HDFS-8749
 URL: https://issues.apache.org/jira/browse/HDFS-8749
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Akira AJISAKA
Priority: Minor
 Attachments: findbugs.png


 {code:title=BlockManager#checkBlocksProperlyReplicated}
 final BlockInfoUnderConstruction uc =
 (BlockInfoUnderConstruction)b;
 {code}
 {{uc}} is not needed and this causes findbugs warning.





[jira] [Commented] (HDFS-2956) calling fetchdt without a --renewer argument throws NPE

2015-07-09 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621570#comment-14621570
 ] 

Akira AJISAKA commented on HDFS-2956:
-

bq. Findbugs is not actually there.
Not related to the patch. Filed HDFS-8749.

 calling fetchdt without a --renewer argument throws NPE
 ---

 Key: HDFS-2956
 URL: https://issues.apache.org/jira/browse/HDFS-2956
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Todd Lipcon
Assignee: Vinayakumar B
  Labels: BB2015-05-TBR
 Attachments: HDFS-2956-01.patch, HDFS-2956-02.patch, 
 HDFS-2956.03.patch, HDFS-2956.branch-2.03.patch, HDFS-2956.patch


 If I call bin/hdfs fetchdt /tmp/mytoken without a --renewer foo argument, 
 then it will throw a NullPointerException:
 Exception in thread "main" java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:830)
 this is because getDelegationToken is being called with a null renewer





[jira] [Updated] (HDFS-8749) Fix findbugs warning in BlockManager.java

2015-07-09 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8749:

Affects Version/s: 2.8.0
   Labels: newbie  (was: )

 Fix findbugs warning in BlockManager.java
 -

 Key: HDFS-8749
 URL: https://issues.apache.org/jira/browse/HDFS-8749
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: findbugs.png


 {code:title=BlockManager#checkBlocksProperlyReplicated}
 final BlockInfoUnderConstruction uc =
 (BlockInfoUnderConstruction)b;
 {code}
 {{uc}} is not needed and this causes findbugs warning.





[jira] [Updated] (HDFS-8749) Fix findbugs warning in BlockManager.java

2015-07-09 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8749:

Target Version/s: 2.8.0
  Status: Patch Available  (was: Open)

+1 pending Jenkins. Thanks Brahma.

 Fix findbugs warning in BlockManager.java
 -

 Key: HDFS-8749
 URL: https://issues.apache.org/jira/browse/HDFS-8749
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
Priority: Minor
  Labels: newbie
 Attachments: HDFS-8749.patch, findbugs.png


 {code:title=BlockManager#checkBlocksProperlyReplicated}
 final BlockInfoUnderConstruction uc =
 (BlockInfoUnderConstruction)b;
 {code}
 {{uc}} is not needed and this causes findbugs warning.





[jira] [Created] (HDFS-8750) FIleSystem does not honor Configuration.getClassLoader() while loading FileSystem implementations

2015-07-09 Thread Himanshu (JIRA)
Himanshu created HDFS-8750:
--

 Summary: FIleSystem does not honor Configuration.getClassLoader() 
while loading FileSystem implementations
 Key: HDFS-8750
 URL: https://issues.apache.org/jira/browse/HDFS-8750
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs, HDFS
Reporter: Himanshu


In FileSystem.loadFileSystems(), at 
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L2652

a scheme-to-FileSystem-implementation map is created from the jars 
available on the classpath. It uses Thread.currentThread().getClassLoader() via 
ServiceLoader.load(FileSystem.class).

Instead, loadFileSystems() should take a Configuration as an argument and first 
check whether a classloader is configured via configuration.getClassLoader(); if 
so, ServiceLoader.load(FileSystem.class, 
configuration.getClassLoader()) should be used.
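For illustration only, the two-argument {{ServiceLoader.load}} overload the report refers to can be exercised against a JDK-provided service interface. This is a hypothetical, self-contained sketch: {{CharsetProvider}} stands in for {{FileSystem}} (which would need a full Hadoop classpath), and the classloader here is just the thread-context one, where the proposal would use {{configuration.getClassLoader()}}.

```java
import java.nio.charset.spi.CharsetProvider;
import java.util.ServiceLoader;

public class ServiceLoaderOverloadDemo {
    public static void main(String[] args) {
        // Hypothetical stand-in for configuration.getClassLoader().
        ClassLoader cl = Thread.currentThread().getContextClassLoader();

        // The two-argument overload scans META-INF/services through the
        // supplied loader rather than the thread-context classloader.
        ServiceLoader<CharsetProvider> loader =
            ServiceLoader.load(CharsetProvider.class, cl);

        int count = 0;
        for (CharsetProvider p : loader) {
            count++;
        }
        // The count depends on the runtime; the point is only the API shape.
        System.out.println("providers found: " + count);
    }
}
```

The same shape would let loadFileSystems() honor a caller-supplied classloader while falling back to the current behavior when none is configured.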





[jira] [Updated] (HDFS-8677) Ozone: Introduce KeyValueContainerDatasetSpi

2015-07-09 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8677:

Description: 
KeyValueContainerDatasetSpi will be a new interface for Ozone containers, just 
as FsDatasetSpi is an interface for manipulating HDFS block files.

The interface will have support for both key-value containers for storing Ozone 
metadata and blobs for storing user data.

  was:
StorageContainerDatasetSpi will be a new interface for Ozone containers, just 
as FsDatasetSpi is an interface for manipulating HDFS block files.

The interface will have support for both key-value containers for storing Ozone 
metadata and blobs for storing user data.


 Ozone: Introduce KeyValueContainerDatasetSpi
 

 Key: HDFS-8677
 URL: https://issues.apache.org/jira/browse/HDFS-8677
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal

 KeyValueContainerDatasetSpi will be a new interface for Ozone containers, 
 just as FsDatasetSpi is an interface for manipulating HDFS block files.
 The interface will have support for both key-value containers for storing 
 Ozone metadata and blobs for storing user data.





[jira] [Created] (HDFS-8749) Fix findbugs warning in BlockManager.java

2015-07-09 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-8749:
---

 Summary: Fix findbugs warning in BlockManager.java
 Key: HDFS-8749
 URL: https://issues.apache.org/jira/browse/HDFS-8749
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Akira AJISAKA
Priority: Minor


{code:title=BlockManager#checkBlocksProperlyReplicated}
final BlockInfoUnderConstruction uc =
(BlockInfoUnderConstruction)b;
{code}
{{uc}} is not needed and this causes findbugs warning.
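For context, the fix FindBugs expects for a dead local store like this is simply deleting the unused variable. The following is a hypothetical standalone sketch, not the Hadoop source: {{Base}}/{{Derived}} stand in for BlockInfo/BlockInfoUnderConstruction.

```java
class Base {}

class Derived extends Base {
}

public class DeadStoreDemo {
    // Before: the cast result is stored in a local that is never read,
    // which FindBugs flags as a dead local store (DLS_DEAD_LOCAL_STORE).
    static boolean before(Base b) {
        final Derived uc = (Derived) b; // flagged: 'uc' is never used
        return true;
    }

    // After: drop the unused local. Keep a bare cast only if its side
    // effect (ClassCastException on a wrong runtime type) is wanted.
    static boolean after(Base b) {
        return true;
    }

    public static void main(String[] args) {
        System.out.println(before(new Derived()) && after(new Derived()));
    }
}
```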





[jira] [Commented] (HDFS-8749) Fix findbugs warning in BlockManager.java

2015-07-09 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621565#comment-14621565
 ] 

Akira AJISAKA commented on HDFS-8749:
-

It was fixed as a part of HDFS-8652 but it was reverted.

 Fix findbugs warning in BlockManager.java
 -

 Key: HDFS-8749
 URL: https://issues.apache.org/jira/browse/HDFS-8749
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Akira AJISAKA
Priority: Minor

 {code:title=BlockManager#checkBlocksProperlyReplicated}
 final BlockInfoUnderConstruction uc =
 (BlockInfoUnderConstruction)b;
 {code}
 {{uc}} is not needed and this causes findbugs warning.





[jira] [Commented] (HDFS-8058) Erasure coding: use BlockInfo[] for both striped and contiguous blocks in INodeFile

2015-07-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621575#comment-14621575
 ] 

Hadoop QA commented on HDFS-8058:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m 15s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 5 new or modified test files. |
| {color:green}+1{color} | javac |   7m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 42s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 14s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 44s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 25s | The patch appears to introduce 4 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 18s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 171m 11s | Tests failed in hadoop-hdfs. |
| | | 213m 34s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.qjournal.client.TestQuorumJournalManager |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12744415/HDFS-8058-HDFS-7285.004.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / e692c7d |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11648/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11648/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11648/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11648/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11648/console |


This message was automatically generated.

 Erasure coding: use BlockInfo[] for both striped and contiguous blocks in 
 INodeFile
 ---

 Key: HDFS-8058
 URL: https://issues.apache.org/jira/browse/HDFS-8058
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Yi Liu
Assignee: Zhe Zhang
 Attachments: HDFS-8058-HDFS-7285.003.patch, 
 HDFS-8058-HDFS-7285.004.patch, HDFS-8058.001.patch, HDFS-8058.002.patch


 This JIRA is to use {{BlockInfo[] blocks}} for both striped and contiguous 
 blocks in INodeFile.
 Currently {{FileWithStripedBlocksFeature}} keeps separate list for striped 
 blocks, and the methods there duplicate with those in INodeFile, and current 
 code need to judge {{isStriped}} then do different things. Also if file is 
 striped, the {{blocks}} in INodeFile occupy a reference memory space.
 These are not necessary, and we can use the same {{blocks}} to make code more 
 clear.
 I keep {{FileWithStripedBlocksFeature}} as empty for follow use: I will file 
 a new JIRA to move {{dataBlockNum}} and {{parityBlockNum}} from 
 *BlockInfoStriped* to INodeFile, since ideally they are the same for all 
 striped blocks in a file, and store them in block will waste NN memory.





[jira] [Commented] (HDFS-8287) DFSStripedOutputStream.writeChunk should not wait for writing parity

2015-07-09 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621577#comment-14621577
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8287:
---

When a striping cell is full, the parity packets are computed by the user 
client thread, so the user client cannot continue to write data.  It has to 
wait until all parity packets are generated and enqueued.

Instead of generating parity packets on the user client thread, one of the 
parity streamers (the fastest one) could generate them.  
Then the user client can continue to write.  Of course, we need more buffer to 
store one or more old striping cells, and the buffer cannot be released until 
the parity packets are generated.

 DFSStripedOutputStream.writeChunk should not wait for writing parity 
 -

 Key: HDFS-8287
 URL: https://issues.apache.org/jira/browse/HDFS-8287
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Kai Sasaki

 When a stripping cell is full, writeChunk computes and generates parity 
 packets.  It sequentially calls waitAndQueuePacket so that user client cannot 
 continue to write data until it finishes.
 We should allow user client to continue writing instead but not blocking it 
 when writing parity.





[jira] [Commented] (HDFS-8388) Time and Date format need to be in sync in Namenode UI page

2015-07-09 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621647#comment-14621647
 ] 

Akira AJISAKA commented on HDFS-8388:
-

One comment from me.
bq. The applied patch generated 1 release audit warnings.
We need to add a line to the pom.xml not to check moment.js.
{code:title=hadoop-hdfs-project/hadoop-hdfs/pom.xml}
  <plugin>
    <groupId>org.apache.rat</groupId>
    <artifactId>apache-rat-plugin</artifactId>
    <configuration>
      <excludes>
        <exclude>CHANGES.txt</exclude>
        ...
{code}

 Time and Date format need to be in sync in Namenode UI page
 ---

 Key: HDFS-8388
 URL: https://issues.apache.org/jira/browse/HDFS-8388
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: Surendra Singh Lilhore
Priority: Minor
 Attachments: HDFS-8388.patch, HDFS-8388_1.patch


 In NameNode UI Page, Date and Time FORMAT  displayed on the page are not in 
 sync currently.
 Started:Wed May 13 12:28:02 IST 2015
 Compiled:23 Apr 2015 12:22:59 
 Block Deletion Start Time   13 May 2015 12:28:02
 We can keep a common format in all the above places.





[jira] [Commented] (HDFS-2956) calling fetchdt without a --renewer argument throws NPE

2015-07-09 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621588#comment-14621588
 ] 

Akira AJISAKA commented on HDFS-2956:
-

Thanks [~vinayrpet] for creating the patch for branch-2. Mostly looks good to 
me. I noticed there is a difference in the test between trunk and branch-2.
{code:title=branch-2}
  // make sure we got back exactly the 1 token we expected
  assertTrue(itr.hasNext());
  assertNotNull("Token without renewer shouldn't be null", itr.next());
  assertTrue(!itr.hasNext());
{code}
{code:title=trunk}
 assertTrue("token not exist error", itr.hasNext());
 assertNotNull("Token should be there without renewer", itr.next());
{code}
I'm thinking we can add {{assertTrue(!itr.hasNext())}} to the trunk as well.
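The combined check being discussed (exactly one token: hasNext, a non-null next, then no further element) can be sketched in a self-contained way. {{ExactlyOneDemo}} and {{assertExactlyOne}} are hypothetical names for illustration, not Hadoop test code:

```java
import java.util.Arrays;
import java.util.Iterator;

public class ExactlyOneDemo {
    // Verifies the iterator yields precisely one non-null element,
    // mirroring the hasNext / next / !hasNext assertion sequence above.
    static <T> T assertExactlyOne(Iterator<T> itr) {
        if (!itr.hasNext()) throw new AssertionError("no element");
        T first = itr.next();
        if (first == null) throw new AssertionError("element is null");
        if (itr.hasNext()) throw new AssertionError("more than one element");
        return first;
    }

    public static void main(String[] args) {
        System.out.println(assertExactlyOne(
            Arrays.asList("token").iterator()));
    }
}
```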

 calling fetchdt without a --renewer argument throws NPE
 ---

 Key: HDFS-2956
 URL: https://issues.apache.org/jira/browse/HDFS-2956
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Todd Lipcon
Assignee: Vinayakumar B
  Labels: BB2015-05-TBR
 Attachments: HDFS-2956-01.patch, HDFS-2956-02.patch, 
 HDFS-2956.03.patch, HDFS-2956.branch-2.03.patch, HDFS-2956.patch


 If I call bin/hdfs fetchdt /tmp/mytoken without a --renewer foo argument, 
 then it will throw a NullPointerException:
 Exception in thread "main" java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:830)
 this is because getDelegationToken is being called with a null renewer





[jira] [Commented] (HDFS-8743) Update document for hdfs fetchdt

2015-07-09 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621597#comment-14621597
 ] 

Akira AJISAKA commented on HDFS-8743:
-

Thanks [~brahmareddy] for the comment. I'll close this issue.

 Update document for hdfs fetchdt
 

 Key: HDFS-8743
 URL: https://issues.apache.org/jira/browse/HDFS-8743
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula

 Now hdfs fetchdt command accepts the following options:
 * --webservice
 * --renewer
 * --cancel
 * --renew
 * --print
 However, only --webservice option is documented. 
 http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#fetchdt





[jira] [Commented] (HDFS-8628) Update missing command option for fetchdt

2015-07-09 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621625#comment-14621625
 ] 

Akira AJISAKA commented on HDFS-8628:
-

Hi [~vinayrpet], can I backport this to branch-2.7?

 Update missing command option for fetchdt
 -

 Key: HDFS-8628
 URL: https://issues.apache.org/jira/browse/HDFS-8628
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: J.Andreina
Assignee: J.Andreina
 Fix For: 2.8.0

 Attachments: HDFS-8628.1.patch, HDFS-8628.2.patch, HDFS-8628.3.patch, 
 HDFS-8628.4.patch, HDFS-8628.5.patch


 Update missing command option for fetchdt
 *Expected:*
 {noformat}
 fetchdt <opts> <token file>
 Options:
   --webservice <url>  Url to contact NN on (starts with http:// or https://)
   --renewer <name>    Name of the delegation token renewer
   --cancel            Cancel the delegation token
   --renew             Renew the delegation token.  Delegation token must have 
 been fetched using the --renewer <name> option.
   --print             Print the delegation token
 {noformat}
 *Actual:*
 {noformat}
 Usage: hdfs fetchdt [--webservice <namenode_http_addr>] <path>
 COMMAND_OPTION               Description
 --webservice <https_address> use http protocol instead of RPC
 <fileName>                   File name to store the token into.
 {noformat}





[jira] [Reopened] (HDFS-8732) Erasure Coding: Fail to read a file with corrupted blocks

2015-07-09 Thread Xinwei Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinwei Qin  reopened HDFS-8732:
---

 Erasure Coding: Fail to read a file with corrupted blocks
 -

 Key: HDFS-8732
 URL: https://issues.apache.org/jira/browse/HDFS-8732
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xinwei Qin 
Assignee: Walter Su
 Attachments: testReadCorruptedData.patch


 In the system test of reading EC files (HDFS-8259), the methods 
 {{testReadCorruptedData*()}} failed to read an EC file with corrupted 
 blocks (overwriting some data in several blocks makes the client get a 
 checksum exception). 
 Exception logs:
 {code}
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream$StatefulStripeReader.readChunk(DFSStripedInputStream.java:771)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readStripe(DFSStripedInputStream.java:623)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:335)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:465)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:946)
 at java.io.DataInputStream.read(DataInputStream.java:149)
 at 
 org.apache.hadoop.hdfs.StripedFileTestUtil.verifyStatefulRead(StripedFileTestUtil.java:98)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.verifyRead(TestReadStripedFileWithDecoding.java:196)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testOneFileWithBlockCorrupted(TestReadStripedFileWithDecoding.java:246)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testReadCorruptedData11(TestReadStripedFileWithDecoding.java:114)
 {code}





[jira] [Resolved] (HDFS-8732) Erasure Coding: Fail to read a file with corrupted blocks

2015-07-09 Thread Xinwei Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinwei Qin  resolved HDFS-8732.
---
Resolution: Duplicate

 Erasure Coding: Fail to read a file with corrupted blocks
 -

 Key: HDFS-8732
 URL: https://issues.apache.org/jira/browse/HDFS-8732
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xinwei Qin 
Assignee: Walter Su
 Attachments: testReadCorruptedData.patch


 In the system test of reading EC files (HDFS-8259), the methods 
 {{testReadCorruptedData*()}} failed to read an EC file with corrupted 
 blocks (overwriting some data in several blocks makes the client get a 
 checksum exception). 
 Exception logs:
 {code}
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream$StatefulStripeReader.readChunk(DFSStripedInputStream.java:771)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readStripe(DFSStripedInputStream.java:623)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:335)
 at 
 org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:465)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:946)
 at java.io.DataInputStream.read(DataInputStream.java:149)
 at 
 org.apache.hadoop.hdfs.StripedFileTestUtil.verifyStatefulRead(StripedFileTestUtil.java:98)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.verifyRead(TestReadStripedFileWithDecoding.java:196)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testOneFileWithBlockCorrupted(TestReadStripedFileWithDecoding.java:246)
 at 
 org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding.testReadCorruptedData11(TestReadStripedFileWithDecoding.java:114)
 {code}





[jira] [Commented] (HDFS-8749) Fix findbugs warning in BlockManager.java

2015-07-09 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621613#comment-14621613
 ] 

Brahma Reddy Battula commented on HDFS-8749:


[~ajisakaa], I attached the patch; kindly review.

 Fix findbugs warning in BlockManager.java
 -

 Key: HDFS-8749
 URL: https://issues.apache.org/jira/browse/HDFS-8749
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
Priority: Minor
  Labels: newbie
 Attachments: HDFS-8749.patch, findbugs.png


 {code:title=BlockManager#checkBlocksProperlyReplicated}
 final BlockInfoUnderConstruction uc =
 (BlockInfoUnderConstruction)b;
 {code}
 {{uc}} is not needed and this causes findbugs warning.





[jira] [Assigned] (HDFS-8749) Fix findbugs warning in BlockManager.java

2015-07-09 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HDFS-8749:
--

Assignee: Brahma Reddy Battula

 Fix findbugs warning in BlockManager.java
 -

 Key: HDFS-8749
 URL: https://issues.apache.org/jira/browse/HDFS-8749
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
Priority: Minor
  Labels: newbie
 Attachments: HDFS-8749.patch, findbugs.png


 {code:title=BlockManager#checkBlocksProperlyReplicated}
 final BlockInfoUnderConstruction uc =
 (BlockInfoUnderConstruction)b;
 {code}
 {{uc}} is not needed and this causes findbugs warning.





[jira] [Updated] (HDFS-8749) Fix findbugs warning in BlockManager.java

2015-07-09 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8749:
---
Attachment: HDFS-8749.patch

 Fix findbugs warning in BlockManager.java
 -

 Key: HDFS-8749
 URL: https://issues.apache.org/jira/browse/HDFS-8749
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HDFS-8749.patch, findbugs.png


 {code:title=BlockManager#checkBlocksProperlyReplicated}
 final BlockInfoUnderConstruction uc =
 (BlockInfoUnderConstruction)b;
 {code}
 {{uc}} is not needed and this causes findbugs warning.





[jira] [Updated] (HDFS-8744) Erasure Coding: the number of chunks in packet is not updated when writing parity data

2015-07-09 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-8744:

Attachment: HDFS-8744-HDFS-7285-002.patch

Updated the patch according to Jing's review.

 Erasure Coding: the number of chunks in packet is not updated when writing 
 parity data
 --

 Key: HDFS-8744
 URL: https://issues.apache.org/jira/browse/HDFS-8744
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8744-HDFS-7285-001.patch, 
 HDFS-8744-HDFS-7285-002.patch


 The member {{numChunks}} in {{DFSPacket}} is always zero if this packet 
 contains parity data. Calling {{getNumChunks}} may cause potential 
 errors.





[jira] [Updated] (HDFS-8388) Time and Date format need to be in sync in Namenode UI page

2015-07-09 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-8388:
-
Attachment: HDFS-8388-002.patch

 Time and Date format need to be in sync in Namenode UI page
 ---

 Key: HDFS-8388
 URL: https://issues.apache.org/jira/browse/HDFS-8388
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: Surendra Singh Lilhore
Priority: Minor
 Attachments: HDFS-8388-002.patch, HDFS-8388.patch, HDFS-8388_1.patch


 In NameNode UI Page, Date and Time FORMAT  displayed on the page are not in 
 sync currently.
 Started:Wed May 13 12:28:02 IST 2015
 Compiled:23 Apr 2015 12:22:59 
 Block Deletion Start Time   13 May 2015 12:28:02
 We can keep a common format in all the above places.



--


[jira] [Updated] (HDFS-8744) Erasure Coding: the number of chunks in packet is not updated when writing parity data

2015-07-09 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-8744:

Description: The member {{numChunks}} in {{DFSPacket}} is always zero if 
this packet contains parity data. The calling of {{getNumChunks}} may  cause 
potential errors.  (was: The member {{numChunks}} in {{DFSPacket}} is always 
zero if this packet contains parity data. The calling of {{getNumChunks}} may  
cause potential)

 Erasure Coding: the number of chunks in packet is not updated when writing 
 parity data
 --

 Key: HDFS-8744
 URL: https://issues.apache.org/jira/browse/HDFS-8744
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8744-HDFS-7285-001.patch


 The member {{numChunks}} in {{DFSPacket}} is always zero if this packet 
 contains parity data. Calling {{getNumChunks}} may cause potential 
 errors.





[jira] [Updated] (HDFS-8744) Erasure Coding: the number of chunks in packet is not updated when writing parity data

2015-07-09 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-8744:

Description: The member {{numChunks}} in {{DFSPacket}} is always zero if 
this packet contains parity data. The calling of {{getNumChunks}} may  cause 
potential

 Erasure Coding: the number of chunks in packet is not updated when writing 
 parity data
 --

 Key: HDFS-8744
 URL: https://issues.apache.org/jira/browse/HDFS-8744
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8744-HDFS-7285-001.patch


 The member {{numChunks}} in {{DFSPacket}} is always zero if this packet 
 contains parity data. The calling of {{getNumChunks}} may  cause potential





[jira] [Updated] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2015-07-09 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6833:
-
Assignee: Shinichi Yamashita  (was: jiangyu)

 DirectoryScanner should not register a deleting block with memory of DataNode
 -

 Key: HDFS-6833
 URL: https://issues.apache.org/jira/browse/HDFS-6833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.5.0, 2.5.1
Reporter: Shinichi Yamashita
Assignee: Shinichi Yamashita
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-6833-10.patch, HDFS-6833-11.patch, 
 HDFS-6833-12.patch, HDFS-6833-13.patch, HDFS-6833-14.patch, 
 HDFS-6833-15.patch, HDFS-6833-16.patch, HDFS-6833-6-2.patch, 
 HDFS-6833-6-3.patch, HDFS-6833-6.patch, HDFS-6833-7-2.patch, 
 HDFS-6833-7.patch, HDFS-6833.8.patch, HDFS-6833.9.patch, HDFS-6833.patch, 
 HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch


 When a block is deleted in DataNode, the following messages are usually 
 output.
 {code}
 2014-08-07 17:53:11,606 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:11,617 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 However, in the current implementation, DirectoryScanner may run while the 
 DataNode is deleting the block, and the following messages are output.
 {code}
 2014-08-07 17:53:30,519 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:31,426 INFO 
 org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
 BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
 files:0, missing block files:0, missing blocks in memory:1, mismatched 
 blocks:0
 2014-08-07 17:53:31,426 WARN 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
 missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
   getNumBytes() = 21230663
   getBytesOnDisk()  = 21230663
   getVisibleLength()= 21230663
   getVolume()   = /hadoop/data1/dfs/data/current
   getBlockFile()= 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
   unlinked  =false
 2014-08-07 17:53:31,531 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 Information about the block being deleted is re-registered in the DataNode's 
 memory, and when the DataNode sends a block report, the NameNode receives 
 wrong block information.
 For example, when we recommission a node or change the replication factor, 
 this problem may cause the NameNode to delete a valid block as an excess 
 replica, producing under-replicated and missing blocks.
 When the DataNode runs DirectoryScanner, it should not register a block that 
 is being deleted.
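The proposed behavior can be sketched as follows (illustrative names, not the real DataNode classes): the scanner consults the set of blocks already scheduled for asynchronous deletion before re-registering an on-disk replica it does not find in memory.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch, not the real DataNode classes: a block queued for async
// deletion is remembered, and the directory scan refuses to re-register it.
public class ScannerSketch {
  private final Set<Long> deletingBlocks = new HashSet<>(); // queued for deletion
  private final Set<Long> memoryBlocks = new HashSet<>();   // replicas known in memory

  public void scheduleDeletion(long blockId) {
    memoryBlocks.remove(blockId);
    deletingBlocks.add(blockId); // remember until the file is actually gone
  }

  // The scan found a block file on disk that memory does not know about.
  public boolean reconcile(long blockId) {
    if (deletingBlocks.contains(blockId)) {
      return false; // being deleted -- do not add it back to memory
    }
    memoryBlocks.add(blockId);
    return true;
  }

  public boolean inMemory(long blockId) {
    return memoryBlocks.contains(blockId);
  }
}
```

Under this sketch, the race in the quoted log (scan between "Scheduling ... for deletion" and "Deleted ...") no longer re-adds the replica, so the next block report stays consistent.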





[jira] [Updated] (HDFS-8744) Erasure Coding: the number of chunks in packet is not updated when writing parity data

2015-07-09 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-8744:

Status: Patch Available  (was: Open)

 Erasure Coding: the number of chunks in packet is not updated when writing 
 parity data
 --

 Key: HDFS-8744
 URL: https://issues.apache.org/jira/browse/HDFS-8744
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8744-HDFS-7285-001.patch


 The member {{numChunks}} in {{DFSPacket}} is always zero if this packet 
 contains parity data. Calling {{getNumChunks}} may therefore cause errors.





[jira] [Commented] (HDFS-8642) Make TestFileTruncate more reliable

2015-07-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620313#comment-14620313
 ] 

Hudson commented on HDFS-8642:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #251 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/251/])
HDFS-8642. Make TestFileTruncate more reliable. (Contributed by Rakesh R) (arp: 
rev 4119ad3112dcfb7286ca68288489bbcb6235cf53)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Make TestFileTruncate more reliable
 ---

 Key: HDFS-8642
 URL: https://issues.apache.org/jira/browse/HDFS-8642
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8642-00.patch, HDFS-8642-01.patch, 
 HDFS-8642-02.patch


 I've observed that the {{TestFileTruncate#setup()}} function has to be 
 improved by making it more independent. Presently, a failure in any of the 
 snapshot-related tests will affect all the subsequent unit test cases. One 
 such error has been observed in the 
 [Hadoop-Hdfs-trunk-2163|https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart]
 {code}
 https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart/
 org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted 
 since /test is snapshottable and already has snapshots
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3046)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:939)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2172)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2168)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2166)
   at org.apache.hadoop.ipc.Client.call(Client.java:1440)
   at org.apache.hadoop.ipc.Client.call(Client.java:1371)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
   at com.sun.proxy.$Proxy22.delete(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
   at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
   at com.sun.proxy.$Proxy23.delete(Unknown Source)
   at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1711)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)
 {code}





[jira] [Commented] (HDFS-8726) Move protobuf files that define the client-server protocols to hdfs-client

2015-07-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620309#comment-14620309
 ] 

Hudson commented on HDFS-8726:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #251 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/251/])
HDFS-8726. Move protobuf files that define the client-server protocols to 
hdfs-client. Contributed by Haohui Mai. (wheat9: rev 
fc6182d5ed92ac70de1f4633edd5265b7be1a8dc)
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/editlog.proto
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/inotify.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/datatransfer.proto
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/xattr.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/acl.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/xattr.proto
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientDatanodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientDatanodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/encryption.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/acl.proto
* hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/encryption.proto


 Move protobuf files that define the client-server protocols to hdfs-client
 -

 Key: HDFS-8726
 URL: https://issues.apache.org/jira/browse/HDFS-8726
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.8.0

 Attachments: HDFS-8726.000.patch, HDFS-8726.001.patch


 The protobuf files that define the RPC protocols between the HDFS clients 
 and servers currently sit in the hdfs package. They should be moved to the 
 hdfs-client package.





[jira] [Commented] (HDFS-8712) Remove public and abstract modifiers in FsVolumeSpi and FsDatasetSpi

2015-07-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620314#comment-14620314
 ] 

Hudson commented on HDFS-8712:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #251 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/251/])
HDFS-8712. Remove 'public' and 'abstract' modifiers in FsVolumeSpi and 
FsDatasetSpi (Contributed by Lei (Eddy) Xu) (vinayakumarb: rev 
bd4e10900cc53a2768c31cc29fdb3698684bc2a0)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java


 Remove public and abstract modifiers in FsVolumeSpi and FsDatasetSpi
 

 Key: HDFS-8712
 URL: https://issues.apache.org/jira/browse/HDFS-8712
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8712.000.patch, HDFS-8712.001.patch


 In [Java Language Specification 
 9.4|http://docs.oracle.com/javase/specs/jls/se7/html/jls-9.html#jls-9.4]:
 bq. It is permitted, but discouraged as a matter of style, to redundantly 
 specify the public and/or abstract modifier for a method declared in an 
 interface.
 {{FsDatasetSpi}} and {{FsVolumeSpi}} mark methods as public, which causes 
 many warnings in IDEs and {{checkstyle}}.





[jira] [Updated] (HDFS-8729) Fix testTruncateWithDataNodesRestartImmediately occasionally failed

2015-07-09 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8729:

Attachment: HDFS-8729.02.patch

bq. In my test looks like the reason of the timeout is a race scenario in the 
block recovery process: the second dn sends block report after the block 
truncation is finished thus its replica is marked as corrupted. However the 
replication monitor cannot schedule an extra replica because there are only 3 
datanodes in the test. 

You are right. I knew one replica was corrupted, but didn't know it was the 
second one. Thank you for the thorough analysis!

What the 01 patch does is trigger the *second* block report so the corrupted 
block can get deleted on dn1, letting ReplicationMonitor schedule copying the 
block back to dn1.

bq. To trigger block report or not before restarting DataNodes...

That's not what I do. In the 01 patch, checkBlockRecovery(p) makes sure the 
truncation is completed; triggerBlockReports() triggers the *second* block 
report.
{code}
cluster.waitActive();
checkBlockRecovery(p);
...
assertEquals(newBlock.getBlock().getGenerationStamp(),
oldBlock.getBlock().getGenerationStamp() + 1);

+cluster.triggerBlockReports();
 // Wait replicas come to 3
 DFSTestUtil.waitReplication(fs, p, REPLICATION);
{code}

bq. it is very possible that the truncate can be done before the restarting.
That's very unlikely, because {{fs.truncate(p, newLength)}} is non-blocking.
{code}
boolean isReady = fs.truncate(p, newLength); // non-blocking
assertFalse(isReady);

cluster.restartDataNode(dn0, true, true); // shutdown, restart and sends 
registration
cluster.restartDataNode(dn1, true, true); // shutdown, restart and sends 
registration
cluster.waitActive(); // wait until dn0,dn1 got response from NN about the 
registration
// dn0 or dn1 got DNA_RECOVERY command
{code}

bq. So maybe a quick fix is to change the total number of DN from 3 to 4.
It works too, but I prefer my approach, even though with it the time spent in 
DFSTestUtil.waitReplication(..) is 4-6 seconds longer (waiting for the 
deletion and the copy). It's worth it, because the purpose of the test case 
is to schedule block recovery to dn0/dn1, which got restarted; increasing the 
number of DNs would lower that chance.

Uploaded the 02 patch. It adds {{Thread.sleep(2000)}} to make sure it's the 
second BR.

 Fix testTruncateWithDataNodesRestartImmediately occasionally failed
 ---

 Key: HDFS-8729
 URL: https://issues.apache.org/jira/browse/HDFS-8729
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Walter Su
Assignee: Walter Su
Priority: Minor
 Attachments: HDFS-8729.01.patch, HDFS-8729.02.patch


 https://builds.apache.org/job/PreCommit-HDFS-Build/11449/testReport/
 https://builds.apache.org/job/PreCommit-HDFS-Build/11593/testReport/
 https://builds.apache.org/job/PreCommit-HDFS-Build/11596/testReport/
 https://builds.apache.org/job/PreCommit-HDFS-Build/11599/testReport/
 {noformat}
 java.util.concurrent.TimeoutException: Timed out waiting for 
 /test/testTruncateWithDataNodesRestartImmediately to reach 3 replicas
   at 
 org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:761)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateWithDataNodesRestartImmediately(TestFileTruncate.java:814)
 {noformat}





[jira] [Commented] (HDFS-2956) calling fetchdt without a --renewer argument throws NPE

2015-07-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620058#comment-14620058
 ] 

Hadoop QA commented on HDFS-2956:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12744434/HDFS-2956-02.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 63d0365 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11644/console |


This message was automatically generated.

 calling fetchdt without a --renewer argument throws NPE
 ---

 Key: HDFS-2956
 URL: https://issues.apache.org/jira/browse/HDFS-2956
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Todd Lipcon
Assignee: Vinayakumar B
  Labels: BB2015-05-TBR
 Attachments: HDFS-2956-01.patch, HDFS-2956-02.patch, HDFS-2956.patch


 If I call {{bin/hdfs fetchdt /tmp/mytoken}} without a {{--renewer foo}} 
 argument, it throws a NullPointerException:
 Exception in thread "main" java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:830)
 This is because getDelegationToken is being called with a null renewer.





[jira] [Commented] (HDFS-8058) Erasure coding: use BlockInfo[] for both striped and contiguous blocks in INodeFile

2015-07-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14619991#comment-14619991
 ] 

Hadoop QA commented on HDFS-8058:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m 37s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 5 new or modified test files. |
| {color:green}+1{color} | javac |   7m 50s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 50s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 38s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  3s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 30s | The patch appears to introduce 4 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 19s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |   0m 26s | Tests failed in hadoop-hdfs. |
| | |  43m 45s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed build | hadoop-hdfs |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12744415/HDFS-8058-HDFS-7285.004.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 2c494a8 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11642/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11642/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11642/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11642/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11642/console |


This message was automatically generated.

 Erasure coding: use BlockInfo[] for both striped and contiguous blocks in 
 INodeFile
 ---

 Key: HDFS-8058
 URL: https://issues.apache.org/jira/browse/HDFS-8058
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8058-HDFS-7285.003.patch, 
 HDFS-8058-HDFS-7285.004.patch, HDFS-8058.001.patch, HDFS-8058.002.patch


 This JIRA is to use {{BlockInfo[] blocks}} for both striped and contiguous 
 blocks in INodeFile.
 Currently {{FileWithStripedBlocksFeature}} keeps a separate list for striped 
 blocks, its methods duplicate those in INodeFile, and the current code has 
 to check {{isStriped}} and then do different things. Also, if a file is 
 striped, the {{blocks}} field in INodeFile still occupies a reference's 
 worth of memory.
 These are not necessary, and we can use the same {{blocks}} to make the code 
 clearer.
 I keep {{FileWithStripedBlocksFeature}} empty for future use: I will file a 
 new JIRA to move {{dataBlockNum}} and {{parityBlockNum}} from 
 *BlockInfoStriped* to INodeFile, since ideally they are the same for all 
 striped blocks in a file, and storing them per block wastes NN memory.
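The unified layout described above can be sketched like this (types are illustrative, not the real NameNode classes): a single {{BlockInfo[]}} field serves both layouts, with striped-ness a property of the blocks themselves rather than of a separate feature object holding its own list.

```java
// Illustrative types, not the real NameNode classes: one BlockInfo[] field
// serves both layouts; striped-ness is a property of the blocks themselves.
abstract class BlockInfoSketch {
  abstract boolean isStriped();
}

class ContiguousBlockSketch extends BlockInfoSketch {
  boolean isStriped() { return false; }
}

class StripedBlockSketch extends BlockInfoSketch {
  boolean isStriped() { return true; }
}

class INodeFileSketch {
  private BlockInfoSketch[] blocks = new BlockInfoSketch[0]; // shared array

  void addBlock(BlockInfoSketch b) {
    BlockInfoSketch[] next = new BlockInfoSketch[blocks.length + 1];
    System.arraycopy(blocks, 0, next, 0, blocks.length);
    next[blocks.length] = b;
    blocks = next;
  }

  // A file is striped iff its blocks are; no separate feature list needed.
  boolean isStriped() {
    return blocks.length > 0 && blocks[0].isStriped();
  }

  int numBlocks() { return blocks.length; }
}
```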





[jira] [Updated] (HDFS-2956) calling fetchdt without a --renewer argument throws NPE

2015-07-09 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-2956:

Attachment: HDFS-2956-02.patch

IMO, without a renewer it should be possible to fetch a non-renewable 
delegation token, but renewing it should fail.

Also attached a test to verify this.


 calling fetchdt without a --renewer argument throws NPE
 ---

 Key: HDFS-2956
 URL: https://issues.apache.org/jira/browse/HDFS-2956
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Todd Lipcon
Assignee: Vinayakumar B
  Labels: BB2015-05-TBR
 Attachments: HDFS-2956-01.patch, HDFS-2956-02.patch, HDFS-2956.patch


 If I call {{bin/hdfs fetchdt /tmp/mytoken}} without a {{--renewer foo}} 
 argument, it throws a NullPointerException:
 Exception in thread "main" java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:830)
 This is because getDelegationToken is being called with a null renewer.





[jira] [Updated] (HDFS-8719) Erasure Coding: client generates too many small packets when writing parity data

2015-07-09 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8719:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7285
   Status: Resolved  (was: Patch Available)

I've committed this.
Thank [~libo-intel] for contribution!
Also Thank [~jingzhao] for reviewing!

 Erasure Coding: client generates too many small packets when writing parity 
 data
 

 Key: HDFS-8719
 URL: https://issues.apache.org/jira/browse/HDFS-8719
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Fix For: HDFS-7285

 Attachments: HDFS-8719-001.patch, HDFS-8719-HDFS-7285-001.patch, 
 HDFS-8719-HDFS-7285-002.patch, HDFS-8719-HDFS-7285-003.patch


 Typically a packet is about 64K, but when writing parity data, many small 
 512-byte packets are generated. This may slow down writes and increase 
 network IO.
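The idea behind the fix can be sketched as follows (names and constants are illustrative assumptions): accumulate parity chunks into the current packet and enqueue it only when full, instead of sending one packet per 512-byte chunk.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the buffering idea; names and sizes are assumptions.
public class ParityWriterSketch {
  static final int CHUNK_BYTES = 512;
  static final int PACKET_BYTES = 64 * 1024;
  static final int CHUNKS_PER_PACKET = PACKET_BYTES / CHUNK_BYTES; // 128

  private int buffered; // chunks accumulated in the current packet
  private final List<Integer> sentPackets = new ArrayList<>(); // sizes in chunks

  // Instead of enqueueing one packet per 512-byte parity chunk, fill the
  // current packet and send it only when full (or on an explicit flush).
  public void writeParityChunk() {
    buffered++;
    if (buffered == CHUNKS_PER_PACKET) {
      flush();
    }
  }

  public void flush() {
    if (buffered > 0) {
      sentPackets.add(buffered);
      buffered = 0;
    }
  }

  public int packetsSent() {
    return sentPackets.size();
  }
}
```

With this scheme, 256 parity chunks produce 2 full 64K packets rather than 256 tiny ones.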





[jira] [Created] (HDFS-8744) Erasure Coding: the number of chunks in packet is not updated when writing parity data

2015-07-09 Thread Li Bo (JIRA)
Li Bo created HDFS-8744:
---

 Summary: Erasure Coding: the number of chunks in packet is not 
updated when writing parity data
 Key: HDFS-8744
 URL: https://issues.apache.org/jira/browse/HDFS-8744
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo








[jira] [Commented] (HDFS-2956) calling fetchdt without a --renewer argument throws NPE

2015-07-09 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620140#comment-14620140
 ] 

Akira AJISAKA commented on HDFS-2956:
-

Thanks [~vinayrpet] for the clarification and creating the regression test. +1 
pending Jenkins.

 calling fetchdt without a --renewer argument throws NPE
 ---

 Key: HDFS-2956
 URL: https://issues.apache.org/jira/browse/HDFS-2956
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Todd Lipcon
Assignee: Vinayakumar B
  Labels: BB2015-05-TBR
 Attachments: HDFS-2956-01.patch, HDFS-2956-02.patch, 
 HDFS-2956.03.patch, HDFS-2956.patch


 If I call {{bin/hdfs fetchdt /tmp/mytoken}} without a {{--renewer foo}} 
 argument, it throws a NullPointerException:
 Exception in thread "main" java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:830)
 This is because getDelegationToken is being called with a null renewer.





[jira] [Created] (HDFS-8742) Inotify: Support event for OP_TRUNCATE

2015-07-09 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HDFS-8742:


 Summary: Inotify: Support event for OP_TRUNCATE
 Key: HDFS-8742
 URL: https://issues.apache.org/jira/browse/HDFS-8742
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.7.0
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore


Currently inotify does not emit any event for the truncate operation. The NN 
should send an event for truncate.
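A minimal sketch of what the request implies (purely hypothetical names; the real inotify API lives in org.apache.hadoop.hdfs.inotify): the event-type dispatch would gain a truncate case carrying the path and the new file length.

```java
// Purely hypothetical names, not the real inotify API: sketches the event
// dispatch gaining a TRUNCATE case that carries path and new length.
public class InotifySketch {
  enum EventType { CREATE, CLOSE, APPEND, RENAME, METADATA, UNLINK, TRUNCATE }

  static String describe(EventType type, String path, long newLength) {
    if (type == EventType.TRUNCATE) {
      // A truncate event would carry the path and the new file length.
      return "truncate " + path + " to " + newLength + " bytes";
    }
    return type.name().toLowerCase() + " " + path;
  }
}
```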





[jira] [Commented] (HDFS-8719) Erasure Coding: client generates too many small packets when writing parity data

2015-07-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620019#comment-14620019
 ] 

Hadoop QA commented on HDFS-8719:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  16m 13s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 45s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 56s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 14s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 39s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 33s | The patch appears to introduce 4 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 20s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 139m  8s | Tests failed in hadoop-hdfs. |
| | | 183m  3s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.fs.TestUrlStreamHandler |
|   | hadoop.hdfs.server.namenode.TestFileLimit |
|   | hadoop.TestRefreshCallQueue |
|   | hadoop.hdfs.protocolPB.TestPBHelper |
|   | hadoop.cli.TestCryptoAdminCLI |
|   | hadoop.fs.viewfs.TestViewFsWithAcls |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.fs.contract.hdfs.TestHDFSContractDelete |
|   | hadoop.fs.TestFcHdfsSetUMask |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.fs.contract.hdfs.TestHDFSContractOpen |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.fs.contract.hdfs.TestHDFSContractMkdir |
|   | hadoop.fs.contract.hdfs.TestHDFSContractAppend |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.namenode.TestHDFSConcat |
|   | hadoop.fs.TestSymlinkHdfsFileSystem |
|   | hadoop.fs.viewfs.TestViewFsDefaultValue |
|   | hadoop.fs.TestSymlinkHdfsFileContext |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestFSInputChecker |
|   | hadoop.hdfs.server.mover.TestStorageMover |
|   | hadoop.cli.TestAclCLI |
|   | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.fs.contract.hdfs.TestHDFSContractSeek |
|   | hadoop.hdfs.server.namenode.TestFileContextXAttr |
|   | hadoop.hdfs.server.namenode.TestAclConfigFlag |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.tracing.TestTracing |
|   | hadoop.fs.viewfs.TestViewFsWithXAttrs |
|   | hadoop.cli.TestErasureCodingCLI |
|   | hadoop.hdfs.TestPersistBlocks |
|   | hadoop.hdfs.TestDatanodeReport |
|   | hadoop.tools.TestJMXGet |
|   | hadoop.fs.contract.hdfs.TestHDFSContractCreate |
|   | hadoop.hdfs.server.namenode.TestCreateEditsLog |
|   | hadoop.hdfs.TestWriteRead |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.fs.TestEnhancedByteBufferAccess |
|   | hadoop.fs.TestFcHdfsPermission |
|   | hadoop.security.TestRefreshUserMappings |
|   | hadoop.fs.viewfs.TestViewFsHdfs |
|   | hadoop.fs.TestResolveHdfsSymlink |
|   | hadoop.hdfs.server.namenode.metrics.TestNNMetricFilesInGetListingOps |
|   | hadoop.tracing.TestTracingShortCircuitLocalRead |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.tracing.TestTraceAdmin |
|   | hadoop.hdfs.server.namenode.TestNNThroughputBenchmark |
|   | hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs |
|   | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.fs.TestFcHdfsCreateMkdir |
|   | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.hdfs.server.namenode.TestDeadDatanode |
|   | hadoop.hdfs.server.namenode.TestCommitBlockSynchronization |
|   | hadoop.fs.viewfs.TestViewFileSystemWithXAttrs |
|   | hadoop.fs.viewfs.TestViewFileSystemWithAcls |
|   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
|   | hadoop.security.TestPermission |
|   | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
|   | hadoop.fs.contract.hdfs.TestHDFSContractConcat |
|   | hadoop.hdfs.server.namenode.TestFSPermissionChecker |
|   | hadoop.hdfs.server.namenode.TestSecondaryNameNodeUpgrade |

[jira] [Commented] (HDFS-8323) Bump GenerationStamp for write failure in DFSStripedOutputStream

2015-07-09 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620030#comment-14620030
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8323:
---

Good catch!  It does look like the patch removed the setBlockToken call by 
mistake.  Thanks for checking and fixing it.

 Bump GenerationStamp for write failure in DFSStripedOutputStream
 

 Key: HDFS-8323
 URL: https://issues.apache.org/jira/browse/HDFS-8323
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: HDFS-7285

 Attachments: h8323_20150511.patch, h8323_20150511b.patch, 
 h8323_20150512.patch, h8323_20150518.patch, h8323_20150520.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8719) Erasure Coding: client generates too many small packets when writing parity data

2015-07-09 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620073#comment-14620073
 ] 

Walter Su commented on HDFS-8719:
-

bq. Do we also need to update the current writeChunk function? Also shall we 
put these two ops into the same function and always call the combined function?
Good idea. 003 patch did that.
LGTM. +1. Will commit shortly.

 Erasure Coding: client generates too many small packets when writing parity 
 data
 

 Key: HDFS-8719
 URL: https://issues.apache.org/jira/browse/HDFS-8719
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8719-001.patch, HDFS-8719-HDFS-7285-001.patch, 
 HDFS-8719-HDFS-7285-002.patch, HDFS-8719-HDFS-7285-003.patch


 Typically a packet is about 64K, but when writing parity data, many small 
 packets of 512 bytes are generated. This may slow the write speed and 
 increase the network IO.
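The batching fix described above can be sketched in isolation. The class below is a hypothetical illustration, not the actual HDFS-8719 patch (names like PacketBuffer are invented): 512-byte parity cells are accumulated into a packet-sized buffer, and a packet is only emitted when the buffer fills or on an explicit flush, instead of one packet per cell.

```java
import java.util.ArrayList;
import java.util.List;

public class PacketBuffer {
    static final int PACKET_SIZE = 64 * 1024;   // typical HDFS packet payload
    private final byte[] buf = new byte[PACKET_SIZE];
    private int pos = 0;
    // Stand-in for the queue of DFSPackets handed to the streamer.
    final List<Integer> sentPacketSizes = new ArrayList<>();

    // Accumulate one parity cell; flush only when a full packet is ready.
    void writeCell(byte[] cell) {
        int off = 0;
        while (off < cell.length) {
            int n = Math.min(cell.length - off, PACKET_SIZE - pos);
            System.arraycopy(cell, off, buf, pos, n);
            pos += n;
            off += n;
            if (pos == PACKET_SIZE) {
                flushPacket();
            }
        }
    }

    // Emit whatever is buffered as one packet (e.g. at end of stripe).
    void flushPacket() {
        if (pos > 0) {
            sentPacketSizes.add(pos);
            pos = 0;
        }
    }

    public static void main(String[] args) {
        PacketBuffer pb = new PacketBuffer();
        for (int i = 0; i < 200; i++) {
            pb.writeCell(new byte[512]);  // 200 cells = 102400 bytes
        }
        pb.flushPacket();
        System.out.println(pb.sentPacketSizes);  // two packets, not 200
    }
}
```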





[jira] [Created] (HDFS-8743) Update document for hdfs fetchdt

2015-07-09 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-8743:
---

 Summary: Update document for hdfs fetchdt
 Key: HDFS-8743
 URL: https://issues.apache.org/jira/browse/HDFS-8743
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Akira AJISAKA


Currently, the hdfs fetchdt command accepts the following options:
* --webservice
* --renewer
* --cancel
* --renew
* --print

However, only the --webservice option is documented. 
http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#fetchdt





[jira] [Updated] (HDFS-2956) calling fetchdt without a --renewer argument throws NPE

2015-07-09 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-2956:

Attachment: HDFS-2956.03.patch

Last patch was generated from branch-2.

Now attaching the patch for trunk.

 calling fetchdt without a --renewer argument throws NPE
 ---

 Key: HDFS-2956
 URL: https://issues.apache.org/jira/browse/HDFS-2956
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Todd Lipcon
Assignee: Vinayakumar B
  Labels: BB2015-05-TBR
 Attachments: HDFS-2956-01.patch, HDFS-2956-02.patch, 
 HDFS-2956.03.patch, HDFS-2956.patch


 If I call bin/hdfs fetchdt /tmp/mytoken without a --renewer foo argument, 
 then it will throw a NullPointerException:
 Exception in thread main java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:830)
 this is because getDelegationToken is being called with a null renewer
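A minimal sketch of the kind of guard that avoids this NPE. This is a hypothetical illustration (the class name and helper are invented; the real fix lives in the DelegationTokenFetcher patches attached to this issue): the renewer is defaulted before a null value can reach getDelegationToken().

```java
public class FetchdtRenewerGuard {
    // Substitute a sensible default (e.g. the current user) when no
    // --renewer argument was supplied, so the RPC layer never sees null.
    static String resolveRenewer(String renewerArg, String currentUser) {
        return (renewerArg != null) ? renewerArg : currentUser;
    }

    public static void main(String[] args) {
        // No --renewer given: fall back to the current user.
        System.out.println(resolveRenewer(null, "hdfs"));
        // --renewer given: use it as-is.
        System.out.println(resolveRenewer("yarn", "hdfs"));
    }
}
```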





[jira] [Assigned] (HDFS-8743) Update document for hdfs fetchdt

2015-07-09 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HDFS-8743:
--

Assignee: Brahma Reddy Battula

 Update document for hdfs fetchdt
 

 Key: HDFS-8743
 URL: https://issues.apache.org/jira/browse/HDFS-8743
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula

 Currently, the hdfs fetchdt command accepts the following options:
 * --webservice
 * --renewer
 * --cancel
 * --renew
 * --print
 However, only the --webservice option is documented. 
 http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#fetchdt





[jira] [Commented] (HDFS-8719) Erasure Coding: client generates too many small packets when writing parity data

2015-07-09 Thread Li Bo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620109#comment-14620109
 ] 

Li Bo commented on HDFS-8719:
-

Thanks [~jingzhao] and [~walter.k.su]!


 Erasure Coding: client generates too many small packets when writing parity 
 data
 

 Key: HDFS-8719
 URL: https://issues.apache.org/jira/browse/HDFS-8719
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Fix For: HDFS-7285

 Attachments: HDFS-8719-001.patch, HDFS-8719-HDFS-7285-001.patch, 
 HDFS-8719-HDFS-7285-002.patch, HDFS-8719-HDFS-7285-003.patch


 Typically a packet is about 64K, but when writing parity data, many small 
 packets of 512 bytes are generated. This may slow the write speed and 
 increase the network IO.





[jira] [Updated] (HDFS-8744) Erasure Coding: the number of chunks in packet is not updated when writing parity data

2015-07-09 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-8744:

Attachment: HDFS-8744-HDFS-7285-001.patch

 Erasure Coding: the number of chunks in packet is not updated when writing 
 parity data
 --

 Key: HDFS-8744
 URL: https://issues.apache.org/jira/browse/HDFS-8744
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8744-HDFS-7285-001.patch








[jira] [Assigned] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2015-07-09 Thread jiangyu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jiangyu reassigned HDFS-6833:
-

Assignee: jiangyu  (was: Shinichi Yamashita)

 DirectoryScanner should not register a deleting block with memory of DataNode
 -

 Key: HDFS-6833
 URL: https://issues.apache.org/jira/browse/HDFS-6833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.5.0, 2.5.1
Reporter: Shinichi Yamashita
Assignee: jiangyu
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-6833-10.patch, HDFS-6833-11.patch, 
 HDFS-6833-12.patch, HDFS-6833-13.patch, HDFS-6833-14.patch, 
 HDFS-6833-15.patch, HDFS-6833-16.patch, HDFS-6833-6-2.patch, 
 HDFS-6833-6-3.patch, HDFS-6833-6.patch, HDFS-6833-7-2.patch, 
 HDFS-6833-7.patch, HDFS-6833.8.patch, HDFS-6833.9.patch, HDFS-6833.patch, 
 HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch


 When a block is deleted in DataNode, the following messages are usually 
 output.
 {code}
 2014-08-07 17:53:11,606 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:11,617 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 However, in the current implementation, DirectoryScanner may run while 
 DataNode is deleting the block, and the following messages are output.
 {code}
 2014-08-07 17:53:30,519 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:31,426 INFO 
 org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
 BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
 files:0, missing block files:0, missing blocks in memory:1, mismatched 
 blocks:0
 2014-08-07 17:53:31,426 WARN 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
 missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
   getNumBytes() = 21230663
   getBytesOnDisk()  = 21230663
   getVisibleLength()= 21230663
   getVolume()   = /hadoop/data1/dfs/data/current
   getBlockFile()= 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
   unlinked  =false
 2014-08-07 17:53:31,531 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 The block being deleted is re-registered in DataNode's memory, so when 
 DataNode sends a block report, NameNode receives wrong block information.
 For example, when we recommission a node or change the replication factor, 
 NameNode may delete a valid block as ExcessReplicate because of this 
 problem, and Under-Replicated Blocks and Missing Blocks occur.
 When DataNode runs DirectoryScanner, it should not register a block that is 
 being deleted.
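The proposed behavior can be sketched as follows. This is a hypothetical illustration, not the actual patch (the class and set names are invented): the scanner consults a set of blocks queued for asynchronous deletion before re-adding an on-disk replica to the in-memory map.

```java
import java.util.HashSet;
import java.util.Set;

public class ScanReconciler {
    // Blocks handed to the async disk service but not yet removed from disk.
    private final Set<Long> deletingBlocks = new HashSet<>();
    // Stand-in for the DataNode's in-memory replica map.
    private final Set<Long> inMemoryBlocks = new HashSet<>();

    void scheduleDeletion(long blockId) {
        inMemoryBlocks.remove(blockId);
        deletingBlocks.add(blockId);  // file removal happens asynchronously
    }

    // Called by the scanner when a block file exists on disk but not in memory.
    // Returns true only if the block was actually (re)registered.
    boolean maybeAddMissingBlock(long blockId) {
        if (deletingBlocks.contains(blockId)) {
            return false;  // deletion in flight; do not resurrect the replica
        }
        return inMemoryBlocks.add(blockId);
    }

    public static void main(String[] args) {
        ScanReconciler r = new ScanReconciler();
        r.scheduleDeletion(1073741825L);
        // The scanner still sees the file on disk, but must not re-add it.
        System.out.println(r.maybeAddMissingBlock(1073741825L));
        // A genuinely missing block is still re-added as before.
        System.out.println(r.maybeAddMissingBlock(42L));
    }
}
```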





[jira] [Commented] (HDFS-8744) Erasure Coding: the number of chunks in packet is not updated when writing parity data

2015-07-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620428#comment-14620428
 ] 

Hadoop QA commented on HDFS-8744:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m  4s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 26s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 34s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 38s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 24s | The patch appears to introduce 4 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 15s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 174m  8s | Tests failed in hadoop-hdfs. |
| | | 215m 58s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestClientReportBadBlock |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/1277/HDFS-8744-HDFS-7285-001.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 48f3830 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11646/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11646/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11646/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11646/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11646/console |


This message was automatically generated.

 Erasure Coding: the number of chunks in packet is not updated when writing 
 parity data
 --

 Key: HDFS-8744
 URL: https://issues.apache.org/jira/browse/HDFS-8744
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8744-HDFS-7285-001.patch


 The member {{numChunks}} in {{DFSPacket}} is always zero if the packet 
 contains parity data, so calling {{getNumChunks}} may cause errors.





[jira] [Commented] (HDFS-2956) calling fetchdt without a --renewer argument throws NPE

2015-07-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620346#comment-14620346
 ] 

Hadoop QA commented on HDFS-2956:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m 14s | Findbugs (version ) appears to 
be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 35s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 37s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 32s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   2m 36s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  3s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 161m 49s | Tests failed in hadoop-hdfs. |
| | | 202m 59s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.TestAppendSnapshotTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/1271/HDFS-2956.03.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 63d0365 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11645/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11645/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11645/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11645/console |


This message was automatically generated.

 calling fetchdt without a --renewer argument throws NPE
 ---

 Key: HDFS-2956
 URL: https://issues.apache.org/jira/browse/HDFS-2956
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Todd Lipcon
Assignee: Vinayakumar B
  Labels: BB2015-05-TBR
 Attachments: HDFS-2956-01.patch, HDFS-2956-02.patch, 
 HDFS-2956.03.patch, HDFS-2956.branch-2.03.patch, HDFS-2956.patch


 If I call bin/hdfs fetchdt /tmp/mytoken without a --renewer foo argument, 
 then it will throw a NullPointerException:
 Exception in thread main java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:830)
 this is because getDelegationToken is being called with a null renewer





[jira] [Updated] (HDFS-8742) Inotify: Support event for OP_TRUNCATE

2015-07-09 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-8742:
-
Fix Version/s: (was: 2.6.0)

 Inotify: Support event for OP_TRUNCATE
 --

 Key: HDFS-8742
 URL: https://issues.apache.org/jira/browse/HDFS-8742
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.7.0
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore

 Currently inotify does not emit any event for the Truncate operation. NN 
 should send an event for Truncate.





[jira] [Commented] (HDFS-8712) Remove public and abstract modifiers in FsVolumeSpi and FsDatasetSpi

2015-07-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620356#comment-14620356
 ] 

Hudson commented on HDFS-8712:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #981 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/981/])
HDFS-8712. Remove 'public' and 'abstract' modifiers in FsVolumeSpi and 
FsDatasetSpi (Contributed by Lei (Eddy) Xu) (vinayakumarb: rev 
bd4e10900cc53a2768c31cc29fdb3698684bc2a0)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java


 Remove public and abstract modifiers in FsVolumeSpi and FsDatasetSpi
 

 Key: HDFS-8712
 URL: https://issues.apache.org/jira/browse/HDFS-8712
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8712.000.patch, HDFS-8712.001.patch


 In [Java Language Specification 
 9.4|http://docs.oracle.com/javase/specs/jls/se7/html/jls-9.html#jls-9.4]:
 bq. It is permitted, but discouraged as a matter of style, to redundantly 
 specify the public and/or abstract modifier for a method declared in an 
 interface.
 {{FsDatasetSpi}} and {{FsVolumeSpi}} mark methods as public, which causes 
 many warnings in IDEs and {{checkstyle}}.
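The JLS point quoted above is easy to demonstrate. A minimal, self-contained example (interface and method names are illustrative, not taken from the patch): interface methods are implicitly public and abstract, so the extra modifiers change nothing except triggering style warnings.

```java
public class ModifierDemo {
    interface Verbose {
        public abstract long getCapacity();  // redundant modifiers, flagged by checkstyle
    }

    interface Clean {
        long getCapacity();                  // identical semantics, no warnings
    }

    public static void main(String[] args) {
        // Both interfaces behave the same; only the declaration style differs.
        Clean c = () -> 100L;
        System.out.println(c.getCapacity());
    }
}
```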





[jira] [Commented] (HDFS-8642) Make TestFileTruncate more reliable

2015-07-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620355#comment-14620355
 ] 

Hudson commented on HDFS-8642:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #981 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/981/])
HDFS-8642. Make TestFileTruncate more reliable. (Contributed by Rakesh R) (arp: 
rev 4119ad3112dcfb7286ca68288489bbcb6235cf53)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java


 Make TestFileTruncate more reliable
 ---

 Key: HDFS-8642
 URL: https://issues.apache.org/jira/browse/HDFS-8642
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8642-00.patch, HDFS-8642-01.patch, 
 HDFS-8642-02.patch


 I've observed that the {{TestFileTruncate#setup()}} function has to be 
 improved by making it more independent. Presently, any snapshot-related test 
 failure will affect all the subsequent unit test cases. One such error has 
 been observed in the 
 [Hadoop-Hdfs-trunk-2163|https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart]
 {code}
 https://builds.apache.org/job/Hadoop-Hdfs-trunk/2163/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestFileTruncate/testTruncateWithDataNodesRestart/
 org.apache.hadoop.ipc.RemoteException: The directory /test cannot be deleted 
 since /test is snapshottable and already has snapshots
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.checkSnapshot(FSDirSnapshotOp.java:226)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:54)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.deleteInternal(FSDirDeleteOp.java:177)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:104)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3046)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:939)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:608)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2172)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2168)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2166)
   at org.apache.hadoop.ipc.Client.call(Client.java:1440)
   at org.apache.hadoop.ipc.Client.call(Client.java:1371)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
   at com.sun.proxy.$Proxy22.delete(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540)
   at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
   at com.sun.proxy.$Proxy23.delete(Unknown Source)
   at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1711)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:718)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setup(TestFileTruncate.java:119)
 {code}





[jira] [Commented] (HDFS-8726) Move protobuf files that define the client-server protocols to hdfs-client

2015-07-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620351#comment-14620351
 ] 

Hudson commented on HDFS-8726:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #981 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/981/])
HDFS-8726. Move protobuf files that define the client-server protocols to 
hdfs-client. Contributed by Haohui Mai. (wheat9: rev 
fc6182d5ed92ac70de1f4633edd5265b7be1a8dc)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/inotify.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/datatransfer.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/encryption.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/encryption.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/xattr.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/acl.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientDatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/xattr.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/acl.proto
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/editlog.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/inotify.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientDatanodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto


 Move protobuf files that define the client-server protocols to hdfs-client
 -

 Key: HDFS-8726
 URL: https://issues.apache.org/jira/browse/HDFS-8726
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.8.0

 Attachments: HDFS-8726.000.patch, HDFS-8726.001.patch


 The protobuf files that define the RPC protocols between the HDFS clients 
 and servers currently sit in the hdfs package. They should be moved to the 
 hdfs-client package.





[jira] [Updated] (HDFS-2956) calling fetchdt without a --renewer argument throws NPE

2015-07-09 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-2956:

Attachment: HDFS-2956.branch-2.03.patch

Attaching the branch-2 patch.
Since DelegationTokenFetcher differs b/w trunk and branch-2, test also needs 
update in branch-2.
cherry-pick will not work.


 calling fetchdt without a --renewer argument throws NPE
 ---

 Key: HDFS-2956
 URL: https://issues.apache.org/jira/browse/HDFS-2956
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Todd Lipcon
Assignee: Vinayakumar B
  Labels: BB2015-05-TBR
 Attachments: HDFS-2956-01.patch, HDFS-2956-02.patch, 
 HDFS-2956.03.patch, HDFS-2956.branch-2.03.patch, HDFS-2956.patch


 If I call bin/hdfs fetchdt /tmp/mytoken without a --renewer foo argument, 
 then it will throw a NullPointerException:
 Exception in thread main java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:830)
 this is because getDelegationToken is being called with a null renewer





[jira] [Commented] (HDFS-8729) Fix testTruncateWithDataNodesRestartImmediately occasionally failed

2015-07-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14620219#comment-14620219
 ] 

Hadoop QA commented on HDFS-8729:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |   6m  0s | Findbugs (version ) appears to 
be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 59s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 19s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 36s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   2m 36s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m  6s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 159m 20s | Tests failed in hadoop-hdfs. |
| | | 180m  4s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Timed out tests | 
org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12744426/HDFS-8729.02.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 63d0365 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11643/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11643/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11643/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11643/console |


This message was automatically generated.

 Fix testTruncateWithDataNodesRestartImmediately occasionally failed
 ---

 Key: HDFS-8729
 URL: https://issues.apache.org/jira/browse/HDFS-8729
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Walter Su
Assignee: Walter Su
Priority: Minor
 Attachments: HDFS-8729.01.patch, HDFS-8729.02.patch


 https://builds.apache.org/job/PreCommit-HDFS-Build/11449/testReport/
 https://builds.apache.org/job/PreCommit-HDFS-Build/11593/testReport/
 https://builds.apache.org/job/PreCommit-HDFS-Build/11596/testReport/
 https://builds.apache.org/job/PreCommit-HDFS-Build/11599/testReport/
 {noformat}
 java.util.concurrent.TimeoutException: Timed out waiting for 
 /test/testTruncateWithDataNodesRestartImmediately to reach 3 replicas
   at 
 org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:761)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateWithDataNodesRestartImmediately(TestFileTruncate.java:814)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8729) Fix testTruncateWithDataNodesRestartImmediately occasionally failed

2015-07-09 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620257#comment-14620257
 ] 

Walter Su commented on HDFS-8729:
-

Findbugs has been broken recently. The timed-out test is not related.

 Fix testTruncateWithDataNodesRestartImmediately occasionally failed
 ---

 Key: HDFS-8729
 URL: https://issues.apache.org/jira/browse/HDFS-8729
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Walter Su
Assignee: Walter Su
Priority: Minor
 Attachments: HDFS-8729.01.patch, HDFS-8729.02.patch


 https://builds.apache.org/job/PreCommit-HDFS-Build/11449/testReport/
 https://builds.apache.org/job/PreCommit-HDFS-Build/11593/testReport/
 https://builds.apache.org/job/PreCommit-HDFS-Build/11596/testReport/
 https://builds.apache.org/job/PreCommit-HDFS-Build/11599/testReport/
 {noformat}
 java.util.concurrent.TimeoutException: Timed out waiting for 
 /test/testTruncateWithDataNodesRestartImmediately to reach 3 replicas
   at 
 org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:761)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateWithDataNodesRestartImmediately(TestFileTruncate.java:814)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-2956) calling fetchdt without a --renewer argument throws NPE

2015-07-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620451#comment-14620451
 ] 

Hadoop QA commented on HDFS-2956:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m 30s | Findbugs (version ) appears to 
be broken on branch-2. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   6m  2s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 39s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 21s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  5s | The applied patch generated  
40 new checkstyle issues (total was 0, now 40). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 11s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 31s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   2m 35s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m 22s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 155m 58s | Tests passed in hadoop-hdfs. 
|
| | | 194m 19s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12744457/HDFS-2956.branch-2.03.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | branch-2 / d17a7bb |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11647/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11647/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11647/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11647/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11647/console |


This message was automatically generated.

 calling fetchdt without a --renewer argument throws NPE
 ---

 Key: HDFS-2956
 URL: https://issues.apache.org/jira/browse/HDFS-2956
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Todd Lipcon
Assignee: Vinayakumar B
  Labels: BB2015-05-TBR
 Attachments: HDFS-2956-01.patch, HDFS-2956-02.patch, 
 HDFS-2956.03.patch, HDFS-2956.branch-2.03.patch, HDFS-2956.patch


 If I call bin/hdfs fetchdt /tmp/mytoken without a --renewer foo argument, 
 then it will throw a NullPointerException:
 Exception in thread "main" java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:830)
 This is because getDelegationToken is being called with a null renewer.
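The failure mode can be sketched in a small, self-contained example (plain Java, not the actual Hadoop classes; the method names below are stand-ins for illustration): defaulting a missing renewer to an empty string on the client side avoids passing null into the RPC layer.

```java
public class FetchdtSketch {
    // Stand-in for the RPC call; the real path goes through
    // ClientNamenodeProtocolTranslatorPB.getDelegationToken(renewer),
    // which dereferences the renewer and throws NPE when it is null.
    static String getDelegationToken(String renewer) {
        if (renewer == null) {
            throw new NullPointerException("renewer");
        }
        return "token-for-" + renewer;
    }

    // Client-side guard: default a missing --renewer argument to "".
    static String fetchToken(String renewerArg) {
        String renewer = (renewerArg == null) ? "" : renewerArg;
        return getDelegationToken(renewer);
    }

    public static void main(String[] args) {
        System.out.println(fetchToken(null));  // no NPE with the guard
        System.out.println(fetchToken("foo"));
    }
}
```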



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8728) Erasure coding: revisit and simplify BlockInfoStriped and INodeFile

2015-07-09 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620505#comment-14620505
 ] 

Walter Su commented on HDFS-8728:
-

At first look, items 2, 3, and 4 look good. However, FSEditLogLoader.java has lost pieces of code 
from HDFS-7285. I will take a deeper look later.

 Erasure coding: revisit and simplify BlockInfoStriped and INodeFile
 ---

 Key: HDFS-8728
 URL: https://issues.apache.org/jira/browse/HDFS-8728
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8728.00.patch, HDFS-8728.01.patch, 
 HDFS-8728.02.patch, Merge-1-codec.patch, Merge-2-ecZones.patch, 
 Merge-3-blockInfo.patch, Merge-4-blockmanagement.patch, 
 Merge-5-blockPlacementPolicies.patch, Merge-6-locatedStripedBlock.patch, 
 Merge-7-replicationMonitor.patch, Merge-8-inodeFile.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-2554) Add separate metrics for missing blocks with desired replication level 1

2015-07-09 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-2554.
---
Resolution: Duplicate

I think we handled this over in HDFS-7165, which has been committed.

 Add separate metrics for missing blocks with desired replication level 1
 

 Key: HDFS-2554
 URL: https://issues.apache.org/jira/browse/HDFS-2554
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.0.0-alpha
Reporter: Todd Lipcon
Assignee: Andy Isaacson
Priority: Minor
 Attachments: hdfs-2554-1.txt, hdfs-2554.txt


 Some users use replication level set to 1 for datasets which are unimportant 
 and can be lost with no worry (eg the output of terasort tests). But other 
 data on the cluster is important and should not be lost. It would be useful 
 to separate the metric for missing blocks by the desired replication level of 
 those blocks, so that one could ignore missing blocks at repl 1 while still 
 alerting on missing blocks with higher desired replication.
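A minimal sketch of the requested split (hypothetical names; the committed implementation in HDFS-7165 may differ): count missing blocks with desired replication 1 separately from the rest, so monitoring can alert on only the latter.

```java
public class MissingBlockMetricsSketch {
    // Input: the desired replication factor of each missing block.
    // Output: { missing blocks at repl 1, missing blocks at repl > 1 }.
    static long[] countMissing(int[] desiredReplications) {
        long replOne = 0, replHigher = 0;
        for (int r : desiredReplications) {
            if (r == 1) {
                replOne++;       // expendable data (e.g. terasort output)
            } else {
                replHigher++;    // worth alerting on
            }
        }
        return new long[] { replOne, replHigher };
    }

    public static void main(String[] args) {
        long[] c = countMissing(new int[] {1, 3, 3, 1, 2});
        System.out.println("missingReplOne=" + c[0] + " missingOther=" + c[1]);
    }
}
```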



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8484) Erasure coding: Two contiguous blocks occupy IDs belong to same striped group

2015-07-09 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14621212#comment-14621212
 ] 

Jing Zhao commented on HDFS-8484:
-

Good catch! +1. I will commit it shortly.

 Erasure coding: Two contiguous blocks occupy IDs belong to same striped group
 -

 Key: HDFS-8484
 URL: https://issues.apache.org/jira/browse/HDFS-8484
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su
Priority: Trivial
 Attachments: HDFS-8484-HDFS-7285.001.patch


 There's only a very small chance of this happening:
 Assume \[-1016,-1001\] is a block group ID range.
 A contiguous block has ID -1016, and another contiguous block has ID 
 -1009.
 When we want to get block -1009, we actually get -1016.
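The collision can be illustrated with a self-contained sketch (a 16-aligned group base is used here for clarity; the exact HDFS ID scheme may differ from this simplification): if group lookup masks off the low 4 bits of a block ID, then any contiguous block whose ID happens to fall inside a striped group's range resolves to the group ID instead of itself.

```java
public class BlockIdCollisionSketch {
    // Assumed grouping scheme for illustration: the low 4 bits index the
    // block within a striped group, so lookup clears them.
    static final long GROUP_MASK = ~0xFL;

    static long resolveGroupId(long blockId) {
        return blockId & GROUP_MASK;
    }

    public static void main(String[] args) {
        // Group base -1024 covers IDs -1024..-1009. A contiguous block
        // that was assigned ID -1010 falls in the range, so resolving it
        // yields -1024 -- the wrong block.
        System.out.println(resolveGroupId(-1010L)); // -1024
        System.out.println(resolveGroupId(-1024L)); // -1024
    }
}
```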



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8484) Erasure coding: Two contiguous blocks occupy IDs belong to same striped group

2015-07-09 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao resolved HDFS-8484.
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7285

I've committed this to the feature branch. Thanks Walter for the contribution!

BTW, it would be good to have a unit test for this. Could you please add one in a 
new JIRA, or include it in another JIRA when you have a chance?

 Erasure coding: Two contiguous blocks occupy IDs belong to same striped group
 -

 Key: HDFS-8484
 URL: https://issues.apache.org/jira/browse/HDFS-8484
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su
Priority: Trivial
 Fix For: HDFS-7285

 Attachments: HDFS-8484-HDFS-7285.001.patch


 There's only a very small chance of this happening:
 Assume \[-1016,-1001\] is a block group ID range.
 A contiguous block has ID -1016, and another contiguous block has ID 
 -1009.
 When we want to get block -1009, we actually get -1016.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8679) Move DatasetSpi to new package

2015-07-09 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8679:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: HDFS-7240
Target Version/s:   (was: HDFS-7240)
  Status: Resolved  (was: Patch Available)

Thanks for the now-binding review, [~anu]!

I committed it to the feature branch with the following delta over the .02 
patch to clarify that the caller must synchronize if necessary.

{code}
- * Create a new dataset object for a specific service type
+ * Create a new dataset object for a specific service type.
+ * The caller must perform synchronization, if required.
{code}

 Move DatasetSpi to new package
 --

 Key: HDFS-8679
 URL: https://issues.apache.org/jira/browse/HDFS-8679
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: HDFS-7240

 Attachments: HDFS-8679-HDFS-7240.01.patch, 
 HDFS-8679-HDFS-7240.02.patch


 The DatasetSpi and VolumeSpi interfaces are currently in 
 {{org.apache.hadoop.hdfs.server.datanode.fsdataset}}. They can be moved to a 
 new package {{org.apache.hadoop.hdfs.server.datanode.dataset}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8745) Use Doxygen to generate documents for libhdfspp

2015-07-09 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14621236#comment-14621236
 ] 

Haohui Mai commented on HDFS-8745:
--

This is for the native C++ code, where the documentation needs to be maintained 
alongside the code.

 Use Doxygen to generate documents for libhdfspp
 ---

 Key: HDFS-8745
 URL: https://issues.apache.org/jira/browse/HDFS-8745
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor

 This jira proposes to add Doxygen hooks to generate documentation for the 
 library.
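As a rough illustration (not the committed configuration; paths and values here are assumptions), a minimal Doxyfile for the library might look like:

{noformat}
# Minimal Doxyfile sketch for libhdfspp (illustrative values only)
PROJECT_NAME           = "libhdfspp"
INPUT                  = hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfspp
RECURSIVE              = YES
EXTRACT_ALL            = YES
GENERATE_HTML          = YES
GENERATE_LATEX         = NO
{noformat}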



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8745) Use Doxygen to generate documents for libhdfspp

2015-07-09 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8745:
-
Attachment: HDFS-8745.000.patch

 Use Doxygen to generate documents for libhdfspp
 ---

 Key: HDFS-8745
 URL: https://issues.apache.org/jira/browse/HDFS-8745
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor
 Attachments: HDFS-8745.000.patch


 This jira proposes to add Doxygen hooks to generate documentation for the 
 library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8729) Fix testTruncateWithDataNodesRestartImmediately occasionally failed

2015-07-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14621250#comment-14621250
 ] 

Hudson commented on HDFS-8729:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8142 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8142/])
HDFS-8729. Fix TestFileTruncate#testTruncateWithDataNodesRestartImmediately 
which occasionally failed. Contributed by Walter Su. (jing9: rev 
f4ca530c1cc9ece25c5ef01f99a94eb9e678e890)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Fix testTruncateWithDataNodesRestartImmediately occasionally failed
 ---

 Key: HDFS-8729
 URL: https://issues.apache.org/jira/browse/HDFS-8729
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Walter Su
Assignee: Walter Su
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8729.01.patch, HDFS-8729.02.patch


 https://builds.apache.org/job/PreCommit-HDFS-Build/11449/testReport/
 https://builds.apache.org/job/PreCommit-HDFS-Build/11593/testReport/
 https://builds.apache.org/job/PreCommit-HDFS-Build/11596/testReport/
 https://builds.apache.org/job/PreCommit-HDFS-Build/11599/testReport/
 {noformat}
 java.util.concurrent.TimeoutException: Timed out waiting for 
 /test/testTruncateWithDataNodesRestartImmediately to reach 3 replicas
   at 
 org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:761)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateWithDataNodesRestartImmediately(TestFileTruncate.java:814)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8746) Reduce the latency of streaming reads by re-using DN connections

2015-07-09 Thread Bob Hansen (JIRA)
Bob Hansen created HDFS-8746:


 Summary: Reduce the latency of streaming reads by re-using DN 
connections
 Key: HDFS-8746
 URL: https://issues.apache.org/jira/browse/HDFS-8746
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Bob Hansen
Assignee: Bob Hansen


The current libhdfspp implementation opens a new connection for each pread.  
For streaming reads (especially streaming short-buffer reads coming from the C 
API, and especially once we get SSL handshake overhead), our throughput will be 
dominated by the connection latency of reconnecting to the DataNodes.

The target use case is a multi-block file that is being sequentially streamed 
and processed by the client application, which consumes the data as it comes 
from the DN and throws it away.  The data is read into moderately small buffers 
(~64k - ~1MB) owned by the consumer, and overall throughput is the critical 
metric.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8729) Fix testTruncateWithDataNodesRestartImmediately occasionally failed

2015-07-09 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-8729:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed this to trunk and branch-2. Thanks Walter for the contribution!

 Fix testTruncateWithDataNodesRestartImmediately occasionally failed
 ---

 Key: HDFS-8729
 URL: https://issues.apache.org/jira/browse/HDFS-8729
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Walter Su
Assignee: Walter Su
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8729.01.patch, HDFS-8729.02.patch


 https://builds.apache.org/job/PreCommit-HDFS-Build/11449/testReport/
 https://builds.apache.org/job/PreCommit-HDFS-Build/11593/testReport/
 https://builds.apache.org/job/PreCommit-HDFS-Build/11596/testReport/
 https://builds.apache.org/job/PreCommit-HDFS-Build/11599/testReport/
 {noformat}
 java.util.concurrent.TimeoutException: Timed out waiting for 
 /test/testTruncateWithDataNodesRestartImmediately to reach 3 replicas
   at 
 org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:761)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateWithDataNodesRestartImmediately(TestFileTruncate.java:814)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

