[jira] [Updated] (HDFS-7496) Fix FsVolume removal race conditions on the DataNode

2015-01-13 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-7496:

Attachment: HDFS-7496.003.patch

Hi [~cmccabe], thanks for the reviews.

The original purpose of embedding {{FsVolumeReference}} in 
{{ReplicaInPipeline}} was that the volume reference is obtained in 
{{FsVolumeList#getNextVolume}}, and this reference object needs to be passed 
on to {{BlockReceiver}}.

In this updated patch, I added a new class 
{{ReplicaInPipelineWithVolumeReference}} to pass the reference object along 
with the {{replicaInfo}}.
{code}
public class ReplicaInPipelineWithVolumeReference {
  private final ReplicaInPipelineInterface replica;
  private final FsVolumeReference volumeReference;
  // ... constructor and accessors omitted ...
}
{code}

This way {{BlockReceiver}} can claim ownership of the volume reference object, 
while the size of {{replicaInfo}} is not changed.
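
As a rough sketch of the intended ownership handoff (accessor and method names 
below are illustrative assumptions, not taken from the patch):

{code}
// Hypothetical usage sketch, not the literal patch code.
void receiveBlock(ReplicaInPipelineWithVolumeReference info) throws IOException {
  ReplicaInPipelineInterface replica = info.getReplica();   // assumed accessor
  FsVolumeReference ref = info.getVolumeReference();        // assumed accessor
  try {
    // write the block to the replica; holding the reference keeps the
    // volume from being removed while the write is in flight
  } finally {
    ref.close();  // release the reference so the volume can be removed
  }
}
{code}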

bq. We don't want to keep volumes from being removed just because a ReplicaInfo 
exists somewhere in memory. 

That should not happen: in {{FsDatasetImpl#removeVolumes}}, after removing 
the volumes, the {{ReplicaInfo}} objects on those volumes are removed as well.
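
A simplified sketch of that cleanup (method names are approximate rather than 
quoted from the patch):

{code}
// Collect then remove, so the in-memory map is not mutated mid-iteration.
for (String bpid : volumeMap.getBlockPoolList()) {
  List<ReplicaInfo> toRemove = new ArrayList<>();
  for (ReplicaInfo replica : volumeMap.replicas(bpid)) {
    if (replica.getVolume() == removedVolume) {
      toRemove.add(replica);
    }
  }
  for (ReplicaInfo replica : toRemove) {
    // Drop the entry so no stale ReplicaInfo pins the removed volume.
    volumeMap.remove(bpid, replica.getBlockId());
  }
}
{code}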

bq. I think ReplicaInfo objects should just contain the unique storageID of a 
volume.

{{ReplicaInfo}} already has a pointer, {{ReplicaInfo#volume}}, to the volume 
object. Also, {{ReplicaInfo}} should be cleaned up when the volume is removed, 
so it might not need to hold a {{storageId}}.

Could you take another look?

 Fix FsVolume removal race conditions on the DataNode 
 -

 Key: HDFS-7496
 URL: https://issues.apache.org/jira/browse/HDFS-7496
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Lei (Eddy) Xu
 Attachments: HDFS-7496.000.patch, HDFS-7496.001.patch, 
 HDFS-7496.002.patch, HDFS-7496.003.patch


 We discussed a few FsVolume removal race conditions on the DataNode in 
 HDFS-7489.  We should figure out a way to make removing an FsVolume safe.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3689) Add support for variable length block

2015-01-13 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-3689:

Attachment: HDFS-3689.003.patch

 Add support for variable length block
 -

 Key: HDFS-3689
 URL: https://issues.apache.org/jira/browse/HDFS-3689
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Affects Versions: 3.0.0
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch, 
 HDFS-3689.002.patch, HDFS-3689.003.patch, HDFS-3689.003.patch


 Currently HDFS supports fixed length blocks. Supporting variable length block 
 will allow new use cases and features to be built on top of HDFS. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7189) Add trace spans for DFSClient metadata operations

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276445#comment-14276445
 ] 

Hadoop QA commented on HDFS-7189:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692143/HDFS-7189.004.patch
  against trunk revision f92e503.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9206//console

This message is automatically generated.

 Add trace spans for DFSClient metadata operations
 -

 Key: HDFS-7189
 URL: https://issues.apache.org/jira/browse/HDFS-7189
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7189.001.patch, HDFS-7189.003.patch, 
 HDFS-7189.004.patch


 We should add trace spans for DFSClient metadata operations.  For example, 
 {{DFSClient#rename}} should have a trace span, etc. etc.
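
 For reference, a minimal sketch of the span pattern this implies (using the 
 htrace 3.x style API of that era; a sketch, not the attached patch):
 {code}
 // Wrap a DFSClient metadata call in a trace span.
 TraceScope scope = Trace.startSpan("rename", Sampler.ALWAYS);
 try {
   namenode.rename(src, dst);   // the traced metadata operation
 } finally {
   scope.close();               // ends the span
 }
 {code}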



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7496) Fix FsVolume removal race conditions on the DataNode

2015-01-13 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-7496:

Attachment: HDFS-7496.003.patch

Rebase to trunk and trigger another run of tests.

 Fix FsVolume removal race conditions on the DataNode 
 -

 Key: HDFS-7496
 URL: https://issues.apache.org/jira/browse/HDFS-7496
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Lei (Eddy) Xu
 Attachments: HDFS-7496.000.patch, HDFS-7496.001.patch, 
 HDFS-7496.002.patch, HDFS-7496.003.patch, HDFS-7496.003.patch


 We discussed a few FsVolume removal race conditions on the DataNode in 
 HDFS-7489.  We should figure out a way to make removing an FsVolume safe.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3689) Add support for variable length block

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276416#comment-14276416
 ] 

Hadoop QA commented on HDFS-3689:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692131/HDFS-3689.003.patch
  against trunk revision f92e503.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9204//console

This message is automatically generated.

 Add support for variable length block
 -

 Key: HDFS-3689
 URL: https://issues.apache.org/jira/browse/HDFS-3689
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Affects Versions: 3.0.0
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch, 
 HDFS-3689.002.patch, HDFS-3689.003.patch


 Currently HDFS supports fixed length blocks. Supporting variable length block 
 will allow new use cases and features to be built on top of HDFS. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7189) Add trace spans for DFSClient metadata operations

2015-01-13 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7189:
---
Attachment: HDFS-7189.004.patch

 Add trace spans for DFSClient metadata operations
 -

 Key: HDFS-7189
 URL: https://issues.apache.org/jira/browse/HDFS-7189
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7189.001.patch, HDFS-7189.003.patch, 
 HDFS-7189.004.patch


 We should add trace spans for DFSClient metadata operations.  For example, 
 {{DFSClient#rename}} should have a trace span, etc. etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7189) Add trace spans for DFSClient metadata operations

2015-01-13 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276440#comment-14276440
 ] 

Colin Patrick McCabe commented on HDFS-7189:


rebased.

 Add trace spans for DFSClient metadata operations
 -

 Key: HDFS-7189
 URL: https://issues.apache.org/jira/browse/HDFS-7189
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7189.001.patch, HDFS-7189.003.patch, 
 HDFS-7189.004.patch


 We should add trace spans for DFSClient metadata operations.  For example, 
 {{DFSClient#rename}} should have a trace span, etc. etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7496) Fix FsVolume removal race conditions on the DataNode

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276468#comment-14276468
 ] 

Hadoop QA commented on HDFS-7496:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692153/HDFS-7496.003.patch
  against trunk revision f92e503.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9207//console

This message is automatically generated.

 Fix FsVolume removal race conditions on the DataNode 
 -

 Key: HDFS-7496
 URL: https://issues.apache.org/jira/browse/HDFS-7496
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Lei (Eddy) Xu
 Attachments: HDFS-7496.000.patch, HDFS-7496.001.patch, 
 HDFS-7496.002.patch, HDFS-7496.003.patch, HDFS-7496.003.patch


 We discussed a few FsVolume removal race conditions on the DataNode in 
 HDFS-7489.  We should figure out a way to make removing an FsVolume safe.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-13 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-3443:

Attachment: HDFS-3443-003.patch

Attached the rebased patch.
It includes Amith's work, along with moving the editLogTailer initialization to 
the constructor.
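
A simplified sketch of why moving the initialization closes the race 
(illustrative shapes only, not the literal patch):

{code}
// Before (sketch): the tailer was created only once standby services
// started, so a fast transitionToActive() could still observe a null field.
void startStandbyServices(Configuration conf) {
  editLogTailer = new EditLogTailer(this, conf);
}

// After (sketch): creating the tailer during FSNamesystem construction
// guarantees it is non-null before any HA transition can be requested.
FSNamesystem(Configuration conf, FSImage fsImage) throws IOException {
  // ... existing construction ...
  this.editLogTailer = new EditLogTailer(this, conf);
}
{code}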

 Unable to catch up edits during standby to active switch due to NPE
 ---

 Key: HDFS-3443
 URL: https://issues.apache.org/jira/browse/HDFS-3443
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover
Reporter: suja s
Assignee: amith
 Attachments: HDFS-3443-003.patch, HDFS-3443_1.patch, HDFS-3443_1.patch


 Start NN
 Let NN standby services be started.
 Before the editLogTailer is initialised, start ZKFC and allow the 
 active services start to proceed further.
 Here editLogTailer.catchupDuringFailover() will throw NPE.
 {code}
 void startActiveServices() throws IOException {
   LOG.info("Starting services required for active state");
   writeLock();
   try {
     FSEditLog editLog = dir.fsImage.getEditLog();

     if (!editLog.isOpenForWrite()) {
       // During startup, we're already open for write during initialization.
       editLog.initJournalsForWrite();
       // May need to recover
       editLog.recoverUnclosedStreams();

       LOG.info("Catching up to latest edits from old active before " +
           "taking over writer role in edits logs.");
       editLogTailer.catchupDuringFailover();
 {code}
 {noformat}
 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
 Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
 XX.XX.XX.55:58003: output error
 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
 from XX.XX.XX.55:58004: error: java.lang.NullPointerException
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
   at 
 org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
   at 
 org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 9 on 8020 caught an exception
 java.nio.channels.ClosedChannelException
   at 
 sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
   at 
 org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-13 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276510#comment-14276510
 ] 

Vinayakumar B commented on HDFS-3443:
-

bq. How about adding a boolean for indicating namenode starting up so that 
NameNodeRpcServer could refuse all operations?
The option is good. Currently I think only the transition RPCs will have a 
problem, and that should be fine after this patch.
All remaining requests will anyway be rejected, since the initial state 
will be STANDBY.

Am I right?

 Unable to catch up edits during standby to active switch due to NPE
 ---

 Key: HDFS-3443
 URL: https://issues.apache.org/jira/browse/HDFS-3443
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover
Reporter: suja s
Assignee: amith
 Attachments: HDFS-3443-003.patch, HDFS-3443_1.patch, HDFS-3443_1.patch


 Start NN
 Let NN standby services be started.
 Before the editLogTailer is initialised, start ZKFC and allow the 
 active services start to proceed further.
 Here editLogTailer.catchupDuringFailover() will throw NPE.
 {code}
 void startActiveServices() throws IOException {
   LOG.info("Starting services required for active state");
   writeLock();
   try {
     FSEditLog editLog = dir.fsImage.getEditLog();

     if (!editLog.isOpenForWrite()) {
       // During startup, we're already open for write during initialization.
       editLog.initJournalsForWrite();
       // May need to recover
       editLog.recoverUnclosedStreams();

       LOG.info("Catching up to latest edits from old active before " +
           "taking over writer role in edits logs.");
       editLogTailer.catchupDuringFailover();
 {code}
 {noformat}
 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
 Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
 XX.XX.XX.55:58003: output error
 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
 from XX.XX.XX.55:58004: error: java.lang.NullPointerException
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
   at 
 org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
   at 
 org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 9 on 8020 caught an exception
 java.nio.channels.ClosedChannelException
   at 
 sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
   at 
 org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-13 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276510#comment-14276510
 ] 

Vinayakumar B edited comment on HDFS-3443 at 1/14/15 5:37 AM:
--

bq. How about adding a boolean for indicating namenode starting up so that 
NameNodeRpcServer could refuse all operations?
The option is good. Currently I think only the transition RPCs will have a 
problem, and that should be fine after this patch.
All remaining requests will anyway be rejected, since the initial state 
will be STANDBY.

Am I right?


was (Author: vinayrpet):
bq, How about adding a boolean for indicating namenode starting up so that 
NameNodeRpcServer could refuse all operations?
Option is good. currently I think only transition RPCs will have problem, which 
should be after this patch.
Remaining all requests will anyway will be rejected since the initial state 
will be STANDBY.

am I right?

 Unable to catch up edits during standby to active switch due to NPE
 ---

 Key: HDFS-3443
 URL: https://issues.apache.org/jira/browse/HDFS-3443
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover
Reporter: suja s
Assignee: amith
 Attachments: HDFS-3443-003.patch, HDFS-3443_1.patch, HDFS-3443_1.patch


 Start NN
 Let NN standby services be started.
 Before the editLogTailer is initialised, start ZKFC and allow the 
 active services start to proceed further.
 Here editLogTailer.catchupDuringFailover() will throw NPE.
 {code}
 void startActiveServices() throws IOException {
   LOG.info("Starting services required for active state");
   writeLock();
   try {
     FSEditLog editLog = dir.fsImage.getEditLog();

     if (!editLog.isOpenForWrite()) {
       // During startup, we're already open for write during initialization.
       editLog.initJournalsForWrite();
       // May need to recover
       editLog.recoverUnclosedStreams();

       LOG.info("Catching up to latest edits from old active before " +
           "taking over writer role in edits logs.");
       editLogTailer.catchupDuringFailover();
 {code}
 {noformat}
 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
 Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
 XX.XX.XX.55:58003: output error
 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
 from XX.XX.XX.55:58004: error: java.lang.NullPointerException
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
   at 
 org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
   at 
 org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 9 on 8020 caught an exception
 java.nio.channels.ClosedChannelException
   at 
 sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
   at 
 org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
   at 

[jira] [Updated] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2015-01-13 Thread Ayappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayappan updated HDFS-4681:
--
Target Version/s: 2.7.0

 TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
 using IBM java
 -

 Key: HDFS-4681
 URL: https://issues.apache.org/jira/browse/HDFS-4681
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.5.2
Reporter: Tian Hong Wang
Assignee: Suresh Srinivas
  Labels: patch
 Attachments: HDFS-4681-v1.patch, HDFS-4681.patch


 TestBlocksWithNotEnoughRacks unit test fails with the following error message:
 
 testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
   Time elapsed: 8997 sec   FAILURE!
 org.junit.ComparisonFailure: Corrupt replica 
 expected:...��^EI�u�[�{���[$�\hF�[�R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�02���:)$�{|�^@�-���|GvW��7g
  �/M��[U!eF�^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^Oc���   
 oKtE�*�^\3u��]Ē:mŭ^^y�^H��_^T�^ZS4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
 ��6b�S�G�^?��m4FW#^@
 D5��}�^Z�^]���mfR^G#T-�N��̋�p���`�~��`�^F;�^C] but 
 was:...��^EI�u�[�{���[$�\hF�[R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�02�:)$�{|�^@�-���|GvW��7g
  �/M�[U!eF�^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^Oc���  
 oKtE�*�^\3u��]Ē:mŭ^^y���^H��_^T�^ZS���4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
��6b�S�G�^?��m4FW#^@
 D5��}�^Z�^]���mfR^G#T-�N�̋�p���`�~��`�^F;�]
 at org.junit.Assert.assertEquals(Assert.java:123)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7589) Break the dependency between libnative_mini_dfs and libhdfs

2015-01-13 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275602#comment-14275602
 ] 

Chris Nauroth commented on HDFS-7589:
-

[~wangzw], this is done.  Branch HDFS-6994 is up to date with trunk as of 
commit hash 08ac06283a3e9bf0d49d873823aabd419b08e41f.

 Break the dependency between libnative_mini_dfs and libhdfs
 ---

 Key: HDFS-7589
 URL: https://issues.apache.org/jira/browse/HDFS-7589
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: libhdfs
Reporter: Zhanwei Wang
Assignee: Zhanwei Wang
 Fix For: 2.7.0

 Attachments: HDFS-7589.002.patch, HDFS-7589.patch


 Currently libnative_mini_dfs links with libhdfs to reuse some common code. 
 Other applications which want to use libnative_mini_dfs have to link to 
 libhdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-01-13 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275579#comment-14275579
 ] 

Jesse Yates commented on HDFS-6440:
---

Thanks for the comments.  I'll work on a new version, but in the meantime, some 
responses:
bq. StandbyCheckpointer#activeNNAddresses
The standby checkpointer doesn't necessarily run just on the SNN - it could be 
in multiple places. Further, I think you are presupposing that there is only 
one SNN and one ANN; since there will commonly be at least 3 NNs, either of 
the two other NNs could be the active NN. I could see it being renamed to 
potentialActiveNNAddresses, but I don't think that gains enough clarity to 
justify the increased verbosity.

bq.  I saw you removed {final}
I was trying to keep in the spirit of the original mini-cluster code. The final 
safety concern is really only relevant when you are changing the number of 
configured NNs and then accessing them from different threads; I have no idea 
when that would even make sense. Even then you wouldn't have been thread-safe 
in the original code, as there is no locking on the array of NNs. I removed the 
finals to keep the same style as the original with respect to changing the 
topology.

bq. Are the changes in 'log4j.properties' necessary?

Not strictly, but it's just the test log4j properties (so there is no effect on 
the production version), and it just adds more debugging information: in this 
case, which thread is actually emitting the log message.
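
For reference, the kind of test-only tweak meant here (an illustrative pattern, 
not the actual diff) is adding %t to the log4j layout so each line carries the 
thread name:

{code}
# test log4j.properties only -- %t prints the name of the logging thread
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c{2}: %m%n
{code}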

I'll update the others.

 Support more than 2 NameNodes
 -

 Key: HDFS-6440
 URL: https://issues.apache.org/jira/browse/HDFS-6440
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: auto-failover, ha, namenode
Affects Versions: 2.4.0
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: Multiple-Standby-NameNodes_V1.pdf, 
 hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
 hdfs-multiple-snn-trunk-v0.patch


 Most of the work is already done to support more than 2 NameNodes (one 
 active, one standby). This would be the last bit to support running multiple 
 _standby_ NameNodes; one of the standbys should be available for fail-over.
 Mostly, this is a matter of updating how we parse configurations, some 
 complexity around managing the checkpointing, and updating a whole lot of 
 tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2015-01-13 Thread Ayappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayappan updated HDFS-4681:
--
Environment: PowerPC Big Endian architecture

 TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
 using IBM java
 -

 Key: HDFS-4681
 URL: https://issues.apache.org/jira/browse/HDFS-4681
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.5.2
 Environment: PowerPC Big Endian architecture
Reporter: Tian Hong Wang
Assignee: Suresh Srinivas
 Attachments: HDFS-4681-v1.patch, HDFS-4681.patch


 TestBlocksWithNotEnoughRacks unit test fails with the following error message:
 
 testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
   Time elapsed: 8997 sec   FAILURE!
 org.junit.ComparisonFailure: Corrupt replica 
 expected:...��^EI�u�[�{���[$�\hF�[�R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�02���:)$�{|�^@�-���|GvW��7g
  �/M��[U!eF�^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^Oc���   
 oKtE�*�^\3u��]Ē:mŭ^^y�^H��_^T�^ZS4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
 ��6b�S�G�^?��m4FW#^@
 D5��}�^Z�^]���mfR^G#T-�N��̋�p���`�~��`�^F;�^C] but 
 was:...��^EI�u�[�{���[$�\hF�[R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�02�:)$�{|�^@�-���|GvW��7g
  �/M�[U!eF�^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^Oc���  
 oKtE�*�^\3u��]Ē:mŭ^^y���^H��_^T�^ZS���4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
��6b�S�G�^?��m4FW#^@
 D5��}�^Z�^]���mfR^G#T-�N�̋�p���`�~��`�^F;�]
 at org.junit.Assert.assertEquals(Assert.java:123)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4681) TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java

2015-01-13 Thread Ayappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayappan updated HDFS-4681:
--
Labels:   (was: patch)

 TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails 
 using IBM java
 -

 Key: HDFS-4681
 URL: https://issues.apache.org/jira/browse/HDFS-4681
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.5.2
Reporter: Tian Hong Wang
Assignee: Suresh Srinivas
 Attachments: HDFS-4681-v1.patch, HDFS-4681.patch


 TestBlocksWithNotEnoughRacks unit test fails with the following error message:
 
 testCorruptBlockRereplicatedAcrossRacks(org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks)
   Time elapsed: 8997 sec   FAILURE!
 org.junit.ComparisonFailure: Corrupt replica 
 expected:...��^EI�u�[�{���[$�\hF�[�R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�02���:)$�{|�^@�-���|GvW��7g
  �/M��[U!eF�^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^Oc���   
 oKtE�*�^\3u��]Ē:mŭ^^y�^H��_^T�^ZS4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
 ��6b�S�G�^?��m4FW#^@
 D5��}�^Z�^]���mfR^G#T-�N��̋�p���`�~��`�^F;�^C] but 
 was:...��^EI�u�[�{���[$�\hF�[R{O�L^S��g�#�O��׼��Wv��6u4Hd)FaŔ��^W�0��H/�^ZU^@�6�02�:)$�{|�^@�-���|GvW��7g
  �/M�[U!eF�^N^?�4pR�d��|��Ŵ7j^O^Sh�^@�nu�(�^C^Y�;I�Q�K^Oc���  
 oKtE�*�^\3u��]Ē:mŭ^^y���^H��_^T�^ZS���4�7�C�^G�_���\|^W�vo���zgU�lmJ)_vq~�+^Mo^G^O�W}�.�4
��6b�S�G�^?��m4FW#^@
 D5��}�^Z�^]���mfR^G#T-�N�̋�p���`�~��`�^F;�]
 at org.junit.Assert.assertEquals(Assert.java:123)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks.testCorruptBlockRereplicatedAcrossRacks(TestBlocksWithNotEnoughRacks.java:229)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3689) Add support for variable length block

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276583#comment-14276583
 ] 

Hadoop QA commented on HDFS-3689:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692140/HDFS-3689.003.patch
  against trunk revision f92e503.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-nfs:

  org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
  org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache
  org.apache.hadoop.hdfs.server.namenode.TestHDFSConcat

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9205//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9205//console

This message is automatically generated.

 Add support for variable length block
 -

 Key: HDFS-3689
 URL: https://issues.apache.org/jira/browse/HDFS-3689
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Affects Versions: 3.0.0
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch, 
 HDFS-3689.002.patch, HDFS-3689.003.patch, HDFS-3689.003.patch


 Currently HDFS supports fixed length blocks. Supporting variable length block 
 will allow new use cases and features to be built on top of HDFS. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4929) [NNBench mark] Lease mismatch error when running with multiple mappers

2015-01-13 Thread caixiaofeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276601#comment-14276601
 ] 

caixiaofeng commented on HDFS-4929:
---

I have tried as @Hari Krishna Dara suggested.



 [NNBench mark] Lease mismatch error when running with multiple mappers
 --

 Key: HDFS-4929
 URL: https://issues.apache.org/jira/browse/HDFS-4929
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: benchmarks
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Critical

 Command :
 ./yarn jar 
 ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.0.1-tests.jar 
 nnbench -operation create_write -numberOfFiles 1000 -blockSize 268435456 
 -bytesToWrite 102400 -baseDir /benchmarks/NNBench`hostname -s` 
 -replicationFactorPerFile 3 -maps 100 -reduces 10
 Trace :
 2013-06-21 10:44:53,763 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 7 on 9005, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 
 192.168.105.214:36320: error: 
 org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: Lease mismatch 
 on /benchmarks/NNBenchlinux-185/data/file_linux-214__0 owned by 
 DFSClient_attempt_1371782327901_0001_m_48_0_1383437860_1 but is accessed 
 by DFSClient_attempt_1371782327901_0001_m_84_0_1880545303_1
 org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: Lease mismatch 
 on /benchmarks/NNBenchlinux-185/data/file_linux-214__0 owned by 
 DFSClient_attempt_1371782327901_0001_m_48_0_1383437860_1 but is accessed 
 by DFSClient_attempt_1371782327901_0001_m_84_0_1880545303_1
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2351)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2098)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2019)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:213)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:52012)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:435)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:925)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1710)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1706)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-4929) [NNBench mark] Lease mismatch error when running with multiple mappers

2015-01-13 Thread caixiaofeng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

caixiaofeng reassigned HDFS-4929:
-

Assignee: caixiaofeng  (was: Brahma Reddy Battula)

 [NNBench mark] Lease mismatch error when running with multiple mappers
 --

 Key: HDFS-4929
 URL: https://issues.apache.org/jira/browse/HDFS-4929
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: benchmarks
Reporter: Brahma Reddy Battula
Assignee: caixiaofeng
Priority: Critical

 Command :
 ./yarn jar 
 ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.0.1-tests.jar 
 nnbench -operation create_write -numberOfFiles 1000 -blockSize 268435456 
 -bytesToWrite 102400 -baseDir /benchmarks/NNBench`hostname -s` 
 -replicationFactorPerFile 3 -maps 100 -reduces 10
 Trace :
 2013-06-21 10:44:53,763 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 7 on 9005, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 
 192.168.105.214:36320: error: 
 org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: Lease mismatch 
 on /benchmarks/NNBenchlinux-185/data/file_linux-214__0 owned by 
 DFSClient_attempt_1371782327901_0001_m_48_0_1383437860_1 but is accessed 
 by DFSClient_attempt_1371782327901_0001_m_84_0_1880545303_1
 org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: Lease mismatch 
 on /benchmarks/NNBenchlinux-185/data/file_linux-214__0 owned by 
 DFSClient_attempt_1371782327901_0001_m_48_0_1383437860_1 but is accessed 
 by DFSClient_attempt_1371782327901_0001_m_84_0_1880545303_1
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2351)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2098)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2019)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:213)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:52012)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:435)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:925)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1710)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1706)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7460) Rewrite httpfs to use new shell framework

2015-01-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-7460:
---
Attachment: HDFS-7460.patch

-00:
* Changed all the -daemon/-daemons to their --daemon equivalent.
* Added a ton of missing commands to the hdfs command manual.
* Some bins are really sbins.
* Re-arranged to match the rest of the command documents.
* Stripped trailing spaces out of the docs that were touched in this patch.

 Rewrite httpfs to use new shell framework
 -

 Key: HDFS-7460
 URL: https://issues.apache.org/jira/browse/HDFS-7460
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HDFS-7460.patch


 httpfs shell code was not rewritten during HADOOP-9902. It should be modified 
 to take advantage of the common shell framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7568) Support immutability (Write-once-read-many) in HDFS

2015-01-13 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275781#comment-14275781
 ] 

Charles Lamb commented on HDFS-7568:


Hi [~sureshms],

I just wanted to know if you plan on posting a more detailed design doc for 
this.

Thanks.

Charles


 Support immutability (Write-once-read-many) in HDFS
 ---

 Key: HDFS-7568
 URL: https://issues.apache.org/jira/browse/HDFS-7568
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: namenode
Affects Versions: 2.7.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas

 Many regulatory compliance requires storage to support WORM functionality to 
 protect sensitive data from being modified or deleted. This jira proposes 
 adding that feature to HDFS.
 See the following comment for more description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-2219) Fsck should work with fully qualified file paths.

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275690#comment-14275690
 ] 

Hadoop QA commented on HDFS-2219:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12691953/h2219_20150113.patch
  against trunk revision 08ac062.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9197//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9197//console

This message is automatically generated.

 Fsck should work with fully qualified file paths.
 -

 Key: HDFS-2219
 URL: https://issues.apache.org/jira/browse/HDFS-2219
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 0.23.0
Reporter: Jitendra Nath Pandey
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Attachments: h2219_20150113.patch


 Fsck takes absolute paths, but doesn't work with fully qualified file path 
 URIs. In a federated cluster with multiple namenodes, it will be useful to be 
 able to specify a file path for any namenode using its fully qualified path. 
 Currently, a non-default file system can be specified using the -fs option.
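
 For example (hosts and paths below are illustrative):
 {code}
 # Works today: an absolute path, resolved against the default file system
 hdfs fsck /user/foo

 # Goal of this issue: a fully qualified path that selects the right
 # namenode in a federated cluster
 hdfs fsck hdfs://nn2.example.com:8020/user/foo

 # Current workaround: pick a non-default file system via the generic option
 hdfs fsck -fs hdfs://nn2.example.com:8020 /user/foo
 {code}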



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HDFS-7460) Rewrite httpfs to use new shell framework

2015-01-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-7460:
---
Comment: was deleted

(was: -00:
* Changed all the -daemon/-daemons to their --daemon equivalent.
* Added a ton of missing commands to the hdfs command manual.
* Some bins are really sbins.
* Re-arranged to conform to match the rest of the command documents.
* Stripped trailing spaces out of the docs that were touched in this patch.
* )

 Rewrite httpfs to use new shell framework
 -

 Key: HDFS-7460
 URL: https://issues.apache.org/jira/browse/HDFS-7460
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Allen Wittenauer

 httpfs shell code was not rewritten during HADOOP-9902. It should be modified 
 to take advantage of the common shell framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7460) Rewrite httpfs to use new shell framework

2015-01-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-7460:
---
Attachment: (was: HDFS-7460.patch)

 Rewrite httpfs to use new shell framework
 -

 Key: HDFS-7460
 URL: https://issues.apache.org/jira/browse/HDFS-7460
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Allen Wittenauer

 httpfs shell code was not rewritten during HADOOP-9902. It should be modified 
 to take advantage of the common shell framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7581) HDFS documentation needs updating post-shell rewrite

2015-01-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-7581:
---
Attachment: HDFS-7581.patch

-00:
 *  Changed all the -daemon/-daemons to their --daemon equivalent.
 *  Added a ton of missing commands to the hdfs command manual.
 *  Some bins are really sbins.
 *  Re-arranged to match the rest of the command documents.
 *  Stripped trailing spaces out of the docs that were touched in this patch.

 HDFS documentation needs updating post-shell rewrite
 

 Key: HDFS-7581
 URL: https://issues.apache.org/jira/browse/HDFS-7581
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HDFS-7581.patch


 After HADOOP-9902, some of the HDFS documentation is out of date.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7597) Clients seeking over webhdfs may crash the NN

2015-01-13 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275779#comment-14275779
 ] 

Colin Patrick McCabe commented on HDFS-7597:


Good point, Chris.  Actually, a Guava cache would be perfect here.  I think it 
would be fine to do this in a follow-on change as well, if that's more 
convenient.
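
A minimal sketch of the Guava-cache idea (an assumed shape, not the attached 
patch; the helper below is hypothetical):

{code}
// Cache UGIs keyed by the delegation token so repeated webhdfs opens map
// to the same UGI, and therefore to the same cached NN connection.
LoadingCache<Token<DelegationTokenIdentifier>, UserGroupInformation> ugiCache =
    CacheBuilder.newBuilder()
        .maximumSize(1000)                        // illustrative bound
        .expireAfterAccess(10, TimeUnit.MINUTES)  // illustrative TTL
        .build(new CacheLoader<Token<DelegationTokenIdentifier>, UserGroupInformation>() {
          @Override
          public UserGroupInformation load(Token<DelegationTokenIdentifier> token) {
            return ugiFromToken(token);           // hypothetical helper
          }
        });
{code}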

 Clients seeking over webhdfs may crash the NN
 -

 Key: HDFS-7597
 URL: https://issues.apache.org/jira/browse/HDFS-7597
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Affects Versions: 2.0.0-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-7597.patch


 Webhdfs seeks involve closing the current connection, and reissuing a new 
 open request with the new offset.  The RPC layer caches connections so the DN 
 keeps a lingering connection open to the NN.  Connection caching is in part 
 based on UGI.  Although the client used the same token for the new offset 
 request, the UGI is different which forces the DN to open another unnecessary 
 connection to the NN.
 A job that performs many seeks will easily crash the NN due to fd exhaustion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7067) ClassCastException while using a key created by keytool to create encryption zone.

2015-01-13 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275795#comment-14275795
 ] 

Colin Patrick McCabe commented on HDFS-7067:


+1 pending jenkins

 ClassCastException while using a key created by keytool to create encryption 
 zone. 
 ---

 Key: HDFS-7067
 URL: https://issues.apache.org/jira/browse/HDFS-7067
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: encryption
Affects Versions: 2.6.0
Reporter: Yi Yao
Assignee: Charles Lamb
Priority: Minor
 Attachments: HDFS-7067.001.patch, HDFS-7067.002.patch, 
 hdfs7067.keystore


 I'm using transparent encryption. If I create a key for the KMS keystore via 
 keytool and use that key to create an encryption zone, I get a 
 ClassCastException rather than an exception with a decent error message. I know 
 we should use 'hadoop key create' to create a key. It would be better to provide 
 a decent error message reminding the user of the right way to create a KMS key.
 [LOG]
 ERROR[user=hdfs] Method:'GET' Exception:'java.lang.ClassCastException: 
 javax.crypto.spec.SecretKeySpec cannot be cast to 
 org.apache.hadoop.crypto.key.JavaKeyStoreProvider$KeyMetadata'
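
 For reference, the difference being described (commands are illustrative):
 {code}
 # Wrong: keytool stores a javax.crypto SecretKeySpec entry, which the
 # JavaKeyStoreProvider cannot cast to its KeyMetadata
 keytool -genseckey -alias mykey -storetype jceks -keystore kms.keystore

 # Right: create the key through the Hadoop key provider, then use it
 hadoop key create mykey -size 128
 hdfs crypto -createZone -keyName mykey -path /secure/zone
 {code}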



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3689) Add support for variable length block

2015-01-13 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-3689:

Attachment: HDFS-3689.003.patch

Update the patch with concat support. 

 Add support for variable length block
 -

 Key: HDFS-3689
 URL: https://issues.apache.org/jira/browse/HDFS-3689
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Affects Versions: 3.0.0
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch, 
 HDFS-3689.002.patch, HDFS-3689.003.patch


 Currently HDFS supports fixed length blocks. Supporting variable length block 
 will allow new use cases and features to be built on top of HDFS. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7587) Edit log corruption can happen if append fails with a quota violation

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276269#comment-14276269
 ] 

Hadoop QA commented on HDFS-7587:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692046/HDFS-7587.patch
  against trunk revision 10ac5ab.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9200//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9200//console

This message is automatically generated.

 Edit log corruption can happen if append fails with a quota violation
 -

 Key: HDFS-7587
 URL: https://issues.apache.org/jira/browse/HDFS-7587
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Kihwal Lee
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HDFS-7587.patch


 We have seen a standby namenode crashing due to edit log corruption. It was 
 complaining that {{OP_CLOSE}} cannot be applied because the file is not 
 under construction.
 When a client was trying to append to the file, the remaining space quota was 
 very small. This caused a failure in {{prepareFileForWrite()}}, but only after 
 the inode had already been converted for writing and a lease added. Since these 
 were not undone when the quota violation was detected, the file was left under 
 construction with an active lease, without {{OP_ADD}} being edit logged.
 A subsequent {{append()}} eventually caused a lease recovery after the soft 
 limit period. This resulted in {{commitBlockSynchronization()}}, which closed 
 the file with {{OP_CLOSE}} being logged.  Since there was no corresponding 
 {{OP_ADD}}, edit replaying could not apply this.
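
 The shape of the fix this implies (a sketch under assumed names, not the 
 attached patch) is to undo the conversion when the quota check throws:
 {code}
 // Sketch only: roll back the under-construction conversion on a quota
 // violation so no lease is left behind without a logged OP_ADD.
 try {
   prepareFileForWrite(src, inode, holder, clientMachine);
 } catch (QuotaExceededException e) {
   undoPrepareFileForWrite(inode);          // hypothetical undo helper
   leaseManager.removeLease(holder, src);   // drop the lease added above
   throw e;
 }
 {code}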



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-5631) Expose interfaces required by FsDatasetSpi implementations

2015-01-13 Thread Joe Pallas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Pallas reassigned HDFS-5631:


Assignee: Joe Pallas  (was: David Powell)

 Expose interfaces required by FsDatasetSpi implementations
 --

 Key: HDFS-5631
 URL: https://issues.apache.org/jira/browse/HDFS-5631
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 3.0.0
Reporter: David Powell
Assignee: Joe Pallas
Priority: Minor
 Attachments: HDFS-5631-LazyPersist.patch, HDFS-5631.patch, 
 HDFS-5631.patch


 This sub-task addresses section 4.1 of the document attached to HDFS-5194,
 the exposure of interfaces needed by a FsDatasetSpi implementation.
 Specifically it makes ChunkChecksum public and BlockMetadataHeader's
 readHeader() and writeHeader() methods public.
 The changes to BlockReaderUtil (and related classes) discussed by section
 4.1 are only needed if supporting short-circuit, and should be addressed
 as part of an effort to provide such support rather than this JIRA.
 To help ensure these changes are complete and are not regressed in the
 future, tests that gauge the accessibility (though *not* behavior)
 of interfaces needed by a FsDatasetSpi subclass are also included.
 These take the form of a dummy FsDatasetSpi subclass -- a successful
 compilation is effectively a pass.  Trivial unit tests are included so
 that there is something tangible to track.
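
 A minimal sketch of such a compile-time accessibility check (the class name is 
 hypothetical; member signatures are assumed from the description):
 {code}
 // If these members lose public visibility, this class stops compiling,
 // and the failed compilation is effectively a failed test.
 public class ExternalDatasetVisibilityCheck {
   void use(java.io.DataInputStream in) throws java.io.IOException {
     BlockMetadataHeader header = BlockMetadataHeader.readHeader(in);
     ChunkChecksum sum = new ChunkChecksum(0L, new byte[4]);
   }
 }
 {code}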



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-5782) BlockListAsLongs should take lists of Replicas rather than concrete classes

2015-01-13 Thread Joe Pallas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Pallas reassigned HDFS-5782:


Assignee: Joe Pallas  (was: David Powell)

 BlockListAsLongs should take lists of Replicas rather than concrete classes
 ---

 Key: HDFS-5782
 URL: https://issues.apache.org/jira/browse/HDFS-5782
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 3.0.0
Reporter: David Powell
Assignee: Joe Pallas
Priority: Minor
 Attachments: HDFS-5782.patch, HDFS-5782.patch


 From HDFS-5194:
 {quote}
 BlockListAsLongs's constructor takes a list of Blocks and a list of 
 ReplicaInfos.  On the surface, the former is mildly irritating because it is 
 a concrete class, while the latter is a greater concern due to being a 
 File-based implementation of Replica.
 On deeper inspection, BlockListAsLongs passes members of both to an internal 
 method that accepts just Blocks, which conditionally casts them *back* to 
 ReplicaInfos (this cast only happens to the latter, though this isn't 
 immediately obvious to the reader).
 Conveniently, all methods called on these objects are found in the Replica 
 interface, and all functional (i.e. non-test) consumers of this interface 
 pass in Replica subclasses.  If this constructor took Lists of Replicas 
 instead, it would be more generally useful and its implementation would be 
 cleaner as well.
 {quote}
 Fixing this indeed makes the business end of BlockListAsLongs cleaner while 
 requiring no changes to FsDatasetImpl.  As suggested by the above 
 description, though, the HDFS tests use BlockListAsLongs differently from the 
 production code -- they pretty much universally provide a list of actual 
 Blocks.  To handle this:
 - In the case of SimulatedFSDataset, providing a list of Replicas is actually 
 less work.
 - In the case of NNThroughputBenchmark, rewriting to use Replicas is fairly 
 invasive.  Instead, the patch creates a second constructor in 
 BlockListOfLongs specifically for the use of NNThrougputBenchmark.  It turns 
 the stomach a little, but is clearer and requires less code than the 
 alternatives (and isn't without precedent).  
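 In signature terms the change is roughly this (a sketch with bodies elided, 
 not the patch itself):
 {code}
 // Before: tied to a concrete, File-based implementation.
 public BlockListAsLongs(final List<? extends Block> finalized,
                         final List<ReplicaInfo> uc) { /* ... */ }

 // After: programmed against the Replica interface, so any FsDatasetSpi
 // implementation's replica types can be reported without casting back.
 public BlockListAsLongs(final List<? extends Replica> finalized,
                         final List<? extends Replica> uc) { /* ... */ }
 {code}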



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7470) SecondaryNameNode need twice memory when calling reloadFromImageFile

2015-01-13 Thread zhaoyunjiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14276347#comment-14276347
 ] 

zhaoyunjiong commented on HDFS-7470:


Chris, thanks for your time.

 SecondaryNameNode need twice memory when calling reloadFromImageFile
 

 Key: HDFS-7470
 URL: https://issues.apache.org/jira/browse/HDFS-7470
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: zhaoyunjiong
Assignee: zhaoyunjiong
 Fix For: 2.7.0

 Attachments: HDFS-7470.1.patch, HDFS-7470.2.patch, HDFS-7470.patch, 
 secondaryNameNode.jstack.txt


 histo information at 2014-12-02 01:19
 {quote}
  num     #instances         #bytes  class name
 ----------------------------------------------
    1:     186449630    19326123016  [Ljava.lang.Object;
    2:     157366649    15107198304  org.apache.hadoop.hdfs.server.namenode.INodeFile
    3:     183409030    11738177920  org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo
    4:     157358401     5244264024  [Lorg.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
    5:             3     3489661000  [Lorg.apache.hadoop.util.LightWeightGSet$LinkedElement;
    6:      29253275     1872719664  [B
    7:       3230821      284312248  org.apache.hadoop.hdfs.server.namenode.INodeDirectory
    8:       2756284      110251360  java.util.ArrayList
    9:        469158       22519584  org.apache.hadoop.fs.permission.AclEntry
   10:           847       17133032  [Ljava.util.HashMap$Entry;
   11:        188471       17059632  [C
   12:        314614       10067656  [Lorg.apache.hadoop.hdfs.server.namenode.INode$Feature;
   13:        234579        9383160  com.google.common.collect.RegularImmutableList
   14:         49584        6850280  constMethodKlass
   15:         49584        6356704  methodKlass
   16:        187270        5992640  java.lang.String
   17:        234579        5629896  org.apache.hadoop.hdfs.server.namenode.AclFeature
 {quote}
 histo information at 2014-12-02 01:32
 {quote}
  num     #instances         #bytes  class name
 ----------------------------------------------
    1:     355838051    35566651032  [Ljava.lang.Object;
    2:     302272758    29018184768  org.apache.hadoop.hdfs.server.namenode.INodeFile
    3:     352500723    22560046272  org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo
    4:     302264510    10075087952  [Lorg.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
    5:     177120233     9374983920  [B
    6:             3     3489661000  [Lorg.apache.hadoop.util.LightWeightGSet$LinkedElement;
    7:       6191688      544868544  org.apache.hadoop.hdfs.server.namenode.INodeDirectory
    8:       2799256      111970240  java.util.ArrayList
    9:        890728       42754944  org.apache.hadoop.fs.permission.AclEntry
   10:        330986       29974408  [C
   11:        596871       19099880  [Lorg.apache.hadoop.hdfs.server.namenode.INode$Feature;
   12:        445364       17814560  com.google.common.collect.RegularImmutableList
   13:           844       17132816  [Ljava.util.HashMap$Entry;
   14:        445364       10688736  org.apache.hadoop.hdfs.server.namenode.AclFeature
   15:        329789       10553248  java.lang.String
   16:         91741        8807136  org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction
   17:         49584        6850280  constMethodKlass
 {quote}
 And the stack trace shows it was doing reloadFromImageFile:
 {quote}
   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.getInode(FSDirectory.java:2426)
   at org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeDirectorySection(FSImageFormatPBINode.java:160)
   at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:243)
   at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:168)
   at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:121)
   at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:902)
   at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:888)
   at org.apache.hadoop.hdfs.server.namenode.FSImage.reloadFromImageFile(FSImage.java:562)
   at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:1048)
   at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:536)
   at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:388)
   at 
 

[jira] [Commented] (HDFS-6826) Plugin interface to enable delegation of HDFS authorization assertions

2015-01-13 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14276408#comment-14276408
 ] 

Aaron T. Myers commented on HDFS-6826:
--

Just to be completely explicit, would this design allow for the plugin 
{{AccessControlPolicy}} to affect what's returned to the client for results 
from calls to {{DistributedFileSystem#listStatus}}? That's really the crux of 
what I'm after. If this proposal allows for that, then it'll work for me. I 
want to make sure that the `hadoop fs -ls ...' output is capable of actually 
displaying the permissions/ACLs that are being enforced at any moment in time, 
regardless of what backend policy is in fact determining what those 
permissions/ACLs are.
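
For illustration only, the shape of the guarantee I'm asking about (the 
interface and method names here are hypothetical, not the proposed API):
{code}
import java.util.List;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.FsPermission;

// Hypothetical: whatever the plugin reports here must be exactly what
// getFileStatus/listStatus return, so 'hadoop fs -ls' always displays the
// permissions/ACLs actually being enforced at that moment.
public interface AuthorizationProvider {
  String getUser(INodeAuthorizationInfo node);
  String getGroup(INodeAuthorizationInfo node);
  FsPermission getPermission(INodeAuthorizationInfo node);
  List<AclEntry> getAclEntries(INodeAuthorizationInfo node);
}
{code}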

 Plugin interface to enable delegation of HDFS authorization assertions
 --

 Key: HDFS-6826
 URL: https://issues.apache.org/jira/browse/HDFS-6826
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HDFS-6826-idea.patch, HDFS-6826-idea2.patch, 
 HDFS-6826-permchecker.patch, HDFS-6826v3.patch, HDFS-6826v4.patch, 
 HDFS-6826v5.patch, HDFS-6826v6.patch, HDFS-6826v7.1.patch, 
 HDFS-6826v7.2.patch, HDFS-6826v7.3.patch, HDFS-6826v7.4.patch, 
 HDFS-6826v7.5.patch, HDFS-6826v7.6.patch, HDFS-6826v7.patch, 
 HDFS-6826v8.patch, HDFS-6826v9.patch, 
 HDFSPluggableAuthorizationProposal-v2.pdf, 
 HDFSPluggableAuthorizationProposal.pdf


 When HBase data, HiveMetaStore data, or Search data is accessed via services 
 (HBase region servers, HiveServer2, Impala, Solr), the services can enforce 
 permissions on the corresponding entities (databases, tables, views, columns, 
 search collections, documents). It is desirable, when the underlying data 
 files are accessed directly by users (i.e. from a MapReduce job), that the 
 permissions of the data files map to the permissions of the corresponding 
 data entity (i.e. table, column family or search collection).
 To enable this we need to have the necessary hooks in place in the NameNode 
 to delegate authorization to an external system that can map HDFS 
 files/directories to data entities and resolve their permissions based on the 
 data entities permissions.
 I’ll be posting a design proposal in the next few days.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7581) HDFS documentation needs updating post-shell rewrite

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14276409#comment-14276409
 ] 

Hadoop QA commented on HDFS-7581:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692078/HDFS-7581-02.patch
  against trunk revision 10ac5ab.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9202//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9202//console

This message is automatically generated.

 HDFS documentation needs updating post-shell rewrite
 

 Key: HDFS-7581
 URL: https://issues.apache.org/jira/browse/HDFS-7581
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HDFS-7581-01.patch, HDFS-7581-02.patch, HDFS-7581.patch


 After HADOOP-9902, some of the HDFS documentation is out of date.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7570) DataXceiver could leak FileDescriptor

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14276256#comment-14276256
 ] 

Hudson commented on HDFS-7570:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6855 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6855/])
HDFS-7570. SecondaryNameNode need twice memory when calling 
reloadFromImageFile. Contributed by zhaoyunjiong. (cnauroth: rev 
85aec75ce53445e1abf840076d2e10f1e3c6d69b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 DataXceiver could leak FileDescriptor
 -

 Key: HDFS-7570
 URL: https://issues.apache.org/jira/browse/HDFS-7570
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Juan Yu

 DataXceiver does not always close its input stream, so file descriptors can 
 leak and, over time, exceed the FD limit. In the cleanup below, the stream 
 is only closed when {{peer != null}}:
 {code}
 finally {
   if (LOG.isDebugEnabled()) {
     LOG.debug(datanode.getDisplayName() + ":Number of active connections is: "
         + datanode.getXceiverCount());
   }
   updateCurrentThreadName("Cleaning up");
   if (peer != null) {
     dataXceiverServer.closePeer(peer);
     IOUtils.closeStream(in);
   }
 }
 {code}
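 One minimal direction for a fix (a sketch, not necessarily the committed 
 change) is to close the stream unconditionally; {{IOUtils.closeStream}} 
 tolerates null and swallows IOExceptions, so it is safe in a finally block:
 {code}
 finally {
   if (LOG.isDebugEnabled()) {
     LOG.debug(datanode.getDisplayName() + ":Number of active connections is: "
         + datanode.getXceiverCount());
   }
   updateCurrentThreadName("Cleaning up");
   // Release the FD even when peer is null, which is where the leak hides.
   IOUtils.closeStream(in);
   if (peer != null) {
     dataXceiverServer.closePeer(peer);
   }
 }
 {code}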



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7057) Expose truncate API via FileSystem and shell command

2015-01-13 Thread Milan Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Milan Desai reassigned HDFS-7057:
-

Assignee: Milan Desai

 Expose truncate API via FileSystem and shell command
 

 Key: HDFS-7057
 URL: https://issues.apache.org/jira/browse/HDFS-7057
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: Milan Desai

 Add truncate operation to FileSystem and expose it to users via shell command.
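 Assuming the API lands as {{FileSystem#truncate(Path, long)}} (an assumption 
 of this sketch, since defining the API is the point of this JIRA), client 
 usage could look like:
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 public class TruncateExample {
   public static void main(String[] args) throws Exception {
     FileSystem fs = FileSystem.get(new Configuration());
     // Truncate /tmp/data.log to its first 1024 bytes. A true result means
     // the truncate completed immediately; false means block recovery is in
     // progress and the caller must wait before appending again.
     boolean done = fs.truncate(new Path("/tmp/data.log"), 1024L);
     System.out.println("truncate completed immediately: " + done);
   }
 }
 {code}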



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7496) Fix FsVolume removal race conditions on the DataNode

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14276319#comment-14276319
 ] 

Hadoop QA commented on HDFS-7496:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692101/HDFS-7496.003.patch
  against trunk revision 85aec75.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9203//console

This message is automatically generated.

 Fix FsVolume removal race conditions on the DataNode 
 -

 Key: HDFS-7496
 URL: https://issues.apache.org/jira/browse/HDFS-7496
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Lei (Eddy) Xu
 Attachments: HDFS-7496.000.patch, HDFS-7496.001.patch, 
 HDFS-7496.002.patch, HDFS-7496.003.patch


 We discussed a few FsVolume removal race conditions on the DataNode in 
 HDFS-7489.  We should figure out a way to make removing an FsVolume safe.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7056) Snapshot support for truncate

2015-01-13 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7056:
--
Status: Open  (was: Patch Available)

 Snapshot support for truncate
 -

 Key: HDFS-7056
 URL: https://issues.apache.org/jira/browse/HDFS-7056
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: Plamen Jeliazkov
 Attachments: HDFS-3107-HDFS-7056-combined-13.patch, 
 HDFS-3107-HDFS-7056-combined-15.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-7056-13.patch, HDFS-7056-15.patch, HDFS-7056.patch, HDFS-7056.patch, 
 HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, 
 HDFS-7056.patch, HDFS-7056.patch, HDFSSnapshotWithTruncateDesign.docx, 
 editsStored, editsStored.xml


 The implementation of truncate in HDFS-3107 does not allow truncating files 
 that are in a snapshot. It is desirable to be able to truncate a file and 
 still keep its old state in the snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7056) Snapshot support for truncate

2015-01-13 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7056:
--
Attachment: editsStored.xml
editsStored

This needs editsStored updated as well for TestOEV to pass, since the layout 
version changed.

 Snapshot support for truncate
 -

 Key: HDFS-7056
 URL: https://issues.apache.org/jira/browse/HDFS-7056
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: Plamen Jeliazkov
 Attachments: HDFS-3107-HDFS-7056-combined-13.patch, 
 HDFS-3107-HDFS-7056-combined-15.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-7056-13.patch, HDFS-7056-15.patch, HDFS-7056.patch, HDFS-7056.patch, 
 HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, 
 HDFS-7056.patch, HDFS-7056.patch, HDFSSnapshotWithTruncateDesign.docx, 
 editsStored, editsStored.xml


 The implementation of truncate in HDFS-3107 does not allow truncating files 
 that are in a snapshot. It is desirable to be able to truncate a file and 
 still keep its old state in the snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7056) Snapshot support for truncate

2015-01-13 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-7056.
---
   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed

I just committed this to trunk. Thank you Plamen.
Thanks everybody who contributed with reviews, comments, design, testing.

 Snapshot support for truncate
 -

 Key: HDFS-7056
 URL: https://issues.apache.org/jira/browse/HDFS-7056
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: Plamen Jeliazkov
 Fix For: 3.0.0

 Attachments: HDFS-3107-HDFS-7056-combined-13.patch, 
 HDFS-3107-HDFS-7056-combined-15.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
 HDFS-7056-13.patch, HDFS-7056-15.patch, HDFS-7056.patch, HDFS-7056.patch, 
 HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, 
 HDFS-7056.patch, HDFS-7056.patch, HDFSSnapshotWithTruncateDesign.docx, 
 editsStored, editsStored.xml


 The implementation of truncate in HDFS-3107 does not allow truncating files 
 that are in a snapshot. It is desirable to be able to truncate a file and 
 still keep its old state in the snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-3107) HDFS truncate

2015-01-13 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-3107.
---
   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed

I just committed this to trunk. Thank you Plamen.

Should we port it to branch 2 along with HDFS-7056? What do people think?

 HDFS truncate
 -

 Key: HDFS-3107
 URL: https://issues.apache.org/jira/browse/HDFS-3107
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Lei Chang
Assignee: Plamen Jeliazkov
 Fix For: 3.0.0

 Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch, 
 HDFS-3107-15.patch, HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf, 
 HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate_semantics_Mar15.pdf, 
 HDFS_truncate_semantics_Mar21.pdf, editsStored, editsStored.xml

   Original Estimate: 1,344h
  Remaining Estimate: 1,344h

 Systems with transaction support often need to undo changes made to the 
 underlying storage when a transaction is aborted. Currently HDFS does not 
 support truncate (a standard Posix operation), the reverse of append, which 
 forces upper-layer applications into ugly workarounds (such as keeping track 
 of the discarded byte range per file in a separate metadata store, and 
 periodically running a vacuum process to rewrite compacted files) to 
 overcome this limitation of HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7470) SecondaryNameNode need twice memory when calling reloadFromImageFile

2015-01-13 Thread zhaoyunjiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyunjiong updated HDFS-7470:
---
Attachment: HDFS-7470.2.patch

This patch will clear BlocksMap in FSNamesystem.clear().
I believe this should release the memory.
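Roughly, the idea is (a sketch of the approach, not the attached patch):
{code}
// Sketch: drop block references before loading the new image, so the old
// BlocksMap's GSet becomes garbage-collectible and the secondary does not
// transiently hold two full namespaces in memory.
void clear() {
  dir.reset();               // existing: drops the INode tree
  blockManager.clear();      // added: delegates to blocksMap.clear()
  leaseManager.removeAllLeases();
  snapshotManager.clearSnapshottableDirs();
}
{code}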

 SecondaryNameNode need twice memory when calling reloadFromImageFile
 

 Key: HDFS-7470
 URL: https://issues.apache.org/jira/browse/HDFS-7470
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: zhaoyunjiong
Assignee: zhaoyunjiong
 Attachments: HDFS-7470.1.patch, HDFS-7470.2.patch, HDFS-7470.patch, 
 secondaryNameNode.jstack.txt


 histo information at 2014-12-02 01:19
 {quote}
  num     #instances         #bytes  class name
 ----------------------------------------------
    1:     186449630    19326123016  [Ljava.lang.Object;
    2:     157366649    15107198304  org.apache.hadoop.hdfs.server.namenode.INodeFile
    3:     183409030    11738177920  org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo
    4:     157358401     5244264024  [Lorg.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
    5:             3     3489661000  [Lorg.apache.hadoop.util.LightWeightGSet$LinkedElement;
    6:      29253275     1872719664  [B
    7:       3230821      284312248  org.apache.hadoop.hdfs.server.namenode.INodeDirectory
    8:       2756284      110251360  java.util.ArrayList
    9:        469158       22519584  org.apache.hadoop.fs.permission.AclEntry
   10:           847       17133032  [Ljava.util.HashMap$Entry;
   11:        188471       17059632  [C
   12:        314614       10067656  [Lorg.apache.hadoop.hdfs.server.namenode.INode$Feature;
   13:        234579        9383160  com.google.common.collect.RegularImmutableList
   14:         49584        6850280  constMethodKlass
   15:         49584        6356704  methodKlass
   16:        187270        5992640  java.lang.String
   17:        234579        5629896  org.apache.hadoop.hdfs.server.namenode.AclFeature
 {quote}
 histo information at 2014-12-02 01:32
 {quote}
  num     #instances         #bytes  class name
 ----------------------------------------------
    1:     355838051    35566651032  [Ljava.lang.Object;
    2:     302272758    29018184768  org.apache.hadoop.hdfs.server.namenode.INodeFile
    3:     352500723    22560046272  org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo
    4:     302264510    10075087952  [Lorg.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
    5:     177120233     9374983920  [B
    6:             3     3489661000  [Lorg.apache.hadoop.util.LightWeightGSet$LinkedElement;
    7:       6191688      544868544  org.apache.hadoop.hdfs.server.namenode.INodeDirectory
    8:       2799256      111970240  java.util.ArrayList
    9:        890728       42754944  org.apache.hadoop.fs.permission.AclEntry
   10:        330986       29974408  [C
   11:        596871       19099880  [Lorg.apache.hadoop.hdfs.server.namenode.INode$Feature;
   12:        445364       17814560  com.google.common.collect.RegularImmutableList
   13:           844       17132816  [Ljava.util.HashMap$Entry;
   14:        445364       10688736  org.apache.hadoop.hdfs.server.namenode.AclFeature
   15:        329789       10553248  java.lang.String
   16:         91741        8807136  org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction
   17:         49584        6850280  constMethodKlass
 {quote}
 And the stack trace shows it was doing reloadFromImageFile:
 {quote}
   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.getInode(FSDirectory.java:2426)
   at org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeDirectorySection(FSImageFormatPBINode.java:160)
   at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:243)
   at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:168)
   at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:121)
   at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:902)
   at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:888)
   at org.apache.hadoop.hdfs.server.namenode.FSImage.reloadFromImageFile(FSImage.java:562)
   at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:1048)
   at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:536)
   at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:388)
   at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$1.run(SecondaryNameNode.java:354)
  

[jira] [Commented] (HDFS-7056) Snapshot support for truncate

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14274949#comment-14274949
 ] 

Hudson commented on HDFS-7056:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6853 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6853/])
HDFS-7056. Snapshot support for truncate. Contributed by Konstantin Shvachko 
and Plamen Jeliazkov. (shv: rev 08ac06283a3e9bf0d49d873823aabd419b08e41f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockRecoveryCommand.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/InterDatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestInterDatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/fsimage.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/InterDatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored


 Snapshot support for truncate
 -

 Key: HDFS-7056
 URL: https://issues.apache.org/jira/browse/HDFS-7056
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko

[jira] [Commented] (HDFS-3107) HDFS truncate

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14274950#comment-14274950
 ] 

Hudson commented on HDFS-3107:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6853 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6853/])
HDFS-3107. Introduce truncate. Contributed by Plamen Jeliazkov. (shv: rev 
7e9358feb326d48b8c4f00249e7af5023cebd2e2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockRecoveryCommand.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/NameNodeMetrics.java


 HDFS truncate
 -

 Key: HDFS-3107
 URL: https://issues.apache.org/jira/browse/HDFS-3107
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Lei Chang
Assignee: Plamen Jeliazkov
 Fix For: 3.0.0

 Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch, 
 HDFS-3107-15.patch, HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf, 
 HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate_semantics_Mar15.pdf, 
 HDFS_truncate_semantics_Mar21.pdf, editsStored, editsStored.xml

   Original Estimate: 1,344h
  Remaining Estimate: 1,344h

 Systems with transaction support often need to undo changes made to the 
 underlying storage when a transaction is aborted. Currently HDFS does not 
 support truncate (a standard Posix operation), the reverse of append, which 
 forces upper-layer applications into ugly workarounds (such as keeping track 
 of the discarded byte range 

[jira] [Commented] (HDFS-7598) Remove dependency on old version of Guava in TestDFSClientCache#testEviction

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275250#comment-14275250
 ] 

Hudson commented on HDFS-7598:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #69 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/69/])
HDFS-7598. Remove dependency on old version of Guava in 
TestDFSClientCache#testEviction (Sangjin Lee via Colin P. McCabe) (cmccabe: rev 
b3ddd7ee39c92d2df8661ce5834a2831020cecb2)
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestDFSClientCache.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove dependency on old version of Guava in TestDFSClientCache#testEviction
 

 Key: HDFS-7598
 URL: https://issues.apache.org/jira/browse/HDFS-7598
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7598.001.patch


 TestDFSClientCache.testEviction() is not entirely accurate in its usage of 
 the Guava LoadingCache.
 It sets the max size at 2, but asserts the loading cache will contain only 1 
 entry after inserting two entries. Guava's CacheBuilder.maximumSize() makes 
 only the following promise:
 {panel}
 Specifies the maximum number of entries the cache may contain. Note that the 
 cache may evict an entry before this limit is exceeded.
 {panel}
 Thus, the only invariant is that the loading cache holds at most the maximum 
 size number of entries. TestDFSClientCache.testEviction instead asserts that 
 it holds exactly maximum size - 1 entries.
 For guava 11.0.2 this happens to be true at maximum size = 2 because of the 
 way it sets the maximum segment weight. With later versions of guava, the 
 maximum segment weight is set higher, and the eviction is less aggressive.
 The test should be fixed to assert only the true invariant.
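 A sketch of an assertion that holds across Guava versions (a standalone toy 
 example, not the actual test code):
 {code}
 import static org.junit.Assert.assertTrue;

 import com.google.common.cache.CacheBuilder;
 import com.google.common.cache.CacheLoader;
 import com.google.common.cache.LoadingCache;
 import org.junit.Test;

 public class EvictionInvariantTest {
   @Test
   public void testEvictionUpperBoundOnly() {
     final int maxSize = 2;
     LoadingCache<String, String> cache = CacheBuilder.newBuilder()
         .maximumSize(maxSize)
         .build(new CacheLoader<String, String>() {
           @Override
           public String load(String key) {
             return "client-for-" + key;
           }
         });
     cache.getUnchecked("userA");
     cache.getUnchecked("userB");
     // Guava only promises the cache never exceeds maximumSize; it may
     // evict earlier, so do not assert an exact post-insert count.
     assertTrue(cache.size() <= maxSize);
   }
 }
 {code}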



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7056) Snapshot support for truncate

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275248#comment-14275248
 ] 

Hudson commented on HDFS-7056:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #69 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/69/])
HDFS-7056. Snapshot support for truncate. Contributed by Konstantin Shvachko 
and Plamen Jeliazkov. (shv: rev 08ac06283a3e9bf0d49d873823aabd419b08e41f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockRecoveryCommand.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/InterDatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestInterDatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/fsimage.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/InterDatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


 Snapshot support for truncate
 -

 Key: HDFS-7056
 URL: https://issues.apache.org/jira/browse/HDFS-7056
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko

[jira] [Commented] (HDFS-3107) HDFS truncate

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275254#comment-14275254
 ] 

Hudson commented on HDFS-3107:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #69 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/69/])
HDFS-3107. Introduce truncate. Contributed by Plamen Jeliazkov. (shv: rev 
7e9358feb326d48b8c4f00249e7af5023cebd2e2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockRecoveryCommand.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/NameNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto


 HDFS truncate
 -

 Key: HDFS-3107
 URL: https://issues.apache.org/jira/browse/HDFS-3107
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Lei Chang
Assignee: Plamen Jeliazkov
 Fix For: 3.0.0

 Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch, 
 HDFS-3107-15.patch, HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf, 
 HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate_semantics_Mar15.pdf, 
 HDFS_truncate_semantics_Mar21.pdf, editsStored, editsStored.xml

   Original Estimate: 1,344h
  Remaining Estimate: 1,344h

 Systems with transaction support often need to undo changes made to the 
 underlying storage when a transaction is aborted. Currently HDFS does not 
 support truncate (a standard Posix operation), the reverse of append, which 
 forces upper-layer applications into ugly workarounds (such as keeping track 
 of the discarded byte 

[jira] [Commented] (HDFS-7600) Refine hdfs admin classes to reuse common code

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275255#comment-14275255
 ] 

Hudson commented on HDFS-7600:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #69 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/69/])
HDFS-7600. Refine hdfs admin classes to reuse common code. Contributed by Jing 
Zhao. (jing9: rev 6f3a63a41b90157c3e46ea20ca6170b854ea902e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/StoragePolicyAdmin.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Refine hdfs admin classes to reuse common code
 --

 Key: HDFS-7600
 URL: https://issues.apache.org/jira/browse/HDFS-7600
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Reporter: Yi Liu
Assignee: Jing Zhao
 Fix For: 2.7.0

 Attachments: HDFS-7600.000.patch, HDFS-7600.001.patch


 As noted in the review comments on HDFS-7323: {{StoragePolicyAdmin}} and the 
 other admin classes under {{org.apache.hadoop.hdfs.tools}}, such as 
 {{CacheAdmin}} and {{CryptoAdmin}}, duplicate many methods ({{getDFS}}, 
 {{prettifyException}}, {{getOptionDescriptionListing}} ...) as well as the 
 inner class {{HelpCommand}}. These duplicates are identical, so it would be 
 great to refine them and shift them into a common place; a sketch follows 
 below.
 This keeps the code clean and lets any new hdfs admin class added in the 
 future re-use them.
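 A sketch of the shared helper this points at (the {{AdminHelper}} name 
 matches the patch's file list; the bodies below are illustrative):
 {code}
 // Illustrative: the duplicated static utilities collected in one class so
 // CacheAdmin, CryptoAdmin and StoragePolicyAdmin stop copying them around.
 class AdminHelper {
   static DistributedFileSystem getDFS(Configuration conf) throws IOException {
     FileSystem fs = FileSystem.get(conf);
     if (!(fs instanceof DistributedFileSystem)) {
       throw new IllegalArgumentException(
           "FileSystem " + fs.getUri() + " is not an HDFS file system");
     }
     return (DistributedFileSystem) fs;
   }

   static String prettifyException(Exception e) {
     // One readable line for CLI users instead of a full stack trace.
     return e.getClass().getSimpleName() + ": "
         + e.getLocalizedMessage().split("\n")[0];
   }
 }
 {code}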



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7326) Add documentation for hdfs debug commands

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275267#comment-14275267
 ] 

Hudson commented on HDFS-7326:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2004 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2004/])
HDFS-7326. Add documentation for hdfs debug commands (Vijay Bhat via Colin P. 
McCabe) (cmccabe: rev b78b4a1536b6d47a37ff7c309857a628a864c957)
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HDFSCommands.apt.vm
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Add documentation for hdfs debug commands
 -

 Key: HDFS-7326
 URL: https://issues.apache.org/jira/browse/HDFS-7326
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Colin Patrick McCabe
Assignee: Vijay Bhat
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7326.001.patch, HDFS-7326.002.patch


 We should add documentation for hdfs debug commands.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7056) Snapshot support for truncate

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275266#comment-14275266
 ] 

Hudson commented on HDFS-7056:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2004 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2004/])
HDFS-7056. Snapshot support for truncate. Contributed by Konstantin Shvachko 
and Plamen Jeliazkov. (shv: rev 08ac06283a3e9bf0d49d873823aabd419b08e41f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/InterDatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/fsimage.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/InterDatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestInterDatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockRecoveryCommand.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java


 Snapshot support for truncate
 -

 Key: HDFS-7056
 URL: https://issues.apache.org/jira/browse/HDFS-7056
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: 

[jira] [Commented] (HDFS-7600) Refine hdfs admin classes to reuse common code

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275273#comment-14275273
 ] 

Hudson commented on HDFS-7600:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2004 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2004/])
HDFS-7600. Refine hdfs admin classes to reuse common code. Contributed by Jing 
Zhao. (jing9: rev 6f3a63a41b90157c3e46ea20ca6170b854ea902e)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/StoragePolicyAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java


 Refine hdfs admin classes to reuse common code
 --

 Key: HDFS-7600
 URL: https://issues.apache.org/jira/browse/HDFS-7600
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Reporter: Yi Liu
Assignee: Jing Zhao
 Fix For: 2.7.0

 Attachments: HDFS-7600.000.patch, HDFS-7600.001.patch


 As noted in the review comments on HDFS-7323: 
 {{StoragePolicyAdmin}} and the other classes under 
 {{org.apache.hadoop.hdfs.tools}}, such as {{CacheAdmin}} and {{CryptoAdmin}}, 
 duplicate many common methods ({{getDFS}}, {{prettifyException}}, 
 {{getOptionDescriptionListing}} ...) and an identical inner class 
 ({{HelpCommand}}); it would be great to refine these and move them into a 
 common place.
 This keeps the code clean and re-usable if we add a new hdfs admin class 
 in the future.
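 A minimal sketch of what such a shared helper could look like; the method 
 signatures below are guesses based on the names in this description, not the 
 committed AdminHelper API:
 {code:title=AdminHelperSketch.java|borderStyle=solid}
 import java.io.IOException;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.hdfs.DistributedFileSystem;

 class AdminHelperSketch {
   // Shared by CacheAdmin, CryptoAdmin, and StoragePolicyAdmin instead of
   // each class keeping its own private copy.
   static DistributedFileSystem getDFS(Configuration conf) throws IOException {
     FileSystem fs = FileSystem.get(conf);
     if (!(fs instanceof DistributedFileSystem)) {
       throw new IllegalArgumentException(
           "FileSystem " + fs.getUri() + " is not an HDFS file system");
     }
     return (DistributedFileSystem) fs;
   }

   // One-line, human-readable rendering of an exception for CLI output.
   static String prettifyException(Exception e) {
     return e.getClass().getSimpleName() + ": " + e.getLocalizedMessage();
   }
 }
 {code}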



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7533) Datanode sometimes does not shutdown on receiving upgrade shutdown command

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275274#comment-14275274
 ] 

Hudson commented on HDFS-7533:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2004 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2004/])
HDFS-7533. Datanode sometimes does not shutdown on receiving upgrade shutdown 
command. Contributed by Eric Payne. (kihwal: rev 
6bbf9fdd041d2413dd78e2bce51abae15f3334c2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeExit.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Datanode sometimes does not shutdown on receiving upgrade shutdown command
 --

 Key: HDFS-7533
 URL: https://issues.apache.org/jira/browse/HDFS-7533
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Eric Payne
 Fix For: 2.7.0

 Attachments: HDFS-7533.v1.txt


 When the datanode is told to shut down via the dfsadmin command during a 
 rolling upgrade, it may not shut down.  This is because not all writers have 
 a responder running, but sendOOB() is attempted anyway. The resulting NPE 
 kills the shutdown thread, halting the shutdown after only the 
 DataXceiverServer has been shut down. 
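 A hedged sketch of the kind of guard that avoids the NPE; the Writer class, 
 getResponder(), and the writer collection below are illustrative stand-ins, 
 not the actual DataNode code:
 {code:title=illustrative guard|borderStyle=solid}
 import java.util.Arrays;
 import java.util.List;

 class OobShutdownSketch {
   // Hypothetical stand-ins for the DataNode's writer and responder objects.
   static class Writer {
     final Thread responder;  // null when no responder thread is running
     Writer(Thread responder) { this.responder = responder; }
     Thread getResponder() { return responder; }
     void sendOOB() { System.out.println("OOB ack sent"); }
   }

   public static void main(String[] args) {
     List<Writer> activeWriters =
         Arrays.asList(new Writer(new Thread()), new Writer(null));
     for (Writer w : activeWriters) {
       // Guard: skip writers without a responder instead of hitting an NPE.
       if (w.getResponder() != null) {
         w.sendOOB();
       }
     }
   }
 }
 {code}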



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3107) HDFS truncate

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275272#comment-14275272
 ] 

Hudson commented on HDFS-3107:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2004 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2004/])
HDFS-3107. Introduce truncate. Contributed by Plamen Jeliazkov. (shv: rev 
7e9358feb326d48b8c4f00249e7af5023cebd2e2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/NameNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockRecoveryCommand.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java


 HDFS truncate
 -

 Key: HDFS-3107
 URL: https://issues.apache.org/jira/browse/HDFS-3107
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Lei Chang
Assignee: Plamen Jeliazkov
 Fix For: 3.0.0

 Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch, 
 HDFS-3107-15.patch, HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf, 
 HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate_semantics_Mar15.pdf, 
 HDFS_truncate_semantics_Mar21.pdf, editsStored, editsStored.xml

   Original Estimate: 1,344h
  Remaining Estimate: 1,344h

 Systems with transaction support often need to undo changes made to the 
 underlying storage when a transaction is aborted. Currently HDFS does not 
 support truncate (a standard POSIX operation), the reverse operation of 
 append, which forces upper-layer applications to use ugly workarounds (such 
 as keeping track of the discarded byte range per 

[jira] [Commented] (HDFS-7598) Remove dependency on old version of Guava in TestDFSClientCache#testEviction

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275268#comment-14275268
 ] 

Hudson commented on HDFS-7598:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2004 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2004/])
HDFS-7598. Remove dependency on old version of Guava in 
TestDFSClientCache#testEviction (Sangjin Lee via Colin P. McCabe) (cmccabe: rev 
b3ddd7ee39c92d2df8661ce5834a2831020cecb2)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestDFSClientCache.java


 Remove dependency on old version of Guava in TestDFSClientCache#testEviction
 

 Key: HDFS-7598
 URL: https://issues.apache.org/jira/browse/HDFS-7598
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7598.001.patch


 TestDFSClientCache.testEviction() is not entirely accurate in its usage of 
 the Guava LoadingCache.
 It sets the maximum size to 2, but asserts that the loading cache will 
 contain only 1 entry after inserting two entries. Guava's 
 CacheBuilder.maximumSize() makes only the following promise:
 {panel}
 Specifies the maximum number of entries the cache may contain. Note that the 
 cache may evict an entry before this limit is exceeded.
 {panel}
 Thus, the only invariant is that the loading cache will hold at most the 
 maximum number of entries. TestDFSClientCache.testEviction instead asserts 
 that it holds exactly maximum size - 1.
 For Guava 11.0.2 this happens to be true at maximum size = 2 because of the 
 way it sets the maximum segment weight. Later versions of Guava set the 
 maximum segment weight higher, making eviction less aggressive.
 The test should be fixed to assert only the true invariant.
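 A minimal, self-contained sketch of asserting only that invariant; the 
 loader, key type, and MAX_SIZE below are illustrative, not the actual 
 TestDFSClientCache code:
 {code:title=EvictionInvariantSketch.java|borderStyle=solid}
 import com.google.common.cache.CacheBuilder;
 import com.google.common.cache.CacheLoader;
 import com.google.common.cache.LoadingCache;

 public class EvictionInvariantSketch {
   private static final int MAX_SIZE = 2;  // illustrative cap

   public static void main(String[] args) throws Exception {
     LoadingCache<String, String> cache = CacheBuilder.newBuilder()
         .maximumSize(MAX_SIZE)
         .build(new CacheLoader<String, String>() {
           @Override
           public String load(String key) {
             return "client-" + key;  // stand-in for a cached DFS client
           }
         });
     cache.get("a");
     cache.get("b");
     cache.get("c");
     // maximumSize() only guarantees an upper bound, so this assertion holds
     // on any Guava version; asserting an exact post-eviction size does not.
     if (cache.size() > MAX_SIZE) {
       throw new AssertionError("cache exceeded maximumSize");
     }
   }
 }
 {code}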



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5445) PacketReceiver populates the packetLen field in PacketHeader incorrectly

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275271#comment-14275271
 ] 

Hudson commented on HDFS-5445:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2004 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2004/])
HDFS-5445. PacketReceiver populates the packetLen field in PacketHeader 
incorrectly (Jonathan Mace via Colin P. McCabe) (cmccabe: rev 
f761bd8fe472c311bdff7c9d469f2805b867855a)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PacketReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/TestPacketReceiver.java


 PacketReceiver populates the packetLen field in PacketHeader incorrectly
 

 Key: HDFS-5445
 URL: https://issues.apache.org/jira/browse/HDFS-5445
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.1.0-beta, 2.2.0
 Environment: Ubuntu 12.10, Hadoop 2.1.0-beta
Reporter: Jonathan Mace
Assignee: Jonathan Mace
Priority: Minor
  Labels: easyfix
 Fix For: 2.7.0

 Attachments: HDFS-5445.001.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Summary: PacketReceiver reconstructs PacketHeaders with a packetLen 4 bytes 
 smaller than it should be.  This causes no exceptions because the 
 reconstructed header is never reserialized, and the packetLen field is not 
 used in this part of the code.
 In the BlockSender class, when a Packet is constructed it must be passed the 
 field packetLen, which is defined as the data length plus the checksum data 
 length, PLUS the length of the packetLen field itself (a 4-byte integer).
 {code:title=BlockSender.java|borderStyle=solid}
 484:  private int sendPacket(ByteBuffer pkt, int maxChunks, OutputStream out,
 485:  boolean transferTo, DataTransferThrottler throttler) throws 
 IOException {
 ...
 491:int packetLen = dataLen + checksumDataLen + 4;
 ...
 504:int headerLen = writePacketHeader(pkt, dataLen, packetLen);
 ...
 586:  }
 ...
 792:  private int writePacketHeader(ByteBuffer pkt, int dataLen, int 
 packetLen) {
 793:pkt.clear();
 794:// both syncBlock and syncPacket are false
 795:PacketHeader header = new PacketHeader(packetLen, offset, seqno,
 796:(dataLen == 0), dataLen, false);
 ...
 802:  }
 {code}
 In the PacketReceiver class, the PacketHeader is reconstructed using the 
 method setFieldsFromData.  However, the 4 bytes for the packetLen field 
 itself are missing.
 {code:title=PacketReceiver.java|borderStyle=solid}
 112:  private void doRead(ReadableByteChannel ch, InputStream in)
 113:  throws IOException {
 ...
 136:int payloadLen = curPacketBuf.getInt();
 ...
 144:int dataPlusChecksumLen = payloadLen - Ints.BYTES;
 ...
 181:curHeader.setFieldsFromData(dataPlusChecksumLen, headerBuf);
 ...
 192:  }
 {code}
 The solution would instead be: 
 {code:title=PacketReceiver.java|borderStyle=solid}
 181:curHeader.setFieldsFromData(payloadLen, headerBuf);
 {code}
 I found this because I was making small modifications to the code that 
 exposed this inconsistency.
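 To make the off-by-four concrete, here is a short worked example with 
 illustrative numbers (the values are assumptions for the sake of the 
 arithmetic, not taken from a real packet):
 {code:title=worked example|borderStyle=solid}
 // Sender (BlockSender): dataLen = 1000, checksumDataLen = 8
 //   packetLen = 1000 + 8 + 4 = 1012   // includes its own 4-byte length field
 // Receiver (PacketReceiver): payloadLen read from the wire = 1012
 //   dataPlusChecksumLen = 1012 - 4 = 1008
 // Passing dataPlusChecksumLen to setFieldsFromData() rebuilds the header with
 // packetLen = 1008 instead of 1012; passing payloadLen, as the fix does,
 // restores 1012.
 {code}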



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5445) PacketReceiver populates the packetLen field in PacketHeader incorrectly

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275253#comment-14275253
 ] 

Hudson commented on HDFS-5445:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #69 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/69/])
HDFS-5445. PacketReceiver populates the packetLen field in PacketHeader 
incorrectly (Jonathan Mace via Colin P. McCabe) (cmccabe: rev 
f761bd8fe472c311bdff7c9d469f2805b867855a)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/TestPacketReceiver.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PacketReceiver.java


 PacketReceiver populates the packetLen field in PacketHeader incorrectly
 

 Key: HDFS-5445
 URL: https://issues.apache.org/jira/browse/HDFS-5445
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.1.0-beta, 2.2.0
 Environment: Ubuntu 12.10, Hadoop 2.1.0-beta
Reporter: Jonathan Mace
Assignee: Jonathan Mace
Priority: Minor
  Labels: easyfix
 Fix For: 2.7.0

 Attachments: HDFS-5445.001.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Summary: PacketReceiver reconstructs PacketHeaders with a packetLen 4 bytes 
 smaller than it should be.  This causes no exceptions because the 
 reconstructed header is never reserialized, and the packetLen field is not 
 used in this part of the code.
 In the BlockSender class, when a Packet is constructed it must be passed the 
 field packetLen, which is defined as the data length plus the checksum data 
 length, PLUS the length of the packetLen field itself (a 4-byte integer).
 {code:title=BlockSender.java|borderStyle=solid}
 484:  private int sendPacket(ByteBuffer pkt, int maxChunks, OutputStream out,
 485:  boolean transferTo, DataTransferThrottler throttler) throws 
 IOException {
 ...
 491:int packetLen = dataLen + checksumDataLen + 4;
 ...
 504:int headerLen = writePacketHeader(pkt, dataLen, packetLen);
 ...
 586:  }
 ...
 792:  private int writePacketHeader(ByteBuffer pkt, int dataLen, int 
 packetLen) {
 793:pkt.clear();
 794:// both syncBlock and syncPacket are false
 795:PacketHeader header = new PacketHeader(packetLen, offset, seqno,
 796:(dataLen == 0), dataLen, false);
 ...
 802:  }
 {code}
 In the PacketReceiver class, the PacketHeader is reconstructed using the 
 method setFieldsFromData.  However, the 4 bytes for the packetLen field 
 itself are missing.
 {code:title=PacketReceiver.java|borderStyle=solid}
 112:  private void doRead(ReadableByteChannel ch, InputStream in)
 113:  throws IOException {
 ...
 136:int payloadLen = curPacketBuf.getInt();
 ...
 144:int dataPlusChecksumLen = payloadLen - Ints.BYTES;
 ...
 181:curHeader.setFieldsFromData(dataPlusChecksumLen, headerBuf);
 ...
 192:  }
 {code}
 The solution would instead be: 
 {code:title=PacketReceiver.java|borderStyle=solid}
 181:curHeader.setFieldsFromData(payloadLen, headerBuf);
 {code}
 I found this because I was making small modifications to the code that 
 exposed this inconsistency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7326) Add documentation for hdfs debug commands

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275249#comment-14275249
 ] 

Hudson commented on HDFS-7326:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #69 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/69/])
HDFS-7326. Add documentation for hdfs debug commands (Vijay Bhat via Colin P. 
McCabe) (cmccabe: rev b78b4a1536b6d47a37ff7c309857a628a864c957)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HDFSCommands.apt.vm


 Add documentation for hdfs debug commands
 -

 Key: HDFS-7326
 URL: https://issues.apache.org/jira/browse/HDFS-7326
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Colin Patrick McCabe
Assignee: Vijay Bhat
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7326.001.patch, HDFS-7326.002.patch


 We should add documentation for hdfs debug commands.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7533) Datanode sometimes does not shutdown on receiving upgrade shutdown command

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275256#comment-14275256
 ] 

Hudson commented on HDFS-7533:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #69 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/69/])
HDFS-7533. Datanode sometimes does not shutdown on receiving upgrade shutdown 
command. Contributed by Eric Payne. (kihwal: rev 
6bbf9fdd041d2413dd78e2bce51abae15f3334c2)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeExit.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


 Datanode sometimes does not shutdown on receiving upgrade shutdown command
 --

 Key: HDFS-7533
 URL: https://issues.apache.org/jira/browse/HDFS-7533
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Eric Payne
 Fix For: 2.7.0

 Attachments: HDFS-7533.v1.txt


 When the datanode is told to shut down via the dfsadmin command during a 
 rolling upgrade, it may not shut down.  This is because not all writers have 
 a responder running, but sendOOB() is attempted anyway. The resulting NPE 
 kills the shutdown thread, halting the shutdown after only the 
 DataXceiverServer has been shut down. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7470) SecondaryNameNode need twice memory when calling reloadFromImageFile

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275013#comment-14275013
 ] 

Hadoop QA commented on HDFS-7470:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12691879/HDFS-7470.2.patch
  against trunk revision c4cba61.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.TestHFlush

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9196//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9196//console

This message is automatically generated.

 SecondaryNameNode need twice memory when calling reloadFromImageFile
 

 Key: HDFS-7470
 URL: https://issues.apache.org/jira/browse/HDFS-7470
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: zhaoyunjiong
Assignee: zhaoyunjiong
 Attachments: HDFS-7470.1.patch, HDFS-7470.2.patch, HDFS-7470.patch, 
 secondaryNameNode.jstack.txt


 histo information at 2014-12-02 01:19
 {quote}
  num     #instances         #bytes  class name
 ----------------------------------------------
    1:     186449630    19326123016  [Ljava.lang.Object;
    2:     157366649    15107198304  org.apache.hadoop.hdfs.server.namenode.INodeFile
    3:     183409030    11738177920  org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo
    4:     157358401     5244264024  [Lorg.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
    5:             3     3489661000  [Lorg.apache.hadoop.util.LightWeightGSet$LinkedElement;
    6:      29253275     1872719664  [B
    7:       3230821      284312248  org.apache.hadoop.hdfs.server.namenode.INodeDirectory
    8:       2756284      110251360  java.util.ArrayList
    9:        469158       22519584  org.apache.hadoop.fs.permission.AclEntry
   10:           847       17133032  [Ljava.util.HashMap$Entry;
   11:        188471       17059632  [C
   12:        314614       10067656  [Lorg.apache.hadoop.hdfs.server.namenode.INode$Feature;
   13:        234579        9383160  com.google.common.collect.RegularImmutableList
   14:         49584        6850280  constMethodKlass
   15:         49584        6356704  methodKlass
   16:        187270        5992640  java.lang.String
   17:        234579        5629896  org.apache.hadoop.hdfs.server.namenode.AclFeature
 {quote}
 histo information at 2014-12-02 01:32
 {quote}
  num     #instances         #bytes  class name
 ----------------------------------------------
    1:     355838051    35566651032  [Ljava.lang.Object;
    2:     302272758    29018184768  org.apache.hadoop.hdfs.server.namenode.INodeFile
    3:     352500723    22560046272  org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo
    4:     302264510    10075087952  [Lorg.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
    5:     177120233     9374983920  [B
    6:             3     3489661000  [Lorg.apache.hadoop.util.LightWeightGSet$LinkedElement;
    7:       6191688      544868544  org.apache.hadoop.hdfs.server.namenode.INodeDirectory
    8:       2799256      111970240  java.util.ArrayList
    9:        890728       42754944  org.apache.hadoop.fs.permission.AclEntry
   10:        330986       29974408  [C
   11:        596871       19099880  [Lorg.apache.hadoop.hdfs.server.namenode.INode$Feature;
   12:        445364       17814560  com.google.common.collect.RegularImmutableList
   13:           844       17132816  [Ljava.util.HashMap$Entry;
   14:        445364       10688736  org.apache.hadoop.hdfs.server.namenode.AclFeature
   15:        329789       10553248  java.lang.String
   16:         91741        8807136  org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction
   17:         49584        6850280  constMethodKlass
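 Comparing the two histograms bears out the title: between 01:19 and 01:32 
 the dominant classes roughly double. A quick check on the two largest rows 
 (numbers taken from the histograms above):
 {code}
 // INodeFile bytes:  15107198304 -> 29018184768   (~1.92x)
 // BlockInfo bytes:  11738177920 -> 22560046272   (~1.92x)
 {code}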
 

[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-01-13 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Assignee: Zhang Guilin

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer  mover
Affects Versions: 2.3.0
Reporter: Doris Gu
Assignee: Zhang Guilin
Priority: Minor

 Some operations, for example balance, parse the configuration (from 
 core-site.xml and hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method treats URIs ending with and without a trailing / as two 
 different URIs, so the following operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 uris, which are actually the same.
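 A hedged sketch of normalizing the trailing slash before comparing 
 name-service URIs (illustrative only, not the actual fix):
 {code:title=NameServiceUrisSketch.java|borderStyle=solid}
 import java.net.URI;
 import java.util.HashSet;
 import java.util.Set;

 class NameServiceUrisSketch {
   // Rebuild the URI from scheme and authority so that hdfs://haCluster and
   // hdfs://haCluster/ compare as the same name service.
   static URI normalize(URI uri) {
     return URI.create(uri.getScheme() + "://" + uri.getAuthority());
   }

   public static void main(String[] args) {
     Set<URI> uris = new HashSet<>();
     uris.add(normalize(URI.create("hdfs://haCluster")));
     uris.add(normalize(URI.create("hdfs://haCluster/")));
     System.out.println(uris.size());  // prints 1: the two URIs collapse
   }
 }
 {code}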



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7056) Snapshot support for truncate

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275082#comment-14275082
 ] 

Hudson commented on HDFS-7056:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #72 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/72/])
HDFS-7056. Snapshot support for truncate. Contributed by Konstantin Shvachko 
and Plamen Jeliazkov. (shv: rev 08ac06283a3e9bf0d49d873823aabd419b08e41f)
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestInterDatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/InterDatanodeProtocol.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/InterDatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockRecoveryCommand.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/fsimage.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java


 Snapshot support for truncate
 -

 Key: HDFS-7056
 URL: https://issues.apache.org/jira/browse/HDFS-7056
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko

[jira] [Commented] (HDFS-7598) Remove dependency on old version of Guava in TestDFSClientCache#testEviction

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275084#comment-14275084
 ] 

Hudson commented on HDFS-7598:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #72 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/72/])
HDFS-7598. Remove dependency on old version of Guava in 
TestDFSClientCache#testEviction (Sangjin Lee via Colin P. McCabe) (cmccabe: rev 
b3ddd7ee39c92d2df8661ce5834a2831020cecb2)
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestDFSClientCache.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove dependency on old version of Guava in TestDFSClientCache#testEviction
 

 Key: HDFS-7598
 URL: https://issues.apache.org/jira/browse/HDFS-7598
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7598.001.patch


 TestDFSClientCache.testEviction() is not entirely accurate in its usage of 
 the Guava LoadingCache.
 It sets the maximum size to 2, but asserts that the loading cache will 
 contain only 1 entry after inserting two entries. Guava's 
 CacheBuilder.maximumSize() makes only the following promise:
 {panel}
 Specifies the maximum number of entries the cache may contain. Note that the 
 cache may evict an entry before this limit is exceeded.
 {panel}
 Thus, the only invariant is that the loading cache will hold at most the 
 maximum number of entries. TestDFSClientCache.testEviction instead asserts 
 that it holds exactly maximum size - 1.
 For Guava 11.0.2 this happens to be true at maximum size = 2 because of the 
 way it sets the maximum segment weight. Later versions of Guava set the 
 maximum segment weight higher, making eviction less aggressive.
 The test should be fixed to assert only the true invariant.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3107) HDFS truncate

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275088#comment-14275088
 ] 

Hudson commented on HDFS-3107:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #72 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/72/])
HDFS-3107. Introduce truncate. Contributed by Plamen Jeliazkov. (shv: rev 
7e9358feb326d48b8c4f00249e7af5023cebd2e2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockRecoveryCommand.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/NameNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java


 HDFS truncate
 -

 Key: HDFS-3107
 URL: https://issues.apache.org/jira/browse/HDFS-3107
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Lei Chang
Assignee: Plamen Jeliazkov
 Fix For: 3.0.0

 Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch, 
 HDFS-3107-15.patch, HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf, 
 HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate_semantics_Mar15.pdf, 
 HDFS_truncate_semantics_Mar21.pdf, editsStored, editsStored.xml

   Original Estimate: 1,344h
  Remaining Estimate: 1,344h

 Systems with transaction support often need to undo changes made to the 
 underlying storage when a transaction is aborted. Currently HDFS does not 
 support truncate (a standard POSIX operation), the reverse operation of 
 append, which forces upper-layer applications to use ugly workarounds (such 
 as keeping track of the discarded byte 

[jira] [Commented] (HDFS-7326) Add documentation for hdfs debug commands

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275083#comment-14275083
 ] 

Hudson commented on HDFS-7326:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #72 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/72/])
HDFS-7326. Add documentation for hdfs debug commands (Vijay Bhat via Colin P. 
McCabe) (cmccabe: rev b78b4a1536b6d47a37ff7c309857a628a864c957)
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HDFSCommands.apt.vm
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Add documentation for hdfs debug commands
 -

 Key: HDFS-7326
 URL: https://issues.apache.org/jira/browse/HDFS-7326
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Colin Patrick McCabe
Assignee: Vijay Bhat
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7326.001.patch, HDFS-7326.002.patch


 We should add documentation for hdfs debug commands.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5445) PacketReceiver populates the packetLen field in PacketHeader incorrectly

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275087#comment-14275087
 ] 

Hudson commented on HDFS-5445:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #72 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/72/])
HDFS-5445. PacketReceiver populates the packetLen field in PacketHeader 
incorrectly (Jonathan Mace via Colin P. McCabe) (cmccabe: rev 
f761bd8fe472c311bdff7c9d469f2805b867855a)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PacketReceiver.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/TestPacketReceiver.java


 PacketReceiver populates the packetLen field in PacketHeader incorrectly
 

 Key: HDFS-5445
 URL: https://issues.apache.org/jira/browse/HDFS-5445
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.1.0-beta, 2.2.0
 Environment: Ubuntu 12.10, Hadoop 2.1.0-beta
Reporter: Jonathan Mace
Assignee: Jonathan Mace
Priority: Minor
  Labels: easyfix
 Fix For: 2.7.0

 Attachments: HDFS-5445.001.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Summary: PacketReceiver reconstructs PacketHeaders with a packetLen 4 bytes 
 smaller than it should be.  This causes no exceptions because the 
 reconstructed header is never reserialized, and the packetLen field is not 
 used in this part of the code.
 In the BlockSender class, when a Packet is constructed it must be passed the 
 field packetLen, which is defined as the data length plus the checksum data 
 length, PLUS the length of the packetLen field itself (a 4-byte integer).
 {code:title=BlockSender.java|borderStyle=solid}
 484:  private int sendPacket(ByteBuffer pkt, int maxChunks, OutputStream out,
 485:  boolean transferTo, DataTransferThrottler throttler) throws 
 IOException {
 ...
 491:int packetLen = dataLen + checksumDataLen + 4;
 ...
 504:int headerLen = writePacketHeader(pkt, dataLen, packetLen);
 ...
 586:  }
 ...
 792:  private int writePacketHeader(ByteBuffer pkt, int dataLen, int 
 packetLen) {
 793:pkt.clear();
 794:// both syncBlock and syncPacket are false
 795:PacketHeader header = new PacketHeader(packetLen, offset, seqno,
 796:(dataLen == 0), dataLen, false);
 ...
 802:  }
 {code}
 In the PacketReceiver class, the PacketHeader is reconstructed using the 
 method setFieldsFromData.  However, the 4 bytes for the packetLen field 
 itself are missing.
 {code:title=PacketReceiver.java|borderStyle=solid}
 112:  private void doRead(ReadableByteChannel ch, InputStream in)
 113:  throws IOException {
 ...
 136:int payloadLen = curPacketBuf.getInt();
 ...
 144:int dataPlusChecksumLen = payloadLen - Ints.BYTES;
 ...
 181:curHeader.setFieldsFromData(dataPlusChecksumLen, headerBuf);
 ...
 192:  }
 {code}
 The solution would instead be: 
 {code:title=PacketReceiver.java|borderStyle=solid}
 181:curHeader.setFieldsFromData(payloadLen, headerBuf);
 {code}
 I found this because I was making small modifications to the code that 
 exposed this inconsistency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7056) Snapshot support for truncate

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275100#comment-14275100
 ] 

Hudson commented on HDFS-7056:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #806 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/806/])
HDFS-7056. Snapshot support for truncate. Contributed by Konstantin Shvachko 
and Plamen Jeliazkov. (shv: rev 08ac06283a3e9bf0d49d873823aabd419b08e41f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestInterDatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/InterDatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockRecoveryCommand.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/InterDatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/fsimage.proto
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolTranslatorPB.java


 Snapshot support for truncate
 -

 Key: HDFS-7056
 URL: https://issues.apache.org/jira/browse/HDFS-7056
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: 

[jira] [Commented] (HDFS-7533) Datanode sometimes does not shutdown on receiving upgrade shutdown command

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275108#comment-14275108
 ] 

Hudson commented on HDFS-7533:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #806 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/806/])
HDFS-7533. Datanode sometimes does not shutdown on receiving upgrade shutdown 
command. Contributed by Eric Payne. (kihwal: rev 
6bbf9fdd041d2413dd78e2bce51abae15f3334c2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeExit.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Datanode sometimes does not shutdown on receiving upgrade shutdown command
 --

 Key: HDFS-7533
 URL: https://issues.apache.org/jira/browse/HDFS-7533
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Eric Payne
 Fix For: 2.7.0

 Attachments: HDFS-7533.v1.txt


 When the datanode is told to shut down via the dfsadmin command during a 
 rolling upgrade, it may not shut down.  This is because not all writers have 
 a responder running, but sendOOB() is attempted anyway. The resulting NPE 
 kills the shutdown thread, halting the shutdown after only the 
 DataXceiverServer has been shut down. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7326) Add documentation for hdfs debug commands

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275101#comment-14275101
 ] 

Hudson commented on HDFS-7326:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #806 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/806/])
HDFS-7326. Add documentation for hdfs debug commands (Vijay Bhat via Colin P. 
McCabe) (cmccabe: rev b78b4a1536b6d47a37ff7c309857a628a864c957)
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HDFSCommands.apt.vm
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Add documentation for hdfs debug commands
 -

 Key: HDFS-7326
 URL: https://issues.apache.org/jira/browse/HDFS-7326
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Colin Patrick McCabe
Assignee: Vijay Bhat
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7326.001.patch, HDFS-7326.002.patch


 We should add documentation for hdfs debug commands.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7600) Refine hdfs admin classes to reuse common code

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275089#comment-14275089
 ] 

Hudson commented on HDFS-7600:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #72 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/72/])
HDFS-7600. Refine hdfs admin classes to reuse common code. Contributed by Jing 
Zhao. (jing9: rev 6f3a63a41b90157c3e46ea20ca6170b854ea902e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/StoragePolicyAdmin.java


 Refine hdfs admin classes to reuse common code
 --

 Key: HDFS-7600
 URL: https://issues.apache.org/jira/browse/HDFS-7600
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Reporter: Yi Liu
Assignee: Jing Zhao
 Fix For: 2.7.0

 Attachments: HDFS-7600.000.patch, HDFS-7600.001.patch


 As noted in the review comments on HDFS-7323: 
 {{StoragePolicyAdmin}} and the other classes under 
 {{org.apache.hadoop.hdfs.tools}}, such as {{CacheAdmin}} and {{CryptoAdmin}}, 
 duplicate many common methods ({{getDFS}}, {{prettifyException}}, 
 {{getOptionDescriptionListing}} ...) and an identical inner class 
 ({{HelpCommand}}); it would be great to refine these and move them into a 
 common place.
 This keeps the code clean and re-usable if we add a new hdfs admin class 
 in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7598) Remove dependency on old version of Guava in TestDFSClientCache#testEviction

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275102#comment-14275102
 ] 

Hudson commented on HDFS-7598:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #806 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/806/])
HDFS-7598. Remove dependency on old version of Guava in 
TestDFSClientCache#testEviction (Sangjin Lee via Colin P. McCabe) (cmccabe: rev 
b3ddd7ee39c92d2df8661ce5834a2831020cecb2)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestDFSClientCache.java


 Remove dependency on old version of Guava in TestDFSClientCache#testEviction
 

 Key: HDFS-7598
 URL: https://issues.apache.org/jira/browse/HDFS-7598
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7598.001.patch


 TestDFSClientCache.testEviction() is not entirely accurate in its usage of 
 the Guava LoadingCache.
 It sets the maximum size to 2, but asserts that the loading cache will 
 contain only 1 entry after inserting two entries. Guava's 
 CacheBuilder.maximumSize() makes only the following promise:
 {panel}
 Specifies the maximum number of entries the cache may contain. Note that the 
 cache may evict an entry before this limit is exceeded.
 {panel}
 Thus, the only invariant is that the loading cache will hold at most the 
 maximum number of entries. TestDFSClientCache.testEviction instead asserts 
 that it holds exactly maximum size - 1.
 For Guava 11.0.2 this happens to be true at maximum size = 2 because of the 
 way it sets the maximum segment weight. Later versions of Guava set the 
 maximum segment weight higher, making eviction less aggressive.
 The test should be fixed to assert only the true invariant.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5445) PacketReceiver populates the packetLen field in PacketHeader incorrectly

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275105#comment-14275105
 ] 

Hudson commented on HDFS-5445:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #806 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/806/])
HDFS-5445. PacketReceiver populates the packetLen field in PacketHeader 
incorrectly (Jonathan Mace via Colin P. McCabe) (cmccabe: rev 
f761bd8fe472c311bdff7c9d469f2805b867855a)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PacketReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/TestPacketReceiver.java


 PacketReceiver populates the packetLen field in PacketHeader incorrectly
 

 Key: HDFS-5445
 URL: https://issues.apache.org/jira/browse/HDFS-5445
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.1.0-beta, 2.2.0
 Environment: Ubuntu 12.10, Hadoop 2.1.0-beta
Reporter: Jonathan Mace
Assignee: Jonathan Mace
Priority: Minor
  Labels: easyfix
 Fix For: 2.7.0

 Attachments: HDFS-5445.001.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Summary: PacketReceiver reconstructs PacketHeaders with a packetLen 4 bytes 
 smaller than it should be.  This causes no exceptions because the 
 reconstructed header is never reserialized, and the packetLen field is not 
 used in this part of the code.
 In the BlockSender class, when a Packet is constructed it must be passed the 
 field packetLen, which is defined as the data length plus the checksum data 
 length, PLUS the length of the packetLen field itself (a 4-byte integer).
 {code:title=BlockSender.java|borderStyle=solid}
 484:  private int sendPacket(ByteBuffer pkt, int maxChunks, OutputStream out,
 485:  boolean transferTo, DataTransferThrottler throttler) throws 
 IOException {
 ...
 491:int packetLen = dataLen + checksumDataLen + 4;
 ...
 504:int headerLen = writePacketHeader(pkt, dataLen, packetLen);
 ...
 586:  }
 ...
 792:  private int writePacketHeader(ByteBuffer pkt, int dataLen, int 
 packetLen) {
 793:pkt.clear();
 794:// both syncBlock and syncPacket are false
 795:PacketHeader header = new PacketHeader(packetLen, offset, seqno,
 796:(dataLen == 0), dataLen, false);
 ...
 802:  }
 {code}
 In the PacketReceiver class, the PacketHeader is reconstructed using the 
 method setFieldsFromData.  However, the 4 bytes for the packetLen field 
 itself are missing.
 {code:title=PacketReceiver.java|borderStyle=solid}
 112:  private void doRead(ReadableByteChannel ch, InputStream in)
 113:  throws IOException {
 ...
 136:int payloadLen = curPacketBuf.getInt();
 ...
 144:int dataPlusChecksumLen = payloadLen - Ints.BYTES;
 ...
 181:curHeader.setFieldsFromData(dataPlusChecksumLen, headerBuf);
 ...
 192:  }
 {code}
 The solution would instead be: 
 {code:title=PacketReceiver.java|borderStyle=solid}
 181:curHeader.setFieldsFromData(payloadLen, headerBuf);
 {code}
 I found this because I was making small modifications to the code that 
 exposed this inconsistency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7600) Refine hdfs admin classes to reuse common code

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275107#comment-14275107
 ] 

Hudson commented on HDFS-7600:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #806 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/806/])
HDFS-7600. Refine hdfs admin classes to reuse common code. Contributed by Jing 
Zhao. (jing9: rev 6f3a63a41b90157c3e46ea20ca6170b854ea902e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/StoragePolicyAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java


 Refine hdfs admin classes to reuse common code
 --

 Key: HDFS-7600
 URL: https://issues.apache.org/jira/browse/HDFS-7600
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Reporter: Yi Liu
Assignee: Jing Zhao
 Fix For: 2.7.0

 Attachments: HDFS-7600.000.patch, HDFS-7600.001.patch


 As noted in the review comments on HDFS-7323: 
 {{StoragePolicyAdmin}} and the other classes under 
 {{org.apache.hadoop.hdfs.tools}}, such as {{CacheAdmin}} and {{CryptoAdmin}}, 
 duplicate many common methods ({{getDFS}}, {{prettifyException}}, 
 {{getOptionDescriptionListing}} ...) and an identical inner class 
 ({{HelpCommand}}); it would be great to refine these and move them into a 
 common place.
 This keeps the code clean and re-usable if we add a new hdfs admin class 
 in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3107) HDFS truncate

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275106#comment-14275106
 ] 

Hudson commented on HDFS-3107:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #806 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/806/])
HDFS-3107. Introduce truncate. Contributed by Plamen Jeliazkov. (shv: rev 
7e9358feb326d48b8c4f00249e7af5023cebd2e2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/NameNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockRecoveryCommand.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored


 HDFS truncate
 -

 Key: HDFS-3107
 URL: https://issues.apache.org/jira/browse/HDFS-3107
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Lei Chang
Assignee: Plamen Jeliazkov
 Fix For: 3.0.0

 Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch, 
 HDFS-3107-15.patch, HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf, 
 HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate_semantics_Mar15.pdf, 
 HDFS_truncate_semantics_Mar21.pdf, editsStored, editsStored.xml

   Original Estimate: 1,344h
  Remaining Estimate: 1,344h

 Systems with transaction support often need to undo changes made to the 
 underlying storage when a transaction is aborted. Currently HDFS does not 
 support truncate (a standard POSIX operation), which is the reverse operation 
 of append; this forces upper-layer applications to use ugly workarounds (such 
 as keeping track of the discarded byte range per 

[jira] [Commented] (HDFS-7533) Datanode sometimes does not shutdown on receiving upgrade shutdown command

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275090#comment-14275090
 ] 

Hudson commented on HDFS-7533:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #72 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/72/])
HDFS-7533. Datanode sometimes does not shutdown on receiving upgrade shutdown 
command. Contributed by Eric Payne. (kihwal: rev 
6bbf9fdd041d2413dd78e2bce51abae15f3334c2)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeExit.java


 Datanode sometimes does not shutdown on receiving upgrade shutdown command
 --

 Key: HDFS-7533
 URL: https://issues.apache.org/jira/browse/HDFS-7533
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Eric Payne
 Fix For: 2.7.0

 Attachments: HDFS-7533.v1.txt


 When the datanode is told to shut down via the dfsadmin command during a 
 rolling upgrade, it may not shut down. This is because not all writers have a 
 responder running, but sendOOB() tries anyway. This causes an NPE, and the 
 shutdown thread dies, halting the shutdown after only shutting down the 
 DataXceiverServer. 
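 A minimal sketch of the kind of guard that avoids the NPE (hypothetical names; 
 the attached patch is authoritative):
 {code}
 // Sketch: only send the OOB ack to writers that actually have a responder.
 for (DataXceiver writer : activeWriters) {   // hypothetical collection
   if (writer.getResponder() != null) {       // hypothetical accessor
     writer.sendOOB();
   }
 }
 {code}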



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7601) Operations(e.g. balance) failed due to deficient configuration parsing

2015-01-13 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-7601:
---
Assignee: (was: Zhang Guilin)

 Operations(e.g. balance) failed due to deficient configuration parsing
 --

 Key: HDFS-7601
 URL: https://issues.apache.org/jira/browse/HDFS-7601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover
Affects Versions: 2.3.0
Reporter: Doris Gu
Priority: Minor

 Some operations, for example balance, parse the configuration (from 
 core-site.xml and hdfs-site.xml) to get the NameServiceUris to connect to.
 The current method considers URIs that end with and without a trailing "/" to 
 be two different URIs, so subsequent operations may hit errors.
 bq. [hdfs://haCluster, hdfs://haCluster/] are considered to be two different 
 uris, which are actually the same.
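 A minimal sketch of the kind of normalization that would make the two forms 
 compare as equal (a hypothetical helper, not the actual patch):
 {code}
 // Sketch: canonicalize a name-service URI so that "hdfs://haCluster" and
 // "hdfs://haCluster/" collapse to the same value before deduplication.
 static URI normalizeNameServiceUri(URI uri) throws URISyntaxException {
   String path = uri.getPath();
   if (path == null || path.isEmpty()) {
     path = "/";  // treat the empty path and the root path as the same
   }
   return new URI(uri.getScheme(), uri.getAuthority(), path, null, null);
 }
 {code}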



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3107) HDFS truncate

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275329#comment-14275329
 ] 

Hudson commented on HDFS-3107:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #73 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/73/])
HDFS-3107. Introduce truncate. Contributed by Plamen Jeliazkov. (shv: rev 
7e9358feb326d48b8c4f00249e7af5023cebd2e2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockRecoveryCommand.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/NameNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java


 HDFS truncate
 -

 Key: HDFS-3107
 URL: https://issues.apache.org/jira/browse/HDFS-3107
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Lei Chang
Assignee: Plamen Jeliazkov
 Fix For: 3.0.0

 Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch, 
 HDFS-3107-15.patch, HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf, 
 HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate_semantics_Mar15.pdf, 
 HDFS_truncate_semantics_Mar21.pdf, editsStored, editsStored.xml

   Original Estimate: 1,344h
  Remaining Estimate: 1,344h

 Systems with transaction support often need to undo changes made to the 
 underlying storage when a transaction is aborted. Currently HDFS does not 
 support truncate (a standard POSIX operation), which is the reverse operation 
 of append; this forces upper-layer applications to use ugly workarounds (such 
 as keeping track of the 

[jira] [Commented] (HDFS-7326) Add documentation for hdfs debug commands

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275324#comment-14275324
 ] 

Hudson commented on HDFS-7326:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #73 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/73/])
HDFS-7326. Add documentation for hdfs debug commands (Vijay Bhat via Colin P. 
McCabe) (cmccabe: rev b78b4a1536b6d47a37ff7c309857a628a864c957)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HDFSCommands.apt.vm


 Add documentation for hdfs debug commands
 -

 Key: HDFS-7326
 URL: https://issues.apache.org/jira/browse/HDFS-7326
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Colin Patrick McCabe
Assignee: Vijay Bhat
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7326.001.patch, HDFS-7326.002.patch


 We should add documentation for hdfs debug commands.
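 For reference, the debug subcommands to be documented are invoked along these 
 lines (illustrative usage; the exact subcommand and option names should follow 
 the attached patch):
 {code}
 # Verify a block's metadata file against its block file:
 hdfs debug verify -meta <metadata-file> [-block <block-file>]
 # Force lease recovery on an open file:
 hdfs debug recoverLease -path <path> [-retries <num-retries>]
 {code}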



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7533) Datanode sometimes does not shutdown on receiving upgrade shutdown command

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275331#comment-14275331
 ] 

Hudson commented on HDFS-7533:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #73 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/73/])
HDFS-7533. Datanode sometimes does not shutdown on receiving upgrade shutdown 
command. Contributed by Eric Payne. (kihwal: rev 
6bbf9fdd041d2413dd78e2bce51abae15f3334c2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeExit.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Datanode sometimes does not shutdown on receiving upgrade shutdown command
 --

 Key: HDFS-7533
 URL: https://issues.apache.org/jira/browse/HDFS-7533
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Eric Payne
 Fix For: 2.7.0

 Attachments: HDFS-7533.v1.txt


 When the datanode is told to shut down via the dfsadmin command during a 
 rolling upgrade, it may not shut down. This is because not all writers have a 
 responder running, but sendOOB() tries anyway. This causes an NPE, and the 
 shutdown thread dies, halting the shutdown after only shutting down the 
 DataXceiverServer. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5445) PacketReceiver populates the packetLen field in PacketHeader incorrectly

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275328#comment-14275328
 ] 

Hudson commented on HDFS-5445:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #73 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/73/])
HDFS-5445. PacketReceiver populates the packetLen field in PacketHeader 
incorrectly (Jonathan Mace via Colin P. McCabe) (cmccabe: rev 
f761bd8fe472c311bdff7c9d469f2805b867855a)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PacketReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/TestPacketReceiver.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 PacketReceiver populates the packetLen field in PacketHeader incorrectly
 

 Key: HDFS-5445
 URL: https://issues.apache.org/jira/browse/HDFS-5445
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.1.0-beta, 2.2.0
 Environment: Ubuntu 12.10, Hadoop 2.1.0-beta
Reporter: Jonathan Mace
Assignee: Jonathan Mace
Priority: Minor
  Labels: easyfix
 Fix For: 2.7.0

 Attachments: HDFS-5445.001.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Summary: PacketReceiver reconstructs PacketHeaders with a packetLen 4 bytes 
 smaller than it should be. It doesn't cause any exceptions because the 
 reconstructed header is never reserialized, and the packetLen field is not 
 used in this part of the code.
 In the BlockSender class, when a Packet is constructed it must be passed the 
 field packetLen, which is defined as the data length, plus the checksum data 
 length, plus the length of the packetLen field itself (a 4-byte integer).
 {code:title=BlockSender.java|borderStyle=solid}
 484:  private int sendPacket(ByteBuffer pkt, int maxChunks, OutputStream out,
 485:      boolean transferTo, DataTransferThrottler throttler) throws IOException {
 ...
 491:    int packetLen = dataLen + checksumDataLen + 4;
 ...
 504:    int headerLen = writePacketHeader(pkt, dataLen, packetLen);
 ...
 586:  }
 ...
 792:  private int writePacketHeader(ByteBuffer pkt, int dataLen, int packetLen) {
 793:    pkt.clear();
 794:    // both syncBlock and syncPacket are false
 795:    PacketHeader header = new PacketHeader(packetLen, offset, seqno,
 796:        (dataLen == 0), dataLen, false);
 ...
 802:  }
 {code}
 In the PacketReceiver class, the PacketHeader is reconstructed using the 
 method setFieldsFromData.  However, the 4 bytes for the packetLen field 
 length are missing.
 {code:title=PacketReceiver.java|borderStyle=solid}
 112:  private void doRead(ReadableByteChannel ch, InputStream in)
 113:      throws IOException {
 ...
 136:    int payloadLen = curPacketBuf.getInt();
 ...
 144:    int dataPlusChecksumLen = payloadLen - Ints.BYTES;
 ...
 181:    curHeader.setFieldsFromData(dataPlusChecksumLen, headerBuf);
 ...
 192:  }
 {code}
 The solution would instead be to do:
 {code:title=PacketReceiver.java|borderStyle=solid}
 181:    curHeader.setFieldsFromData(payloadLen, headerBuf);
 {code}
 I found this because I was making small modifications to the code that 
 exposed this inconsistency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7598) Remove dependency on old version of Guava in TestDFSClientCache#testEviction

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275325#comment-14275325
 ] 

Hudson commented on HDFS-7598:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #73 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/73/])
HDFS-7598. Remove dependency on old version of Guava in 
TestDFSClientCache#testEviction (Sangjin Lee via Colin P. McCabe) (cmccabe: rev 
b3ddd7ee39c92d2df8661ce5834a2831020cecb2)
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestDFSClientCache.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove dependency on old version of Guava in TestDFSClientCache#testEviction
 

 Key: HDFS-7598
 URL: https://issues.apache.org/jira/browse/HDFS-7598
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7598.001.patch


 TestDFSClientCache.testEviction() is not entirely accurate in its usage of 
 the guava LoadingCache.
 It sets the max size at 2, but asserts the loading cache will contain only 1 
 entry after inserting two entries. Guava's CacheBuilder.maximumSize() makes 
 only the following promise:
 {panel}
 Specifies the maximum number of entries the cache may contain. Note that the 
 cache may evict an entry before this limit is exceeded.
 {panel}
 Thus, the only invariant is that the loading cache will hold at most the 
 maximum number of entries. DFSClientCache.testEviction, however, asserts that 
 it holds exactly maximum size - 1 entries.
 For guava 11.0.2 this happens to be true at maximum size = 2 because of the 
 way it sets the maximum segment weight. With later versions of guava, the 
 maximum segment weight is set higher, and the eviction is less aggressive.
 The test should be fixed to assert only the true invariant.
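 A sketch of the corrected assertion (names here only approximate the test; 
 the loader and MAX_CACHE_SIZE are assumptions for illustration):
 {code}
 LoadingCache<String, DFSClient> cache = CacheBuilder.newBuilder()
     .maximumSize(MAX_CACHE_SIZE)   // e.g. 2
     .build(clientLoader);          // hypothetical CacheLoader<String, DFSClient>
 
 cache.getUnchecked("user1");
 cache.getUnchecked("user2");
 cache.getUnchecked("user3");
 
 // maximumSize() only promises an upper bound, so assert the true invariant
 // instead of an exact post-eviction size:
 assertTrue(cache.size() <= MAX_CACHE_SIZE);
 {code}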



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7600) Refine hdfs admin classes to reuse common code

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275330#comment-14275330
 ] 

Hudson commented on HDFS-7600:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #73 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/73/])
HDFS-7600. Refine hdfs admin classes to reuse common code. Contributed by Jing 
Zhao. (jing9: rev 6f3a63a41b90157c3e46ea20ca6170b854ea902e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/StoragePolicyAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java


 Refine hdfs admin classes to reuse common code
 --

 Key: HDFS-7600
 URL: https://issues.apache.org/jira/browse/HDFS-7600
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Reporter: Yi Liu
Assignee: Jing Zhao
 Fix For: 2.7.0

 Attachments: HDFS-7600.000.patch, HDFS-7600.001.patch


 As noted in the review comments on HDFS-7323:
 {{StoragePolicyAdmin}} and the other classes under 
 {{org.apache.hadoop.hdfs.tools}}, such as {{CacheAdmin}} and {{CryptoAdmin}}, 
 share many common methods ({{getDFS}}, {{prettifyException}}, 
 {{getOptionDescriptionListing}} ...) and an identical inner class 
 ({{HelpCommand}}). It would be great to refine these and shift them into a 
 common place.
 This keeps the code clean, and the shared code can be reused if we add a new 
 hdfs admin class in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7056) Snapshot support for truncate

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275323#comment-14275323
 ] 

Hudson commented on HDFS-7056:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #73 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/73/])
HDFS-7056. Snapshot support for truncate. Contributed by Konstantin Shvachko 
and Plamen Jeliazkov. (shv: rev 08ac06283a3e9bf0d49d873823aabd419b08e41f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/InterDatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/fsimage.proto
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/InterDatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockRecoveryCommand.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestInterDatanodeProtocol.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java


 Snapshot support for truncate
 -

 Key: HDFS-7056
 URL: https://issues.apache.org/jira/browse/HDFS-7056
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
   

[jira] [Commented] (HDFS-3107) HDFS truncate

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275400#comment-14275400
 ] 

Hudson commented on HDFS-3107:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2023 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2023/])
HDFS-3107. Introduce truncate. Contributed by Plamen Jeliazkov. (shv: rev 
7e9358feb326d48b8c4f00249e7af5023cebd2e2)
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/NameNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockRecoveryCommand.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java


 HDFS truncate
 -

 Key: HDFS-3107
 URL: https://issues.apache.org/jira/browse/HDFS-3107
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Lei Chang
Assignee: Plamen Jeliazkov
 Fix For: 3.0.0

 Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch, 
 HDFS-3107-15.patch, HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf, 
 HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate_semantics_Mar15.pdf, 
 HDFS_truncate_semantics_Mar21.pdf, editsStored, editsStored.xml

   Original Estimate: 1,344h
  Remaining Estimate: 1,344h

 Systems with transaction support often need to undo changes made to the 
 underlying storage when a transaction is aborted. Currently HDFS does not 
 support truncate (a standard POSIX operation), which is the reverse operation 
 of append; this forces upper-layer applications to use ugly workarounds (such 
 as keeping track of the discarded byte 

[jira] [Commented] (HDFS-7056) Snapshot support for truncate

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275394#comment-14275394
 ] 

Hudson commented on HDFS-7056:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2023 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2023/])
HDFS-7056. Snapshot support for truncate. Contributed by Konstantin Shvachko 
and Plamen Jeliazkov. (shv: rev 08ac06283a3e9bf0d49d873823aabd419b08e41f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/InterDatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/fsimage.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestInterDatanodeProtocol.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/InterDatanodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockRecoveryCommand.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java


 Snapshot support for truncate
 -

 Key: HDFS-7056
 URL: https://issues.apache.org/jira/browse/HDFS-7056
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko

[jira] [Commented] (HDFS-7598) Remove dependency on old version of Guava in TestDFSClientCache#testEviction

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275396#comment-14275396
 ] 

Hudson commented on HDFS-7598:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2023 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2023/])
HDFS-7598. Remove dependency on old version of Guava in 
TestDFSClientCache#testEviction (Sangjin Lee via Colin P. McCabe) (cmccabe: rev 
b3ddd7ee39c92d2df8661ce5834a2831020cecb2)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestDFSClientCache.java


 Remove dependency on old version of Guava in TestDFSClientCache#testEviction
 

 Key: HDFS-7598
 URL: https://issues.apache.org/jira/browse/HDFS-7598
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7598.001.patch


 TestDFSClientCache.testEviction() is not entirely accurate in its usage of 
 the guava LoadingCache.
 It sets the max size at 2, but asserts the loading cache will contain only 1 
 entry after inserting two entries. Guava's CacheBuilder.maximumSize() makes 
 only the following promise:
 {panel}
 Specifies the maximum number of entries the cache may contain. Note that the 
 cache may evict an entry before this limit is exceeded.
 {panel}
 Thus, the only invariant is that the loading cache will hold at most the 
 maximum number of entries. DFSClientCache.testEviction, however, asserts that 
 it holds exactly maximum size - 1 entries.
 For guava 11.0.2 this happens to be true at maximum size = 2 because of the 
 way it sets the maximum segment weight. With later versions of guava, the 
 maximum segment weight is set higher, and the eviction is less aggressive.
 The test should be fixed to assert only the true invariant.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7533) Datanode sometimes does not shutdown on receiving upgrade shutdown command

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275402#comment-14275402
 ] 

Hudson commented on HDFS-7533:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2023 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2023/])
HDFS-7533. Datanode sometimes does not shutdown on receiving upgrade shutdown 
command. Contributed by Eric Payne. (kihwal: rev 
6bbf9fdd041d2413dd78e2bce51abae15f3334c2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeExit.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


 Datanode sometimes does not shutdown on receiving upgrade shutdown command
 --

 Key: HDFS-7533
 URL: https://issues.apache.org/jira/browse/HDFS-7533
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Eric Payne
 Fix For: 2.7.0

 Attachments: HDFS-7533.v1.txt


 When the datanode is told to shut down via the dfsadmin command during a 
 rolling upgrade, it may not shut down. This is because not all writers have a 
 responder running, but sendOOB() tries anyway. This causes an NPE, and the 
 shutdown thread dies, halting the shutdown after only shutting down the 
 DataXceiverServer. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5445) PacketReceiver populates the packetLen field in PacketHeader incorrectly

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275399#comment-14275399
 ] 

Hudson commented on HDFS-5445:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2023 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2023/])
HDFS-5445. PacketReceiver populates the packetLen field in PacketHeader 
incorrectly (Jonathan Mace via Colin P. McCabe) (cmccabe: rev 
f761bd8fe472c311bdff7c9d469f2805b867855a)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/TestPacketReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PacketReceiver.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 PacketReceiver populates the packetLen field in PacketHeader incorrectly
 

 Key: HDFS-5445
 URL: https://issues.apache.org/jira/browse/HDFS-5445
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.1.0-beta, 2.2.0
 Environment: Ubuntu 12.10, Hadoop 2.1.0-beta
Reporter: Jonathan Mace
Assignee: Jonathan Mace
Priority: Minor
  Labels: easyfix
 Fix For: 2.7.0

 Attachments: HDFS-5445.001.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Summary: PacketReceiver reconstructs PacketHeaders with a packetLen 4 bytes 
 smaller than it should be. It doesn't cause any exceptions because the 
 reconstructed header is never reserialized, and the packetLen field is not 
 used in this part of the code.
 In the BlockSender class, when a Packet is constructed it must be passed the 
 field packetLen, which is defined as the data length, plus the checksum data 
 length, plus the length of the packetLen field itself (a 4-byte integer).
 {code:title=BlockSender.java|borderStyle=solid}
 484:  private int sendPacket(ByteBuffer pkt, int maxChunks, OutputStream out,
 485:      boolean transferTo, DataTransferThrottler throttler) throws IOException {
 ...
 491:    int packetLen = dataLen + checksumDataLen + 4;
 ...
 504:    int headerLen = writePacketHeader(pkt, dataLen, packetLen);
 ...
 586:  }
 ...
 792:  private int writePacketHeader(ByteBuffer pkt, int dataLen, int packetLen) {
 793:    pkt.clear();
 794:    // both syncBlock and syncPacket are false
 795:    PacketHeader header = new PacketHeader(packetLen, offset, seqno,
 796:        (dataLen == 0), dataLen, false);
 ...
 802:  }
 {code}
 In the PacketReceiver class, the PacketHeader is reconstructed using the 
 method setFieldsFromData.  However, the 4 bytes for the packetLen field 
 length are missing.
 {code:title=PacketReceiver.java|borderStyle=solid}
 112:  private void doRead(ReadableByteChannel ch, InputStream in)
 113:      throws IOException {
 ...
 136:    int payloadLen = curPacketBuf.getInt();
 ...
 144:    int dataPlusChecksumLen = payloadLen - Ints.BYTES;
 ...
 181:    curHeader.setFieldsFromData(dataPlusChecksumLen, headerBuf);
 ...
 192:  }
 {code}
 The solution would instead be to do:
 {code:title=PacketReceiver.java|borderStyle=solid}
 181:    curHeader.setFieldsFromData(payloadLen, headerBuf);
 {code}
 I found this because I was making small modifications to the code that 
 exposed this inconsistency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7600) Refine hdfs admin classes to reuse common code

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275401#comment-14275401
 ] 

Hudson commented on HDFS-7600:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2023 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2023/])
HDFS-7600. Refine hdfs admin classes to reuse common code. Contributed by Jing 
Zhao. (jing9: rev 6f3a63a41b90157c3e46ea20ca6170b854ea902e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/StoragePolicyAdmin.java


 Refine hdfs admin classes to reuse common code
 --

 Key: HDFS-7600
 URL: https://issues.apache.org/jira/browse/HDFS-7600
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Reporter: Yi Liu
Assignee: Jing Zhao
 Fix For: 2.7.0

 Attachments: HDFS-7600.000.patch, HDFS-7600.001.patch


 As noted in the review comments on HDFS-7323:
 {{StoragePolicyAdmin}} and the other classes under 
 {{org.apache.hadoop.hdfs.tools}}, such as {{CacheAdmin}} and {{CryptoAdmin}}, 
 share many common methods ({{getDFS}}, {{prettifyException}}, 
 {{getOptionDescriptionListing}} ...) and an identical inner class 
 ({{HelpCommand}}). It would be great to refine these and shift them into a 
 common place.
 This keeps the code clean, and the shared code can be reused if we add a new 
 hdfs admin class in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-2219) Fsck should work with fully qualified file paths.

2015-01-13 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-2219:
--
Attachment: h2219_20150113.patch

h2219_20150113.patch: uses the path to get the file system and adds a new 
TestFsckWithMultipleNameNodes.

 Fsck should work with fully qualified file paths.
 -

 Key: HDFS-2219
 URL: https://issues.apache.org/jira/browse/HDFS-2219
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 0.23.0
Reporter: Jitendra Nath Pandey
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Attachments: h2219_20150113.patch


 Fsck takes absolute paths, but doesn't work with fully qualified file path 
 URIs. In a federated cluster with multiple namenodes, it will be useful to be 
 able to specify a file path for any namenode using its fully qualified path. 
 Currently, a non-default file system can be specified using the -fs option.
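 For example, in a federated cluster one could then run fsck directly against 
 either namespace (illustrative hostnames and paths):
 {code}
 # today: absolute path, with the namenode chosen by fs.defaultFS or -fs
 hdfs fsck /user/alice
 # with this change: a fully qualified URI selects the namenode directly
 hdfs fsck hdfs://nn1.example.com:8020/user/alice
 hdfs fsck hdfs://nn2.example.com:8020/projects/logs
 {code}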



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-2219) Fsck should work with fully qualified file paths.

2015-01-13 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-2219:
--
Status: Patch Available  (was: Open)

 Fsck should work with fully qualified file paths.
 -

 Key: HDFS-2219
 URL: https://issues.apache.org/jira/browse/HDFS-2219
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 0.23.0
Reporter: Jitendra Nath Pandey
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Attachments: h2219_20150113.patch


 Fsck takes absolute paths, but doesn't work with fully qualified file path 
 URIs. In a federated cluster with multiple namenodes, it will be useful to be 
 able to specify a file path for any namenode using its fully qualified path. 
 Currently, a non-default file system can be specified using the -fs option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-13 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275346#comment-14275346
 ] 

Tsz Wo Nicholas Sze commented on HDFS-3443:
---

Hi [~amithdk], are you still working on this issue?  If not, I am happy to pick 
this up.

 Unable to catch up edits during standby to active switch due to NPE
 ---

 Key: HDFS-3443
 URL: https://issues.apache.org/jira/browse/HDFS-3443
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover
Reporter: suja s
Assignee: amith
 Attachments: HDFS-3443_1.patch, HDFS-3443_1.patch


 Start the NN.
 Let the NN standby services be started.
 Before the editLogTailer is initialised, start ZKFC and allow the 
 active-services startup to proceed further.
 Here editLogTailer.catchupDuringFailover() will throw an NPE:
 {code}
 void startActiveServices() throws IOException {
   LOG.info("Starting services required for active state");
   writeLock();
   try {
     FSEditLog editLog = dir.fsImage.getEditLog();
 
     if (!editLog.isOpenForWrite()) {
       // During startup, we're already open for write during initialization.
       editLog.initJournalsForWrite();
       // May need to recover
       editLog.recoverUnclosedStreams();
 
       LOG.info("Catching up to latest edits from old active before " +
           "taking over writer role in edits logs.");
       editLogTailer.catchupDuringFailover();
 {code}
 {noformat}
 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
 Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
 XX.XX.XX.55:58003: output error
 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
 from XX.XX.XX.55:58004: error: java.lang.NullPointerException
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
   at 
 org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
   at 
 org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 9 on 8020 caught an exception
 java.nio.channels.ClosedChannelException
   at 
 sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
   at 
 org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
 {noformat}
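 A minimal sketch of the kind of guard under discussion (hypothetical; the 
 attached patches are authoritative):
 {code}
 // Sketch: refuse the transition until standby initialization has produced
 // an editLogTailer, instead of dereferencing it and throwing an NPE.
 if (editLogTailer == null) {
   throw new RetriableException(
       "Standby services not fully started yet; cannot transition to active");
 }
 editLogTailer.catchupDuringFailover();
 {code}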



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7575) NameNode not handling heartbeats properly after HDFS-2832

2015-01-13 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276114#comment-14276114
 ] 

Arpit Agarwal commented on HDFS-7575:
-

Any comments on the v04 patch? It would be good to get this change in. Thanks.

 NameNode not handling heartbeats properly after HDFS-2832
 -

 Key: HDFS-7575
 URL: https://issues.apache.org/jira/browse/HDFS-7575
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0, 2.5.0, 2.6.0
Reporter: Lars Francke
Assignee: Arpit Agarwal
Priority: Critical
 Attachments: HDFS-7575.01.patch, HDFS-7575.02.patch, 
 HDFS-7575.03.binary.patch, HDFS-7575.03.patch, HDFS-7575.04.binary.patch, 
 HDFS-7575.04.patch, testUpgrade22via24GeneratesStorageIDs.tgz, 
 testUpgradeFrom22GeneratesStorageIDs.tgz, 
 testUpgradeFrom24PreservesStorageId.tgz


 Before HDFS-2832 each DataNode would have a unique storageId which included 
 its IP address. Since HDFS-2832 the DataNodes have a unique storageId per 
 storage directory which is just a random UUID.
 They send reports per storage directory in their heartbeats. This heartbeat 
 is processed on the NameNode in the 
 {{DatanodeDescriptor#updateHeartbeatState}} method. Pre HDFS-2832 this would 
 just store the information per Datanode. After the patch though each DataNode 
 can have multiple different storages so it's stored in a map keyed by the 
 storage Id.
 This works fine for all clusters that have been installed post HDFS-2832 as 
 they get a UUID for their storage Id. So a DN with 8 drives has a map with 8 
 different keys. On each Heartbeat the Map is searched and updated 
 ({{DatanodeStorageInfo storage = storageMap.get(s.getStorageID());}}):
 {code:title=DatanodeStorageInfo}
 void updateState(StorageReport r) {
   capacity = r.getCapacity();
   dfsUsed = r.getDfsUsed();
   remaining = r.getRemaining();
   blockPoolUsed = r.getBlockPoolUsed();
 }
 {code}
 On clusters that were upgraded from a pre-HDFS-2832 version, though, the 
 storage Id has not been rewritten (at least not on the four clusters I 
 checked), so each directory will have the exact same storageId. That means 
 there'll be only a single entry in the {{storageMap}} and it'll be 
 overwritten by a random {{StorageReport}} from the DataNode. This can be seen 
 in the {{updateState}} method above. This just assigns the capacity from the 
 received report; instead, it should probably sum the values up per received 
 heartbeat.
 The Balancer seems to be one of the only things that actually uses this 
 information so it now considers the utilization of a random drive per 
 DataNode for balancing purposes.
 Things get even worse when a drive has been added or replaced, as that drive 
 will get a new storage Id, so there'll be two entries in the storageMap. As 
 new drives are usually empty, this skews the balancer's decision in a way 
 that this node will never be considered over-utilized.
 Another problem is that old StorageReports are never removed from the 
 storageMap. So if I replace a drive and it gets a new storage Id the old one 
 will still be in place and used for all calculations by the Balancer until a 
 restart of the NameNode.
 I can try providing a patch that does the following:
 * Instead of using a Map, either store the array we receive or, instead of 
 storing an array, sum up the values for reports with the same Id (see the 
 sketch below)
 * On each heartbeat, clear the map (so we know we have up-to-date information)
 Does that sound sensible?
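 To make the first option concrete, a sketch of summing reports that share a 
 storage Id (a hypothetical shape with an assumed addState() helper, not the 
 v04 patch):
 {code}
 // Sketch: rebuild the per-storage state on every heartbeat, summing any
 // reports that arrive with the same (legacy, duplicated) storage Id.
 Map<String, DatanodeStorageInfo> fresh = new HashMap<>();
 for (StorageReport r : reports) {
   DatanodeStorageInfo info = fresh.get(r.getStorage().getStorageID());
   if (info == null) {
     info = new DatanodeStorageInfo(this, r.getStorage());
     fresh.put(r.getStorage().getStorageID(), info);
   }
   info.addState(r);  // hypothetical: adds capacity, dfsUsed, remaining, ...
 }
 storageMap = fresh;  // stale storages disappear instead of lingering
 {code}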



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7423) various typos and message formatting fixes in nfs daemon and doc

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276125#comment-14276125
 ] 

Hadoop QA commented on HDFS-7423:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12690085/HDFS-7423.003.patch
  against trunk revision 10ac5ab.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 5 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs 
hadoop-hdfs-project/hadoop-hdfs-nfs:

  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9198//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9198//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9198//console

This message is automatically generated.

 various typos and message formatting fixes in nfs daemon and doc
 

 Key: HDFS-7423
 URL: https://issues.apache.org/jira/browse/HDFS-7423
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial
 Attachments: HDFS-7423.001.patch, HDFS-7423.002.patch, 
 HDFS-7423.003.patch


 These are accumulated fixes for log messages, formatting, typos, etc. in the 
 nfs3 daemon that I came across while working on a customer issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7591) hdfs classpath command should support same options as hadoop classpath.

2015-01-13 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14276142#comment-14276142
 ] 

Chris Nauroth commented on HDFS-7591:
-

[~arpitagarwal], thank you for catching that.  Yes, you're right.  This 
condition works fine in the branch-2 code, but I missed the fact that the trunk 
code does an additional {{shift}} call before reaching this point.

[~varun_saxena], thank you for the patch.  In the trunk version of the bash 
code, we have a much better setup for code reuse.  I suggest adding a function 
called {{hadoop_do_classpath_subcommand}} to hadoop-functions.sh.  That 
function would contain the same code that your current patch adds to the hdfs 
script, along with the fix suggested by Arpit.  After that function is in 
place, all of the individual entry points can use one line of code to call it 
like this:

{code}
hadoop_do_classpath_subcommand "$@"
{code}
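
For reference, a rough sketch of what such a shared function might look like 
in hadoop-functions.sh; the option handling and the delegation to 
{{org.apache.hadoop.util.Classpath}} are my assumptions about the shape of the 
fix, not the committed implementation:

{code}
## @description  Sketch (hypothetical) of a shared classpath subcommand handler.
function hadoop_do_classpath_subcommand
{
  if [[ "$#" -gt 0 ]]; then
    # With arguments such as --glob or --jar <path>, delegate to the
    # Classpath utility class added by HADOOP-10903.
    CLASS=org.apache.hadoop.util.Classpath
  else
    # With no arguments, just print the computed classpath and stop.
    echo "${CLASSPATH}"
    exit 0
  fi
}
{code}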

Feel free to make the change to the hadoop script too within the scope of this 
jira.


 hdfs classpath command should support same options as hadoop classpath.
 ---

 Key: HDFS-7591
 URL: https://issues.apache.org/jira/browse/HDFS-7591
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: scripts
Reporter: Chris Nauroth
Assignee: Varun Saxena
Priority: Minor
 Attachments: HDFS-7591.001.patch


 HADOOP-10903 enhanced the {{hadoop classpath}} command to support optional 
 expansion of the wildcards and bundling the classpath into a jar file 
 containing a manifest with the Class-Path attribute.  The other classpath 
 commands should do the same for consistency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7067) ClassCastException while using a key created by keytool to create encryption zone.

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275897#comment-14275897
 ] 

Hadoop QA commented on HDFS-7067:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686872/HDFS-7067.002.patch
  against trunk revision 10ac5ab.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.crypto.key.TestKeyProviderFactory

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9199//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9199//console

This message is automatically generated.

 ClassCastException while using a key created by keytool to create encryption 
 zone. 
 ---

 Key: HDFS-7067
 URL: https://issues.apache.org/jira/browse/HDFS-7067
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: encryption
Affects Versions: 2.6.0
Reporter: Yi Yao
Assignee: Charles Lamb
Priority: Minor
 Attachments: HDFS-7067.001.patch, HDFS-7067.002.patch, 
 hdfs7067.keystore


 I'm using transparent encryption. If I create a key for the KMS keystore via 
 keytool and use that key to create an encryption zone, I get a 
 ClassCastException rather than an exception with a decent error message. I 
 know we should use 'hadoop key create' to create the key; it would be better 
 to provide a decent error message reminding the user of the right way to 
 create a KMS key.
 [LOG]
 ERROR[user=hdfs] Method:'GET' Exception:'java.lang.ClassCastException: 
 javax.crypto.spec.SecretKeySpec cannot be cast to 
 org.apache.hadoop.crypto.key.JavaKeyStoreProvider$KeyMetadata'
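 For example, a hedged sketch of the kind of guard that would turn the cast 
 failure into a friendlier error; the surrounding names are assumed from the 
 log above, not taken from an actual patch:
 {code}
 // Sketch (illustrative only): guard before casting inside JavaKeyStoreProvider.
 Key key = keyStore.getKey(alias, password);
 if (key != null && !(key instanceof KeyMetadata)) {
   throw new IOException("Key " + alias + " was not created by Hadoop "
       + "(was it created with keytool?). Please create KMS keys with "
       + "'hadoop key create'.");
 }
 {code}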



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7606) Missing null check in INodeFile#getBlocks()

2015-01-13 Thread Ted Yu (JIRA)
Ted Yu created HDFS-7606:


 Summary: Missing null check in INodeFile#getBlocks()
 Key: HDFS-7606
 URL: https://issues.apache.org/jira/browse/HDFS-7606
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


{code}
BlockInfo[] snapshotBlocks = diff == null ? getBlocks() : diff.getBlocks();
if(snapshotBlocks != null)
  return snapshotBlocks;
// Blocks are not in the current snapshot
// Find next snapshot with blocks present or return current file blocks
snapshotBlocks = getDiffs().findLaterSnapshotBlocks(diff.getSnapshotId());
{code}
If diff is null and snapshotBlocks is null, NullPointerException would result 
from the call to diff.getSnapshotId().
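A guarded version of the fragment above might look like the following; this is 
only a sketch of the missing check, not a proposed final patch:
{code}
BlockInfo[] snapshotBlocks = diff == null ? getBlocks() : diff.getBlocks();
if (snapshotBlocks != null)
  return snapshotBlocks;
// Guard (sketch): if diff is null we cannot ask it for a snapshot id, and
// there is no later snapshot to search, so return the current blocks as-is
// (possibly null) instead of hitting the NPE below.
if (diff == null)
  return getBlocks();
// Blocks are not in the current snapshot
// Find next snapshot with blocks present or return current file blocks
snapshotBlocks = getDiffs().findLaterSnapshotBlocks(diff.getSnapshotId());
{code}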



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7587) Edit log corruption can happen if append fails with a quota violation

2015-01-13 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-7587:
-
Attachment: HDFS-7587.patch

 Edit log corruption can happen if append fails with a quota violation
 -

 Key: HDFS-7587
 URL: https://issues.apache.org/jira/browse/HDFS-7587
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Kihwal Lee
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HDFS-7587.patch


 We have seen a standby namenode crashing due to edit log corruption. It was 
 complaining that {{OP_CLOSE}} cannot be applied because the file is not 
 under-construction.
 When a client was trying to append to the file, the remaining space quota was 
 very small. This caused a failure in {{prepareFileForWrite()}}, but only after 
 the inode had already been converted for writing and a lease added. Since 
 these were not undone when the quota violation was detected, the file was left 
 under construction with an active lease, without {{OP_ADD}} being edit-logged.
 A subsequent {{append()}} eventually caused a lease recovery after the soft 
 limit period. This resulted in {{commitBlockSynchronization()}}, which closed 
 the file with {{OP_CLOSE}} being logged. Since there was no corresponding 
 {{OP_ADD}}, edit replay could not apply it.
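 The underlying pattern is check-before-mutate: validate everything that can 
 fail (here, the quota) before converting the inode and adding the lease, or 
 undo those steps when validation fails. A minimal, self-contained illustration 
 of that ordering, with hypothetical names rather than the actual NameNode code:
 {code}
 // Illustrative only: hypothetical names, not the actual NameNode fix.
 class QuotaExceededException extends Exception {}
 
 class AppendSketch {
   private boolean underConstruction = false;
   private boolean leaseHeld = false;
 
   void append(long requiredSpace, long remainingQuota)
       throws QuotaExceededException {
     // Validate first, so a failure leaves no partial in-memory state...
     if (requiredSpace > remainingQuota) {
       throw new QuotaExceededException();
     }
     // ...then convert the inode, add the lease, and log OP_ADD together,
     // keeping the edit log consistent with the in-memory state.
     underConstruction = true;
     leaseHeld = true;
     logOpAdd();
   }
 
   private void logOpAdd() { /* edit log write elided */ }
 }
 {code}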



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7587) Edit log corruption can happen if append fails with a quota violation

2015-01-13 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-7587:
-
Status: Patch Available  (was: Open)

 Edit log corruption can happen if append fails with a quota violation
 -

 Key: HDFS-7587
 URL: https://issues.apache.org/jira/browse/HDFS-7587
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Kihwal Lee
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HDFS-7587.patch


 We have seen a standby namenode crashing due to edit log corruption. It was 
 complaining that {{OP_CLOSE}} cannot be applied because the file is not 
 under-construction.
 When a client was trying to append to the file, the remaining space quota was 
 very small. This caused a failure in {{prepareFileForWrite()}}, but only after 
 the inode had already been converted for writing and a lease added. Since 
 these were not undone when the quota violation was detected, the file was left 
 under construction with an active lease, without {{OP_ADD}} being edit-logged.
 A subsequent {{append()}} eventually caused a lease recovery after the soft 
 limit period. This resulted in {{commitBlockSynchronization()}}, which closed 
 the file with {{OP_CLOSE}} being logged. Since there was no corresponding 
 {{OP_ADD}}, edit replay could not apply it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7606) Missing null check in INodeFile#getBlocks()

2015-01-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14275920#comment-14275920
 ] 

Ted Yu commented on HDFS-7606:
--

In computeContentSummary():
{code}
counts.add(Content.LENGTH, diffs.getLast().getFileSize());
{code}
diffs.getLast() should be checked against null.
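
For example, a minimal sketch of the guard, assuming {{diffs}} is the file's 
{{FileDiffList}} (illustrative, not a tested patch):
{code}
// Sketch: avoid the NPE when the diff list is empty (illustrative only).
FileDiff last = diffs.getLast();
if (last != null) {
  counts.add(Content.LENGTH, last.getFileSize());
} else {
  // Assumption: fall back to the current file size when there is no diff.
  counts.add(Content.LENGTH, computeFileSize());
}
{code}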

 Missing null check in INodeFile#getBlocks()
 ---

 Key: HDFS-7606
 URL: https://issues.apache.org/jira/browse/HDFS-7606
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor

 {code}
 BlockInfo[] snapshotBlocks = diff == null ? getBlocks() : 
 diff.getBlocks();
 if(snapshotBlocks != null)
   return snapshotBlocks;
 // Blocks are not in the current snapshot
 // Find next snapshot with blocks present or return current file blocks
 snapshotBlocks = getDiffs().findLaterSnapshotBlocks(diff.getSnapshotId());
 {code}
 If diff is null and snapshotBlocks is null, NullPointerException would result 
 from the call to diff.getSnapshotId().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7581) HDFS documentation needs updating post-shell rewrite

2015-01-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-7581:
---
Status: Patch Available  (was: Open)

 HDFS documentation needs updating post-shell rewrite
 

 Key: HDFS-7581
 URL: https://issues.apache.org/jira/browse/HDFS-7581
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HDFS-7581-01.patch, HDFS-7581.patch


 After HADOOP-9902, some of the HDFS documentation is out of date.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

