[jira] [Updated] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-18 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-3443:

Component/s: ha
 auto-failover

 Unable to catch up edits during standby to active switch due to NPE
 ---

 Key: HDFS-3443
 URL: https://issues.apache.org/jira/browse/HDFS-3443
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover, ha
Reporter: suja s
Assignee: Vinayakumar B
 Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
 HDFS-3443_1.patch, HDFS-3443_1.patch


 Start the NN.
 Let the NN standby services be started.
 Before the editLogTailer is initialised, start ZKFC and allow the
 active-services startup to proceed further.
 At this point editLogTailer.catchupDuringFailover() will throw an NPE.
 {code}
 void startActiveServices() throws IOException {
   LOG.info("Starting services required for active state");
   writeLock();
   try {
     FSEditLog editLog = dir.fsImage.getEditLog();

     if (!editLog.isOpenForWrite()) {
       // During startup, we're already open for write during initialization.
       editLog.initJournalsForWrite();
       // May need to recover
       editLog.recoverUnclosedStreams();

       LOG.info("Catching up to latest edits from old active before " +
           "taking over writer role in edits logs.");
       editLogTailer.catchupDuringFailover();
 {code}
 {noformat}
 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
 Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
 XX.XX.XX.55:58003: output error
 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
 from XX.XX.XX.55:58004: error: java.lang.NullPointerException
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
   at 
 org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
   at 
 org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 9 on 8020 caught an exception
 java.nio.channels.ClosedChannelException
   at 
 sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
   at 
 org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
 {noformat}
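The trace above is a race: ZKFC triggers transitionToActive() before the standby startup has assigned editLogTailer, so startActiveServices() dereferences null. A minimal sketch of an ordering guard (class and method names are hypothetical; the actual HDFS-3443 patch instead starts the service sets under a common lock):

```java
// Hypothetical sketch of the race and a guard; not the committed patch.
public class ActiveServicesSketch {
    private Object editLogTailer;          // stands in for EditLogTailer
    private final Object stateLock = new Object();

    // Standby startup publishes the tailer under the same lock that the
    // active transition takes, so the transition always sees a consistent value.
    public void startStandbyServices() {
        synchronized (stateLock) {
            editLogTailer = new Object();
        }
    }

    // Fails fast (returns false) instead of throwing an NPE deep inside
    // startActiveServices() as in the stack trace above.
    public boolean tryTransitionToActive() {
        synchronized (stateLock) {
            if (editLogTailer == null) {
                return false;              // standby init not finished yet
            }
            // editLogTailer.catchupDuringFailover() would run here
            return true;
        }
    }
}
```

With a guard like this the failover controller can retry or report a clean "not ready" error instead of an opaque NullPointerException.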



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-18 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B reassigned HDFS-3443:
---

Assignee: Vinayakumar B  (was: amith)

 Unable to catch up edits during standby to active switch due to NPE
 ---

 Key: HDFS-3443
 URL: https://issues.apache.org/jira/browse/HDFS-3443
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover
Reporter: suja s
Assignee: Vinayakumar B
 Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
 HDFS-3443_1.patch, HDFS-3443_1.patch




[jira] [Updated] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-18 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-3443:

Component/s: (was: auto-failover)

 Unable to catch up edits during standby to active switch due to NPE
 ---

 Key: HDFS-3443
 URL: https://issues.apache.org/jira/browse/HDFS-3443
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: suja s
Assignee: Vinayakumar B
 Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
 HDFS-3443_1.patch, HDFS-3443_1.patch




[jira] [Created] (HDFS-7637) Fix the check condition for reserved path

2015-01-18 Thread Yi Liu (JIRA)
Yi Liu created HDFS-7637:


 Summary: Fix the check condition for reserved path
 Key: HDFS-7637
 URL: https://issues.apache.org/jira/browse/HDFS-7637
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor


Currently the {{.reserved}} path check function is:
{code}
public static boolean isReservedName(String src) {
  return src.startsWith(DOT_RESERVED_PATH_PREFIX);
}
{code}
{{DOT_RESERVED_PATH_PREFIX}} is {{/.reserved}}, but it should be 
{{/.reserved/}}; otherwise some other directory whose name merely starts with 
_/.reserved_ would be wrongly treated as reserved.
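The difference can be sketched directly; the fixed variant below illustrates the check the issue asks for (the exact shape of the committed fix may differ):

```java
// Illustration of the /.reserved prefix pitfall described above.
public class ReservedPathCheck {
    static final String DOT_RESERVED_PATH_PREFIX = "/.reserved";

    // Form quoted in the issue: also matches unrelated siblings
    // such as "/.reservedfoo".
    static boolean isReservedNameBuggy(String src) {
        return src.startsWith(DOT_RESERVED_PATH_PREFIX);
    }

    // One plausible fix: exact match, or a '/' immediately after the
    // prefix, so only /.reserved itself and its descendants qualify.
    static boolean isReservedNameFixed(String src) {
        return src.equals(DOT_RESERVED_PATH_PREFIX)
            || src.startsWith(DOT_RESERVED_PATH_PREFIX + "/");
    }
}
```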





[jira] [Updated] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-18 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-3443:

Attachment: HDFS-3443-004.patch

Fixed tests

 Unable to catch up edits during standby to active switch due to NPE
 ---

 Key: HDFS-3443
 URL: https://issues.apache.org/jira/browse/HDFS-3443
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover
Reporter: suja s
Assignee: amith
 Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
 HDFS-3443_1.patch, HDFS-3443_1.patch




[jira] [Commented] (HDFS-6673) Add Delimited format supports for PB OIV tool

2015-01-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282099#comment-14282099
 ] 

Hadoop QA commented on HDFS-6673:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692993/HDFS-6673.004.patch
  against trunk revision 24315e7.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9261//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9261//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9261//console

This message is automatically generated.

 Add Delimited format supports for PB OIV tool
 -

 Key: HDFS-6673
 URL: https://issues.apache.org/jira/browse/HDFS-6673
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Attachments: HDFS-6673.000.patch, HDFS-6673.001.patch, 
 HDFS-6673.002.patch, HDFS-6673.003.patch, HDFS-6673.004.patch


 The new oiv tool, which is designed for the Protobuf fsimage, lacks a few 
 features supported in the old {{oiv}} tool. 
 This task adds support for the _Delimited_ processor to the oiv tool. 





[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-18 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282148#comment-14282148
 ] 

Vinayakumar B commented on HDFS-3443:
-

{quote}Some methods such as saveNamespace() and refreshNodes are 
OperationCategory.UNCHECKED operations so that standby nn should serve them.
Some other methods such as blockReceivedAndDeleted(), 
refreshUserToGroupsMappings() and addSpanReceiver() do not check 
OperationCategory. Some of them probably are bugs.{quote}
All of these will be processed only after all the services (common and 
state-specific) have started, because after this patch everything starts under 
the same lock.
So I don't see a problem here.

 Unable to catch up edits during standby to active switch due to NPE
 ---

 Key: HDFS-3443
 URL: https://issues.apache.org/jira/browse/HDFS-3443
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover
Reporter: suja s
Assignee: amith
 Attachments: HDFS-3443-003.patch, HDFS-3443_1.patch, HDFS-3443_1.patch




[jira] [Commented] (HDFS-7612) TestOfflineEditsViewer.testStored() uses incorrect default value for cacheDir

2015-01-18 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282207#comment-14282207
 ] 

Vinayakumar B commented on HDFS-7612:
-

Yes, [~shv].

Maven does some pre-work before running the tests and sets the value as below:
{code}
<test.cache.data>${project.build.directory}/test-classes/test.cache.data</test.cache.data>
{code}
You can also set this value yourself and verify.
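The lookup under discussion can be reproduced with a small sketch (property name from the issue; the helper class is illustrative):

```java
// Mirrors the pattern in TestOfflineEditsViewer.testStored(): use the value
// Maven injects via the system property, or fall back to a relative default.
public class CacheDirDefault {
    public static String cacheDir() {
        return System.getProperty("test.cache.data", "build/test/cache");
    }
}
```

Under Maven the injected value is picked up; run bare (as in Eclipse) it falls back to the relative build/test/cache path, which usually does not exist, hence the FileNotFoundException.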

 TestOfflineEditsViewer.testStored() uses incorrect default value for cacheDir
 -

 Key: HDFS-7612
 URL: https://issues.apache.org/jira/browse/HDFS-7612
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko

 {code}
 final String cacheDir = System.getProperty("test.cache.data",
     "build/test/cache");
 {code}
 results in
 {{FileNotFoundException: build/test/cache/editsStoredParsed.xml (No such file 
 or directory)}}
 when {{test.cache.data}} is not set.
 I can see this failing while running in Eclipse.





[jira] [Commented] (HDFS-7573) Consolidate the implementation of delete() into a single class

2015-01-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281825#comment-14281825
 ] 

Hudson commented on HDFS-7573:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #78 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/78/])
HDFS-7573. Consolidate the implementation of delete() into a single class. 
Contributed by Haohui Mai. (wheat9: rev 
24315e7d374a1ddd4329b64350cf96fc9ab6f59c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 Consolidate the implementation of delete() into a single class
 --

 Key: HDFS-7573
 URL: https://issues.apache.org/jira/browse/HDFS-7573
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.7.0

 Attachments: HDFS-7573.000.patch, HDFS-7573.001.patch, 
 HDFS-7573.002.patch


 This jira proposes to consolidate the implementation of delete() in 
 {{FSNamesystem}} and {{FSDirectory}} into a single class.





[jira] [Commented] (HDFS-7573) Consolidate the implementation of delete() into a single class

2015-01-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281810#comment-14281810
 ] 

Hudson commented on HDFS-7573:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2009 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2009/])
HDFS-7573. Consolidate the implementation of delete() into a single class. 
Contributed by Haohui Mai. (wheat9: rev 
24315e7d374a1ddd4329b64350cf96fc9ab6f59c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java




[jira] [Commented] (HDFS-7573) Consolidate the implementation of delete() into a single class

2015-01-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281814#comment-14281814
 ] 

Hudson commented on HDFS-7573:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #74 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/74/])
HDFS-7573. Consolidate the implementation of delete() into a single class. 
Contributed by Haohui Mai. (wheat9: rev 
24315e7d374a1ddd4329b64350cf96fc9ab6f59c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java




[jira] [Commented] (HDFS-7573) Consolidate the implementation of delete() into a single class

2015-01-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281830#comment-14281830
 ] 

Hudson commented on HDFS-7573:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2028 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2028/])
HDFS-7573. Consolidate the implementation of delete() into a single class. 
Contributed by Haohui Mai. (wheat9: rev 
24315e7d374a1ddd4329b64350cf96fc9ab6f59c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java




[jira] [Commented] (HDFS-7573) Consolidate the implementation of delete() into a single class

2015-01-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281747#comment-14281747
 ] 

Hudson commented on HDFS-7573:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #77 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/77/])
HDFS-7573. Consolidate the implementation of delete() into a single class. 
Contributed by Haohui Mai. (wheat9: rev 
24315e7d374a1ddd4329b64350cf96fc9ab6f59c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java




[jira] [Commented] (HDFS-7573) Consolidate the implementation of delete() into a single class

2015-01-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281768#comment-14281768
 ] 

Hudson commented on HDFS-7573:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #811 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/811/])
HDFS-7573. Consolidate the implementation of delete() into a single class. 
Contributed by Haohui Mai. (wheat9: rev 
24315e7d374a1ddd4329b64350cf96fc9ab6f59c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java




[jira] [Commented] (HDFS-7188) support build libhdfs3 on windows

2015-01-18 Thread Thanh Do (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281873#comment-14281873
 ] 

Thanh Do commented on HDFS-7188:


Hi folks. I've submitted a patch in HDFS-7577 that adds the headers needed on 
Windows. Could somebody take a look and comment? Thanks.

 support build libhdfs3 on windows
 -

 Key: HDFS-7188
 URL: https://issues.apache.org/jira/browse/HDFS-7188
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
 Environment: Windows System, Visual Studio 2010
Reporter: Zhanwei Wang
Assignee: Thanh Do
 Attachments: HDFS-7188-branch-HDFS-6994-0.patch, 
 HDFS-7188-branch-HDFS-6994-1.patch


 libhdfs3 should work on Windows.





[jira] [Commented] (HDFS-6673) Add Delimited format supports for PB OIV tool

2015-01-18 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282014#comment-14282014
 ] 

Lei (Eddy) Xu commented on HDFS-6673:
-

[~andrew.wang] Here are the performance results.

The tests were run on a 2-socket Xeon E5540 machine with 24 GB RAM. The 3.2 GB 
fsimage and the LevelDB temporary directory are stored on two separate locally 
attached 1 TB SATA disks.

{code}
$ HADOOP_HEAPSIZE=1024 time -p hdfs oiv -i fsimage_new_pb -o /dev/null -p 
Delimited -t /data/1/lei/fse.db
15/01/18 15:14:14 INFO offlineImageViewer.FSImageHandler: Loading 68 strings
15/01/18 15:14:14 INFO offlineImageViewer.PBImageTextWriter: Loading directories
15/01/18 15:14:14 INFO offlineImageViewer.PBImageTextWriter: Loading 
directories in INode section.
15/01/18 15:15:34 INFO offlineImageViewer.PBImageTextWriter: Found 4188717 
INode directories.
15/01/18 15:15:34 INFO offlineImageViewer.PBImageTextWriter: Finished loading 
directories: 80235ms
15/01/18 15:15:34 INFO offlineImageViewer.PBImageTextWriter: Loading INode 
directory section.
15/01/18 15:17:12 INFO offlineImageViewer.PBImageTextWriter: Scanned 3731860 
INode directories to build namespace.
15/01/18 15:17:12 INFO offlineImageViewer.PBImageTextWriter: Finished loading 
INode directory section in 97964ms
15/01/18 15:17:12 INFO offlineImageViewer.PBImageTextWriter: Found 30600809 
inodes in inode section
15/01/18 15:24:50 INFO offlineImageViewer.PBImageTextWriter: Outputted  
30600809 inodes.
real 638.05
user 665.23
sys 31.04
{code}

It takes {{10:38}} to generate the Delimited output for this 3.2 GB fsimage, 
using only a 1 GB heap. By comparison, transforming the old-format fsimage into 
this 3.2 GB protobuf-based fsimage takes {{5:39}} with a 16 GB heap.


 Add Delimited format supports for PB OIV tool
 -

 Key: HDFS-6673
 URL: https://issues.apache.org/jira/browse/HDFS-6673
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Attachments: HDFS-6673.000.patch, HDFS-6673.001.patch, 
 HDFS-6673.002.patch, HDFS-6673.003.patch


 The new oiv tool, which is designed for the protobuf-based fsimage, lacks a 
 few features supported in the old {{oiv}} tool. 
 This task adds support for the _Delimited_ processor to the oiv tool. 





[jira] [Updated] (HDFS-6673) Add Delimited format supports for PB OIV tool

2015-01-18 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-6673:

Attachment: HDFS-6673.004.patch

This patch first pre-scans the fsimage to load the directories, then uses the 
directory metadata to build the namespace in LevelDB. 
The second scan outputs each inode in Delimited format, in the order the inodes 
appear in the fsimage.

Reference INodes are not currently supported, so each inode generates exactly 
one text entry. 
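
The two-scan flow described above can be sketched roughly as follows. The types and data layout here are hypothetical in-memory stand-ins; the actual patch streams protobuf fsimage sections and keeps the directory map in LevelDB rather than on the heap:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical inode record; the real tool reads protobuf sections. */
class Inode {
  final long id, parentId;
  final String name;
  final boolean isDir;
  Inode(long id, long parentId, String name, boolean isDir) {
    this.id = id; this.parentId = parentId; this.name = name; this.isDir = isDir;
  }
}

public class TwoPassDelimited {
  /** Pass 1: collect directory inodes so full paths can be rebuilt later.
   *  The actual patch stores this map in LevelDB instead of on the heap. */
  static Map<Long, Inode> scanDirectories(List<Inode> image) {
    Map<Long, Inode> dirs = new HashMap<>();
    for (Inode i : image) {
      if (i.isDir) dirs.put(i.id, i);
    }
    return dirs;
  }

  /** Resolve an inode's full path by walking parent links to the root
   *  (modeled here as the inode whose parent id equals its own id). */
  static String path(Inode i, Map<Long, Inode> dirs) {
    if (i.parentId == i.id) return "/";
    Deque<String> parts = new ArrayDeque<>();
    for (Inode cur = i; cur.parentId != cur.id; cur = dirs.get(cur.parentId)) {
      parts.push(cur.name);
    }
    StringBuilder sb = new StringBuilder();
    for (String p : parts) sb.append('/').append(p);
    return sb.toString();
  }

  /** Pass 2: emit one tab-delimited line per inode, in fsimage order. */
  static List<String> write(List<Inode> image) {
    Map<Long, Inode> dirs = scanDirectories(image);
    List<String> out = new ArrayList<>();
    for (Inode i : image) {
      out.add(path(i, dirs) + "\t" + i.id);
    }
    return out;
  }

  public static void main(String[] args) {
    List<Inode> image = Arrays.asList(
        new Inode(1, 1, "", true),            // root
        new Inode(2, 1, "user", true),
        new Inode(3, 2, "data.txt", false));
    write(image).forEach(System.out::println);
  }
}
```

Because pass 1 records every directory before pass 2 starts, pass 2 can emit inodes strictly in fsimage order without buffering the whole image.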

 Add Delimited format supports for PB OIV tool
 -

 Key: HDFS-6673
 URL: https://issues.apache.org/jira/browse/HDFS-6673
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Attachments: HDFS-6673.000.patch, HDFS-6673.001.patch, 
 HDFS-6673.002.patch, HDFS-6673.003.patch, HDFS-6673.004.patch


 The new oiv tool, which is designed for the protobuf-based fsimage, lacks a 
 few features supported in the old {{oiv}} tool. 
 This task adds support for the _Delimited_ processor to the oiv tool. 





[jira] [Commented] (HDFS-6673) Add Delimited format supports for PB OIV tool

2015-01-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282018#comment-14282018
 ] 

Hadoop QA commented on HDFS-6673:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12673941/HDFS-6673.003.patch
  against trunk revision 24315e7.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9260//console

This message is automatically generated.



[jira] [Commented] (HDFS-7057) Expose truncate API via FileSystem and shell command

2015-01-18 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14281985#comment-14281985
 ] 

Konstantin Shvachko commented on HDFS-7057:
---

Two nits:
# It is better to use {{String.valueOf(newLength)}} instead of {{+newLength}}.
# {{testTruncateShellHelper()}} could have a more informative name, such as 
{{runTruncateShellCommand()}}.
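
A minimal illustration of the first nit; the variable name is borrowed from the patch under review, but the surrounding code is hypothetical:

```java
public class ValueOfDemo {
  public static void main(String[] args) {
    long newLength = 1024L;
    // Preferred: the conversion is explicit and reads as intent.
    String explicit = String.valueOf(newLength);
    // Discouraged: concatenating with a string merely to convert.
    String concat = "" + newLength;
    System.out.println(explicit.equals(concat));  // both are "1024"
  }
}
```

Both forms produce the same string; the point is readability, since {{String.valueOf}} makes the conversion the stated purpose of the expression.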

 Expose truncate API via FileSystem and shell command
 

 Key: HDFS-7057
 URL: https://issues.apache.org/jira/browse/HDFS-7057
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: Milan Desai
 Attachments: HDFS-7057-2.patch, HDFS-7057-3.patch, HDFS-7057-4.patch, 
 HDFS-7057.patch


 Add truncate operation to FileSystem and expose it to users via shell command.


