[jira] [Updated] (HDFS-11161) Incorporate Baidu Yun BOS file system implementation

2016-11-20 Thread Faen Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Faen Zhang updated HDFS-11161:
--
Summary: Incorporate Baidu Yun BOS file system implementation  (was: 
Incorporate Baidu BOS file system implementation)

> Incorporate Baidu Yun BOS file system implementation
> 
>
> Key: HDFS-11161
> URL: https://issues.apache.org/jira/browse/HDFS-11161
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Faen Zhang
>   Original Estimate: 840h
>  Remaining Estimate: 840h
>
> Baidu Yun ( https://cloud.baidu.com/ ) is one of the top-tier cloud computing 
> providers. Baidu Yun BOS is widely used among China's cloud users, but 
> currently it is not easy to access data stored on BOS from a user's 
> Hadoop/Spark application, because Hadoop has no native support for BOS.
> This work aims to integrate Baidu Yun BOS with Hadoop. With simple 
> configuration, Spark/Hadoop applications can read/write data on BOS without 
> any code change, narrowing the gap between the user's application and its data 
> storage, as has already been done for S3 and Aliyun OSS in Hadoop.






[jira] [Commented] (HDFS-11161) Incorporate Baidu BOS file system implementation

2016-11-20 Thread Faen Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15682789#comment-15682789
 ] 

Faen Zhang commented on HDFS-11161:
---

This issue is very similar to 
https://issues.apache.org/jira/browse/HADOOP-12756, which is about 
incorporating Aliyun OSS.

> Incorporate Baidu BOS file system implementation
> 
>
> Key: HDFS-11161
> URL: https://issues.apache.org/jira/browse/HDFS-11161
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Faen Zhang
>   Original Estimate: 840h
>  Remaining Estimate: 840h
>
> Baidu Yun ( https://cloud.baidu.com/ ) is one of the top-tier cloud computing 
> providers. Baidu Yun BOS is widely used among China's cloud users, but 
> currently it is not easy to access data stored on BOS from a user's 
> Hadoop/Spark application, because Hadoop has no native support for BOS.
> This work aims to integrate Baidu Yun BOS with Hadoop. With simple 
> configuration, Spark/Hadoop applications can read/write data on BOS without 
> any code change, narrowing the gap between the user's application and its data 
> storage, as has already been done for S3 and Aliyun OSS in Hadoop.






[jira] [Created] (HDFS-11161) Incorporate Baidu BOS file system implementation

2016-11-20 Thread Faen Zhang (JIRA)
Faen Zhang created HDFS-11161:
-

 Summary: Incorporate Baidu BOS file system implementation
 Key: HDFS-11161
 URL: https://issues.apache.org/jira/browse/HDFS-11161
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: fs
Reporter: Faen Zhang


Baidu Yun ( https://cloud.baidu.com/ ) is one of the top-tier cloud computing 
providers. Baidu Yun BOS is widely used among China's cloud users, but currently 
it is not easy to access data stored on BOS from a user's Hadoop/Spark 
application, because Hadoop has no native support for BOS.

This work aims to integrate Baidu Yun BOS with Hadoop. With simple configuration, 
Spark/Hadoop applications can read/write data on BOS without any code change, 
narrowing the gap between the user's application and its data storage, as has 
already been done for S3 and Aliyun OSS in Hadoop.
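
For illustration, the intended "simple configuration" could look something like 
the sketch below. The property names and the BOSFileSystem class are hypothetical 
placeholders in the style of the S3A and Aliyun OSS connectors, not a committed 
API:
{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BosSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical keys and implementation class, named after the S3A/OSS pattern:
    conf.set("fs.bos.impl", "org.apache.hadoop.fs.bos.BOSFileSystem");
    conf.set("fs.bos.access.key", "<access-key>");
    conf.set("fs.bos.secret.key", "<secret-key>");
    conf.set("fs.bos.endpoint", "bj.bcebos.com");

    // Existing Hadoop/Spark code could then address BOS paths directly:
    FileSystem fs = FileSystem.get(URI.create("bos://my-bucket/"), conf);
    for (FileStatus status : fs.listStatus(new Path("bos://my-bucket/data/"))) {
      System.out.println(status.getPath());
    }
  }
}
{code}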






[jira] [Commented] (HDFS-11101) TestDFSShell#testMoveWithTargetPortEmpty fails intermittently

2016-11-20 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15682651#comment-15682651
 ] 

Brahma Reddy Battula commented on HDFS-11101:
-

Hmm.. [~ajisakaa], thanks for the review. Will commit today.

> TestDFSShell#testMoveWithTargetPortEmpty fails intermittently
> -
>
> Key: HDFS-11101
> URL: https://issues.apache.org/jira/browse/HDFS-11101
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11101.patch
>
>
> {noformat}
> java.io.IOException: Port is already in use; giving up after 10 times.
>   at 
> org.apache.hadoop.net.ServerSocketUtil.waitForPort(ServerSocketUtil.java:98)
>   at 
> org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty(TestDFSShell.java:778)
> {noformat}






[jira] [Commented] (HDFS-11101) TestDFSShell#testMoveWithTargetPortEmpty fails intermittently

2016-11-20 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15682634#comment-15682634
 ] 

Akira Ajisaka commented on HDFS-11101:
--

+1. In this test, the following setting is necessary.
{code}
  .nameNodePort(ServerSocketUtil.waitForPort(
      HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT, 10))
{code}
because hdfs://localhost/testfile2 is implicitly treated as 
hdfs://localhost:9820/testfile2 in FsShell, regardless of the configuration used 
by FsShell. Therefore, I'm +1 for increasing the retry count.
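
For context, the setting above plugs into the test's cluster setup roughly as 
follows (a sketch only; the retry count of 30 is illustrative, not the value in 
the patch):
{code}
// Wait until the default NN RPC port is free (retrying up to 30 times),
// then bind the MiniDFSCluster NameNode to it.
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .nameNodePort(ServerSocketUtil.waitForPort(
        HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT, 30))
    .build();
{code}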

> TestDFSShell#testMoveWithTargetPortEmpty fails intermittently
> -
>
> Key: HDFS-11101
> URL: https://issues.apache.org/jira/browse/HDFS-11101
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11101.patch
>
>
> {noformat}
> java.io.IOException: Port is already in use; giving up after 10 times.
>   at 
> org.apache.hadoop.net.ServerSocketUtil.waitForPort(ServerSocketUtil.java:98)
>   at 
> org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty(TestDFSShell.java:778)
> {noformat}






[jira] [Commented] (HDFS-11144) TestFileCreationDelete#testFileCreationDeleteParent fails with bind exception

2016-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15682629#comment-15682629
 ] 

Hudson commented on HDFS-11144:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10866 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10866/])
HDFS-11144. TestFileCreationDelete#testFileCreationDeleteParent fails (brahma: 
rev c68dad18ab5cdf01f3dea1bb5988f896609956b4)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreationDelete.java


> TestFileCreationDelete#testFileCreationDeleteParent fails with bind exception
> -
>
> Key: HDFS-11144
> URL: https://issues.apache.org/jira/browse/HDFS-11144
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11144.patch
>
>
> {noformat}
> java.net.BindException: Problem binding to [localhost:57908] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:535)
>   at org.apache.hadoop.ipc.Server$Listener.(Server.java:919)
>   at org.apache.hadoop.ipc.Server.(Server.java:2667)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:959)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:801)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:434)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:796)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:916)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1633)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1263)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1032)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:907)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
>   at 
> org.apache.hadoop.hdfs.TestFileCreationDelete.testFileCreationDeleteParent(TestFileCreationDelete.java:77)
> {noformat}
>  *Reference* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/testReport/junit/org.apache.hadoop.hdfs/TestFileCreationDelete/testFileCreationDeleteParent/






[jira] [Updated] (HDFS-6874) Add GET_BLOCK_LOCATIONS operation to HttpFS

2016-11-20 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-6874:
--
Status: In Progress  (was: Patch Available)

> Add GET_BLOCK_LOCATIONS operation to HttpFS
> ---
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.3, 2.4.1
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, 
> HDFS-6874.02.patch, HDFS-6874.03.patch, HDFS-6874.patch
>
>
> The GET_BLOCK_LOCATIONS operation, which is already supported in WebHDFS, is 
> missing in HttpFS. For a GETFILEBLOCKLOCATIONS request, 
> org.apache.hadoop.fs.http.server.HttpFSServer currently returns BAD_REQUEST:
> {code}
> ...
> case GETFILEBLOCKLOCATIONS: {
>   response = Response.status(Response.Status.BAD_REQUEST).build();
>   break;
> }
> {code}
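
A hypothetical sketch of what supporting the operation could look like; 
createFileSystem(), toJson(), offset, len, and path below are assumed helpers 
and variables for illustration, not actual HttpFS APIs:
{code}
case GETFILEBLOCKLOCATIONS: {
  // Sketch: delegate to FileSystem#getFileBlockLocations and return the
  // locations as JSON, mirroring the WebHDFS response shape.
  FileSystem fs = createFileSystem(user);
  BlockLocation[] locations =
      fs.getFileBlockLocations(new Path(path), offset, len);
  response = Response.ok(toJson(locations), MediaType.APPLICATION_JSON).build();
  break;
}
{code}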






[jira] [Updated] (HDFS-11144) TestFileCreationDelete#testFileCreationDeleteParent fails with bind exception

2016-11-20 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11144:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2, and branch-2.8.
[~ajisakaa] thanks a lot for your review.
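
For reference, a common way to sidestep such bind failures in 
MiniDFSCluster-based tests (a sketch of the general technique, not necessarily 
what this patch does) is to let the OS pick an ephemeral port:
{code}
// Port 0 asks the OS for any free ephemeral port, so the test never
// races another process for a fixed port number.
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .nameNodePort(0)
    .build();
{code}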

> TestFileCreationDelete#testFileCreationDeleteParent fails with bind exception
> -
>
> Key: HDFS-11144
> URL: https://issues.apache.org/jira/browse/HDFS-11144
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11144.patch
>
>
> {noformat}
> java.net.BindException: Problem binding to [localhost:57908] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:535)
>   at org.apache.hadoop.ipc.Server$Listener.(Server.java:919)
>   at org.apache.hadoop.ipc.Server.(Server.java:2667)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:959)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:801)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:434)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:796)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:916)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1633)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1263)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1032)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:907)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
>   at 
> org.apache.hadoop.hdfs.TestFileCreationDelete.testFileCreationDeleteParent(TestFileCreationDelete.java:77)
> {noformat}
>  *Reference* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/testReport/junit/org.apache.hadoop.hdfs/TestFileCreationDelete/testFileCreationDeleteParent/






[jira] [Commented] (HDFS-11113) Document dfs.client.read.striped configuration in hdfs-default.xml

2016-11-20 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15682583#comment-15682583
 ] 

Rakesh R commented on HDFS-11113:
-

Thanks a lot [~ajisakaa] for the help in resolving this.

> Document dfs.client.read.striped configuration in hdfs-default.xml
> --
>
> Key: HDFS-11113
> URL: https://issues.apache.org/jira/browse/HDFS-11113
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation, hdfs-client
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11113-00.patch, HDFS-11113-01.patch
>
>
> {{dfs.client.read.striped.threadpool.size}} should be covered in 
> hdfs-default.xml.






[jira] [Commented] (HDFS-11113) Document dfs.client.read.striped configuration in hdfs-default.xml

2016-11-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15682561#comment-15682561
 ] 

Hudson commented on HDFS-11113:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10865 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10865/])
HDFS-11113. Document dfs.client.read.striped configuration in (aajisaka: rev 
d232625f735e06b89360d8f5847c4331076ac477)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java


> Document dfs.client.read.striped configuration in hdfs-default.xml
> --
>
> Key: HDFS-11113
> URL: https://issues.apache.org/jira/browse/HDFS-11113
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation, hdfs-client
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11113-00.patch, HDFS-11113-01.patch
>
>
> {{dfs.client.read.striped.threadpool.size}} should be covered in 
> hdfs-default.xml.
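
For reference, the committed hdfs-default.xml entry presumably looks something 
like the sketch below; the default value of 18 and the description text are 
assumptions, not a verbatim copy of the patch:
{code}
<property>
  <name>dfs.client.read.striped.threadpool.size</name>
  <value>18</value>
  <description>
    The maximum number of threads used by the HDFS client to read
    striped (erasure-coded) blocks in parallel.
  </description>
</property>
{code}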






[jira] [Commented] (HDFS-11144) TestFileCreationDelete#testFileCreationDeleteParent fails with bind exception

2016-11-20 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15682548#comment-15682548
 ] 

Akira Ajisaka commented on HDFS-11144:
--

+1, the test failure is unrelated to the patch.

> TestFileCreationDelete#testFileCreationDeleteParent fails with bind exception
> -
>
> Key: HDFS-11144
> URL: https://issues.apache.org/jira/browse/HDFS-11144
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-11144.patch
>
>
> {noformat}
> java.net.BindException: Problem binding to [localhost:57908] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:535)
>   at org.apache.hadoop.ipc.Server$Listener.(Server.java:919)
>   at org.apache.hadoop.ipc.Server.(Server.java:2667)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:959)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:367)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:801)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:434)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:796)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:916)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1633)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1263)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1032)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:907)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:491)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
>   at 
> org.apache.hadoop.hdfs.TestFileCreationDelete.testFileCreationDeleteParent(TestFileCreationDelete.java:77)
> {noformat}
>  *Reference* 
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/testReport/junit/org.apache.hadoop.hdfs/TestFileCreationDelete/testFileCreationDeleteParent/






[jira] [Updated] (HDFS-11113) Document dfs.client.read.striped configuration in hdfs-default.xml

2016-11-20 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-11113:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~rakeshr] for the contribution.

> Document dfs.client.read.striped configuration in hdfs-default.xml
> --
>
> Key: HDFS-11113
> URL: https://issues.apache.org/jira/browse/HDFS-11113
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation, hdfs-client
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11113-00.patch, HDFS-11113-01.patch
>
>
> {{dfs.client.read.striped.threadpool.size}} should be covered in 
> hdfs-default.xml.






[jira] [Commented] (HDFS-11113) Document dfs.client.read.striped configuration in hdfs-default.xml

2016-11-20 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15682535#comment-15682535
 ] 

Akira Ajisaka commented on HDFS-11113:
--

+1, checking this in.

> Document dfs.client.read.striped configuration in hdfs-default.xml
> --
>
> Key: HDFS-11113
> URL: https://issues.apache.org/jira/browse/HDFS-11113
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation, hdfs-client
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Attachments: HDFS-11113-00.patch, HDFS-11113-01.patch
>
>
> {{dfs.client.read.striped.threadpool.size}} should be covered in 
> hdfs-default.xml.






[jira] [Commented] (HDFS-10815) The state of the EC file is erroneously recognized when you restart the NameNode.

2016-11-20 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15682397#comment-15682397
 ] 

Takanobu Asanuma commented on HDFS-10815:
-

Thanks for reporting this issue, [~ademu].

I think this bug (and HDFS-10775) might have already been solved by HDFS-10858. 
Before that fix, when datanodes sent full block reports that contained both EC 
blocks and replicated blocks, the namenode sometimes handled them incorrectly 
and eventually stopped the recovery process.

Please try the test with the latest trunk branch.

> The state of the EC file is erroneously recognized when you restart the 
> NameNode.
> -
>
> Key: HDFS-10815
> URL: https://issues.apache.org/jira/browse/HDFS-10815
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
> Environment: 2 NameNodes, 5 DataNodes, erasure coding policy is set as 
> "RS-DEFAULT-3-2-64k"
>Reporter: Eisuke Umeda
>
> After carrying out a test with the following procedure, EC files came to be 
> recognized as corrupt files.
> These files could still be retrieved with "hdfs dfs -get".
> The NameNode might be causing the false recognition.
> DataNodes: datanode[1-5]
> Rack awareness: not set
> Copy target files: /tmp/tpcds-generate/25/store_sales/*
> {code}
> $ hdfs dfs -ls /tmp/tpcds-generate/25/store_sales
> Found 25 items
> -rw-r--r--   0 root supergroup  399430918 2016-08-16 15:11 
> /tmp/tpcds-generate/25/store_sales/data-m-0
> -rw-r--r--   0 root supergroup  399054598 2016-08-16 15:11 
> /tmp/tpcds-generate/25/store_sales/data-m-1
> -rw-r--r--   0 root supergroup  399329373 2016-08-16 15:11 
> /tmp/tpcds-generate/25/store_sales/data-m-2
> -rw-r--r--   0 root supergroup  399528459 2016-08-16 15:11 
> /tmp/tpcds-generate/25/store_sales/data-m-3
> -rw-r--r--   0 root supergroup  399329624 2016-08-16 15:11 
> /tmp/tpcds-generate/25/store_sales/data-m-4
> -rw-r--r--   0 root supergroup  399085924 2016-08-16 15:11 
> /tmp/tpcds-generate/25/store_sales/data-m-5
> -rw-r--r--   0 root supergroup  399337384 2016-08-16 15:12 
> /tmp/tpcds-generate/25/store_sales/data-m-6
> -rw-r--r--   0 root supergroup  399199458 2016-08-16 15:12 
> /tmp/tpcds-generate/25/store_sales/data-m-7
> -rw-r--r--   0 root supergroup  399679096 2016-08-16 15:12 
> /tmp/tpcds-generate/25/store_sales/data-m-8
> -rw-r--r--   0 root supergroup  399440431 2016-08-16 15:12 
> /tmp/tpcds-generate/25/store_sales/data-m-9
> -rw-r--r--   0 root supergroup  399403931 2016-08-16 15:12 
> /tmp/tpcds-generate/25/store_sales/data-m-00010
> -rw-r--r--   0 root supergroup  399472465 2016-08-16 15:12 
> /tmp/tpcds-generate/25/store_sales/data-m-00011
> -rw-r--r--   0 root supergroup  399451784 2016-08-16 15:12 
> /tmp/tpcds-generate/25/store_sales/data-m-00012
> -rw-r--r--   0 root supergroup  399240168 2016-08-16 15:12 
> /tmp/tpcds-generate/25/store_sales/data-m-00013
> -rw-r--r--   0 root supergroup  399370507 2016-08-16 15:12 
> /tmp/tpcds-generate/25/store_sales/data-m-00014
> -rw-r--r--   0 root supergroup  399633351 2016-08-16 15:12 
> /tmp/tpcds-generate/25/store_sales/data-m-00015
> -rw-r--r--   0 root supergroup  396532952 2016-08-16 15:13 
> /tmp/tpcds-generate/25/store_sales/data-m-00016
> -rw-r--r--   0 root supergroup  396258715 2016-08-16 15:13 
> /tmp/tpcds-generate/25/store_sales/data-m-00017
> -rw-r--r--   0 root supergroup  396382486 2016-08-16 15:13 
> /tmp/tpcds-generate/25/store_sales/data-m-00018
> -rw-r--r--   0 root supergroup  399016456 2016-08-16 15:13 
> /tmp/tpcds-generate/25/store_sales/data-m-00019
> -rw-r--r--   0 root supergroup  399465745 2016-08-16 15:13 
> /tmp/tpcds-generate/25/store_sales/data-m-00020
> -rw-r--r--   0 root supergroup  399208235 2016-08-16 15:13 
> /tmp/tpcds-generate/25/store_sales/data-m-00021
> -rw-r--r--   0 root supergroup  399198296 2016-08-16 15:13 
> /tmp/tpcds-generate/25/store_sales/data-m-00022
> -rw-r--r--   0 root supergroup  399599711 2016-08-16 15:13 
> /tmp/tpcds-generate/25/store_sales/data-m-00023
> -rw-r--r--   0 root supergroup  395150855 2016-08-16 15:13 
> /tmp/tpcds-generate/25/store_sales/data-m-00024
> {code}
> NameNodes:
>   namenode1(active)
>   namenode2(standby)
> The directory that contains "Under-erasure-coded block groups": 
> /tmp/tpcds-generate/test
> {code}
> $ sudo -u hdfs hdfs erasurecode -getPolicy /tmp/tpcds-generate/test
> ErasureCodingPolicy=[Name=RS-DEFAULT-3-2-64k, 
> Schema=[ECSchema=[Codec=rs-default, numDataUnits=3, numParityUnits=2]], 
> CellSize=65536 ]
> {code}
> The following are the steps to reproduce:
> 1) hdfs dfs -cp /tmp/tpcds-generate/25/store_sales/* /tmp/tpcds-generate/test
> 2) datanode1: (in the middle of the copy) sudo pkill -9 -f datanode
> 3) start a process 

[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl

2016-11-20 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15682298#comment-15682298
 ] 

Jingcheng Du commented on HDFS-9668:


Thanks for the comments.
[~arpitagarwal], if we need more discussion of the current implementation, 
would you mind uploading the read-write lock patch that you suggested in this 
JIRA? Or is it okay to ask me to do it?
Thanks [~eddyxu]! Sure, I can prepare a document for the current 
implementation. If it is decided to use the read-write lock that Arpit 
suggested in this JIRA, I can file another JIRA to address that; is that okay?
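
As a reference point for that discussion, here is a minimal sketch of the 
read-write-lock idea (assuming a fair ReentrantReadWriteLock; this is not the 
posted patch):
{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

class FsDatasetLockingSketch {
  // Read-mostly operations share the read lock and run concurrently;
  // mutating operations serialize on the write lock.
  private final ReentrantReadWriteLock datasetLock =
      new ReentrantReadWriteLock(true); // fair, to limit writer starvation

  long getReplicaLength(long blockId) {
    datasetLock.readLock().lock();
    try {
      // look up replica metadata ...
      return 0L;
    } finally {
      datasetLock.readLock().unlock();
    }
  }

  void createRbw(long blockId) {
    datasetLock.writeLock().lock();
    try {
      // create the replica-being-written on disk ...
    } finally {
      datasetLock.writeLock().unlock();
    }
  }
}
{code}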

> Optimize the locking in FsDatasetImpl
> -
>
> Key: HDFS-9668
> URL: https://issues.apache.org/jira/browse/HDFS-9668
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, 
> HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-13.patch, 
> HDFS-9668-14.patch, HDFS-9668-14.patch, HDFS-9668-15.patch, 
> HDFS-9668-16.patch, HDFS-9668-17.patch, HDFS-9668-18.patch, 
> HDFS-9668-19.patch, HDFS-9668-19.patch, HDFS-9668-2.patch, 
> HDFS-9668-20.patch, HDFS-9668-21.patch, HDFS-9668-22.patch, 
> HDFS-9668-23.patch, HDFS-9668-23.patch, HDFS-9668-3.patch, HDFS-9668-4.patch, 
> HDFS-9668-5.patch, HDFS-9668-6.patch, HDFS-9668-7.patch, HDFS-9668-8.patch, 
> HDFS-9668-9.patch, execution_time.png
>
>
> During an HBase test on a tiered storage of HDFS (WAL is stored in 
> SSD/RAMDISK, and all other files are stored in HDD), we observed many 
> long-time BLOCKED threads on FsDatasetImpl in the DataNode. The following is 
> part of the jstack result:
> {noformat}
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48521 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread 
> t@93336
>java.lang.Thread.State: BLOCKED
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1111)
>   - waiting to lock <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by 
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - None
>   
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" - Thread 
> t@93335
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.createFileExclusively(Native Method)
>   at java.io.File.createNewFile(File.java:1012)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:66)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:271)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:286)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1140)
>   - locked <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - None
> {noformat}
> We measured the execution time of some operations in FsDatasetImpl during 
> the test. The results are as follows:
> !execution_time.png!
> The operations of 

[jira] [Updated] (HDFS-11160) VolumeScanner reports write-in-progress replicas as corrupt incorrectly

2016-11-20 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-11160:
---
Summary: VolumeScanner reports write-in-progress replicas as corrupt 
incorrectly  (was: VolumeScanner incorrectly reports good replicas as corrupt 
due to race condition)

> VolumeScanner reports write-in-progress replicas as corrupt incorrectly
> ---
>
> Key: HDFS-11160
> URL: https://issues.apache.org/jira/browse/HDFS-11160
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
> Environment: CDH5.7.4
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11160.reproduce.patch
>
>
> Due to a race condition initially reported in HDFS-6804, VolumeScanner may 
> erroneously detect good replicas as corrupt. This is serious because in some 
> cases it results in data loss if all replicas are declared corrupt.
> We are investigating an incident that caused a very high block corruption 
> rate in a relatively small cluster. Initially, we thought HDFS-11056 was to 
> blame. However, after applying HDFS-11056, we are still seeing VolumeScanner 
> report corrupt replicas.
> It turns out that if a replica is being appended to while VolumeScanner is 
> scanning it, VolumeScanner may use the new checksum to compare against old 
> data, causing a checksum mismatch.
> I have a unit test to reproduce the error. Will attach later.
> To fix it, I propose that a FinalizedReplica object should also have a 
> lastChecksum field like ReplicaBeingWritten, and that BlockSender should use 
> the in-memory lastChecksum to verify the partial data in the last chunk on 
> disk. Filing this jira to discuss a good fix for this issue.
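
A rough sketch of the proposed shape; the field and accessor names here are 
illustrative assumptions, not the final patch:
{code}
// Carry the checksum of the last (possibly partial) chunk on the finalized
// replica, so BlockSender can verify against the in-memory value instead of
// trusting on-disk data that may be mid-append.
class FinalizedReplica /* extends ReplicaInfo in HDFS */ {
  private byte[] lastPartialChunkChecksum;

  byte[] getLastPartialChunkChecksum() {
    return lastPartialChunkChecksum;
  }

  void setLastPartialChunkChecksum(byte[] checksum) {
    this.lastPartialChunkChecksum = checksum;
  }
}
{code}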






[jira] [Comment Edited] (HDFS-11112) Journal Nodes should refuse to format non-empty directories

2016-11-20 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15681269#comment-15681269
 ] 

Yiqun Lin edited comment on HDFS-11112 at 11/20/16 2:58 PM:


Hi [~jojochuang], here are some comments from me:
{quote}
if current/VERSION is missing, it will start with no problem as if nothing has 
happened. No auto-format. But when the NN tries to send edits to the JN, the JN 
will respond with JournalNotFormattedException.
{quote}
Can I understand this as meaning that JN startup should fail if current/VERSION 
is missing, so that we avoid the other exceptions that would be thrown in 
subsequent operations? I haven't tested this special case, and I don't think it 
is the situation addressed by the patch in this JIRA. If it is a problem, one 
option is to add an additional "empty check" similar to HDFS-10360. Currently, 
the method {{JNStorage.analyzeAndRecoverStorage}} doesn't check for a missing 
current/VERSION file when initializing the {{JNStorage}} instance.

{quote}
Do you know the behavior of the JN after your patch? Does the JN simply ignore 
the format? Does the JN continue to function normally?
{quote}
My patch makes the JN format fail if storage directories already exist. I'm 
sure it will ignore the format operation, since it throws an exception and 
terminates the subsequent format steps.

Correct me if I am wrong. Thanks.


was (Author: linyiqun):
Hi [~jojochuang], here are some comments from me:
{quote}
if current/VERSION is missing, it will start with no problem as if nothing has 
happened. No auto-format. But when the NN tries to send edits to the JN, the JN 
will respond with JournalNotFormattedException.
{quote}
Can I understand this as meaning that JN startup should fail if current/VERSION 
is missing, so that we avoid the other exceptions that would be thrown in 
subsequent operations? I haven't tested this special case, and I don't think it 
is the situation addressed by the patch in this JIRA. If it is a problem, one 
option is to add an additional "empty check" similar to HDFS-10360. Currently, 
the method {{JNStorage.analyzeAndRecoverStorage}} doesn't check for the 
current/VERSION file.

{quote}
Do you know the behavior of the JN after your patch? Does the JN simply ignore 
the format? Does the JN continue to function normally?
{quote}
My patch makes the JN format fail if storage directories already exist. I'm 
sure it will ignore the format operation, since it throws an exception and 
terminates the subsequent format steps.

Correct me if I am wrong. Thanks.

> Journal Nodes should refuse to format non-empty directories
> ---
>
> Key: HDFS-11112
> URL: https://issues.apache.org/jira/browse/HDFS-11112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Yiqun Lin
> Attachments: HDFS-11112.001.patch, HDFS-11112.002.patch
>
>
> Journal Nodes should reject the {{format}} RPC request if a storage directory 
> is non-empty. The relevant code is in {{JNStorage#format}}.
> {code}
>   void format(NamespaceInfo nsInfo) throws IOException {
> setStorageInfo(nsInfo);
> ...
> unlockAll();
> sd.clearDirectory();
> writeProperties(sd);
> createPaxosDir();
> analyzeStorage();
> {code}
> This would make the behavior similar to {{namenode -format -nonInteractive}}.
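A minimal sketch of the guard being proposed, assuming the format(nsInfo) entry 
point quoted above (illustrative, not the committed patch):
{code}
void format(NamespaceInfo nsInfo) throws IOException {
  // Refuse to format when the storage directory already has content,
  // mirroring "namenode -format -nonInteractive".
  File currentDir = sd.getCurrentDir();
  String[] contents = currentDir.list();
  if (contents != null && contents.length > 0) {
    throw new IOException("Storage directory " + currentDir
        + " is not empty; refusing to format.");
  }
  setStorageInfo(nsInfo);
  // ... existing steps: unlockAll(), sd.clearDirectory(),
  // writeProperties(sd), createPaxosDir(), analyzeStorage()
}
{code}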






[jira] [Commented] (HDFS-9475) execution of org.apache.hadoop.hdfs.net.TcpPeerServer.close() causes timeout on Hadoop-2.6.0 with IBM-JDK-1.8

2016-11-20 Thread Nasser Ebrahim (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15681274#comment-15681274
 ] 

Nasser Ebrahim commented on HDFS-9475:
--

Hi Rakesh,

We are interested to know whether this problem still exists with the latest 
level of IBM JDK 8. We are happy to analyze the issue if it still exists. In 
that case, we have a couple more questions to understand the issue better:
 - Does the issue occur only on s390x Linux, or is it reproducible on 
Intel/POWER Linux as well?
 - If the problem is reproducible on other Linux architectures, is it also 
reproducible with OpenJDK or Oracle JDK 8?

Thank you,
Nasser Ebrahim

> execution of org.apache.hadoop.hdfs.net.TcpPeerServer.close() causes timeout 
> on Hadoop-2.6.0 with IBM-JDK-1.8
> -
>
> Key: HDFS-9475
> URL: https://issues.apache.org/jira/browse/HDFS-9475
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0, 2.7.1
> Environment: IBM JDK 1.8.0
> Architecture:  s390x GNU/Linux
>Reporter: Rakesh Sharma
>Priority: Blocker
>
> ---
> Test set: org.apache.hadoop.hdfs.server.balancer.TestBalancer
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 101.69 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.server.balancer.TestBalancer
> testTwoReplicaShouldNotInSameDN(org.apache.hadoop.hdfs.server.balancer.TestBalancer)
>   Time elapsed: 100.008 sec  <<< ERROR!
> java.lang.Exception: test timed out after 100000 milliseconds
>   at 
> java.nio.channels.spi.AbstractSelectableChannel.implCloseChannel(AbstractSelectableChannel.java:245)
>   at 
> java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:126)
>   at sun.nio.ch.ServerSocketAdaptor.close(ServerSocketAdaptor.java:149)
>   at 
> org.apache.hadoop.hdfs.net.TcpPeerServer.close(TcpPeerServer.java:153)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.kill(DataXceiverServer.java:223)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:1663)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:1750)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1721)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1705)
>   at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.testTwoReplicaShouldNotInSameDN(TestBalancer.java:1382)






[jira] [Commented] (HDFS-11112) Journal Nodes should refuse to format non-empty directories

2016-11-20 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15681269#comment-15681269
 ] 

Yiqun Lin commented on HDFS-11112:
--

Hi [~jojochuang], here are some comments from me:
{quote}
if current/VERSION is missing, it will start with no problem as if nothing has 
happened. No auto-format. But when the NN tries to send edits to the JN, the JN 
will respond with JournalNotFormattedException.
{quote}
Can I understand this as meaning that JN startup should fail if current/VERSION 
is missing, so that we avoid the other exceptions that would be thrown in 
subsequent operations? I haven't tested this special case, and I don't think it 
is the situation addressed by the patch in this JIRA. If it is a problem, one 
option is to add an additional "empty check" similar to HDFS-10360. Currently, 
the method {{JNStorage.analyzeAndRecoverStorage}} doesn't check for the 
current/VERSION file.

{quote}
Do you know the behavior of the JN after your patch? Does the JN simply ignore 
the format? Does the JN continue to function normally?
{quote}
My patch makes the JN format fail if storage directories already exist. I'm 
sure it will ignore the format operation, since it throws an exception and 
terminates the subsequent format steps.

Correct me if I am wrong. Thanks.

> Journal Nodes should refuse to format non-empty directories
> ---
>
> Key: HDFS-11112
> URL: https://issues.apache.org/jira/browse/HDFS-11112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Yiqun Lin
> Attachments: HDFS-11112.001.patch, HDFS-11112.002.patch
>
>
> Journal Nodes should reject the {{format}} RPC request if a storage directory 
> is non-empty. The relevant code is in {{JNStorage#format}}.
> {code}
>   void format(NamespaceInfo nsInfo) throws IOException {
> setStorageInfo(nsInfo);
> ...
> unlockAll();
> sd.clearDirectory();
> writeProperties(sd);
> createPaxosDir();
> analyzeStorage();
> {code}
> This would make the behavior similar to {{namenode -format -nonInteractive}}.


