[jira] [Updated] (HDFS-14274) EC: "hdfs dfs -ls -e" throws NPE When EC Policy of Directory set as "-replicate"

2019-02-12 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14274:

Status: Patch Available  (was: Open)

> EC: "hdfs dfs -ls -e" throws NPE When EC Policy of Directory set as 
> "-replicate"
> 
>
> Key: HDFS-14274
> URL: https://issues.apache.org/jira/browse/HDFS-14274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.1.1
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14274-01.patch
>
>
> "hdfs dfs -ls -e" throws NPE When EC Policy of Directory set as "-replicate"
> Steps:
> - Create a directory
> - Set the EC policy of the directory to "-replicate"
> - Check the folder details with the command "hdfs dfs -ls -e" (a repro 
> sketch follows below)
>  A NullPointerException is displayed: < ls: java.lang.NullPointerException >
>  
> Actual result:
>  ls: java.lang.NullPointerException
>  
> Expected result:
>  Should not throw a NullPointerException
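
For reference, a minimal Java repro sketch of the steps above (assumptions: a 
running 3.1.1 cluster reachable through the default Configuration, and that the 
CLI's "-replicate" flag maps to the system REPLICATION policy):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.SystemErasureCodingPolicies;

public class Hdfs14274Repro {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS points at the affected HDFS cluster.
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    Path dir = new Path("/ecdir");
    dfs.mkdirs(dir);
    // Equivalent of: hdfs ec -setPolicy -path /ecdir -replicate
    dfs.setErasureCodingPolicy(dir,
        SystemErasureCodingPolicies.getReplicationPolicy().getName());
    // "hdfs dfs -ls -e /ecdir" then fails with: ls: java.lang.NullPointerException
  }
}
{code}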



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14230) RBF: Throw RetriableException instead of IOException when no namenodes available

2019-02-12 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766866#comment-16766866
 ] 

Brahma Reddy Battula edited comment on HDFS-14230 at 2/13/19 7:14 AM:
--

My late +1.
{quote}HDFS-14090 plans to introduce better resource isolation, in such a case 
RetriableException would make more sense as dedicated/isolated resources will 
be allotted per name node.
{quote}
Maybe we can introduce separate handlers for each nameservice. Will check in 
HDFS-14090.


was (Author: brahmareddy):
My late +1.

> RBF: Throw RetriableException instead of IOException when no namenodes 
> available
> 
>
> Key: HDFS-14230
> URL: https://issues.apache.org/jira/browse/HDFS-14230
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14230-HDFS-13891.001.patch, 
> HDFS-14230-HDFS-13891.002.patch, HDFS-14230-HDFS-13891.003.patch, 
> HDFS-14230-HDFS-13891.004.patch, HDFS-14230-HDFS-13891.005.patch, 
> HDFS-14230-HDFS-13891.006.patch
>
>
> Failover usually happens when upgrading namenodes, and there are no active 
> namenodes for some seconds. Accessing HDFS through the router fails at that 
> moment, which can make jobs fail or hang. Some Hive job logs are as 
> follows:
> {code:java}
> 2019-01-03 16:12:08,337 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
> 133.33 sec
> MapReduce Total cumulative CPU time: 2 minutes 13 seconds 330 msec
> Ended Job = job_1542178952162_24411913
> Launching Job 4 out of 6
> Exception in thread "Thread-86" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): No namenode 
> available under nameservice Cluster3
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.shouldRetry(RouterRpcClient.java:328)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:488)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:495)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:385)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:760)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category READ is not supported in state standby
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1804)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1338)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3925)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1014)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
> at 

[jira] [Commented] (HDFS-14230) RBF: Throw RetriableException instead of IOException when no namenodes available

2019-02-12 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766866#comment-16766866
 ] 

Brahma Reddy Battula commented on HDFS-14230:
-

My late +1.

> RBF: Throw RetriableException instead of IOException when no namenodes 
> available
> 
>
> Key: HDFS-14230
> URL: https://issues.apache.org/jira/browse/HDFS-14230
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14230-HDFS-13891.001.patch, 
> HDFS-14230-HDFS-13891.002.patch, HDFS-14230-HDFS-13891.003.patch, 
> HDFS-14230-HDFS-13891.004.patch, HDFS-14230-HDFS-13891.005.patch, 
> HDFS-14230-HDFS-13891.006.patch
>
>
> Failover usually happens when upgrading namenodes, and there are no active 
> namenodes for some seconds. Accessing HDFS through the router fails at that 
> moment, which can make jobs fail or hang. Some Hive job logs are as 
> follows:
> {code:java}
> 2019-01-03 16:12:08,337 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
> 133.33 sec
> MapReduce Total cumulative CPU time: 2 minutes 13 seconds 330 msec
> Ended Job = job_1542178952162_24411913
> Launching Job 4 out of 6
> Exception in thread "Thread-86" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): No namenode 
> available under nameservice Cluster3
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.shouldRetry(RouterRpcClient.java:328)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:488)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:495)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:385)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:760)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category READ is not supported in state standby
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1804)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1338)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3925)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1014)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
> {code}
> Digging into the code, maybe we can throw StandbyException when no namenodes 
> are available. Client will 
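
Per the issue title, the idea is to surface a retriable error instead. A 
minimal sketch of that change, using the RouterRpcClient seen in the stack 
trace above (the condition and variable names here are illustrative, not 
taken from the patch):

{code:java}
// Inside RouterRpcClient, once every namenode of the nameservice has been
// tried and none is active, throw org.apache.hadoop.ipc.RetriableException so
// client retry policies engage, instead of a plain IOException that fails the
// job outright.
if (noNamenodeAvailable) {  // illustrative condition
  throw new RetriableException(
      "No namenode available under nameservice " + nsId);
}
{code}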

[jira] [Commented] (HDDS-1034) TestOzoneRpcClient and TestOzoneRpcClientWithRatis failure

2019-02-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766844#comment-16766844
 ] 

Hudson commented on HDDS-1034:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15945 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15945/])
HDDS-1034. TestOzoneRpcClient and TestOzoneRpcClientWithRatis failure. (bharat: 
rev cf4aeccfa09e3233aba2b53590c22fb4e8b27120)
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestReadRetries.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java


> TestOzoneRpcClient and TestOzoneRpcClientWithRatis failure
> --
>
> Key: HDDS-1034
> URL: https://issues.apache.org/jira/browse/HDDS-1034
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Mukul Kumar Singh
>Priority: Major
> Attachments: HDDS-1034.001.patch
>
>
> Sometimes on Jenkins runs, we see the test testPutKey failing with the error 
> below. In the Jenkins run below, not only this test but also other tests such 
> as testReadKeyWithCorruptedData and 
> testMultipartUploadWithPartsMisMatchWithIncorrectPartName fail with the 
> same error when writing a key.
> https://builds.apache.org/job/PreCommit-HDDS-Build/2139/testReport/
> {code:java}
> java.io.IOException: Unexpected Storage Container Exception: 
> java.util.concurrent.ExecutionException: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunkToContainer(BlockOutputStream.java:622)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunk(BlockOutputStream.java:464)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.close(BlockOutputStream.java:480)
>  at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.close(BlockOutputStreamEntry.java:137)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleFlushOrClose(KeyOutputStream.java:488)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:321)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:258)
>  at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49)
>  at java.io.OutputStream.write(OutputStream.java:75) at 
> org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testPutKey(TestOzoneRpcClientAbstract.java:557)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
>  at org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>  at 
> 

[jira] [Commented] (HDDS-1034) TestOzoneRpcClient and TestOzoneRpcClientWithRatis failure

2019-02-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766835#comment-16766835
 ] 

Bharat Viswanadham commented on HDDS-1034:
--

+1. Thank you [~msingh] for root-causing the issue. I think I now understand 
the cause: in the other tests, keys are created with STANDALONE and replication 
factor 1, and this might be causing the issue. 

I will commit this shortly.
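
For context, a sketch of the kind of key write these tests perform (class and 
enum names assumed from the Ozone client API of this era; the exact createKey 
overload may differ):

{code:java}
// Keys written as STAND_ALONE with factor ONE go through a single-datanode
// pipeline, unlike the RATIS/THREE replicated path the Ratis tests exercise.
// "bucket" is an org.apache.hadoop.ozone.client.OzoneBucket.
byte[] data = "sample value".getBytes();
try (OzoneOutputStream out = bucket.createKey(
    "key1", data.length, ReplicationType.STAND_ALONE, ReplicationFactor.ONE)) {
  out.write(data);
}
{code}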

> TestOzoneRpcClient and TestOzoneRpcClientWithRatis failure
> --
>
> Key: HDDS-1034
> URL: https://issues.apache.org/jira/browse/HDDS-1034
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Mukul Kumar Singh
>Priority: Major
> Attachments: HDDS-1034.001.patch
>
>
> Sometimes on Jenkins runs, we see the test testPutKey failing with the error 
> below. In the Jenkins run below, not only this test but also other tests such 
> as testReadKeyWithCorruptedData and 
> testMultipartUploadWithPartsMisMatchWithIncorrectPartName fail with the 
> same error when writing a key.
> https://builds.apache.org/job/PreCommit-HDDS-Build/2139/testReport/
> {code:java}
> java.io.IOException: Unexpected Storage Container Exception: 
> java.util.concurrent.ExecutionException: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunkToContainer(BlockOutputStream.java:622)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunk(BlockOutputStream.java:464)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.close(BlockOutputStream.java:480)
>  at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.close(BlockOutputStreamEntry.java:137)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleFlushOrClose(KeyOutputStream.java:488)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:321)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:258)
>  at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49)
>  at java.io.OutputStream.write(OutputStream.java:75) at 
> org.apache.hadoop.ozone.client.rpc.TestOzoneRpcClientAbstract.testPutKey(TestOzoneRpcClientAbstract.java:557)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
>  at org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) 
> Caused by: java.util.concurrent.ExecutionException: 
> 

[jira] [Commented] (HDDS-972) Add support for configuring multiple OMs

2019-02-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766834#comment-16766834
 ] 

Hudson commented on HDDS-972:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15944 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15944/])
HDDS-972. Add support for configuring multiple OMs. Contributed by (bharat: rev 
917ac9f108fa980a90720ba87e5e5b35d39aa358)
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneIllegalArgumentException.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestOzoneConfigurationFields.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisClient.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerRatisServer.java
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneHAClusterImpl.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMNodeDetails.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerConfiguration.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManager.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneCluster.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java


> Add support for configuring multiple OMs
> 
>
> Key: HDDS-972
> URL: https://issues.apache.org/jira/browse/HDDS-972
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-972.000.patch, HDDS-972.001.patch, 
> HDDS-972.002.patch, HDDS-972.003.patch, HDDS-972.004.patch, 
> HDDS-972.005.patch, HDDS-972.006.patch, HDDS-972.007.patch
>
>
> For OM HA, we would need to run multiple (at least 3) OM services so that we 
> can form a replicated Ratis ring of OMs. This Jira aims to add support for 
> configuring multiple OMs.
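
For orientation, a sketch of what a multi-OM configuration could look like 
with this patch (the key names are assumptions based on the OMConfigKeys and 
ozone-default.xml edits listed above; hosts and ports are illustrative):

{code:java}
// Declares one OM service with three nodes, mirroring the HDFS HA convention
// of per-service and per-node suffixed configuration keys.
OzoneConfiguration conf = new OzoneConfiguration();
conf.set("ozone.om.service.ids", "omService1");
conf.set("ozone.om.nodes.omService1", "om1,om2,om3");
conf.set("ozone.om.address.omService1.om1", "host1.example.com:9862");
conf.set("ozone.om.address.omService1.om2", "host2.example.com:9862");
conf.set("ozone.om.address.omService1.om3", "host3.example.com:9862");
{code}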






[jira] [Updated] (HDDS-972) Add support for configuring multiple OMs

2019-02-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-972:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you [~hanishakoneru] for the contribution, and [~shashikant] and 
[~linyiqun] for the reviews.

I have committed this to trunk.

> Add support for configuring multiple OMs
> 
>
> Key: HDDS-972
> URL: https://issues.apache.org/jira/browse/HDDS-972
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-972.000.patch, HDDS-972.001.patch, 
> HDDS-972.002.patch, HDDS-972.003.patch, HDDS-972.004.patch, 
> HDDS-972.005.patch, HDDS-972.006.patch, HDDS-972.007.patch
>
>
> For OM HA, we would need to run multiple (at least 3) OM services so that we 
> can form a replicated Ratis ring of OMs. This Jira aims to add support for 
> configuring multiple OMs.






[jira] [Commented] (HDDS-972) Add support for configuring multiple OMs

2019-02-12 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766829#comment-16766829
 ] 

Hanisha Koneru commented on HDDS-972:
-

Thank you [~bharatviswa], [~shashikant] and [~linyiqun].

> Add support for configuring multiple OMs
> 
>
> Key: HDDS-972
> URL: https://issues.apache.org/jira/browse/HDDS-972
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-972.000.patch, HDDS-972.001.patch, 
> HDDS-972.002.patch, HDDS-972.003.patch, HDDS-972.004.patch, 
> HDDS-972.005.patch, HDDS-972.006.patch, HDDS-972.007.patch
>
>
> For OM HA, we would need to run multiple (at least 3) OM services so that we 
> can form a replicated Ratis ring of OMs. This Jira aims to add support for 
> configuring multiple OMs.






[jira] [Commented] (HDDS-1038) Datanode fails to connect with secure SCM

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766825#comment-16766825
 ] 

Hadoop QA commented on HDDS-1038:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m  
8s{color} | {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} root: The patch generated 3 new + 0 unchanged - 
0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
52s{color} | {color:red} root in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 24s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 52s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1038 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958510/HDDS-1038.02.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  
shellcheck  |
| uname | Linux 73c68692f3bf 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 7b11b40 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| shellcheck | v0.4.6 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2248/artifact/out/patch-mvninstall-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2248/artifact/out/diff-checkstyle-root.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2248/artifact/out/patch-javadoc-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2248/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2248/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2248/testReport/ |
| Max. process+thread count | 1228 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/server-scm hadoop-ozone/dist U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2248/console |
| Powered by | Apache Yetus 

[jira] [Comment Edited] (HDDS-972) Add support for configuring multiple OMs

2019-02-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766817#comment-16766817
 ] 

Bharat Viswanadham edited comment on HDDS-972 at 2/13/19 6:09 AM:
--

Thank you [~hanishakoneru] for updating the patch and for the offline discussion.

+1 LGTM.

Will commit this shortly. 


was (Author: bharatviswa):
Thank you [~hanishakoneru] for updating the patch.

+1 LGTM.

Will commit this shortly. 

> Add support for configuring multiple OMs
> 
>
> Key: HDDS-972
> URL: https://issues.apache.org/jira/browse/HDDS-972
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-972.000.patch, HDDS-972.001.patch, 
> HDDS-972.002.patch, HDDS-972.003.patch, HDDS-972.004.patch, 
> HDDS-972.005.patch, HDDS-972.006.patch, HDDS-972.007.patch
>
>
> For OM HA, we would need to run multiple (at least 3) OM services so that we 
> can form a replicated Ratis ring of OMs. This Jira aims to add support for 
> configuring multiple OMs.






[jira] [Updated] (HDDS-972) Add support for configuring multiple OMs

2019-02-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-972:

Fix Version/s: 0.5.0

> Add support for configuring multiple OMs
> 
>
> Key: HDDS-972
> URL: https://issues.apache.org/jira/browse/HDDS-972
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-972.000.patch, HDDS-972.001.patch, 
> HDDS-972.002.patch, HDDS-972.003.patch, HDDS-972.004.patch, 
> HDDS-972.005.patch, HDDS-972.006.patch, HDDS-972.007.patch
>
>
> For OM HA, we would need to run multiple (at least 3) OM services so that we 
> can form a replicated Ratis ring of OMs. This Jira aims to add support for 
> configuring multiple OMs.






[jira] [Commented] (HDDS-972) Add support for configuring multiple OMs

2019-02-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766817#comment-16766817
 ] 

Bharat Viswanadham commented on HDDS-972:
-

Thank you [~hanishakoneru] for updating the patch.

+1 LGTM.

Will commit this shortly. 

> Add support for configuring multiple OMs
> 
>
> Key: HDDS-972
> URL: https://issues.apache.org/jira/browse/HDDS-972
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-972.000.patch, HDDS-972.001.patch, 
> HDDS-972.002.patch, HDDS-972.003.patch, HDDS-972.004.patch, 
> HDDS-972.005.patch, HDDS-972.006.patch, HDDS-972.007.patch
>
>
> For OM HA, we would need to run multiple (at least 3) OM services so that we 
> can form a replicated Ratis ring of OMs. This Jira aims to add support for 
> configuring multiple OMs.






[jira] [Commented] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766798#comment-16766798
 ] 

Hadoop QA commented on HDDS-134:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} root: The patch generated 6 new + 1 unchanged - 
0 fixed = 7 total (was 1) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 26s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 25s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.ozone.client.rpc.TestContainerStateMachine |
|   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
|   | hadoop.ozone.TestSecureOzoneCluster |
|   | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-134 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958508/HDDS-134.01.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux a35ff59131ce 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 7b11b40 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2247/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2247/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2247/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2247/testReport/ |
| Max. process+thread count | 1217 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/integration-test 
hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2247/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> SCM CA: OM sends CSR and uses certificate issued by SCM
> 

[jira] [Assigned] (HDFS-14274) EC: "hdfs dfs -ls -e" throws NPE When EC Policy of Directory set as "-replicate"

2019-02-12 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reassigned HDFS-14274:
---

Assignee: Ayush Saxena

> EC: "hdfs dfs -ls -e" throws NPE When EC Policy of Directory set as 
> "-replicate"
> 
>
> Key: HDFS-14274
> URL: https://issues.apache.org/jira/browse/HDFS-14274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.1.1
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Major
>
> "hdfs dfs -ls -e" throws NPE When EC Policy of Directory set as "-replicate"
> Steps:
> - Create a directory
> - Set the EC policy of the directory to "-replicate"
> - Check the folder details with the command "hdfs dfs -ls -e"
>  A NullPointerException is displayed: < ls: java.lang.NullPointerException >
>  
> Actual result:
>  ls: java.lang.NullPointerException
>  
> Expected result:
>  Should not throw a NullPointerException






[jira] [Created] (HDFS-14274) EC: "hdfs dfs -ls -e" throws NPE When EC Policy of Directory set as "-replicate"

2019-02-12 Thread Souryakanta Dwivedy (JIRA)
Souryakanta Dwivedy created HDFS-14274:
--

 Summary: EC: "hdfs dfs -ls -e" throws NPE When EC Policy of 
Directory set as "-replicate"
 Key: HDFS-14274
 URL: https://issues.apache.org/jira/browse/HDFS-14274
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.1.1
Reporter: Souryakanta Dwivedy


"hdfs dfs -ls -e" throws NPE When EC Policy of Directory set as "-replicate"

Steps:

- Create a directory
- Set the EC policy of the directory to "-replicate"
- Check the folder details with the command "hdfs dfs -ls -e"
 A NullPointerException is displayed: < ls: java.lang.NullPointerException >
 
Actual result:
 ls: java.lang.NullPointerException
 
Expected result:
 Should not throw a NullPointerException






[jira] [Commented] (HDDS-1038) Datanode fails to connect with secure SCM

2019-02-12 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766780#comment-16766780
 ] 

Ajay Kumar commented on HDDS-1038:
--

[~xyao] thanks for the review. Patch v2 addresses both comments.

> Datanode fails to connect with secure SCM
> -
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch
>
>
> In a secure Ozone cluster, datanodes fail to connect to SCM over 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Updated] (HDDS-1038) Datanode fails to connect with secure SCM

2019-02-12 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1038:
-
Attachment: HDDS-1038.02.patch

> Datanode fails to connect with secure SCM
> -
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch, 
> HDDS-1038.02.patch
>
>
> In a secure Ozone cluster, datanodes fail to connect to SCM over 
> {{StorageContainerDatanodeProtocol}}. 






[jira] [Updated] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-02-12 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-134:

Attachment: HDDS-134.01.patch

> SCM CA: OM sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-134
> URL: https://issues.apache.org/jira/browse/HDDS-134
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-134.00.patch, HDDS-134.01.patch
>
>
> Initialize OM keypair and get SCM signed certificate.






[jira] [Updated] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier

2019-02-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1061:
---
Target Version/s: 0.4.0  (was: 0.5.0)

> DelegationToken: Add certificate serial id to Ozone Delegation Token 
> Identifier
> 
>
> Key: HDDS-1061
> URL: https://issues.apache.org/jira/browse/HDDS-1061
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1061.00.patch, HDDS-1061.01.patch, 
> HDDS-1061.02.patch
>
>
> 1. Add the certificate serial id to the Ozone Delegation Token Identifier. 
> Required for OM HA support.
> 2. Validate Ozone tokens based on the public key from the OM certificate (see 
> the sketch below).
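
A sketch of point 1 (field and class details are illustrative, not from the 
patch): the token identifier is a Writable, so the issuing OM's certificate 
serial id can be serialized alongside the existing fields.

{code:java}
// Hypothetical fragment of an OzoneTokenIdentifier-style class: carry the
// issuing OM's certificate serial id so a token can later be validated
// against the matching OM certificate.
private String omCertSerialId;

@Override
public void write(DataOutput out) throws IOException {
  super.write(out);
  WritableUtils.writeString(out, omCertSerialId);
}

@Override
public void readFields(DataInput in) throws IOException {
  super.readFields(in);
  omCertSerialId = WritableUtils.readString(in);
}
{code}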






[jira] [Commented] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-02-12 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766755#comment-16766755
 ] 

Ajay Kumar commented on HDDS-134:
-

cc: [~xyao], [~anu] 
While adding a test I noticed that the hostname in the subject and issuer DN 
is case sensitive.

> SCM CA: OM sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-134
> URL: https://issues.apache.org/jira/browse/HDDS-134
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-134.00.patch
>
>
> Initialize OM keypair and get SCM signed certificate.






[jira] [Commented] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766759#comment-16766759
 ] 

Hadoop QA commented on HDDS-134:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDDS-134 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-134 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958507/HDDS-134.00.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2246/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> SCM CA: OM sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-134
> URL: https://issues.apache.org/jira/browse/HDDS-134
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-134.00.patch
>
>
> Initialize OM keypair and get SCM signed certificate.






[jira] [Commented] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-02-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766758#comment-16766758
 ] 

Anu Engineer commented on HDDS-134:
---

{quote}While adding a test I noticed that the hostname in the subject and 
issuer DN is case sensitive.
{quote}
I am sorry, but I don't completely understand this statement, so my response 
might be off. AFAIK, DNS A records are case sensitive, so the hostname field 
would be expected to maintain that, right?
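
If case-insensitive DN matching turns out to be the desired behavior, one 
standard JDK option is to compare the canonical form of the DNs rather than 
the raw strings (a sketch, not from the patch):

{code:java}
import javax.security.auth.x500.X500Principal;

// X500Principal's CANONICAL form lower-cases attribute values, so two DNs
// that differ only in hostname case compare as equal.
X500Principal a = new X500Principal("CN=OM.Example.COM,O=hadoop");
X500Principal b = new X500Principal("CN=om.example.com,O=hadoop");
boolean sameDn = a.getName(X500Principal.CANONICAL)
    .equals(b.getName(X500Principal.CANONICAL));  // true
{code}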

> SCM CA: OM sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-134
> URL: https://issues.apache.org/jira/browse/HDDS-134
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-134.00.patch
>
>
> Initialize OM keypair and get SCM signed certificate.






[jira] [Updated] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-02-12 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-134:

Attachment: HDDS-134.00.patch

> SCM CA: OM sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-134
> URL: https://issues.apache.org/jira/browse/HDDS-134
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-134.00.patch
>
>
> Initialize OM keypair and get SCM signed certificate.






[jira] [Updated] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-02-12 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-134:

Description: Initialize OM keypair and get SCM signed certificate.

> SCM CA: OM sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-134
> URL: https://issues.apache.org/jira/browse/HDDS-134
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-134.00.patch
>
>
> Initialize OM keypair and get SCM signed certificate.






[jira] [Updated] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-02-12 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-134:

Attachment: (was: HDDS-134.00.patch)

> SCM CA: OM sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-134
> URL: https://issues.apache.org/jira/browse/HDDS-134
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> Initialize OM keypair and get SCM signed certificate.






[jira] [Updated] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-02-12 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-134:

Status: Patch Available  (was: Open)

> SCM CA: OM sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-134
> URL: https://issues.apache.org/jira/browse/HDDS-134
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-134.00.patch
>
>







[jira] [Updated] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-02-12 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-134:

Attachment: HDDS-134.00.patch

> SCM CA: OM sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-134
> URL: https://issues.apache.org/jira/browse/HDDS-134
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-134.00.patch
>
>







[jira] [Commented] (HDDS-1047) Fix TestRatisPipelineProvider#testCreatePipelineWithFactor

2019-02-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766716#comment-16766716
 ] 

Hudson commented on HDDS-1047:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15942 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15942/])
HDDS-1047. Fix TestRatisPipelineProvider#testCreatePipelineWithFactor. (yqlin: 
rev 06d7890bdd3e597824f9ca02b453d45eef445f49)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineProvider.java


> Fix TestRatisPipelineProvider#testCreatePipelineWithFactor
> --
>
> Key: HDDS-1047
> URL: https://issues.apache.org/jira/browse/HDDS-1047
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Nilotpal Nandi
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1047.001.patch, HDDS-1047.002.patch, 
> HDDS-1047.003.patch
>
>







[jira] [Commented] (HDFS-14268) RBF: Fix the location of the DNs in getDatanodeReport()

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766709#comment-16766709
 ] 

Hadoop QA commented on HDFS-14268:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
18s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m 24s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14268 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958483/HDFS-14268-HDFS-13891.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 70f877162c34 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 36aca66 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26201/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26201/testReport/ |
| Max. process+thread count | 1758 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 

[jira] [Commented] (HDFS-14162) Balancer should work with ObserverNode

2019-02-12 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766707#comment-16766707
 ] 

Konstantin Shvachko commented on HDFS-14162:


# I guess you missed one license, in BalancerProtocols.
# In {{CombinedProxyInvocationHandler.invoke()}} it may be cheaper (or not?) to 
first check whether the underlying class has the method rather than invoking it 
directly, using {{method.getDeclaringClass().getMethod(...)}}.
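
For illustration, a minimal sketch of that membership check inside a combined 
{{InvocationHandler}} (the class shape and the {{proxies}} field here are 
hypothetical, not the actual patch):
{code:java}
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;

/** Hypothetical sketch only; the real handler lives in the HDFS-14162 patch. */
class CombinedProxyInvocationHandlerSketch implements InvocationHandler {
  // e.g. one proxy implementing ClientProtocol, one implementing NamenodeProtocol
  private final Object[] proxies;

  CombinedProxyInvocationHandlerSketch(Object... proxies) {
    this.proxies = proxies;
  }

  @Override
  public Object invoke(Object proxy, Method method, Object[] args)
      throws Throwable {
    for (Object target : proxies) {
      try {
        // Membership check: throws NoSuchMethodException when this proxy's
        // class does not declare the method, instead of invoking blindly
        // and sorting out the failure afterwards.
        target.getClass().getMethod(method.getName(), method.getParameterTypes());
        return method.invoke(target, args);
      } catch (NoSuchMethodException e) {
        // Not implemented by this proxy; try the next one.
      }
    }
    throw new UnsupportedOperationException("Unsupported method: " + method);
  }
}
{code}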

> Balancer should work with ObserverNode
> --
>
> Key: HDFS-14162
> URL: https://issues.apache.org/jira/browse/HDFS-14162
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-14162-HDFS-12943.wip0.patch, HDFS-14162.000.patch, 
> HDFS-14162.001.patch, HDFS-14162.002.patch, testBalancerWithObserver-3.patch, 
> testBalancerWithObserver.patch
>
>
> Balancer provides a substantial RPC load on NameNode. It would be good to 
> divert Balancer RPCs {{getBlocks()}}, etc. to ObserverNode. The main problem 
> is that Balancer uses {{NamenodeProtocol}}, while ORPP currently supports 
> only {{ClientProtocol}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1094) Performance testing infrastructure : Special handling for zero-filled chunks on the Datanode

2019-02-12 Thread Supratim Deka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766689#comment-16766689
 ] 

Supratim Deka commented on HDDS-1094:
-

Yes, it would definitely be easier with a ramdisk.
But a 10GigE network would pump in data at a GB/s rate, so it would be 
impossible to sustain peak throughput for any meaningful length of time. The 
dataset for the test runs would be quite constrained in size, and the OM 
database would not be stressed at all.

> Performance testing infrastructure : Special handling for zero-filled chunks 
> on the Datanode
> 
>
> Key: HDDS-1094
> URL: https://issues.apache.org/jira/browse/HDDS-1094
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Priority: Major
>
> Goal:
> Make Ozone chunk Read/Write operations CPU/network bound for specially 
> constructed performance micro benchmarks.
> Remove disk bandwidth and latency constraints - running ozone data path 
> against extreme low-latency & high throughput storage will expose performance 
> bottlenecks in the flow. But low-latency storage (NVMe flash drives, storage 
> class memory, etc.) is expensive and its availability is limited. Is there a 
> workaround which achieves similar running conditions for the software without 
> actually having the low latency storage? At least for specially constructed 
> datasets -  for example zero-filled blocks (*not* zero-length blocks).
> Required characteristics of the solution:
> No changes in the Ozone client, OM, or SCM; changes limited to the Datanode, 
> with a minimal footprint in datanode code.
> Possible high-level approach:
> The ChunkManager and ChunkUtils can allow writeChunk for zero-filled chunks 
> to be dropped without actually writing to the local filesystem. Similarly, 
> readChunk can construct a zero-filled buffer without reading from the local 
> filesystem whenever it detects a zero-filled chunk. Specifics of how to 
> detect and record a zero-filled chunk can be discussed on this jira, along 
> with how to control this behaviour and make it available only for internal 
> testing.
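
As a starting point for that discussion, a minimal sketch of the detection and 
synthesis (names and placement are illustrative; how this hooks into 
ChunkManager/ChunkUtils is exactly what is open here):
{code:java}
import java.nio.ByteBuffer;

/** Illustrative helper only; integration with ChunkManager is TBD on this jira. */
final class ZeroChunkSketch {
  private ZeroChunkSketch() { }

  /** writeChunk path: returns true if every byte is zero, so the write can be dropped. */
  static boolean isZeroFilled(ByteBuffer data) {
    ByteBuffer b = data.duplicate(); // leave the caller's position untouched
    while (b.hasRemaining()) {
      if (b.get() != 0) {
        return false;
      }
    }
    return true; // record only the chunk length in metadata, skip the file I/O
  }

  /** readChunk path: synthesize the chunk instead of reading the filesystem. */
  static ByteBuffer readZeroChunk(int length) {
    return ByteBuffer.allocate(length); // heap buffers are zero-initialized
  }
}
{code}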



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1047) Fix TestRatisPipelineProvider#testCreatePipelineWithFactor

2019-02-12 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-1047:

   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

Committed this.

Thanks [~nilotpalnandi] for the contribution and thanks [~bharatviswa] for the 
review.

> Fix TestRatisPipelineProvider#testCreatePipelineWithFactor
> --
>
> Key: HDDS-1047
> URL: https://issues.apache.org/jira/browse/HDDS-1047
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Nilotpal Nandi
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1047.001.patch, HDDS-1047.002.patch, 
> HDDS-1047.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1047) Fix TestRatisPipelineProvider#testCreatePipelineWithFactor

2019-02-12 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766691#comment-16766691
 ] 

Yiqun Lin commented on HDDS-1047:
-

+1. Will commit shortly.

> Fix TestRatisPipelineProvider#testCreatePipelineWithFactor
> --
>
> Key: HDDS-1047
> URL: https://issues.apache.org/jira/browse/HDDS-1047
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-1047.001.patch, HDDS-1047.002.patch, 
> HDDS-1047.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14273) Fix checkstyle issues in BlockLocation's method javadoc

2019-02-12 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-14273:
--
Description: BlockLocation.java has checkstyle issues in most of the 
methods' javadocs and an indentation error. 

> Fix checkstyle issues in BlockLocation's method javadoc
> ---
>
> Key: HDFS-14273
> URL: https://issues.apache.org/jira/browse/HDFS-14273
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shweta
>Assignee: Shweta
>Priority: Trivial
> Attachments: HDFS-14273.001.patch
>
>
> BlockLocation.java has checkstyle issues in most of the methods' javadocs and 
> an indentation error. 
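
For illustration, the kind of cleanup involved (a hypothetical method, not 
BlockLocation's actual code):
{code:java}
/** Hypothetical example of a checkstyle-clean method javadoc. */
class JavadocStyleExample {
  /**
   * Returns the length of the block, in bytes.
   *
   * @return the block length in bytes
   */
  long getLength() {
    return 0L; // placeholder body; only the javadoc shape matters here
  }
}
{code}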



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14273) Fix checkstyle issues in BlockLocation's method javadoc

2019-02-12 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-14273:
--
Attachment: HDFS-14273.001.patch

> Fix checkstyle issues in BlockLocation's method javadoc
> ---
>
> Key: HDFS-14273
> URL: https://issues.apache.org/jira/browse/HDFS-14273
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shweta
>Assignee: Shweta
>Priority: Trivial
> Attachments: HDFS-14273.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14273) Fix checkstyle issues in BlockLocation's method javadoc

2019-02-12 Thread Shweta (JIRA)
Shweta created HDFS-14273:
-

 Summary: Fix checkstyle issues in BlockLocation's method javadoc
 Key: HDFS-14273
 URL: https://issues.apache.org/jira/browse/HDFS-14273
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Shweta






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14273) Fix checkstyle issues in BlockLocation's method javadoc

2019-02-12 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta reassigned HDFS-14273:
-

Assignee: Shweta

> Fix checkstyle issues in BlockLocation's method javadoc
> ---
>
> Key: HDFS-14273
> URL: https://issues.apache.org/jira/browse/HDFS-14273
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shweta
>Assignee: Shweta
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14269) [SBN read] Observer node switches back to Standby after restart

2019-02-12 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-14269.

Resolution: Won't Fix

> [SBN read] Observer node switches back to Standby after restart
> ---
>
> Key: HDFS-14269
> URL: https://issues.apache.org/jira/browse/HDFS-14269
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> The Observer state is not persistent. Once it restarts it becomes a Standby 
> node again. Since it does not participate in NameNode failover for now, 
> should we assume an Observer node is always an Observer node? This state 
> should be persisted somewhere, like in ZooKeeper.
> CC: [~csun] [~shv]. I'd like to get input from you. Have you discussed this 
> before?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14272) [SBN read] HDFS command line tools do not guarantee consistency

2019-02-12 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766675#comment-16766675
 ] 

Wei-Chiu Chuang commented on HDFS-14272:


Perhaps we should use wall clock time as the timestamp, just like what's done 
in [Spanner|https://ai.google/research/pubs/pub39966] :)

> [SBN read] HDFS command line tools do not guarantee consistency
> -
>
> Key: HDFS-14272
> URL: https://issues.apache.org/jira/browse/HDFS-14272
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
> Environment: CDH6.1 (Hadoop 3.0.x) + Consistency Reads from Standby + 
> SSL + Kerberos + RPC encryption
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> It is typical for integration tests to create some files and then check their 
> existence. For example, like the following simple bash script:
> {code:java}
> # hdfs dfs -touchz /tmp/abc
> # hdfs dfs -ls /tmp/abc
> {code}
> The test executes the HDFS bash commands sequentially, but it may fail with 
> Consistent Standby Read because the -ls does not find the file.
> Analysis: the second bash command, while launched sequentially after the 
> first one, is not aware of the state id returned from the first bash command. 
> So ObserverNode wouldn't wait for the edits to get propagated, and thus 
> fails.
> I've got a cluster where the Observer has tens of seconds of RPC latency, and 
> this becomes very annoying. (I am still trying to figure out why this 
> Observer has such a long RPC latency. But that's another story.)
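
By contrast, a single long-lived client like the following sketch would not hit 
this, since both calls go through one client instance and (with consistent reads 
from standby) share its last-seen state id; the state-id bookkeeping itself 
happens inside the HDFS client:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SequentialCheck {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path("/tmp/abc");
    fs.create(p).close();             // the client remembers the returned state id
    System.out.println(fs.exists(p)); // same client, so the Observer waits to catch up
  }
}
{code}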



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14269) [SBN read] Observer node switches back to Standby after restart

2019-02-12 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766674#comment-16766674
 ] 

Wei-Chiu Chuang commented on HDFS-14269:


Ah, thanks. You're right. In my environment, Cloudera Manager takes care of the 
transitioning for the typical HA scenario, so I forgot this is the standard 
startup procedure.

> [SBN read] Observer node switches back to Standby after restart
> ---
>
> Key: HDFS-14269
> URL: https://issues.apache.org/jira/browse/HDFS-14269
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> The Observer state is not persistent. Once it restarts it becomes a Standby 
> node again. Since it does not participate in NameNode failover for now, 
> should we assume an Observer node is always an Observer node? This state 
> should be persisted somewhere, like in ZooKeeper.
> CC: [~csun] [~shv]. I'd like to get input from you. Have you discussed this 
> before?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14162) Balancer should work with ObserverNode

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766671#comment-16766671
 ] 

Hadoop QA commented on HDFS-14162:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
53s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 51s{color} | {color:orange} root: The patch generated 8 new + 30 unchanged - 
10 fixed = 38 total (was 40) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 53s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
52s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
53s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
52s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}193m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.web.TestWebHDFS |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14162 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958462/HDFS-14162.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 479c37b9b561 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 

[jira] [Commented] (HDDS-972) Add support for configuring multiple OMs

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1673#comment-1673
 ] 

Hadoop QA commented on HDDS-972:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} root: The patch generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 52s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m  5s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-972 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958475/HDDS-972.007.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 6093399d6b81 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 3dc2523 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2245/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2245/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2245/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2245/testReport/ |
| Max. process+thread count | 1309 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2245/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add support for configuring multiple OMs
> 
>
> Key: HDDS-972
> URL: https://issues.apache.org/jira/browse/HDDS-972
> Project: Hadoop Distributed Data Store
>  Issue Type: 

[jira] [Assigned] (HDFS-14270) [SBN Read] StateId and TransactionId not present in Trace level logging

2019-02-12 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta reassigned HDFS-14270:
-

Assignee: Shweta

> [SBN Read] StateId and TransactionId not present in Trace level logging
> --
>
> Key: HDFS-14270
> URL: https://issues.apache.org/jira/browse/HDFS-14270
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Shweta
>Assignee: Shweta
>Priority: Trivial
>
> While running the command "hdfs --loglevel TRACE dfs -ls /" it was seen that 
> stateId and TransactionId do not appear in the logs. How does one see the 
> stateId and TransactionId in the logs? Is there a different approach?
> CC: [~jojochuang], [~csun], [~shv]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14270) [SBN Read] StateId and TransactionId not present in Trace level logging

2019-02-12 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1674#comment-1674
 ] 

Shweta commented on HDFS-14270:
---

[~shv] Sure, will assign it to myself.

> [SBN Read] StateId and TransactionId not present in Trace level logging
> --
>
> Key: HDFS-14270
> URL: https://issues.apache.org/jira/browse/HDFS-14270
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Shweta
>Priority: Trivial
>
> While running the command "hdfs --loglevel TRACE dfs -ls /" it was seen that 
> stateId and TransactionId do not appear in the logs. How does one see the 
> stateId and TransactionId in the logs? Is there a different approach?
> CC: [~jojochuang], [~csun], [~shv]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14269) [SBN read] Observer node switches back to Standby after restart

2019-02-12 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766651#comment-16766651
 ] 

Konstantin Shvachko commented on HDFS-14269:


Hey, this is the standard startup procedure. If you restart the Active NN it 
will start as Standby, and you then call transitionToActive to make it active 
again. With ZK failover this is automated. HDFS-14130 aims to exclude the 
Observer from voting.
If you want to skip transitioning to Observer manually, you can start the NN 
with {{StartupOption.OBSERVER}}. Hope this works for you.
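
For reference, that would look something like the following (assuming 
{{StartupOption.OBSERVER}} is wired to an {{-observer}} startup flag, like the 
other NameNode startup options):
{noformat}
hdfs namenode -observer
{noformat}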


> [SBN read] Observer node switches back to Standby after restart
> ---
>
> Key: HDFS-14269
> URL: https://issues.apache.org/jira/browse/HDFS-14269
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> The Observer state is not persistent. Once it restarts it becomes a Standby 
> node again. Since it does not participate in NameNode failover for now, 
> should we assume an Observer node is always an Observer node? This state 
> should be persisted somewhere, like in ZooKeeper.
> CC: [~csun] [~shv]. I'd like to get input from you. Have you discussed this 
> before?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14270) [SBN Read] StateId and TransactionId not present in Trace level logging

2019-02-12 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766640#comment-16766640
 ] 

Konstantin Shvachko commented on HDFS-14270:


It shouldn't be a problem to add the stateId to trace-level log messages for RPCs.
[~shwetayakkali], do you want to contribute?
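
A rough sketch of what that could look like (simplified; the real change would 
go into the RPC server path, and these names are illustrative):
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Illustrative only: emit both ids so "--loglevel TRACE" surfaces them. */
class StateIdTraceSketch {
  private static final Logger LOG = LoggerFactory.getLogger(StateIdTraceSketch.class);

  void logCall(String call, long clientStateId, long serverTxId) {
    if (LOG.isTraceEnabled()) {
      LOG.trace("Processing call {}: clientStateId={}, serverTxId={}",
          call, clientStateId, serverTxId);
    }
  }
}
{code}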

> [SBN Read] StateId and TransactionId not present in Trace level logging
> --
>
> Key: HDFS-14270
> URL: https://issues.apache.org/jira/browse/HDFS-14270
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Shweta
>Priority: Trivial
>
> While running the command "hdfs --loglevel TRACE dfs -ls /" it was seen that 
> stateId and TransactionId do not appear in the logs. How does one see the 
> stateId and TransactionId in the logs? Is there a different approach?
> CC: [~jojochuang], [~csun], [~shv]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14268) RBF: Fix the location of the DNs in getDatanodeReport()

2019-02-12 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14268:
---
Attachment: HDFS-14268-HDFS-13891.002.patch

> RBF: Fix the location of the DNs in getDatanodeReport()
> ---
>
> Key: HDFS-14268
> URL: https://issues.apache.org/jira/browse/HDFS-14268
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14268-HDFS-13891.000.patch, 
> HDFS-14268-HDFS-13891.001.patch, HDFS-14268-HDFS-13891.002.patch
>
>
> When getting all the DNs in the federation, the Router queries each of the 
> subclusters and aggregates the results, assigning the subcluster id to the 
> location. This query uses a {{HashSet}}, which yields a "random" order for 
> the results.
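
For illustration, the general direction of such a fix is to aggregate into an 
order-preserving structure (a sketch with illustrative names, not the attached 
patch):
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative only: keep a deterministic order for the aggregated DNs. */
class DatanodeReportAggregation {
  // LinkedHashMap preserves insertion order (per-subcluster query order),
  // unlike the HashSet mentioned above, so repeated getDatanodeReport()
  // calls return locations in a stable order.
  private final Map<String, String> dnToSubcluster = new LinkedHashMap<>();

  void add(String datanodeId, String subclusterId) {
    dnToSubcluster.putIfAbsent(datanodeId, subclusterId);
  }
}
{code}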



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14272) [SBN read] HDFS command line tools do not guarantee consistency

2019-02-12 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-14272:
--

 Summary: [SBN read] HDFS command line tools do not guarantee 
consistency
 Key: HDFS-14272
 URL: https://issues.apache.org/jira/browse/HDFS-14272
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
 Environment: CDH6.1 (Hadoop 3.0.x) + Consistency Reads from Standby + 
SSL + Kerberos + RPC encryption
Reporter: Wei-Chiu Chuang


It is typical for integration tests to create some files and then check its 
existence. For example, like the following simple bash script:
{code:java}
# hdfs dfs -touchz /tmp/abc
# hdfs dfs -ls /tmp/abc
{code}
The test execute HDFS bash command sequentially, but it may fail with 
Consistent Standby Read because the -ls may not find the file.

Analysis: the second bash command, while launched sequentially after the first 
one, is not aware of the state id returned from the first bash command. So 
ObserverNode wouldn't wait for the edits to get propagated, and thus fails.

I've got a cluster where the Observer has tens of seconds of RPC latency, and 
this becomes very annoying. (I am still trying to figure out why this Observer 
has such a long RPC latency. But that's another story.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13358) RBF: Support for Delegation Token (RPC)

2019-02-12 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766634#comment-16766634
 ] 

CR Hota commented on HDFS-13358:


[~brahmareddy] Gentle ping! 

[~surendrasingh] Could you please help take a look at this patch and share your 
thoughts?

> RBF: Support for Delegation Token (RPC)
> ---
>
> Key: HDFS-13358
> URL: https://issues.apache.org/jira/browse/HDFS-13358
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Sherwood Zheng
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13358-HDFS-13891.001.patch, 
> HDFS-13358-HDFS-13891.002.patch, HDFS-13358-HDFS-13891.003.patch, 
> HDFS-13358-HDFS-13891.004.patch, HDFS-13358-HDFS-13891.005.patch, 
> HDFS-13358-HDFS-13891.006.patch, HDFS-13358-HDFS-13891.007.patch, 
> HDFS-13358-HDFS-13891.008.patch, RBF_ Delegation token design.pdf
>
>
> HDFS Router should support issuing / managing HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14272) [SBN read] HDFS command line tools do not guarantee consistency

2019-02-12 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14272:
---
Description: 
It is typical for integration tests to create some files and then check their 
existence. For example, like the following simple bash script:
{code:java}
# hdfs dfs -touchz /tmp/abc
# hdfs dfs -ls /tmp/abc
{code}
The test executes the HDFS bash commands sequentially, but it may fail with 
Consistent Standby Read because the -ls does not find the file.

Analysis: the second bash command, while launched sequentially after the first 
one, is not aware of the state id returned from the first bash command. So 
ObserverNode wouldn't wait for the edits to get propagated, and thus fails.

I've got a cluster where the Observer has tens of seconds of RPC latency, and 
this becomes very annoying. (I am still trying to figure out why this Observer 
has such a long RPC latency. But that's another story.)

  was:
It is typical for integration tests to create some files and then check its 
existence. For example, like the following simple bash script:
{code:java}
# hdfs dfs -touchz /tmp/abc
# hdfs dfs -ls /tmp/abc
{code}
The test execute HDFS bash command sequentially, but it may fail with 
Consistent Standby Read because the -ls may not find the file.

Analysis: the second bash command, while launched sequentially after the first 
one, is not aware of the state id returned from the first bash command. So 
ObserverNode wouldn't wait for the edits to get propagated, and thus fails.

I've got a cluster where the Observer has tens of seconds of RPC latency, and 
this becomes very annoying. (I am still trying to figure out why this Observer 
has such a long RPC latency. But that's another story.)


> [SBN read] HDFS command line tools do not guarantee consistency
> -
>
> Key: HDFS-14272
> URL: https://issues.apache.org/jira/browse/HDFS-14272
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
> Environment: CDH6.1 (Hadoop 3.0.x) + Consistency Reads from Standby + 
> SSL + Kerberos + RPC encryption
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> It is typical for integration tests to create some files and then check their 
> existence. For example, like the following simple bash script:
> {code:java}
> # hdfs dfs -touchz /tmp/abc
> # hdfs dfs -ls /tmp/abc
> {code}
> The test executes the HDFS bash commands sequentially, but it may fail with 
> Consistent Standby Read because the -ls does not find the file.
> Analysis: the second bash command, while launched sequentially after the 
> first one, is not aware of the state id returned from the first bash command. 
> So ObserverNode wouldn't wait for the edits to get propagated, and thus 
> fails.
> I've got a cluster where the Observer has tens of seconds of RPC latency, and 
> this becomes very annoying. (I am still trying to figure out why this 
> Observer has such a long RPC latency. But that's another story.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14268) RBF: Fix the location of the DNs in getDatanodeReport()

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766631#comment-16766631
 ] 

Hadoop QA commented on HDFS-14268:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 6s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m  4s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRpc |
|   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14268 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958467/HDFS-14268-HDFS-13891.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ae07fa33059f 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 36aca66 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26200/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26200/testReport/ |
| Max. process+thread count | 1733 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console 

[jira] [Updated] (HDFS-14270) [SBN Read] StateId and TransactionId not present in Trace level logging

2019-02-12 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-14270:
--
Summary: [SBN Read] StateId and TransactionId not present in Trace level 
logging  (was: StateId and TransactionId not present in Trace level logging)

> [SBN Read] StateId and TransactionId not present in Trace level logging
> --
>
> Key: HDFS-14270
> URL: https://issues.apache.org/jira/browse/HDFS-14270
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Shweta
>Priority: Trivial
>
> While running the command "hdfs --loglevel TRACE dfs -ls /" it was seen that 
> stateId and TransactionId do not appear in the logs. How does one see the 
> stateId and TransactionId in the logs? Is there a different approach?
> CC: [~jojochuang], [~csun], [~shv]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-936) Need a tool to map containers to ozone objects

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766623#comment-16766623
 ] 

Hadoop QA commented on HDDS-936:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDDS-936 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-936 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958474/HDDS-936.07.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2244/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Need a tool to map containers to ozone objects
> --
>
> Key: HDDS-936
> URL: https://issues.apache.org/jira/browse/HDDS-936
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Jitendra Nath Pandey
>Assignee: sarun singla
>Priority: Major
> Attachments: HDDS-936.00.patch, HDDS-936.01.patch, HDDS-936.02.patch, 
> HDDS-936.03.patch, HDDS-936.04.patch, HDDS-936.05.patch, HDDS-936.06.patch, 
> HDDS-936.07.patch
>
>
> Ozone should have a tool to get the list of objects that a container contains. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-936) Need a tool to map containers to ozone objects

2019-02-12 Thread sarun singla (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sarun singla updated HDDS-936:
--
Attachment: HDDS-936.07.patch

> Need a tool to map containers to ozone objects
> --
>
> Key: HDDS-936
> URL: https://issues.apache.org/jira/browse/HDDS-936
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Jitendra Nath Pandey
>Assignee: sarun singla
>Priority: Major
> Attachments: HDDS-936.00.patch, HDDS-936.01.patch, HDDS-936.02.patch, 
> HDDS-936.03.patch, HDDS-936.04.patch, HDDS-936.05.patch, HDDS-936.06.patch, 
> HDDS-936.07.patch
>
>
> Ozone should have a tool to get the list of objects that a container contains. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-972) Add support for configuring multiple OMs

2019-02-12 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-972:

Attachment: HDDS-972.007.patch

> Add support for configuring multiple OMs
> 
>
> Key: HDDS-972
> URL: https://issues.apache.org/jira/browse/HDDS-972
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-972.000.patch, HDDS-972.001.patch, 
> HDDS-972.002.patch, HDDS-972.003.patch, HDDS-972.004.patch, 
> HDDS-972.005.patch, HDDS-972.006.patch, HDDS-972.007.patch
>
>
> For OM HA, we would need to run multiple (at least 3) OM services so that we 
> can form a replicated Ratis ring of OMs. This Jira aims to add support for 
> configuring multiple OMs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14271) [SBN read] StandbyException is logged if Observer is the first NameNode

2019-02-12 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-14271:
--

 Summary: [SBN read] StandbyException is logged if Observer is the 
first NameNode
 Key: HDFS-14271
 URL: https://issues.apache.org/jira/browse/HDFS-14271
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 3.3.0
Reporter: Wei-Chiu Chuang


If I transition the first NameNode into Observer state and then create a 
file from the command line, it prints the following StandbyException log message 
as if the command had failed, even though it actually completed successfully:
{noformat}
[root@weichiu-sbsr-1 ~]# hdfs dfs -touchz /tmp/abf
19/02/12 16:35:17 INFO retry.RetryInvocationHandler: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): 
Operation category WRITE is not supported in state observer. Visit 
https://s.apache.org/sbnn-error
at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:98)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1987)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1424)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:762)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:458)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:530)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:918)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:853)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2782)
, while invoking $Proxy4.create over 
[weichiu-sbsr-1.gce.cloudera.com/172.31.121.145:8020,weichiu-sbsr-2.gce.cloudera.com/172.31.121.140:8020].
 Trying to failover immediately.
{noformat}

This is unlike the case when the first NameNode is the Standby, where this 
StandbyException is suppressed.
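
Until similar suppression exists for the Observer-first case, one client-side 
workaround is to mute the retry handler's INFO logging (a sketch assuming the 
default log4j 1.x backend; whether muting is actually desirable is a separate 
question):
{code:java}
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class QuietRetryLogging {
  public static void main(String[] args) {
    // Silence the INFO-level failover messages shown above; WARN and
    // above from RetryInvocationHandler still get through.
    Logger.getLogger("org.apache.hadoop.io.retry.RetryInvocationHandler")
        .setLevel(Level.WARN);
  }
}
{code}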



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-936) Need a tool to map containers to ozone objects

2019-02-12 Thread sarun singla (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766612#comment-16766612
 ] 

sarun singla commented on HDDS-936:
---

[~nandakumar131] [~elek] Added the updated patch after addressing the review 
recommendations. [~bharatviswa] Thanks for your help.

> Need a tool to map containers to ozone objects
> --
>
> Key: HDDS-936
> URL: https://issues.apache.org/jira/browse/HDDS-936
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Jitendra Nath Pandey
>Assignee: sarun singla
>Priority: Major
> Attachments: HDDS-936.00.patch, HDDS-936.01.patch, HDDS-936.02.patch, 
> HDDS-936.03.patch, HDDS-936.04.patch, HDDS-936.05.patch, HDDS-936.06.patch, 
> HDDS-936.07.patch
>
>
> Ozone should have a tool to get the list of objects that a container contains. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-972) Add support for configuring multiple OMs

2019-02-12 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766609#comment-16766609
 ] 

Hanisha Koneru commented on HDDS-972:
-

[~bharatviswa], sorry I missed addressing your comments before.
{quote} # In loginUser, we call InetSocketAddress socAddr = 
OmUtils.getOmAddress(conf), but getOMAddress is still not modified to address 
the HA scenario. I think we should update it here.{quote}
Done.
{quote}One question in MiniOzoneHAClusterImpl: we are generating ports randomly 

basePort = 1 + RANDOM.nextInt(1000) * 4; But here we are not checking 
whether the ports are free or not. Otherwise, when we start MiniOzoneHAClusterImpl 
we would get an error during start, right?
{quote}
In case the ports are not free, we catch the BindException and try assigning 
new ports again. 
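
For illustration, the retry-on-BindException pattern described above looks 
roughly like this (names and the base-port arithmetic are illustrative, not 
MiniOzoneHAClusterImpl itself):
{code:java}
import java.io.IOException;
import java.net.BindException;
import java.util.Random;

/** Illustrative only: retry with a fresh random base port when bind fails. */
class RandomPortRetrySketch {
  private static final Random RANDOM = new Random();
  private static final int MAX_ATTEMPTS = 10;

  interface ClusterStarter {
    void startOn(int basePort) throws IOException; // may throw BindException
  }

  static int start(ClusterStarter cluster) throws IOException {
    for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
      int basePort = 10000 + RANDOM.nextInt(1000) * 4; // illustrative base offset
      try {
        cluster.startOn(basePort);
        return basePort;
      } catch (BindException e) {
        // Ports were taken: loop and try a new random base port.
      }
    }
    throw new IOException("Could not find free ports after " + MAX_ATTEMPTS + " attempts");
  }
}
{code}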

> Add support for configuring multiple OMs
> 
>
> Key: HDDS-972
> URL: https://issues.apache.org/jira/browse/HDDS-972
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-972.000.patch, HDDS-972.001.patch, 
> HDDS-972.002.patch, HDDS-972.003.patch, HDDS-972.004.patch, 
> HDDS-972.005.patch, HDDS-972.006.patch
>
>
> For OM HA, we would need to run multiple (at least 3) OM services so that we 
> can form a replicated Ratis ring of OMs. This Jira aims to add support for 
> configuring multiple OMs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-972) Add support for configuring multiple OMs

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766608#comment-16766608
 ] 

Hadoop QA commented on HDDS-972:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDDS-972 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-972 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958473/HDDS-972.006.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2243/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add support for configuring multiple OMs
> 
>
> Key: HDDS-972
> URL: https://issues.apache.org/jira/browse/HDDS-972
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-972.000.patch, HDDS-972.001.patch, 
> HDDS-972.002.patch, HDDS-972.003.patch, HDDS-972.004.patch, 
> HDDS-972.005.patch, HDDS-972.006.patch
>
>
> For OM HA, we would need to run multiple (at least 3) OM services so that we 
> can form a replicated Ratis ring of OMs. This Jira aims to add support for 
> configuring multiple OMs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-972) Add support for configuring multiple OMs

2019-02-12 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-972:

Attachment: HDDS-972.006.patch

> Add support for configuring multiple OMs
> 
>
> Key: HDDS-972
> URL: https://issues.apache.org/jira/browse/HDDS-972
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-972.000.patch, HDDS-972.001.patch, 
> HDDS-972.002.patch, HDDS-972.003.patch, HDDS-972.004.patch, 
> HDDS-972.005.patch, HDDS-972.006.patch
>
>
> For OM HA, we would need to run multiple (at least 3) OM services so that we 
> can form a replicated Ratis ring of OMs. This Jira aims to add support for 
> configuring multiple OMs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14267) Add test_libhdfs_ops to libhdfs tests, mark libhdfs_read/write.c as examples

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766597#comment-16766597
 ] 

Hadoop QA commented on HDFS-14267:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
13s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 12 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 31s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}128m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed CTEST tests | test_test_libhdfs_ops_hdfs_static |
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14267 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958447/HDFS-14267.001.patch |
| Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  
shellcheck  shelldocs  |
| uname | Linux 60d8de999c61 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3dc2523 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| shellcheck | v0.4.6 |
| whitespace | 

[jira] [Commented] (HDFS-14268) RBF: Fix the location of the DNs in getDatanodeReport()

2019-02-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766592#comment-16766592
 ] 

Íñigo Goiri commented on HDFS-14268:


Now the problem is that we may hit the issue described in HDFS-14226.
[~ayushtkn], thoughts on leaving this flaky for now and getting it fixed in 
HDFS-14226?

> RBF: Fix the location of the DNs in getDatanodeReport()
> ---
>
> Key: HDFS-14268
> URL: https://issues.apache.org/jira/browse/HDFS-14268
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14268-HDFS-13891.000.patch, 
> HDFS-14268-HDFS-13891.001.patch
>
>
> When getting all the DNs in the federation, the Router queries each of the 
> subclusters and aggregates them assigning the subcluster id to the location. 
> This query uses a {{HashSet}} which provides a "random" order for the results.
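To make the ordering problem concrete, a self-contained JDK-only demonstration 
(not Router code):
{code:java}
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

public class SetOrderDemo {
  public static void main(String[] args) {
    Set<String> hash = new HashSet<>();
    Set<String> linked = new LinkedHashSet<>();
    for (String ns : new String[] {"ns1", "ns0", "ns2"}) {
      hash.add(ns);
      linked.add(ns);
    }
    // HashSet iteration order depends on hash buckets, not insertion order,
    // so aggregated results come back in an effectively "random" order.
    System.out.println(hash);
    // LinkedHashSet preserves insertion order: [ns1, ns0, ns2]
    System.out.println(linked);
  }
}
{code}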



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14226) RBF: Setting attributes should set on all subclusters' directories.

2019-02-12 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-14226:
--

Assignee: Ayush Saxena

> RBF: Setting attributes should set on all subclusters' directories.
> ---
>
> Key: HDFS-14226
> URL: https://issues.apache.org/jira/browse/HDFS-14226
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14226-HDFS-13891-01.patch, 
> HDFS-14226-HDFS-13891-02.patch, HDFS-14226-HDFS-13891-03.patch, 
> HDFS-14226-HDFS-13891-WIP1.patch
>
>
> Only one subcluster is set now.
> {noformat}
> // create a mount point of multiple subclusters
> hdfs dfsrouteradmin -add /all_data ns1 /data1
> hdfs dfsrouteradmin -add /all_data ns2 /data2
> hdfs ec -Dfs.defaultFS=hdfs://router: -setPolicy -path /all_data -policy 
> RS-3-2-1024k
> Set RS-3-2-1024k erasure coding policy on /all_data
> hdfs ec -Dfs.defaultFS=hdfs://router: -getPolicy -path /all_data
> RS-3-2-1024k
> hdfs ec -Dfs.defaultFS=hdfs://ns1-namenode:8020 -getPolicy -path /data1
> RS-3-2-1024k
> hdfs ec -Dfs.defaultFS=hdfs://ns2-namenode:8020 -getPolicy -path /data2
> The erasure coding policy of /data2 is unspecified
> {noformat}
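For orientation, a hedged sketch of the fix direction the summary implies: 
resolve every subcluster location for the path and apply the policy to each, 
instead of only the first. The method names mirror the Router RPC client's 
style, but the exact signatures here are assumptions:
{code:java}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
import org.apache.hadoop.hdfs.server.federation.router.RemoteMethod;
import org.apache.hadoop.hdfs.server.federation.router.RemoteParam;
import org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient;
import org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer;

public class SetPolicyOnAllSubclusters {
  // Sketch: fan the call out to every resolved location for the path.
  // Signatures assumed for illustration; not the attached patch.
  static void setPolicy(RouterRpcServer rpcServer, RouterRpcClient rpcClient,
      String src, String ecPolicyName) throws IOException {
    List<RemoteLocation> locations = rpcServer.getLocationsForPath(src, true);
    RemoteMethod method = new RemoteMethod("setErasureCodingPolicy",
        new Class<?>[] {String.class, String.class},
        new RemoteParam(), ecPolicyName);
    rpcClient.invokeConcurrent(locations, method);
  }
}
{code}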



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14268) RBF: Fix the location of the DNs in getDatanodeReport()

2019-02-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766591#comment-16766591
 ] 

Íñigo Goiri commented on HDFS-14268:


With [^HDFS-14268-HDFS-13891.000.patch], the DNs no longer join the two 
subclusters.
For this reason, EC is a little tight on DataNodes.
I increased the number of DNs in [^HDFS-14268-HDFS-13891.001.patch].
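For context, a hedged sketch of what bumping the DataNode count in a test 
cluster looks like; {{MiniDFSCluster}} is the real test harness, but the 
counts here are illustrative, not the actual patch:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class MoreDnsForEc {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // An RS-3-2 policy needs at least 5 DNs and RS-6-3 needs 9; once DNs are
    // no longer shared across subclusters, each subcluster needs its own set.
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(6) // illustrative per-subcluster count
        .build();
    try {
      cluster.waitActive();
    } finally {
      cluster.shutdown();
    }
  }
}
{code}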

> RBF: Fix the location of the DNs in getDatanodeReport()
> ---
>
> Key: HDFS-14268
> URL: https://issues.apache.org/jira/browse/HDFS-14268
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14268-HDFS-13891.000.patch, 
> HDFS-14268-HDFS-13891.001.patch
>
>
> When getting all the DNs in the federation, the Router queries each of the 
> subclusters and aggregates them assigning the subcluster id to the location. 
> This query uses a {{HashSet}} which provides a "random" order for the results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14268) RBF: Fix the location of the DNs in getDatanodeReport()

2019-02-12 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14268:
---
Attachment: HDFS-14268-HDFS-13891.001.patch

> RBF: Fix the location of the DNs in getDatanodeReport()
> ---
>
> Key: HDFS-14268
> URL: https://issues.apache.org/jira/browse/HDFS-14268
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14268-HDFS-13891.000.patch, 
> HDFS-14268-HDFS-13891.001.patch
>
>
> When getting all the DNs in the federation, the Router queries each of the 
> subclusters and aggregates them assigning the subcluster id to the location. 
> This query uses a {{HashSet}} which provides a "random" order for the results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14270) StateId and TransactionId not present in Trace level logging

2019-02-12 Thread Shweta (JIRA)
Shweta created HDFS-14270:
-

 Summary: StateId and TransactionId not present in Trace level 
logging
 Key: HDFS-14270
 URL: https://issues.apache.org/jira/browse/HDFS-14270
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Shweta


While running the command "hdfs --loglevel TRACE dfs -ls /" it was seen that 
stateId and TransactionId do not appear in the logs. 
How does one see the 
stateId and TransactionId in the logs? Is there a different approach?
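Worth noting: as far as I understand, {{--loglevel}} only raises the root 
logger for the command. A hedged sketch of raising a single logger to TRACE 
instead; the logger name is a guess at where such per-call header fields would 
surface, if they are logged at all:
{code:java}
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class PerLoggerTrace {
  public static void main(String[] args) {
    // Guessed logger name: the protobuf RPC engine would be the natural place
    // for per-call header fields such as stateId to show up, if traced at all.
    Logger.getLogger("org.apache.hadoop.ipc.ProtobufRpcEngine")
        .setLevel(Level.TRACE);
  }
}
{code}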



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14270) StateId and TransactionId not present in Trace level logging

2019-02-12 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-14270:
--
Description: 
While running the command "hdfs --loglevel TRACE dfs -ls /" it was seen that 
stateId and TransactionId do not appear in the logs. 
How does one see the 
stateId and TransactionId in the logs? Is there a different approach?

CC: [~jojochuang], [~csun], [~shv]

  was:While running the command "hdfs --loglevel TRACE dfs -ls /" it was seen 
that stateId and TransactionId do not appear in the logs. 
How does one see the 
stateId and TransactionId in the logs? Is there a different approach?


> StateId and TransactionId not present in Trace level logging
> ---
>
> Key: HDFS-14270
> URL: https://issues.apache.org/jira/browse/HDFS-14270
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Shweta
>Priority: Trivial
>
> While running the command "hdfs --loglevel TRACE dfs -ls /" it was seen that 
> stateId and TransactionId do not appear in the logs. 
How does one see the 
> stateId and TransactionId in the logs? Is there a different approach?
> CC: [~jojochuang], [~csun], [~shv]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14081) hdfs dfsadmin -metasave metasave_test results NPE

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766564#comment-16766564
 ] 

Hadoop QA commented on HDFS-14081:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS 
|
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14081 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958446/HDFS-14081.007.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 89598363d45e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7806403 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26196/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26196/testReport/ |
| Max. process+thread count | 3143 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Created] (HDFS-14269) [SBN read] Observer node switches back to Standby after restart

2019-02-12 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-14269:
--

 Summary: [SBN read] Observer node switches back to Standby after 
restart
 Key: HDFS-14269
 URL: https://issues.apache.org/jira/browse/HDFS-14269
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.3.0
Reporter: Wei-Chiu Chuang


The Observer state is not persistent. Once it restarts, it becomes a Standby 
node again. Since it does not participate in NameNode failover for now, should 
we assume an Observer node is always an Observer node? This state should be 
persisted somewhere, like in ZooKeeper.

CC: [~csun] [~shv]. I'd like to get your input. Have you discussed this before?
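For discussion, a minimal sketch of persisting such a flag in ZooKeeper, 
assuming a made-up znode path and a raw ZooKeeper client rather than any 
existing Hadoop API:
{code:java}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ObserverStateStore {
  private final ZooKeeper zk;
  private final String path; // e.g. "/hadoop-ha/mycluster/observers/nn3" (made up)

  public ObserverStateStore(ZooKeeper zk, String path) {
    this.zk = zk;
    this.path = path;
  }

  // Persistently mark this NameNode as an Observer.
  public void markObserver() throws Exception {
    if (zk.exists(path, false) == null) {
      zk.create(path, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE,
          CreateMode.PERSISTENT);
    }
  }

  // On restart, check the flag to rejoin as Observer instead of Standby.
  public boolean isObserver() throws Exception {
    return zk.exists(path, false) != null;
  }
}
{code}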



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14268) RBF: Fix the location of the DNs in getDatanodeReport()

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766562#comment-16766562
 ] 

Hadoop QA commented on HDFS-14268:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
16s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 43s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14268 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958448/HDFS-14268-HDFS-13891.000.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 291b30c65054 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 36aca66 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26198/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26198/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-14162) Balancer should work with ObserverNode

2019-02-12 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766553#comment-16766553
 ] 

Erik Krogen commented on HDFS-14162:


Thanks [~shv]. I just attached the v002 patch with the pluralization and other 
cleanup (license, Javadoc, etc.). I think it should be ready for a thorough 
review now.

> Balancer should work with ObserverNode
> --
>
> Key: HDFS-14162
> URL: https://issues.apache.org/jira/browse/HDFS-14162
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-14162-HDFS-12943.wip0.patch, HDFS-14162.000.patch, 
> HDFS-14162.001.patch, HDFS-14162.002.patch, testBalancerWithObserver-3.patch, 
> testBalancerWithObserver.patch
>
>
> Balancer provides a substantial RPC load on NameNode. It would be good to 
> divert Balancer RPCs {{getBlocks()}}, etc. to ObserverNode. The main problem 
> is that Balancer uses {{NamenodeProtocol}}, while ORPP currently supports 
> only {{ClientProtocol}}.
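For orientation, a hedged sketch of the end state for the Balancer, assuming 
ORPP were made generic over the protocol; the proxy-provider key usage is 
standard, while routing {{NamenodeProtocol}} through ORPP is exactly the open 
question of this JIRA:
{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.NameNodeProxies;
import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol;

public class BalancerObserverSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumption: ORPP extended beyond ClientProtocol, wired in as usual.
    conf.set("dfs.client.failover.proxy.provider.mycluster",
        "org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider");
    NamenodeProtocol nn = NameNodeProxies.createProxy(conf,
        URI.create("hdfs://mycluster"), NamenodeProtocol.class).getProxy();
    // A Balancer-style read that could then be served by an Observer:
    // nn.getBlocks(datanode, size);
    System.out.println("Proxy created: " + nn);
  }
}
{code}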



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14162) Balancer should work with ObserverNode

2019-02-12 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-14162:
---
Attachment: HDFS-14162.002.patch

> Balancer should work with ObserverNode
> --
>
> Key: HDFS-14162
> URL: https://issues.apache.org/jira/browse/HDFS-14162
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-14162-HDFS-12943.wip0.patch, HDFS-14162.000.patch, 
> HDFS-14162.001.patch, HDFS-14162.002.patch, testBalancerWithObserver-3.patch, 
> testBalancerWithObserver.patch
>
>
> Balancer provides a substantial RPC load on NameNode. It would be good to 
> divert Balancer RPCs {{getBlocks()}}, etc. to ObserverNode. The main problem 
> is that Balancer uses {{NamenodeProtocol}}, while ORPP currently supports 
> only {{ClientProtocol}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766537#comment-16766537
 ] 

Hadoop QA commented on HDFS-13209:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 13s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
1030 unchanged - 1 fixed = 1031 total (was 1031) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
42s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
22s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}203m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13209 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958439/HDFS-13209-05.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
| uname 

[jira] [Resolved] (HDDS-1063) Implement OM init in secure cluster

2019-02-12 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-1063.
--
Resolution: Duplicate

> Implement OM init in secure cluster
> ---
>
> Key: HDDS-1063
> URL: https://issues.apache.org/jira/browse/HDDS-1063
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> Implement OM init in secure cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-581) Bootstrap DN with private/public key pair

2019-02-12 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-581.
-
Resolution: Duplicate

> Bootstrap DN with private/public key pair
> -
>
> Key: HDDS-581
> URL: https://issues.apache.org/jira/browse/HDDS-581
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-581-HDDS-4.00.patch
>
>
> This will create public/private key pair for HDDS datanode if there isn't one 
> available during secure dn startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.

2019-02-12 Thread Virajith Jalaparti (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766502#comment-16766502
 ] 

Virajith Jalaparti commented on HDFS-13794:
---

Fixed the checkstyle issue and committed [^HDFS-13794-HDFS-12090.006.patch] to 
the HDFS-12090 branch. Thanks for working on this, [~ehiggs].

> [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.
> --
>
> Key: HDFS-13794
> URL: https://issues.apache.org/jira/browse/HDFS-13794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13794-HDFS-12090.001.patch, 
> HDFS-13794-HDFS-12090.002.patch, HDFS-13794-HDFS-12090.003.patch, 
> HDFS-13794-HDFS-12090.004.patch, HDFS-13794-HDFS-12090.005.patch, 
> HDFS-13794-HDFS-12090.006.patch
>
>
> When updating the BlockAliasMap we may need to deal with deleted blocks. 
> Otherwise the BlockAliasMap will grow indefinitely(!).
> Therefore, the BlockAliasMap.Writer needs a method for removing blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.

2019-02-12 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13794:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.
> --
>
> Key: HDFS-13794
> URL: https://issues.apache.org/jira/browse/HDFS-13794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13794-HDFS-12090.001.patch, 
> HDFS-13794-HDFS-12090.002.patch, HDFS-13794-HDFS-12090.003.patch, 
> HDFS-13794-HDFS-12090.004.patch, HDFS-13794-HDFS-12090.005.patch, 
> HDFS-13794-HDFS-12090.006.patch
>
>
> When updating the BlockAliasMap we may need to deal with deleted blocks. 
> Otherwise the BlockAliasMap will grow indefinitely(!).
> Therefore, the BlockAliasMap.Writer needs a method for removing blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14268) RBF: Fix the location of the DNs in getDatanodeReport()

2019-02-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766472#comment-16766472
 ] 

Íñigo Goiri commented on HDFS-14268:


In addition to fixing the random order issue, I also:
* Changed the list from {{LinkedList}} to {{ArrayList}}.
* Used the Java 8 lambda syntax to avoid explicitly creating the callables (see 
the sketch below).
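A small self-contained illustration of those two changes (names are 
illustrative, not the actual Router code):
{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;

public class LambdaCallables {
  public static void main(String[] args) throws Exception {
    // ArrayList instead of LinkedList: indexed access and better locality.
    List<Callable<String>> callables = new ArrayList<>();
    for (String ns : Arrays.asList("ns0", "ns1")) {
      // A Java 8 lambda replaces an explicit anonymous Callable class.
      callables.add(() -> "datanode report for " + ns);
    }
    for (Callable<String> c : callables) {
      System.out.println(c.call());
    }
  }
}
{code}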

> RBF: Fix the location of the DNs in getDatanodeReport()
> ---
>
> Key: HDFS-14268
> URL: https://issues.apache.org/jira/browse/HDFS-14268
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14268-HDFS-13891.000.patch
>
>
> When getting all the DNs in the federation, the Router queries each of the 
> subclusters and aggregates them assigning the subcluster id to the location. 
> This query uses a {{HashSet}} which provides a "random" order for the results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14268) RBF: Fix the location of the DNs in getDatanodeReport()

2019-02-12 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14268:
---
Status: Patch Available  (was: Open)

> RBF: Fix the location of the DNs in getDatanodeReport()
> ---
>
> Key: HDFS-14268
> URL: https://issues.apache.org/jira/browse/HDFS-14268
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14268-HDFS-13891.000.patch
>
>
> When getting all the DNs in the federation, the Router queries each of the 
> subclusters and aggregates them assigning the subcluster id to the location. 
> This query uses a {{HashSet}} which provides a "random" order for the results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14268) RBF: Fix the location of the DNs in getDatanodeReport()

2019-02-12 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14268:
---
Attachment: HDFS-14268-HDFS-13891.000.patch

> RBF: Fix the location of the DNs in getDatanodeReport()
> ---
>
> Key: HDFS-14268
> URL: https://issues.apache.org/jira/browse/HDFS-14268
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14268-HDFS-13891.000.patch
>
>
> When getting all the DNs in the federation, the Router queries each of the 
> subclusters and aggregates them assigning the subcluster id to the location. 
> This query uses a {{HashSet}} which provides a "random" order for the results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14267) Add test_libhdfs_ops to libhdfs tests, mark libhdfs_read/write.c as examples

2019-02-12 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HDFS-14267:

Attachment: HDFS-14267.001.patch

> Add test_libhdfs_ops to libhdfs tests, mark libhdfs_read/write.c as examples
> 
>
> Key: HDFS-14267
> URL: https://issues.apache.org/jira/browse/HDFS-14267
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs, native, test
>Reporter: Sahil Takiar
>Priority: Major
> Attachments: HDFS-14267.001.patch
>
>
> {{test_libhdfs_ops.c}} provides test coverage for basic operations against 
> libhdfs, but currently has to be run manually (e.g. {{mvn install}} does not 
> run these tests). The goal of this patch is to add {{test_libhdfs_ops.c}} to 
> the list of tests that are automatically run for libhdfs.
> It looks like {{test_libhdfs_ops.c}} was used in conjunction with 
> {{hadoop-hdfs-project/hadoop-hdfs/src/main/native/tests/test-libhdfs.sh}} to 
> run some tests against a mini DFS cluster. Now that the 
> {{NativeMiniDfsCluster}} exists, it makes more sense to use that rather than 
> rely on an external bash script to start a mini DFS cluster.
> The {{libhdfs-tests}} directory (which contains {{test_libhdfs_ops.c}}) 
> contains two other files: {{test_libhdfs_read.c}} and 
> {{test_libhdfs_write.c}}. At some point, these files might have been used in 
> conjunction with {{test-libhdfs.sh}} to run some tests manually. However, 
> they (1) largely overlap with the test coverage provided by 
> {{test_libhdfs_ops.c}} and (2) are not designed to be run as unit tests. Thus 
> I suggest we move these two files into a new folder called 
> {{libhdfs-examples}} and use them to further document how users of libhdfs 
> can use the API. We can move {{test-libhdfs.sh}} into the examples folder as 
> well given that example files probably require the script to actually work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14267) Add test_libhdfs_ops to libhdfs tests, mark libhdfs_read/write.c as examples

2019-02-12 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HDFS-14267:

Status: Patch Available  (was: Open)

> Add test_libhdfs_ops to libhdfs tests, mark libhdfs_read/write.c as examples
> 
>
> Key: HDFS-14267
> URL: https://issues.apache.org/jira/browse/HDFS-14267
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs, native, test
>Reporter: Sahil Takiar
>Priority: Major
> Attachments: HDFS-14267.001.patch
>
>
> {{test_libhdfs_ops.c}} provides test coverage for basic operations against 
> libhdfs, but currently has to be run manually (e.g. {{mvn install}} does not 
> run these tests). The goal of this patch is to add {{test_libhdfs_ops.c}} to 
> the list of tests that are automatically run for libhdfs.
> It looks like {{test_libhdfs_ops.c}} was used in conjunction with 
> {{hadoop-hdfs-project/hadoop-hdfs/src/main/native/tests/test-libhdfs.sh}} to 
> run some tests against a mini DFS cluster. Now that the 
> {{NativeMiniDfsCluster}} exists, it makes more sense to use that rather than 
> rely on an external bash script to start a mini DFS cluster.
> The {{libhdfs-tests}} directory (which contains {{test_libhdfs_ops.c}}) 
> contains two other files: {{test_libhdfs_read.c}} and 
> {{test_libhdfs_write.c}}. At some point, these files might have been used in 
> conjunction with {{test-libhdfs.sh}} to run some tests manually. However, 
> they (1) largely overlap with the test coverage provided by 
> {{test_libhdfs_ops.c}} and (2) are not designed to be run as unit tests. Thus 
> I suggest we move these two files into a new folder called 
> {{libhdfs-examples}} and use them to further document how users of libhdfs 
> can use the API. We can move {{test-libhdfs.sh}} into the examples folder as 
> well given that example files probably require the script to actually work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14268) RBF: Fix the location of the DNs in getDatanodeReport()

2019-02-12 Thread JIRA
Íñigo Goiri created HDFS-14268:
--

 Summary: RBF: Fix the location of the DNs in getDatanodeReport()
 Key: HDFS-14268
 URL: https://issues.apache.org/jira/browse/HDFS-14268
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Íñigo Goiri
Assignee: Íñigo Goiri


When getting all the DNs in the federation, the Router queries each of the 
subclusters and aggregates them assigning the subcluster id to the location. 
This query uses a {{HashSet}} which provides a "random" order for the results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14086) Failure in test_libhdfs_ops

2019-02-12 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766459#comment-16766459
 ] 

Sahil Takiar commented on HDFS-14086:
-

The issue here is that, out of the box, {{test_libhdfs_ops.c}} runs against 
the local filesystem. However, the test was written to run against a mini DFS 
cluster. Modifying the test to run against a {{NativeMiniDfsCluster}} fixes 
most of the failures seen when running this file.

More details can be found in HDFS-14267. However, I suggest we close this JIRA 
in favor of HDFS-14267 which fixes the issues with {{test_libhdfs_ops.c}} and 
does some additional test cleanup.

> Failure in test_libhdfs_ops
> ---
>
> Key: HDFS-14086
> URL: https://issues.apache.org/jira/browse/HDFS-14086
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.0.3
>Reporter: Pranay Singh
>Priority: Minor
>  Labels: test
>
> The test_libhdfs_ops hdfs_static test was not being executed; the issue that 
> I fixed in HDFS-14083 was seen because this test program was not being run. 
> I had to change the file below to execute this test binary as part of a 
> normal run. There are some failures seen when this test program is run; this 
> JIRA tracks those failures.
> Details of the change to enable this test:
> 
> hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/CMakeLists.txt
> add_libhdfs_test(test_libhdfs_ops hdfs_static) --->
> Failures that are seen when this test is run.
> -
> Name: file:/tmp/hsperfdata_root, Type: D, Replication: 1, BlockSize: 
> 33554432, Size: 0, LastMod: Tue Nov 13 18:03:20 2018
> Owner: root, Group: root, Permissions: 493 (rwxr-xr-x)
> hdfsGetHosts - SUCCESS! ... 
> hdfsChown(path=/tmp/testfile.txt, owner=(null), group=users): 
> FileSystem#setOwner error:
> Shell.ExitCodeException: chown: changing group of '/tmp/testfile.txt': 
> Operation not permitted
> ExitCodeException exitCode=1: chown: changing group of '/tmp/testfile.txt': 
> Operation not permitted
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
>   at org.apache.hadoop.util.Shell.run(Shell.java:901)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1213)
>   at org.apache.hadoop.util.Shell.execCommand(Shell.java:1307)
>   at org.apache.hadoop.util.Shell.execCommand(Shell.java:1289)
>   at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1350)
>   at org.apache.hadoop.fs.FileUtil.setOwner(FileUtil.java:1152)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.setOwner(RawLocalFileSystem.java:851)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem$2.apply(ChecksumFileSystem.java:520)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:489)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.setOwner(ChecksumFileSystem.java:523)
> hdfsChown: Failed!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14267) Add test_libhdfs_ops to libhdfs tests, mark libhdfs_read/write.c as examples

2019-02-12 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766455#comment-16766455
 ] 

Sahil Takiar commented on HDFS-14267:
-

I can't actually assign this to myself, but I'm actively working on this. Will 
post a patch soon.

> Add test_libhdfs_ops to libhdfs tests, mark libhdfs_read/write.c as examples
> 
>
> Key: HDFS-14267
> URL: https://issues.apache.org/jira/browse/HDFS-14267
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs, native, test
>Reporter: Sahil Takiar
>Priority: Major
>
> {{test_libhdfs_ops.c}} provides test coverage for basic operations against 
> libhdfs, but currently has to be run manually (e.g. {{mvn install}} does not 
> run these tests). The goal of this patch is to add {{test_libhdfs_ops.c}} to 
> the list of tests that are automatically run for libhdfs.
> It looks like {{test_libhdfs_ops.c}} was used in conjunction with 
> {{hadoop-hdfs-project/hadoop-hdfs/src/main/native/tests/test-libhdfs.sh}} to 
> run some tests against a mini DFS cluster. Now that the 
> {{NativeMiniDfsCluster}} exists, it makes more sense to use that rather than 
> rely on an external bash script to start a mini DFS cluster.
> The {{libhdfs-tests}} directory (which contains {{test_libhdfs_ops.c}}) 
> contains two other files: {{test_libhdfs_read.c}} and 
> {{test_libhdfs_write.c}}. At some point, these files might have been used in 
> conjunction with {{test-libhdfs.sh}} to run some tests manually. However, 
> they (1) largely overlap with the test coverage provided by 
> {{test_libhdfs_ops.c}} and (2) are not designed to be run as unit tests. Thus 
> I suggest we move these two files into a new folder called 
> {{libhdfs-examples}} and use them to further document how users of libhdfs 
> can use the API. We can move {{test-libhdfs.sh}} into the examples folder as 
> well given that example files probably require the script to actually work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14267) Add test_libhdfs_ops to libhdfs tests, mark libhdfs_read/write.c as examples

2019-02-12 Thread Sahil Takiar (JIRA)
Sahil Takiar created HDFS-14267:
---

 Summary: Add test_libhdfs_ops to libhdfs tests, mark 
libhdfs_read/write.c as examples
 Key: HDFS-14267
 URL: https://issues.apache.org/jira/browse/HDFS-14267
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs, native, test
Reporter: Sahil Takiar


{{test_libhdfs_ops.c}} provides test coverage for basic operations against 
libhdfs, but currently has to be run manually (e.g. {{mvn install}} does not 
run these tests). The goal of this patch is to add {{test_libhdfs_ops.c}} to 
the list of tests that are automatically run for libhdfs.

It looks like {{test_libhdfs_ops.c}} was used in conjunction with 
{{hadoop-hdfs-project/hadoop-hdfs/src/main/native/tests/test-libhdfs.sh}} to 
run some tests against a mini DFS cluster. Now that the 
{{NativeMiniDfsCluster}} exists, it makes more sense to use that rather than 
rely on an external bash script to start a mini DFS cluster.

The {{libhdfs-tests}} directory (which contains {{test_libhdfs_ops.c}}) 
contains two other files: {{test_libhdfs_read.c}} and {{test_libhdfs_write.c}}. 
At some point, these files might have been used in conjunction with 
{{test-libhdfs.sh}} to run some tests manually. However, they (1) largely 
overlap with the test coverage provided by {{test_libhdfs_ops.c}} and (2) are 
not designed to be run as unit tests. Thus I suggest we move these two files 
into a new folder called {{libhdfs-examples}} and use them to further document 
how users of libhdfs can use the API. We can move {{test-libhdfs.sh}} into the 
examples folder as well given that example files probably require the script to 
actually work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14081) hdfs dfsadmin -metasave metasave_test results NPE

2019-02-12 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766411#comment-16766411
 ] 

Shweta commented on HDFS-14081:
---

Posted new patch to correct checkstyle issues.
Also, failed tests pass locally.

> hdfs dfsadmin -metasave metasave_test results NPE
> -
>
> Key: HDFS-14081
> URL: https://issues.apache.org/jira/browse/HDFS-14081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-14081.001.patch, HDFS-14081.002.patch, 
> HDFS-14081.003.patch, HDFS-14081.004.patch, HDFS-14081.005.patch, 
> HDFS-14081.006.patch, HDFS-14081.007.patch
>
>
> A race condition is encountered while adding a Block to 
> postponedMisreplicatedBlocks, which in turn tries to retrieve the Block from 
> the BlockManager, where it may not be present. 
> This happens in HA: metasave on the first NN succeeded but failed on the 
> second NN. The stack trace showing the NPE is as follows:
> {code}
> 2018-07-12 21:39:09,783 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 24 on 8020, call Call#1 Retry#0 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.metaSave from 
> 172.26.9.163:602342018-07-12 21:39:09,783 WARN org.apache.hadoop.ipc.Server: 
> IPC Server handler 24 on 8020, call Call#1 Retry#0 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.metaSave from 
> 172.26.9.163:60234java.lang.NullPointerException at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseSourceDatanodes(BlockManager.java:2175)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.dumpBlockMeta(BlockManager.java:830)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.metaSave(BlockManager.java:762)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.metaSave(FSNamesystem.java:1782)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.metaSave(FSNamesystem.java:1766)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.metaSave(NameNodeRpcServer.java:1320)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.metaSave(ClientNamenodeProtocolServerSideTranslatorPB.java:928)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675) {code}
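For orientation, a hedged sketch of the defensive fix the stack trace points 
at: the block may already have been removed from the block map by the time it 
is dumped, so check for null before dereferencing. The names mirror the trace; 
the exact code is an assumption, not the attached patch:
{code:java}
// Inside BlockManager#dumpBlockMeta (sketch):
private void dumpBlockMeta(Block block, PrintWriter out) {
  BlockInfo storedBlock = blocksMap.getStoredBlock(block);
  if (storedBlock == null) {
    out.println(block + " is deleted from the block map");
    return; // skip instead of hitting the NPE in chooseSourceDatanodes()
  }
  // ... existing dump logic over storedBlock ...
}
{code}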



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14081) hdfs dfsadmin -metasave metasave_test results NPE

2019-02-12 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-14081:
--
Attachment: HDFS-14081.007.patch

> hdfs dfsadmin -metasave metasave_test results NPE
> -
>
> Key: HDFS-14081
> URL: https://issues.apache.org/jira/browse/HDFS-14081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-14081.001.patch, HDFS-14081.002.patch, 
> HDFS-14081.003.patch, HDFS-14081.004.patch, HDFS-14081.005.patch, 
> HDFS-14081.006.patch, HDFS-14081.007.patch
>
>
> A race condition is encountered while adding a Block to 
> postponedMisreplicatedBlocks; metasave then tries to retrieve that Block from 
> the BlockManager, where it may no longer be present. 
> This happens in HA: metasave on the first NN succeeded but failed on the 
> second NN. The stack trace showing the NPE is as follows:
> {code}
> 2018-07-12 21:39:09,783 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 24 on 8020, call Call#1 Retry#0 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.metaSave from 
> 172.26.9.163:602342018-07-12 21:39:09,783 WARN org.apache.hadoop.ipc.Server: 
> IPC Server handler 24 on 8020, call Call#1 Retry#0 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.metaSave from 
> 172.26.9.163:60234java.lang.NullPointerException at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseSourceDatanodes(BlockManager.java:2175)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.dumpBlockMeta(BlockManager.java:830)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.metaSave(BlockManager.java:762)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.metaSave(FSNamesystem.java:1782)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.metaSave(FSNamesystem.java:1766)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.metaSave(NameNodeRpcServer.java:1320)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.metaSave(ClientNamenodeProtocolServerSideTranslatorPB.java:928)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675) {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class

2019-02-12 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766375#comment-16766375
 ] 

BELUGA BEHR commented on HDFS-14258:


[~elgoiri] Ya, the comment formats don't bother me at all; both are valid.  One 
marks the sections, the other marks the line-by-line steps.  Also, I think 
sprinkling a {{-1}} in there does not help code readability.

I can't stop you from making those changes on commit, but please consider the 
latest patch for inclusion.

> Introduce Java Concurrent Package To DataXceiverServer Class
> 
>
> Key: HDFS-14258
> URL: https://issues.apache.org/jira/browse/HDFS-14258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, 
> HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, 
> HDFS-14258.6.patch, HDFS-14258.7.patch
>
>
> * Use Java concurrent package to replace current facilities in 
> {{DataXceiverServer}}.
> * A little bit of extra clean up
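For reference, the core idea of the first item, bounding concurrent data 
transfer threads with java.util.concurrent primitives instead of a hand-rolled 
counter, can be sketched as follows (illustrative only, not the patch itself; 
XceiverLimiter is a made-up name):

{code:java}
import java.util.concurrent.Semaphore;

// Minimal sketch: a Semaphore caps the number of concurrent xceivers.
class XceiverLimiter {
  private final Semaphore slots;

  XceiverLimiter(int maxThreads) {
    this.slots = new Semaphore(maxThreads);
  }

  /** Block until a transfer slot is free, then run the task. */
  void runXceiver(Runnable task) throws InterruptedException {
    slots.acquire();
    try {
      task.run();
    } finally {
      slots.release();
    }
  }
}
{code}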



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class

2019-02-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766357#comment-16766357
 ] 

Íñigo Goiri commented on HDFS-14258:


{quote}
I am sorry. I do not understand this comment. Are you referring to {{Simulate 
grabbing 2 threads}}? If so, this is the appropriate comment format.
{quote}
I'm referring to {{TestDataNodeReconfiguration#234}} and 
{{TestDataNodeReconfiguration#249}} for example.

{quote}
I think it is clearer to call Math.abs than to introduce a magic number and 
assume that everyone understands basic math. I'm not sure what about this you 
do not like.
{quote}
I would personally go for:
{code}
final int delta = this.maxThreads - newMaxThreads;
LOG.debug("Change concurrent thread count to {} from {}", newMaxThreads,
    this.maxThreads);
if (delta == 0) {
  return true;
}
if (delta < 0) {
  // Growing the pool: release the extra permits.
  LOG.debug("Adding thread capacity: {}", -1 * delta);
  this.semaphore.release(-1 * delta);
  this.maxThreads = newMaxThreads;
  return true;
}
...
// Shrinking the pool: delta is positive here, so acquire delta permits.
boolean acquired = this.semaphore.tryAcquire(delta, duration,
    TimeUnit.SECONDS);
{code}

> Introduce Java Concurrent Package To DataXceiverServer Class
> 
>
> Key: HDFS-14258
> URL: https://issues.apache.org/jira/browse/HDFS-14258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, 
> HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, 
> HDFS-14258.6.patch, HDFS-14258.7.patch
>
>
> * Use Java concurrent package to replace current facilities in 
> {{DataXceiverServer}}.
> * A little bit of extra clean up



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1076) TestSCMNodeManager crashed the jvm

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766349#comment-16766349
 ] 

Hadoop QA commented on HDDS-1076:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} root: The patch generated 3 new + 0 unchanged - 
0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 31m 43s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m  3s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
|   | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1076 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958435/HDDS-1076.003.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux d5c64a4de980 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build@2/ozone.sh |
| git revision | trunk / 7806403 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2241/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2241/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2241/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2241/testReport/ |
| Max. process+thread count | 1177 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/server-scm hadoop-ozone/integration-test U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2241/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestSCMNodeManager crashed the jvm
> --
>
> Key: HDDS-1076
> URL: https://issues.apache.org/jira/browse/HDDS-1076
> Project: Hadoop Distributed Data Store
>  

[jira] [Commented] (HDDS-726) Ozone Client should update SCM to move the container out of allocation path in case a write transaction fails

2019-02-12 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766340#comment-16766340
 ] 

Shashikant Banerjee commented on HDDS-726:
--

1st patch V0 updated. Will add more tests in the next patch.

> Ozone Client should update SCM to move the container out of allocation path 
> in case a write transaction fails
> -
>
> Key: HDDS-726
> URL: https://issues.apache.org/jira/browse/HDDS-726
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-726.000.patch
>
>
> Once a container write transaction fails, the container will be marked 
> corrupted. When the Ozone client gets an exception in such a case, it should 
> tell SCM to move the container out of the allocation path. SCM will 
> eventually close the container.
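A rough sketch of that flow (ScmClient, excludeContainer and onWriteFailure 
are hypothetical names for illustration, not the real Ozone client API):

{code:java}
// Hypothetical sketch of the client-side handling described above.
interface ScmClient {
  // Take the container off the allocation path; SCM will eventually close it.
  void excludeContainer(long containerId);
}

class WriteFailureHandler {
  private final ScmClient scm;

  WriteFailureHandler(ScmClient scm) {
    this.scm = scm;
  }

  void onWriteFailure(long containerId, Runnable retryWrite) {
    // The failed transaction marked the container corrupted, so stop
    // allocating new blocks on it.
    scm.excludeContainer(containerId);
    // Retry the write; it will land on a freshly allocated container.
    retryWrite.run();
  }
}
{code}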



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1047) Fix TestRatisPipelineProvider#testCreatePipelineWithFactor

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766347#comment-16766347
 ] 

Hadoop QA commented on HDDS-1047:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 31m 47s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m  2s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1047 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958436/HDDS-1047.003.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 9492f6e6d857 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 7806403 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2240/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2240/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2240/testReport/ |
| Max. process+thread count | 1191 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/integration-test U: hadoop-ozone/integration-test |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2240/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix TestRatisPipelineProvider#testCreatePipelineWithFactor
> --
>
> Key: HDDS-1047
> URL: https://issues.apache.org/jira/browse/HDDS-1047
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Nilotpal Nandi
>Priority: Major
> 

[jira] [Commented] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766343#comment-16766343
 ] 

Hadoop QA commented on HDDS-1061:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} root: The patch generated 1 new + 1 unchanged - 
0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 10s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 57s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1061 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958433/HDDS-1061.02.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux eec42dff4c29 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 7806403 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2239/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2239/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2239/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2239/testReport/ |
| Max. process+thread count | 1226 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2239/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DelegationToken: Add certificate serial id to Ozone Delegation Token 
> Identifier
> 
>
> Key: HDDS-1061
> URL: https://issues.apache.org/jira/browse/HDDS-1061
> Project: Hadoop 

[jira] [Commented] (HDDS-726) Ozone Client should update SCM to move the container out of allocation path in case a write transaction fails

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766344#comment-16766344
 ] 

Hadoop QA commented on HDDS-726:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDDS-726 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-726 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958440/HDDS-726.000.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2242/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone Client should update SCM to move the container out of allocation path 
> in case a write transaction fails
> -
>
> Key: HDDS-726
> URL: https://issues.apache.org/jira/browse/HDDS-726
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-726.000.patch
>
>
> Once a container write transaction fails, the container will be marked 
> corrupted. When the Ozone client gets an exception in such a case, it should 
> tell SCM to move the container out of the allocation path. SCM will 
> eventually close the container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-726) Ozone Client should update SCM to move the container out of allocation path in case a write transaction fails

2019-02-12 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-726:
-
Status: Patch Available  (was: Open)

> Ozone Client should update SCM to move the container out of allocation path 
> in case a write transaction fails
> -
>
> Key: HDDS-726
> URL: https://issues.apache.org/jira/browse/HDDS-726
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-726.000.patch
>
>
> Once a container write transaction fails, the container will be marked 
> corrupted. When the Ozone client gets an exception in such a case, it should 
> tell SCM to move the container out of the allocation path. SCM will 
> eventually close the container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-726) Ozone Client should update SCM to move the container out of allocation path in case a write transaction fails

2019-02-12 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-726:
-
Attachment: HDDS-726.000.patch

> Ozone Client should update SCM to move the container out of allocation path 
> in case a write transaction fails
> -
>
> Key: HDDS-726
> URL: https://issues.apache.org/jira/browse/HDDS-726
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-726.000.patch
>
>
> Once a container write transaction fails, the container will be marked 
> corrupted. When the Ozone client gets an exception in such a case, it should 
> tell SCM to move the container out of the allocation path. SCM will 
> eventually close the container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class

2019-02-12 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766322#comment-16766322
 ] 

BELUGA BEHR commented on HDFS-14258:


[~elgoiri]

{quote}
Just one minor comment: you added some javadoc comments in the middle to mark 
the different phases, we should make them regular comments.
{quote}

I am sorry.  I do not understand this comment.  Are you referring to {{Simulate 
grabbing 2 threads}}? If so, this is the appropriate comment format.

I think it is clearer to call {{Math.abs}} than to introduce a magic number and 
assume that everyone understands basic math :)  I'm not sure what about this 
you do not like.
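For what it's worth, the two spellings are equivalent for a negative delta; the 
question is purely which states the intent better (illustrative fragment, 
reusing the semaphore/delta names from the snippet above):

{code:java}
// Equivalent when delta < 0; Math.abs avoids the -1 factor.
this.semaphore.release(Math.abs(delta));
this.semaphore.release(-1 * delta);
{code}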

I hope you will consider accepting the latest patch.  I very much appreciate 
all your feedback and time.

> Introduce Java Concurrent Package To DataXceiverServer Class
> 
>
> Key: HDFS-14258
> URL: https://issues.apache.org/jira/browse/HDFS-14258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, 
> HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, 
> HDFS-14258.6.patch, HDFS-14258.7.patch
>
>
> * Use Java concurrent package to replace current facilities in 
> {{DataXceiverServer}}.
> * A little bit of extra clean up



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy

2019-02-12 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13209:

Attachment: HDFS-13209-05.patch

> DistributedFileSystem.create should allow an option to provide StoragePolicy
> 
>
> Key: HDFS-13209
> URL: https://issues.apache.org/jira/browse/HDFS-13209
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: Jean-Marc Spaggiari
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13209-01.patch, HDFS-13209-02.patch, 
> HDFS-13209-03.patch, HDFS-13209-04.patch, HDFS-13209-05.patch
>
>
> DistributedFileSystem.create allows getting an FSDataOutputStream. The stored 
> file and related blocks will use the directory-based StoragePolicy.
>  
> However, sometimes we might need to keep all files in the same directory 
> (consistency constraint) but might want some of them on SSD (small, in my 
> case) until they are processed and merged/removed. Then they will go on the 
> default policy.
>  
> When creating a file, it would be useful to have an option to specify a 
> different StoragePolicy...
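To make the request concrete, a sketch of today's two-step workaround next to 
the create overload being asked for (setStoragePolicy is the existing API; the 
overload at the end is hypothetical and does not exist):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class CreateWithPolicy {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS points at an HDFS cluster.
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    Path small = new Path("/data/incoming/part-0001");
    try (FSDataOutputStream out = dfs.create(small)) {
      // Existing two-step workaround: the policy is applied after create,
      // so early block allocations may still follow the directory policy.
      dfs.setStoragePolicy(small, "ALL_SSD");
      out.writeBytes("payload");
    }
    // Requested (hypothetical) one-step form:
    // FSDataOutputStream out = dfs.create(small, /* ..., */ "ALL_SSD");
  }
}
{code}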



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1038) Datanode fails to connect with secure SCM

2019-02-12 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766320#comment-16766320
 ] 

Xiaoyu Yao commented on HDDS-1038:
--

Thanks [~ajayydv] for the patch. It looks good to me overall. Here are a few 
comments:

 
 # Should we enable service-level auth for the security protocol as well here?
 # Can we enable hadoop.security.authorization by default in the securedocker 
and make sure it works as expected?
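For reference, service-level authorization is switched on with the standard 
core-site.xml property below; the per-protocol ACLs then live in 
hadoop-policy.xml:

{code:xml}
<!-- core-site.xml -->
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
{code}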

 

 

> Datanode fails to connect with secure SCM
> -
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch, HDDS-1038.01.patch
>
>
> In a secure Ozone cluster. Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


