[jira] [Commented] (HDDS-1060) Token: Add api to get OM certificate from SCM
[ https://issues.apache.org/jira/browse/HDDS-1060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765768#comment-16765768 ]

Hadoop QA commented on HDDS-1060:
---------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 4m 5s | trunk passed |
| +1 | checkstyle | 0m 42s | trunk passed |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: . |
| +1 | findbugs | 0m 0s | trunk passed |
| +1 | javadoc | 1m 40s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 4m 14s | the patch passed |
| +1 | checkstyle | 0m 45s | root: The patch generated 0 new + 3 unchanged - 2 fixed = 3 total (was 5) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: . |
| +1 | findbugs | 0m 0s | the patch passed |
| +1 | javadoc | 1m 53s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 40m 23s | hadoop-ozone in the patch failed. |
| -1 | unit | 2m 26s | hadoop-hdds in the patch failed. |
| +1 | asflicense | 0m 18s | The patch does not generate ASF License warnings. |
| | | 57m 20s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.TestOzoneWebAccess |
| | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1060 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12958352/HDDS-1060.02.patch |
| Optional Tests | asflicense javac javadoc unit findbugs checkstyle |
| uname | Linux bb598f80d450 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / d48e61d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2233/artifact/out/patch-unit-hadoop-ozone.txt |
| unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2233/artifact/out/patch-unit-hadoop-hdds.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/2233/testReport/ |
| Max. process+thread count | 1350 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/client hadoop-hdds/common hadoop-hdds/server-scm hadoop-ozone/integration-test U: . |
| Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/2233/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Token: Add api to get OM certificate from SCM
> ---------------------------------------------
>
> Key: HDDS-1060
> URL: https://issues.apache.org/jira/browse/HDDS-1060
> Project: Hadoop Distributed Data Store
> Issue Type: Sub-task
> Reporter: Ajay Kumar
> Assignee:
[jira] [Commented] (HDFS-14255) Tail Follow Interval Should Allow To Specify The Sleep Interval To Save Unnecessary RPC's
[ https://issues.apache.org/jira/browse/HDFS-14255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765755#comment-16765755 ]

Vinayakumar B commented on HDFS-14255:
--------------------------------------

+1, committing later today.

> Tail Follow Interval Should Allow To Specify The Sleep Interval To Save
> Unnecessary RPC's
> -----------------------------------------------------------------------
>
> Key: HDFS-14255
> URL: https://issues.apache.org/jira/browse/HDFS-14255
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Harshakiran Reddy
> Assignee: Ayush Saxena
> Priority: Major
> Attachments: HDFS-14255-01.patch, HDFS-14255-02.patch
>
> As of now tail -f follows every 5 seconds. We should allow a parameter to
> specify this sleep interval. Linux has this configurable in the form of the
> -s parameter.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
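The description above proposes a GNU-tail-style `-s` flag for the follow sleep interval. A minimal, hypothetical Java sketch of that option parsing, assuming an interval given in seconds with the current 5-second default; the class and method names are illustrative, not the actual HDFS-14255 patch:

```java
import java.util.concurrent.TimeUnit;

/**
 * Hypothetical sketch of the option parsing HDFS-14255 proposes: an optional
 * -s <seconds> flag (mirroring GNU tail) that overrides the hard-coded
 * 5-second follow interval. Names and defaults are illustrative only.
 */
public class TailFollowInterval {
  static final long DEFAULT_SLEEP_MS = TimeUnit.SECONDS.toMillis(5);

  public static long parseSleepMillis(String[] args) {
    for (int i = 0; i < args.length - 1; i++) {
      if ("-s".equals(args[i])) {
        // Interval is given in seconds, like GNU tail's -s option.
        return TimeUnit.SECONDS.toMillis(Long.parseLong(args[i + 1]));
      }
    }
    return DEFAULT_SLEEP_MS; // no flag: keep today's 5-second behaviour
  }
}
```

With `-s 2` this yields a 2000 ms poll interval; without the flag, behaviour is unchanged, which is what makes the option backward compatible.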
[jira] [Commented] (HDFS-14266) EC : Unable To Get Datanode Info for EC Blocks if One Block Is Not Available.
[ https://issues.apache.org/jira/browse/HDFS-14266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765751#comment-16765751 ]

Vinayakumar B commented on HDFS-14266:
--------------------------------------

+1. Committing later today.

> EC : Unable To Get Datanode Info for EC Blocks if One Block Is Not Available.
> -----------------------------------------------------------------------------
>
> Key: HDFS-14266
> URL: https://issues.apache.org/jira/browse/HDFS-14266
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 3.1.1
> Reporter: Harshakiran Reddy
> Assignee: Ayush Saxena
> Priority: Major
> Labels: EC
> Attachments: HDFS-14266-01.patch, HDFS-14266-02.patch
>
> If one block gets removed from the block group, the datanode information
> for the block group shows null.
>
> {noformat}
> Block Id: blk_-9223372036854775792
> Block belongs to: /ec/file1
> No. of Expected Replica: 2
> No. of live Replica: 2
> No. of excess Replica: 0
> No. of stale Replica: 0
> No. of decommissioned Replica: 0
> No. of decommissioning Replica: 0
> No. of corrupted Replica: 0
> null
> Fsck on blockId 'blk_-9223372036854775792
> {noformat}
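The fsck output above prints a literal `null` where the datanode info of the missing internal block should be. A hedged sketch of the null-guard idea, using plain strings rather than the actual HDFS datanode-info classes; names are illustrative, not the HDFS-14266 patch:

```java
/**
 * Hypothetical sketch of the HDFS-14266 symptom: when one internal block of
 * an EC block group is lost, its storage entry is null and ends up printed
 * as the literal "null". Reporting the gap explicitly is clearer. The types
 * here are plain strings, not the actual HDFS datanode-info classes.
 */
public class EcBlockGroupReport {
  public static String describe(String[] storageLocations) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < storageLocations.length; i++) {
      if (storageLocations[i] == null) {
        // Guard the missing internal block instead of printing "null".
        sb.append("Block index ").append(i).append(": replica not available\n");
      } else {
        sb.append("Block index ").append(i).append(": ")
          .append(storageLocations[i]).append('\n');
      }
    }
    return sb.toString();
  }
}
```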
[jira] [Updated] (HDDS-1060) Token: Add api to get OM certificate from SCM
[ https://issues.apache.org/jira/browse/HDDS-1060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ajay Kumar updated HDDS-1060:
-----------------------------
    Attachment: HDDS-1060.02.patch

> Token: Add api to get OM certificate from SCM
> ---------------------------------------------
>
> Key: HDDS-1060
> URL: https://issues.apache.org/jira/browse/HDDS-1060
> Project: Hadoop Distributed Data Store
> Issue Type: Sub-task
> Reporter: Ajay Kumar
> Assignee: Ajay Kumar
> Priority: Major
> Labels: Blocker, Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1060.00.patch, HDDS-1060.01.patch, HDDS-1060.02.patch
>
> Datanodes/OM need OM certificate to validate block tokens and delegation
> tokens.
> Add API for:
> 1. getCertificate(String certSerialId): To get certificate from SCM based on
> certificate serial id.
> 2. getCACertificate(): To get CA certificate.
[jira] [Commented] (HDDS-1060) Token: Add api to get OM certificate from SCM
[ https://issues.apache.org/jira/browse/HDDS-1060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765740#comment-16765740 ]

Ajay Kumar commented on HDDS-1060:
----------------------------------

[~xyao] thanks for the review.

{quote}Line 61: Need to clarify if this returns null if the certificate is not found or throws at the interface level? Based on the code I found later in SCMSecurityProtocolServer.java Line 162, it seems to throw an IOE if the certificate is not found.{quote}
You are right: {{CertificateServer}} returns null if a certificate with the given serial id doesn't exist, but the API in {{SCMSecurityProtocolServer}} throws an exception so that certificate clients don't ignore it silently. Updated the javadoc for {{CertificateServer}}.

{quote}Line 64: the comments need to be updated. the certSerialId is not the certificate for this CA.{quote}
Done.

{quote}StorageContainerManager.java Line 227: can you add more comments on the usage of this flag and what to expect to work without a SCM login?{quote}
Removed the flag; added a test in {{TestSecureOzoneCluster}} instead. It validates the RPC call with and without Kerberos.

{quote}TestStorageContainerManager.java Line 460: can we put this in try{} finally{}?{quote}
Done.

> Token: Add api to get OM certificate from SCM
> ---------------------------------------------
>
> Key: HDDS-1060
> URL: https://issues.apache.org/jira/browse/HDDS-1060
> Project: Hadoop Distributed Data Store
> Issue Type: Sub-task
> Reporter: Ajay Kumar
> Assignee: Ajay Kumar
> Priority: Major
> Labels: Blocker, Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1060.00.patch, HDDS-1060.01.patch
>
> Datanodes/OM need OM certificate to validate block tokens and delegation
> tokens.
> Add API for:
> 1. getCertificate(String certSerialId): To get certificate from SCM based on
> certificate serial id.
> 2. getCACertificate(): To get CA certificate.
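The contract discussed above — the internal store returns null for an unknown serial id, while the RPC-facing API converts that null into an IOException so callers cannot silently ignore a missing certificate — can be sketched with stand-in types. This is illustrative only, not the real SCM code:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal sketch of the lookup contract from the HDDS-1060 review, with
 * stand-in types: a CertificateServer-style lookup returns null when a
 * serial id is unknown, and an SCMSecurityProtocolServer-style API wraps
 * that null into an IOException. Not the actual Ozone classes.
 */
public class CertLookupSketch {
  private final Map<String, String> certsBySerialId = new HashMap<>();

  public void addCert(String serialId, String pemCert) {
    certsBySerialId.put(serialId, pemCert);
  }

  /** CertificateServer-style lookup: null when absent. */
  public String lookup(String certSerialId) {
    return certsBySerialId.get(certSerialId);
  }

  /** RPC-facing API: throws when absent, so clients cannot ignore it. */
  public String getCertificate(String certSerialId) throws IOException {
    String cert = lookup(certSerialId);
    if (cert == null) {
      throw new IOException("Certificate not found for serial id: " + certSerialId);
    }
    return cert;
  }
}
```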
[jira] [Commented] (HDFS-14230) RBF: Throw RetriableException instead of IOException when no namenodes available
[ https://issues.apache.org/jira/browse/HDFS-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765734#comment-16765734 ]

CR Hota commented on HDFS-14230:
--------------------------------

[~ferhui] Thanks for working on the patch. Can we assert exceptions using ExpectedException rules? It makes it easy to understand the code without verbose try/catch. Look at TestConnectionManager.testGetConnectionWithException. Other than that, LGTM.

> RBF: Throw RetriableException instead of IOException when no namenodes
> available
> ----------------------------------------------------------------------
>
> Key: HDFS-14230
> URL: https://issues.apache.org/jira/browse/HDFS-14230
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 3.2.0, 3.1.1, 2.9.2, 3.0.3
> Reporter: Fei Hui
> Assignee: Fei Hui
> Priority: Major
> Attachments: HDFS-14230-HDFS-13891.001.patch, HDFS-14230-HDFS-13891.002.patch,
> HDFS-14230-HDFS-13891.003.patch, HDFS-14230-HDFS-13891.004.patch,
> HDFS-14230-HDFS-13891.005.patch
>
> Failover usually happens when upgrading namenodes, and there are no active
> namenodes for some seconds; accessing HDFS through the router fails at this
> moment. This could make jobs fail or hang. Some Hive job logs are as follows:
> {code:java}
> 2019-01-03 16:12:08,337 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 133.33 sec
> MapReduce Total cumulative CPU time: 2 minutes 13 seconds 330 msec
> Ended Job = job_1542178952162_24411913
> Launching Job 4 out of 6
> Exception in thread "Thread-86" java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): No namenode available under nameservice Cluster3
> at org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.shouldRetry(RouterRpcClient.java:328)
> at org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:488)
> at org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:495)
> at org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:385)
> at org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:760)
> at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
> Caused by:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
> at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
> at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1804)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1338)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3925)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1014)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at
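The change HDFS-14230 proposes — throwing a retriable error when no namenode of a nameservice is available, so clients back off and retry during failover instead of failing the job — can be sketched with a stand-in for `org.apache.hadoop.ipc.RetriableException`. The class body and message below are illustrative, not the actual router code:

```java
import java.io.IOException;

/**
 * Sketch of the HDFS-14230 idea, using a stand-in for
 * org.apache.hadoop.ipc.RetriableException: "no namenodes available" is a
 * transient state during failover, so the router should surface it as a
 * retriable error rather than a plain IOException. Illustrative only.
 */
public class RouterRetrySketch {
  /** Stand-in for org.apache.hadoop.ipc.RetriableException. */
  public static class RetriableException extends IOException {
    public RetriableException(String msg) { super(msg); }
  }

  public static void checkNamenodesAvailable(String nsId, int availableNamenodes)
      throws IOException {
    if (availableNamenodes == 0) {
      // Retriable: the IPC client should back off and retry, not fail the job.
      throw new RetriableException("No namenode available under nameservice " + nsId);
    }
  }
}
```

Because the stand-in still extends IOException, existing catch blocks keep working; only retry policies that recognize the retriable type change behaviour.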
[jira] [Updated] (HDDS-1082) OutOfMemoryError while reading key of size 100GB
[ https://issues.apache.org/jira/browse/HDDS-1082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mukul Kumar Singh updated HDDS-1082:
------------------------------------
    Fix Version/s: 0.4.0

> OutOfMemoryError while reading key of size 100GB
> ------------------------------------------------
>
> Key: HDDS-1082
> URL: https://issues.apache.org/jira/browse/HDDS-1082
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Nilotpal Nandi
> Assignee: Supratim Deka
> Priority: Blocker
> Fix For: 0.4.0
>
> Steps taken:
> # put key with size 100GB
> # Tried to read back the key.
> Error thrown:
> {noformat}
> java.lang.OutOfMemoryError: Java heap space
> Dumping heap to /tmp/heapdump.bin ...
> Heap dump file created [3883178021 bytes in 10.667 secs]
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at org.apache.ratis.thirdparty.com.google.protobuf.ByteString.toByteArray(ByteString.java:643)
> at org.apache.hadoop.ozone.common.Checksum.verifyChecksum(Checksum.java:217)
> at org.apache.hadoop.hdds.scm.storage.BlockInputStream.readChunkFromContainer(BlockInputStream.java:227)
> at org.apache.hadoop.hdds.scm.storage.BlockInputStream.prepareRead(BlockInputStream.java:188)
> at org.apache.hadoop.hdds.scm.storage.BlockInputStream.read(BlockInputStream.java:130)
> at org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.read(KeyInputStream.java:232)
> at org.apache.hadoop.ozone.client.io.KeyInputStream.read(KeyInputStream.java:126)
> at org.apache.hadoop.ozone.client.io.OzoneInputStream.read(OzoneInputStream.java:49)
> at java.io.InputStream.read(InputStream.java:101)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
> at org.apache.hadoop.ozone.web.ozShell.keys.GetKeyHandler.call(GetKeyHandler.java:98)
> at org.apache.hadoop.ozone.web.ozShell.keys.GetKeyHandler.call(GetKeyHandler.java:48)
> at picocli.CommandLine.execute(CommandLine.java:919)
> at picocli.CommandLine.access$700(CommandLine.java:104)
> at picocli.CommandLine$RunLast.handle(CommandLine.java:1083)
> at picocli.CommandLine$RunLast.handle(CommandLine.java:1051)
> at picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:959)
> at picocli.CommandLine.parseWithHandlers(CommandLine.java:1242)
> at picocli.CommandLine.parseWithHandler(CommandLine.java:1181)
> at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:61)
> at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:52)
> at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:83)
> {noformat}
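The trace shows the whole chunk being materialized at once via `ByteString.toByteArray()` during checksum verification. The general cure is a fixed-size buffer: memory stays constant no matter how large the key is. The sketch below illustrates the technique, not the actual HDDS-1082 fix:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

/**
 * Streaming copy with bounded memory: a fixed-size buffer is reused for
 * every read, so heap usage is O(bufSize) regardless of how many bytes
 * pass through. A sketch of the technique, not the HDDS-1082 patch.
 */
public class BoundedCopy {
  public static long copyBounded(InputStream in, OutputStream out, int bufSize)
      throws IOException {
    byte[] buf = new byte[bufSize]; // constant memory, independent of key size
    long total = 0;
    int n;
    while ((n = in.read(buf)) > 0) {
      out.write(buf, 0, n);
      total += n;
    }
    return total;
  }
}
```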
[jira] [Updated] (HDDS-1082) OutOfMemoryError while reading key of size 100GB
[ https://issues.apache.org/jira/browse/HDDS-1082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arpit Agarwal updated HDDS-1082:
--------------------------------
    Priority: Blocker (was: Major)

> OutOfMemoryError while reading key of size 100GB
> ------------------------------------------------
>
> Key: HDDS-1082
> URL: https://issues.apache.org/jira/browse/HDDS-1082
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Nilotpal Nandi
> Assignee: Supratim Deka
> Priority: Blocker
[jira] [Updated] (HDDS-1082) OutOfMemoryError while reading key of size 100GB
[ https://issues.apache.org/jira/browse/HDDS-1082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arpit Agarwal updated HDDS-1082:
--------------------------------
    Target Version/s: 0.4.0

> OutOfMemoryError while reading key of size 100GB
> ------------------------------------------------
>
> Key: HDDS-1082
> URL: https://issues.apache.org/jira/browse/HDDS-1082
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Nilotpal Nandi
> Assignee: Supratim Deka
> Priority: Major
[jira] [Updated] (HDDS-1082) OutOfMemoryError while reading key of size 100GB
[ https://issues.apache.org/jira/browse/HDDS-1082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mukul Kumar Singh updated HDDS-1082:
------------------------------------
    Summary: OutOfMemoryError while reading key of size 100GB (was: OutOfMemoryError while reading key)

> OutOfMemoryError while reading key of size 100GB
> ------------------------------------------------
>
> Key: HDDS-1082
> URL: https://issues.apache.org/jira/browse/HDDS-1082
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Nilotpal Nandi
> Assignee: Supratim Deka
> Priority: Major
[jira] [Commented] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier
[ https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765709#comment-16765709 ]

Hadoop QA commented on HDDS-1061:
---------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 16s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 4m 5s | trunk passed |
| +1 | checkstyle | 0m 43s | trunk passed |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: . |
| +1 | findbugs | 0m 0s | trunk passed |
| +1 | javadoc | 1m 42s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 3m 56s | the patch passed |
| -0 | checkstyle | 0m 42s | root: The patch generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: . |
| +1 | findbugs | 0m 0s | the patch passed |
| +1 | javadoc | 1m 42s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 1m 57s | hadoop-ozone in the patch failed. |
| -1 | unit | 2m 0s | hadoop-hdds in the patch failed. |
| +1 | asflicense | 0m 17s | The patch does not generate ASF License warnings. |
| | | 18m 2s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.security.TestOzoneTokenIdentifier |
| | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1061 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12958346/HDDS-1061.01.patch |
| Optional Tests | asflicense javac javadoc unit findbugs checkstyle |
| uname | Linux d8ffad50c824 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / d48e61d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDDS-Build/2232/artifact/out/diff-checkstyle-root.txt |
| unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2232/artifact/out/patch-unit-hadoop-ozone.txt |
| unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2232/artifact/out/patch-unit-hadoop-hdds.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/2232/testReport/ |
| Max. process+thread count | 133 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/common hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
| Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/2232/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> DelegationToken: Add certificate serial id to Ozone Delegation Token
> Identifier
> --------------------------------------------------------------------
>
> Key: HDDS-1061
>
[jira] [Updated] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier
[ https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ajay Kumar updated HDDS-1061:
-----------------------------
    Status: Patch Available (was: In Progress)

> DelegationToken: Add certificate serial id to Ozone Delegation Token
> Identifier
> --------------------------------------------------------------------
>
> Key: HDDS-1061
> URL: https://issues.apache.org/jira/browse/HDDS-1061
> Project: Hadoop Distributed Data Store
> Issue Type: Sub-task
> Reporter: Ajay Kumar
> Assignee: Ajay Kumar
> Priority: Major
> Attachments: HDDS-1061.00.patch, HDDS-1061.01.patch
>
> 1. Add certificate serial id to Ozone Delegation Token Identifier. Required
> for OM HA support.
> 2. Validate Ozone token based on public key from OM certificate
[jira] [Updated] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier
[ https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDDS-1061: - Attachment: HDDS-1061.01.patch > DelegationToken: Add certificate serial id to Ozone Delegation Token > Identifier > > > Key: HDDS-1061 > URL: https://issues.apache.org/jira/browse/HDDS-1061 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Attachments: HDDS-1061.00.patch, HDDS-1061.01.patch > > > 1. Add certificate serial id to Ozone Delegation Token Identifier. Required > for OM HA support. > 2. Validate Ozone token based on public key from OM certificate
[jira] [Commented] (HDDS-360) Use RocksDBStore and TableStore for SCM Metadata
[ https://issues.apache.org/jira/browse/HDDS-360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765692#comment-16765692 ] Lokesh Jain commented on HDDS-360: -- [~anu] Thanks for working on this! The patch looks good to me. +1. > Use RocksDBStore and TableStore for SCM Metadata > > > Key: HDDS-360 > URL: https://issues.apache.org/jira/browse/HDDS-360 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Anu Engineer >Priority: Major > Attachments: HDDS-360.001.patch, HDDS-360.002.patch, > HDDS-360.003.patch, HDDS-360.004.patch > >
[jira] [Created] (HDDS-1088) Add blockade Tests to test Replica Manager
Nilotpal Nandi created HDDS-1088: Summary: Add blockade Tests to test Replica Manager Key: HDDS-1088 URL: https://issues.apache.org/jira/browse/HDDS-1088 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Nilotpal Nandi We need to add tests for the Replica Manager covering scenarios such as loss of a node, addition of new nodes, and under-replicated containers.
[jira] [Commented] (HDFS-14263) Remove unnecessary block file exists check from FsDatasetImpl#getBlockInputStream()
[ https://issues.apache.org/jira/browse/HDFS-14263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765689#comment-16765689 ] Surendra Singh Lilhore commented on HDFS-14263: --- Thanks [~virajith] for the review. Attached v2 patch. > Remove unnecessary block file exists check from > FsDatasetImpl#getBlockInputStream() > --- > > Key: HDFS-14263 > URL: https://issues.apache.org/jira/browse/HDFS-14263 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 3.1.1 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore >Priority: Major > Attachments: HDFS-14263.001.patch, HDFS-14263.002.patch > > > As discussed in HDFS-10636, {{FsDatasetImpl#getBlockInputStream()}} does an > unnecessary block replica existence check.
[jira] [Updated] (HDFS-14263) Remove unnecessary block file exists check from FsDatasetImpl#getBlockInputStream()
[ https://issues.apache.org/jira/browse/HDFS-14263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Surendra Singh Lilhore updated HDFS-14263: -- Attachment: HDFS-14263.002.patch > Remove unnecessary block file exists check from > FsDatasetImpl#getBlockInputStream() > --- > > Key: HDFS-14263 > URL: https://issues.apache.org/jira/browse/HDFS-14263 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 3.1.1 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore >Priority: Major > Attachments: HDFS-14263.001.patch, HDFS-14263.002.patch > > > As discussed in HDFS-10636, {{FsDatasetImpl#getBlockInputStream()}} does an > unnecessary block replica existence check.
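The simplification discussed in the HDFS-14263 thread can be sketched as follows. This is an illustrative stand-in with hypothetical names (`OpenBlockFile`, `getBlockInputStream`), not the actual FsDatasetImpl code: the point is that opening the file already reports a missing replica, so a separate exists() check is redundant and races with concurrent deletion.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class OpenBlockFile {
    // Skip the separate exists() check and let FileInputStream's constructor
    // report a missing replica. The old pattern (check exists(), then open)
    // both races with concurrent deletion and costs an extra metadata call.
    static InputStream getBlockInputStream(File blockFile) throws IOException {
        // Throws FileNotFoundException (a subclass of IOException) if the
        // block file is absent -- no separate exists() check needed.
        return new FileInputStream(blockFile);
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("blk_", ".data");
        tmp.deleteOnExit();
        try (InputStream in = getBlockInputStream(tmp)) {
            System.out.println(in.read()); // empty file, so read() returns -1
        }
    }
}
```

A caller that previously branched on exists() can instead catch FileNotFoundException at the point where a missing replica is actually meaningful.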
[jira] [Comment Edited] (HDDS-972) Add support for configuring multiple OMs
[ https://issues.apache.org/jira/browse/HDDS-972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765673#comment-16765673 ] Bharat Viswanadham edited comment on HDDS-972 at 2/12/19 4:46 AM: -- Not sure if above comments are addressed in the latest patch, as I don't see them. Let me know if something I am missing here. Pasting them again here. # In loginUser, we call InetSocketAddress socAddr = OmUtils.getOmAddress(conf), but still getOMAddress is not modified to address HA scenario. I think we should update it here. As still, getOmRpcAddress is just reading as below *final Optional host = getHostNameFromConfigKeys(conf,* *OZONE_OM_ADDRESS_KEY);* 2. One question in MiniOzoneHAClusterImpl, we are generating ports randomly basePort = 1 + RANDOM.nextInt(1000) * 4; But here we are not checking whether ports are free or not. Otherwise, when we start MiniOzoneHAClusterImpl we get an error during start right? was (Author: bharatviswa): Not sure if above comments are addressed in the latest patch, as I don't see them. Pasting them again here. # In loginUser, we call InetSocketAddress socAddr = OmUtils.getOmAddress(conf), but still getOMAddress is not modified to address HA scenario. I think we should update it here. As still, getOmRpcAddress is just reading as below *final Optional host = getHostNameFromConfigKeys(conf,* *OZONE_OM_ADDRESS_KEY);* 2. One question in MiniOzoneHAClusterImpl, we are generating ports randomly basePort = 1 + RANDOM.nextInt(1000) * 4; But here we are not checking whether ports are free or not. Otherwise, when we start MiniOzoneHAClusterImpl we get an error during start right? 
> Add support for configuring multiple OMs > > > Key: HDDS-972 > URL: https://issues.apache.org/jira/browse/HDDS-972 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Attachments: HDDS-972.000.patch, HDDS-972.001.patch, > HDDS-972.002.patch, HDDS-972.003.patch, HDDS-972.004.patch, HDDS-972.005.patch > > > For OM HA, we would need to run multiple (at least 3) OM services so that we > can form a replicated Ratis ring of OMs. This Jira aims to add support for > configuring multiple OMs.
[jira] [Commented] (HDDS-360) Use RocksDBStore and TableStore for SCM Metadata
[ https://issues.apache.org/jira/browse/HDDS-360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765674#comment-16765674 ] Hadoop QA commented on HDDS-360: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 11 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} root: The patch generated 0 new + 0 unchanged - 4 fixed = 0 total (was 4) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s{color} | {color:green} root generated 0 new + 19 unchanged - 1 fixed = 19 total (was 20) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 41m 21s{color} | {color:red} hadoop-ozone in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 3m 3s{color} | {color:red} hadoop-hdds in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 59m 30s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient | | | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient | | | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDDS-360 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12958343/HDDS-360.004.patch | | Optional Tests | asflicense javac javadoc unit findbugs checkstyle | | uname | Linux 2f8c9863d18b 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh | | git revision | trunk / d48e61d | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2231/artifact/out/patch-unit-hadoop-ozone.txt | | unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2231/artifact/out/patch-unit-hadoop-hdds.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/2231/testReport/ | | Max. process+thread count | 1216 (vs. ulimit of 1) | | modules | C: hadoop-hdds/common hadoop-hdds/server-scm hadoop-ozone/common hadoop-ozone/integration-test hadoop-ozone/tools U: . | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/2231/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Use RocksDBStore and TableStore for SCM Metadata > > > Key: HDDS-360 > URL: https://issues.apache.org/jira/browse/HDDS-360 >
[jira] [Commented] (HDDS-972) Add support for configuring multiple OMs
[ https://issues.apache.org/jira/browse/HDDS-972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765673#comment-16765673 ] Bharat Viswanadham commented on HDDS-972: - Not sure if the above comments are addressed in the latest patch, as I don't see them. Pasting them again here. 1. In loginUser, we call InetSocketAddress socAddr = OmUtils.getOmAddress(conf), but getOMAddress is still not modified to handle the HA scenario. I think we should update it here. getOmRpcAddress is still just reading as below *final Optional host = getHostNameFromConfigKeys(conf,* *OZONE_OM_ADDRESS_KEY);* 2. One question on MiniOzoneHAClusterImpl: we are generating ports randomly, basePort = 1 + RANDOM.nextInt(1000) * 4; but we are not checking whether the ports are free or not. Otherwise, when we start MiniOzoneHAClusterImpl we would get an error during startup, right? > Add support for configuring multiple OMs > > > Key: HDDS-972 > URL: https://issues.apache.org/jira/browse/HDDS-972 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Attachments: HDDS-972.000.patch, HDDS-972.001.patch, > HDDS-972.002.patch, HDDS-972.003.patch, HDDS-972.004.patch, HDDS-972.005.patch > > > For OM HA, we would need to run multiple (at least 3) OM services so that we > can form a replicated Ratis ring of OMs. This Jira aims to add support for > configuring multiple OMs.
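The port-availability concern raised in the review above can be sketched as below. This is a hypothetical helper (`PortCheck`, `findFreeBasePort` are made-up names, and the port range is shifted into a non-privileged range), not MiniOzoneHAClusterImpl's actual code; it shows one common way to verify a block of ports before handing them to a test cluster.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.util.Random;

public class PortCheck {
    // Returns true if the port can currently be bound on this host.
    static boolean isPortFree(int port) {
        try (ServerSocket ignored = new ServerSocket(port)) {
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    // Picks a random base port (mirroring the basePort arithmetic quoted in
    // the review comment) and retries until the whole block of ports is
    // currently free. This narrows, but does not fully eliminate, the race
    // with another process grabbing a port between the check and the bind.
    static int findFreeBasePort(Random random, int portsNeeded) {
        while (true) {
            int base = 10000 + random.nextInt(1000) * 4;
            boolean allFree = true;
            for (int p = base; p < base + portsNeeded; p++) {
                if (!isPortFree(p)) {
                    allFree = false;
                    break;
                }
            }
            if (allFree) {
                return base;
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(findFreeBasePort(new Random(), 4));
    }
}
```

Binding to port 0 and letting the OS assign free ports is the race-free alternative, when the cluster under test can accept arbitrary port numbers.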
[jira] [Commented] (HDFS-14266) EC : Unable To Get Datanode Info for EC Blocks if One Block Is Not Available.
[ https://issues.apache.org/jira/browse/HDFS-14266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765667#comment-16765667 ] Harshakiran Reddy commented on HDFS-14266: -- Thanx [~ayushtkn] for the fix. Verified in my setup with the changes. It now works for me. (Gives the Dn info now instead of null) > EC : Unable To Get Datanode Info for EC Blocks if One Block Is Not Available. > - > > Key: HDFS-14266 > URL: https://issues.apache.org/jira/browse/HDFS-14266 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.1 >Reporter: Harshakiran Reddy >Assignee: Ayush Saxena >Priority: Major > Labels: EC > Attachments: HDFS-14266-01.patch, HDFS-14266-02.patch > > > If one block gets removed from the block group then the datanode information > for the block group shows null. > > {noformat} > Block Id: blk_-9223372036854775792 > Block belongs to: /ec/file1 > No. of Expected Replica: 2 > No. of live Replica: 2 > No. of excess Replica: 0 > No. of stale Replica: 0 > No. of decommissioned Replica: 0 > No. of decommissioning Replica: 0 > No. of corrupted Replica: 0 > null > Fsck on blockId 'blk_-9223372036854775792 > {noformat}
[jira] [Commented] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.
[ https://issues.apache.org/jira/browse/HDFS-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765648#comment-16765648 ] Hadoop QA commented on HDFS-13794: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 34s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} HDFS-12090 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 29s{color} | {color:green} HDFS-12090 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 24m 52s{color} | {color:green} HDFS-12090 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 50s{color} | {color:green} HDFS-12090 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 6s{color} | {color:green} HDFS-12090 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 43s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 16s{color} | {color:green} HDFS-12090 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} HDFS-12090 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 25m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 24s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 51s{color} | {color:orange} root: The patch generated 1 new + 461 unchanged - 0 fixed = 462 total (was 461) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 38s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 6s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 45s{color} | {color:green} hadoop-fs2img in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 50s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}256m 9s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeReconfiguration | | | hadoop.hdfs.server.datanode.TestDataNodeMetricsLogger | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HDFS-13794 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12958318/HDFS-13794-HDFS-12090.006.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux 328b4844d15b 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64
[jira] [Commented] (HDFS-14230) RBF: Throw RetriableException instead of IOException when no namenodes available
[ https://issues.apache.org/jira/browse/HDFS-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765640#comment-16765640 ] Fei Hui commented on HDFS-14230: Ping again [~elgoiri][~crh][~brahmareddy] > RBF: Throw RetriableException instead of IOException when no namenodes > available > > > Key: HDFS-14230 > URL: https://issues.apache.org/jira/browse/HDFS-14230 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.2.0, 3.1.1, 2.9.2, 3.0.3 >Reporter: Fei Hui >Assignee: Fei Hui >Priority: Major > Attachments: HDFS-14230-HDFS-13891.001.patch, > HDFS-14230-HDFS-13891.002.patch, HDFS-14230-HDFS-13891.003.patch, > HDFS-14230-HDFS-13891.004.patch, HDFS-14230-HDFS-13891.005.patch > > > Failover usually happens when upgrading namenodes. There may be no active > namenodes for some seconds, and accessing HDFS through the router fails at this > moment. This can make jobs fail or hang. Some Hive job logs are as > follows > {code:java} > 2019-01-03 16:12:08,337 Stage-1 map = 100%, reduce = 100%, Cumulative CPU > 133.33 sec > MapReduce Total cumulative CPU time: 2 minutes 13 seconds 330 msec > Ended Job = job_1542178952162_24411913 > Launching Job 4 out of 6 > Exception in thread "Thread-86" java.lang.RuntimeException: > org.apache.hadoop.ipc.RemoteException(java.io.IOException): No namenode > available under nameservice Cluster3 > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.shouldRetry(RouterRpcClient.java:328) > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:488) > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:495) > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:385) > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:760) > at > 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130) > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): > Operation category READ is not supported in state standby > at > org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87) > at > org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1804) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1338) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3925) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1014) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130) > {code} > Digging into the code, maybe we can throw StandbyException when no namenodes are > available. The client will fail after some retries
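The behavior proposed in the HDFS-14230 thread can be sketched as below. The class and method names here (`NoNamenodeRetry`, `checkNamenodeAvailable`) are made up for illustration, and the nested class merely stands in for Hadoop's real `org.apache.hadoop.ipc.RetriableException`; the point is that surfacing a retriable subclass of IOException lets the client's retry policy ride out the failover window instead of failing the job immediately.

```java
import java.io.IOException;

public class NoNamenodeRetry {
    // Stand-in for org.apache.hadoop.ipc.RetriableException; in Hadoop it
    // extends IOException and RPC retry policies treat it as transient.
    static class RetriableException extends IOException {
        RetriableException(String message) {
            super(message);
        }
    }

    // When no namenode in the nameservice is active, throw a retriable error
    // rather than a plain IOException, so callers retry across the failover.
    static void checkNamenodeAvailable(int activeNamenodes, String nameservice)
            throws IOException {
        if (activeNamenodes == 0) {
            throw new RetriableException(
                "No namenode available under nameservice " + nameservice);
        }
    }

    public static void main(String[] args) {
        try {
            checkNamenodeAvailable(0, "Cluster3");
        } catch (IOException e) {
            System.out.println(e.getClass().getSimpleName() + ": " + e.getMessage());
        }
    }
}
```

The distinction matters only because the client-side retry policy discriminates on exception type; a plain IOException is treated as fatal, while a retriable one is retried with backoff.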
[jira] [Assigned] (HDDS-1082) OutOfMemoryError while reading key
[ https://issues.apache.org/jira/browse/HDDS-1082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Supratim Deka reassigned HDDS-1082: --- Assignee: Supratim Deka > OutOfMemoryError while reading key > -- > > Key: HDDS-1082 > URL: https://issues.apache.org/jira/browse/HDDS-1082 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Nilotpal Nandi >Assignee: Supratim Deka >Priority: Major > > steps taken : > > # put key with size 100GB > # Tried to read back the key. > error thrown: > -- > {noformat} > java.lang.OutOfMemoryError: Java heap space > Dumping heap to /tmp/heapdump.bin ... > Heap dump file created [3883178021 bytes in 10.667 secs] > Exception in thread "main" java.lang.OutOfMemoryError: Java heap space > at > org.apache.ratis.thirdparty.com.google.protobuf.ByteString.toByteArray(ByteString.java:643) > at org.apache.hadoop.ozone.common.Checksum.verifyChecksum(Checksum.java:217) > at > org.apache.hadoop.hdds.scm.storage.BlockInputStream.readChunkFromContainer(BlockInputStream.java:227) > at > org.apache.hadoop.hdds.scm.storage.BlockInputStream.prepareRead(BlockInputStream.java:188) > at > org.apache.hadoop.hdds.scm.storage.BlockInputStream.read(BlockInputStream.java:130) > at > org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.read(KeyInputStream.java:232) > at > org.apache.hadoop.ozone.client.io.KeyInputStream.read(KeyInputStream.java:126) > at > org.apache.hadoop.ozone.client.io.OzoneInputStream.read(OzoneInputStream.java:49) > at java.io.InputStream.read(InputStream.java:101) > at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100) > at > org.apache.hadoop.ozone.web.ozShell.keys.GetKeyHandler.call(GetKeyHandler.java:98) > at > org.apache.hadoop.ozone.web.ozShell.keys.GetKeyHandler.call(GetKeyHandler.java:48) > at picocli.CommandLine.execute(CommandLine.java:919) > at picocli.CommandLine.access$700(CommandLine.java:104) > at picocli.CommandLine$RunLast.handle(CommandLine.java:1083) > at 
picocli.CommandLine$RunLast.handle(CommandLine.java:1051) > at > picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:959) > at picocli.CommandLine.parseWithHandlers(CommandLine.java:1242) > at picocli.CommandLine.parseWithHandler(CommandLine.java:1181) > at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:61) > at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:52) > at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:83){noformat}
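The stack trace in HDDS-1082 shows `ByteString.toByteArray` materializing a whole chunk's bytes during checksum verification on the read path. The sketch below is not the actual Ozone fix; it only illustrates the general bounded-buffer pattern (hypothetical `BoundedCopy` class) that keeps heap use constant regardless of key size.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class BoundedCopy {
    // Streams data through a fixed-size buffer, so memory use is independent
    // of the total number of bytes copied -- a 100 GB key needs no more heap
    // than a 1 KB key.
    static long copyBytes(InputStream in, OutputStream out, int bufferSize)
            throws IOException {
        byte[] buffer = new byte[bufferSize];
        long total = 0;
        int read;
        while ((read = in.read(buffer)) > 0) {
            out.write(buffer, 0, read);
            total += read;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[1 << 20]; // 1 MiB stands in for a large key
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copyBytes(new ByteArrayInputStream(data), sink, 4096);
        System.out.println(copied); // prints 1048576
    }
}
```

Checksum verification can follow the same shape: fold each buffer into an incremental checksum rather than converting the entire chunk to one byte array.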
[jira] [Commented] (HDDS-360) Use RocksDBStore and TableStore for SCM Metadata
[ https://issues.apache.org/jira/browse/HDDS-360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765641#comment-16765641 ] Anu Engineer commented on HDDS-360: --- Fixed another minor issue in a test; this will allow the testStorageContainerManager test to pass too. > Use RocksDBStore and TableStore for SCM Metadata > > > Key: HDDS-360 > URL: https://issues.apache.org/jira/browse/HDDS-360 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Anu Engineer >Priority: Major > Attachments: HDDS-360.001.patch, HDDS-360.002.patch, > HDDS-360.003.patch, HDDS-360.004.patch > >
[jira] [Updated] (HDDS-360) Use RocksDBStore and TableStore for SCM Metadata
[ https://issues.apache.org/jira/browse/HDDS-360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDDS-360: -- Attachment: HDDS-360.004.patch > Use RocksDBStore and TableStore for SCM Metadata > > > Key: HDDS-360 > URL: https://issues.apache.org/jira/browse/HDDS-360 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Anu Engineer >Priority: Major > Attachments: HDDS-360.001.patch, HDDS-360.002.patch, > HDDS-360.003.patch, HDDS-360.004.patch > >
[jira] [Commented] (HDDS-1012) Add Default CertificateClient implementation
[ https://issues.apache.org/jira/browse/HDDS-1012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765635#comment-16765635 ] Hudson commented on HDDS-1012: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15934 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15934/]) HDDS-1012. Add Default CertificateClient implementation. Contributed by (ajay: rev d48e61dd3603417073c83ec94e5939dc94d051a7) * (add) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/DNCertificateClient.java * (add) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/DefaultCertificateClient.java * (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/KeyCodec.java * (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java * (add) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/OMCertificateClient.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java * (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/exceptions/CertificateException.java * (add) hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/client/TestCertificateClientInit.java * (add) hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/client/TestDefaultCertificateClient.java * (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneSecurityUtil.java * (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/CertificateClient.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/CertificateClientTestImpl.java * (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/exception/SCMSecurityException.java > Add Default CertificateClient implementation > > > Key: HDDS-1012 > URL: 
https://issues.apache.org/jira/browse/HDDS-1012 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Labels: Blocker > Attachments: HDDS-1012.01.patch, HDDS-1012.02.patch, > HDDS-1012.03.patch, HDDS-1012.04.patch, HDDS-1012.05.patch, > HDDS-1012.06.patch, HDDS-1012.07.patch, HDDS-1012.08.patch, HDDS-1012.09.patch > > > Add Default CertificateClient implementation
[jira] [Commented] (HDDS-1087) Investigate failure of TestDefaultCertificateClient#testSignDataStream in jenkins run
[ https://issues.apache.org/jira/browse/HDDS-1087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765627#comment-16765627 ] Ajay Kumar commented on HDDS-1087: -- java.lang.AssertionError at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient.validateHash(TestDefaultCertificateClient.java:179) at org.apache.hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient.testSignDataStream(TestDefaultCertificateClient.java:166) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at 
org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > Investigate failure of TestDefaultCertificateClient#testSignDataStream in > jenkins run > - > > Key: HDDS-1087 > URL: https://issues.apache.org/jira/browse/HDDS-1087 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Labels: blocker, security > > Investigate failure of TestDefaultCertificateClient#testSignDataStream in > jenkins run. > https://builds.apache.org/job/PreCommit-HDDS-Build/2217/testReport/org.apache.hadoop.hdds.security.x509.certificate.client/TestDefaultCertificateClient/testSignDataStream/
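The failing assertion in validateHash compares a signature produced over a data stream against a verification of that signature. The core sign/verify round trip can be sketched with the JDK's java.security primitives alone; this is a minimal standalone sketch, not the Ozone CertificateClient API (class and variable names here are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignVerifySketch {
    public static void main(String[] args) throws Exception {
        // Generate a throwaway RSA key pair; the real client loads its keys from disk.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        byte[] data = "random data".getBytes(StandardCharsets.UTF_8);

        // Sign the data with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(data);
        byte[] signature = signer.sign();

        // Verify with the matching public key. An environment-dependent
        // mismatch at this step is the kind of failure the assertTrue in
        // validateHash would surface as a bare AssertionError.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(data);
        System.out.println("verified: " + verifier.verify(signature));
    }
}
```

Run in isolation this prints `verified: true`; the Jenkins failure suggests the environment (provider or key material handling) rather than the algorithm itself is at fault.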
[jira] [Updated] (HDDS-1012) Add Default CertificateClient implementation
[ https://issues.apache.org/jira/browse/HDDS-1012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDDS-1012: - Resolution: Fixed Status: Resolved (was: Patch Available) > Add Default CertificateClient implementation > > > Key: HDDS-1012 > URL: https://issues.apache.org/jira/browse/HDDS-1012 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Labels: Blocker > Attachments: HDDS-1012.01.patch, HDDS-1012.02.patch, > HDDS-1012.03.patch, HDDS-1012.04.patch, HDDS-1012.05.patch, > HDDS-1012.06.patch, HDDS-1012.07.patch, HDDS-1012.08.patch, HDDS-1012.09.patch > > > Add Default CertificateClient implementation
[jira] [Commented] (HDDS-1012) Add Default CertificateClient implementation
[ https://issues.apache.org/jira/browse/HDDS-1012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765629#comment-16765629 ] Ajay Kumar commented on HDDS-1012: -- [~xyao], [~anu] thanks for the reviews. Filed [HDDS-1087] to track the failure in TestDefaultCertificateClient#testSignDataStream. > Add Default CertificateClient implementation > > > Key: HDDS-1012 > URL: https://issues.apache.org/jira/browse/HDDS-1012 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Labels: Blocker > Attachments: HDDS-1012.01.patch, HDDS-1012.02.patch, > HDDS-1012.03.patch, HDDS-1012.04.patch, HDDS-1012.05.patch, > HDDS-1012.06.patch, HDDS-1012.07.patch, HDDS-1012.08.patch, HDDS-1012.09.patch > > > Add Default CertificateClient implementation
[jira] [Updated] (HDDS-1087) Investigate failure of TestDefaultCertificateClient#testSignDataStream in jenkins run
[ https://issues.apache.org/jira/browse/HDDS-1087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDDS-1087: - Issue Type: Sub-task (was: Bug) Parent: HDDS-4 > Investigate failure of TestDefaultCertificateClient#testSignDataStream in > jenkins run > - > > Key: HDDS-1087 > URL: https://issues.apache.org/jira/browse/HDDS-1087 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Labels: blocker, security > > Investigate failure of TestDefaultCertificateClient#testSignDataStream in > jenkins run. > https://builds.apache.org/job/PreCommit-HDDS-Build/2217/testReport/org.apache.hadoop.hdds.security.x509.certificate.client/TestDefaultCertificateClient/testSignDataStream/
[jira] [Commented] (HDDS-972) Add support for configuring multiple OMs
[ https://issues.apache.org/jira/browse/HDDS-972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765625#comment-16765625 ] Hadoop QA commented on HDDS-972: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 8 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 7s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 42s{color} | {color:orange} root: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 22s{color} | {color:red} hadoop-ozone in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 21s{color} | {color:red} hadoop-hdds in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 64m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer | | | hadoop.ozone.om.TestOzoneManager | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDDS-972 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12958331/HDDS-972.005.patch | | Optional Tests | asflicense javac javadoc unit findbugs checkstyle | | uname | Linux b36ea6306e04 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh | | git revision | trunk / 7536488 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | checkstyle | https://builds.apache.org/job/PreCommit-HDDS-Build/2230/artifact/out/diff-checkstyle-root.txt | | unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2230/artifact/out/patch-unit-hadoop-ozone.txt | | unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2230/artifact/out/patch-unit-hadoop-hdds.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/2230/testReport/ | | Max. process+thread count | 1216 (vs. ulimit of 1) | | modules | C: hadoop-hdds/common hadoop-ozone/common hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/2230/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add support for configuring multiple OMs > > > Key: HDDS-972 > URL: https://issues.apache.org/jira/browse/HDDS-972 > Project: Hadoop Distributed Data Store >
[jira] [Created] (HDDS-1087) Investigate failure of TestDefaultCertificateClient#testSignDataStream in jenkins run
Ajay Kumar created HDDS-1087: Summary: Investigate failure of TestDefaultCertificateClient#testSignDataStream in jenkins run Key: HDDS-1087 URL: https://issues.apache.org/jira/browse/HDDS-1087 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Ajay Kumar Assignee: Ajay Kumar Investigate failure of TestDefaultCertificateClient#testSignDataStream in jenkins run. https://builds.apache.org/job/PreCommit-HDDS-Build/2217/testReport/org.apache.hadoop.hdds.security.x509.certificate.client/TestDefaultCertificateClient/testSignDataStream/
[jira] [Commented] (HDDS-972) Add support for configuring multiple OMs
[ https://issues.apache.org/jira/browse/HDDS-972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765610#comment-16765610 ] Hadoop QA commented on HDDS-972: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 8 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 57s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 43s{color} | {color:orange} root: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 16s{color} | {color:red} hadoop-ozone in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 45s{color} | {color:green} hadoop-hdds in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 55m 18s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.om.TestOzoneManagerConfiguration | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDDS-972 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12957962/HDDS-972.004.patch | | Optional Tests | asflicense javac javadoc unit findbugs checkstyle | | uname | Linux aaf74f304cf2 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh | | git revision | trunk / 7536488 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | checkstyle | https://builds.apache.org/job/PreCommit-HDDS-Build/2229/artifact/out/diff-checkstyle-root.txt | | unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2229/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/2229/testReport/ | | Max. process+thread count | 1233 (vs. ulimit of 1) | | modules | C: hadoop-hdds/common hadoop-ozone/common hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/2229/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add support for configuring multiple OMs > > > Key: HDDS-972 > URL: https://issues.apache.org/jira/browse/HDDS-972 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Attachments:
[jira] [Created] (HDDS-1086) Remove RaftClient from OM
Hanisha Koneru created HDDS-1086: Summary: Remove RaftClient from OM Key: HDDS-1086 URL: https://issues.apache.org/jira/browse/HDDS-1086 Project: Hadoop Distributed Data Store Issue Type: Sub-task Components: HA, OM Reporter: Hanisha Koneru Assignee: Hanisha Koneru Currently we run a RaftClient in the OM which takes the incoming client requests and submits them to the OM's Ratis server. This hop can be avoided if the OM submits the incoming client request directly to its Ratis server.
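The hop being removed can be sketched with two stand-in types. This is an illustrative model of the call path only; the interfaces and class names below are hypothetical, not the actual Apache Ratis API:

```java
// Illustrative stand-ins for the two call paths, not the Ratis API.
interface RequestSubmitter {
    String submit(String request);
}

// Models the OM's Ratis server applying a request through its state machine.
class RatisServerStub implements RequestSubmitter {
    public String submit(String request) {
        return "applied: " + request;
    }
}

// Models the extra hop: a client wrapper that only forwards to the server.
class RaftClientStub implements RequestSubmitter {
    private final RequestSubmitter server;
    RaftClientStub(RequestSubmitter server) { this.server = server; }
    public String submit(String request) {
        return server.submit(request); // pure forwarding, no added value
    }
}

public class DirectSubmitSketch {
    public static void main(String[] args) {
        RequestSubmitter ratisServer = new RatisServerStub();

        // Before: OM -> RaftClient -> Ratis server
        String viaClient = new RaftClientStub(ratisServer).submit("createVolume");

        // After: OM -> Ratis server directly; same result, one fewer hop
        String direct = ratisServer.submit("createVolume");

        System.out.println(viaClient.equals(direct));
    }
}
```

Since the wrapper adds nothing on the local path, submitting directly to the server yields an identical result (the program prints `true`), which is the motivation for dropping the intermediate client.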
[jira] [Updated] (HDDS-972) Add support for configuring multiple OMs
[ https://issues.apache.org/jira/browse/HDDS-972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDDS-972: Attachment: HDDS-972.005.patch > Add support for configuring multiple OMs > > > Key: HDDS-972 > URL: https://issues.apache.org/jira/browse/HDDS-972 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Attachments: HDDS-972.000.patch, HDDS-972.001.patch, > HDDS-972.002.patch, HDDS-972.003.patch, HDDS-972.004.patch, HDDS-972.005.patch > > > For OM HA, we would need to run multiple (at least 3) OM services so that we > can form a replicated Ratis ring of OMs. This Jira aims to add support for > configuring multiple OMs.
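The configuration shape this work leads toward in ozone-site.xml looks roughly like the following. The property names reflect the eventual Ozone OM HA layout (a service id grouping three node ids, plus a per-node address) and may not match the exact keys in the attached patch revision:

```xml
<!-- Sketch of an OM HA layout: three OMs forming one replicated Ratis ring.
     Key names are assumptions based on the eventual OM HA configuration. -->
<property>
  <name>ozone.om.service.ids</name>
  <value>omServiceId1</value>
</property>
<property>
  <name>ozone.om.nodes.omServiceId1</name>
  <value>om1,om2,om3</value>
</property>
<property>
  <name>ozone.om.address.omServiceId1.om1</name>
  <value>om1.example.com:9862</value>
</property>
<!-- ...and likewise ozone.om.address.omServiceId1.om2 / .om3 -->
```

Clients would then address the OM group by the service id rather than a single host, which is what makes failover across the three OMs possible.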
[jira] [Updated] (HDDS-972) Add support for configuring multiple OMs
[ https://issues.apache.org/jira/browse/HDDS-972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDDS-972: Attachment: (was: HDDS-972.005.patch) > Add support for configuring multiple OMs > > > Key: HDDS-972 > URL: https://issues.apache.org/jira/browse/HDDS-972 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Attachments: HDDS-972.000.patch, HDDS-972.001.patch, > HDDS-972.002.patch, HDDS-972.003.patch, HDDS-972.004.patch > > > For OM HA, we would need to run multiple (at least 3) OM services so that we > can form a replicated Ratis ring of OMs. This Jira aims to add support for > configuring multiple OMs.
[jira] [Commented] (HDFS-14262) [SBN read] Unclear Log.WARN message in GlobalStateIdContext
[ https://issues.apache.org/jira/browse/HDFS-14262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765590#comment-16765590 ] Hadoop QA commented on HDFS-14262: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 7s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 1s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 42s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}166m 32s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics | | | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport | | | hadoop.hdfs.qjournal.server.TestJournalNodeSync | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDFS-14262 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12958314/HDFS-14262.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 1f4505635562 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 1ce2e91 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/26189/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/26189/testReport/ | | Max. process+thread count | 2664
[jira] [Commented] (HDDS-972) Add support for configuring multiple OMs
[ https://issues.apache.org/jira/browse/HDDS-972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765588#comment-16765588 ] Hanisha Koneru commented on HDDS-972: - Thanks [~linyiqun] for the review. I have added a unit test to cover multiple OM serviceIds in the configuration. > Add support for configuring multiple OMs > > > Key: HDDS-972 > URL: https://issues.apache.org/jira/browse/HDDS-972 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Major > Attachments: HDDS-972.000.patch, HDDS-972.001.patch, > HDDS-972.002.patch, HDDS-972.003.patch, HDDS-972.004.patch, HDDS-972.005.patch > > > For OM HA, we would need to run multiple (at least 3) OM services so that we > can form a replicated Ratis ring of OMs. This Jira aims to add support for > configuring multiple OMs.
[jira] [Commented] (HDDS-360) Use RocksDBStore and TableStore for SCM Metadata
[ https://issues.apache.org/jira/browse/HDDS-360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765580#comment-16765580 ] Hadoop QA commented on HDDS-360: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 11 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} root: The patch generated 0 new + 0 unchanged - 4 fixed = 0 total (was 4) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s{color} | {color:green} root generated 0 new + 19 unchanged - 1 fixed = 19 total (was 20) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 31m 16s{color} | {color:red} hadoop-ozone in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 51s{color} | {color:red} hadoop-hdds in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 50m 38s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient | | | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis | | | hadoop.ozone.TestStorageContainerManager | | | hadoop.ozone.client.rpc.TestOzoneRpcClient | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDDS-360 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12958326/HDDS-360.003.patch | | Optional Tests | asflicense javac javadoc unit findbugs checkstyle | | uname | Linux 828db8dcfb05 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh | | git revision | trunk / 7536488 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2228/artifact/out/patch-unit-hadoop-ozone.txt | | unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2228/artifact/out/patch-unit-hadoop-hdds.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/2228/testReport/ | | Max. process+thread count | 1311 (vs. ulimit of 1) | | modules | C: hadoop-hdds/common hadoop-hdds/server-scm hadoop-ozone/common hadoop-ozone/integration-test hadoop-ozone/tools U: . | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/2228/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Use RocksDBStore and TableStore for SCM Metadata > > > Key: HDDS-360 > URL:
[jira] [Updated] (HDDS-360) Use RocksDBStore and TableStore for SCM Metadata
[ https://issues.apache.org/jira/browse/HDDS-360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDDS-360: -- Attachment: HDDS-360.003.patch > Use RocksDBStore and TableStore for SCM Metadata > > > Key: HDDS-360 > URL: https://issues.apache.org/jira/browse/HDDS-360 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Anu Engineer >Priority: Major > Attachments: HDDS-360.001.patch, HDDS-360.002.patch, > HDDS-360.003.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-360) Use RocksDBStore and TableStore for SCM Metadata
[ https://issues.apache.org/jira/browse/HDDS-360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765564#comment-16765564 ] Anu Engineer commented on HDDS-360: --- Fixed the test failure in TestSecureOzoneCluster. The other test failures are not related to this patch. > Use RocksDBStore and TableStore for SCM Metadata > > > Key: HDDS-360 > URL: https://issues.apache.org/jira/browse/HDDS-360 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Anu Engineer >Priority: Major > Attachments: HDDS-360.001.patch, HDDS-360.002.patch, > HDDS-360.003.patch > >
[jira] [Commented] (HDDS-1012) Add Default CertificateClient implementation
[ https://issues.apache.org/jira/browse/HDDS-1012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765556#comment-16765556 ] Xiaoyu Yao commented on HDDS-1012: -- Thanks [~ajayydv] for the update. +1 for the v9 patch. Let's file a separate ticket to fix TestDefaultCertificateClient, which fails due to an RSA signature verification issue on certain Linux distributions (Ubuntu). > Add Default CertificateClient implementation > > > Key: HDDS-1012 > URL: https://issues.apache.org/jira/browse/HDDS-1012 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Labels: Blocker > Attachments: HDDS-1012.01.patch, HDDS-1012.02.patch, > HDDS-1012.03.patch, HDDS-1012.04.patch, HDDS-1012.05.patch, > HDDS-1012.06.patch, HDDS-1012.07.patch, HDDS-1012.08.patch, HDDS-1012.09.patch > > > Add Default CertificateClient implementation
[jira] [Commented] (HDDS-1084) Ozone FSCK server
[ https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765532#comment-16765532 ] Siddharth Wagle commented on HDDS-1084: --- [~hanishakoneru] suggested Ozone Recon, I definitely like the sound of that. > Ozone FSCK server > - > > Key: HDDS-1084 > URL: https://issues.apache.org/jira/browse/HDDS-1084 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: fsck >Affects Versions: 0.4.0 >Reporter: Siddharth Wagle >Assignee: Siddharth Wagle >Priority: Major > > Fsck Server at a high level will maintain a global view of Ozone that is not > available from SCM or OM. Things like how many volumes exist; and how many > buckets exist per volume; which volume has maximum buckets; which are buckets > that have not been accessed for a year, which are the corrupt blocks, which > are blocks on data nodes which are not used; and answer similar queries. > I will work on a design document and attach it in a few days.
[jira] [Created] (HDDS-1085) Create an OM API to serve snapshots to FSCK server
Siddharth Wagle created HDDS-1085: - Summary: Create an OM API to serve snapshots to FSCK server Key: HDDS-1085 URL: https://issues.apache.org/jira/browse/HDDS-1085 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Siddharth Wagle Assignee: Aravindan Vijayan We need to add an API to OM so that we can serve snapshots from the OM server. - The snapshot should be streamed to the fsck server with the ability to throttle network utilization (like TransferFsImage) - Use the RocksDB getUpdatesSince() to correlate sequence numbers and decide whether the snapshot can be used to bootstrap
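The bootstrap decision described above can be sketched as follows. This is a hypothetical illustration (all names invented, not the OM/fsck-server API): RocksDB's getUpdatesSince(seq) can replay the write-ahead log from a sequence number, but only while that sequence number is still covered by the leader's WAL; otherwise the follower must re-download a full snapshot, throttled like TransferFsImage.

```java
// Hypothetical sketch of correlating sequence numbers to decide between
// incremental catch-up (WAL replay) and a full snapshot re-download.
public class SnapshotBootstrap {

    /** True if the follower can catch up incrementally via the WAL. */
    public static boolean canCatchUpFromWal(long followerSeq,
                                            long leaderOldestWalSeq,
                                            long leaderLatestSeq) {
        // Every update after followerSeq must still be present in the
        // leader's WAL, and the follower must not be ahead of the leader.
        return followerSeq + 1 >= leaderOldestWalSeq
            && followerSeq <= leaderLatestSeq;
    }

    public static void main(String[] args) {
        System.out.println(canCatchUpFromWal(90, 50, 120));  // incremental: true
        System.out.println(canCatchUpFromWal(10, 50, 120));  // full snapshot: false
    }
}
```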
[jira] [Commented] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.
[ https://issues.apache.org/jira/browse/HDFS-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765524#comment-16765524 ] Virajith Jalaparti commented on HDFS-13794: --- [^HDFS-13794-HDFS-12090.006.patch] fixes the checkstyle and modifies {{TestInMemoryLevelDBAliasMapClient#writeRead}} to {{TestInMemoryLevelDBAliasMapClient#writeReadRemove}}. > [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method. > -- > > Key: HDFS-13794 > URL: https://issues.apache.org/jira/browse/HDFS-13794 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-13794-HDFS-12090.001.patch, > HDFS-13794-HDFS-12090.002.patch, HDFS-13794-HDFS-12090.003.patch, > HDFS-13794-HDFS-12090.004.patch, HDFS-13794-HDFS-12090.005.patch, > HDFS-13794-HDFS-12090.006.patch > > > When updating the BlockAliasMap we may need to deal with deleted blocks. > Otherwise the BlockAliasMap will grow indefinitely(!). > Therefore, the BlockAliasMap.Writer needs a method for removing blocks.
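The capability this patch adds can be sketched with a minimal in-memory writer. The class and method names below are illustrative, not the actual BlockAliasMap.Writer signatures; the point is simply that, alongside put/read, the writer needs a remove operation so aliases for deleted blocks do not accumulate forever.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Minimal in-memory sketch of an alias-map writer that can also remove
// entries (illustrative names, not the real BlockAliasMap.Writer API).
public class InMemoryAliasWriter {
    private final Map<Long, String> aliases = new HashMap<>();

    /** Record the provided-storage location for a block. */
    public void put(long blockId, String providedLocation) {
        aliases.put(blockId, providedLocation);
    }

    /** The new operation: drop the alias when its block is deleted. */
    public boolean remove(long blockId) {
        return aliases.remove(blockId) != null;
    }

    public Optional<String> read(long blockId) {
        return Optional.ofNullable(aliases.get(blockId));
    }

    public int size() {
        return aliases.size();
    }
}
```

Without remove, every deleted block would leave a stale alias behind, which is exactly the unbounded growth the description warns about.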
[jira] [Updated] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.
[ https://issues.apache.org/jira/browse/HDFS-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-13794: -- Status: Open (was: Patch Available) > [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method. > -- > > Key: HDFS-13794 > URL: https://issues.apache.org/jira/browse/HDFS-13794 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-13794-HDFS-12090.001.patch, > HDFS-13794-HDFS-12090.002.patch, HDFS-13794-HDFS-12090.003.patch, > HDFS-13794-HDFS-12090.004.patch, HDFS-13794-HDFS-12090.005.patch, > HDFS-13794-HDFS-12090.006.patch > > > When updating the BlockAliasMap we may need to deal with deleted blocks. > Otherwise the BlockAliasMap will grow indefinitely(!). > Therefore, the BlockAliasMap.Writer needs a method for removing blocks.
[jira] [Updated] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.
[ https://issues.apache.org/jira/browse/HDFS-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-13794: -- Status: Patch Available (was: Open) > [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method. > -- > > Key: HDFS-13794 > URL: https://issues.apache.org/jira/browse/HDFS-13794 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-13794-HDFS-12090.001.patch, > HDFS-13794-HDFS-12090.002.patch, HDFS-13794-HDFS-12090.003.patch, > HDFS-13794-HDFS-12090.004.patch, HDFS-13794-HDFS-12090.005.patch, > HDFS-13794-HDFS-12090.006.patch > > > When updating the BlockAliasMap we may need to deal with deleted blocks. > Otherwise the BlockAliasMap will grow indefinitely(!). > Therefore, the BlockAliasMap.Writer needs a method for removing blocks.
[jira] [Updated] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.
[ https://issues.apache.org/jira/browse/HDFS-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-13794: -- Attachment: HDFS-13794-HDFS-12090.006.patch > [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method. > -- > > Key: HDFS-13794 > URL: https://issues.apache.org/jira/browse/HDFS-13794 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-13794-HDFS-12090.001.patch, > HDFS-13794-HDFS-12090.002.patch, HDFS-13794-HDFS-12090.003.patch, > HDFS-13794-HDFS-12090.004.patch, HDFS-13794-HDFS-12090.005.patch, > HDFS-13794-HDFS-12090.006.patch > > > When updating the BlockAliasMap we may need to deal with deleted blocks. > Otherwise the BlockAliasMap will grow indefinitely(!). > Therefore, the BlockAliasMap.Writer needs a method for removing blocks.
[jira] [Commented] (HDFS-13358) RBF: Support for Delegation Token (RPC)
[ https://issues.apache.org/jira/browse/HDFS-13358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765513#comment-16765513 ] Hadoop QA commented on HDFS-13358: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} HDFS-13891 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 1s{color} | {color:green} HDFS-13891 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} HDFS-13891 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} HDFS-13891 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} HDFS-13891 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 53s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 50s{color} | {color:green} HDFS-13891 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} HDFS-13891 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 17s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 58s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 79m 48s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDFS-13358 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12958308/HDFS-13358-HDFS-13891.008.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux b7fb25045afe 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-13891 / ecd90a6 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/26188/testReport/ | | Max. process+thread count | 1021 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/26188/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This
[jira] [Updated] (HDFS-14241) Provide feedback on successful renameSnapshot and deleteSnapshot
[ https://issues.apache.org/jira/browse/HDFS-14241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HDFS-14241: -- Description: {code:bash|title=Shell (Before)} # createSnapshot has feedback $ hdfs dfs -createSnapshot /dst/ snap1 Created snapshot /dst/.snapshot/snap1 # renameSnapshot doesn't have feedback if it succeeded $ hdfs dfs -renameSnapshot snap1 snap2 /dst/ (No output) # deleteSnapshot doesn't have feedback if it succeeded $ hdfs dfs -deleteSnapshot /dst/ snap2 (No output) # rm has feedback $ hdfs dfs -rm -skipTrash /dst/2.txt Deleted /dst/2.txt {code} {code:bash|title=Shell (After)} ... $ hdfs dfs -renameSnapshot snap1 snap2 /dst/ Renamed snapshot snap1 to snap2 under hdfs://server:port/dst $ hdfs dfs -deleteSnapshot /dst/ snap2 Deleted snapshot snap2 under hdfs://server:port/dst {code} was: {code:bash|title=Shell (Before)} # createSnapshot has feedback $ hdfs dfs -createSnapshot /dst/ snap1 Created snapshot /dst/.snapshot/snap1 # renameSnapshot doesn't have feedback if it succeeded $ hdfs dfs -renameSnapshot snap1 snap2 /dst/ (No output) # deleteSnapshot doesn't have feedback if it succeeded $ hdfs dfs -deleteSnapshot /dst/ snap2 (No output) # rm has feedback $ hdfs dfs -rm -skipTrash /dst/2.txt Deleted /dst/2.txt {code} {code:bash|title=Shell (After)} ... 
$ hdfs dfs -deleteSnapshot /dst/ snap2 Deleted snapshot snap2 under hdfs://server:port/dst {code} > Provide feedback on successful renameSnapshot and deleteSnapshot > > > Key: HDFS-14241 > URL: https://issues.apache.org/jira/browse/HDFS-14241 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, shell >Affects Versions: 3.2.0, 3.1.2 >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Minor > Attachments: HDFS-14241.001.patch, HDFS-14241.002.patch > > > {code:bash|title=Shell (Before)} > # createSnapshot has feedback > $ hdfs dfs -createSnapshot /dst/ snap1 > Created snapshot /dst/.snapshot/snap1 > # renameSnapshot doesn't have feedback if it succeeded > $ hdfs dfs -renameSnapshot snap1 snap2 /dst/ > (No output) > # deleteSnapshot doesn't have feedback if it succeeded > $ hdfs dfs -deleteSnapshot /dst/ snap2 > (No output) > # rm has feedback > $ hdfs dfs -rm -skipTrash /dst/2.txt > Deleted /dst/2.txt > {code} > {code:bash|title=Shell (After)} > ... > $ hdfs dfs -renameSnapshot snap1 snap2 /dst/ > Renamed snapshot snap1 to snap2 under hdfs://server:port/dst > $ hdfs dfs -deleteSnapshot /dst/ snap2 > Deleted snapshot snap2 under hdfs://server:port/dst > {code}
[jira] [Comment Edited] (HDFS-14162) Balancer should work with ObserverNode
[ https://issues.apache.org/jira/browse/HDFS-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765497#comment-16765497 ] Konstantin Shvachko edited comment on HDFS-14162 at 2/11/19 11:04 PM: -- Hey [~xkrogen], this looks very good. I was trying similar approach in my hack. I additionally was trying to combine PB proxies in the RpcEngine. Yours is using two PB objects, but it is probably a good thing. We can combine those in a follow up jira. For now your patch sets a pattern for combining RPC protocols, exactly what is needed. My only suggestion is to use {{BalancerProtocols}} (plural) in order to emphasize this is a combination of other protocols. was (Author: shv): Hey [~xkrogen], this looks very good. I was trying similar approach in my hack. I additionally was trying to combine PB proxies in the RpcEngine. Yours is using two PB objects, but it is probably a good thing. We can combine those in a follow up patch. For now your patch sets a pattern for combining rpc protocols, exactly what is needed. My suggestion is to use {{BalancerProtocols}} (plural) in order to emphasize this is a combination of other protocols. > Balancer should work with ObserverNode > -- > > Key: HDFS-14162 > URL: https://issues.apache.org/jira/browse/HDFS-14162 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Konstantin Shvachko >Assignee: Erik Krogen >Priority: Major > Attachments: HDFS-14162-HDFS-12943.wip0.patch, HDFS-14162.000.patch, > HDFS-14162.001.patch, testBalancerWithObserver-3.patch, > testBalancerWithObserver.patch > > > Balancer provides a substantial RPC load on NameNode. It would be good to > divert Balancer RPCs {{getBlocks()}}, etc. to ObserverNode. The main problem > is that Balancer uses {{NamenodeProtocol}}, while ORPP currently supports > only {{ClientProtocol}}. 
[jira] [Commented] (HDFS-14162) Balancer should work with ObserverNode
[ https://issues.apache.org/jira/browse/HDFS-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765497#comment-16765497 ] Konstantin Shvachko commented on HDFS-14162: Hey [~xkrogen], this looks very good. I was trying similar approach in my hack. I additionally was trying to combine PB proxies in the RpcEngine. Yours is using two PB objects, but it is probably a good thing. We can combine those in a follow up patch. For now your patch sets a pattern for combining rpc protocols, exactly what is needed. My suggestion is to use {{BalancerProtocols}} (plural) in order to emphasize this is a combination of other protocols. > Balancer should work with ObserverNode > -- > > Key: HDFS-14162 > URL: https://issues.apache.org/jira/browse/HDFS-14162 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Konstantin Shvachko >Assignee: Erik Krogen >Priority: Major > Attachments: HDFS-14162-HDFS-12943.wip0.patch, HDFS-14162.000.patch, > HDFS-14162.001.patch, testBalancerWithObserver-3.patch, > testBalancerWithObserver.patch > > > Balancer provides a substantial RPC load on NameNode. It would be good to > divert Balancer RPCs {{getBlocks()}}, etc. to ObserverNode. The main problem > is that Balancer uses {{NamenodeProtocol}}, while ORPP currently supports > only {{ClientProtocol}}.
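The pattern discussed in this comment can be sketched as a single interface that unions the two RPC protocols the Balancer needs, so one proxy (and the observer-read failover logic behind it) serves both. The *Stub interfaces below are stand-ins for Hadoop's real ClientProtocol and NamenodeProtocol, not the actual APIs.

```java
// Stand-in for ClientProtocol (the protocol ORPP already supports).
interface ClientProtocolStub {
    long getServerStateId();
}

// Stand-in for NamenodeProtocol (what the Balancer uses for getBlocks()).
interface NamenodeProtocolStub {
    String[] getBlocks(String datanode, long size);
}

// Plural name, as suggested: it is a combination of other protocols.
interface BalancerProtocols extends ClientProtocolStub, NamenodeProtocolStub {}

public class CombinedProxyDemo implements BalancerProtocols {
    @Override
    public long getServerStateId() {
        return 42L;  // placeholder state id
    }

    @Override
    public String[] getBlocks(String datanode, long size) {
        return new String[] {"blk_1001", "blk_1002"};  // placeholder block list
    }

    public static void main(String[] args) {
        // One object satisfies both protocols through a single reference,
        // so the Balancer needs only one proxy to talk to the ObserverNode.
        BalancerProtocols proxy = new CombinedProxyDemo();
        System.out.println(proxy.getServerStateId());
        System.out.println(proxy.getBlocks("dn1", 1024).length);
    }
}
```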
[jira] [Updated] (HDFS-14262) [SBN read] Unclear Log.WARN message in GlobalStateIdContext
[ https://issues.apache.org/jira/browse/HDFS-14262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shweta updated HDFS-14262: -- Attachment: HDFS-14262.001.patch > [SBN read] Unclear Log.WARN message in GlobalStateIdContext > --- > > Key: HDFS-14262 > URL: https://issues.apache.org/jira/browse/HDFS-14262 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Shweta >Assignee: Shweta >Priority: Major > Attachments: HDFS-14262.001.patch > > > The check clientStateId > serverStateId during active HA status might never > occur and the log message is pretty unclear, should it throw an Exception > instead? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14241) Provide feedback on successful renameSnapshot and deleteSnapshot
[ https://issues.apache.org/jira/browse/HDFS-14241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HDFS-14241: -- Description: {code:bash|title=Shell (Before)} # createSnapshot has feedback $ hdfs dfs -createSnapshot /dst/ snap1 Created snapshot /dst/.snapshot/snap1 # renameSnapshot doesn't have feedback if it succeeded $ hdfs dfs -renameSnapshot snap1 snap2 /dst/ (No output) # deleteSnapshot doesn't have feedback if it succeeded $ hdfs dfs -deleteSnapshot /dst/ snap2 (No output) # rm has feedback $ hdfs dfs -rm -skipTrash /dst/2.txt Deleted /dst/2.txt {code} {code:bash|title=Shell (After)} ... $ hdfs dfs -deleteSnapshot /dst/ snap2 Deleted snapshot snap2 under hdfs://server:port/dst {code} was: {code:bash|title=Shell (Before)} $ hdfs dfs -createSnapshot /dst/ snap2 Created snapshot /dst/.snapshot/snap2 $ hdfs dfs -deleteSnapshot /dst/ snap2 (No output on success) $ hdfs dfs -rm -skipTrash /dst/2.txt Deleted /dst/2.txt {code} {code:bash|title=Shell (After)} ... $ hdfs dfs -deleteSnapshot /dst/ snap2 Deleted snapshot snap2 under hdfs://server:port/dst {code} > Provide feedback on successful renameSnapshot and deleteSnapshot > > > Key: HDFS-14241 > URL: https://issues.apache.org/jira/browse/HDFS-14241 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, shell >Affects Versions: 3.2.0, 3.1.2 >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Minor > Attachments: HDFS-14241.001.patch, HDFS-14241.002.patch > > > {code:bash|title=Shell (Before)} > # createSnapshot has feedback > $ hdfs dfs -createSnapshot /dst/ snap1 > Created snapshot /dst/.snapshot/snap1 > # renameSnapshot doesn't have feedback if it succeeded > $ hdfs dfs -renameSnapshot snap1 snap2 /dst/ > (No output) > # deleteSnapshot doesn't have feedback if it succeeded > $ hdfs dfs -deleteSnapshot /dst/ snap2 > (No output) > # rm has feedback > $ hdfs dfs -rm -skipTrash /dst/2.txt > Deleted /dst/2.txt > {code} > {code:bash|title=Shell (After)} > ... 
> $ hdfs dfs -deleteSnapshot /dst/ snap2 > Deleted snapshot snap2 under hdfs://server:port/dst > {code}
[jira] [Updated] (HDFS-14241) Provide feedback on successful renameSnapshot and deleteSnapshot
[ https://issues.apache.org/jira/browse/HDFS-14241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HDFS-14241: -- Summary: Provide feedback on successful renameSnapshot and deleteSnapshot (was: Provide feedback when dfs renameSnapshot and deleteSnapshot succeeds) > Provide feedback on successful renameSnapshot and deleteSnapshot > > > Key: HDFS-14241 > URL: https://issues.apache.org/jira/browse/HDFS-14241 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, shell >Affects Versions: 3.2.0, 3.1.2 >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Minor > Attachments: HDFS-14241.001.patch, HDFS-14241.002.patch > > > {code:bash|title=Shell (Before)} > $ hdfs dfs -createSnapshot /dst/ snap2 > Created snapshot /dst/.snapshot/snap2 > $ hdfs dfs -deleteSnapshot /dst/ snap2 > (No output on success) > $ hdfs dfs -rm -skipTrash /dst/2.txt > Deleted /dst/2.txt > {code} > {code:bash|title=Shell (After)} > ... > $ hdfs dfs -deleteSnapshot /dst/ snap2 > Deleted snapshot snap2 under hdfs://server:port/dst > {code}
[jira] [Commented] (HDDS-936) Need a tool to map containers to ozone objects
[ https://issues.apache.org/jira/browse/HDDS-936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765495#comment-16765495 ] Elek, Marton commented on HDDS-936: --- I think the fsck pom dependency should be removed from the patch:
{code:xml}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>ozone-fsck</artifactId>
  <version>0.4.0-SNAPSHOT</version>
  <scope>test</scope>
</dependency>
{code}
The fsck project is removed from the latest version (correct me if I am wrong). But agreed, we can commit the patch (if it builds) and improve it incrementally. > Need a tool to map containers to ozone objects > -- > > Key: HDDS-936 > URL: https://issues.apache.org/jira/browse/HDDS-936 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Manager >Reporter: Jitendra Nath Pandey >Assignee: sarun singla >Priority: Major > Attachments: HDDS-936.00.patch, HDDS-936.01.patch, HDDS-936.02.patch, > HDDS-936.03.patch, HDDS-936.04.patch, HDDS-936.05.patch, HDDS-936.06.patch > > > Ozone should have a tool to get list of objects that a container contains.
[jira] [Commented] (HDFS-14262) [SBN read] Unclear Log.WARN message in GlobalStateIdContext
[ https://issues.apache.org/jira/browse/HDFS-14262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765492#comment-16765492 ] Shweta commented on HDFS-14262: --- Thanks for the discussions [~jojochuang] and [~xkrogen]. I have uploaded a patch to reflect the latest suggestion by Erik. Please review. > [SBN read] Unclear Log.WARN message in GlobalStateIdContext > --- > > Key: HDFS-14262 > URL: https://issues.apache.org/jira/browse/HDFS-14262 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Shweta >Assignee: Shweta >Priority: Major > Attachments: HDFS-14262.001.patch > > > The check clientStateId > serverStateId during active HA status might never > occur and the log message is pretty unclear, should it throw an Exception > instead?
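The alternative raised in the issue description can be sketched as follows. This is an illustration only, not the actual GlobalStateIdContext code: on an ACTIVE NameNode, a client-supplied state id greater than the server's should not normally happen, so instead of an unclear WARN the server could reject the call with a retriable error.

```java
// Sketch of replacing the unclear Log.WARN with an explicit failure when
// the client claims a state id ahead of the active server (names invented).
public class StateIdCheck {

    static class RetriableStateIdException extends RuntimeException {
        RetriableStateIdException(String msg) {
            super(msg);
        }
    }

    public static void checkClientStateId(long clientStateId,
                                          long serverStateId,
                                          boolean isActive) {
        if (isActive && clientStateId > serverStateId) {
            throw new RetriableStateIdException(
                "Client state id " + clientStateId
                + " is ahead of active server state id " + serverStateId
                + "; the request may have been routed to the wrong namespace");
        }
        // Normal case: client is at or behind the server; proceed.
    }
}
```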
[jira] [Updated] (HDFS-14262) [SBN read] Unclear Log.WARN message in GlobalStateIdContext
[ https://issues.apache.org/jira/browse/HDFS-14262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shweta updated HDFS-14262: -- Attachment: HDFS-14262.001.patch Status: Patch Available (was: Open) > [SBN read] Unclear Log.WARN message in GlobalStateIdContext > --- > > Key: HDFS-14262 > URL: https://issues.apache.org/jira/browse/HDFS-14262 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Shweta >Assignee: Shweta >Priority: Major > Attachments: HDFS-14262.001.patch > > > The check clientStateId > serverStateId during active HA status might never > occur and the log message is pretty unclear, should it throw an Exception > instead?
[jira] [Updated] (HDFS-14262) [SBN read] Unclear Log.WARN message in GlobalStateIdContext
[ https://issues.apache.org/jira/browse/HDFS-14262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shweta updated HDFS-14262: -- Attachment: (was: HDFS-14262.001.patch) > [SBN read] Unclear Log.WARN message in GlobalStateIdContext > --- > > Key: HDFS-14262 > URL: https://issues.apache.org/jira/browse/HDFS-14262 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Shweta >Assignee: Shweta >Priority: Major > Attachments: HDFS-14262.001.patch > > > The check clientStateId > serverStateId during active HA status might never > occur and the log message is pretty unclear, should it throw an Exception > instead?
[jira] [Commented] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class
[ https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765461#comment-16765461 ] Íñigo Goiri commented on HDFS-14258: OK, if we change this behavior, I think we need to test it. Agreed on 30 seconds extra time being too much. Having a configuration parameter would be the right thing but I agree with you that another knob to ignore is bad. What about having a setter and just tune it for the test? Minor comment: I think we can get rid of the {{Math.abs()}} for delta in {{setMaxConcurrentMovers()}} as we are filtering the bad cases. > Introduce Java Concurrent Package To DataXceiverServer Class > > > Key: HDFS-14258 > URL: https://issues.apache.org/jira/browse/HDFS-14258 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, > HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, HDFS-14258.6.patch > > > * Use Java concurrent package to replace current facilities in > {{DataXceiverServer}}. > * A little bit of extra clean up -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
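The setter-plus-signed-delta idea in the comment above can be sketched with java.util.concurrent. This is a simplified stand-in for the patch (class and method names invented, apart from setMaxConcurrentMovers): a Semaphore-backed limit on concurrent movers, resized at runtime, where filtering invalid values first makes Math.abs() on the delta unnecessary.

```java
import java.util.concurrent.Semaphore;

// Sketch of a runtime-adjustable limit on concurrent block movers,
// using a Semaphore from the Java concurrent package.
public class MoverLimiter {
    private final Semaphore permits;
    private int maxConcurrentMovers;

    public MoverLimiter(int initial) {
        this.permits = new Semaphore(initial);
        this.maxConcurrentMovers = initial;
    }

    /** Returns false (and changes nothing) for invalid values. */
    public synchronized boolean setMaxConcurrentMovers(int newMax) {
        if (newMax <= 0) {
            return false;  // bad case filtered here, so delta keeps its sign
        }
        int delta = newMax - maxConcurrentMovers;  // signed; no Math.abs needed
        if (delta > 0) {
            permits.release(delta);
        } else if (delta < 0) {
            // Shrinking may wait until enough movers finish; fine for a sketch.
            permits.acquireUninterruptibly(-delta);
        }
        maxConcurrentMovers = newMax;
        return true;
    }

    public int availablePermits() {
        return permits.availablePermits();
    }
}
```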
[jira] [Commented] (HDFS-13617) Allow wrapping NN QOP into token in encrypted message
[ https://issues.apache.org/jira/browse/HDFS-13617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765428#comment-16765428 ] Konstantin Shvachko commented on HDFS-13617: +1 looks good. > Allow wrapping NN QOP into token in encrypted message > - > > Key: HDFS-13617 > URL: https://issues.apache.org/jira/browse/HDFS-13617 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Major > Attachments: HDFS-13617.001.patch, HDFS-13617.002.patch, > HDFS-13617.003.patch, HDFS-13617.004.patch, HDFS-13617.005.patch, > HDFS-13617.006.patch, HDFS-13617.007.patch, HDFS-13617.008.patch, > HDFS-13617.009.patch > > > This Jira allows NN to configurably wrap the QOP it has established with the > client into the token message sent back to the client. The QOP is sent back > in encrypted message, using BlockAccessToken encryption key as the key. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
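The mechanism in this issue's description can be illustrated with a small javax.crypto sketch. Everything here is an assumption-laden stand-in: the real patch uses the BlockAccessToken encryption key and its own wire format, and AES/ECB is chosen below only to keep the demo short, not as a recommendation.

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Sketch: the NN encrypts the negotiated QOP with a key the DataNodes also
// hold, embeds the ciphertext in the token, and the receiving side decrypts
// it to learn which QOP to enforce (illustrative, not the patch's code).
public class QopWrapDemo {

    // NN side: encrypt the established QOP before embedding it in the token.
    static byte[] wrapQop(String qop, SecretKey key) throws Exception {
        Cipher enc = Cipher.getInstance("AES/ECB/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, key);
        return enc.doFinal(qop.getBytes(StandardCharsets.UTF_8));
    }

    // Receiving side: decrypt the field and enforce the same QOP.
    static String unwrapQop(byte[] wrapped, SecretKey key) throws Exception {
        Cipher dec = Cipher.getInstance("AES/ECB/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, key);
        return new String(dec.doFinal(wrapped), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        System.out.println(unwrapQop(wrapQop("auth-conf", key), key));
    }
}
```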
[jira] [Commented] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class
[ https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765426#comment-16765426 ] Hadoop QA commented on HDFS-14258: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 6s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 478 unchanged - 4 fixed = 478 total (was 482) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 48s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 4 new + 168 unchanged - 7 fixed = 172 total (was 175) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 18s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 9s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}155m 20s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.qjournal.server.TestJournalNodeSync | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDFS-14258 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12958291/HDFS-14258.6.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 83e742463dda 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 5c10630 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/26187/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit |
[jira] [Commented] (HDDS-1080) Ozonefs Isolated class loader should support FsStorageStatistics
[ https://issues.apache.org/jira/browse/HDDS-1080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765422#comment-16765422 ] Anu Engineer commented on HDDS-1080: +1, looks good to me. Thanks for getting this fixed. > Ozonefs Isolated class loader should support FsStorageStatistics > > > Key: HDDS-1080 > URL: https://issues.apache.org/jira/browse/HDDS-1080 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: Ozone Filesystem >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Attachments: HDDS-1080.001.patch > > > HDDS-1033 introduced the storage statistics for ozonefs. Unfortunately the > isolated classloader (HDDS-922) doesn't work any more after this change. > To explain the problem, let's use the specific expression > _classname[classloader]_ for a class (named by classname) which is loaded by > the classloader. > We have two classloaders: the _main_ which is the standard classloader of the > application and the _isolated_ classloader which is created by the > OzoneClientAdapterFactory.java. > By default java classloaders delegate the loading to the parent class loader, > and they load all the classes from the parent first (if possible). The > isolated class loader does the opposite: it loads all the classes from a specific > location of the jar files. With this approach we can use hadoop3.2+ozone > classes together with older hadoop versions. > But back to the problem: > OzoneFilesystem[main] is loaded by the application. In this class an > OzoneFSStorageStatistics[main] is created and with the help of > OzoneClientAdapterFactory[main] a new OzoneClientAdapterImpl[isolated!!] is > instantiated which implements the OzoneClientAdapter[main] and will do all > the main work[isolated]. > OzoneClientAdapterImpl[isolated] has a new constructor which requires > (String[system], String[system], OzoneFSStorageStatistics[isolated]). 
> And this is the problem: it requires OzoneFSStorageStatistics[isolated] but > we have an OzoneFSStorageStatistics[main]. > The fix is very straightforward. In the FilteredClassLoader.java we have a list > of the classes which should be shared by the two classloaders. For these > classes the isolated classloader will delegate the loading to the parent > ([main]) classloader and we will have one (and only one) > OzoneFSStorageStatistics[main] everywhere.
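The fix described above can be sketched as a classloader that consults a share list before doing its isolated loading. This is an illustrative sketch of the technique, not the actual FilteredClassLoader code; the class name and the share-list entry are assumptions.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.util.Set;

// Loads classes from its own jars first (isolated), except for classes on the
// shared list, which are delegated to the parent so both classloaders see the
// same Class object.
class SharedListClassLoader extends URLClassLoader {
    // Example entry; the real share list lives in FilteredClassLoader.
    private static final Set<String> SHARED =
        Set.of("org.apache.hadoop.fs.ozone.OzoneFSStorageStatistics");

    SharedListClassLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        // JDK classes and shared classes come from the parent: one copy only.
        if (name.startsWith("java.") || SHARED.contains(name)) {
            return getParent().loadClass(name);
        }
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                c = findClass(name); // isolated: resolve from our own jars
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }
}
```

With `OzoneFSStorageStatistics` on the share list, the isolated `OzoneClientAdapterImpl` constructor and the main-side caller agree on a single `OzoneFSStorageStatistics[main]` class.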
[jira] [Commented] (HDFS-13358) RBF: Support for Delegation Token (RPC)
[ https://issues.apache.org/jira/browse/HDFS-13358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765410#comment-16765410 ] Íñigo Goiri commented on HDFS-13358: Just a minor comment: instead of assertTrue for ==, we should just use assertEquals(). Other than that, this is good. [~brahmareddy] can you take a look regarding your comments? > RBF: Support for Delegation Token (RPC) > --- > > Key: HDFS-13358 > URL: https://issues.apache.org/jira/browse/HDFS-13358 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Sherwood Zheng >Assignee: CR Hota >Priority: Major > Attachments: HDFS-13358-HDFS-13891.001.patch, > HDFS-13358-HDFS-13891.002.patch, HDFS-13358-HDFS-13891.003.patch, > HDFS-13358-HDFS-13891.004.patch, HDFS-13358-HDFS-13891.005.patch, > HDFS-13358-HDFS-13891.006.patch, HDFS-13358-HDFS-13891.007.patch, > HDFS-13358-HDFS-13891.008.patch, RBF_ Delegation token design.pdf > > > HDFS Router should support issuing / managing HDFS delegation tokens.
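The review note above is about failure diagnostics: assertEquals reports both operands when it fails, while assertTrue(a == b) can only say the condition was false. A small stand-alone illustration, using plain-Java helpers that mimic the JUnit failure messages rather than JUnit itself:

```java
// Stand-ins for JUnit's assertTrue/assertEquals failure messages, to show why
// assertEquals is preferable when comparing two values.
final class AssertMessageDemo {
    static String assertTrueFailure() {
        // All assertTrue(a == b) can report: the operands are gone.
        return "expected:<true> but was:<false>";
    }

    static String assertEqualsFailure(long expected, long actual) {
        // assertEquals keeps both operands in the message.
        return "expected:<" + expected + "> but was:<" + actual + ">";
    }
}
```

When a CI run fails, `expected:<5> but was:<7>` localizes the bug immediately; `expected:<true> but was:<false>` forces a rerun under a debugger.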
[jira] [Updated] (HDDS-1074) Remove dead variable from KeyOutputStream#addKeyLocationInfo
[ https://issues.apache.org/jira/browse/HDDS-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDDS-1074: - Target Version/s: 0.4.0 > Remove dead variable from KeyOutputStream#addKeyLocationInfo > > > Key: HDDS-1074 > URL: https://issues.apache.org/jira/browse/HDDS-1074 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Siddharth Wagle >Priority: Trivial > Attachments: HDDS-1074.01.patch > > > The following can be removed. > {code} > XceiverClientSpi xceiverClient = > xceiverClientManager.acquireClient(containerWithPipeline.getPipeline()); > {code}
[jira] [Updated] (HDFS-13358) RBF: Support for Delegation Token (RPC)
[ https://issues.apache.org/jira/browse/HDFS-13358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] CR Hota updated HDFS-13358: --- Attachment: HDFS-13358-HDFS-13891.008.patch
[jira] [Commented] (HDDS-360) Use RocksDBStore and TableStore for SCM Metadata
[ https://issues.apache.org/jira/browse/HDDS-360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765397#comment-16765397 ] Hadoop QA commented on HDDS-360: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 10 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 0s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 41s{color} | {color:orange} root: The patch generated 1 new + 3 unchanged - 1 fixed = 4 total (was 4) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s{color} | {color:green} root generated 0 new + 19 unchanged - 1 fixed = 19 total (was 20) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 2s{color} | {color:red} hadoop-ozone in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 5m 5s{color} | {color:red} hadoop-hdds in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 54m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.TestStorageContainerManager | | | hadoop.ozone.client.rpc.TestOzoneRpcClient | | | hadoop.ozone.TestSecureOzoneCluster | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDDS-360 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12958305/HDDS-360.002.patch | | Optional Tests | asflicense javac javadoc unit findbugs checkstyle | | uname | Linux 1bc265eb7df0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh | | git revision | trunk / ca4e46a | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | checkstyle | https://builds.apache.org/job/PreCommit-HDDS-Build/2227/artifact/out/diff-checkstyle-root.txt | | unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2227/artifact/out/patch-unit-hadoop-ozone.txt | | unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2227/artifact/out/patch-unit-hadoop-hdds.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/2227/testReport/ | | Max. process+thread count | 1227 (vs. ulimit of 1) | | modules | C: hadoop-hdds/common hadoop-hdds/server-scm hadoop-ozone/common hadoop-ozone/integration-test hadoop-ozone/tools U: . | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/2227/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Use RocksDBStore and TableStore for SCM Metadata > > > Key: HDDS-360 >
[jira] [Commented] (HDDS-1074) Remove dead variable from KeyOutputStream#addKeyLocationInfo
[ https://issues.apache.org/jira/browse/HDDS-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765390#comment-16765390 ] Siddharth Wagle commented on HDDS-1074: --- [~xyao] Failure seems unrelated to the 1-line change.
[jira] [Commented] (HDDS-360) Use RocksDBStore and TableStore for SCM Metadata
[ https://issues.apache.org/jira/browse/HDDS-360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765376#comment-16765376 ] Anu Engineer commented on HDDS-360: --- [~ljain] Thanks for the comments. I have fixed the test failure and have addressed all review comments. Please see below for details.
bq. DeletedBlockLogImpl:80 - We can remove the throws clause.
Fixed.
bq. DeletedBlockLogImpl#getFailedTransactions - We need to iterate through the entries in the table. Currently, we are just checking the first entry in the table.
Very good catch, thanks. Fixed.
bq. DeletedBlockLogImpl#getNumOfValidTransactions - same as point 2.
Thanks, fixed.
bq. DeletedBlockLogImpl#addTransactions - We can make it as a batch operation.
Thanks, fixed.
bq. SCMMetadataStore#getNextTXID can be renamed to getNextDeleteBlockTxnID?
Fixed.
> Use RocksDBStore and TableStore for SCM Metadata > > > Key: HDDS-360 > URL: https://issues.apache.org/jira/browse/HDDS-360 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Anu Engineer >Priority: Major > Attachments: HDDS-360.001.patch, HDDS-360.002.patch > >
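Two of the review points above are functional: iterate every table entry rather than only the first, and apply addTransactions as one batch. They can be sketched with an in-memory stand-in for the SCM table. This is illustrative only; the real code sits on the hadoop-hdds TableStore API, and the convention that a negative retry count marks a failed transaction is an assumption of the sketch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// In-memory stand-in for the deleted-block log table: txId -> retry count.
// A negative retry count marks a failed (given-up) transaction here.
final class DeletedBlockLogSketch {
    private final NavigableMap<Long, Integer> table = new TreeMap<>();

    /** Walks every entry, not just the first one. */
    int getNumOfValidTransactions() {
        int valid = 0;
        for (int retryCount : table.values()) {
            if (retryCount >= 0) {
                valid++;
            }
        }
        return valid;
    }

    /** Likewise iterates the whole table collecting failed transaction ids. */
    List<Long> getFailedTransactions() {
        List<Long> failed = new ArrayList<>();
        for (Map.Entry<Long, Integer> e : table.entrySet()) {
            if (e.getValue() < 0) {
                failed.add(e.getKey());
            }
        }
        return failed;
    }

    /** Stand-in for a single batched write instead of one put per entry. */
    void addTransactions(Map<Long, Integer> txns) {
        table.putAll(txns);
    }
}
```

On a real RocksDB-backed table the batch variant matters for atomicity and write amplification, not just speed: either the whole group of transactions lands or none of it does.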
[jira] [Updated] (HDDS-360) Use RocksDBStore and TableStore for SCM Metadata
[ https://issues.apache.org/jira/browse/HDDS-360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDDS-360: -- Attachment: HDDS-360.002.patch
[jira] [Commented] (HDDS-1060) Token: Add api to get OM certificate from SCM
[ https://issues.apache.org/jira/browse/HDDS-1060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765370#comment-16765370 ] Xiaoyu Yao commented on HDDS-1060: -- Thanks [~ajayydv] for the patch. It looks good to me overall. Here are a few comments:
CertificateServer.java
Line 61: Need to clarify at the interface level whether this returns null or throws when the certificate is not found. Based on the code I found later in SCMSecurityProtocolServer.java Line 162, it seems to throw IOE if the certificate is not found.
Line 64: the comments need to be updated. The certSerialId is not the certificate for this CA.
StorageContainerManager.java
Line 227: can you add more comments on the usage of this flag and what to expect to work without a SCM login?
TestStorageContainerManager.java
Line 460: can we put this in try {} finally {}?
> Token: Add api to get OM certificate from SCM > - > > Key: HDDS-1060 > URL: https://issues.apache.org/jira/browse/HDDS-1060 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Labels: Blocker, Security > Fix For: 0.4.0 > > Attachments: HDDS-1060.00.patch, HDDS-1060.01.patch > > > Datanodes/OM need OM certificate to validate block tokens and delegation > tokens. > Add API for: > 1. getCertificate(String certSerialId): To get certificate from SCM based on > certificate serial id. > 2. getCACertificate(): To get CA certificate.
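The contract question in the review (return null vs. throw for a missing certificate) is best pinned down in the interface javadoc. A hedged sketch follows, with illustrative names rather than the committed HDDS API, plus a toy in-memory implementation showing the throwing behavior the reviewer observed in SCMSecurityProtocolServer:

```java
import java.io.IOException;
import java.security.cert.X509Certificate;
import java.util.HashMap;
import java.util.Map;

// Illustrative contract: a missing certificate always raises IOException
// and the method never returns null.
interface CertificateLookup {
    /**
     * @param certSerialId serial id of the wanted certificate
     * @return the matching certificate, never null
     * @throws IOException if no certificate with that serial id is known
     */
    X509Certificate getCertificate(String certSerialId) throws IOException;
}

// Toy in-memory implementation of the contract above.
final class InMemoryCertificateLookup implements CertificateLookup {
    private final Map<String, X509Certificate> certs = new HashMap<>();

    void put(String serialId, X509Certificate cert) {
        certs.put(serialId, cert);
    }

    @Override
    public X509Certificate getCertificate(String certSerialId) throws IOException {
        X509Certificate cert = certs.get(certSerialId);
        if (cert == null) {
            throw new IOException("Certificate not found: " + certSerialId);
        }
        return cert;
    }
}
```

Spelling this out in the interface keeps callers (OM and datanode token validators) from having to null-check and exception-handle both.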
[jira] [Commented] (HDDS-1075) Fix CertificateUtil#parseRSAPublicKey charsetName
[ https://issues.apache.org/jira/browse/HDDS-1075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765362#comment-16765362 ] Hudson commented on HDDS-1075: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15930 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15930/]) HDDS-1075. Fix CertificateUtil#parseRSAPublicKey charsetName. (xyao: rev ca4e46a05eb20106d69db481d6ac1988696a9f01) * (edit) hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/CertificateUtil.java > Fix CertificateUtil#parseRSAPublicKey charsetName > - > > Key: HDDS-1075 > URL: https://issues.apache.org/jira/browse/HDDS-1075 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Siddharth Wagle >Priority: Minor > Fix For: 0.4.0 > > Attachments: HDDS-1075.01.patch > > > We should use "UTF-8" instead of "UTF8".
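For context on the one-line fix: "UTF8" is a legacy alias that most JDKs still resolve, but "UTF-8" is the canonical charset name, and StandardCharsets.UTF_8 sidesteps both the name lookup and the checked UnsupportedEncodingException. A small illustration, not the CertificateUtil code itself:

```java
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

final class CharsetNameDemo {
    // Canonical name: portable, but forces a checked exception on callers.
    static byte[] byName(String s) throws UnsupportedEncodingException {
        return s.getBytes("UTF-8");
    }

    // Preferred: the constant needs no lookup and cannot be misspelled.
    static byte[] byConstant(String s) {
        return s.getBytes(StandardCharsets.UTF_8);
    }

    // Both spellings produce identical bytes for any input string.
    static boolean sameBytes(String s) {
        try {
            return Arrays.equals(byName(s), byConstant(s));
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // cannot happen for UTF-8
        }
    }
}
```

The next step beyond the patch would be switching such call sites to the StandardCharsets constant, which removes the string name entirely.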
[jira] [Commented] (HDDS-1084) Ozone FSCK server
[ https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765360#comment-16765360 ] Arpit Agarwal commented on HDDS-1084: - Right, we cannot use Ganesha itself. However he has 108 names so there might be some options. :) However _Ozone Registry_ sounds good to me unless there is another proposal on the table. > Ozone FSCK server > - > > Key: HDDS-1084 > URL: https://issues.apache.org/jira/browse/HDDS-1084 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: fsck >Affects Versions: 0.4.0 >Reporter: Siddharth Wagle >Assignee: Siddharth Wagle >Priority: Major > > Fsck Server at a high level will maintain a global view of Ozone that is not > available from SCM or OM. Things like how many volumes exist; and how many > buckets exist per volume; which volume has maximum buckets; which are buckets > that have not been accessed for a year, which are the corrupt blocks, which > are blocks on data nodes which are not used; and answer similar queries. > I will work on a design document and attach it in a few days.
[jira] [Commented] (HDDS-1084) Ozone FSCK server
[ https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765355#comment-16765355 ] Anu Engineer commented on HDDS-1084: Ganesha is the name of the famous NFS server stack, NFS Ganesha.
[jira] [Updated] (HDDS-1075) Fix CertificateUtil#parseRSAPublicKey charsetName
[ https://issues.apache.org/jira/browse/HDDS-1075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDDS-1075: - Resolution: Fixed Fix Version/s: 0.4.0 Status: Resolved (was: Patch Available) Thanks [~swagle] for the contribution. I've committed the patch to trunk.
[jira] [Commented] (HDDS-1084) Ozone FSCK server
[ https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765346#comment-16765346 ] Arpit Agarwal commented on HDDS-1084: - Yeah the name is a placeholder. I like Ozone Registry. Another suggestion floating in the ether was Ganesha (the great scribe). :)
[jira] [Commented] (HDDS-1075) Fix CertificateUtil#parseRSAPublicKey charsetName
[ https://issues.apache.org/jira/browse/HDDS-1075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765341#comment-16765341 ] Xiaoyu Yao commented on HDDS-1075: -- +1, I will commit it shortly.
[jira] [Commented] (HDDS-1084) Ozone FSCK server
[ https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765340#comment-16765340 ] Anu Engineer commented on HDDS-1084: I am fine with any other name.
[jira] [Comment Edited] (HDDS-1084) Ozone FSCK server
[ https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765337#comment-16765337 ] Anu Engineer edited comment on HDDS-1084 at 2/11/19 7:55 PM: - [~anu] - In one of my earlier discussions, I had proposed the name 'ozone registry' and [~arpitagarwal] seemed to have liked it :D was (Author: dineshchitlangia): [~anu] - In one of my earlier discussions, I had proposed the name 'ozone registry' and [~arpaga] seemed to have liked it :D
[jira] [Commented] (HDDS-1084) Ozone FSCK server
[ https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765337#comment-16765337 ] Dinesh Chitlangia commented on HDDS-1084: - [~anu] - In one of my earlier discussions, I had proposed the name 'ozone registry' and [~arpaga] seemed to have liked it :D
[jira] [Updated] (HDDS-1084) Ozone FSCK server
[ https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDDS-1084: --- Affects Version/s: (was: 0.2.1) 0.4.0 > Ozone FSCK server > - > > Key: HDDS-1084 > URL: https://issues.apache.org/jira/browse/HDDS-1084 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: Tools >Affects Versions: 0.4.0 >Reporter: Siddharth Wagle >Assignee: Siddharth Wagle >Priority: Major > > Fsck Server at a high level will maintain a global view of Ozone that is not > available from SCM or OM. Things like how many volumes exist; and how many > buckets exist per volume; which volume has maximum buckets; which are buckets > that have not been accessed for a year, which are the corrupt blocks, which > are blocks on data nodes which are not used; and answer similar queries. > I will work on a design document and attach it in a few days.
[jira] [Commented] (HDDS-1084) Ozone FSCK server
[ https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765330#comment-16765330 ] Anu Engineer commented on HDDS-1084: We should probably name this something better. > Ozone FSCK server > - > > Key: HDDS-1084 > URL: https://issues.apache.org/jira/browse/HDDS-1084 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: fsck >Affects Versions: 0.4.0 >Reporter: Siddharth Wagle >Assignee: Siddharth Wagle >Priority: Major > > Fsck Server at a high level will maintain a global view of Ozone that is not > available from SCM or OM. Things like how many volumes exist; and how many > buckets exist per volume; which volume has maximum buckets; which are buckets > that have not been accessed for a year, which are the corrupt blocks, which > are blocks on data nodes which are not used; and answer similar queries. > I will work on a design document and attach it in a few days.
[jira] [Updated] (HDDS-1084) Ozone FSCK server
[ https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDDS-1084: --- Component/s: Ozone Manager > Ozone FSCK server > - > > Key: HDDS-1084 > URL: https://issues.apache.org/jira/browse/HDDS-1084 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: Ozone Manager, Tools >Affects Versions: 0.4.0 >Reporter: Siddharth Wagle >Assignee: Siddharth Wagle >Priority: Major > > Fsck Server at a high level will maintain a global view of Ozone that is not > available from SCM or OM. Things like how many volumes exist; and how many > buckets exist per volume; which volume has maximum buckets; which are buckets > that have not been accessed for a year, which are the corrupt blocks, which > are blocks on data nodes which are not used; and answer similar queries. > I will work on a design document and attach it in a few days.
[jira] [Updated] (HDDS-1084) Ozone FSCK server
[ https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDDS-1084: --- Component/s: (was: Tools) (was: Ozone Manager) fsck > Ozone FSCK server > - > > Key: HDDS-1084 > URL: https://issues.apache.org/jira/browse/HDDS-1084 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: fsck >Affects Versions: 0.4.0 >Reporter: Siddharth Wagle >Assignee: Siddharth Wagle >Priority: Major > > Fsck Server at a high level will maintain a global view of Ozone that is not > available from SCM or OM. Things like how many volumes exist; and how many > buckets exist per volume; which volume has maximum buckets; which are buckets > that have not been accessed for a year, which are the corrupt blocks, which > are blocks on data nodes which are not used; and answer similar queries. > I will work on a design document and attach it in a few days.
[jira] [Updated] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class
[ https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HDFS-14258: --- Status: Patch Available (was: Open) [~elgoiri] Putting up a new patch with the requested changes. Setting the wait time to a large number or zero still changes the behavior. Right now, if the reconfiguration happens, it is "fire and forget." With the use of a {{Semaphore}} it's "all or nothing"; it's just a matter of how long to wait for "all" to occur. Setting it to a large number blocks until it completes (maybe this is the direction to go, but then there's always the risk of a hang if some number of the block mover threads are stuck; an unlikely, but possible scenario). Setting it to zero means try, but if it's not able to immediately reduce the pool of workers, it fails. Also a possible solution. Let me know. > Introduce Java Concurrent Package To DataXceiverServer Class > > > Key: HDFS-14258 > URL: https://issues.apache.org/jira/browse/HDFS-14258 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, > HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, HDFS-14258.6.patch > > > * Use Java concurrent package to replace current facilities in > {{DataXceiverServer}}. > * A little bit of extra clean up
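The trade-off described above (a large timeout blocks until every worker slot is reclaimed; a zero timeout fails unless the shrink can happen immediately) can be sketched with {{java.util.concurrent.Semaphore}}. This is a hypothetical, simplified pool, not the actual DataXceiverServer code from the patch; the class and method names are illustrative only.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Hypothetical, simplified worker pool -- NOT the actual DataXceiverServer
// code from HDFS-14258; names here are illustrative only.
class WorkerPool {
  private final Semaphore slots;
  private volatile int maxWorkers;

  WorkerPool(int maxWorkers) {
    this.maxWorkers = maxWorkers;
    this.slots = new Semaphore(maxWorkers);
  }

  /**
   * Shrink the pool by {@code delta} slots. Acquiring all {@code delta}
   * permits in a single call makes the operation "all or nothing": either
   * every requested slot is reclaimed within the timeout, or none are.
   */
  boolean tryShrink(int delta, long timeoutMs) {
    try {
      // A zero timeout means "succeed only if the slots are free right now";
      // a large timeout blocks until busy workers release their permits.
      if (slots.tryAcquire(delta, timeoutMs, TimeUnit.MILLISECONDS)) {
        maxWorkers -= delta; // permits stay acquired: pool is permanently smaller
        return true;
      }
      return false; // caller learns the shrink failed -- no "fire and forget"
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      return false;
    }
  }

  int capacity() {
    return maxWorkers;
  }
}
```

Either policy is expressible with the same call; only the timeout differs, which is why the comment frames it as "how long to wait for 'all' to occur".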
[jira] [Updated] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class
[ https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HDFS-14258: --- Attachment: HDFS-14258.6.patch > Introduce Java Concurrent Package To DataXceiverServer Class > > > Key: HDFS-14258 > URL: https://issues.apache.org/jira/browse/HDFS-14258 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, > HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, HDFS-14258.6.patch > > > * Use Java concurrent package to replace current facilities in > {{DataXceiverServer}}. > * A little bit of extra clean up
[jira] [Updated] (HDFS-14258) Introduce Java Concurrent Package To DataXceiverServer Class
[ https://issues.apache.org/jira/browse/HDFS-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HDFS-14258: --- Status: Open (was: Patch Available) > Introduce Java Concurrent Package To DataXceiverServer Class > > > Key: HDFS-14258 > URL: https://issues.apache.org/jira/browse/HDFS-14258 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HDFS-14258.1.patch, HDFS-14258.2.patch, > HDFS-14258.3.patch, HDFS-14258.4.patch, HDFS-14258.5.patch, HDFS-14258.6.patch > > > * Use Java concurrent package to replace current facilities in > {{DataXceiverServer}}. > * A little bit of extra clean up
[jira] [Created] (HDDS-1084) Ozone FSCK server
Siddharth Wagle created HDDS-1084: - Summary: Ozone FSCK server Key: HDDS-1084 URL: https://issues.apache.org/jira/browse/HDDS-1084 Project: Hadoop Distributed Data Store Issue Type: New Feature Components: Tools Affects Versions: 0.2.1 Reporter: Siddharth Wagle Assignee: Siddharth Wagle Fsck Server at a high level will maintain a global view of Ozone that is not available from SCM or OM. Things like how many volumes exist; and how many buckets exist per volume; which volume has maximum buckets; which are buckets that have not been accessed for a year, which are the corrupt blocks, which are blocks on data nodes which are not used; and answer similar queries. I will work on a design document and attach it in a few days.
[jira] [Updated] (HDFS-14262) [SBN read] Unclear Log.WARN message in GlobalStateIdContext
[ https://issues.apache.org/jira/browse/HDFS-14262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-14262: --- Summary: [SBN read] Unclear Log.WARN message in GlobalStateIdContext (was: Unclear Log.WARN message in GlobalStateIdContext) > [SBN read] Unclear Log.WARN message in GlobalStateIdContext > --- > > Key: HDFS-14262 > URL: https://issues.apache.org/jira/browse/HDFS-14262 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Shweta >Assignee: Shweta >Priority: Major > > The check clientStateId > serverStateId during active HA status might never > occur and the log message is pretty unclear, should it throw an Exception > instead?
[jira] [Commented] (HDDS-1074) Remove dead variable from KeyOutputStream#addKeyLocationInfo
[ https://issues.apache.org/jira/browse/HDDS-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765322#comment-16765322 ] Hadoop QA commented on HDDS-1074: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 52s{color} | {color:red} hadoop-ozone in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 18s{color} | {color:green} hadoop-hdds in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 51m 54s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient | | | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis | | | hadoop.ozone.client.rpc.TestOzoneRpcClient | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDDS-1074 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12958284/HDDS-1074.01.patch | | Optional Tests | asflicense javac javadoc unit findbugs checkstyle | | uname | Linux 3989a6f2e013 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh | | git revision | trunk / 0ceb1b7 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2226/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/2226/testReport/ | | Max. process+thread count | 1478 (vs. ulimit of 1) | | modules | C: hadoop-ozone/client U: hadoop-ozone/client | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/2226/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Remove dead variable from KeyOutputStream#addKeyLocationInfo > > > Key: HDDS-1074 > URL: https://issues.apache.org/jira/browse/HDDS-1074 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Siddharth Wagle >
[jira] [Commented] (HDDS-1075) Fix CertificateUtil#parseRSAPublicKey charsetName
[ https://issues.apache.org/jira/browse/HDDS-1075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765298#comment-16765298 ] Hadoop QA commented on HDDS-1075: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 32s{color} | {color:red} hadoop-ozone in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 5m 3s{color} | {color:red} hadoop-hdds in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 47m 48s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient | | | hadoop.ozone.client.rpc.TestOzoneRpcClient | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDDS-1075 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12958283/HDDS-1075.01.patch | | Optional Tests | asflicense javac javadoc unit findbugs checkstyle | | uname | Linux 2f6e6e380d46 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh | | git revision | trunk / 0ceb1b7 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2225/artifact/out/patch-unit-hadoop-ozone.txt | | unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2225/artifact/out/patch-unit-hadoop-hdds.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/2225/testReport/ | | Max. process+thread count | 1228 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-auth U: hadoop-common-project/hadoop-auth | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/2225/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Fix CertificateUtil#parseRSAPublicKey charsetName > - > > Key: HDDS-1075 > URL: https://issues.apache.org/jira/browse/HDDS-1075 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Xiaoyu Yao >
[jira] [Comment Edited] (HDFS-14263) Remove unnecessary block file exists check from FsDatasetImpl#getBlockInputStream()
[ https://issues.apache.org/jira/browse/HDFS-14263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765310#comment-16765310 ] Virajith Jalaparti edited comment on HDFS-14263 at 2/11/19 7:17 PM: Hi [~surendrasingh], thanks for posting this. Re [^HDFS-14263.001.patch] , we can skip the check for {{ProvidedReplica}}. It can just be the null check. Ideally, there shouldn't be anything special we do for {{ProvidedReplica}}. +1 once this is fixed. was (Author: virajith): Hi [~surendrasingh], thanks for posting this. Re [^HDFS-14263.001.patch] , we can skip the check for {{ProvidedReplica}}. it can just be the null check. Ideally, there shouldn't be anything special we do for {{ProvidedReplica}}. > Remove unnecessary block file exists check from > FsDatasetImpl#getBlockInputStream() > --- > > Key: HDFS-14263 > URL: https://issues.apache.org/jira/browse/HDFS-14263 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 3.1.1 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore >Priority: Major > Attachments: HDFS-14263.001.patch > > > As discussed in HDFS-10636, {{FsDatasetImpl#getBlockInputStream()}} does an > unnecessary block replica exists check.
[jira] [Commented] (HDFS-14263) Remove unnecessary block file exists check from FsDatasetImpl#getBlockInputStream()
[ https://issues.apache.org/jira/browse/HDFS-14263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765310#comment-16765310 ] Virajith Jalaparti commented on HDFS-14263: --- Hi [~surendrasingh], thanks for posting this. Re [^HDFS-14263.001.patch] , we can skip the check for {{ProvidedReplica}}. It can just be the null check. Ideally, there shouldn't be anything special we do for {{ProvidedReplica}}. > Remove unnecessary block file exists check from > FsDatasetImpl#getBlockInputStream() > --- > > Key: HDFS-14263 > URL: https://issues.apache.org/jira/browse/HDFS-14263 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 3.1.1 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore >Priority: Major > Attachments: HDFS-14263.001.patch > > > As discussed in HDFS-10636, {{FsDatasetImpl#getBlockInputStream()}} does an > unnecessary block replica exists check.
[jira] [Commented] (HDFS-14261) Kerberize JournalNodeSyncer unit test
[ https://issues.apache.org/jira/browse/HDFS-14261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765305#comment-16765305 ] Hudson commented on HDFS-14261: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15929 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15929/]) HDFS-14261. Kerberize JournalNodeSyncer unit test. Contributed by Siyao (weichiu: rev 5c10630ad8c976380491adec8e2d9f3e49ea8fa9) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNodeSync.java > Kerberize JournalNodeSyncer unit test > - > > Key: HDFS-14261 > URL: https://issues.apache.org/jira/browse/HDFS-14261 > Project: Hadoop HDFS > Issue Type: Test > Components: journal-node, security, test >Affects Versions: 3.2.0, 3.1.2 >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Major > Fix For: 3.3.0 > > Attachments: HDFS-14261.001.patch > > > This jira is an addition to HDFS-14140. Making the unit tests in > TestJournalNodeSync run on a Kerberized cluster.
[jira] [Updated] (HDFS-14083) libhdfs logs errors when opened FS doesn't support ByteBufferReadable
[ https://issues.apache.org/jira/browse/HDFS-14083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon updated HDFS-14083: --- Component/s: (was: test) libhdfs > libhdfs logs errors when opened FS doesn't support ByteBufferReadable > - > > Key: HDFS-14083 > URL: https://issues.apache.org/jira/browse/HDFS-14083 > Project: Hadoop HDFS > Issue Type: Improvement > Components: libhdfs, native >Affects Versions: 3.0.3 >Reporter: Pranay Singh >Assignee: Pranay Singh >Priority: Minor > Attachments: HADOOP-15928.001.patch, HADOOP-15928.002.patch, > HDFS-14083.003.patch, HDFS-14083.004.patch, HDFS-14083.005.patch, > HDFS-14083.006.patch, HDFS-14083.007.patch, HDFS-14083.008.patch, > HDFS-14083.009.patch > > > Problem: > > There is excessive error logging when a file is opened by libhdfs > (DFSClient/HDFS) in an S3 environment. This issue is caused because buffered > read is not supported in an S3 environment; see HADOOP-14603 "S3A input stream to > support ByteBufferReadable". > The following message is printed repeatedly in the error log/ to STDERR: > {code} > -- > UnsupportedOperationException: Byte-buffer read unsupported by input > streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported > by input stream > at > org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150) > {code} > h3. Root cause > After investigating the issue, it appears that the above exception is printed > because > when a file is opened, {{hdfsOpenFileImpl()}} calls {{readDirect()}}, which > is hitting this > exception. > h3. Fix: > Since the hdfs client is not initiating the byte buffered read, but it is > happening in an implicit manner, we should not be generating the error log > during open of a file.
[jira] [Commented] (HDFS-14262) [SBN read] Unclear Log.WARN message in GlobalStateIdContext
[ https://issues.apache.org/jira/browse/HDFS-14262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765297#comment-16765297 ] Erik Krogen commented on HDFS-14262: Thanks for the ping [~jojochuang]! I don't think throwing an exception is the right move, since it's a harmless (though unexpected) situation. I support enhancing the log message to make it more clear what happened. I would prefer language like "Resetting client stateId to server stateId" as opposed to the NameNode referring to itself as "I". cc [~chliang] [~shv] > [SBN read] Unclear Log.WARN message in GlobalStateIdContext > --- > > Key: HDFS-14262 > URL: https://issues.apache.org/jira/browse/HDFS-14262 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Shweta >Assignee: Shweta >Priority: Major > > The check clientStateId > serverStateId during active HA status might never > occur and the log message is pretty unclear, should it throw an Exception > instead?
[jira] [Commented] (HDFS-14262) Unclear Log.WARN message in GlobalStateIdContext
[ https://issues.apache.org/jira/browse/HDFS-14262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765277#comment-16765277 ] Wei-Chiu Chuang commented on HDFS-14262: To elaborate, here's the code in discussion {code:title=GlobalStateIdContext#receiveRequestState} if (clientStateId > serverStateId && HAServiceState.ACTIVE.equals(namesystem.getState())) { FSNamesystem.LOG.warn("A client sent stateId: " + clientStateId + ", but server state is: " + serverStateId); return serverStateId; } {code} If this condition is ever satisfied, an admin wouldn't understand why this is logged. Instead, it should log "A client sent stateId: " + clientStateId + " is larger than server state: " + serverStateId + ". This is unexpected. I will reset client stateId to server stateId" FYI [~xkrogen] > Unclear Log.WARN message in GlobalStateIdContext > > > Key: HDFS-14262 > URL: https://issues.apache.org/jira/browse/HDFS-14262 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Shweta >Assignee: Shweta >Priority: Major > > The check clientStateId > serverStateId during active HA status might never > occur and the log message is pretty unclear, should it throw an Exception > instead?
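Combining Wei-Chiu's suggested wording with Erik Krogen's preference for "Resetting client stateId to server stateId" (from the comments above) gives something like the sketch below. This is a hypothetical, self-contained simplification, not the real {{GlobalStateIdContext}}: the actual class reads the ids and HA state from namesystem fields and logs via {{FSNamesystem.LOG}}.

```java
// Hypothetical, simplified sketch of GlobalStateIdContext#receiveRequestState;
// only the warning wording is the point -- names and structure are illustrative.
class StateIdCheck {

  static String warnMessage(long clientStateId, long serverStateId) {
    // Name both ids, flag the anomaly, and state the action taken, so an
    // admin reading the log understands what happened and why.
    return "A client sent stateId " + clientStateId
        + ", which is larger than the server stateId " + serverStateId
        + ". This is unexpected. Resetting client stateId to server stateId.";
  }

  static long receiveRequestState(long clientStateId, long serverStateId,
                                  boolean serverIsActive) {
    if (clientStateId > serverStateId && serverIsActive) {
      System.err.println(warnMessage(clientStateId, serverStateId));
      return serverStateId; // reset the client to the server's view
    }
    return clientStateId;
  }
}
```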
[jira] [Commented] (HDFS-14083) libhdfs logs errors when opened FS doesn't support ByteBufferReadable
[ https://issues.apache.org/jira/browse/HDFS-14083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16765296#comment-16765296 ] Todd Lipcon commented on HDFS-14083: {quote} if (errno != 0) { file->flags &= ~HDFS_FILE_SUPPORTS_DIRECT_READ; } {quote} this doesn't seem quite right, since there isn't a guarantee that 'readDirect' sets errno to 0 in the case of success. It might just pick up some previously-existing value there. Also, the handling of those static variables seems thread-unsafe. It might work OK since the values are small enough to be naturally atomic on x86 but it seems like we might be better off addressing the root of this issue a better way instead of throttling the error logging? > libhdfs logs errors when opened FS doesn't support ByteBufferReadable > - > > Key: HDFS-14083 > URL: https://issues.apache.org/jira/browse/HDFS-14083 > Project: Hadoop HDFS > Issue Type: Improvement > Components: native, test >Affects Versions: 3.0.3 >Reporter: Pranay Singh >Assignee: Pranay Singh >Priority: Minor > Attachments: HADOOP-15928.001.patch, HADOOP-15928.002.patch, > HDFS-14083.003.patch, HDFS-14083.004.patch, HDFS-14083.005.patch, > HDFS-14083.006.patch, HDFS-14083.007.patch, HDFS-14083.008.patch, > HDFS-14083.009.patch > > > Problem: > > There is excessive error logging when a file is opened by libhdfs > (DFSClient/HDFS) in S3 environment, this issue is caused because buffered > read is not supported in S3 environment, HADOOP-14603 "S3A input stream to > support ByteBufferReadable" > The following message is printed repeatedly in the error log/ to STDERR: > {code} > -- > UnsupportedOperationException: Byte-buffer read unsupported by input > streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported > by input stream > at > org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150) > {code} > h3. 
Root cause > After investigating the issue, it appears that the above exception is printed > because > when a file is opened, {{hdfsOpenFileImpl()}} calls {{readDirect()}}, which > is hitting this > exception. > h3. Fix: > Since the hdfs client is not initiating the byte buffered read, but it is > happening in an implicit manner, we should not be generating the error log > during open of a file.