[jira] [Updated] (HDFS-13422) Ozone: Fix whitespaces and license issues in HDFS-7240 branch
[ https://issues.apache.org/jira/browse/HDFS-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lokesh Jain updated HDFS-13422:
---
    Attachment: HDFS-13422-HDFS-7240.002.patch

> Ozone: Fix whitespaces and license issues in HDFS-7240 branch
> -
>
> Key: HDFS-13422
> URL: https://issues.apache.org/jira/browse/HDFS-13422
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Mukul Kumar Singh
> Assignee: Lokesh Jain
> Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13422-HDFS-7240.001.patch, HDFS-13422-HDFS-7240.002.patch
>
> This jira will be used to fix various findbugs, javac, whitespace and license issues in the HDFS-7240 branch.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13441) DataNode missed BlockKey update from NameNode due to HeartbeatResponse was dropped
[ https://issues.apache.org/jira/browse/HDFS-13441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438968#comment-16438968 ]

He Xiaoqiao commented on HDFS-13441:
---

[~zhaoyunjiong] that was my misunderstanding above. One minor suggestion for patch v1: even if we catch the IOException in {{DataXceiver}} and re-register the DataNode with the NameNode, the re-registration can itself fail, so this solution only reduces the probability of failure rather than eliminating it completely. Would it be possible to change how BlockKeys are obtained from a *push* by the NameNode to a periodic *pull* by the DataNode, similar to how the DataNode schedules {{BlockReport}}?

> DataNode missed BlockKey update from NameNode due to HeartbeatResponse was dropped
> --
>
> Key: HDFS-13441
> URL: https://issues.apache.org/jira/browse/HDFS-13441
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode, namenode
> Affects Versions: 2.7.1
> Reporter: yunjiong zhao
> Assignee: yunjiong zhao
> Priority: Major
> Attachments: HDFS-13441.patch
>
> After NameNode failover, lots of applications failed because some DataNodes couldn't re-compute the password from the block token.
> {code:java}
> 2018-04-11 20:10:52,448 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: hdc3-lvs01-400-1701-048.stratus.lvs.ebay.com:50010:DataXceiver error processing unknown operation src: /10.142.74.116:57404 dst: /10.142.77.45:50010
> javax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring password [Caused by org.apache.hadoop.security.token.SecretManager$InvalidToken: Can't re-compute password for block_token_identifier (expiryDate=1523538652448, keyId=1762737944, userId=hadoop, blockPoolId=BP-36315570-10.103.108.13-1423055488042, blockId=12142862700, access modes=[WRITE]), since the required block key (keyID=1762737944) doesn't exist.]
> 	at com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java:598)
> 	at com.sun.security.sasl.digest.DigestMD5Server.evaluateResponse(DigestMD5Server.java:244)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslParticipant.evaluateChallengeOrResponse(SaslParticipant.java:115)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:376)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getSaslStreams(SaslDataTransferServer.java:300)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:127)
> 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:194)
> 	at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.security.token.SecretManager$InvalidToken: Can't re-compute password for block_token_identifier (expiryDate=1523538652448, keyId=1762737944, userId=hadoop, blockPoolId=BP-36315570-10.103.108.13-1423055488042, blockId=12142862700, access modes=[WRITE]), since the required block key (keyID=1762737944) doesn't exist.
> 	at org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.retrievePassword(BlockTokenSecretManager.java:382)
> 	at org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.retrievePassword(BlockPoolTokenSecretManager.java:79)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.buildServerPassword(SaslDataTransferServer.java:318)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.access$100(SaslDataTransferServer.java:73)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer$2.apply(SaslDataTransferServer.java:297)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer$SaslServerCallbackHandler.handle(SaslDataTransferServer.java:241)
> 	at com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java:589)
> 	... 7 more
> {code}
>
> In the DataNode log, we didn't see DataNode update block keys around 2018-04-11 09:55:00 and around 2018-04-11 19:55:00.
> {code:java}
> 2018-04-10 14:51:36,424 INFO org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager: Setting block keys
> 2018-04-10 23:55:38,420 INFO org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager: Setting block keys
> 2018-04-11 00:51:34,792 INFO org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager: Setting block keys
> 2018-04-11 10:51:39,403 INFO org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager: Setting block keys
> 2018-04-11 20:51:44,422 INFO >
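The pull-based model suggested in the comment above could look roughly like the following sketch. This is illustrative Java, not actual Hadoop code: {{BlockKeyPuller}}, {{KeySource}}, and the interval are hypothetical stand-ins for a DataNode periodically fetching keys instead of relying on the NameNode pushing them inside a HeartbeatResponse that may be dropped.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of the "pull" model: the DataNode periodically asks
// for the current block keys, the same way block reports are scheduled.
// All class and method names here are illustrative.
public class BlockKeyPuller {
    /** Stand-in for the RPC that would fetch keys from the NameNode. */
    public interface KeySource {
        String fetchCurrentKeys();
    }

    private final AtomicReference<String> currentKeys = new AtomicReference<>("<none>");
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start(KeySource source, long intervalMillis) {
        // A missed pull is harmless: the next tick fetches the keys again,
        // so a single dropped response no longer loses the update forever.
        scheduler.scheduleAtFixedRate(
                () -> currentKeys.set(source.fetchCurrentKeys()),
                0, intervalMillis, TimeUnit.MILLISECONDS);
    }

    public String getCurrentKeys() {
        return currentKeys.get();
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```

The point of the sketch is the retry-by-schedule property: losing one fetch only delays the update by one interval, whereas a dropped push loses it until the next key rollover.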
[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time
[ https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438963#comment-16438963 ]

genericqa commented on HDFS-13388:
--

| (/) *{color:green}+1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 5s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 4s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 16s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 34s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 58s{color} | {color:black} {color} |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13388 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12919137/HADOOP-13388.0013.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 0be9f970e20e 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 896b473 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23941/testReport/ |
| Max. process+thread count | 356 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23941/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
>
[jira] [Comment Edited] (HDFS-13422) Ozone: Fix whitespaces and license issues in HDFS-7240 branch
[ https://issues.apache.org/jira/browse/HDFS-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438808#comment-16438808 ]

Lokesh Jain edited comment on HDFS-13422 at 4/16/18 3:42 AM:
-

With the hdds profile: I was able to resolve the license problems for hadoop-hdds-server-framework and hadoop-ozone after enabling the hdds profile, but hadoop-main still contains the license issues of hadoop-cblock.

Without the hdds profile: all the issues appear in hadoop-main. This can be resolved by adding an exclude configuration for hadoop-hdds and hadoop-ozone in the hadoop-main pom.xml. After this change only the hadoop-cblock license issues appear in hadoop-main.

was (Author: ljain):
I was able to resolve the license problems for hadoop-hdds-server-framework and hadoop-ozone after enabling the hdds profile, but hadoop-main still contains the license issues of hadoop-cblock. Without the hdds profile all the issues appear in hadoop-main.

> Ozone: Fix whitespaces and license issues in HDFS-7240 branch
> -
>
> Key: HDFS-13422
> URL: https://issues.apache.org/jira/browse/HDFS-13422
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Mukul Kumar Singh
> Assignee: Lokesh Jain
> Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13422-HDFS-7240.001.patch
>
> This jira will be used to fix various findbugs, javac, whitespace and license issues in the HDFS-7240 branch.
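The "exclude configuration" mentioned above would presumably be an apache-rat-plugin exclusion in the hadoop-main pom.xml along these lines. This is a sketch only; the exact plugin section and patterns in the actual patch may differ:

{code:xml}
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- Hypothetical patterns: stop the parent build from re-checking
           submodules that run their own license checks under the hdds
           profile. -->
      <exclude>hadoop-hdds/**</exclude>
      <exclude>hadoop-ozone/**</exclude>
    </excludes>
  </configuration>
</plugin>
{code}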
[jira] [Commented] (HDFS-13453) RBF: getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table
[ https://issues.apache.org/jira/browse/HDFS-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438953#comment-16438953 ]

genericqa commented on HDFS-13453:
--

| (/) *{color:green}+1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 2s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 16s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 4s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 33s{color} | {color:black} {color} |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13453 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12919134/HDFS-13453-000.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux d59cae4e58b0 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 896b473 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/23940/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23940/testReport/ |
| Max. process+thread count | 935 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23940/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT
[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time
[ https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438934#comment-16438934 ]

Jinglun commented on HDFS-13388:
-

Thanks [~elgoiri] for your comments, I learned a lot from them. I uploaded a new patch 0013 with some fixes, please give it a review ~

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs-client
> Reporter: Jinglun
> Assignee: Jinglun
> Priority: Major
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, HADOOP-13388.0006.patch, HADOOP-13388.0007.patch, HADOOP-13388.0008.patch, HADOOP-13388.0009.patch, HADOOP-13388.0010.patch, HADOOP-13388.0011.patch, HADOOP-13388.0012.patch, HADOOP-13388.0013.patch
>
> In HDFS-7858 RequestHedgingProxyProvider was designed to "first simultaneously call multiple configured NNs to decide which is the active Namenode and then for subsequent calls it will invoke the previously successful NN." But the current code calls multiple configured NNs every time, even when we have already found the successful NN.
> That's because in RetryInvocationHandler.java, ProxyDescriptor's member proxyInfo is assigned only when it is constructed or when failover occurs. RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the only proxy we can get is always a dynamic proxy handled by RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class handles invoked methods by calling multiple configured NNs.
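The intended behavior described in this issue can be sketched in plain Java. This is illustrative code, not the actual RetryInvocationHandler or RequestHedgingProxyProvider: it hedges across all configured NNs only until one answers, then caches that proxy for subsequent calls and re-hedges only after a failure.

```java
import java.util.List;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

// Hypothetical sketch of "hedge once, then stick with the winner".
public class HedgingSketch<P> {
    private final List<P> proxies;
    private volatile P currentUsedProxy;  // null until some NN has succeeded

    public HedgingSketch(List<P> proxies) {
        this.proxies = proxies;
    }

    public <R> R invoke(Function<P, R> call) throws Exception {
        P cached = currentUsedProxy;
        if (cached != null) {
            try {
                return call.apply(cached);   // fast path: only the known-good NN
            } catch (RuntimeException e) {
                currentUsedProxy = null;     // treat as failover: hedge again
            }
        }
        // Slow path: call every configured NN, keep the first to succeed.
        ExecutorService pool = Executors.newFixedThreadPool(proxies.size());
        try {
            CompletionService<R> cs = new ExecutorCompletionService<>(pool);
            ConcurrentMap<Future<R>, P> owner = new ConcurrentHashMap<>();
            for (P p : proxies) {
                owner.put(cs.submit(() -> call.apply(p)), p);
            }
            Exception last = null;
            for (int i = 0; i < proxies.size(); i++) {
                Future<R> done = cs.take();             // first finished call
                try {
                    R result = done.get();
                    currentUsedProxy = owner.get(done); // remember the winner
                    return result;
                } catch (ExecutionException e) {
                    last = e;                           // that NN failed; wait for next
                }
            }
            throw last;
        } finally {
            pool.shutdownNow();
        }
    }
}
```

The bug described above corresponds to `currentUsedProxy` never being set, so every call takes the slow path.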
[jira] [Updated] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time
[ https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jinglun updated HDFS-13388:
---
    Attachment: HADOOP-13388.0013.patch

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs-client
> Reporter: Jinglun
> Assignee: Jinglun
> Priority: Major
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, HADOOP-13388.0006.patch, HADOOP-13388.0007.patch, HADOOP-13388.0008.patch, HADOOP-13388.0009.patch, HADOOP-13388.0010.patch, HADOOP-13388.0011.patch, HADOOP-13388.0012.patch, HADOOP-13388.0013.patch
>
> In HDFS-7858 RequestHedgingProxyProvider was designed to "first simultaneously call multiple configured NNs to decide which is the active Namenode and then for subsequent calls it will invoke the previously successful NN." But the current code calls multiple configured NNs every time, even when we have already found the successful NN.
> That's because in RetryInvocationHandler.java, ProxyDescriptor's member proxyInfo is assigned only when it is constructed or when failover occurs. RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the only proxy we can get is always a dynamic proxy handled by RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class handles invoked methods by calling multiple configured NNs.
[jira] [Commented] (HDFS-13453) RBF: getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table
[ https://issues.apache.org/jira/browse/HDFS-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438923#comment-16438923 ]

Dibyendu Karmakar commented on HDFS-13453:
--

Added the patch.

> RBF: getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table
> -
>
> Key: HDFS-13453
> URL: https://issues.apache.org/jira/browse/HDFS-13453
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Dibyendu Karmakar
> Assignee: Dibyendu Karmakar
> Priority: Major
> Attachments: HDFS-13453-000.patch
>
> [HDFS-13386|https://issues.apache.org/jira/browse/HDFS-13386] is not handling the case when /parent is not present in the mount table but /parent/subdir is in the mount table.
> In this case getMountPointDates is not able to fetch the latest time for /parent, as /parent is not present in the mount table.
> For this scenario we will display the latest modified subdir date/time as the /parent modified time.
[jira] [Updated] (HDFS-13453) RBF: getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table
[ https://issues.apache.org/jira/browse/HDFS-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dibyendu Karmakar updated HDFS-13453:
-
    Status: Patch Available  (was: Open)

> RBF: getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table
> -
>
> Key: HDFS-13453
> URL: https://issues.apache.org/jira/browse/HDFS-13453
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Dibyendu Karmakar
> Assignee: Dibyendu Karmakar
> Priority: Major
> Attachments: HDFS-13453-000.patch
>
> [HDFS-13386|https://issues.apache.org/jira/browse/HDFS-13386] is not handling the case when /parent is not present in the mount table but /parent/subdir is in the mount table.
> In this case getMountPointDates is not able to fetch the latest time for /parent, as /parent is not present in the mount table.
> For this scenario we will display the latest modified subdir date/time as the /parent modified time.
[jira] [Updated] (HDFS-13453) RBF: getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table
[ https://issues.apache.org/jira/browse/HDFS-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dibyendu Karmakar updated HDFS-13453:
-
    Attachment: HDFS-13453-000.patch

> RBF: getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table
> -
>
> Key: HDFS-13453
> URL: https://issues.apache.org/jira/browse/HDFS-13453
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Dibyendu Karmakar
> Assignee: Dibyendu Karmakar
> Priority: Major
> Attachments: HDFS-13453-000.patch
>
> [HDFS-13386|https://issues.apache.org/jira/browse/HDFS-13386] is not handling the case when /parent is not present in the mount table but /parent/subdir is in the mount table.
> In this case getMountPointDates is not able to fetch the latest time for /parent, as /parent is not present in the mount table.
> For this scenario we will display the latest modified subdir date/time as the /parent modified time.
[jira] [Updated] (HDFS-13453) RBF: getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table
[ https://issues.apache.org/jira/browse/HDFS-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dibyendu Karmakar updated HDFS-13453:
-
    Summary: RBF: getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table  (was: getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table)

> RBF: getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table
> -
>
> Key: HDFS-13453
> URL: https://issues.apache.org/jira/browse/HDFS-13453
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Dibyendu Karmakar
> Assignee: Dibyendu Karmakar
> Priority: Major
>
> [HDFS-13386|https://issues.apache.org/jira/browse/HDFS-13386] is not handling the case when /parent is not present in the mount table but /parent/subdir is in the mount table.
> In this case getMountPointDates is not able to fetch the latest time for /parent, as /parent is not present in the mount table.
> For this scenario we will display the latest modified subdir date/time as the /parent modified time.
[jira] [Updated] (HDFS-13453) getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table
[ https://issues.apache.org/jira/browse/HDFS-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dibyendu Karmakar updated HDFS-13453:
-
    Description: 
[HDFS-13386|https://issues.apache.org/jira/browse/HDFS-13386] is not handling the case when /parent is not present in the mount table but /parent/subdir is in the mount table.
In this case getMountPointDates is not able to fetch the latest time for /parent, as /parent is not present in the mount table.
For this scenario we will display the latest modified subdir date/time as the /parent modified time.

  was:
[#HDFS-13386] is not handling the case when /parent is not present in the mount table but /parent/subdir is in the mount table.
In this case getMountPointDates is not able to fetch the latest time for /parent, as /parent is not present in the mount table.
For this scenario we will display the latest modified subdir date/time as the /parent modified time.

> getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table
> -
>
> Key: HDFS-13453
> URL: https://issues.apache.org/jira/browse/HDFS-13453
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Dibyendu Karmakar
> Assignee: Dibyendu Karmakar
> Priority: Major
>
> [HDFS-13386|https://issues.apache.org/jira/browse/HDFS-13386] is not handling the case when /parent is not present in the mount table but /parent/subdir is in the mount table.
> In this case getMountPointDates is not able to fetch the latest time for /parent, as /parent is not present in the mount table.
> For this scenario we will display the latest modified subdir date/time as the /parent modified time.
[jira] [Updated] (HDFS-13453) getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table
[ https://issues.apache.org/jira/browse/HDFS-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dibyendu Karmakar updated HDFS-13453:
-
    Description: 
[#HDFS-13386] is not handling the case when /parent is not present in the mount table but /parent/subdir is in the mount table.
In this case getMountPointDates is not able to fetch the latest time for /parent, as /parent is not present in the mount table.
For this scenario we will display the latest modified subdir date/time as the /parent modified time.

  was:
[#HDFS-13386 https://issues.apache.org/jira/browse/HDFS-13386] is not handling the case when /parent is not present in the mount table but /parent/subdir is in the mount table.
In this case getMountPointDates is not able to fetch the latest time for /parent, as /parent is not present in the mount table.
For this scenario we will display the latest modified subdir date/time as the /parent modified time.

> getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table
> -
>
> Key: HDFS-13453
> URL: https://issues.apache.org/jira/browse/HDFS-13453
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Dibyendu Karmakar
> Assignee: Dibyendu Karmakar
> Priority: Major
>
> [#HDFS-13386] is not handling the case when /parent is not present in the mount table but /parent/subdir is in the mount table.
> In this case getMountPointDates is not able to fetch the latest time for /parent, as /parent is not present in the mount table.
> For this scenario we will display the latest modified subdir date/time as the /parent modified time.
[jira] [Updated] (HDFS-13453) getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table
[ https://issues.apache.org/jira/browse/HDFS-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dibyendu Karmakar updated HDFS-13453:
-
    Description: 
[#HDFS-13386 https://issues.apache.org/jira/browse/HDFS-13386] is not handling the case when /parent is not present in the mount table but /parent/subdir is in the mount table.
In this case getMountPointDates is not able to fetch the latest time for /parent, as /parent is not present in the mount table.
For this scenario we will display the latest modified subdir date/time as the /parent modified time.

  was:
[#HDFS-13386] is not handling the case when /parent is not present in the mount table but /parent/subdir is in the mount table.
In this case getMountPointDates is not able to fetch the latest time for /parent, as /parent is not present in the mount table.
For this scenario we will display the latest modified subdir date/time as the /parent modified time.

> getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table
> -
>
> Key: HDFS-13453
> URL: https://issues.apache.org/jira/browse/HDFS-13453
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Dibyendu Karmakar
> Assignee: Dibyendu Karmakar
> Priority: Major
>
> [#HDFS-13386 https://issues.apache.org/jira/browse/HDFS-13386] is not handling the case when /parent is not present in the mount table but /parent/subdir is in the mount table.
> In this case getMountPointDates is not able to fetch the latest time for /parent, as /parent is not present in the mount table.
> For this scenario we will display the latest modified subdir date/time as the /parent modified time.
[jira] [Created] (HDFS-13453) getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table
Dibyendu Karmakar created HDFS-13453: Summary: getMountPointDates should fetch latest subdir time/date when parent dir is not present but /parent/child dirs are present in mount table Key: HDFS-13453 URL: https://issues.apache.org/jira/browse/HDFS-13453 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Dibyendu Karmakar Assignee: Dibyendu Karmakar [#HDFS-13386] does not handle the case when /parent is not present in the mount table but /parent/subdir is in the mount table. In this case getMountPointDates is not able to fetch the latest time for /parent, as /parent is not present in the mount table. For this scenario we will display the latest modified subdir date/time as the /parent modified time. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
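The fallback proposed in this issue (when /parent itself has no mount entry, report the latest modification time among its /parent/... child entries) can be sketched as below. The map-based mount table and the method name are hypothetical illustrations for this sketch, not the actual Router-based federation code:

```java
import java.util.HashMap;
import java.util.Map;

public class MountPointDates {
    // Sketch: a mount table mapping mount path -> modification time.
    // If "parent" has a direct entry, use its time; otherwise fall back
    // to the latest time among entries under "parent/".
    static Long latestChildTime(Map<String, Long> mountTable, String parent) {
        Long direct = mountTable.get(parent);
        if (direct != null) {
            return direct;                       // direct entry wins
        }
        String prefix = parent.endsWith("/") ? parent : parent + "/";
        Long latest = null;
        for (Map.Entry<String, Long> e : mountTable.entrySet()) {
            if (e.getKey().startsWith(prefix)
                    && (latest == null || e.getValue() > latest)) {
                latest = e.getValue();           // track the newest child
            }
        }
        return latest;                           // null if nothing matches
    }

    public static void main(String[] args) {
        Map<String, Long> table = new HashMap<>();
        table.put("/parent/a", 100L);
        table.put("/parent/b", 300L);
        // "/parent" is absent, so the latest child time is reported.
        System.out.println(latestChildTime(table, "/parent")); // prints 300
    }
}
```

The design choice here mirrors the issue description: the parent's displayed date is only ever a fallback derived from its children, so a real entry for /parent, once added, takes precedence.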
[jira] [Commented] (HDFS-12906) hedged point read in DFSInputStream sends only 1 hedge read request
[ https://issues.apache.org/jira/browse/HDFS-12906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438904#comment-16438904 ] Wei-Chiu Chuang commented on HDFS-12906: Hi [~thinktaocs] could you please add the affected version? We fixed quite a few hedged read bugs late last year (HDFS-11738, HDFS-11303, HDFS-11708), so I'm not sure if this issue is still valid. > hedged point read in DFSInputStream sends only 1 hedge read request > --- > > Key: HDFS-12906 > URL: https://issues.apache.org/jira/browse/HDFS-12906 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Reporter: Tao Zhang >Assignee: Tao Zhang >Priority: Major > > Hedged point read is handled in DFSInputStream.hedgedFetchBlockByteRange(). > It calls "getFirstToComplete()" to get the 1st returned result after sending > out hedge read requests. But since "getFirstToComplete()" uses > "CompletionService.take()", which is an endlessly blocking operation, it will > wait for 1 result after sending only 1 hedge read request. > It could be changed to wait for a specific timeout (instead of an infinite > timeout) and start another hedge read request. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
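The change suggested in the issue description — waiting with a bounded timeout and firing another hedge request on expiry, instead of blocking forever in CompletionService.take() — can be sketched as follows. The class and method names are hypothetical; this is not the actual DFSInputStream code:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class HedgedReadSketch {
    // Poll with a timeout rather than blocking in take(); on each timeout,
    // submit one more hedge request until the cap is reached, then keep
    // waiting for whichever request finishes first.
    static <T> T firstToComplete(CompletionService<T> cs,
                                 Callable<T> hedgeTask,
                                 long timeoutMs,
                                 int maxRequests) throws Exception {
        int submitted = 1;                  // the primary request is in flight
        while (true) {
            Future<T> done = cs.poll(timeoutMs, TimeUnit.MILLISECONDS);
            if (done != null) {
                return done.get();          // first result wins
            }
            if (submitted < maxRequests) {
                cs.submit(hedgeTask);       // timed out: hedge another replica
                submitted++;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        CompletionService<String> cs = new ExecutorCompletionService<>(pool);
        // Primary read is slow; the hedge fired after the timeout wins.
        cs.submit(() -> { Thread.sleep(500); return "slow-replica"; });
        System.out.println(firstToComplete(cs, () -> "fast-replica", 50, 3));
        pool.shutdown();
    }
}
```

A real implementation would also cancel the losing futures and account for per-request failures, which this sketch omits for brevity.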
[jira] [Commented] (HDFS-13452) Some Potential NPE
[ https://issues.apache.org/jira/browse/HDFS-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438821#comment-16438821 ] genericqa commented on HDFS-13452: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 29s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 1s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 50s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 20 unchanged - 0 fixed = 21 total (was 20) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 56s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 3s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 17s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}206m 56s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.server.namenode.TestNameNodeMXBean | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13452 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12919106/HDFS-13542_1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs
[jira] [Commented] (HDFS-12841) Ozone: Remove Pipeline from Datanode Container Protocol protobuf definition.
[ https://issues.apache.org/jira/browse/HDFS-12841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438816#comment-16438816 ] genericqa commented on HDFS-12841: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 48s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 15s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 28s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 8s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 24s{color} | {color:red} client in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 23s{color} | {color:red} container-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 24s{color} | {color:red} tools in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 28s{color} | {color:red} integration-test in HDFS-7240 failed. 
{color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 22s{color} | {color:red} tools in HDFS-7240 failed. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 23s{color} | {color:red} client in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 56s{color} | {color:red} hadoop-hdds/common in HDFS-7240 has 1 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s{color} | {color:red} container-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 23s{color} | {color:red} tools in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s{color} | {color:red} tools in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s{color} | {color:red} client in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 23s{color} | {color:red} container-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 23s{color} | {color:red} tools in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 23s{color} | {color:red} integration-test in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s{color} | {color:red} tools in HDFS-7240 failed. 
{color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 11s{color} | {color:red} client in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 10s{color} | {color:red} container-service in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 11s{color} | {color:red} tools in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 10s{color} | {color:red} integration-test in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 10s{color} | {color:red} tools in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 21s{color} | {color:green} the patch passed {color} | |
[jira] [Commented] (HDFS-13433) webhdfs requests can be routed incorrectly in federated cluster
[ https://issues.apache.org/jira/browse/HDFS-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438810#comment-16438810 ] genericqa commented on HDFS-13433: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 38s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 4s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 114 unchanged - 1 fixed = 114 total (was 115) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 54s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 36s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}112m 52s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}219m 40s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13433 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12918963/HDFS-13433.04.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux
[jira] [Commented] (HDFS-13422) Ozone: Fix whitespaces and license issues in HDFS-7240 branch
[ https://issues.apache.org/jira/browse/HDFS-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438808#comment-16438808 ] Lokesh Jain commented on HDFS-13422: I was able to resolve the license problems for hadoop-hdds-server-framework and hadoop-ozone after enabling the hdds profile, but hadoop-main still contains license issues from hadoop-cblock. Without the hdds profile, all the issues appear in hadoop-main. > Ozone: Fix whitespaces and license issues in HDFS-7240 branch > - > > Key: HDFS-13422 > URL: https://issues.apache.org/jira/browse/HDFS-13422 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Lokesh Jain >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13422-HDFS-7240.001.patch > > > This jira will be used to fix various findbugs, javac and license > issues in HDFS-7240 branch. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13422) Ozone: Fix whitespaces and license issues in HDFS-7240 branch
[ https://issues.apache.org/jira/browse/HDFS-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438800#comment-16438800 ] Lokesh Jain commented on HDFS-13422: The license check in the above run was done without the -Phdds profile. In that case all the errors appear in the hadoop-main submodule. The license check can be run with the command {code:java} mvn -fn apache-rat:check -Phdds {code} It shows Apache license issues in hadoop-main, hadoop-hdds-server-framework, hadoop-ozone and hadoop-tools. > Ozone: Fix whitespaces and license issues in HDFS-7240 branch > - > > Key: HDFS-13422 > URL: https://issues.apache.org/jira/browse/HDFS-13422 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Lokesh Jain >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13422-HDFS-7240.001.patch > > > This jira will be used to fix various findbugs, javac and license > issues in HDFS-7240 branch. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13431) Ozone: Ozone Shell should use RestClient and RpcClient
[ https://issues.apache.org/jira/browse/HDFS-13431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438771#comment-16438771 ] Mukul Kumar Singh commented on HDFS-13431: -- Thanks for working on this [~ljain]. The patch does not apply on the latest branch. Could you please rebase it? > Ozone: Ozone Shell should use RestClient and RpcClient > -- > > Key: HDFS-13431 > URL: https://issues.apache.org/jira/browse/HDFS-13431 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HDFS-13431-HDFS-7240.001.patch > > > Currently Ozone Shell uses OzoneRestClient. We should use both RestClient and > RpcClient instead of OzoneRestClient. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12841) Ozone: pipeline from Datanode Container Protocol protobuf definition.
[ https://issues.apache.org/jira/browse/HDFS-12841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-12841: - Status: Patch Available (was: Open) > Ozone: pipeline from Datanode Container Protocol protobuf definition. > - > > Key: HDFS-12841 > URL: https://issues.apache.org/jira/browse/HDFS-12841 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-12841-HDFS-7240.001.patch > > > The current Ozone code passes pipeline information to datanodes as well. > However datanodes do not use this information. > Hence Pipeline should be removed from ozone datanode commands. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12841) Ozone: Remove Pipeline from Datanode Container Protocol protobuf definition.
[ https://issues.apache.org/jira/browse/HDFS-12841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-12841: - Summary: Ozone: Remove Pipeline from Datanode Container Protocol protobuf definition. (was: Ozone: pipeline from Datanode Container Protocol protobuf definition.) > Ozone: Remove Pipeline from Datanode Container Protocol protobuf definition. > > > Key: HDFS-12841 > URL: https://issues.apache.org/jira/browse/HDFS-12841 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-12841-HDFS-7240.001.patch > > > The current Ozone code passes pipeline information to datanodes as well. > However datanodes do not use this information. > Hence Pipeline should be removed from ozone datanode commands. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12841) Ozone: pipeline from Datanode Container Protocol protobuf definition.
[ https://issues.apache.org/jira/browse/HDFS-12841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-12841: - Attachment: HDFS-12841-HDFS-7240.001.patch > Ozone: pipeline from Datanode Container Protocol protobuf definition. > - > > Key: HDFS-12841 > URL: https://issues.apache.org/jira/browse/HDFS-12841 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-12841-HDFS-7240.001.patch > > > The current Ozone code passes pipeline information to datanodes as well. > However datanodes do not use this information. > Hence Pipeline should be removed from ozone datanode commands. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12841) Ozone: pipeline from Datanode Container Protocol protobuf definition.
[ https://issues.apache.org/jira/browse/HDFS-12841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-12841: - Description: The current Ozone code passes pipeline information to datanodes as well. However datanodes do not use this information. Hence Pipeline should be removed from ozone datanode commands. was: The current Ozone code heavily uses pipeline.getContainerName to get the container information. However a pipeline just represents a list of datanodes to be used for storing data. Hence 1) containerName should be removed from pipeline object. 2) Pipeline information should only be part of container Info if needed. It should be removed from other ozone datanode commands. > Ozone: pipeline from Datanode Container Protocol protobuf definition. > - > > Key: HDFS-12841 > URL: https://issues.apache.org/jira/browse/HDFS-12841 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > > The current Ozone code passes pipeline information to datanodes as well. > However datanodes do not use this information. > Hence Pipeline should be removed from ozone datanode commands. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12841) Ozone: pipeline from Datanode Container Protocol protobuf definition.
[ https://issues.apache.org/jira/browse/HDFS-12841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-12841: - Summary: Ozone: pipeline from Datanode Container Protocol protobuf definition. (was: Ozone: remove container name from pipeline and protobuf definition.) > Ozone: pipeline from Datanode Container Protocol protobuf definition. > - > > Key: HDFS-12841 > URL: https://issues.apache.org/jira/browse/HDFS-12841 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > > The current Ozone code heavily uses pipeline.getContainerName to get the > container information. However a pipeline just represents a list of datanodes > to be used for storing data. Hence > 1) containerName should be removed from pipeline object. > 2) Pipeline information should only be part of container Info if needed. It > should be removed from other ozone datanode commands. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13452) Some Potential NPE
[ https://issues.apache.org/jira/browse/HDFS-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lujie updated HDFS-13452: - Status: Patch Available (was: Open) > Some Potential NPE > --- > > Key: HDFS-13452 > URL: https://issues.apache.org/jira/browse/HDFS-13452 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: lujie >Priority: Major > Attachments: HDFS-13542_1.patch > > > We have developed a static analysis tool > [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find some potential > NPEs, just as described in HDFS-13451. We found another two bugs or bad > practices after improving the tool. > The patch is attached here. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13452) Some Potential NPE
[ https://issues.apache.org/jira/browse/HDFS-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lujie updated HDFS-13452: - Attachment: HDFS-13542_1.patch > Some Potential NPE > --- > > Key: HDFS-13452 > URL: https://issues.apache.org/jira/browse/HDFS-13452 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: lujie >Priority: Major > Attachments: HDFS-13542_1.patch > > > We have developed a static analysis tool > [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find some potential > NPEs, just as described in HDFS-13451. We found another two bugs or bad > practices after improving the tool. > The patch is attached here. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Issue Comment Deleted] (HDFS-13452) Some Potential NPE
[ https://issues.apache.org/jira/browse/HDFS-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lujie updated HDFS-13452: - Comment: was deleted (was: =) > Some Potential NPE > --- > > Key: HDFS-13452 > URL: https://issues.apache.org/jira/browse/HDFS-13452 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: lujie >Priority: Major > Attachments: HDFS-13542_1.patch > > > We have developed a static analysis tool > [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find some potential > NPEs, just as described in HDFS-13451. We found another two bugs or bad > practices after improving the tool. > The patch is attached here. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13452) Some Potential NPE
[ https://issues.apache.org/jira/browse/HDFS-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lujie updated HDFS-13452: - Description: We have developed a static analysis tool [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find some potential NPEs, just as described in HDFS-13451. We found another two bugs or bad practices after improving the tool. The patch is attached here. was:I am stil > Some Potential NPE > --- > > Key: HDFS-13452 > URL: https://issues.apache.org/jira/browse/HDFS-13452 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: lujie >Priority: Major > Attachments: HDFS-13542_1.patch > > > We have developed a static analysis tool > [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find some potential > NPEs, just as described in HDFS-13451. We found another two bugs or bad > practices after improving the tool. > The patch is attached here. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
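The kind of defect a tool like NPEDetector reports — dereferencing a possibly-null return value without a check — and the usual guarded fix can be illustrated with a minimal, hypothetical example (this is not the code from the attached patch):

```java
import java.util.HashMap;
import java.util.Map;

public class NpeGuardSketch {
    // Hypothetical config store used only for this illustration.
    static final Map<String, String> CONFIG = new HashMap<>();

    // Risky pattern such tools flag: Map.get() may return null,
    // so calling a method on the result can throw NullPointerException.
    static int riskyLength(String key) {
        return CONFIG.get(key).length();    // NPE if key is absent
    }

    // Guarded variant: test for null and fail with a clear message
    // (or fall back to a default) instead of dereferencing blindly.
    static int guardedLength(String key) {
        String value = CONFIG.get(key);
        if (value == null) {
            throw new IllegalArgumentException("no config entry for " + key);
        }
        return value.length();
    }

    public static void main(String[] args) {
        CONFIG.put("present", "abc");
        System.out.println(guardedLength("present")); // prints 3
        try {
            guardedLength("absent");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The guarded version trades a late, hard-to-diagnose NPE for an early exception that names the missing key, which is the general shape of the "bad practice" fixes such static analysis reports usually drive.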
[jira] [Commented] (HDFS-13451) Fix Some Potential NPE
[ https://issues.apache.org/jira/browse/HDFS-13451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438703#comment-16438703 ] genericqa commented on HDFS-13451: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 56s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 34s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 30m 36s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 38s{color} | {color:orange} root: The patch generated 6 new + 544 unchanged - 0 fixed = 550 total (was 544) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 57s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 26s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 41s{color} | {color:red} hadoop-common in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}111m 16s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 48s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}261m 6s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ha.TestZKFailoverController | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13451 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12919101/HDFS-13451_1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 5ba87758ed96
[jira] [Updated] (HDFS-13452) Some Potential NPE
[ https://issues.apache.org/jira/browse/HDFS-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lujie updated HDFS-13452: - Description: I am stil > Some Potential NPE > --- > > Key: HDFS-13452 > URL: https://issues.apache.org/jira/browse/HDFS-13452 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: lujie >Priority: Major > > I am stil -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13452) Some Potential NPE
[ https://issues.apache.org/jira/browse/HDFS-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lujie updated HDFS-13452: - Summary: Some Potential NPE (was: Two Potential NPE ) > Some Potential NPE > --- > > Key: HDFS-13452 > URL: https://issues.apache.org/jira/browse/HDFS-13452 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: lujie >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13452) Some Potential NPE
[ https://issues.apache.org/jira/browse/HDFS-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438687#comment-16438687 ] lujie edited comment on HDFS-13452 at 4/15/18 12:21 PM: = was (Author: xiaoheipangzi): sorry about that put wrong place, should be hbase > Some Potential NPE > --- > > Key: HDFS-13452 > URL: https://issues.apache.org/jira/browse/HDFS-13452 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: lujie >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13452) Two Potential NPE
[ https://issues.apache.org/jira/browse/HDFS-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lujie updated HDFS-13452: - Description: (was: Callee ZKUtil#listChildrenAndWatchForNewChildren may return null, it has 8 callers, 6 of the caller have null checker like: {code:java} List children = ZKUtil.listChildrenAndWatchForNewChildren(zkw, zkw.znodePaths.rsZNode); if (children == null) { return Collections.emptyList(); } {code} but another two callers do not have null checker:RSGroupInfoManagerImpl#retrieveGroupListFromZookeeper,ZKProcedureMemberRpcs#watchForAbortedProcedures. We attach the patch to fix this probelm.(We found this bug by tool [NPEDetector|https://github.com/lujiefsi/NPEDetector])) > Two Potential NPE > -- > > Key: HDFS-13452 > URL: https://issues.apache.org/jira/browse/HDFS-13452 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: lujie >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13452) Two Potential NPE
[ https://issues.apache.org/jira/browse/HDFS-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438687#comment-16438687 ] lujie commented on HDFS-13452: -- Sorry about that, I put this in the wrong place; it should be HBase. > Two Potential NPE > -- > > Key: HDFS-13452 > URL: https://issues.apache.org/jira/browse/HDFS-13452 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: lujie >Priority: Major > > The callee ZKUtil#listChildrenAndWatchForNewChildren may return null. It has 8 > callers; 6 of them have a null check like: > {code:java} > List<String> children = ZKUtil.listChildrenAndWatchForNewChildren(zkw, > zkw.znodePaths.rsZNode); > if (children == null) { > return Collections.emptyList(); > } > {code} > but the other two callers, > RSGroupInfoManagerImpl#retrieveGroupListFromZookeeper and > ZKProcedureMemberRpcs#watchForAbortedProcedures, do not. > > We attach a patch to fix this problem. (We found this bug with the tool > [NPEDetector|https://github.com/lujiefsi/NPEDetector].) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13452) Two Potential NPE
lujie created HDFS-13452: Summary: Two Potential NPE Key: HDFS-13452 URL: https://issues.apache.org/jira/browse/HDFS-13452 Project: Hadoop HDFS Issue Type: Bug Reporter: lujie The callee ZKUtil#listChildrenAndWatchForNewChildren may return null. It has 8 callers; 6 of them have a null check like: {code:java} List<String> children = ZKUtil.listChildrenAndWatchForNewChildren(zkw, zkw.znodePaths.rsZNode); if (children == null) { return Collections.emptyList(); } {code} but the other two callers, RSGroupInfoManagerImpl#retrieveGroupListFromZookeeper and ZKProcedureMemberRpcs#watchForAbortedProcedures, do not. We attach a patch to fix this problem. (We found this bug with the tool [NPEDetector|https://github.com/lujiefsi/NPEDetector].) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
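The guard described above can be shown as a minimal, self-contained sketch. The callee below is a hypothetical stand-in for ZKUtil#listChildrenAndWatchForNewChildren (the real HBase API talks to ZooKeeper); only the null-handling pattern is the point:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class NullGuardExample {

    // Hypothetical stand-in for a callee that, like
    // ZKUtil#listChildrenAndWatchForNewChildren, returns null when the
    // watched znode does not exist.
    static List<String> listChildren(boolean nodeExists) {
        return nodeExists ? Arrays.asList("region-a", "region-b") : null;
    }

    // Caller with the guard the 6 safe call sites already use:
    // normalize null to an empty list so downstream iteration cannot NPE.
    static List<String> listChildrenSafely(boolean nodeExists) {
        List<String> children = listChildren(nodeExists);
        if (children == null) {
            return Collections.emptyList();
        }
        return children;
    }
}
```

Returning Collections.emptyList() rather than null keeps every caller's loop and size() call safe without extra checks at each use site, which is exactly what the patch adds to the two unguarded callers.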
[jira] [Commented] (HDFS-13424) Ozone: Refactor MiniOzoneClassicCluster
[ https://issues.apache.org/jira/browse/HDFS-13424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438678#comment-16438678 ] Mukul Kumar Singh commented on HDFS-13424: -- Thanks for working on this [~nandakumar131]. The code looks good to me. I concentrated on the changes in HddsDatanodeService and MiniOzoneClusterImpl. I feel that we can have two implementations of MiniOzoneCluster. 1) MiniOzoneClassicCluster: this can be the new implementation without hdfs-related components. 2) MiniOzoneHdfsCluster: where Ozone is created over the hdfs datanode. (We do not necessarily need to support all the APIs.) I feel that this approach will also help in making sure that the plugin inside the hdfs datanode works correctly. Some minor nitpicks in the code: 1) HddsDatanodeService:190, the indentation of lines 190-192 is wrong. 2) OzoneContract:41, unused import. > Ozone: Refactor MiniOzoneClassicCluster > --- > > Key: HDFS-13424 > URL: https://issues.apache.org/jira/browse/HDFS-13424 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Nanda kumar >Assignee: Nanda kumar >Priority: Major > Attachments: HDFS-13424-HDFS-7240.000.patch > > > This jira will track the refactoring work on {{MiniOzoneClassicCluster}} > which removes the dependency on and the changes made in {{MiniDFSCluster}} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13451) Fix Some Potential NPE
[ https://issues.apache.org/jira/browse/HDFS-13451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lujie updated HDFS-13451: - Description: We have developed a static analysis tool [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find some potential NPE. Our analysis shows that some callees may return null in corner cases (e.g. node crash, IO exception); some of their callers have a _!= null_ check but some do not. In this issue we post a patch which adds the missing != null checks based on the existing ones. For example: the callee BlockInfo#getDatanode may return null: {code:java} public DatanodeDescriptor getDatanode(int index) { DatanodeStorageInfo storage = getStorageInfo(index); return storage == null ? null : storage.getDatanodeDescriptor(); } {code} It has 4 callers; 3 of them have a != null check, as in CacheReplicationMonitor#addNewPendingCached: {code:java} DatanodeDescriptor datanode = blockInfo.getDatanode(i); if (datanode == null) { continue; } {code} but the caller NamenodeFsck#blockIdCK has no null check, so we add one just like the one in CacheReplicationMonitor#addNewPendingCached: {code:java} DatanodeDescriptor dn = blockInfo.getDatanode(idx); if (dn == null) { continue; } {code} Since we are not very familiar with HDFS, we hope some expert can review it. Thanks was: We have developed a static analysis tool [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find some potential NPE. Our analysis shows that some callees may return null in corner case(e.g. node crash , IO exception), some of their callers have _!=null_ check but some do not have. In this issue we post a patch which can add !=null based on existed !=null check. For example: callee BlockInfo#getDatanode may return null: {code:java} public DatanodeDescriptor getDatanode(int index) { DatanodeStorageInfo storage = getStorageInfo(index); return storage == null ? 
null : storage.getDatanodeDescriptor(); } {code} it has 4 callers, 3 of them have !=null checker, like in CacheReplicationMonitor#addNewPendingCached: {code:java} DatanodeDescriptor datanode = blockInfo.getDatanode(i); if (datanode == null) { continue; } {code} but in caller NamenodeFsck#blockIdCK have no !null checker, we add checker just like CacheReplicationMonitor#addNewPendingCached {code:java} DatanodeDescriptor dn = blockInfo.getDatanode(idx); if (dn == null) { continue; } {code} But due to we are not very familiar with CASSANDRA, hope some expert can review it. Thanks > Fix Some Potential NPE > -- > > Key: HDFS-13451 > URL: https://issues.apache.org/jira/browse/HDFS-13451 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-beta1 >Reporter: lujie >Priority: Major > Attachments: HDFS-13451_1.patch > > > We have developed a static analysis tool > [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find some potential > NPE. Our analysis shows that some callees may return null in corner cases (e.g. > node crash, IO exception); some of their callers have a _!= null_ check but > some do not. In this issue we post a patch which adds the missing != null > checks based on the existing ones. For example: > the callee BlockInfo#getDatanode may return null: > {code:java} > public DatanodeDescriptor getDatanode(int index) { > DatanodeStorageInfo storage = getStorageInfo(index); >return storage == null ? 
null : storage.getDatanodeDescriptor(); > } > {code} > It has 4 callers; 3 of them have a != null check, as in > CacheReplicationMonitor#addNewPendingCached: > {code:java} > DatanodeDescriptor datanode = blockInfo.getDatanode(i); > if (datanode == null) { >continue; > } > {code} > but the caller NamenodeFsck#blockIdCK has no null check, so we add one just > like the one in CacheReplicationMonitor#addNewPendingCached: > {code:java} > DatanodeDescriptor dn = blockInfo.getDatanode(idx); > if (dn == null) { > continue; > } > {code} > Since we are not very familiar with HDFS, we hope some expert can review > it. > Thanks -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13451) Fix Some Potential NPE
[ https://issues.apache.org/jira/browse/HDFS-13451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lujie updated HDFS-13451: - Affects Version/s: 3.0.0-beta1 > Fix Some Potential NPE > -- > > Key: HDFS-13451 > URL: https://issues.apache.org/jira/browse/HDFS-13451 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-beta1 >Reporter: lujie >Priority: Major > Attachments: HDFS-13451_1.patch > > > We have developed a static analysis tool > [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find some potential > NPE. Our analysis shows that some callees may return null in corner case(e.g. > node crash , IO exception), some of their callers have _!=null_ check but > some do not have. In this issue we post a patch which can add !=null based > on existed !=null check. For example: > callee BlockInfo#getDatanode may return null: > {code:java} > public DatanodeDescriptor getDatanode(int index) { > DatanodeStorageInfo storage = getStorageInfo(index); >return storage == null ? null : storage.getDatanodeDescriptor(); > } > {code} > it has 4 callers, 3 of them have !=null checker, like in > CacheReplicationMonitor#addNewPendingCached: > {code:java} > DatanodeDescriptor datanode = blockInfo.getDatanode(i); > if (datanode == null) { >continue; > } > {code} > but in caller NamenodeFsck#blockIdCK have no !null checker, we add checker > just like CacheReplicationMonitor#addNewPendingCached > {code:java} > DatanodeDescriptor dn = blockInfo.getDatanode(idx); > if (dn == null) { > continue; > } > {code} > But due to we are not very familiar with CASSANDRA, hope some expert can > review it. > Thanks -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13451) Fix Some Potential NPE
[ https://issues.apache.org/jira/browse/HDFS-13451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lujie updated HDFS-13451: - Attachment: HDFS-13451_1.patch > Fix Some Potential NPE > -- > > Key: HDFS-13451 > URL: https://issues.apache.org/jira/browse/HDFS-13451 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: lujie >Priority: Major > Attachments: HDFS-13451_1.patch > > > We have developed a static analysis tool > [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find some potential > NPE. Our analysis shows that some callees may return null in corner case(e.g. > node crash , IO exception), some of their callers have _!=null_ check but > some do not have. In this issue we post a patch which can add !=null based > on existed !=null check. For example: > callee BlockInfo#getDatanode may return null: > {code:java} > public DatanodeDescriptor getDatanode(int index) { > DatanodeStorageInfo storage = getStorageInfo(index); >return storage == null ? null : storage.getDatanodeDescriptor(); > } > {code} > it has 4 callers, 3 of them have !=null checker, like in > CacheReplicationMonitor#addNewPendingCached: > {code:java} > DatanodeDescriptor datanode = blockInfo.getDatanode(i); > if (datanode == null) { >continue; > } > {code} > but in caller NamenodeFsck#blockIdCK have no !null checker, we add checker > just like CacheReplicationMonitor#addNewPendingCached > {code:java} > DatanodeDescriptor dn = blockInfo.getDatanode(idx); > if (dn == null) { > continue; > } > {code} > But due to we are not very familiar with CASSANDRA, hope some expert can > review it. > Thanks -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13451) Fix Some Potential NPE
[ https://issues.apache.org/jira/browse/HDFS-13451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lujie updated HDFS-13451: - Status: Patch Available (was: Open) > Fix Some Potential NPE > -- > > Key: HDFS-13451 > URL: https://issues.apache.org/jira/browse/HDFS-13451 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: lujie >Priority: Major > Attachments: HDFS-13451_1.patch > > > We have developed a static analysis tool > [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find some potential > NPE. Our analysis shows that some callees may return null in corner case(e.g. > node crash , IO exception), some of their callers have _!=null_ check but > some do not have. In this issue we post a patch which can add !=null based > on existed !=null check. For example: > callee BlockInfo#getDatanode may return null: > {code:java} > public DatanodeDescriptor getDatanode(int index) { > DatanodeStorageInfo storage = getStorageInfo(index); >return storage == null ? null : storage.getDatanodeDescriptor(); > } > {code} > it has 4 callers, 3 of them have !=null checker, like in > CacheReplicationMonitor#addNewPendingCached: > {code:java} > DatanodeDescriptor datanode = blockInfo.getDatanode(i); > if (datanode == null) { >continue; > } > {code} > but in caller NamenodeFsck#blockIdCK have no !null checker, we add checker > just like CacheReplicationMonitor#addNewPendingCached > {code:java} > DatanodeDescriptor dn = blockInfo.getDatanode(idx); > if (dn == null) { > continue; > } > {code} > But due to we are not very familiar with CASSANDRA, hope some expert can > review it. > Thanks -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13451) Fix Some Potential NPE
[ https://issues.apache.org/jira/browse/HDFS-13451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lujie updated HDFS-13451: - Description: We have developed a static analysis tool [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find some potential NPE. Our analysis shows that some callees may return null in corner cases (e.g. node crash, IO exception); some of their callers have a _!= null_ check but some do not. In this issue we post a patch which adds the missing != null checks based on the existing ones. For example: the callee BlockInfo#getDatanode may return null: {code:java} public DatanodeDescriptor getDatanode(int index) { DatanodeStorageInfo storage = getStorageInfo(index); return storage == null ? null : storage.getDatanodeDescriptor(); } {code} It has 4 callers; 3 of them have a != null check, as in CacheReplicationMonitor#addNewPendingCached: {code:java} DatanodeDescriptor datanode = blockInfo.getDatanode(i); if (datanode == null) { continue; } {code} but the caller NamenodeFsck#blockIdCK has no null check, so we add one just like the one in CacheReplicationMonitor#addNewPendingCached: {code:java} DatanodeDescriptor dn = blockInfo.getDatanode(idx); if (dn == null) { continue; } {code} Since we are not very familiar with CASSANDRA, we hope some expert can review it. Thanks was: We have developed a static analysis tool [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find some potential NPE. Our analysis shows that some callees may return null in corner case(e.g. node crash , IO exception), some of their callers have _!=null_ check but some do not have. In this issue we post a patch which can add !=null based on existed !=null check. For example: callee BlockInfo#getDatanode may return null: {code:java} public DatanodeDescriptor getDatanode(int index) { DatanodeStorageInfo storage = getStorageInfo(index); return storage == null ? 
null : storage.getDatanodeDescriptor(); } {code} it has 4 callers, 3 of them have \!=null checker, like in CacheReplicationMonitor#addNewPendingCached: {code:java} DatanodeDescriptor datanode = blockInfo.getDatanode(i); if (datanode == null) { continue; } {code} but in caller NamenodeFsck#blockIdCK have no \!null checker, we add checker just like CacheReplicationMonitor#addNewPendingCached {code:java} DatanodeDescriptor dn = blockInfo.getDatanode(idx); if (dn == null) { continue; } {code} But due to we are not very familiar with CASSANDRA, hope some expert can review it. Thanks > Fix Some Potential NPE > -- > > Key: HDFS-13451 > URL: https://issues.apache.org/jira/browse/HDFS-13451 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: lujie >Priority: Major > > We have developed a static analysis tool > [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find some potential > NPE. Our analysis shows that some callees may return null in corner cases (e.g. > node crash, IO exception); some of their callers have a _!= null_ check but > some do not. In this issue we post a patch which adds the missing != null > checks based on the existing ones. For example: > the callee BlockInfo#getDatanode may return null: > {code:java} > public DatanodeDescriptor getDatanode(int index) { > DatanodeStorageInfo storage = getStorageInfo(index); >return storage == null ? null : storage.getDatanodeDescriptor(); > } > {code} > It has 4 callers; 3 of them have a != null check, as in > CacheReplicationMonitor#addNewPendingCached: > {code:java} > DatanodeDescriptor datanode = blockInfo.getDatanode(i); > if (datanode == null) { >continue; > } > {code} > but the caller NamenodeFsck#blockIdCK has no null check, so we add one just > like the one in CacheReplicationMonitor#addNewPendingCached: > {code:java} > DatanodeDescriptor dn = blockInfo.getDatanode(idx); > if (dn == null) { > continue; > } > {code} > Since we are not very familiar with CASSANDRA, we hope some expert can > review it. 
> Thanks -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13451) Fix Some Potential NPE
lujie created HDFS-13451: Summary: Fix Some Potential NPE Key: HDFS-13451 URL: https://issues.apache.org/jira/browse/HDFS-13451 Project: Hadoop HDFS Issue Type: Bug Reporter: lujie We have developed a static analysis tool [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find some potential NPE. Our analysis shows that some callees may return null in corner cases (e.g. node crash, IO exception); some of their callers have a _!= null_ check but some do not. In this issue we post a patch which adds the missing != null checks based on the existing ones. For example: the callee BlockInfo#getDatanode may return null: {code:java} public DatanodeDescriptor getDatanode(int index) { DatanodeStorageInfo storage = getStorageInfo(index); return storage == null ? null : storage.getDatanodeDescriptor(); } {code} It has 4 callers; 3 of them have a != null check, as in CacheReplicationMonitor#addNewPendingCached: {code:java} DatanodeDescriptor datanode = blockInfo.getDatanode(i); if (datanode == null) { continue; } {code} but the caller NamenodeFsck#blockIdCK has no null check, so we add one just like the one in CacheReplicationMonitor#addNewPendingCached: {code:java} DatanodeDescriptor dn = blockInfo.getDatanode(idx); if (dn == null) { continue; } {code} Since we are not very familiar with CASSANDRA, we hope some expert can review it. Thanks -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
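The fix pattern above can be illustrated with a small self-contained sketch. The types here are hypothetical simplifications (strings standing in for BlockInfo's storages and DatanodeDescriptor), not the real HDFS classes; only the skip-on-null guard is the point:

```java
import java.util.ArrayList;
import java.util.List;

public class DatanodeNullCheckExample {

    // Simplified stand-in for BlockInfo#getDatanode: returns null when the
    // storage slot at the given index is empty, as in the snippet above.
    static String getDatanode(String[] storages, int index) {
        String storage = storages[index];
        return storage == null ? null : storage;
    }

    // Caller following CacheReplicationMonitor#addNewPendingCached: skip
    // null entries instead of dereferencing them -- the same guard the
    // patch adds to NamenodeFsck#blockIdCK.
    static List<String> collectLiveDatanodes(String[] storages) {
        List<String> live = new ArrayList<>();
        for (int idx = 0; idx < storages.length; idx++) {
            String dn = getDatanode(storages, idx);
            if (dn == null) {
                continue; // corner case: node crash / IO exception left a gap
            }
            live.add(dn);
        }
        return live;
    }
}
```

With an input of {"dn1", null, "dn2"} the loop simply skips the empty slot, whereas an unguarded dn.something() call on the null entry would throw the NPE the patch is preventing.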