[jira] [Commented] (HDFS-14935) Refactor DFSNetworkTopology#isNodeInScope
[ https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16960420#comment-16960420 ] Hadoop QA commented on HDFS-14935:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 47s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| trunk Compile Tests ||
| +1 | mvninstall | 19m 11s | trunk passed |
| +1 | compile | 1m 2s | trunk passed |
| +1 | checkstyle | 0m 43s | trunk passed |
| +1 | mvnsite | 1m 4s | trunk passed |
| +1 | shadedclient | 14m 23s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 13s | trunk passed |
| +1 | javadoc | 1m 12s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 1m 1s | the patch passed |
| +1 | compile | 0m 55s | the patch passed |
| +1 | javac | 0m 55s | the patch passed |
| +1 | checkstyle | 0m 37s | the patch passed |
| +1 | mvnsite | 1m 2s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 26s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 21s | the patch passed |
| +1 | javadoc | 1m 9s | the patch passed |
|| Other Tests ||
| -1 | unit | 98m 35s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 34s | The patch does not generate ASF License warnings. |
| | | 160m 10s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
| | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14935 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12984120/HDFS-14935.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 6b29065fb540 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7be5508 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/28185/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/28185/testReport/ |
| Max. process+thread count | 2924 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U:
[jira] [Updated] (HDDS-2273) Avoid buffer copying in GrpcReplicationService
[ https://issues.apache.org/jira/browse/HDDS-2273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Attila Doroszlai updated HDDS-2273: --- Status: Patch Available (was: In Progress) > Avoid buffer copying in GrpcReplicationService > -- > > Key: HDDS-2273 > URL: https://issues.apache.org/jira/browse/HDDS-2273 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Tsz-wo Sze >Assignee: Attila Doroszlai >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > In GrpcOutputStream, it writes data to a ByteArrayOutputStream and copies > them to a ByteString. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
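The copy described in the issue can be seen in miniature with plain JDK types. Below is a sketch with hypothetical names, not the actual Ozone code: buffering into a ByteArrayOutputStream and then calling toByteArray() allocates and fills a second array, whereas handing each chunk straight to the downstream sink avoids the caller-side duplication. (Protobuf's ByteString.newOutput() offers a comparable streaming path; whether the pull request uses exactly that is not stated here.)

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Arrays;

public class GrpcOutputStreamSketch {

    // Buffer-then-copy: data first lands in the BAOS's internal array,
    // then toByteArray() allocates a second array and copies everything into it.
    static byte[] bufferThenCopy(byte[] chunk) {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        buffer.write(chunk, 0, chunk.length);
        return buffer.toByteArray();
    }

    // Streaming alternative: hand each chunk straight to the downstream sink,
    // so the caller never materializes an intermediate array.
    static void streamThrough(byte[] chunk, OutputStream sink) throws IOException {
        sink.write(chunk);
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "hello".getBytes();

        byte[] copied = bufferThenCopy(data);
        // the copy is a distinct array holding equal contents
        if (copied == data || !Arrays.equals(copied, data)) throw new AssertionError();

        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        streamThrough(data, sink);
        if (!Arrays.equals(sink.toByteArray(), data)) throw new AssertionError();
    }
}
```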
[jira] [Updated] (HDDS-2272) Avoid buffer copying in GrpcReplicationClient
[ https://issues.apache.org/jira/browse/HDDS-2272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Attila Doroszlai updated HDDS-2272: --- Status: Patch Available (was: In Progress) > Avoid buffer copying in GrpcReplicationClient > - > > Key: HDDS-2272 > URL: https://issues.apache.org/jira/browse/HDDS-2272 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Tsz-wo Sze >Assignee: Attila Doroszlai >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > In StreamDownloader.onNext, CopyContainerResponseProto is copied to a byte[] > and then it is written out to the stream. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
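The same pattern appears on the receive side. A JDK-only sketch (illustrative names, not the actual client code): materializing a byte[] from an incoming buffer before writing costs one extra allocation and copy per chunk, while writing the buffer through a channel lets the stream be filled without a caller-managed array. In protobuf terms, ByteString#writeTo(OutputStream) plays the role of the direct path, assuming that is the shape of the fix.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.util.Arrays;

public class StreamDownloaderSketch {

    // Pattern described in the issue: build a byte[] from the incoming
    // buffer, then write that array to the stream (extra allocation + copy).
    static void writeViaCopy(ByteBuffer chunk, ByteArrayOutputStream out) {
        byte[] copy = new byte[chunk.remaining()];
        chunk.get(copy);
        out.write(copy, 0, copy.length);
    }

    // Alternative: drain the buffer into the stream via a channel,
    // so the caller allocates no byte[] of its own.
    static void writeDirect(ByteBuffer chunk, OutputStream out) throws IOException {
        Channels.newChannel(out).write(chunk);
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "chunk".getBytes();

        ByteArrayOutputStream a = new ByteArrayOutputStream();
        writeViaCopy(ByteBuffer.wrap(data), a);

        ByteArrayOutputStream b = new ByteArrayOutputStream();
        writeDirect(ByteBuffer.wrap(data), b);

        if (!Arrays.equals(a.toByteArray(), data)) throw new AssertionError();
        if (!Arrays.equals(b.toByteArray(), data)) throw new AssertionError();
    }
}
```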
[jira] [Updated] (HDDS-2368) TestOzoneManagerDoubleBufferWithDummyResponse failing intermittently
[ https://issues.apache.org/jira/browse/HDDS-2368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Attila Doroszlai updated HDDS-2368: --- Status: Patch Available (was: Open) > TestOzoneManagerDoubleBufferWithDummyResponse failing intermittently > > > Key: HDDS-2368 > URL: https://issues.apache.org/jira/browse/HDDS-2368 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: test >Affects Versions: 0.5.0 >Reporter: Attila Doroszlai >Assignee: Attila Doroszlai >Priority: Minor > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > {noformat} > Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.479 s <<< > FAILURE! - in > org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse > testDoubleBufferWithDummyResponse(org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse) > Time elapsed: 1.404 s <<< FAILURE! > java.lang.AssertionError > ... > at > org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse.testDoubleBufferWithDummyResponse(TestOzoneManagerDoubleBufferWithDummyResponse.java:116) > {noformat} > * > https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2345-jsf2s/unit/hadoop-ozone/ozone-manager/org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse.txt > > * > https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2272-bfh6s/unit/hadoop-ozone/ozone-manager/org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse.txt > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-2368) TestOzoneManagerDoubleBufferWithDummyResponse failing intermittently
[ https://issues.apache.org/jira/browse/HDDS-2368?focusedWorklogId=334567=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-334567 ] ASF GitHub Bot logged work on HDDS-2368: Author: ASF GitHub Bot Created on: 26/Oct/19 19:20 Start Date: 26/Oct/19 19:20 Worklog Time Spent: 10m Work Description: adoroszlai commented on pull request #93: HDDS-2368. TestOzoneManagerDoubleBufferWithDummyResponse failing intermittently URL: https://github.com/apache/hadoop-ozone/pull/93 ## What changes were proposed in this pull request? Fix TestOzoneManagerDoubleBufferWithDummyResponse, which (very) intermittently fails at: ``` testDoubleBufferWithDummyResponse(TestOzoneManagerDoubleBufferWithDummyResponse.java:116) ``` * https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2345-jsf2s/unit/hadoop-ozone/ozone-manager/org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse.txt * https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2272-bfh6s/unit/hadoop-ozone/ozone-manager/org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse.txt Variables are set in one thread (`OzoneManagerDoubleBuffer#daemon`): https://github.com/apache/hadoop-ozone/blob/6a6558025ec35203bfd61839aadf7b2a26520222/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java#L160-L186 and checked in the test thread: https://github.com/apache/hadoop-ozone/blob/6a6558025ec35203bfd61839aadf7b2a26520222/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBufferWithDummyResponse.java#L110-L117 The test waits for the first variable (`flushedTransactionCount`) to reach `bucketCount`. The other variable (`totalNumOfFlushedTransactions`) is set later, so the test thread may reach the assertion before the updater thread reaches `updateMetrics`. 
The proposed fix swaps check of variables in the test: wait for the second variable and let the assertion check the first one. This guarantees that both of them are updated by the time `waitFor` returns. https://issues.apache.org/jira/browse/HDDS-2368 ## How was this patch tested? Reproduced the failure by adding some sleep in `OzoneManagerDoubleBuffer#updateMetrics`. Verified the test passes with the fix despite the sleep. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 334567) Remaining Estimate: 0h Time Spent: 10m > TestOzoneManagerDoubleBufferWithDummyResponse failing intermittently > > > Key: HDDS-2368 > URL: https://issues.apache.org/jira/browse/HDDS-2368 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: test >Affects Versions: 0.5.0 >Reporter: Attila Doroszlai >Assignee: Attila Doroszlai >Priority: Minor > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > {noformat} > Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.479 s <<< > FAILURE! - in > org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse > testDoubleBufferWithDummyResponse(org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse) > Time elapsed: 1.404 s <<< FAILURE! > java.lang.AssertionError > ... 
> at > org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse.testDoubleBufferWithDummyResponse(TestOzoneManagerDoubleBufferWithDummyResponse.java:116) > {noformat} > * > https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2345-jsf2s/unit/hadoop-ozone/ozone-manager/org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse.txt > > * > https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2272-bfh6s/unit/hadoop-ozone/ozone-manager/org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse.txt > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
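The race described in the pull request can be reduced to a small self-contained sketch. The counter names mirror the ones in the report, but the classes below are stand-ins rather than the Ozone Manager code: a flusher thread updates two counters in order, so a test that waits on the first counter can assert inside the window before the second write; waiting on the later-written counter guarantees both are visible.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.BooleanSupplier;

public class FlushRaceSketch {
    // Stand-ins for the two counters named in the report.
    static final AtomicLong flushedTransactionCount = new AtomicLong();
    static final AtomicLong totalNumOfFlushedTransactions = new AtomicLong();

    // The flusher updates the first counter, then (later) the second one
    // inside updateMetrics(); a waiter keyed on the FIRST counter can run
    // its assertion inside that window.
    static void flush(int n) {
        flushedTransactionCount.addAndGet(n);
        totalNumOfFlushedTransactions.addAndGet(n);
    }

    static void waitFor(BooleanSupplier condition) throws InterruptedException {
        while (!condition.getAsBoolean()) {
            Thread.sleep(1);
        }
    }

    public static void main(String[] args) throws Exception {
        final int expected = 10;
        Thread flusher = new Thread(() -> flush(expected));
        flusher.start();
        // The fix: wait on the counter that is written LAST. Its volatile
        // read establishes happens-before, so once it is visible the
        // earlier write to the first counter is guaranteed visible too.
        waitFor(() -> totalNumOfFlushedTransactions.get() >= expected);
        if (flushedTransactionCount.get() != expected) throw new AssertionError();
        flusher.join();
    }
}
```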
[jira] [Updated] (HDDS-2368) TestOzoneManagerDoubleBufferWithDummyResponse failing intermittently
[ https://issues.apache.org/jira/browse/HDDS-2368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDDS-2368: - Labels: pull-request-available (was: ) > TestOzoneManagerDoubleBufferWithDummyResponse failing intermittently > > > Key: HDDS-2368 > URL: https://issues.apache.org/jira/browse/HDDS-2368 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: test >Affects Versions: 0.5.0 >Reporter: Attila Doroszlai >Assignee: Attila Doroszlai >Priority: Minor > Labels: pull-request-available > > {noformat} > Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.479 s <<< > FAILURE! - in > org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse > testDoubleBufferWithDummyResponse(org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse) > Time elapsed: 1.404 s <<< FAILURE! > java.lang.AssertionError > ... > at > org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse.testDoubleBufferWithDummyResponse(TestOzoneManagerDoubleBufferWithDummyResponse.java:116) > {noformat} > * > https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2345-jsf2s/unit/hadoop-ozone/ozone-manager/org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse.txt > > * > https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2272-bfh6s/unit/hadoop-ozone/ozone-manager/org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse.txt > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-2368) TestOzoneManagerDoubleBufferWithDummyResponse failing intermittently
Attila Doroszlai created HDDS-2368: -- Summary: TestOzoneManagerDoubleBufferWithDummyResponse failing intermittently Key: HDDS-2368 URL: https://issues.apache.org/jira/browse/HDDS-2368 Project: Hadoop Distributed Data Store Issue Type: Bug Components: test Affects Versions: 0.5.0 Reporter: Attila Doroszlai Assignee: Attila Doroszlai {noformat} Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.479 s <<< FAILURE! - in org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse testDoubleBufferWithDummyResponse(org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse) Time elapsed: 1.404 s <<< FAILURE! java.lang.AssertionError ... at org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse.testDoubleBufferWithDummyResponse(TestOzoneManagerDoubleBufferWithDummyResponse.java:116) {noformat} * https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2345-jsf2s/unit/hadoop-ozone/ozone-manager/org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse.txt * https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2272-bfh6s/unit/hadoop-ozone/ozone-manager/org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse.txt -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-2367) TestScmSafeMode fails consistently
Attila Doroszlai created HDDS-2367: -- Summary: TestScmSafeMode fails consistently Key: HDDS-2367 URL: https://issues.apache.org/jira/browse/HDDS-2367 Project: Hadoop Distributed Data Store Issue Type: Bug Components: test Affects Versions: 0.5.0 Reporter: Attila Doroszlai TestScmSafeMode is currently failing consistently: {noformat:title=https://github.com/elek/ozone-ci-q4/blob/master/trunk/trunk-nightly-20191022-cthwk/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.om.TestScmSafeMode.txt} Tests run: 5, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 211.927 s <<< FAILURE! - in org.apache.hadoop.ozone.om.TestScmSafeMode testSCMSafeMode(org.apache.hadoop.ozone.om.TestScmSafeMode) Time elapsed: 81.176 s <<< ERROR! java.util.concurrent.TimeoutException: Timed out waiting for condition. Thread diagnostics: ... at org.apache.hadoop.ozone.MiniOzoneClusterImpl.waitForClusterToBeReady(MiniOzoneClusterImpl.java:144) at org.apache.hadoop.ozone.om.TestScmSafeMode.testSCMSafeMode(TestScmSafeMode.java:222) {noformat} It is passing if Ratis version is reverted to {{3f446aaf}} snapshot. So one or more of the following Ratis commits is triggering the failure: d6d58d0f RATIS-603. Add a method to StateMachine for printing StateMachineLogEntryProto. Contributed by Mukul Kumar Singh debf50ac RATIS-694. Fix checkstyle violations in ratis-metrics. Contributed by Dinesh Chitlangia 2fd6c04d RATIS-692. RaftStorageDirectory.tryLock throws a very deep IOException. ad6d7c3e RATIS-728. TimeoutScheduler for GrpcLogAppender holds on to the AppendEntryRequest till it times out even though request succeeds. Contributed by Tsz-wo Sze. 9c1638db RATIS-704. Invoke sendAsync as soon as OrderedAsync is created. Contributed by Tsz Wo Nicholas Sze. 55cbfbbc RATIS-726. TimeoutScheduler holds on to the raftClientRequest till it times out even though request succeeds. Contributed by Tsz-wo Sze. c76ab77d RATIS-718. TimeoutScheduler throws IllegalStateException. 
Contributed by Tsz Wo Nicholas Sze. 510206dc RATIS-390. Fix package of LogServiceClient 99c33e0b RATIS-723. Fix pluggable metrics impl which cause test failures ced7dbfe RATIS-649. Add metrics related to ClientRequests. Contributed by Aravindan Vijayan. ef12b1f2 RATIS-676. Add a metric in RaftServer to track the Log Commit Index. Contributed by Aravindan Vijayan. f36acabc RATIS-717. NPE thrown on the follower while instantiating RaftLeaderMetrics. Contributed by Aravindan Vijayan. d11320db RATIS-706. Dead lock in GrpcClientRpc. Contributed by Tsz Wo Nicholas Sze. 9cbf1efa RATIS-691. Fix checkstyle violations in ratis-logservice. Contributed by Dinesh Chitlangia b9314825 RATIS-702. Make metrics reporting implementation pluggable. Contributed by Marton Elek 5bb587b2 RATIS-716. RetryCache$CacheEntry$replyFuture holds onto RaftClientRequest until eviction. Contributed by Lokesh Jain. 1053da10 RATIS-703. Intermittent ambiguous attempt(..) method in JavaUtils. Contributed by Henrik Hegardt b49974df RATIS-707. Test failures caused by minTimeout set to zero. Contributed by Siddharth Wagle -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDDS-758) Separate client/server configuration settings
[ https://issues.apache.org/jira/browse/HDDS-758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] YiSheng Lien reassigned HDDS-758: - Assignee: YiSheng Lien > Separate client/server configuration settings > - > > Key: HDDS-758 > URL: https://issues.apache.org/jira/browse/HDDS-758 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Arpit Agarwal >Assignee: YiSheng Lien >Priority: Major > Labels: beta1 > > Ozone should have separate config files for client and server. E.g. > - ozone-site-client.xml > - ozone-site-server.xml > Clients should never load ozone-site-server.xml. And vice versa i.e. servers > should never load ozone-site-client.xml. > This may require duplicating a very small number of settings like OM and SCM > address. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
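As a sketch of what the proposed split might look like (the file name comes from the issue itself; the property shown, the OM address, is one of the few settings the issue says would need duplicating in both files):

```xml
<!-- hypothetical ozone-site-client.xml: only what clients need -->
<configuration>
  <property>
    <name>ozone.om.address</name>
    <value>om-host:9862</value>
  </property>
</configuration>
```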
[jira] [Commented] (HDFS-14935) Refactor DFSNetworkTopology#isNodeInScope
[ https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16960395#comment-16960395 ] Lisheng Sun commented on HDFS-14935: I agree with your idea. The existing code in getNode() already checks excludedScope: {code:java} DFSTopologyNodeImpl root = (DFSTopologyNodeImpl)node; Node excludeRoot = excludedScope == null ? null : getNode(excludedScope);{code} But is it necessary for the private method to validate its own parameters? I updated the patch and uploaded v003. Thank you [~ayushtkn]. > Refactor DFSNetworkTopology#isNodeInScope > - > > Key: HDFS-14935 > URL: https://issues.apache.org/jira/browse/HDFS-14935 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lisheng Sun >Assignee: Lisheng Sun >Priority: Major > Attachments: HDFS-14935.001.patch, HDFS-14935.002.patch, > HDFS-14935.003.patch > > > {code:java} > private boolean isNodeInScope(Node node, String scope) { > if (!scope.endsWith("/")) { > scope += "/"; > } > String nodeLocation = node.getNetworkLocation() + "/"; > return nodeLocation.startsWith(scope); > } > {code} > NodeBase#normalize() is used to normalize scope, > so I refactored DFSNetworkTopology#isNodeInScope.
[jira] [Updated] (HDFS-14935) Refactor DFSNetworkTopology#isNodeInScope
[ https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lisheng Sun updated HDFS-14935: --- Attachment: HDFS-14935.003.patch > Refactor DFSNetworkTopology#isNodeInScope > - > > Key: HDFS-14935 > URL: https://issues.apache.org/jira/browse/HDFS-14935 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lisheng Sun >Assignee: Lisheng Sun >Priority: Major > Attachments: HDFS-14935.001.patch, HDFS-14935.002.patch, > HDFS-14935.003.patch > > > {code:java} > private boolean isNodeInScope(Node node, String scope) { > if (!scope.endsWith("/")) { > scope += "/"; > } > String nodeLocation = node.getNetworkLocation() + "/"; > return nodeLocation.startsWith(scope); > } > {code} > NodeBase#normalize() is used to normalize scope, > so I refactored DFSNetworkTopology#isNodeInScope.
[jira] [Resolved] (HDDS-1505) Remove "ozone.enabled" parameter from ozone configs
[ https://issues.apache.org/jira/browse/HDDS-1505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vivek Ratnavel Subramanian resolved HDDS-1505. -- Resolution: Duplicate > Remove "ozone.enabled" parameter from ozone configs > --- > > Key: HDDS-1505 > URL: https://issues.apache.org/jira/browse/HDDS-1505 > Project: Hadoop Distributed Data Store > Issue Type: Task > Components: Ozone Manager >Affects Versions: 0.4.0 >Reporter: Vivek Ratnavel Subramanian >Assignee: Vivek Ratnavel Subramanian >Priority: Minor > > Remove "ozone.enabled" config as it is no longer needed -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDDS-2361) Ozone Manager init & start command prints out unnecessary line in the beginning.
[ https://issues.apache.org/jira/browse/HDDS-2361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham resolved HDDS-2361. -- Fix Version/s: 0.5.0 Resolution: Fixed > Ozone Manager init & start command prints out unnecessary line in the > beginning. > > > Key: HDDS-2361 > URL: https://issues.apache.org/jira/browse/HDDS-2361 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Aravindan Vijayan >Assignee: YiSheng Lien >Priority: Major > Labels: pull-request-available > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > {code} > [root@avijayan-om-1 ozone-0.5.0-SNAPSHOT]# bin/ozone --daemon start om > Ozone Manager classpath extended by > {code} > We could probably print this line only when extra elements are added to OM > classpath or skip printing this line altogether. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-2365) TestRatisPipelineProvider#testCreatePipelinesDnExclude is flaky
[ https://issues.apache.org/jira/browse/HDDS-2365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-2365: - Fix Version/s: 0.5.0 Resolution: Fixed Status: Resolved (was: Patch Available) > TestRatisPipelineProvider#testCreatePipelinesDnExclude is flaky > --- > > Key: HDDS-2365 > URL: https://issues.apache.org/jira/browse/HDDS-2365 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: test >Reporter: Attila Doroszlai >Assignee: Attila Doroszlai >Priority: Minor > Labels: pull-request-available > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > TestRatisPipelineProvider#testCreatePipelinesDnExclude is flaky, failing in > CI intermittently: > * > https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2360-9pxww/integration/hadoop-ozone/integration-test/org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineProvider.txt > * > https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2352-cxhw9/integration/hadoop-ozone/integration-test/org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineProvider.txt -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-2365) TestRatisPipelineProvider#testCreatePipelinesDnExclude is flaky
[ https://issues.apache.org/jira/browse/HDDS-2365?focusedWorklogId=334533=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-334533 ] ASF GitHub Bot logged work on HDDS-2365: Author: ASF GitHub Bot Created on: 26/Oct/19 14:32 Start Date: 26/Oct/19 14:32 Worklog Time Spent: 10m Work Description: bharatviswa504 commented on pull request #84: HDDS-2365. Fix TestRatisPipelineProvider#testCreatePipelinesDnExclude URL: https://github.com/apache/hadoop-ozone/pull/84 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 334533) Time Spent: 20m (was: 10m) > TestRatisPipelineProvider#testCreatePipelinesDnExclude is flaky > --- > > Key: HDDS-2365 > URL: https://issues.apache.org/jira/browse/HDDS-2365 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: test >Reporter: Attila Doroszlai >Assignee: Attila Doroszlai >Priority: Minor > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > TestRatisPipelineProvider#testCreatePipelinesDnExclude is flaky, failing in > CI intermittently: > * > https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2360-9pxww/integration/hadoop-ozone/integration-test/org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineProvider.txt > * > https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2352-cxhw9/integration/hadoop-ozone/integration-test/org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineProvider.txt -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-2361) Ozone Manager init & start command prints out unnecessary line in the beginning.
[ https://issues.apache.org/jira/browse/HDDS-2361?focusedWorklogId=334532=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-334532 ] ASF GitHub Bot logged work on HDDS-2361: Author: ASF GitHub Bot Created on: 26/Oct/19 14:22 Start Date: 26/Oct/19 14:22 Worklog Time Spent: 10m Work Description: bharatviswa504 commented on pull request #91: HDDS-2361. Ozone Manager init & start command prints out unnecessary line in the beginning. URL: https://github.com/apache/hadoop-ozone/pull/91 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 334532) Time Spent: 20m (was: 10m) > Ozone Manager init & start command prints out unnecessary line in the > beginning. > > > Key: HDDS-2361 > URL: https://issues.apache.org/jira/browse/HDDS-2361 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Aravindan Vijayan >Assignee: YiSheng Lien >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > {code} > [root@avijayan-om-1 ozone-0.5.0-SNAPSHOT]# bin/ozone --daemon start om > Ozone Manager classpath extended by > {code} > We could probably print this line only when extra elements are added to OM > classpath or skip printing this line altogether. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-2345) Add a UT for newly added clone() in OmBucketInfo
[ https://issues.apache.org/jira/browse/HDDS-2345?focusedWorklogId=334528=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-334528 ] ASF GitHub Bot logged work on HDDS-2345: Author: ASF GitHub Bot Created on: 26/Oct/19 13:03 Start Date: 26/Oct/19 13:03 Worklog Time Spent: 10m Work Description: cxorm commented on pull request #92: HDDS-2345. Add a UT for newly added clone() in OmBucketInfo URL: https://github.com/apache/hadoop-ozone/pull/92 ## What changes were proposed in this pull request? Add a UT in ```TestOmBucketInfo.java``` for testing ```copyObject()```. It follows ```TestOmVolumeArgs#testClone```, but does not cover the ACL fields, because ```copyObject()``` in ```OmBucketInfo.java``` does not clone ACLs. ## What is the link to the Apache JIRA https://issues.apache.org/jira/browse/HDDS-2345 ## How was this patch tested? Ran the command in ```hadoop-ozone/```: ```mvn -Dtest=org.apache.hadoop.ozone.om.helpers.TestOmBucketInfo test``` Issue Time Tracking --- Worklog Id: (was: 334528) Remaining Estimate: 0h Time Spent: 10m > Add a UT for newly added clone() in OmBucketInfo > > > Key: HDDS-2345 > URL: https://issues.apache.org/jira/browse/HDDS-2345 > Project: Hadoop Distributed Data Store > Issue Type: Task >Reporter: Bharat Viswanadham >Assignee: YiSheng Lien >Priority: Major > Labels: newbie, pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Add a UT for newly added clone() method in OMBucketInfo as part of HDDS-2333.
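The essence of such a clone/copyObject UT is checking that the copy is independent of the original. A minimal self-contained stand-in (this BucketInfo class is illustrative; the real OmBucketInfo has many more fields and a builder):

```java
import java.util.ArrayList;
import java.util.List;

public class CopyObjectSketch {

    // Minimal stand-in for OmBucketInfo; the real class has many more fields.
    static class BucketInfo {
        final String name;
        final List<String> metadata;

        BucketInfo(String name, List<String> metadata) {
            this.name = name;
            this.metadata = metadata;
        }

        // Deep-copy the mutable state so the copy is independent.
        BucketInfo copyObject() {
            return new BucketInfo(name, new ArrayList<>(metadata));
        }
    }

    public static void main(String[] args) {
        List<String> meta = new ArrayList<>();
        meta.add("k=v");
        BucketInfo original = new BucketInfo("bucket1", meta);
        BucketInfo copy = original.copyObject();

        copy.metadata.add("extra");
        // mutating the copy must not leak into the original
        if (original.metadata.size() != 1 || copy.metadata.size() != 2) {
            throw new AssertionError();
        }
    }
}
```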
[jira] [Updated] (HDDS-2345) Add a UT for newly added clone() in OmBucketInfo
[ https://issues.apache.org/jira/browse/HDDS-2345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDDS-2345: - Labels: newbie pull-request-available (was: newbie) > Add a UT for newly added clone() in OmBucketInfo > > > Key: HDDS-2345 > URL: https://issues.apache.org/jira/browse/HDDS-2345 > Project: Hadoop Distributed Data Store > Issue Type: Task >Reporter: Bharat Viswanadham >Assignee: YiSheng Lien >Priority: Major > Labels: newbie, pull-request-available > > Add a UT for newly added clone() method in OMBucketInfo as part of HDDS-2333. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14935) Refactor DFSNetworkTopology#isNodeInScope
[ https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960321#comment-16960321 ] Ayush Saxena commented on HDFS-14935: - Yes, it should stop. If execution has moved past that point, it is certain that the conditions that throw the exception are not true; if we then call normalize() again, we redo those checks unnecessarily, even though we already know they don't hold. > Refactor DFSNetworkTopology#isNodeInScope > - > > Key: HDFS-14935 > URL: https://issues.apache.org/jira/browse/HDFS-14935 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lisheng Sun >Assignee: Lisheng Sun >Priority: Major > Attachments: HDFS-14935.001.patch, HDFS-14935.002.patch > > > {code:java} > private boolean isNodeInScope(Node node, String scope) { > if (!scope.endsWith("/")) { > scope += "/"; > } > String nodeLocation = node.getNetworkLocation() + "/"; > return nodeLocation.startsWith(scope); > } > {code} > NodeBase#normalize() is used to normalize scope, > so I refactored DFSNetworkTopology#isNodeInScope.
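The refactor under discussion can be sketched in plain Java. This is a hedged illustration, not the actual HDFS-14935 patch: the `Node` parameter is replaced by its network-location string, and `normalize()` here is a simplified stand-in for `NodeBase#normalize`, so the validation (and its `IllegalArgumentException`) runs exactly once.

```java
// Sketch of DFSNetworkTopology#isNodeInScope after the proposed refactor.
public class IsNodeInScopeSketch {

  // Simplified stand-in for NodeBase#normalize: validates a network
  // location and strips a trailing separator.
  static String normalize(String path) {
    if (path == null || path.length() == 0) {
      return "";                                   // root scope
    }
    if (path.charAt(0) != '/') {
      throw new IllegalArgumentException(
          "Network location must start with /: " + path);
    }
    int len = path.length();
    return path.charAt(len - 1) == '/' ? path.substring(0, len - 1) : path;
  }

  // Normalize the scope once, then do the prefix check with a trailing
  // "/" so that scope "/d1/r1" does not match location "/d1/r12".
  static boolean isNodeInScope(String nodeLocation, String scope) {
    String normalized = normalize(scope) + "/";
    return (nodeLocation + "/").startsWith(normalized);
  }
}
```

An invalid scope (one not starting with `/`) now fails fast in `normalize()` rather than being silently prefix-compared.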
[jira] [Commented] (HDFS-14908) LeaseManager should check parent-child relationship when filter open files.
[ https://issues.apache.org/jira/browse/HDFS-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16960312#comment-16960312 ] Hadoop QA commented on HDFS-14908: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 51s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 36s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 42s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 402 unchanged - 0 fixed = 403 total (was 402) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 15s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 40s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}161m 32s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDistributedFileSystem | | | hadoop.hdfs.tools.TestDFSZKFailoverController | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 | | JIRA Issue | HDFS-14908 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12984099/HDFS-14908.004.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 67ddbdfe1187 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7be5508 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_222 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/28184/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/28184/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/28184/testReport/ | | Max. process+thread
[jira] [Commented] (HDFS-14935) Refactor DFSNetworkTopology#isNodeInScope
[ https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960293#comment-16960293 ] Lisheng Sun commented on HDFS-14935: [~ayushtkn] I think that if there is an IllegalArgumentException, it should be thrown and stop execution, as NetworkTopology#countNumOfAvailableNodes() also does. {code:java} @VisibleForTesting public int countNumOfAvailableNodes(String scope, Collection excludedNodes) { boolean isExcluded = false; if (scope.startsWith("~")) { isExcluded = true; scope = scope.substring(1); } scope = NodeBase.normalize(scope); ... }{code} Please correct me if I am wrong. Thank you [~ayushtkn]
[jira] [Work started] (HDDS-2345) Add a UT for newly added clone() in OmBucketInfo
[ https://issues.apache.org/jira/browse/HDDS-2345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDDS-2345 started by YiSheng Lien. -- > Add a UT for newly added clone() in OmBucketInfo > > > Key: HDDS-2345 > URL: https://issues.apache.org/jira/browse/HDDS-2345 > Project: Hadoop Distributed Data Store > Issue Type: Task >Reporter: Bharat Viswanadham >Assignee: YiSheng Lien >Priority: Major > Labels: newbie > > Add a UT for newly added clone() method in OMBucketInfo as part of HDDS-2333. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14908) LeaseManager should check parent-child relationship when filter open files.
[ https://issues.apache.org/jira/browse/HDFS-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960284#comment-16960284 ] Jinglun commented on HDFS-14908: Hi [~hexiaoqiao], v01 is not the final version of the approach using String.startsWith() and character comparison; I uploaded v04 as the final version of *[startsWithAndCharAt]*. Hi [~elgoiri], v02~v04 use different ways to implement DFSUtil.isParent(). {quote}v04 uses String.startsWith() + char comparison. *[startsWithAndCharAt]* v02 uses String.startsWith(). *[startsWith]* v03 uses an optimized startsWith(). *[isParent]* v02 + v03 are designed for common use, while v04 only covers the case of listOpenFiles(). {quote} [~hexiaoqiao] prefers *[startsWithAndCharAt/v04]* because it is more readable and the performance difference is small. I'd prefer *[isParent/v03]* because I want DFSUtil.isParent() to be a common method; with *[startsWithAndCharAt/v04]* it wouldn't be, because it doesn't cover all the cases. *[isParent/v03]* also has better performance, especially in case 2 (case 2 means the path ends with a '/'). > LeaseManager should check parent-child relationship when filter open files. > --- > > Key: HDFS-14908 > URL: https://issues.apache.org/jira/browse/HDFS-14908 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.0, 3.0.1 >Reporter: Jinglun >Assignee: Jinglun >Priority: Minor > Attachments: HDFS-14908.001.patch, HDFS-14908.002.patch, > HDFS-14908.003.patch, HDFS-14908.004.patch, Test.java, TestV2.java, > TestV3.java > > > Now when doing listOpenFiles(), LeaseManager only checks whether the filter > path is the prefix of the open files. We should check whether the filter path > is the parent/ancestor of the open files.
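The "startsWith + charAt" variant (v04) discussed above can be sketched in plain Java. The method name and exact checks are illustrative assumptions, not the actual patch: the point is that a plain prefix test is insufficient, because "/a/bb" starts with "/a/b" even though "/a/b" is not its parent, so a '/' must also be required at the boundary.

```java
// Sketch of a parent/ancestor path check in the spirit of variant v04.
public class IsParentSketch {

  // True iff 'parent' is a proper ancestor of 'child' in the path tree.
  static boolean isParent(String parent, String child) {
    if (parent.endsWith("/")) {                    // case 2: trailing '/'
      parent = parent.substring(0, parent.length() - 1);
    }
    if (parent.isEmpty()) {
      return child.startsWith("/");                // root is everyone's ancestor
    }
    return child.startsWith(parent)
        && child.length() > parent.length()
        && child.charAt(parent.length()) == '/';   // boundary must be a '/'
  }
}
```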
[jira] [Updated] (HDFS-14908) LeaseManager should check parent-child relationship when filter open files.
[ https://issues.apache.org/jira/browse/HDFS-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jinglun updated HDFS-14908: --- Attachment: HDFS-14908.004.patch > LeaseManager should check parent-child relationship when filter open files. > --- > > Key: HDFS-14908 > URL: https://issues.apache.org/jira/browse/HDFS-14908 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.0, 3.0.1 >Reporter: Jinglun >Assignee: Jinglun >Priority: Minor > Attachments: HDFS-14908.001.patch, HDFS-14908.002.patch, > HDFS-14908.003.patch, HDFS-14908.004.patch, Test.java, TestV2.java, > TestV3.java > > > Now when doing listOpenFiles(), LeaseManager only checks whether the filter > path is the prefix of the open files. We should check whether the filter path > is the parent/ancestor of the open files. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1600) Add userName and IPAddress as part of OMRequest.
[ https://issues.apache.org/jira/browse/HDDS-1600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960283#comment-16960283 ] YiSheng Lien commented on HDDS-1600: Thanks [~bharat], these comments are very helpful for fixing HDDS-1643 > Add userName and IPAddress as part of OMRequest. > > > Key: HDDS-1600 > URL: https://issues.apache.org/jira/browse/HDDS-1600 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Labels: pull-request-available > Fix For: 0.4.1, 0.5.0 > > Time Spent: 5.5h > Remaining Estimate: 0h > > In OM HA, the actual execution of the request happens under the gRPC context, so the UGI object we retrieve from ProtobufRpcEngine.Server.getRemoteUser() will not be available; the same applies to ProtobufRpcEngine.Server.getRemoteIp(). So, during preExecute (which happens under the RPC context), extract the userName and IPAddress, add them to the OMRequest, and then send the request to the Ratis server.
[jira] [Updated] (HDDS-2361) Ozone Manager init & start command prints out unnecessary line in the beginning.
[ https://issues.apache.org/jira/browse/HDDS-2361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDDS-2361: - Labels: pull-request-available (was: ) > Ozone Manager init & start command prints out unnecessary line in the > beginning. > > > Key: HDDS-2361 > URL: https://issues.apache.org/jira/browse/HDDS-2361 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Aravindan Vijayan >Assignee: YiSheng Lien >Priority: Major > Labels: pull-request-available > > {code} > [root@avijayan-om-1 ozone-0.5.0-SNAPSHOT]# bin/ozone --daemon start om > Ozone Manager classpath extended by > {code} > We could probably print this line only when extra elements are added to OM > classpath or skip printing this line altogether. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-2361) Ozone Manager init & start command prints out unnecessary line in the beginning.
[ https://issues.apache.org/jira/browse/HDDS-2361?focusedWorklogId=334498&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-334498 ] ASF GitHub Bot logged work on HDDS-2361: Author: ASF GitHub Bot Created on: 26/Oct/19 06:41 Start Date: 26/Oct/19 06:41 Worklog Time Spent: 10m Work Description: cxorm commented on pull request #91: HDDS-2361. Ozone Manager init & start command prints out unnecessary line in the beginning. URL: https://github.com/apache/hadoop-ozone/pull/91 If OZONE_MANAGER_CLASSPATH is empty, we no longer print the line. ## What changes were proposed in this pull request? Modify the if-condition in ```hadoop-ozone-manager.sh``` ## What is the link to the Apache JIRA https://issues.apache.org/jira/browse/HDDS-2361 ## How was this patch tested? Ran the command in ```hadoop-ozone/```: ```mvn clean package -Pdist -Dtar -DskipTests``` And ran the commands after deployment: ```ozone --daemon start om``` ```ozone --daemon stop om``` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 334498) Remaining Estimate: 0h Time Spent: 10m > Ozone Manager init & start command prints out unnecessary line in the > beginning. > > > Key: HDDS-2361 > URL: https://issues.apache.org/jira/browse/HDDS-2361 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Aravindan Vijayan >Assignee: YiSheng Lien >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > {code} > [root@avijayan-om-1 ozone-0.5.0-SNAPSHOT]# bin/ozone --daemon start om > Ozone Manager classpath extended by > {code} > We could probably print this line only when extra elements are added to the OM > classpath, or skip printing this line altogether.