[jira] [Work logged] (HDFS-16616) Remove the use of Sets#newHashSet and Sets#newTreeSet
[ https://issues.apache.org/jira/browse/HDFS-16616?focusedWorklogId=781920&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-781920 ]

ASF GitHub Bot logged work on HDFS-16616:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 16/Jun/22 05:48
            Start Date: 16/Jun/22 05:48
    Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on PR #4400:
URL: https://github.com/apache/hadoop/pull/4400#issuecomment-1157258588

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------|:-------:|
| +0 :ok: | reexec | 1m 4s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 10 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 27s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 27m 37s | | trunk passed |
| +1 :green_heart: | compile | 7m 17s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | compile | 6m 30s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 36s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 37s | | trunk passed |
| -1 :x: | javadoc | 1m 22s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4400/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in trunk failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. |
| +1 :green_heart: | javadoc | 2m 54s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 5m 59s | | trunk passed |
| +1 :green_heart: | shadedclient | 25m 36s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 51s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 19s | | the patch passed |
| +1 :green_heart: | compile | 8m 48s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javac | 8m 48s | | the patch passed |
| +1 :green_heart: | compile | 7m 1s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 7m 1s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 22s | | hadoop-hdfs-project: The patch generated 0 new + 385 unchanged - 1 fixed = 385 total (was 386) |
| +1 :green_heart: | mvnsite | 2m 24s | | the patch passed |
| -1 :x: | javadoc | 1m 10s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4400/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in the patch failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. |
| +1 :green_heart: | javadoc | 2m 39s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 5m 18s | | the patch passed |
| +1 :green_heart: | shadedclient | 24m 29s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| -1 :x: | unit | 457m 48s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4400/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | unit | 39m 0s | | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 1m 20s | | The patch does not generate ASF License warnings. |
| | | 656m 26s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41
[jira] [Work logged] (HDFS-16632) java.io.IOException: Version Mismatch (Expected: 28, Received: 520 )
[ https://issues.apache.org/jira/browse/HDFS-16632?focusedWorklogId=781910&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-781910 ]

ASF GitHub Bot logged work on HDFS-16632:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 16/Jun/22 04:24
            Start Date: 16/Jun/22 04:24
    Worklog Time Spent: 10m

Work Description: jojochuang commented on PR #4185:
URL: https://github.com/apache/hadoop/pull/4185#issuecomment-1157216878

There are tons of WARNINGs in the ctest output. Would you check whether they are related at all?
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4185/1/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt

Issue Time Tracking
-------------------
            Worklog Id: (was: 781910)
    Remaining Estimate: 503h 40m (was: 503h 50m)
            Time Spent: 20m (was: 10m)

> java.io.IOException: Version Mismatch (Expected: 28, Received: 520 )
> --------------------------------------------------------------------
>
>                 Key: HDFS-16632
>                 URL: https://issues.apache.org/jira/browse/HDFS-16632
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs++, native
>    Affects Versions: 3.3.0, 3.3.1, 3.2.3, 3.3.2
>         Environment: NAME="openEuler"
>                      VERSION="20.03 (LTS-SP1)"
>                      ID="openEuler"
>                      VERSION_ID="20.03"
>            Reporter: cnnc
>            Priority: Major
>              Labels: libhdfs, libhdfscpp, pull-request-available
>         Attachments: test1M.cpp
>
>   Original Estimate: 504h
>          Time Spent: 20m
>  Remaining Estimate: 503h 40m
>
> Reading 1 MB from HDFS with libhdfspp makes the DataNode report an error like:
>
> 2022-04-12 20:10:21,872 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: server1:9866:DataXceiver error processing READ_BLOCK operation src: /90.90.43.114:47956 dst: /90.90.43.114:9866
> java.io.IOException: Version Mismatch (Expected: 28, Received: 520 )
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:74)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
>     at java.lang.Thread.run(Thread.java:748)
> 2022-04-12 20:13:27,615 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: server1:9866:DataXceiver error processing READ_BLOCK operation src: /90.90.43.114:48142 dst: /90.90.43.114:9866
> java.io.IOException: Version Mismatch (Expected: 28, Received: 520 )
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:74)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
>     at java.lang.Thread.run(Thread.java:748)
>
> Steps to reproduce:
> (1) Compile test1M.cpp with:
> g++ test1M.cpp -lprotobuf -lhdfspp_static -lhdfs -lpthread -lsasl2 -lcrypto -llz4 -I./ -L./ -o test1M
> (2) Execute test1M with:
> ./test1M REAL_HDFS_FILE_PATH
> (3) Check the Hadoop logs; you will find error info like:
> java.io.IOException: Version Mismatch (Expected: 28, Received: 520 )
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:74)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
>     at java.lang.Thread.run(Thread.java:748)

-- This message was sent by Atlassian Jira (v8.20.7#820007) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
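The received value 520 hints at the likely failure mode: Receiver#readOp parses the first two bytes of each data-transfer request as a big-endian short and compares them with the expected protocol version, 28. Since 520 is 0x0208, the DataNode saw the bytes 0x02 0x08 where the version short should have been, i.e. the client put something other than the version first on the wire. A minimal, self-contained sketch of that decoding (class and method names here are illustrative, not Hadoop APIs):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

public class VersionMismatchDemo {
    // The protocol version the DataNode expects, per the report.
    static final short DATA_TRANSFER_VERSION = 28;

    // Mimics how the first two wire bytes are parsed: a big-endian short.
    static short readVersion(byte[] wireBytes) {
        try {
            return new DataInputStream(new ByteArrayInputStream(wireBytes)).readShort();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // A well-formed request starts with 0x00 0x1C, which decodes to 28.
        short ok = readVersion(new byte[] {0x00, 0x1C});
        // The bytes 0x02 0x08 (e.g. the start of a misplaced payload) decode to 520.
        short bad = readVersion(new byte[] {0x02, 0x08});
        System.out.println(ok + " " + bad); // 28 520
    }
}
```

So the stack trace is consistent with the client sending a malformed or misordered request header rather than a genuine version skew between client and server.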
[jira] [Moved] (HDFS-16632) java.io.IOException: Version Mismatch (Expected: 28, Received: 520 )
[ https://issues.apache.org/jira/browse/HDFS-16632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang moved HADOOP-18204 to HDFS-16632:
-------------------------------------------------
          Component/s: hdfs++
                       native
                       (was: hdfs-client)
                  Key: HDFS-16632  (was: HADOOP-18204)
     Target Version/s: (was: 3.3.2)
    Affects Version/s: 3.3.0, 3.3.1, 3.2.3, 3.3.2
                       (was: 3.3.0) (was: 3.3.1) (was: 3.2.3) (was: 3.3.2)
              Project: Hadoop HDFS  (was: Hadoop Common)

> java.io.IOException: Version Mismatch (Expected: 28, Received: 520 )
> --------------------------------------------------------------------
>
>                 Key: HDFS-16632
>                 URL: https://issues.apache.org/jira/browse/HDFS-16632
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs++, native
>    Affects Versions: 3.3.2, 3.2.3, 3.3.1, 3.3.0
>         Environment: NAME="openEuler"
>                      VERSION="20.03 (LTS-SP1)"
>                      ID="openEuler"
>                      VERSION_ID="20.03"
>            Reporter: cnnc
>            Priority: Major
>              Labels: libhdfs, libhdfscpp, pull-request-available
>         Attachments: test1M.cpp
>
>   Original Estimate: 504h
>          Time Spent: 10m
>  Remaining Estimate: 503h 50m
>
> Reading 1 MB from HDFS with libhdfspp makes the DataNode report an error like:
>
> 2022-04-12 20:10:21,872 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: server1:9866:DataXceiver error processing READ_BLOCK operation src: /90.90.43.114:47956 dst: /90.90.43.114:9866
> java.io.IOException: Version Mismatch (Expected: 28, Received: 520 )
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:74)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
>     at java.lang.Thread.run(Thread.java:748)
> 2022-04-12 20:13:27,615 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: server1:9866:DataXceiver error processing READ_BLOCK operation src: /90.90.43.114:48142 dst: /90.90.43.114:9866
> java.io.IOException: Version Mismatch (Expected: 28, Received: 520 )
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:74)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
>     at java.lang.Thread.run(Thread.java:748)
>
> Steps to reproduce:
> (1) Compile test1M.cpp with:
> g++ test1M.cpp -lprotobuf -lhdfspp_static -lhdfs -lpthread -lsasl2 -lcrypto -llz4 -I./ -L./ -o test1M
> (2) Execute test1M with:
> ./test1M REAL_HDFS_FILE_PATH
> (3) Check the Hadoop logs; you will find error info like:
> java.io.IOException: Version Mismatch (Expected: 28, Received: 520 )
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:74)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
>     at java.lang.Thread.run(Thread.java:748)

-- This message was sent by Atlassian Jira (v8.20.7#820007) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-13522) RBF: Support observer node from Router-Based Federation
[ https://issues.apache.org/jira/browse/HDFS-13522?focusedWorklogId=781882=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-781882 ] ASF GitHub Bot logged work on HDFS-13522: - Author: ASF GitHub Bot Created on: 16/Jun/22 00:40 Start Date: 16/Jun/22 00:40 Worklog Time Spent: 10m Work Description: simbadzina commented on code in PR #4441: URL: https://github.com/apache/hadoop/pull/4441#discussion_r898588205 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterGSIContext.java: ## @@ -0,0 +1,56 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hdfs.server.federation.router; + +import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceStability; +import org.apache.hadoop.hdfs.ClientGSIContext; +import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcRequestHeaderProto; + +/** + * Global State ID context for the router. + * + * This is the router side implementation responsible for receiving + * state alignment info from server(s). 
+ */ +@InterfaceAudience.Private +@InterfaceStability.Evolving +public class RouterGSIContext extends ClientGSIContext { Review Comment: I feel the alignment between routers and namenode can be taken care of with just ClientGSIContext. How does the router not updating the lastSeenStateID when communicating with the namenode help? ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java: ## @@ -231,6 +246,18 @@ public ConnectionContext getConnection(UserGroupInformation ugi, return conn; } + /** + * Dynamically reconfigure the enableObserverRead. + */ + public void reconfEnableObserverRead(boolean enableObserverRead) { +readLock.lock(); +this.enableObserverRead = enableObserverRead; +for (RouterGSIContext routerGSIContext : alignmentContexts.values()) { + routerGSIContext.setEnableObserverRead(enableObserverRead); +} +readLock.unlock(); + } + Review Comment: Why do we need to reconfigure observer reads dynamically? Issue Time Tracking --- Worklog Id: (was: 781882) Time Spent: 12h 40m (was: 12.5h) > RBF: Support observer node from Router-Based Federation > --- > > Key: HDFS-13522 > URL: https://issues.apache.org/jira/browse/HDFS-13522 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: federation, namenode >Reporter: Erik Krogen >Assignee: Simbarashe Dzinamarira >Priority: Major > Labels: pull-request-available > Attachments: HDFS-13522.001.patch, HDFS-13522.002.patch, > HDFS-13522_WIP.patch, RBF_ Observer support.pdf, Router+Observer RPC > clogging.png, ShortTerm-Routers+Observer.png > > Time Spent: 12h 40m > Remaining Estimate: 0h > > Changes will need to occur to the router to support the new observer node. > One such change will be to make the router understand the observer state, > e.g. {{FederationNamenodeServiceState}}. 
-- This message was sent by Atlassian Jira (v8.20.7#820007) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-13522) RBF: Support observer node from Router-Based Federation
[ https://issues.apache.org/jira/browse/HDFS-13522?focusedWorklogId=781861=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-781861 ] ASF GitHub Bot logged work on HDFS-13522: - Author: ASF GitHub Bot Created on: 15/Jun/22 22:55 Start Date: 15/Jun/22 22:55 Worklog Time Spent: 10m Work Description: simbadzina commented on code in PR #4311: URL: https://github.com/apache/hadoop/pull/4311#discussion_r898503159 ## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java: ## @@ -349,6 +349,18 @@ public static ClientProtocol createProxyWithAlignmentContext( boolean withRetries, AtomicBoolean fallbackToSimpleAuth, AlignmentContext alignmentContext) throws IOException { +if (!conf.getBoolean(HdfsClientConfigKeys.DFS_OBSERVER_READ_ENABLE, Review Comment: I'll add commented on your draft RB https://github.com/apache/hadoop/pull/4441 ## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java: ## @@ -349,6 +349,18 @@ public static ClientProtocol createProxyWithAlignmentContext( boolean withRetries, AtomicBoolean fallbackToSimpleAuth, AlignmentContext alignmentContext) throws IOException { +if (!conf.getBoolean(HdfsClientConfigKeys.DFS_OBSERVER_READ_ENABLE, Review Comment: I'll add comments on your draft RB https://github.com/apache/hadoop/pull/4441 Issue Time Tracking --- Worklog Id: (was: 781861) Time Spent: 12.5h (was: 12h 20m) > RBF: Support observer node from Router-Based Federation > --- > > Key: HDFS-13522 > URL: https://issues.apache.org/jira/browse/HDFS-13522 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: federation, namenode >Reporter: Erik Krogen >Assignee: Simbarashe Dzinamarira >Priority: Major > Labels: pull-request-available > Attachments: HDFS-13522.001.patch, HDFS-13522.002.patch, > HDFS-13522_WIP.patch, RBF_ Observer support.pdf, Router+Observer RPC > clogging.png, ShortTerm-Routers+Observer.png > > Time Spent: 12.5h > Remaining 
Estimate: 0h > > Changes will need to occur to the router to support the new observer node. > One such change will be to make the router understand the observer state, > e.g. {{FederationNamenodeServiceState}}. -- This message was sent by Atlassian Jira (v8.20.7#820007) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-13522) RBF: Support observer node from Router-Based Federation
[ https://issues.apache.org/jira/browse/HDFS-13522?focusedWorklogId=781860=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-781860 ] ASF GitHub Bot logged work on HDFS-13522: - Author: ASF GitHub Bot Created on: 15/Jun/22 22:43 Start Date: 15/Jun/22 22:43 Worklog Time Spent: 10m Work Description: simbadzina commented on code in PR #4311: URL: https://github.com/apache/hadoop/pull/4311#discussion_r898452709 ## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java: ## @@ -349,6 +349,18 @@ public static ClientProtocol createProxyWithAlignmentContext( boolean withRetries, AtomicBoolean fallbackToSimpleAuth, AlignmentContext alignmentContext) throws IOException { +if (!conf.getBoolean(HdfsClientConfigKeys.DFS_OBSERVER_READ_ENABLE, +HdfsClientConfigKeys.DFS_OBSERVER_READ_ENABLE_DEFAULT)) { + //Disabled observer read + if (alignmentContext == null) { +alignmentContext = new ClientGSIContext(); + } + if (alignmentContext instanceof ClientGSIContext) { Review Comment: Even when alignment context is not equal to null, we still disable observer reads in this if condition. The not null condition occurs when the ObserverReadProxyProvider is being used. ## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientGSIContext.java: ## @@ -40,6 +40,14 @@ public class ClientGSIContext implements AlignmentContext { private final LongAccumulator lastSeenStateId = new LongAccumulator(Math::max, Long.MIN_VALUE); + public void disableObserverRead() { +if(lastSeenStateId.get() > -1L) { + throw new IllegalStateException( + "Can't disable observer read after communicate."); +} +lastSeenStateId.accumulate(-1L); Review Comment: I do not expect available use cases but I can document that this is a reserved value for disabling router reads. 
## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStateIdContext.java: ## @@ -0,0 +1,87 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hdfs.server.federation.router; + +import java.lang.reflect.Method; +import java.util.HashSet; + +import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceStability; +import org.apache.hadoop.hdfs.protocol.ClientProtocol; +import org.apache.hadoop.hdfs.server.namenode.ha.ReadOnly; +import org.apache.hadoop.ipc.AlignmentContext; +import org.apache.hadoop.ipc.RetriableException; +import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcRequestHeaderProto; +import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto; + +/** + * This is the router implementation responsible for passing + * client state id to next level. + */ +@InterfaceAudience.Private +@InterfaceStability.Evolving +class RouterStateIdContext implements AlignmentContext { Review Comment: This means the client's header state ID accessible in the router. But yes, the value is not used in this PR. 
https://github.com/apache/hadoop/pull/4127 is the one which then uses the value. ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java: ## @@ -172,11 +178,12 @@ public void close() { * @param ugi User group information. * @param nnAddress Namenode address for the connection. * @param protocol Protocol for the connection. + * @param nsId Nameservice Identify. Review Comment: Fixed ## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java: ## @@ -349,6 +349,18 @@ public static ClientProtocol createProxyWithAlignmentContext( boolean withRetries, AtomicBoolean fallbackToSimpleAuth, AlignmentContext alignmentContext) throws IOException { +if (!conf.getBoolean(HdfsClientConfigKeys.DFS_OBSERVER_READ_ENABLE, +
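The review thread above centers on ClientGSIContext tracking the last state ID seen from the NameNode with a max-accumulating LongAccumulator, which is why -1 can only serve as a "disabled" sentinel before any real state ID has been observed. A minimal sketch of that pattern (only the LongAccumulator construction and the disable check come from the diff; the class and surrounding method names are illustrative):

```java
import java.util.concurrent.atomic.LongAccumulator;

public class StateIdSketch {
    // Same construction as ClientGSIContext: keep the maximum state ID seen.
    private final LongAccumulator lastSeenStateId =
        new LongAccumulator(Math::max, Long.MIN_VALUE);

    // What happens when a server response carries a state ID.
    void receiveResponseState(long serverStateId) {
        lastSeenStateId.accumulate(serverStateId);
    }

    // Mirrors the proposed disableObserverRead(): only legal before any
    // real state ID (> -1) has been observed, since max() can never go back.
    void disableObserverRead() {
        if (lastSeenStateId.get() > -1L) {
            throw new IllegalStateException("Can't disable observer read after communication.");
        }
        lastSeenStateId.accumulate(-1L);
    }

    long get() {
        return lastSeenStateId.get();
    }

    public static void main(String[] args) {
        StateIdSketch ctx = new StateIdSketch();
        ctx.disableObserverRead();      // fine: nothing observed yet
        ctx.receiveResponseState(42);
        System.out.println(ctx.get());  // 42, i.e. max(-1, 42)
        try {
            ctx.disableObserverRead();  // now illegal
        } catch (IllegalStateException e) {
            System.out.println("disable rejected after communication");
        }
    }
}
```

This also illustrates the reviewers' concern: because the accumulator is monotonic, overloading -1 as "disabled" works only as long as no legitimate state ID has arrived first.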
[jira] [Work logged] (HDFS-16600) Deadlock on DataNode
[ https://issues.apache.org/jira/browse/HDFS-16600?focusedWorklogId=781652&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-781652 ]

ASF GitHub Bot logged work on HDFS-16600:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 15/Jun/22 13:31
            Start Date: 15/Jun/22 13:31
    Worklog Time Spent: 10m

Work Description: Hexiaoqiao commented on PR #4367:
URL: https://github.com/apache/hadoop/pull/4367#issuecomment-115642

Retriggered Jenkins; waiting for another build result. Thanks for everyone's helpful discussion. I would like to check this in shortly if there are no further comments and the build is clean.

Issue Time Tracking
-------------------
    Worklog Id: (was: 781652)
    Time Spent: 4.5h (was: 4h 20m)

> Deadlock on DataNode
> --------------------
>
>                 Key: HDFS-16600
>                 URL: https://issues.apache.org/jira/browse/HDFS-16600
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: ZanderXu
>            Assignee: ZanderXu
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> The UT org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.testSynchronousEviction failed because a deadlock happened, introduced by [HDFS-16534|https://issues.apache.org/jira/browse/HDFS-16534].
> Deadlock:
> {code:java}
> // org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.createRbw line 1588 needs a read lock
> try (AutoCloseableLock lock = lockManager.readLock(LockLevel.BLOCK_POOl, b.getBlockPoolId()))
> // org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.evictBlocks line 3526 needs a write lock
> try (AutoCloseableLock lock = lockManager.writeLock(LockLevel.BLOCK_POOl, bpid))
> {code}

-- This message was sent by Atlassian Jira (v8.20.7#820007) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
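The lock pattern quoted in the issue is the classic read-to-write upgrade hazard: a reader-writer lock cannot be upgraded, so a code path that acquires the block-pool read lock (createRbw) and then reaches one that needs the write lock (evictBlocks) can never make progress. A self-contained sketch of the underlying hazard using the JDK's ReentrantReadWriteLock (Hadoop's AutoCloseableLock wraps a similar mechanism; this is an illustration, not the HDFS code):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class UpgradeDeadlockDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        // Comparable to createRbw(): the thread takes the block-pool read lock.
        lock.readLock().lock();
        try {
            // Comparable to reaching evictBlocks() on the same thread: the
            // write lock cannot be acquired while this thread still holds the
            // read lock. A blocking writeLock().lock() here would hang forever;
            // tryLock() shows the acquisition is impossible.
            boolean acquired = lock.writeLock().tryLock();
            System.out.println(acquired); // false: read->write upgrade is not supported
        } finally {
            lock.readLock().unlock();
        }
    }
}
```

The usual fixes are to release the read lock before taking the write lock, or to take the write lock up front on paths that may need it, which is the shape of the discussion on the PR.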
[jira] [Resolved] (HDFS-16628) RBF: Correct target directory when move to trash for kerberos login user.
[ https://issues.apache.org/jira/browse/HDFS-16628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoqiao He resolved HDFS-16628. Hadoop Flags: Reviewed Target Version/s: 3.4.0 Resolution: Fixed Committed to trunk. Thanks [~zhangxiping]. > RBF: Correct target directory when move to trash for kerberos login user. > - > > Key: HDFS-16628 > URL: https://issues.apache.org/jira/browse/HDFS-16628 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf >Affects Versions: 3.4.0 >Reporter: Xiping Zhang >Assignee: Xiping Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 50m > Remaining Estimate: 0h > > remove data from the router will fail using such a user > username/d...@hadoop.com -- This message was sent by Atlassian Jira (v8.20.7#820007) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
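The bug concerns the trash directory computed for a Kerberos login user: for a principal of the form user/instance@REALM, the trash root should be built from the short user name, not the full principal string. A simplified sketch of that distinction (Hadoop actually derives the short name through configurable auth_to_local rules in KerberosName; the principal, class, and method names below are illustrative):

```java
public class TrashPathSketch {
    // Naive short-name extraction: strip the /instance and @REALM parts.
    // Real Hadoop applies auth_to_local mapping rules instead.
    static String shortName(String principal) {
        int slash = principal.indexOf('/');
        int at = principal.indexOf('@');
        int end = principal.length();
        if (slash >= 0) {
            end = slash;
        } else if (at >= 0) {
            end = at;
        }
        return principal.substring(0, end);
    }

    // The per-user trash root HDFS conventionally uses.
    static String trashRoot(String principal) {
        return "/user/" + shortName(principal) + "/.Trash";
    }

    public static void main(String[] args) {
        // Using the full principal would yield a bogus, multi-component path.
        System.out.println(trashRoot("alice/host1@EXAMPLE.COM")); // /user/alice/.Trash
    }
}
```

If the router instead uses the raw principal, the move-to-trash target contains the instance and realm components and the rename fails, which matches the reported symptom.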
[jira] [Work logged] (HDFS-16628) RBF: Correct target directory when move to trash for kerberos login user.
[ https://issues.apache.org/jira/browse/HDFS-16628?focusedWorklogId=781643=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-781643 ] ASF GitHub Bot logged work on HDFS-16628: - Author: ASF GitHub Bot Created on: 15/Jun/22 13:17 Start Date: 15/Jun/22 13:17 Worklog Time Spent: 10m Work Description: Hexiaoqiao commented on PR #4424: URL: https://github.com/apache/hadoop/pull/4424#issuecomment-1156461060 Committed to trunk. Thanks @zhangxiping1 for your report and contribution. Issue Time Tracking --- Worklog Id: (was: 781643) Time Spent: 1h 50m (was: 1h 40m) > RBF: Correct target directory when move to trash for kerberos login user. > - > > Key: HDFS-16628 > URL: https://issues.apache.org/jira/browse/HDFS-16628 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf >Affects Versions: 3.4.0 >Reporter: Xiping Zhang >Assignee: Xiping Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 50m > Remaining Estimate: 0h > > remove data from the router will fail using such a user > username/d...@hadoop.com -- This message was sent by Atlassian Jira (v8.20.7#820007) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16628) RBF: Correct target directory when move to trash for kerberos login user.
[ https://issues.apache.org/jira/browse/HDFS-16628?focusedWorklogId=781640=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-781640 ] ASF GitHub Bot logged work on HDFS-16628: - Author: ASF GitHub Bot Created on: 15/Jun/22 13:16 Start Date: 15/Jun/22 13:16 Worklog Time Spent: 10m Work Description: Hexiaoqiao merged PR #4424: URL: https://github.com/apache/hadoop/pull/4424 Issue Time Tracking --- Worklog Id: (was: 781640) Time Spent: 1h 40m (was: 1.5h) > RBF: Correct target directory when move to trash for kerberos login user. > - > > Key: HDFS-16628 > URL: https://issues.apache.org/jira/browse/HDFS-16628 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf >Affects Versions: 3.4.0 >Reporter: Xiping Zhang >Assignee: Xiping Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 40m > Remaining Estimate: 0h > > remove data from the router will fail using such a user > username/d...@hadoop.com -- This message was sent by Atlassian Jira (v8.20.7#820007) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-16628) RBF: Correct target directory when move to trash for kerberos login user.
[ https://issues.apache.org/jira/browse/HDFS-16628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoqiao He reassigned HDFS-16628: -- Assignee: Xiping Zhang > RBF: Correct target directory when move to trash for kerberos login user. > - > > Key: HDFS-16628 > URL: https://issues.apache.org/jira/browse/HDFS-16628 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf >Affects Versions: 3.4.0 >Reporter: Xiping Zhang >Assignee: Xiping Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1.5h > Remaining Estimate: 0h > > remove data from the router will fail using such a user > username/d...@hadoop.com -- This message was sent by Atlassian Jira (v8.20.7#820007) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-16628) RBF: Correct target directory when move to trash for kerberos login user.
[ https://issues.apache.org/jira/browse/HDFS-16628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoqiao He updated HDFS-16628: --- Summary: RBF: Correct target directory when move to trash for kerberos login user. (was: RBF: kerberos user remove Non-default namespace data failed) > RBF: Correct target directory when move to trash for kerberos login user. > - > > Key: HDFS-16628 > URL: https://issues.apache.org/jira/browse/HDFS-16628 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf >Affects Versions: 3.4.0 >Reporter: Xiping Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1.5h > Remaining Estimate: 0h > > remove data from the router will fail using such a user > username/d...@hadoop.com -- This message was sent by Atlassian Jira (v8.20.7#820007) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-16469) Locate protoc-gen-hrpc across platforms
[ https://issues.apache.org/jira/browse/HDFS-16469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gautham Banasandra resolved HDFS-16469. --- Fix Version/s: 3.4.0 Resolution: Fixed Merged PR https://github.com/apache/hadoop/pull/4434 to trunk. > Locate protoc-gen-hrpc across platforms > --- > > Key: HDFS-16469 > URL: https://issues.apache.org/jira/browse/HDFS-16469 > Project: Hadoop HDFS > Issue Type: Improvement > Components: libhdfs++ >Affects Versions: 3.4.0 > Environment: Windows 10 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: libhdfscpp, pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h > Remaining Estimate: 0h > > protoc-gen-hrpc.exe is supposed to be found at > [${CMAKE_CURRENT_BINARY_DIR}/protoc-gen-hrpc|https://github.com/apache/hadoop/blob/652b257478f723a9e119e5e9181f3c7450ac92b5/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto/CMakeLists.txt#L70]. > This works so long as we're building the Release build. Since we can only > build RelWithDebInfo on Windows, the protoc-gen-hrpc binary will be placed at > {*}${CMAKE_CURRENT_BINARY_DIR}/RelWithDebInfo/protoc-gen-hrpc.exe{*}. Hadoop > would need to locate this binary in order to generate the Protobuf headers. -- This message was sent by Atlassian Jira (v8.20.7#820007) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
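The portable way to handle this in CMake is usually to reference the plugin through a generator expression rather than a hard-coded path, since multi-config generators (like Visual Studio producing RelWithDebInfo) place binaries in a per-configuration subdirectory. A hedged sketch of that approach, not necessarily the exact change merged here (the variable names are placeholders):

```cmake
# Hard-coding ${CMAKE_CURRENT_BINARY_DIR}/protoc-gen-hrpc breaks on
# multi-config generators, which emit e.g. RelWithDebInfo/protoc-gen-hrpc.exe.
# $<TARGET_FILE:protoc-gen-hrpc> resolves to the binary's real location
# for whatever configuration is being built.
add_custom_command(
  OUTPUT ${HRPC_GENERATED_SRCS}
  COMMAND ${PROTOBUF_PROTOC_EXECUTABLE}
          --plugin=protoc-gen-hrpc=$<TARGET_FILE:protoc-gen-hrpc>
          --hrpc_out=${CMAKE_CURRENT_BINARY_DIR}
          -I ${CMAKE_CURRENT_SOURCE_DIR} ${PROTO_FILES}
  DEPENDS protoc-gen-hrpc ${PROTO_FILES})
```

Listing the target in DEPENDS also guarantees the plugin is built before the Protobuf headers are generated, regardless of configuration.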
[jira] [Work logged] (HDFS-16469) Locate protoc-gen-hrpc across platforms
[ https://issues.apache.org/jira/browse/HDFS-16469?focusedWorklogId=781565=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-781565 ] ASF GitHub Bot logged work on HDFS-16469: - Author: ASF GitHub Bot Created on: 15/Jun/22 09:58 Start Date: 15/Jun/22 09:58 Worklog Time Spent: 10m Work Description: GauthamBanasandra merged PR #4434: URL: https://github.com/apache/hadoop/pull/4434 Issue Time Tracking --- Worklog Id: (was: 781565) Time Spent: 1h (was: 50m) > Locate protoc-gen-hrpc across platforms > --- > > Key: HDFS-16469 > URL: https://issues.apache.org/jira/browse/HDFS-16469 > Project: Hadoop HDFS > Issue Type: Improvement > Components: libhdfs++ >Affects Versions: 3.4.0 > Environment: Windows 10 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: libhdfscpp, pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > protoc-gen-hrpc.exe is supposed to be found at > [${CMAKE_CURRENT_BINARY_DIR}/protoc-gen-hrpc|https://github.com/apache/hadoop/blob/652b257478f723a9e119e5e9181f3c7450ac92b5/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto/CMakeLists.txt#L70]. > This works so long as we're building the Release build. Since we can only > build RelWithDebInfo on Windows, the protoc-gen-hrpc binary will be placed at > {*}${CMAKE_CURRENT_BINARY_DIR}/RelWithDebInfo/protoc-gen-hrpc.exe{*}. Hadoop > would need to locate this binary in order to generate the Protobuf headers. -- This message was sent by Atlassian Jira (v8.20.7#820007) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-13522) RBF: Support observer node from Router-Based Federation
[ https://issues.apache.org/jira/browse/HDFS-13522?focusedWorklogId=781545&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-781545 ] ASF GitHub Bot logged work on HDFS-13522: - Author: ASF GitHub Bot Created on: 15/Jun/22 09:01 Start Date: 15/Jun/22 09:01 Worklog Time Spent: 10m Work Description: zhengchenyu commented on PR #4441: URL: https://github.com/apache/hadoop/pull/4441#issuecomment-1156196587

> Thanks @zhengchenyu for your review and comment. This is a draft PR related to [PR4311](https://github.com/apache/hadoop/pull/4311). I'm not following [HDFS-13522](https://issues.apache.org/jira/browse/HDFS-13522).002.patch, and I will read it carefully.
>
> Client -> RBF -> NameNode. Whether RBF proxies the read request to the Observer should have nothing to do with the Client.

In HDFS-13522.002.patch, in the isReadCall method, the router checks "call.getClientStateId() == -1L". This happens at the RPC call level. If observer reads are disabled on the client side, call.getClientStateId() on the router side returns -1 and the router ignores the Observer NameNode. I think a client-side config may be more flexible.

By the way, one extra comment: in HDFS-13522.002.patch, the router only checks whether the state id is -1; it does not pass the client's state id along. If dfs.federation.router.observer.auto-msync-period is set not to 0 but to a large number, this will be wrong. In our draft design, on top of HDFS-13522.002.patch, I want to proxy the client's state id. I have been busy recently, so I delayed it. Maybe we can work on it and discuss it together.
Issue Time Tracking --- Worklog Id: (was: 781545) Time Spent: 12h 10m (was: 12h) > RBF: Support observer node from Router-Based Federation > --- > > Key: HDFS-13522 > URL: https://issues.apache.org/jira/browse/HDFS-13522 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: federation, namenode >Reporter: Erik Krogen >Assignee: Simbarashe Dzinamarira >Priority: Major > Labels: pull-request-available > Attachments: HDFS-13522.001.patch, HDFS-13522.002.patch, > HDFS-13522_WIP.patch, RBF_ Observer support.pdf, Router+Observer RPC > clogging.png, ShortTerm-Routers+Observer.png > > Time Spent: 12h 10m > Remaining Estimate: 0h > > Changes will need to occur to the router to support the new observer node. > One such change will be to make the router understand the observer state, > e.g. {{FederationNamenodeServiceState}}.
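The router-side gating described in the comment above (observer reads allowed only for read calls that carry a client state id, with -1 meaning the client opted out) can be sketched as follows. This is a minimal illustration of the described logic only; the class, method, and constant names are hypothetical and not the actual Hadoop RBF API, which performs this check on the RPC call object via call.getClientStateId().

```java
// Hypothetical sketch of the router-side observer-read decision summarized in
// the comment above. Not the real ConnectionManager/RouterRpcServer code.
public class ObserverReadRouting {
  // A state id of -1 means the client did not supply one, i.e. observer
  // reads are disabled on the client side.
  static final long NO_STATE_ID = -1L;

  /** Decide whether a call may be proxied to an Observer NameNode. */
  static boolean canUseObserver(boolean isReadCall, long clientStateId) {
    // Writes always go to the active NameNode; reads go to an observer
    // only when the client opted in by sending a real state id.
    return isReadCall && clientStateId != NO_STATE_ID;
  }

  public static void main(String[] args) {
    System.out.println(canUseObserver(true, 42L));   // read call with state id
    System.out.println(canUseObserver(true, NO_STATE_ID)); // client opted out
    System.out.println(canUseObserver(false, 42L));  // write call
  }
}
```

As the comment notes, checking only for -1 without forwarding the client's actual state id is what makes a non-zero dfs.federation.router.observer.auto-msync-period behave incorrectly: the router cannot know how stale the observer is relative to what the client has already seen.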
[jira] [Work logged] (HDFS-13522) RBF: Support observer node from Router-Based Federation
[ https://issues.apache.org/jira/browse/HDFS-13522?focusedWorklogId=781534&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-781534 ] ASF GitHub Bot logged work on HDFS-13522: - Author: ASF GitHub Bot Created on: 15/Jun/22 08:34 Start Date: 15/Jun/22 08:34 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on PR #4441: URL: https://github.com/apache/hadoop/pull/4441#issuecomment-1156167007

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 56s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 15m 32s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 27m 49s | | trunk passed |
| +1 :green_heart: | compile | 25m 6s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | compile | 21m 29s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 4m 34s | | trunk passed |
| +1 :green_heart: | mvnsite | 3m 15s | | trunk passed |
| +1 :green_heart: | javadoc | 2m 42s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 2m 32s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 4m 49s | | trunk passed |
| +1 :green_heart: | shadedclient | 24m 53s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 44s | | the patch passed |
| +1 :green_heart: | compile | 24m 10s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javac | 24m 10s | | the patch passed |
| +1 :green_heart: | compile | 21m 42s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 21m 42s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 4m 22s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4441/1/artifact/out/results-checkstyle-root.txt) | root: The patch generated 2 new + 1 unchanged - 0 fixed = 3 total (was 1) |
| +1 :green_heart: | mvnsite | 3m 7s | | the patch passed |
| +1 :green_heart: | javadoc | 2m 37s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 2m 31s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| -1 :x: | spotbugs | 2m 3s | [/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4441/1/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.html) | hadoop-hdfs-project/hadoop-hdfs-rbf generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 :green_heart: | shadedclient | 25m 2s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 18m 44s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 41m 40s | | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 1m 18s | | The patch does not generate ASF License warnings. |
| | | 289m 53s | | |

| Reason | Tests |
|-------:|:------|
| SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs-rbf |
| | org.apache.hadoop.hdfs.server.federation.router.ConnectionManager.reconfEnableObserverRead(boolean) does not release lock on all exception paths At ConnectionManager.java:on all exception paths At ConnectionManager.java:[line 253] |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base:
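The SpotBugs finding above flags reconfEnableObserverRead(boolean) for acquiring a lock that is not released on every exception path. The standard remedy is the try/finally unlock idiom sketched below; the class shape and field names here are illustrative, not the actual ConnectionManager code.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal sketch of the lock-release pattern that resolves this class of
// SpotBugs warning (UL_UNRELEASED_LOCK_EXCEPTION_PATH). Hypothetical names.
public class ReconfSketch {
  private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
  private boolean enableObserverRead;

  public void reconfEnableObserverRead(boolean enable) {
    rwLock.writeLock().lock();
    try {
      // Anything that throws between lock() and unlock() is now covered:
      // the finally block guarantees the write lock is released.
      this.enableObserverRead = enable;
    } finally {
      rwLock.writeLock().unlock();
    }
  }

  public boolean isObserverReadEnabled() {
    rwLock.readLock().lock();
    try {
      return enableObserverRead;
    } finally {
      rwLock.readLock().unlock();
    }
  }
}
```

Acquiring the lock immediately before the try block (rather than inside it) matters: if lock() itself fails, there is nothing to release, and the finally block never runs against an unheld lock.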
[jira] [Work logged] (HDFS-13522) RBF: Support observer node from Router-Based Federation
[ https://issues.apache.org/jira/browse/HDFS-13522?focusedWorklogId=781525&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-781525 ] ASF GitHub Bot logged work on HDFS-13522: - Author: ASF GitHub Bot Created on: 15/Jun/22 08:00 Start Date: 15/Jun/22 08:00 Worklog Time Spent: 10m Work Description: ZanderXu commented on PR #4441: URL: https://github.com/apache/hadoop/pull/4441#issuecomment-1156127801 Thanks @zhengchenyu for your review and comment. This is a draft PR related to [PR4311](https://github.com/apache/hadoop/pull/4311). I have not been following [HDFS-13522](https://issues.apache.org/jira/browse/HDFS-13522).002.patch, and I will read it carefully. Client -> RBF -> NameNode. Whether RBF proxies the read request to the Observer should have nothing to do with the Client. Issue Time Tracking --- Worklog Id: (was: 781525) Time Spent: 11h 50m (was: 11h 40m)
[jira] [Work logged] (HDFS-13522) RBF: Support observer node from Router-Based Federation
[ https://issues.apache.org/jira/browse/HDFS-13522?focusedWorklogId=781510&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-781510 ] ASF GitHub Bot logged work on HDFS-13522: - Author: ASF GitHub Bot Created on: 15/Jun/22 07:24 Start Date: 15/Jun/22 07:24 Worklog Time Spent: 10m Work Description: zhengchenyu commented on PR #4441: URL: https://github.com/apache/hadoop/pull/4441#issuecomment-1156087987 @ZanderXu It seems HDFS-13522.002.patch already includes this functionality. The difference is that HDFS-13522.002.patch enables or disables observer reads on the client side, while your PR enables or disables them on the router side. Issue Time Tracking --- Worklog Id: (was: 781510) Time Spent: 11h 40m (was: 11.5h)
[jira] [Assigned] (HDFS-4026) Improve error message for dfsadmin -refreshServiceAcl
[ https://issues.apache.org/jira/browse/HDFS-4026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Samrat Deb reassigned HDFS-4026: Assignee: Samrat Deb (was: Clint Heath) > Improve error message for dfsadmin -refreshServiceAcl > - > > Key: HDFS-4026 > URL: https://issues.apache.org/jira/browse/HDFS-4026 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.1-alpha >Reporter: Stephen Chu >Assignee: Samrat Deb >Priority: Major > Labels: newbie > > I ran _hdfs dfsadmin -refreshServiceAcl_ on a cluster that did not have > Service Level Authorization enabled: > {code} > [schu@cs-10-20-90-154 ~]$ hdfs dfsadmin -refreshServiceAcl > refreshServiceAcl: > [schu@cs-10-20-90-154 ~]$ echo $? > 255 > [schu@cs-10-20-90-154 ~]$ > {code}
[jira] [Commented] (HDFS-4026) Improve error message for dfsadmin -refreshServiceAcl
[ https://issues.apache.org/jira/browse/HDFS-4026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17554416#comment-17554416 ] Samrat Deb commented on HDFS-4026: -- picking this up !