[jira] [Commented] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16818953#comment-16818953 ] Hudson commented on HBASE-20586:

Results for branch master [build #936 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/936/]: (x) *{color:red}-1 overall{color}*

details (if available):
(x) {color:red}-1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/936//General_Nightly_Build_Report/]
(x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/936//JDK8_Nightly_Build_Report_(Hadoop2)/]
(x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/936//JDK8_Nightly_Build_Report_(Hadoop3)/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> SyncTable tool: Add support for cross-realm remote clusters
> ---
>
> Key: HBASE-20586
> URL: https://issues.apache.org/jira/browse/HBASE-20586
> Project: HBase
> Issue Type: Improvement
> Components: mapreduce, Operability, Replication
> Affects Versions: 1.2.0, 2.0.0
> Reporter: Wellington Chevreuil
> Assignee: Wellington Chevreuil
> Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0, 2.1.5, 2.2.1
>
> Attachments: HBASE-20586.master.001.patch
>
>
> One possible scenario for HashTable/SyncTable is to synchronize different clusters, for instance, when replication has been enabled but data already existed, or when replication issues have caused long replication lags.
> For secured clusters under different Kerberos realms (with cross-realm trust properly configured), though, the current SyncTable version fails to authenticate with the remote cluster when trying to read HashTable outputs (when *sourcehashdir* is remote) and also when trying to read table data on the remote cluster (when *sourcezkcluster* is remote).
> The HDFS error looks like this:
> {noformat}
> INFO mapreduce.Job: Task Id : attempt_1524358175778_105392_m_00_0, Status : FAILED
> Error: java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "local-host/1.1.1.1"; destination host is: "remote-nn":8020;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
> at org.apache.hadoop.ipc.Client.call(Client.java:1506)
> at org.apache.hadoop.ipc.Client.call(Client.java:1439)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
> at com.sun.proxy.$Proxy13.getBlockLocations(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:256)
> ...
> at org.apache.hadoop.hbase.mapreduce.HashTable$TableHash.readPropertiesFile(HashTable.java:144)
> at org.apache.hadoop.hbase.mapreduce.HashTable$TableHash.read(HashTable.java:105)
> at org.apache.hadoop.hbase.mapreduce.SyncTable$SyncMapper.setup(SyncTable.java:188)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
> ...
> Caused by: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]{noformat}
> The above can be resolved if the SyncTable job acquires a delegation token (DT) for the remote NameNode (NN).
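One way a MapReduce job can acquire such a delegation token at submission time is via Hadoop's {{TokenCache}}, which fetches NN delegation tokens for the paths a job reads and stores them in the job's credentials. Below is a minimal sketch, not the attached patch's actual code; the remote path is illustrative and would correspond to a remote *sourcehashdir* argument:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.security.TokenCache;

public class RemoteNnTokenSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "syncTable");
    // Hypothetical HashTable output dir on the remote (source) cluster's HDFS.
    Path remoteHashDir = new Path("hdfs://remote-nn:8020/hashes/testTable");
    // Obtains a delegation token from the remote NameNode and adds it to the
    // job's credentials, so map tasks can read the HashTable output without
    // holding a Kerberos TGT for the remote realm themselves. This is a no-op
    // when security is disabled.
    TokenCache.obtainTokensForNamenodes(
        job.getCredentials(), new Path[] { remoteHashDir }, job.getConfiguration());
  }
}
```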
> Once HDFS-related authentication is done, it's also necessary to authenticate against the remote HBase cluster, as the error below would arise:
> {noformat}
> INFO mapreduce.Job: Task Id : attempt_1524358175778_172414_m_00_0, Status : FAILED
> Error: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location
> at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:326)
> ...
> at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:867)
> at org.apache.hadoop.hbase.mapreduce.SyncTable$SyncMapper.syncRange(SyncTable.java:331)
> ...
> Caused by: java.io.IOException: Could not set up IO Streams to remote-rs-host/1.1.1.2:60020
> at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:786)
> ...
> Caused by: java.lang.RuntimeException: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
> ...
> Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
> ...{noformat}
> The above would need additional authentication logic against the remote HBase cluster.
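For the HBase side, one plausible approach is HBase's own {{TableMapReduceUtil.initCredentialsForCluster}}, which obtains an HBase delegation token for a cluster described by a separate configuration. A hedged sketch, assuming a remote *sourcezkcluster* value; this illustrates the general mechanism, not necessarily what the attached patch does:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class RemoteHBaseCredsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "syncTable");
    // Derive a configuration pointing at the remote (source) cluster from a
    // cluster key (format: zkQuorum:zkClientPort:znodeParent). The quorum
    // hosts here are illustrative.
    Configuration remoteConf =
        HBaseConfiguration.createClusterConf(conf, "remote-zk1,remote-zk2:2181:/hbase");
    // Obtains an HBase delegation token from the remote cluster and adds it
    // to the job's credentials, so map tasks avoid the SASL/GSSException
    // failures when scanning the remote table.
    TableMapReduceUtil.initCredentialsForCluster(job, remoteConf);
  }
}
```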
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16818869#comment-16818869 ] Hudson commented on HBASE-20586: Results for branch branch-2.2 [build #189 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/189/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/189//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/189//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/189//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color}
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16818795#comment-16818795 ] Hudson commented on HBASE-20586: Results for branch branch-1 [build #775 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/775/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/775//General_Nightly_Build_Report/] (x) {color:red}-1 jdk7 checks{color} -- For more information [see jdk7 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/775//JDK7_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/775//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 source release artifact{color} -- See build output for details.
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16818741#comment-16818741 ] Hudson commented on HBASE-20586: Results for branch branch-1.4 [build #749 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/749/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/749//General_Nightly_Build_Report/] (x) {color:red}-1 jdk7 checks{color} -- For more information [see jdk7 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/749//JDK7_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/749//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 source release artifact{color} -- See build output for details.
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16818652#comment-16818652 ] Hudson commented on HBASE-20586: Results for branch branch-2.1 [build #1056 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1056/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1056//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1056//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1056//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color}
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16818465#comment-16818465 ] Hudson commented on HBASE-20586: Results for branch branch-2 [build #1822 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1822/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1822//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1822//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1822//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color}
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16818256#comment-16818256 ] Andrew Purtell commented on HBASE-20586: Looks like this will apply to branch-1.4 and up, working on this now
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16817157#comment-16817157 ] Sean Busbey commented on HBASE-20586: - sure, sounds fine to me.
[jira] [Commented] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816513#comment-16816513 ] Andrew Purtell commented on HBASE-20586: So let's commit this then? +1
[jira] [Commented] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16816114#comment-16816114 ] Wellington Chevreuil commented on HBASE-20586: -- Hi [~apurtell], latest patch only misses proper integration tests for hbase cross realm scenarios. Discussed it previously with [~busbey] and we agreed there's currently a limitation to reproduce this scenario in an automated way triggered from builds. This has been tested on production clusters from some of our customers, though.
[jira] [Commented] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815608#comment-16815608 ] Andrew Purtell commented on HBASE-20586: Any progress here? Or unschedule it? Or close it?
[jira] [Commented] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758654#comment-16758654 ] Hadoop QA commented on HBASE-20586: ---
| (/) *{color:green}+1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 20s | Docker mode activated. |
|| Prechecks ||
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -0 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| master Compile Tests ||
| +1 | mvninstall | 4m 48s | master passed |
| +1 | compile | 0m 34s | master passed |
| +1 | checkstyle | 0m 20s | master passed |
| +1 | shadedjars | 4m 29s | branch has no errors when building our shaded downstream artifacts. |
| +1 | findbugs | 0m 40s | master passed |
| +1 | javadoc | 0m 16s | master passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 4m 37s | the patch passed |
| +1 | compile | 0m 33s | the patch passed |
| +1 | javac | 0m 33s | the patch passed |
| +1 | checkstyle | 0m 18s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 4m 3s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 9m 6s | Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. |
| +1 | findbugs | 0m 43s | the patch passed |
| +1 | javadoc | 0m 14s | the patch passed |
|| Other Tests ||
| +1 | unit | 24m 37s | hbase-mapreduce in the patch passed. |
| +1 | asflicense | 0m 11s | The patch does not generate ASF License warnings. |
| | | 56m 20s | |
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20586 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12923490/HBASE-20586.master.001.patch |
| Optional Tests | dupname asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux f2c39e266ec2 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 64c32720d6 |
| maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/15845/testReport/ |
| Max. process+thread count | 5469 (vs. ulimit of 1) |
| modules | C: hbase-mapreduce U: hbase-mapreduce |
| Console output |
[jira] [Commented] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16685464#comment-16685464 ] Sean Busbey commented on HBASE-20586: - A doc jira blocked by this one sounds like a good idea.
[jira] [Commented] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16681249#comment-16681249 ] Wellington Chevreuil commented on HBASE-20586: -- Just adding some notes here to update related [Ref Guide SyncTable section |https://hbase.apache.org/book.html#_step_2_synctable] once this is ever committed. Maybe open a doc Jira and make it blocked by this one?
[jira] [Commented] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16678295#comment-16678295 ] Sean Busbey commented on HBASE-20586: - I agree that we don't have the needed infra to have a test for this right now. I would like whoever commits it to try running the change as well, especially given that it's been ~6 months since it was submitted. I'll try to make time next week.
> The HDFS error would look like this: > {noformat} > INFO mapreduce.Job: Task Id : attempt_1524358175778_105392_m_00_0, Status > : FAILED > Error: java.io.IOException: Failed on local exception: java.io.IOException: > org.apache.hadoop.security.AccessControlException: Client cannot authenticate > via:[TOKEN, KERBEROS]; Host Details : local host is: "local-host/1.1.1.1"; > destination host is: "remote-nn":8020; > at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) > at org.apache.hadoop.ipc.Client.call(Client.java:1506) > at org.apache.hadoop.ipc.Client.call(Client.java:1439) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) > at com.sun.proxy.$Proxy13.getBlockLocations(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:256) > ... > at > org.apache.hadoop.hbase.mapreduce.HashTable$TableHash.readPropertiesFile(HashTable.java:144) > at > org.apache.hadoop.hbase.mapreduce.HashTable$TableHash.read(HashTable.java:105) > at > org.apache.hadoop.hbase.mapreduce.SyncTable$SyncMapper.setup(SyncTable.java:188) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142) > ... > Caused by: java.io.IOException: > org.apache.hadoop.security.AccessControlException: Client cannot authenticate > via:[TOKEN, KERBEROS]{noformat} > The above can be resolved if the SyncTable job acquires a delegation token (DT) for the remote NameNode. > Once HDFS-related authentication is done, it's also necessary to authenticate > against the remote HBase cluster, as the error below would arise: > {noformat} > INFO mapreduce.Job: Task Id : attempt_1524358175778_172414_m_00_0, Status > : FAILED > Error: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get > the location > at > org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:326) > ... 
> at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:867) > at > org.apache.hadoop.hbase.mapreduce.SyncTable$SyncMapper.syncRange(SyncTable.java:331) > ... > Caused by: java.io.IOException: Could not set up IO Streams to > remote-rs-host/1.1.1.2:60020 > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:786) > ... > Caused by: java.lang.RuntimeException: SASL authentication failed. The most > likely cause is missing or invalid credentials. Consider 'kinit'. > ... > Caused by: GSSException: No valid credentials provided (Mechanism level: > Failed to find any Kerberos tgt) > ...{noformat} > The above would need additional authentication logic against the remote HBase > cluster. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
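The first part of the fix described above (acquiring a DT for the remote NameNode) can be sketched with standard Hadoop MapReduce APIs: the job submitter obtains HDFS delegation tokens for the remote cluster's paths and ships them with the job's credentials, so map tasks can authenticate via TOKEN. This is a minimal illustration, not the actual patch; the remote path and host name are hypothetical placeholders.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.security.TokenCache;

public class RemoteNnTokenExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "syncTable");

        // Hypothetical remote path: HashTable output living on the source
        // cluster's HDFS, in a different Kerberos realm.
        Path remoteHashDir = new Path("hdfs://remote-nn:8020/hashes/my_table");

        // Obtain a delegation token from the remote NameNode (requires a
        // valid TGT plus cross-realm trust) and add it to the job's
        // credentials, so tasks no longer fail with
        // "Client cannot authenticate via:[TOKEN, KERBEROS]".
        TokenCache.obtainTokensForNamenodes(
            job.getCredentials(), new Path[] { remoteHashDir }, conf);
    }
}
```

The second part of the fix would need analogous token acquisition against the remote HBase cluster itself, which this sketch does not cover.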
[jira] [Commented] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677922#comment-16677922 ] Wellington Chevreuil commented on HBASE-20586: -- Thanks [~busbey]. I had tried to experiment with integration tests using MiniKdc, minicluster, etc. I managed to get two of each running in the test, but whenever I change the credentials for the user running the clusters, the one in the other realm crashes. I suspect the problem is that the two "fake" clusters in this test run in the same JVM, so I got stuck at that point while trying to implement automated tests. I haven't worked on this further lately, and I'm not sure whether this is testable at all. As for end-to-end tests, we did have this tested and even deployed in a production environment, where it worked well as an alternative to CopyTable. Maybe we could relax our automated-test policy for pushing this?
[jira] [Commented] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677077#comment-16677077 ] Hadoop QA commented on HBASE-20586: ---

| (/) *{color:green}+1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange} 0m 0s{color} | {color:orange} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 2s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 5s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 9m 58s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 19s{color} | {color:green} hbase-mapreduce in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 19s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20586 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12923490/HBASE-20586.master.001.patch |
| Optional Tests | dupname asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 1472012ab652 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 86cbbdea9e |
| maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/14963/testReport/ |
| Max. process+thread count | 5323 (vs. ulimit of 1) |
| modules | C: hbase-mapreduce U: hbase-mapreduce |
| Console output |
[jira] [Commented] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676992#comment-16676992 ] Sean Busbey commented on HBASE-20586: - What are we waiting on here? Basically for a committer to set up two clusters with different Kerberos realms and try it out?
[jira] [Commented] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541971#comment-16541971 ] Hadoop QA commented on HBASE-20586: ---

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 30s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 39s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 13m 20s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 2s{color} | {color:green} hbase-mapreduce in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 11s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20586 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12923490/HBASE-20586.master.001.patch |
| Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux ff479255ab0e 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 08:53:28 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 3fc23fe930 |
| maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/13609/testReport/ |
| Max. process+thread count | 4712 (vs. ulimit of 1) |
| modules | C: hbase-mapreduce U: hbase-mapreduce |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/13609/console |
[jira] [Commented] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541887#comment-16541887 ] Sean Busbey commented on HBASE-20586: - I'm not sure it's possible to get a unit test going for this, especially if MiniKDC can't do it. If we did this as an integration test outside of Maven that manually runs two different MiniKDCs, could we make it work?
[jira] [Commented] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters
[ https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16475922#comment-16475922 ] Wellington Chevreuil commented on HBASE-20586: -- Initial patch proposal. I have verified it with manual tests, but I'm not sure how to add automated tests for this. Is there any existing template or example integration test for cross-realm domains? I had looked at some Kerberos-related tests using the *MiniKdc* class, but I don't think it can emulate cross-realm environments. Any suggestions and/or ideas on testing are welcome.
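For context, the manual verification mentioned above generally follows the usual two-step HashTable/SyncTable flow, driven here programmatically via ToolRunner as a rough sketch. All host names, ZooKeeper quorum strings, table names, and paths below are illustrative placeholders, not values from the patch.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.HashTable;
import org.apache.hadoop.hbase.mapreduce.SyncTable;
import org.apache.hadoop.util.ToolRunner;

public class CrossRealmSyncExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // Step 1 (against the source cluster): hash the source table into
        // an output directory on the source cluster's HDFS.
        int rc = ToolRunner.run(conf, new HashTable(conf), new String[] {
            "my_table", "hdfs://source-nn:8020/hashes/my_table" });

        // Step 2 (against the target cluster): compare and sync, pointing
        // back at the remote source cluster's ZK quorum and the remote
        // HashTable output -- the cross-realm case this issue addresses.
        if (rc == 0) {
            rc = ToolRunner.run(conf, new SyncTable(conf), new String[] {
                "--sourcezkcluster=source-zk1,source-zk2,source-zk3:2181:/hbase",
                "hdfs://source-nn:8020/hashes/my_table",
                "my_table", "my_table" });
        }
        System.exit(rc);
    }
}
```

With both `sourcehashdir` and `--sourcezkcluster` remote and in a different realm, step 2 is exactly where the unpatched tool hits the authentication failures quoted in the issue description.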