[GitHub] [hbase] Apache-HBase commented on pull request #3800: HBASE-26347 Support detect and exclude slow DNs in fan-out of WAL
Apache-HBase commented on pull request #3800:
URL: https://github.com/apache/hbase/pull/3800#issuecomment-988577908

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:---:|---:|:---|:---|
| +0 :ok: | reexec | 0m 29s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 29s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 13s | master passed |
| +1 :green_heart: | compile | 1m 35s | master passed |
| +1 :green_heart: | shadedjars | 9m 41s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 57s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 23s | the patch passed |
| +1 :green_heart: | compile | 1m 33s | the patch passed |
| +1 :green_heart: | javac | 1m 33s | the patch passed |
| +1 :green_heart: | shadedjars | 9m 31s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 58s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 36s | hbase-asyncfs in the patch passed. |
| +1 :green_heart: | unit | 151m 49s | hbase-server in the patch passed. |
| | | | 189m 58s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/10/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3800 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux c46043c3b293 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / ca3ba494cb |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/10/testReport/ |
| Max. process+thread count | 4958 (vs. ulimit of 3) |
| modules | C: hbase-asyncfs hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/10/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3925: HBASE-26027 The calling of HTable.batch blocked at AsyncRequestFutureImpl.waitUntilDone caused by ArrayStoreException
Apache-HBase commented on pull request #3925:
URL: https://github.com/apache/hbase/pull/3925#issuecomment-988568886

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:---:|---:|:---|:---|
| +0 :ok: | reexec | 0m 33s | Docker mode activated. |
| -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 49s | branch-2 passed |
| +1 :green_heart: | compile | 1m 24s | branch-2 passed |
| +1 :green_heart: | shadedjars | 6m 34s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 4s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 33s | the patch passed |
| +1 :green_heart: | compile | 1m 27s | the patch passed |
| +1 :green_heart: | javac | 1m 27s | the patch passed |
| +1 :green_heart: | shadedjars | 6m 30s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 2s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 30s | hbase-client in the patch passed. |
| +1 :green_heart: | unit | 145m 44s | hbase-server in the patch passed. |
| | | | 177m 14s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3925/2/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3925 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 6af28e3e78a6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / 140b5d8d26 |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3925/2/testReport/ |
| Max. process+thread count | 3990 (vs. ulimit of 12500) |
| modules | C: hbase-client hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3925/2/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3800: HBASE-26347 Support detect and exclude slow DNs in fan-out of WAL
Apache-HBase commented on pull request #3800:
URL: https://github.com/apache/hbase/pull/3800#issuecomment-988516097

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:---:|---:|:---|:---|
| +0 :ok: | reexec | 1m 5s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 33s | master passed |
| +1 :green_heart: | compile | 3m 56s | master passed |
| +1 :green_heart: | checkstyle | 1m 22s | master passed |
| +1 :green_heart: | spotbugs | 2m 45s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 14s | the patch passed |
| +1 :green_heart: | compile | 3m 51s | the patch passed |
| -0 :warning: | javac | 0m 27s | hbase-asyncfs generated 2 new + 22 unchanged - 7 fixed = 24 total (was 29) |
| +1 :green_heart: | checkstyle | 0m 11s | hbase-asyncfs: The patch generated 0 new + 1 unchanged - 1 fixed = 1 total (was 2) |
| -0 :warning: | checkstyle | 1m 9s | hbase-server: The patch generated 1 new + 10 unchanged - 0 fixed = 11 total (was 10) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 21m 30s | Patch does not cause any errors with Hadoop 3.1.2 3.2.2 3.3.1. |
| +1 :green_heart: | spotbugs | 3m 4s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 22s | The patch does not generate ASF License warnings. |
| | | | 57m 28s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/10/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3800 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 1e6b8568a482 4.15.0-153-generic #160-Ubuntu SMP Thu Jul 29 06:54:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / ca3ba494cb |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| javac | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/10/artifact/yetus-general-check/output/diff-compile-javac-hbase-asyncfs.txt |
| checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/10/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt |
| Max. process+thread count | 86 (vs. ulimit of 3) |
| modules | C: hbase-asyncfs hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3800/10/console |
| versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3925: HBASE-26027 The calling of HTable.batch blocked at AsyncRequestFutureImpl.waitUntilDone caused by ArrayStoreException
Apache-HBase commented on pull request #3925:
URL: https://github.com/apache/hbase/pull/3925#issuecomment-988513843

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:---:|---:|:---|:---|
| +0 :ok: | reexec | 1m 47s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 39s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 16s | branch-2 passed |
| +1 :green_heart: | compile | 4m 52s | branch-2 passed |
| +1 :green_heart: | checkstyle | 1m 53s | branch-2 passed |
| +1 :green_heart: | spotbugs | 3m 36s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 0s | the patch passed |
| +1 :green_heart: | compile | 5m 0s | the patch passed |
| +1 :green_heart: | javac | 5m 0s | the patch passed |
| -0 :warning: | checkstyle | 1m 20s | hbase-server: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 13m 59s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
| +1 :green_heart: | spotbugs | 3m 57s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 26s | The patch does not generate ASF License warnings. |
| | | | 55m 33s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3925/2/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3925 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux bbf214a640ab 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / 140b5d8d26 |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3925/2/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt |
| Max. process+thread count | 96 (vs. ulimit of 12500) |
| modules | C: hbase-client hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3925/2/console |
| versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] bbeaudreault commented on pull request #3926: HBASE-26546: hbase-shaded-client missing required thirdparty classes under hadoop 3.3.1
bbeaudreault commented on pull request #3926:
URL: https://github.com/apache/hbase/pull/3926#issuecomment-988503569

I did some investigation and cannot figure out why this works without tweaking our allow list. In fact, I checked out the original state prior to https://github.com/apache/hbase/pull/3184 and (after applying https://github.com/apache/hbase/pull/3299, which adds a different necessary exclusion) it also succeeds there.

For reference, I'm running the following locally to test (which confirms the build runs and passes the invariant checks):

`mvn clean verify -Phadoop-3.0 -Dhadoop.profile=3.0 -Dhadoop-three.version=3.3.1 -Dcheckstyle.skip -DskipTests`

I've also manually verified that the jars look correct:

```
# should be zero
$ jar -tf hbase-shaded/hbase-shaded-mapreduce/target/hbase-shaded-mapreduce-3.0.0-alpha-2-SNAPSHOT.jar | grep -c "org/apache/hadoop/thirdparty"
0

# should be zero
$ jar -tf hbase-shaded/hbase-shaded-client-byo-hadoop/target/hbase-shaded-client-byo-hadoop-3.0.0-alpha-2-SNAPSHOT.jar | grep -c "org/apache/hadoop/thirdparty"
0

# should be non-zero
$ jar -tf hbase-shaded/hbase-shaded-client/target/hbase-shaded-client-3.0.0-alpha-2-SNAPSHOT.jar | grep -c "org/apache/hadoop/thirdparty"
3092
```

I also diffed the `jar -tf` content between master and this branch; the two byo-hadoop jars were identical, while hbase-shaded-client only included the new thirdparty classes, as expected. Finally, I pushed this to our internal fork and verified that my end client, which was previously failing, is now succeeding.

It seems like this should be good to merge as-is. @saintstack, since you originally ran into this failure, what do you think?
[GitHub] [hbase] sunhelly commented on a change in pull request #3800: HBASE-26347 Support detect and exclude slow DNs in fan-out of WAL
sunhelly commented on a change in pull request #3800:
URL: https://github.com/apache/hbase/pull/3800#discussion_r764543482

## File path: hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/io/asyncfs/monitor/ExcludeDatanodeManager.java

```
@@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.asyncfs.monitor;
+
+import java.util.Collections;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.TimeUnit;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.conf.ConfigurationObserver;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hbase.thirdparty.com.google.common.cache.Cache;
+import org.apache.hbase.thirdparty.com.google.common.cache.CacheBuilder;
+
+/**
+ * The class to manage the excluded datanodes of the WALs on the regionserver.
+ */
+@InterfaceAudience.Private
+public class ExcludeDatanodeManager implements ConfigurationObserver {
+  private static final Logger LOG = LoggerFactory.getLogger(ExcludeDatanodeManager.class);
+
+  /**
+   * Configure for the max count of the excluded datanodes.
+   */
+  private static final String WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT_KEY =
+    "hbase.regionserver.async.wal.max.exclude.datanode.count";
+  private static final int DEFAULT_WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT = 3;
+
+  /**
+   * Configure for the TTL time of the excluded datanodes.
+   */
+  private static final String WAL_EXCLUDE_DATANODE_TTL_KEY =
+    "hbase.regionserver.async.wal.exclude.datanode.info.ttl.hour";
+  private static final int DEFAULT_WAL_EXCLUDE_DATANODE_TTL = 6; // 6 hours
+
+  private volatile Cache<DatanodeInfo, Long> excludeDNsCache;
+  private final int maxExcludeDNCount;
+  private final Configuration conf;
+  // This is a map of providerId->StreamSlowMonitor
+  private final Map<String, StreamSlowMonitor> streamSlowMonitors =
+    new ConcurrentHashMap<>(1);
+
+  public ExcludeDatanodeManager(Configuration conf) {
+    this.conf = conf;
+    this.maxExcludeDNCount = conf.getInt(WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT_KEY,
+      DEFAULT_WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT);
+    this.excludeDNsCache = CacheBuilder.newBuilder()
+      .expireAfterWrite(this.conf.getLong(WAL_EXCLUDE_DATANODE_TTL_KEY,
+        DEFAULT_WAL_EXCLUDE_DATANODE_TTL),
+        TimeUnit.HOURS)
+      .maximumSize(this.maxExcludeDNCount)
+      .concurrencyLevel(10)
```

Review comment:

Here we can use the default concurrencyLevel, which is 4.
[GitHub] [hbase] sunhelly commented on a change in pull request #3800: HBASE-26347 Support detect and exclude slow DNs in fan-out of WAL
sunhelly commented on a change in pull request #3800:
URL: https://github.com/apache/hbase/pull/3800#discussion_r764543345

## File path: hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/io/asyncfs/monitor/ExcludeDatanodeManager.java

```
@@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.asyncfs.monitor;
+
+import java.util.Collections;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.TimeUnit;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.conf.ConfigurationObserver;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hbase.thirdparty.com.google.common.cache.Cache;
+import org.apache.hbase.thirdparty.com.google.common.cache.CacheBuilder;
+
+/**
+ * The class to manage the excluded datanodes of the WALs on the regionserver.
+ */
+@InterfaceAudience.Private
+public class ExcludeDatanodeManager implements ConfigurationObserver {
+  private static final Logger LOG = LoggerFactory.getLogger(ExcludeDatanodeManager.class);
+
+  /**
+   * Configure for the max count of the excluded datanodes.
+   */
+  private static final String WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT_KEY =
+    "hbase.regionserver.async.wal.max.exclude.datanode.count";
+  private static final int DEFAULT_WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT = 3;
+
+  /**
+   * Configure for the TTL time of the excluded datanodes.
+   */
+  private static final String WAL_EXCLUDE_DATANODE_TTL_KEY =
+    "hbase.regionserver.async.wal.exclude.datanode.info.ttl.hour";
+  private static final int DEFAULT_WAL_EXCLUDE_DATANODE_TTL = 6; // 6 hours
+
+  private volatile Cache<DatanodeInfo, Long> excludeDNsCache;
+  private final int maxExcludeDNCount;
+  private final Configuration conf;
+  // This is a map of providerId->StreamSlowMonitor
+  private final Map<String, StreamSlowMonitor> streamSlowMonitors =
+    new ConcurrentHashMap<>(1);
+
+  public ExcludeDatanodeManager(Configuration conf) {
+    this.conf = conf;
+    this.maxExcludeDNCount = conf.getInt(WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT_KEY,
+      DEFAULT_WAL_MAX_EXCLUDE_SLOW_DATANODE_COUNT);
+    this.excludeDNsCache = CacheBuilder.newBuilder()
+      .expireAfterWrite(this.conf.getLong(WAL_EXCLUDE_DATANODE_TTL_KEY,
+        DEFAULT_WAL_EXCLUDE_DATANODE_TTL),
+        TimeUnit.HOURS)
+      .maximumSize(this.maxExcludeDNCount)
+      .concurrencyLevel(10)
+      .build();
+  }
+
+  /**
+   * Try to add a datanode to the regionserver excluding cache
+   * @param datanodeInfo the datanode to be added to the excluded cache
+   * @param cause the cause for which the datanode is to be excluded
+   * @return True if the datanode is added to the regionserver excluding cache, false otherwise
+   */
+  public boolean tryAddExcludeDN(DatanodeInfo datanodeInfo, String cause) {
+    boolean alreadyMarkedSlow = getExcludeDNs().containsKey(datanodeInfo);
+    if (excludeDNsCache.size() < maxExcludeDNCount) {
+      if (!alreadyMarkedSlow) {
+        excludeDNsCache.put(datanodeInfo, System.currentTimeMillis());
```

Review comment:

Hi, @Apache9, do you mean that in the put method of the Guava cache, concurrent put operations on different segments may cause the cache size to grow larger than the configured maximumSize? Within each segment there is a lock for the put operation, so the table update and the eviction both run under that lock. But since the cache here has maximumSize=3 (no matter what the concurrencyLevel is), the segment count is always 1, so I think concurrent puts will be serialized internally through the lock. This is the source code where LocalCache sets the segment count:

```
int segmentShift = 0;
int segmentCount = 1;
while (segmentCount < concurrencyLevel
    && (!evictsBySize() || segmentCount * 20 <= maxWeight)) {
  ++segmentShift;
  segmentCount <<= 1;
}
```

Here evictsBySize() is always true, because maxWeight = maximumSize = 3 (default config) >= 0, and segmentCount * 20 <= maxWeight is false in most circumstances, because we will not let the exclude cache contains more than
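The argument above can be checked by replaying the quoted LocalCache computation. The method below is a stand-alone re-implementation of the quoted loop for illustration (it is not the actual Guava class): with size-based eviction and maxWeight=3, the check `segmentCount * 20 <= maxWeight` fails on the very first iteration, so the cache ends up with a single segment regardless of the configured concurrencyLevel.

```java
// Re-implementation of the LocalCache segment-count loop quoted above,
// for illustration only (not the actual Guava class).
public class SegmentCountSketch {
    static int segmentCount(int concurrencyLevel, long maxWeight, boolean evictsBySize) {
        int segmentShift = 0;
        int segmentCount = 1;
        // Grow toward the concurrency level, but stop as soon as size-based
        // eviction forbids further splitting (segmentCount * 20 > maxWeight).
        while (segmentCount < concurrencyLevel
            && (!evictsBySize || segmentCount * 20 <= maxWeight)) {
            ++segmentShift;
            segmentCount <<= 1;
        }
        return segmentCount;
    }

    public static void main(String[] args) {
        // maximumSize=3 with eviction: a single segment, even at concurrencyLevel=10.
        System.out.println(segmentCount(10, 3, true));  // 1
        // Without size-based eviction the level rounds up to a power of two.
        System.out.println(segmentCount(10, 3, false)); // 16
    }
}
```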
[jira] [Commented] (HBASE-26525) Use unique thread name for group WALs
[ https://issues.apache.org/jira/browse/HBASE-26525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17454938#comment-17454938 ]

Hudson commented on HBASE-26525:
--------------------------------

Results for branch branch-2.4
[build #254 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/254/]: (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/254/General_20Nightly_20Build_20Report/]

(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/254/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]

(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/254/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]

(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/254/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]

(/) {color:green}+1 source release artifact{color}
-- See build output for details.

(/) {color:green}+1 client integration test{color}

> Use unique thread name for group WALs
> -------------------------------------
>
>          Key: HBASE-26525
>          URL: https://issues.apache.org/jira/browse/HBASE-26525
>      Project: HBase
>   Issue Type: Improvement
>   Components: wal
> Affects Versions: 3.0.0-alpha-1, 2.0.0
>     Reporter: Xiaolin Ha
>     Assignee: Xiaolin Ha
>     Priority: Major
>      Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.9
>
>  Attachments: image-2021-12-01-16-20-18-912.png, image-2021-12-01-16-21-18-032.png, image-2021-12-02-17-38-21-959.png
>
> The consumer threads for each WAL group have the same name, since they only use the WAL root dir in the thread name.
> {code:java}
> new ThreadFactoryBuilder().setNameFormat("AsyncFSWAL-%d-" + rootDir.toString()).
>   setDaemon(true).build()); {code}
> For example, for BoundedGroupingStrategy, the consumer thread names are as follows,
> !image-2021-12-01-16-20-18-912.png|width=1199,height=130!
> We can use the log prefix instead; the consumer thread names will be changed to
> !image-2021-12-02-17-38-21-959.png|width=1102,height=197!
> So we can clearly see what happens from the log and the jstack info if something goes wrong with the WAL.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
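The naming change described in HBASE-26525 can be sketched with plain JDK types (HBase itself uses the relocated Guava ThreadFactoryBuilder, and the names below are illustrative, not the actual patch): derive the consumer thread name from a per-WAL-group log prefix rather than the shared WAL root dir, so each group's thread is distinguishable in logs and jstack output.

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class WalThreadNaming {
    // "logPrefix" stands in for the per-WAL-group log prefix; the counter
    // keeps names unique even if a group creates several consumer threads.
    static ThreadFactory consumerThreadFactory(String logPrefix) {
        AtomicInteger id = new AtomicInteger();
        return r -> {
            Thread t = new Thread(r, "AsyncFSWAL-" + id.getAndIncrement() + "-" + logPrefix);
            t.setDaemon(true);
            return t;
        };
    }
}
```

With a factory per WAL group, `consumerThreadFactory("regionserver-group-0")` and `consumerThreadFactory("regionserver-group-1")` yield threads whose names differ by group, which is the property the issue asks for.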
[jira] [Commented] (HBASE-24870) Ignore TestAsyncTableRSCrashPublish
[ https://issues.apache.org/jira/browse/HBASE-24870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17454937#comment-17454937 ]

Hudson commented on HBASE-24870:
--------------------------------

Results for branch branch-2.4
[build #254 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/254/]: (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/254/General_20Nightly_20Build_20Report/]

(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/254/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]

(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/254/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]

(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/254/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]

(/) {color:green}+1 source release artifact{color}
-- See build output for details.

(/) {color:green}+1 client integration test{color}

> Ignore TestAsyncTableRSCrashPublish
> -----------------------------------
>
>          Key: HBASE-24870
>          URL: https://issues.apache.org/jira/browse/HBASE-24870
>      Project: HBase
>   Issue Type: Sub-task
>     Reporter: Guanghao Zhang
>     Assignee: Guanghao Zhang
>     Priority: Major
>      Fix For: 2.2.6, 2.5.0, 2.3.8, 2.4.9
>
> [ERROR] Failures:
> [ERROR] TestAsyncTableRSCrashPublish.test:94 Waiting timed out after [60,000] msec
>
> I have met this failure many times when running runAllTests, and other developers have met it too when voting on RCs. Let's ignore this test first and re-enable it after the parent issue is resolved.
[GitHub] [hbase] Apache-HBase commented on pull request #3906: HBASE-26472 Adhere to semantic conventions regarding table data operations
Apache-HBase commented on pull request #3906:
URL: https://github.com/apache/hbase/pull/3906#issuecomment-988430137

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:---:|---:|:---|:---|
| +0 :ok: | reexec | 1m 7s | Docker mode activated. |
| -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 5m 13s | master passed |
| +1 :green_heart: | compile | 0m 58s | master passed |
| +1 :green_heart: | shadedjars | 9m 10s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 51s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 5m 1s | the patch passed |
| +1 :green_heart: | compile | 0m 57s | the patch passed |
| +1 :green_heart: | javac | 0m 57s | the patch passed |
| +1 :green_heart: | shadedjars | 9m 8s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 49s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 13s | hbase-common in the patch passed. |
| +1 :green_heart: | unit | 1m 43s | hbase-client in the patch passed. |
| | | | 38m 58s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3906/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3906 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 028a61b2a692 4.15.0-153-generic #160-Ubuntu SMP Thu Jul 29 06:54:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / ca3ba494cb |
| Default Java | AdoptOpenJDK-11.0.10+9 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3906/5/testReport/ |
| Max. process+thread count | 216 (vs. ulimit of 3) |
| modules | C: hbase-common hbase-client U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3906/5/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3906: HBASE-26472 Adhere to semantic conventions regarding table data operations
Apache-HBase commented on pull request #3906:
URL: https://github.com/apache/hbase/pull/3906#issuecomment-988429450

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:---:|---:|:---|:---|
| +0 :ok: | reexec | 3m 52s | Docker mode activated. |
| -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 6s | master passed |
| +1 :green_heart: | compile | 0m 52s | master passed |
| +1 :green_heart: | shadedjars | 8m 10s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 47s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 53s | the patch passed |
| +1 :green_heart: | compile | 0m 52s | the patch passed |
| +1 :green_heart: | javac | 0m 52s | the patch passed |
| +1 :green_heart: | shadedjars | 8m 13s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 45s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 48s | hbase-common in the patch passed. |
| +1 :green_heart: | unit | 1m 31s | hbase-client in the patch passed. |
| | | | 36m 48s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3906/5/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3906 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux db4b1a4bb548 4.15.0-161-generic #169-Ubuntu SMP Fri Oct 15 13:41:54 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / ca3ba494cb |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3906/5/testReport/ |
| Max. process+thread count | 346 (vs. ulimit of 3) |
| modules | C: hbase-common hbase-client U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3906/5/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] ndimiduk commented on a change in pull request #3906: HBASE-26472 Adhere to semantic conventions regarding table data operations
ndimiduk commented on a change in pull request #3906: URL: https://github.com/apache/hbase/pull/3906#discussion_r764483238 ## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncTableImpl.java ## @@ -220,35 +224,47 @@ private static Result toResult(HBaseRpcController controller, MutateResponse res @Override public CompletableFuture<Result> get(Get get) { +final Supplier<Span> supplier = new TableOperationSpanBuilder() Review comment: The operation argument is polymorphic, so I'd have to implement several identical methods, each with a different operation type in their signature. I have wrapped up invocations of `TableOperationSpanBuilder` as described. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
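The pattern discussed in the review comment — wrapping span construction in a Supplier so that one generic helper serves every polymorphic operation type, instead of writing one identical overload per type — can be sketched as follows. This is a simplified illustration, not the actual HBase code: the `TableOperationSpanBuilder` here is a stand-in that builds a span name as a String rather than an OpenTelemetry Span, and the method names are assumptions.

```java
import java.util.function.Supplier;

class SpanSupplierSketch {

  // Stand-in for the real TableOperationSpanBuilder; the real class builds an
  // OpenTelemetry Span, this sketch just builds the span name as a String.
  static final class TableOperationSpanBuilder {
    private String table;
    private String operation;

    TableOperationSpanBuilder setTable(String table) {
      this.table = table;
      return this;
    }

    // In RawAsyncTableImpl the operation argument is polymorphic (Get, Put,
    // Delete, ...); a plain String keeps this sketch self-contained.
    TableOperationSpanBuilder setOperation(String operation) {
      this.operation = operation;
      return this;
    }

    String build() {
      return operation + " " + table;
    }
  }

  // One generic helper instead of several identical per-type overloads: the
  // Supplier defers span construction until the traced call actually runs.
  static String traced(Supplier<String> spanName) {
    return spanName.get();
  }

  public static void main(String[] args) {
    Supplier<String> supplier =
        () -> new TableOperationSpanBuilder().setTable("my-table").setOperation("GET").build();
    System.out.println(traced(supplier)); // prints: GET my-table
  }
}
```

The Supplier indirection is what lets `get(Get get)`, `put(Put put)`, and the other table methods each hand their own builder invocation to a single shared tracing helper.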
[jira] [Commented] (HBASE-26067) Change the way on how we track store file list
[ https://issues.apache.org/jira/browse/HBASE-26067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454859#comment-17454859 ] Hudson commented on HBASE-26067: Results for branch HBASE-26067 [build #8 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/HBASE-26067/8/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/HBASE-26067/8/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/HBASE-26067/8/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/HBASE-26067/8/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Change the way on how we track store file list > -- > > Key: HBASE-26067 > URL: https://issues.apache.org/jira/browse/HBASE-26067 > Project: HBase > Issue Type: Umbrella > Components: HFile >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > > Open a separate jira to track the work since it cannot be fully included in > HBASE-24749. > I think this could be landed prior to HBASE-24749, as if this works, we > could have different implementations for tracking the store file list. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [hbase] Apache-HBase commented on pull request #3926: HBASE-26546: hbase-shaded-client missing required thirdparty classes under hadoop 3.3.1
Apache-HBase commented on pull request #3926: URL: https://github.com/apache/hbase/pull/3926#issuecomment-988283425 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 2s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 31s | master passed | | +1 :green_heart: | compile | 0m 34s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 14s | the patch passed | | +1 :green_heart: | compile | 0m 32s | the patch passed | | +1 :green_heart: | javac | 0m 32s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. | | +1 :green_heart: | hadoopcheck | 21m 39s | Patch does not cause any errors with Hadoop 3.1.2 3.2.2 3.3.1. | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 12s | The patch does not generate ASF License warnings. | | | | 41m 41s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3926/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3926 | | JIRA Issue | HBASE-26546 | | Optional Tests | dupname asflicense javac hadoopcheck xml compile | | uname | Linux f1b749729142 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / ca3ba494cb | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Max. process+thread count | 86 (vs. 
ulimit of 3) | | modules | C: hbase-shaded U: hbase-shaded | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3926/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3926: HBASE-26546: hbase-shaded-client missing required thirdparty classes under hadoop 3.3.1
Apache-HBase commented on pull request #3926: URL: https://github.com/apache/hbase/pull/3926#issuecomment-988277763 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 0s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 50s | master passed | | +1 :green_heart: | compile | 0m 33s | master passed | | +1 :green_heart: | shadedjars | 8m 18s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 19s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 26s | the patch passed | | +1 :green_heart: | compile | 0m 32s | the patch passed | | +1 :green_heart: | javac | 0m 32s | the patch passed | | +1 :green_heart: | shadedjars | 8m 16s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 17s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 2s | hbase-shaded in the patch passed. 
| | | | 30m 54s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3926/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3926 | | JIRA Issue | HBASE-26546 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux ef39c310b5b8 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / ca3ba494cb | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3926/1/testReport/ | | Max. process+thread count | 476 (vs. ulimit of 3) | | modules | C: hbase-shaded U: hbase-shaded | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3926/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3926: HBASE-26546: hbase-shaded-client missing required thirdparty classes under hadoop 3.3.1
Apache-HBase commented on pull request #3926: URL: https://github.com/apache/hbase/pull/3926#issuecomment-988276853 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 26s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 12s | master passed | | +1 :green_heart: | compile | 0m 29s | master passed | | +1 :green_heart: | shadedjars | 8m 23s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 18s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 54s | the patch passed | | +1 :green_heart: | compile | 0m 30s | the patch passed | | +1 :green_heart: | javac | 0m 30s | the patch passed | | +1 :green_heart: | shadedjars | 8m 24s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 16s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 7s | hbase-shaded in the patch passed. 
| | | | 29m 15s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3926/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3926 | | JIRA Issue | HBASE-26546 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux c6877e0ec21f 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / ca3ba494cb | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3926/1/testReport/ | | Max. process+thread count | 482 (vs. ulimit of 3) | | modules | C: hbase-shaded U: hbase-shaded | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3926/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3925: HBASE-26027 The calling of HTable.batch blocked at AsyncRequestFutureImpl.waitUntilDone caused by ArrayStoreException
Apache-HBase commented on pull request #3925: URL: https://github.com/apache/hbase/pull/3925#issuecomment-988264202 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 3m 20s | Docker mode activated. | | -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 19s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 5m 42s | branch-2 passed | | +1 :green_heart: | compile | 2m 11s | branch-2 passed | | +1 :green_heart: | shadedjars | 9m 3s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 33s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 5m 26s | the patch passed | | +1 :green_heart: | compile | 2m 12s | the patch passed | | +1 :green_heart: | javac | 2m 12s | the patch passed | | +1 :green_heart: | shadedjars | 9m 22s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 24s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 3m 49s | hbase-client in the patch failed. | | +1 :green_heart: | unit | 275m 23s | hbase-server in the patch passed. 
| | | | 322m 48s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3925/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3925 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 685c74beebe8 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 140b5d8d26 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3925/1/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-client.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3925/1/testReport/ | | Max. process+thread count | 2631 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3925/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] bbeaudreault opened a new pull request #3926: HBASE-26546: hbase-shaded-client missing required thirdparty classes under hadoop 3.3.1
bbeaudreault opened a new pull request #3926: URL: https://github.com/apache/hbase/pull/3926 I expected there would be more to this, but this ended up working locally without needing to modify our invariant checks. I'm continuing to look into why, but wanted to get the full build started. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (HBASE-26546) hbase-shaded-client missing required thirdparty classes under hadoop 3.3.1
[ https://issues.apache.org/jira/browse/HBASE-26546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Beaudreault reassigned HBASE-26546: - Assignee: Bryan Beaudreault > hbase-shaded-client missing required thirdparty classes under hadoop 3.3.1 > -- > > Key: HBASE-26546 > URL: https://issues.apache.org/jira/browse/HBASE-26546 > Project: HBase > Issue Type: Bug >Reporter: Bryan Beaudreault >Assignee: Bryan Beaudreault >Priority: Major > > In HBASE-25792, the shaded thirdparty libraries from hadoop were removed from > the hbase-shaded-client fat jar to satisfy invariant checks. Unfortunately > this causes users of hbase-shaded-client to fail, because required classes > are not available at runtime. > The specific failure I'm seeing is when trying to call new Configuration(), > which results in: > > > {code:java} > Caused by: java.lang.NoClassDefFoundError: > org/apache/hadoop/thirdparty/com/google/common/base/Preconditions > at > org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:430) > > at > org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:443) > > at > org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:525){code} > > > If you take a look at the hbase-shaded-client fat jar, it contains the > org.apache.hadoop.conf.Configuration class as you'd expect. If you decompile > that class (or look at the 3.3.1 source), you'll see that there is an import > for org.apache.hadoop.thirdparty.com.google.common.base.Preconditions but the > fat jar does not provide it. > > One way for clients to get around this is to add an explicit dependency on > hadoop-shaded-guava, but this is problematic for a few reasons: > > - it's best practice to use maven-dependency-plugin to disallow declared, > unused dependencies (which this would be) > - it requires users to continually keep the version of hadoop-shaded-guava > up-to-date over time. 
> - it only covers guava, but there is also protobuf and potentially other > shaded libraries in the future. > > I think we should remove the exclusion of > {{org/apache/hadoop/thirdparty/**/*}} from the shading config and instead add > that pattern to the allowlist so that hbase-shaded-client is all clients need > to get started with hbase. -- This message was sent by Atlassian Jira (v8.20.1#820001)
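The change proposed in the last paragraph could look roughly like the following maven-shade-plugin filter adjustment in the hbase-shaded build. This is a sketch only: the exact pom structure, artifact patterns, and the jar invariants-check allowlist in the real hbase-shaded modules differ.

{code:xml}
<!-- Sketch: a filter of this shape is the kind of exclusion HBASE-25792 added,
     stripping hadoop's relocated thirdparty classes from the fat jar. The
     proposal is to drop this exclude (and allowlist the pattern in the jar
     invariants check) so that imports like
     org.apache.hadoop.thirdparty.com.google.common.base.Preconditions
     resolve at runtime from hbase-shaded-client alone. -->
<filter>
  <artifact>*:*</artifact>
  <excludes>
    <exclude>org/apache/hadoop/thirdparty/**/*</exclude>
  </excludes>
</filter>
{code}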
[jira] [Commented] (HBASE-26546) hbase-shaded-client missing required thirdparty classes under hadoop 3.3.1
[ https://issues.apache.org/jira/browse/HBASE-26546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454829#comment-17454829 ] Bryan Beaudreault commented on HBASE-26546: --- I'll give an implementation a shot. I mostly wanted to run this by you before proceeding, in case I was missing some context. Thanks! > hbase-shaded-client missing required thirdparty classes under hadoop 3.3.1 > -- > > Key: HBASE-26546 > URL: https://issues.apache.org/jira/browse/HBASE-26546 > Project: HBase > Issue Type: Bug >Reporter: Bryan Beaudreault >Priority: Major -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Comment Edited] (HBASE-26546) hbase-shaded-client missing required thirdparty classes under hadoop 3.3.1
[ https://issues.apache.org/jira/browse/HBASE-26546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454806#comment-17454806 ] Michael Stack edited comment on HBASE-26546 at 12/7/21, 8:14 PM: - Makes sense [~bbeaudreault] . Thanks for digging in. Shout if you want me to implement your suggestion (sounds like you have a better test setup than I do, though). was (Author: stack): Makes sense [~bbeaudreault] . Thanks for digging in. > hbase-shaded-client missing required thirdparty classes under hadoop 3.3.1 > -- > > Key: HBASE-26546 > URL: https://issues.apache.org/jira/browse/HBASE-26546 > Project: HBase > Issue Type: Bug >Reporter: Bryan Beaudreault >Priority: Major -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [hbase] Apache-HBase commented on pull request #3851: HBASE-26286: Add support for specifying store file tracker when restoring or cloning snapshot
Apache-HBase commented on pull request #3851: URL: https://github.com/apache/hbase/pull/3851#issuecomment-988210160 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 3s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ HBASE-26067 Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 32s | HBASE-26067 passed | | +1 :green_heart: | compile | 3m 16s | HBASE-26067 passed | | +1 :green_heart: | shadedjars | 9m 7s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 59s | HBASE-26067 passed | | -0 :warning: | patch | 11m 53s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 16s | the patch passed | | +1 :green_heart: | compile | 3m 9s | the patch passed | | +1 :green_heart: | javac | 3m 9s | the patch passed | | +1 :green_heart: | shadedjars | 9m 8s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 58s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 49s | hbase-protocol-shaded in the patch passed. | | +1 :green_heart: | unit | 1m 35s | hbase-client in the patch passed. | | -1 :x: | unit | 225m 29s | hbase-server in the patch failed. | | +1 :green_heart: | unit | 8m 40s | hbase-thrift in the patch passed. | | +1 :green_heart: | unit | 8m 22s | hbase-shell in the patch passed. 
| | | | 286m 36s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3851/5/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3851 | | JIRA Issue | HBASE-26286 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 3dfd5163bbcf 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-26067 / 4aa3f47aa2 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3851/5/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3851/5/testReport/ | | Max. process+thread count | 3359 (vs. ulimit of 3) | | modules | C: hbase-protocol-shaded hbase-client hbase-server hbase-thrift hbase-shell U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3851/5/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-26546) hbase-shaded-client missing required thirdparty classes under hadoop 3.3.1
[ https://issues.apache.org/jira/browse/HBASE-26546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454806#comment-17454806 ] Michael Stack commented on HBASE-26546: --- Makes sense [~bbeaudreault] . Thanks for digging in. > hbase-shaded-client missing required thirdparty classes under hadoop 3.3.1 > -- > > Key: HBASE-26546 > URL: https://issues.apache.org/jira/browse/HBASE-26546 > Project: HBase > Issue Type: Bug >Reporter: Bryan Beaudreault >Priority: Major -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [hbase] Apache-HBase commented on pull request #3925: HBASE-26027 The calling of HTable.batch blocked at AsyncRequestFutureImpl.waitUntilDone caused by ArrayStoreException
Apache-HBase commented on pull request #3925: URL: https://github.com/apache/hbase/pull/3925#issuecomment-988184799 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 14s | Docker mode activated. | | -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 5m 10s | branch-2 passed | | +1 :green_heart: | compile | 1m 53s | branch-2 passed | | +1 :green_heart: | shadedjars | 8m 0s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 16s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 22s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 40s | the patch passed | | +1 :green_heart: | compile | 1m 52s | the patch passed | | +1 :green_heart: | javac | 1m 52s | the patch passed | | +1 :green_heart: | shadedjars | 8m 3s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 15s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 3m 1s | hbase-client in the patch failed. | | +1 :green_heart: | unit | 149m 16s | hbase-server in the patch passed. 
| | | | 188m 47s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3925/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3925 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux c482221f1ea0 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 140b5d8d26 | | Default Java | AdoptOpenJDK-11.0.10+9 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3925/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-client.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3925/1/testReport/ | | Max. process+thread count | 3594 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3925/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24870) Ignore TestAsyncTableRSCrashPublish
[ https://issues.apache.org/jira/browse/HBASE-24870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454794#comment-17454794 ] Hudson commented on HBASE-24870: Results for branch branch-2 [build #412 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/412/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/412/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/412/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/412/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/412/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Ignore TestAsyncTableRSCrashPublish > --- > > Key: HBASE-24870 > URL: https://issues.apache.org/jira/browse/HBASE-24870 > Project: HBase > Issue Type: Sub-task >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Major > Fix For: 2.2.6, 2.5.0, 2.3.8, 2.4.9 > > > [ERROR] Failures: > [ERROR] TestAsyncTableRSCrashPublish.test:94 Waiting timed out after [60,000] > msec > > I have hit this failure many times when running runAllTests, and other > developers hit it too when voting on RCs. Let's ignore it for now and > re-enable it after the parent issue is resolved.
[jira] [Commented] (HBASE-26527) ArrayIndexOutOfBoundsException in KeyValueUtil.copyToNewKeyValue()
[ https://issues.apache.org/jira/browse/HBASE-26527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454793#comment-17454793 ] Hudson commented on HBASE-26527: Results for branch branch-2 [build #412 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/412/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/412/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/412/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/412/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/412/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > ArrayIndexOutOfBoundsException in KeyValueUtil.copyToNewKeyValue() > -- > > Key: HBASE-26527 > URL: https://issues.apache.org/jira/browse/HBASE-26527 > Project: HBase > Issue Type: Bug > Components: wal >Affects Versions: 2.2.7, 3.0.0-alpha-2 >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.9 > > > While investigating a Phoenix crash, I've found a possible problem in > KeyValueUtil. > When using Phoenix, we need configure (at least for older versions) > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec as a WAL codec > in HBase. 
> This codec will eventually serialize standard (not phoenix specific WAL > entries) to the WAL file, and internally converts the Cell objects to > KeyValue objects, by building a new byte[]. > This fails with an ArrayIndexOutOfBoundsException, because we allocate a > byte[] the size of Cell.getSerializedSize(), and it seems that we are > processing a Cell that does not actually serialize the column family and > later fields. > However, we are building a traditional KeyValue object for serialization, > which does serialize them, hence we run out of bytes. > I think that since we are writing a KeyValue, we should not rely on the > getSerializedSize() method of the source cell, but rather calculate the > backing array size based on how KeyValue expects its data to be serialized. > The stack trace for reference: > {noformat} > java.lang.ArrayIndexOutOfBoundsException: 9787 > at org.apache.hadoop.hbase.util.Bytes.putByte(Bytes.java:502) > at > org.apache.hadoop.hbase.KeyValueUtil.appendKeyTo(KeyValueUtil.java:142) > at > org.apache.hadoop.hbase.KeyValueUtil.appendToByteArray(KeyValueUtil.java:156) > at > org.apache.hadoop.hbase.KeyValueUtil.copyToNewByteArray(KeyValueUtil.java:133) > at > org.apache.hadoop.hbase.KeyValueUtil.copyToNewKeyValue(KeyValueUtil.java:97) > at > org.apache.phoenix.util.PhoenixKeyValueUtil.maybeCopyCell(PhoenixKeyValueUtil.java:214) > at > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueEncoder.write(IndexedWALEditCodec.java:218) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.append(ProtobufLogWriter.java:59) > at > org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:294) > at > org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:65) > at > org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendEntry(AbstractFSWAL.java:931) > at > org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1075) > at >
org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:964) > at > org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:873) > at > com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:129) > at java.lang.Thread.run(Thread.java:748) > {noformat} > Note that I am still not sure exactly what triggers this bug, one possibility > is org.apache.hadoop.hbase.ByteBufferKeyOnlyKeyValue -- This message was sent by Atlassian Jira (v8.20.1#820001)
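The under-allocation described in HBASE-26527 can be sketched numerically. The helper below is hypothetical (not HBase source); it sizes a buffer from the KeyValue wire layout itself — `<4B keylen><4B vallen><2B rowlen><row><1B famlen><family><qualifier><8B ts><1B type><value>` — which is the fix direction the report suggests, instead of trusting the source cell's `getSerializedSize()`.

```java
// Hypothetical sketch of sizing a KeyValue backing array from the KeyValue
// wire format, rather than from Cell.getSerializedSize() of the source cell.
public class KeyValueSizing {

    // Key portion: 2B row length + row + 1B family length + family
    // + qualifier + 8B timestamp + 1B type.
    static int keyLength(int rowLen, int famLen, int qualLen) {
        return 2 + rowLen + 1 + famLen + qualLen + 8 + 1;
    }

    // Full KeyValue: 4B key length + 4B value length + key + value.
    static int serializedLength(int rowLen, int famLen, int qualLen, int valLen) {
        return 4 + 4 + keyLength(rowLen, famLen, qualLen) + valLen;
    }

    public static void main(String[] args) {
        // A source cell that omits family/qualifier from its own serialized
        // form reports fewer bytes than the full KeyValue below requires,
        // so an array sized from it runs out of room mid-append.
        int full = serializedLength(3, 2, 5, 10);
        System.out.println(full); // 40: key = 2+3+1+2+5+8+1 = 22, plus 8 + 10
    }
}
```

A size computed this way matches what the KeyValue append code will actually write, byte for byte, regardless of how compact the source cell's own encoding is.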
[jira] [Commented] (HBASE-26462) Should persist restoreAcl flag in the procedure state for CloneSnapshotProcedure and RestoreSnapshotProcedure
[ https://issues.apache.org/jira/browse/HBASE-26462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454792#comment-17454792 ] Hudson commented on HBASE-26462: Results for branch branch-2 [build #412 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/412/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/412/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/412/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/412/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/412/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Should persist restoreAcl flag in the procedure state for > CloneSnapshotProcedure and RestoreSnapshotProcedure > - > > Key: HBASE-26462 > URL: https://issues.apache.org/jira/browse/HBASE-26462 > Project: HBase > Issue Type: Bug > Components: proc-v2, snapshots >Reporter: Duo Zhang >Assignee: LiangJun He >Priority: Critical > Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.9 > > > Found this when reviewing HBASE-26454. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (HBASE-16247) SparkSQL Avro serialization doesn't handle enums correctly
[ https://issues.apache.org/jira/browse/HBASE-16247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454790#comment-17454790 ] Sean Busbey commented on HBASE-16247: - AFAIK we haven't addressed the issue. unscheduling it for now until it's a priority for someone. > SparkSQL Avro serialization doesn't handle enums correctly > -- > > Key: HBASE-16247 > URL: https://issues.apache.org/jira/browse/HBASE-16247 > Project: HBase > Issue Type: Bug > Components: hbase-connectors, spark >Affects Versions: 2.0.0 >Reporter: Sean Busbey >Priority: Major > > Avro's generic api expects GenericEnumSymbol as the runtime type for > instances of fields that are of Avro type ENUM. The Avro 1.7 libraries are > lax in some cases for handling this, but the 1.8 libraries are strict. We > should proactively fix our serialization. > (the lax serialization in 1.7 fails for some nested use in unions, see > AVRO-997 for details) -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (HBASE-16247) SparkSQL Avro serialization doesn't handle enums correctly
[ https://issues.apache.org/jira/browse/HBASE-16247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-16247: Fix Version/s: (was: hbase-connectors-1.0.1) (was: 3.0.0-alpha-2) > SparkSQL Avro serialization doesn't handle enums correctly > -- > > Key: HBASE-16247 > URL: https://issues.apache.org/jira/browse/HBASE-16247 > Project: HBase > Issue Type: Bug > Components: hbase-connectors, spark >Affects Versions: 2.0.0 >Reporter: Sean Busbey >Priority: Major > > Avro's generic api expects GenericEnumSymbol as the runtime type for > instances of fields that are of Avro type ENUM. The Avro 1.7 libraries are > lax in some cases for handling this, but the 1.8 libraries are strict. We > should proactively fix our serialization. > (the lax serialization in 1.7 fails for some nested use in unions, see > AVRO-997 for details) -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (HBASE-26547) Passing an invalid DURABILITY when creating a table enters an endless loop of retries
Bryan Beaudreault created HBASE-26547: - Summary: Passing an invalid DURABILITY when creating a table enters an endless loop of retries Key: HBASE-26547 URL: https://issues.apache.org/jira/browse/HBASE-26547 Project: HBase Issue Type: Bug Reporter: Bryan Beaudreault As part of our HBase 2 upgrade, our automation copies the HTableDescriptor from a CDH5 cluster into the HBase 2 cluster, then kicks off replication. During our testing we encountered a misconfigured table, which had a DURABILITY => 'DEFAULT', when the correct value is 'USE_DEFAULT'. In HBase 1.x, any invalid value encountered by Durability.valueOf is try/caught and results in the default value of USE_DEFAULT, so this misconfiguration caused no pain in CDH5. In HBase 2.x+, the IllegalArgumentException from Durability.valueOf is no longer caught. This is probably a good thing, but unfortunately it caused the CreateTableProcedure to fail in a way that resulted in an endless loop of retries, with no backoff. This may be a general issue with CreateTableProcedure – there should probably be a pre-step which validates the HTableDescriptor and terminally fails if it is invalid. Additionally, does it make sense to have a backoff on the retry of procedures? The very rapid retry of this procedure actually caused HDFS issues because it was creating many thousands of .regioninfo files in rapid succession, enough to lag replication and cause DataNodes to be considered bad, which caused RegionServers to abort due to failed WAL writes.
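The 1.x-vs-2.x difference the report describes comes down to whether the `IllegalArgumentException` from `Enum.valueOf` is swallowed. A minimal sketch (the `Durability` enum here is a local stand-in, not the HBase class, though the constant names match HBase's):

```java
// Stand-in for org.apache.hadoop.hbase.client.Durability.
enum Durability { USE_DEFAULT, SKIP_WAL, ASYNC_WAL, SYNC_WAL, FSYNC_WAL }

public class DurabilityParsing {

    // HBase 1.x style: any unrecognized value silently falls back to
    // USE_DEFAULT, so DURABILITY => 'DEFAULT' "worked" on CDH5.
    static Durability lenientValueOf(String s) {
        try {
            return Durability.valueOf(s);
        } catch (IllegalArgumentException e) {
            return Durability.USE_DEFAULT;
        }
    }

    public static void main(String[] args) {
        System.out.println(lenientValueOf("DEFAULT")); // USE_DEFAULT

        // HBase 2.x style: the exception propagates, which is where the
        // CreateTableProcedure retry loop described above begins.
        boolean threw = false;
        try {
            Durability.valueOf("DEFAULT");
        } catch (IllegalArgumentException e) {
            threw = true;
        }
        System.out.println(threw); // true
    }
}
```

Propagating the exception is arguably correct; the report's point is that it should surface as a terminal validation failure before the procedure starts, not as an unbounded retry loop inside it.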
[jira] [Commented] (HBASE-16247) SparkSQL Avro serialization doesn't handle enums correctly
[ https://issues.apache.org/jira/browse/HBASE-16247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454787#comment-17454787 ] Tak-Lon (Stephen) Wu commented on HBASE-16247: -- should we close this issue? if not, can we move this issue to 1.1.0? > SparkSQL Avro serialization doesn't handle enums correctly > -- > > Key: HBASE-16247 > URL: https://issues.apache.org/jira/browse/HBASE-16247 > Project: HBase > Issue Type: Bug > Components: hbase-connectors, spark >Affects Versions: 2.0.0 >Reporter: Sean Busbey >Priority: Major > Fix For: hbase-connectors-1.0.1, 3.0.0-alpha-2 > > > Avro's generic api expects GenericEnumSymbol as the runtime type for > instances of fields that are of Avro type ENUM. The Avro 1.7 libraries are > lax in some cases for handling this, but the 1.8 libraries are strict. We > should proactively fix our serialization. > (the lax serialization in 1.7 fails for some nested use in unions, see > AVRO-997 for details) -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [hbase] Apache-HBase commented on pull request #3851: HBASE-26286: Add support for specifying store file tracker when restoring or cloning snapshot
Apache-HBase commented on pull request #3851: URL: https://github.com/apache/hbase/pull/3851#issuecomment-988157101 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 40s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ HBASE-26067 Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 36s | HBASE-26067 passed | | +1 :green_heart: | compile | 3m 46s | HBASE-26067 passed | | +1 :green_heart: | shadedjars | 8m 18s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 2m 31s | HBASE-26067 passed | | -0 :warning: | patch | 11m 47s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 30s | the patch passed | | +1 :green_heart: | compile | 3m 47s | the patch passed | | +1 :green_heart: | javac | 3m 47s | the patch passed | | +1 :green_heart: | shadedjars | 8m 18s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 2m 32s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 2s | hbase-protocol-shaded in the patch passed. | | +1 :green_heart: | unit | 1m 22s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 143m 43s | hbase-server in the patch passed. | | +1 :green_heart: | unit | 6m 54s | hbase-thrift in the patch passed. | | +1 :green_heart: | unit | 7m 10s | hbase-shell in the patch passed. 
| | | | 202m 20s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3851/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3851 | | JIRA Issue | HBASE-26286 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux c32c0a617f58 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-26067 / 4aa3f47aa2 | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3851/5/testReport/ | | Max. process+thread count | 4397 (vs. ulimit of 3) | | modules | C: hbase-protocol-shaded hbase-client hbase-server hbase-thrift hbase-shell U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3851/5/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-22338) LICENSE file only contains Apache 2.0
[ https://issues.apache.org/jira/browse/HBASE-22338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454784#comment-17454784 ] Tak-Lon (Stephen) Wu commented on HBASE-22338: -- is this JIRA still valid? [~rabikumar.kc] [~busbey] > LICENSE file only contains Apache 2.0 > - > > Key: HBASE-22338 > URL: https://issues.apache.org/jira/browse/HBASE-22338 > Project: HBase > Issue Type: Bug > Components: hbase-connectors >Affects Versions: connector-1.0.0 >Reporter: Peter Somogyi >Assignee: Rabi Kumar K C >Priority: Critical > Fix For: hbase-connectors-1.0.1 > > Attachments: hbase-connectors-dependency.html > > > LICENSE.md file has only Apache 2.0 licenses but we package dependencies that > use different ones. For example jcodings uses MIT. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (HBASE-25792) Filter out o.a.hadoop.thirdparty building shaded jars
[ https://issues.apache.org/jira/browse/HBASE-25792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454776#comment-17454776 ] Bryan Beaudreault commented on HBASE-25792: --- Actually [~stack] , I decided to just submit this to a new issue since this one is closed. If you don't mind, let's continue this discussion in HBASE-26546 > Filter out o.a.hadoop.thirdparty building shaded jars > - > > Key: HBASE-25792 > URL: https://issues.apache.org/jira/browse/HBASE-25792 > Project: HBase > Issue Type: Bug > Components: shading >Affects Versions: 3.0.0-alpha-1, 2.5.0, 2.4.3 >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.3 > > > Hadoop 3.3.1 (unreleased currently) shades guava. The shaded guava then trips > the check in our shading that tries to exclude hadoop bits from the fat jars > we build. > For the issue to trigger, need to build against tip of hadoop branch-3.3. You > then get this complaint: > {code} > [INFO] --- exec-maven-plugin:1.6.0:exec (check-jar-contents) @ > hbase-shaded-check-invariants --- > [ERROR] Found artifact with unexpected contents: > '/Users/stack/.m2/repository/org/apache/hbase/hbase-shaded-mapreduce/2.3.6-SNAPSHOT/hbase-shaded-mapreduce-2.3.6-SNAPSHOT.jar' > Please check the following and either correct the build or update > the allowed list with reasoning. 
> org/apache/hadoop/thirdparty/ > org/apache/hadoop/thirdparty/com/ > org/apache/hadoop/thirdparty/com/google/ > org/apache/hadoop/thirdparty/com/google/common/ > org/apache/hadoop/thirdparty/com/google/common/annotations/ > org/apache/hadoop/thirdparty/com/google/common/annotations/Beta.class > > org/apache/hadoop/thirdparty/com/google/common/annotations/GwtCompatible.class > > org/apache/hadoop/thirdparty/com/google/common/annotations/GwtIncompatible.class > > org/apache/hadoop/thirdparty/com/google/common/annotations/VisibleForTesting.class > org/apache/hadoop/thirdparty/com/google/common/base/ > org/apache/hadoop/thirdparty/com/google/common/base/Absent.class > > org/apache/hadoop/thirdparty/com/google/common/base/AbstractIterator$1.class > > org/apache/hadoop/thirdparty/com/google/common/base/AbstractIterator$State.class > org/apache/hadoop/thirdparty/com/google/common/base/AbstractIterator.class > org/apache/hadoop/thirdparty/com/google/common/base/Ascii.class > org/apache/hadoop/thirdparty/com/google/common/base/CaseFormat$1.class > org/apache/hadoop/thirdparty/com/google/common/base/CaseFormat$2.class > org/apache/hadoop/thirdparty/com/google/common/base/CaseFormat$3.class > org/apache/hadoop/thirdparty/com/google/common/base/CaseFormat$4.class > > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (HBASE-26546) hbase-shaded-client missing required thirdparty classes under hadoop 3.3.1
Bryan Beaudreault created HBASE-26546: - Summary: hbase-shaded-client missing required thirdparty classes under hadoop 3.3.1 Key: HBASE-26546 URL: https://issues.apache.org/jira/browse/HBASE-26546 Project: HBase Issue Type: Bug Reporter: Bryan Beaudreault In HBASE-25792, the shaded thirdparty libraries from hadoop were removed from the hbase-shaded-client fat jar to satisfy invariant checks. Unfortunately this causes users of hbase-shaded-client to fail, because required classes are not available at runtime. The specific failure I'm seeing is when trying to call new Configuration(), which results in: {code:java} Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/thirdparty/com/google/common/base/Preconditions at org.apache.hadoop.conf.Configuration$DeprecationDelta.(Configuration.java:430) at org.apache.hadoop.conf.Configuration$DeprecationDelta.(Configuration.java:443) at org.apache.hadoop.conf.Configuration.(Configuration.java:525){code} If you take a look at the hbase-shaded-client fat jar, it contains the org.apache.hadoop.conf.Configuration class as you'd expect. If you decompile that class (or look at the 3.3.1 source), you'll see that there is an import for org.apache.hadoop.thirdparty.com.google.common.base.Preconditions but the fat jar does not provide it. One way for clients to get around this is to add an explicit dependency on hadoop-shaded-guava, but this is problematic for a few reasons: - it's best practice to use maven-dependency-plugin to disallow declared, unused dependencies (which this would be) - it requires users to continually keep the version of hadoop-shaded-guava up-to-date over time. - it only covers guava, but there is also protobuf and potentially other shaded libraries in the future. I think we should remove the exclusion of {{org/apache/hadoop/thirdparty/**/*}} from the shading config and instead add that pattern to the allowlist so that hbase-shaded-client is all clients need to get started with hbase. 
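The runtime failure mode in HBASE-26546 — compiled-in references to relocated `org.apache.hadoop.thirdparty.*` classes that the fat jar does not provide — can be detected before the first `new Configuration()` call. This is an illustrative sketch, not part of HBase; the class name checked is the one from the stack trace above.

```java
// Illustrative preflight check: verify the relocated Hadoop thirdparty
// classes are on the classpath before touching Hadoop/HBase client code,
// turning a late NoClassDefFoundError into an early, explicit message.
public class ShadedClasspathCheck {

    static boolean isPresent(String className) {
        try {
            // initialize=false: just resolve the class, don't run static init.
            Class.forName(className, false, ShadedClasspathCheck.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String cls = "org.apache.hadoop.thirdparty.com.google.common.base.Preconditions";
        if (!isPresent(cls)) {
            System.out.println("Missing " + cls
                + " -- add a dependency that bundles the relocated thirdparty"
                + " classes, or use a shaded client jar that includes them.");
        }
    }
}
```

The proposed fix in the issue — bundling `org/apache/hadoop/thirdparty/**/*` in hbase-shaded-client instead of excluding it — would make this check pass with no extra client-side dependencies.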
[GitHub] [hbase] ndimiduk commented on a change in pull request #3906: HBASE-26472 Adhere to semantic conventions regarding table data operations
ndimiduk commented on a change in pull request #3906: URL: https://github.com/apache/hbase/pull/3906#discussion_r764228006 ## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/trace/TableOperationSpanBuilder.java ## @@ -0,0 +1,119 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.hbase.client.trace; + +import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_NAME; +import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_OPERATION; +import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.NAMESPACE_KEY; +import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.TABLE_KEY; +import io.opentelemetry.api.common.AttributeKey; +import io.opentelemetry.api.trace.Span; +import io.opentelemetry.api.trace.SpanBuilder; +import io.opentelemetry.api.trace.SpanKind; +import java.util.HashMap; +import java.util.Map; +import java.util.function.Supplier; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.Append; +import org.apache.hadoop.hbase.client.CheckAndMutate; +import org.apache.hadoop.hbase.client.Delete; +import org.apache.hadoop.hbase.client.Get; +import org.apache.hadoop.hbase.client.Increment; +import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.RegionCoprocessorServiceExec; +import org.apache.hadoop.hbase.client.Row; +import org.apache.hadoop.hbase.client.RowMutations; +import org.apache.hadoop.hbase.client.Scan; +import org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.Operation; +import org.apache.hadoop.hbase.trace.TraceUtil; +import org.apache.yetus.audience.InterfaceAudience; + +/** + * Construct {@link io.opentelemetry.api.trace.Span} instances originating from + * "table operations" -- the verbs in our public API that interact with data in tables. + */ +@InterfaceAudience.Private +public class TableOperationSpanBuilder implements Supplier { + + // n.b. The results of this class are tested implicitly by way of the likes of + // `TestAsyncTableTracing` and friends. 
+ + private static final String unknown = "UNKNOWN"; + + private TableName tableName; + private final Map, Object> attributes = new HashMap<>(); + + @Override public Span get() { +return build(); + } + + public TableOperationSpanBuilder setOperation(final Scan scan) { +return setOperation(valueFrom(scan)); + } + + public TableOperationSpanBuilder setOperation(final Row row) { +return setOperation(valueFrom(row)); + } + + public TableOperationSpanBuilder setOperation(final Operation operation) { +attributes.put(DB_OPERATION, operation.name()); +return this; + } + + public TableOperationSpanBuilder setTableName(final TableName tableName) { +this.tableName = tableName; +attributes.put(NAMESPACE_KEY, tableName.getNamespaceAsString()); +attributes.put(DB_NAME, tableName.getNamespaceAsString()); +attributes.put(TABLE_KEY, tableName.getNameAsString()); +return this; + } + + @SuppressWarnings("unchecked") + public Span build() { +final String name = attributes.getOrDefault(DB_OPERATION, unknown) Review comment: > And what you said about the scanner.next, I was not talking about client side scan, I was talking about the server side RegionScanner... Okay, understood. We can discuss that separately. > And we will always have the rpc method to be traced, so even if we do nothing in the scan method, we could still see a lot of rpc spans when scanning. This is true. It sounds like we need to open an operation-level span, just to encapsulate all the RPC spans. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-26541) hbase-protocol-shaded not buildable on M1 MacOSX
[ https://issues.apache.org/jira/browse/HBASE-26541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Kyle Purtell updated HBASE-26541: Description: I needed to make some changes to get branch-2's hbase-protocol-shaded building on an M1 mac. (was: I needed to make some changes to get branch-2's hbase-protocol-shaded building on an M1 mac. - Upgrade internal.protobuf.version to 3.17.3. - any.proto include not found. Get it from github.com/google/protobuf and add it. - Warbucks rule fails because of any.proto. Disable warbucks just in hbase-protobuf-shaded. ) > hbase-protocol-shaded not buildable on M1 MacOSX > > > Key: HBASE-26541 > URL: https://issues.apache.org/jira/browse/HBASE-26541 > Project: HBase > Issue Type: Bug >Affects Versions: 2.5.0, 2.4.8 > Environment: Apache Maven 3.8.3 > (ff8e977a158738155dc465c6a97ffaf31982d739) > Java version: 1.8.0_312, vendor: Azul Systems, Inc., runtime: > /Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home/jre > OS name: "mac os x", version: "12.0.1", arch: "aarch64", family: "mac" >Reporter: Andrew Kyle Purtell >Assignee: Andrew Kyle Purtell >Priority: Major > Fix For: 2.5.0, 3.0.0-alpha-2 > > > I needed to make some changes to get branch-2's hbase-protocol-shaded > building on an M1 mac. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [hbase] ndimiduk commented on a change in pull request #3906: HBASE-26472 Adhere to semantic conventions regarding table data operations
ndimiduk commented on a change in pull request #3906: URL: https://github.com/apache/hbase/pull/3906#discussion_r764225344

## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/trace/TableOperationSpanBuilder.java

## @@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client.trace;
+
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_NAME;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.DB_OPERATION;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.NAMESPACE_KEY;
+import static org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.TABLE_KEY;
+import io.opentelemetry.api.common.AttributeKey;
+import io.opentelemetry.api.trace.Span;
+import io.opentelemetry.api.trace.SpanBuilder;
+import io.opentelemetry.api.trace.SpanKind;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.function.Supplier;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Append;
+import org.apache.hadoop.hbase.client.CheckAndMutate;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Increment;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.RegionCoprocessorServiceExec;
+import org.apache.hadoop.hbase.client.Row;
+import org.apache.hadoop.hbase.client.RowMutations;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.trace.HBaseSemanticAttributes.Operation;
+import org.apache.hadoop.hbase.trace.TraceUtil;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Construct {@link io.opentelemetry.api.trace.Span} instances originating from
+ * "table operations" -- the verbs in our public API that interact with data in tables.
+ */
+@InterfaceAudience.Private
+public class TableOperationSpanBuilder implements Supplier<Span> {
+
+  // n.b. The results of this class are tested implicitly by way of the likes of
+  // `TestAsyncTableTracing` and friends.
+
+  private static final String unknown = "UNKNOWN";
+
+  private TableName tableName;
+  private final Map<AttributeKey<?>, Object> attributes = new HashMap<>();
+
+  @Override
+  public Span get() {
+    return build();
+  }
+
+  public TableOperationSpanBuilder setOperation(final Scan scan) {
+    return setOperation(valueFrom(scan));
+  }
+
+  public TableOperationSpanBuilder setOperation(final Row row) {
+    return setOperation(valueFrom(row));
+  }
+
+  public TableOperationSpanBuilder setOperation(final Operation operation) {
+    attributes.put(DB_OPERATION, operation.name());
+    return this;
+  }
+
+  public TableOperationSpanBuilder setTableName(final TableName tableName) {
+    this.tableName = tableName;
+    attributes.put(NAMESPACE_KEY, tableName.getNamespaceAsString());
+    attributes.put(DB_NAME, tableName.getNamespaceAsString());
+    attributes.put(TABLE_KEY, tableName.getNameAsString());
+    return this;
+  }
+
+  @SuppressWarnings("unchecked")
+  public Span build() {
+    final String name = attributes.getOrDefault(DB_OPERATION, unknown)
+      + " "
+      + (tableName != null ? tableName.getNameWithNamespaceInclAsString() : unknown);
+    final SpanBuilder builder = TraceUtil.getGlobalTracer()
+      .spanBuilder(name)
+      // TODO: what about clients embedded in Master/RegionServer/Gateways/?
+      .setSpanKind(SpanKind.CLIENT);
+    attributes.forEach((k, v) -> builder.setAttribute((AttributeKey) k, v));
+    return builder.startSpan();
+  }
+
+  private static Operation valueFrom(final Scan scan) {
+    if (scan == null) { return null; }
+    return Operation.SCAN;
+  }
+
+  private static Operation valueFrom(final Row row) {
+    if (row == null) { return null; }
+    if (row instanceof Append) { return Operation.APPEND; }
+    if (row instanceof CheckAndMutate) { return Operation.CHECK_AND_MUTATE; }
+    if (row instanceof Delete) { return Operation.DELETE; }
+    if (row instanceof Get) { return Operation.GET; }
+    if (row instanceof Increment) { return Operation.INCREMENT; }
+    if (row instanceof Put) { return Operation.PUT; }
+    if (row instanceof
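The builder above combines instanceof-based dispatch (mapping a `Row` subtype to an `Operation`) with a span name of the form `OPERATION namespace:table`, substituting `UNKNOWN` for either piece that was never set. A minimal, self-contained sketch of that naming rule (plain Java, no OpenTelemetry dependency; `SpanNameSketch`, `spanName`, and the small enum are illustrative stand-ins, not HBase API):

```java
public class SpanNameSketch {
  // Stand-in for HBaseSemanticAttributes.Operation (illustrative only).
  enum Operation { GET, PUT, DELETE, SCAN }

  private static final String UNKNOWN = "UNKNOWN";

  // Mirrors the shape of TableOperationSpanBuilder#build's span name:
  // "<operation> <namespace:table>", with UNKNOWN as the fallback for
  // an unset operation or table.
  static String spanName(Operation op, String tableWithNamespace) {
    final String opName = op != null ? op.name() : UNKNOWN;
    final String table = tableWithNamespace != null ? tableWithNamespace : UNKNOWN;
    return opName + " " + table;
  }

  public static void main(String[] args) {
    System.out.println(spanName(Operation.GET, "default:users")); // GET default:users
    System.out.println(spanName(null, null));                     // UNKNOWN UNKNOWN
  }
}
```

Keeping the fallback in one place means a span is still emitted (and still searchable by name) even when a caller forgets to set the table or operation.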
[GitHub] [hbase] ndimiduk commented on a change in pull request #3906: HBASE-26472 Adhere to semantic conventions regarding table data operations
ndimiduk commented on a change in pull request #3906: URL: https://github.com/apache/hbase/pull/3906#discussion_r764221474

## File path: hbase-common/src/main/java/org/apache/hadoop/hbase/trace/HBaseSemanticAttributes.java

## @@ -28,7 +28,9 @@
  */
 @InterfaceAudience.Private
 public final class HBaseSemanticAttributes {
+  public static final AttributeKey<String> DB_NAME = SemanticAttributes.DB_NAME;
   public static final AttributeKey<String> NAMESPACE_KEY = SemanticAttributes.DB_HBASE_NAMESPACE;
+  public static final AttributeKey<String> DB_OPERATION = SemanticAttributes.DB_OPERATION;
   public static final AttributeKey<String> TABLE_KEY = AttributeKey.stringKey("db.hbase.table");

Review comment: I guess we can drop the `_KEY` part here, as none of the constants we import from `SemanticAttributes` use this naming convention.
[jira] [Commented] (HBASE-24870) Ignore TestAsyncTableRSCrashPublish
[ https://issues.apache.org/jira/browse/HBASE-24870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454766#comment-17454766 ]

Hudson commented on HBASE-24870:

Results for branch branch-2.3 [build #318 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/318/]: (/) *{color:green}+1 overall{color}*

details (if available):
(/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/318/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/318/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/318/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/318/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Ignore TestAsyncTableRSCrashPublish
> -----------------------------------
>
>                 Key: HBASE-24870
>                 URL: https://issues.apache.org/jira/browse/HBASE-24870
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Guanghao Zhang
>            Assignee: Guanghao Zhang
>            Priority: Major
>             Fix For: 2.2.6, 2.5.0, 2.3.8, 2.4.9
>
> [ERROR] Failures:
> [ERROR] TestAsyncTableRSCrashPublish.test:94 Waiting timed out after [60,000] msec
>
> I have hit this failure many times when running the full test suite, and other developers hit it too when voting on release candidates. Let's ignore it for now and re-enable it once the parent issue is resolved.
-- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Comment Edited] (HBASE-26541) hbase-protocol-shaded not buildable on M1 MacOSX
[ https://issues.apache.org/jira/browse/HBASE-26541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454760#comment-17454760 ] Andrew Kyle Purtell edited comment on HBASE-26541 at 12/7/21, 5:20 PM: --- For branch-2, we also have this problem. {noformat} [ERROR] Failed to execute goal org.xolstice.maven.plugins:protobuf-maven-plugin:0.6.1:compile (compile-protoc) on project hbase-protocol: Unable to resolve artifact: Missing: [ERROR] -- [ERROR] 1) com.google.protobuf:protoc:exe:osx-aarch_64:2.5.0 [ERROR] [ERROR] Try downloading the file manually from the project website. [ERROR] [ERROR] Then, install it using the command: [ERROR] mvn install:install-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file [ERROR] [ERROR] Alternatively, if you host your own repository you can deploy the file there: [ERROR] mvn deploy:deploy-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id] [ERROR] [ERROR] Path to dependency: [ERROR] 1) org.apache.hbase:hbase-protocol:jar:2.5.0-SNAPSHOT [ERROR] 2) com.google.protobuf:protoc:exe:osx-aarch_64:2.5.0 [ERROR] [ERROR] -- [ERROR] 1 required artifact is missing. [ERROR] [ERROR] for artifact: [ERROR] org.apache.hbase:hbase-protocol:jar:2.5.0-SNAPSHOT [ERROR] [ERROR] from the specified remote repositories: [ERROR] apache.snapshots (https://repository.apache.org/snapshots, releases=false, snapshots=true), [ERROR] central (https://repo.maven.apache.org/maven2, releases=true, snapshots=false) {noformat} The solution is https://gist.github.com/liusheng/64aee1b27de037f8b9ccf1873b82c413 i.e. 
{noformat} $ curl -sSL https://github.com/protocolbuffers/protobuf/releases/download/v2.5.0/protobuf-2.5.0.tar.gz | tar zx - $ cd protobuf-2.5.0 $ curl -L -O https://gist.githubusercontent.com/liusheng/64aee1b27de037f8b9ccf1873b82c413/raw/118c2fce733a9a62a03281753572a45b6efb8639/protobuf-2.5.0-arm64.patch $ patch -p1 < protobuf-2.5.0-arm64.patch $ ./configure --disable-shared $ make $ mvn install:install-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=src/protoc {noformat} Only people who want to compile HBase from source need to follow those directions. I do not propose any HBase source level changes for this. A documentation update that adds this build instruction errata would be helpful to users, though. was (Author: apurtell): For branch-2, we also have this problem. {noformat} [ERROR] Failed to execute goal org.xolstice.maven.plugins:protobuf-maven-plugin:0.6.1:compile (compile-protoc) on project hbase-protocol: Unable to resolve artifact: Missing: [ERROR] -- [ERROR] 1) com.google.protobuf:protoc:exe:osx-aarch_64:2.5.0 [ERROR] [ERROR] Try downloading the file manually from the project website. [ERROR] [ERROR] Then, install it using the command: [ERROR] mvn install:install-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file [ERROR] [ERROR] Alternatively, if you host your own repository you can deploy the file there: [ERROR] mvn deploy:deploy-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id] [ERROR] [ERROR] Path to dependency: [ERROR] 1) org.apache.hbase:hbase-protocol:jar:2.5.0-SNAPSHOT [ERROR] 2) com.google.protobuf:protoc:exe:osx-aarch_64:2.5.0 [ERROR] [ERROR] -- [ERROR] 1 required artifact is missing. 
[ERROR] [ERROR] for artifact: [ERROR] org.apache.hbase:hbase-protocol:jar:2.5.0-SNAPSHOT [ERROR] [ERROR] from the specified remote repositories: [ERROR] apache.snapshots (https://repository.apache.org/snapshots, releases=false, snapshots=true), [ERROR] central (https://repo.maven.apache.org/maven2, releases=true, snapshots=false) {noformat} The solution is https://gist.github.com/liusheng/64aee1b27de037f8b9ccf1873b82c413 i.e. {noformat} $ curl -sSL https://github.com/protocolbuffers/protobuf/releases/download/v2.5.0/protobuf-2.5.0.tar.gz | tar zx - $ cd protobuf-2.5.0 $ curl -L -O https://gist.githubusercontent.com/liusheng/64aee1b27de037f8b9ccf1873b82c413/raw/118c2fce733a9a62a03281753572a45b6efb8639/protobuf-2.5.0-arm64.patch $ patch -p1 < protobuf-2.5.0-arm64.patch $ ./configure --disable-shared $ make $ mvn install:install-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=src/protoc {noformat} Only people who want to compile HBase from source need to follow those directions. >
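The `mvn install:install-file` step above works because Maven resolves the missing `com.google.protobuf:protoc:exe:osx-aarch_64:2.5.0` coordinate from the default local-repository layout: groupId dots become directories, followed by the artifactId and version directories, then a file named `artifactId-version-classifier.packaging`. A small sketch of that layout rule (the `localRepoPath` helper below is illustrative, not a Maven API):

```java
public class RepoLayoutSketch {
  // Illustrative helper: computes the path (relative to ~/.m2/repository)
  // where install:install-file places an artifact, following Maven's
  // default repository layout.
  static String localRepoPath(String groupId, String artifactId, String version,
      String classifier, String packaging) {
    String dir = groupId.replace('.', '/') + "/" + artifactId + "/" + version;
    String file = artifactId + "-" + version
        + (classifier != null ? "-" + classifier : "") + "." + packaging;
    return dir + "/" + file;
  }

  public static void main(String[] args) {
    System.out.println(localRepoPath("com.google.protobuf", "protoc", "2.5.0",
        "osx-aarch_64", "exe"));
    // com/google/protobuf/protoc/2.5.0/protoc-2.5.0-osx-aarch_64.exe
  }
}
```

This is why installing the locally built `src/protoc` under that classifier satisfies the protobuf-maven-plugin without any change to the HBase POMs.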
[jira] [Comment Edited] (HBASE-26541) hbase-protocol-shaded not buildable on M1 MacOSX
[ https://issues.apache.org/jira/browse/HBASE-26541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454760#comment-17454760 ] Andrew Kyle Purtell edited comment on HBASE-26541 at 12/7/21, 5:18 PM: --- For branch-2, we also have this problem. {noformat} [ERROR] Failed to execute goal org.xolstice.maven.plugins:protobuf-maven-plugin:0.6.1:compile (compile-protoc) on project hbase-protocol: Unable to resolve artifact: Missing: [ERROR] -- [ERROR] 1) com.google.protobuf:protoc:exe:osx-aarch_64:2.5.0 [ERROR] [ERROR] Try downloading the file manually from the project website. [ERROR] [ERROR] Then, install it using the command: [ERROR] mvn install:install-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file [ERROR] [ERROR] Alternatively, if you host your own repository you can deploy the file there: [ERROR] mvn deploy:deploy-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id] [ERROR] [ERROR] Path to dependency: [ERROR] 1) org.apache.hbase:hbase-protocol:jar:2.5.0-SNAPSHOT [ERROR] 2) com.google.protobuf:protoc:exe:osx-aarch_64:2.5.0 [ERROR] [ERROR] -- [ERROR] 1 required artifact is missing. [ERROR] [ERROR] for artifact: [ERROR] org.apache.hbase:hbase-protocol:jar:2.5.0-SNAPSHOT [ERROR] [ERROR] from the specified remote repositories: [ERROR] apache.snapshots (https://repository.apache.org/snapshots, releases=false, snapshots=true), [ERROR] central (https://repo.maven.apache.org/maven2, releases=true, snapshots=false) {noformat} The solution is https://gist.github.com/liusheng/64aee1b27de037f8b9ccf1873b82c413 i.e. 
{noformat} $ curl -sSL https://github.com/protocolbuffers/protobuf/releases/download/v2.5.0/protobuf-2.5.0.tar.gz | tar zx - $ cd protobuf-2.5.0 $ curl -L -O https://gist.githubusercontent.com/liusheng/64aee1b27de037f8b9ccf1873b82c413/raw/118c2fce733a9a62a03281753572a45b6efb8639/protobuf-2.5.0-arm64.patch $ patch -p1 < protobuf-2.5.0-arm64.patch $ ./configure --disable-shared $ make $ mvn install:install-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=src/protoc {noformat} Only people who want to compile HBase from source need to follow those directions. was (Author: apurtell): For branch-2, we also have this problem. {noformat} [ERROR] Failed to execute goal org.xolstice.maven.plugins:protobuf-maven-plugin:0.6.1:compile (compile-protoc) on project hbase-protocol: Unable to resolve artifact: Missing: [ERROR] -- [ERROR] 1) com.google.protobuf:protoc:exe:osx-aarch_64:2.5.0 [ERROR] [ERROR] Try downloading the file manually from the project website. [ERROR] [ERROR] Then, install it using the command: [ERROR] mvn install:install-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file [ERROR] [ERROR] Alternatively, if you host your own repository you can deploy the file there: [ERROR] mvn deploy:deploy-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id] [ERROR] [ERROR] Path to dependency: [ERROR] 1) org.apache.hbase:hbase-protocol:jar:2.5.0-SNAPSHOT [ERROR] 2) com.google.protobuf:protoc:exe:osx-aarch_64:2.5.0 [ERROR] [ERROR] -- [ERROR] 1 required artifact is missing. 
[ERROR] [ERROR] for artifact: [ERROR] org.apache.hbase:hbase-protocol:jar:2.5.0-SNAPSHOT [ERROR] [ERROR] from the specified remote repositories: [ERROR] apache.snapshots (https://repository.apache.org/snapshots, releases=false, snapshots=true), [ERROR] central (https://repo.maven.apache.org/maven2, releases=true, snapshots=false) {noformat} The solution is https://gist.github.com/liusheng/64aee1b27de037f8b9ccf1873b82c413 i.e. {noformat} $ curl -sSL https://github.com/protocolbuffers/protobuf/releases/download/v2.5.0/protobuf-2.5.0.tar.gz | tar zx - $ cd protobuf-2.5.0 $ curl -L -O https://gist.githubusercontent.com/liusheng/64aee1b27de037f8b9ccf1873b82c413/raw/118c2fce733a9a62a03281753572a45b6efb8639/protobuf-2.5.0-arm64.patch $ patch -p1 < protobuf-2.5.0-arm64.patch $./configure --disable-shared $ make $ mvn install:install-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=src/protoc {noformat} Only people who want to compile HBase from source need to follow those directions. > hbase-protocol-shaded not buildable on M1 MacOSX > > > Key: HBASE-26541 > URL:
[jira] [Comment Edited] (HBASE-26541) hbase-protocol-shaded not buildable on M1 MacOSX
[ https://issues.apache.org/jira/browse/HBASE-26541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454760#comment-17454760 ] Andrew Kyle Purtell edited comment on HBASE-26541 at 12/7/21, 5:18 PM: --- For branch-2, we also have this problem. {noformat} [ERROR] Failed to execute goal org.xolstice.maven.plugins:protobuf-maven-plugin:0.6.1:compile (compile-protoc) on project hbase-protocol: Unable to resolve artifact: Missing: [ERROR] -- [ERROR] 1) com.google.protobuf:protoc:exe:osx-aarch_64:2.5.0 [ERROR] [ERROR] Try downloading the file manually from the project website. [ERROR] [ERROR] Then, install it using the command: [ERROR] mvn install:install-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file [ERROR] [ERROR] Alternatively, if you host your own repository you can deploy the file there: [ERROR] mvn deploy:deploy-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id] [ERROR] [ERROR] Path to dependency: [ERROR] 1) org.apache.hbase:hbase-protocol:jar:2.5.0-SNAPSHOT [ERROR] 2) com.google.protobuf:protoc:exe:osx-aarch_64:2.5.0 [ERROR] [ERROR] -- [ERROR] 1 required artifact is missing. [ERROR] [ERROR] for artifact: [ERROR] org.apache.hbase:hbase-protocol:jar:2.5.0-SNAPSHOT [ERROR] [ERROR] from the specified remote repositories: [ERROR] apache.snapshots (https://repository.apache.org/snapshots, releases=false, snapshots=true), [ERROR] central (https://repo.maven.apache.org/maven2, releases=true, snapshots=false) {noformat} The solution is https://gist.github.com/liusheng/64aee1b27de037f8b9ccf1873b82c413 i.e. 
{noformat} $ curl -sSL https://github.com/protocolbuffers/protobuf/releases/download/v2.5.0/protobuf-2.5.0.tar.gz | tar zx - $ cd protobuf-2.5.0 $ curl -L -O https://gist.githubusercontent.com/liusheng/64aee1b27de037f8b9ccf1873b82c413/raw/118c2fce733a9a62a03281753572a45b6efb8639/protobuf-2.5.0-arm64.patch $ patch -p1 < protobuf-2.5.0-arm64.patch $./configure --disable-shared $ make $ mvn install:install-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=src/protoc {noformat} Only people who want to compile HBase from source need to follow those directions. was (Author: apurtell): For branch-2, we also have this problem. {noformat} [ERROR] Failed to execute goal org.xolstice.maven.plugins:protobuf-maven-plugin:0.6.1:compile (compile-protoc) on project hbase-protocol: Unable to resolve artifact: Missing: [ERROR] -- [ERROR] 1) com.google.protobuf:protoc:exe:osx-aarch_64:2.5.0 [ERROR] [ERROR] Try downloading the file manually from the project website. [ERROR] [ERROR] Then, install it using the command: [ERROR] mvn install:install-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file [ERROR] [ERROR] Alternatively, if you host your own repository you can deploy the file there: [ERROR] mvn deploy:deploy-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id] [ERROR] [ERROR] Path to dependency: [ERROR] 1) org.apache.hbase:hbase-protocol:jar:2.5.0-SNAPSHOT [ERROR] 2) com.google.protobuf:protoc:exe:osx-aarch_64:2.5.0 [ERROR] [ERROR] -- [ERROR] 1 required artifact is missing. 
[ERROR] [ERROR] for artifact: [ERROR] org.apache.hbase:hbase-protocol:jar:2.5.0-SNAPSHOT [ERROR] [ERROR] from the specified remote repositories: [ERROR] apache.snapshots (https://repository.apache.org/snapshots, releases=false, snapshots=true), [ERROR] central (https://repo.maven.apache.org/maven2, releases=true, snapshots=false) {noformat} The solution is https://gist.github.com/liusheng/64aee1b27de037f8b9ccf1873b82c413 . Only people who want to compile HBase from source need to follow those directions. > hbase-protocol-shaded not buildable on M1 MacOSX > > > Key: HBASE-26541 > URL: https://issues.apache.org/jira/browse/HBASE-26541 > Project: HBase > Issue Type: Bug >Affects Versions: 2.5.0, 2.4.8 > Environment: Apache Maven 3.8.3 > (ff8e977a158738155dc465c6a97ffaf31982d739) > Java version: 1.8.0_312, vendor: Azul Systems, Inc., runtime: > /Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home/jre > OS name: "mac os x", version: "12.0.1", arch: "aarch64", family: "mac" >Reporter: Andrew Kyle Purtell >Assignee: Andrew Kyle Purtell >Priority: Major > Fix For:
[GitHub] [hbase] Apache-HBase commented on pull request #3922: HBASE-26541 hbase-protocol-shaded not buildable on M1 MacOSX
Apache-HBase commented on pull request #3922: URL: https://github.com/apache/hbase/pull/3922#issuecomment-988119946 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 35s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 6m 35s | master passed | | +1 :green_heart: | compile | 1m 55s | master passed | | +1 :green_heart: | shadedjars | 12m 4s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 44s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 20s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 6m 6s | the patch passed | | +1 :green_heart: | compile | 1m 54s | the patch passed | | +1 :green_heart: | javac | 1m 54s | the patch passed | | +1 :green_heart: | shadedjars | 11m 48s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 43s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 19s | hbase-protocol-shaded in the patch passed. | | +1 :green_heart: | unit | 1m 52s | hbase-examples in the patch passed. 
| | | | 47m 32s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3922/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3922 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux add5f81a90c1 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / ca3ba494cb | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3922/2/testReport/ | | Max. process+thread count | 2086 (vs. ulimit of 3) | | modules | C: hbase-protocol-shaded hbase-examples U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3922/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-26541) hbase-protocol-shaded not buildable on M1 MacOSX
[ https://issues.apache.org/jira/browse/HBASE-26541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454760#comment-17454760 ] Andrew Kyle Purtell commented on HBASE-26541: - For branch-2, we also have this problem. {noformat} [ERROR] Failed to execute goal org.xolstice.maven.plugins:protobuf-maven-plugin:0.6.1:compile (compile-protoc) on project hbase-protocol: Unable to resolve artifact: Missing: [ERROR] -- [ERROR] 1) com.google.protobuf:protoc:exe:osx-aarch_64:2.5.0 [ERROR] [ERROR] Try downloading the file manually from the project website. [ERROR] [ERROR] Then, install it using the command: [ERROR] mvn install:install-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file [ERROR] [ERROR] Alternatively, if you host your own repository you can deploy the file there: [ERROR] mvn deploy:deploy-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id] [ERROR] [ERROR] Path to dependency: [ERROR] 1) org.apache.hbase:hbase-protocol:jar:2.5.0-SNAPSHOT [ERROR] 2) com.google.protobuf:protoc:exe:osx-aarch_64:2.5.0 [ERROR] [ERROR] -- [ERROR] 1 required artifact is missing. 
[ERROR] [ERROR] for artifact: [ERROR] org.apache.hbase:hbase-protocol:jar:2.5.0-SNAPSHOT [ERROR] [ERROR] from the specified remote repositories: [ERROR] apache.snapshots (https://repository.apache.org/snapshots, releases=false, snapshots=true), [ERROR] central (https://repo.maven.apache.org/maven2, releases=true, snapshots=false) {noformat} The solution is https://gist.github.com/liusheng/64aee1b27de037f8b9ccf1873b82c413 > hbase-protocol-shaded not buildable on M1 MacOSX > > > Key: HBASE-26541 > URL: https://issues.apache.org/jira/browse/HBASE-26541 > Project: HBase > Issue Type: Bug >Affects Versions: 2.5.0, 2.4.8 > Environment: Apache Maven 3.8.3 > (ff8e977a158738155dc465c6a97ffaf31982d739) > Java version: 1.8.0_312, vendor: Azul Systems, Inc., runtime: > /Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home/jre > OS name: "mac os x", version: "12.0.1", arch: "aarch64", family: "mac" >Reporter: Andrew Kyle Purtell >Assignee: Andrew Kyle Purtell >Priority: Major > Fix For: 2.5.0, 3.0.0-alpha-2 > > > I needed to make some changes to get branch-2's hbase-protocol-shaded > building on an M1 mac. > - Upgrade internal.protobuf.version to 3.17.3. > - any.proto include not found. Get it from github.com/google/protobuf and add > it. > - Warbucks rule fails because of any.proto. Disable warbucks just in > hbase-protobuf-shaded. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Comment Edited] (HBASE-26541) hbase-protocol-shaded not buildable on M1 MacOSX
[ https://issues.apache.org/jira/browse/HBASE-26541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454760#comment-17454760 ] Andrew Kyle Purtell edited comment on HBASE-26541 at 12/7/21, 5:13 PM: --- For branch-2, we also have this problem. {noformat} [ERROR] Failed to execute goal org.xolstice.maven.plugins:protobuf-maven-plugin:0.6.1:compile (compile-protoc) on project hbase-protocol: Unable to resolve artifact: Missing: [ERROR] -- [ERROR] 1) com.google.protobuf:protoc:exe:osx-aarch_64:2.5.0 [ERROR] [ERROR] Try downloading the file manually from the project website. [ERROR] [ERROR] Then, install it using the command: [ERROR] mvn install:install-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file [ERROR] [ERROR] Alternatively, if you host your own repository you can deploy the file there: [ERROR] mvn deploy:deploy-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id] [ERROR] [ERROR] Path to dependency: [ERROR] 1) org.apache.hbase:hbase-protocol:jar:2.5.0-SNAPSHOT [ERROR] 2) com.google.protobuf:protoc:exe:osx-aarch_64:2.5.0 [ERROR] [ERROR] -- [ERROR] 1 required artifact is missing. [ERROR] [ERROR] for artifact: [ERROR] org.apache.hbase:hbase-protocol:jar:2.5.0-SNAPSHOT [ERROR] [ERROR] from the specified remote repositories: [ERROR] apache.snapshots (https://repository.apache.org/snapshots, releases=false, snapshots=true), [ERROR] central (https://repo.maven.apache.org/maven2, releases=true, snapshots=false) {noformat} The solution is https://gist.github.com/liusheng/64aee1b27de037f8b9ccf1873b82c413 . Only people who want to compile HBase from source need to follow those directions. was (Author: apurtell): For branch-2, we also have this problem. 
{noformat} [ERROR] Failed to execute goal org.xolstice.maven.plugins:protobuf-maven-plugin:0.6.1:compile (compile-protoc) on project hbase-protocol: Unable to resolve artifact: Missing: [ERROR] -- [ERROR] 1) com.google.protobuf:protoc:exe:osx-aarch_64:2.5.0 [ERROR] [ERROR] Try downloading the file manually from the project website. [ERROR] [ERROR] Then, install it using the command: [ERROR] mvn install:install-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file [ERROR] [ERROR] Alternatively, if you host your own repository you can deploy the file there: [ERROR] mvn deploy:deploy-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=2.5.0 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id] [ERROR] [ERROR] Path to dependency: [ERROR] 1) org.apache.hbase:hbase-protocol:jar:2.5.0-SNAPSHOT [ERROR] 2) com.google.protobuf:protoc:exe:osx-aarch_64:2.5.0 [ERROR] [ERROR] -- [ERROR] 1 required artifact is missing. 
[ERROR] [ERROR] for artifact: [ERROR] org.apache.hbase:hbase-protocol:jar:2.5.0-SNAPSHOT [ERROR] [ERROR] from the specified remote repositories: [ERROR] apache.snapshots (https://repository.apache.org/snapshots, releases=false, snapshots=true), [ERROR] central (https://repo.maven.apache.org/maven2, releases=true, snapshots=false) {noformat} The solution is https://gist.github.com/liusheng/64aee1b27de037f8b9ccf1873b82c413 > hbase-protocol-shaded not buildable on M1 MacOSX > > > Key: HBASE-26541 > URL: https://issues.apache.org/jira/browse/HBASE-26541 > Project: HBase > Issue Type: Bug >Affects Versions: 2.5.0, 2.4.8 > Environment: Apache Maven 3.8.3 > (ff8e977a158738155dc465c6a97ffaf31982d739) > Java version: 1.8.0_312, vendor: Azul Systems, Inc., runtime: > /Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home/jre > OS name: "mac os x", version: "12.0.1", arch: "aarch64", family: "mac" >Reporter: Andrew Kyle Purtell >Assignee: Andrew Kyle Purtell >Priority: Major > Fix For: 2.5.0, 3.0.0-alpha-2 > > > I needed to make some changes to get branch-2's hbase-protocol-shaded > building on an M1 mac. > - Upgrade internal.protobuf.version to 3.17.3. > - any.proto include not found. Get it from github.com/google/protobuf and add > it. > - Warbucks rule fails because of any.proto. Disable warbucks just in > hbase-protobuf-shaded. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [hbase] Apache-HBase commented on pull request #3922: HBASE-26541 hbase-protocol-shaded not buildable on M1 MacOSX
Apache-HBase commented on pull request #3922: URL: https://github.com/apache/hbase/pull/3922#issuecomment-988116383
[GitHub] [hbase] Apache-HBase commented on pull request #3923: HBASE-26541 hbase-protocol-shaded not buildable on M1 MacOSX
Apache-HBase commented on pull request #3923: URL: https://github.com/apache/hbase/pull/3923#issuecomment-988112810

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:---:|---:|:---|:---|
| +0 :ok: | reexec | 0m 38s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ branch-2 Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 52s | branch-2 passed |
| +1 :green_heart: | compile | 1m 23s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 55s | the patch passed |
| +1 :green_heart: | compile | 1m 21s | the patch passed |
| +1 :green_heart: | javac | 1m 21s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 :green_heart: | hadoopcheck | 14m 33s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 12s | The patch does not generate ASF License warnings. |
| | | 34m 11s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3923/2/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3923 |
| Optional Tests | dupname asflicense javac hadoopcheck xml compile |
| uname | Linux 8508d1f8ee39 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / 140b5d8d26 |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| Max. process+thread count | 95 (vs. ulimit of 12500) |
| modules | C: hbase-protocol-shaded U: hbase-protocol-shaded |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3923/2/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Comment Edited] (HBASE-26541) hbase-protocol-shaded not buildable on M1 MacOSX
[ https://issues.apache.org/jira/browse/HBASE-26541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17454755#comment-17454755 ]

Andrew Kyle Purtell edited comment on HBASE-26541 at 12/7/21, 5:05 PM:
-----------------------------------------------------------------------

bq. I did a manual install and encountered the any.proto problem.

My guess is the binary 'protoc' in Maven central is a special version that handles any.proto differently. If you do something like this instead:

{noformat}
$ git clone https://github.com/google/protobuf
$ cd protobuf
$ git checkout v3.11.4
(Apply the ARM64 atomics patch from a later version)
$ sh autogen.sh
$ ./configure
...
$ make
$ sudo make install
$ mvn install:install-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=3.11.4 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/protoc
{noformat}

then when you do a Maven build afterward, the protobuf compile step does not work, and the error message from 'protoc' is "_[ERROR] PROTOC FAILED: google/protobuf/any.proto: File not found._"

> hbase-protocol-shaded not buildable on M1 MacOSX
> ------------------------------------------------
>
>                 Key: HBASE-26541
>                 URL: https://issues.apache.org/jira/browse/HBASE-26541
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.5.0, 2.4.8
>        Environment: Apache Maven 3.8.3 (ff8e977a158738155dc465c6a97ffaf31982d739)
> Java version: 1.8.0_312, vendor: Azul Systems, Inc., runtime: /Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home/jre
> OS name: "mac os x", version: "12.0.1", arch: "aarch64", family: "mac"
>           Reporter: Andrew Kyle Purtell
>           Assignee: Andrew Kyle Purtell
>           Priority: Major
>            Fix For: 2.5.0, 3.0.0-alpha-2
>
> I needed to make some changes to get branch-2's hbase-protocol-shaded building on an M1 mac.
> - Upgrade internal.protobuf.version to 3.17.3.
> - any.proto include not found. Get it from github.com/google/protobuf and add it.
> - Warbucks rule fails because of any.proto. Disable warbucks just in hbase-protobuf-shaded.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
[jira] [Commented] (HBASE-26541) hbase-protocol-shaded not buildable on M1 MacOSX
[ https://issues.apache.org/jira/browse/HBASE-26541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17454755#comment-17454755 ]

Andrew Kyle Purtell commented on HBASE-26541:
---------------------------------------------

bq. I did a manual install and encountered the any.proto problem.

My guess is the binary 'protoc' in Maven central is a special version that handles any.proto differently. If you do something like this instead:

{noformat}
$ git clone https://github.com/google/protobuf
$ cd protobuf
$ git checkout v3.11.4
(Apply the ARM64 atomics patch from a later version)
$ sh autogen.sh
$ ./configure
...
$ make
$ sudo make install
$ mvn install:install-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=3.11.4 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/protoc
{noformat}

it does not work and the error message is "_[ERROR] PROTOC FAILED: google/protobuf/any.proto: File not found._"
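The `install:install-file` step above can be collected into a small script. This is only a sketch of the workflow described in the comment, not project tooling: it assumes the protobuf v3.11.4 sources have already been built locally (with the ARM64 atomics patch applied), and the `PROTOC_BIN` path is a placeholder to adjust. The command is echoed rather than executed so it can be reviewed first.

```shell
#!/bin/sh
# Sketch: install a locally built protoc into the local Maven repository
# under the coordinates the protobuf-maven-plugin resolves.
set -e

PB_VERSION=3.11.4
CLASSIFIER=osx-aarch_64                       # Maven classifier for Apple Silicon
PROTOC_BIN=${PROTOC_BIN:-/usr/local/bin/protoc}  # hypothetical path to the built binary

# Assemble the install command using the same coordinates as the error
# message: com.google.protobuf:protoc:exe:osx-aarch_64:3.11.4.
INSTALL_CMD="mvn install:install-file \
  -DgroupId=com.google.protobuf -DartifactId=protoc \
  -Dversion=${PB_VERSION} -Dclassifier=${CLASSIFIER} \
  -Dpackaging=exe -Dfile=${PROTOC_BIN}"

# Echo for review; pipe to sh to actually run it.
echo "${INSTALL_CMD}"
```

Note the classifier must match what the plugin asks for (`osx-aarch_64`), otherwise Maven will keep reporting the artifact as missing.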
[GitHub] [hbase] Apache-HBase commented on pull request #3923: HBASE-26541 hbase-protocol-shaded not buildable on M1 MacOSX
Apache-HBase commented on pull request #3923: URL: https://github.com/apache/hbase/pull/3923#issuecomment-988108333

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:---:|---:|:---|:---|
| +0 :ok: | reexec | 0m 32s | Docker mode activated. |
| -0 :warning: | yetus | 0m 8s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2 Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 35s | branch-2 passed |
| +1 :green_heart: | compile | 0m 54s | branch-2 passed |
| +1 :green_heart: | shadedjars | 7m 21s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 18s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 19s | the patch passed |
| +1 :green_heart: | compile | 0m 54s | the patch passed |
| +1 :green_heart: | javac | 0m 54s | the patch passed |
| +1 :green_heart: | shadedjars | 7m 19s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 15s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 54s | hbase-protocol-shaded in the patch passed. |
| | | 28m 27s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3923/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3923 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 51f28e558c8e 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / 140b5d8d26 |
| Default Java | AdoptOpenJDK-11.0.10+9 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3923/2/testReport/ |
| Max. process+thread count | 100 (vs. ulimit of 12500) |
| modules | C: hbase-protocol-shaded U: hbase-protocol-shaded |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3923/2/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3923: HBASE-26541 hbase-protocol-shaded not buildable on M1 MacOSX
Apache-HBase commented on pull request #3923: URL: https://github.com/apache/hbase/pull/3923#issuecomment-988105916

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:---:|---:|:---|:---|
| +0 :ok: | reexec | 0m 33s | Docker mode activated. |
| -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2 Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 13s | branch-2 passed |
| +1 :green_heart: | compile | 0m 42s | branch-2 passed |
| +1 :green_heart: | shadedjars | 6m 37s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 15s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 35s | the patch passed |
| +1 :green_heart: | compile | 0m 42s | the patch passed |
| +1 :green_heart: | javac | 0m 42s | the patch passed |
| +1 :green_heart: | shadedjars | 6m 33s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 15s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 45s | hbase-protocol-shaded in the patch passed. |
| | | 25m 20s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3923/2/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3923 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 5d75b1a05ff5 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / 140b5d8d26 |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3923/2/testReport/ |
| Max. process+thread count | 79 (vs. ulimit of 12500) |
| modules | C: hbase-protocol-shaded U: hbase-protocol-shaded |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3923/2/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3925: HBASE-26027 The calling of HTable.batch blocked at AsyncRequestFutureImpl.waitUntilDone caused by ArrayStoreException
Apache-HBase commented on pull request #3925: URL: https://github.com/apache/hbase/pull/3925#issuecomment-988097582

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:---:|---:|:---|:---|
| +0 :ok: | reexec | 9m 0s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 2s | branch-2 passed |
| +1 :green_heart: | compile | 4m 51s | branch-2 passed |
| +1 :green_heart: | checkstyle | 1m 55s | branch-2 passed |
| +1 :green_heart: | spotbugs | 3m 38s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 2s | the patch passed |
| +1 :green_heart: | compile | 4m 41s | the patch passed |
| +1 :green_heart: | javac | 4m 41s | the patch passed |
| -0 :warning: | checkstyle | 1m 8s | hbase-server: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 14m 47s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
| +1 :green_heart: | spotbugs | 3m 55s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 28s | The patch does not generate ASF License warnings. |
| | | 62m 39s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3925/1/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3925 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 4dad4105ad42 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / 140b5d8d26 |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3925/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt |
| Max. process+thread count | 96 (vs. ulimit of 12500) |
| modules | C: hbase-client hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3925/1/console |
| versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] apurtell edited a comment on pull request #3922: HBASE-26541 hbase-protocol-shaded not buildable on M1 MacOSX
apurtell edited a comment on pull request #3922: URL: https://github.com/apache/hbase/pull/3922#issuecomment-988087102

See https://issues.apache.org/jira/browse/HBASE-26541?focusedCommentId=17454738&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17454738

Going through this again (first I `rm -rf ~/.m2/repository/com/google`), it seems just the version change is enough. I have updated the PRs.
[GitHub] [hbase] apurtell edited a comment on pull request #3923: HBASE-26541 hbase-protocol-shaded not buildable on M1 MacOSX
apurtell edited a comment on pull request #3923: URL: https://github.com/apache/hbase/pull/3923#issuecomment-988087074

See https://issues.apache.org/jira/browse/HBASE-26541?focusedCommentId=17454738&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17454738

Going through this again (first I `rm -rf ~/.m2/repository/com/google`), it seems just the version change is enough. I have updated the PRs.
[jira] [Comment Edited] (HBASE-26541) hbase-protocol-shaded not buildable on M1 MacOSX
[ https://issues.apache.org/jira/browse/HBASE-26541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17454738#comment-17454738 ]

Andrew Kyle Purtell edited comment on HBASE-26541 at 12/7/21, 4:36 PM:
-----------------------------------------------------------------------

The first issue is this:

{noformat}
[ERROR] Failed to execute goal org.xolstice.maven.plugins:protobuf-maven-plugin:0.6.1:compile (compile-protoc) on project hbase-protocol-shaded: Unable to resolve artifact: Missing:
[ERROR] --
[ERROR] 1) com.google.protobuf:protoc:exe:osx-aarch_64:3.11.4
[ERROR]
[ERROR] Try downloading the file manually from the project website.
[ERROR]
[ERROR] Then, install it using the command:
[ERROR] mvn install:install-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=3.11.4 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file
[ERROR]
[ERROR] Alternatively, if you host your own repository you can deploy the file there:
[ERROR] mvn deploy:deploy-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=3.11.4 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]
[ERROR]
[ERROR] Path to dependency:
[ERROR] 1) org.apache.hbase:hbase-protocol-shaded:jar:3.0.0-alpha-2-SNAPSHOT
[ERROR] 2) com.google.protobuf:protoc:exe:osx-aarch_64:3.11.4
[ERROR]
[ERROR] --
[ERROR] 1 required artifact is missing.
[ERROR]
[ERROR] for artifact:
[ERROR] org.apache.hbase:hbase-protocol-shaded:jar:3.0.0-alpha-2-SNAPSHOT
[ERROR]
[ERROR] from the specified remote repositories:
[ERROR] apache.snapshots (https://repository.apache.org/snapshots, releases=false, snapshots=true),
[ERROR] central (https://repo.maven.apache.org/maven2, releases=true, snapshots=false)
{noformat}

To cure this, you can either manually install the 'protoc' binary into your local Maven cache per the instructions (although 3.11.4 is missing ARM64 atomics and will not compile without a backport of a patch from a later protobuf version), or upgrade the version. Version 3.17.3 was mentioned elsewhere as a version which has an ARM64 EXE available in Maven central. I did a manual install and encountered the any.proto problem. Going through this again (first I `rm -rf ~/.m2/repository/com/google`), it seems just the version change is enough. I have updated the PRs.
[GitHub] [hbase] apurtell commented on pull request #3922: HBASE-26541 hbase-protocol-shaded not buildable on M1 MacOSX
apurtell commented on pull request #3922: URL: https://github.com/apache/hbase/pull/3922#issuecomment-988087102

See https://issues.apache.org/jira/browse/HBASE-26541?focusedCommentId=17454738&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17454738
[GitHub] [hbase] apurtell commented on pull request #3923: HBASE-26541 hbase-protocol-shaded not buildable on M1 MacOSX
apurtell commented on pull request #3923: URL: https://github.com/apache/hbase/pull/3923#issuecomment-988087074

See https://issues.apache.org/jira/browse/HBASE-26541?focusedCommentId=17454738&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17454738
[jira] [Commented] (HBASE-26541) hbase-protocol-shaded not buildable on M1 MacOSX
[ https://issues.apache.org/jira/browse/HBASE-26541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17454738#comment-17454738 ]

Andrew Kyle Purtell commented on HBASE-26541:
---------------------------------------------

The first issue is this:

{noformat}
[ERROR] Failed to execute goal org.xolstice.maven.plugins:protobuf-maven-plugin:0.6.1:compile (compile-protoc) on project hbase-protocol-shaded: Unable to resolve artifact: Missing:
[ERROR] --
[ERROR] 1) com.google.protobuf:protoc:exe:osx-aarch_64:3.11.4
[ERROR]
[ERROR] Try downloading the file manually from the project website.
[ERROR]
[ERROR] Then, install it using the command:
[ERROR] mvn install:install-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=3.11.4 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file
[ERROR]
[ERROR] Alternatively, if you host your own repository you can deploy the file there:
[ERROR] mvn deploy:deploy-file -DgroupId=com.google.protobuf -DartifactId=protoc -Dversion=3.11.4 -Dclassifier=osx-aarch_64 -Dpackaging=exe -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]
[ERROR]
[ERROR] Path to dependency:
[ERROR] 1) org.apache.hbase:hbase-protocol-shaded:jar:3.0.0-alpha-2-SNAPSHOT
[ERROR] 2) com.google.protobuf:protoc:exe:osx-aarch_64:3.11.4
[ERROR]
[ERROR] --
[ERROR] 1 required artifact is missing.
[ERROR]
[ERROR] for artifact:
[ERROR] org.apache.hbase:hbase-protocol-shaded:jar:3.0.0-alpha-2-SNAPSHOT
[ERROR]
[ERROR] from the specified remote repositories:
[ERROR] apache.snapshots (https://repository.apache.org/snapshots, releases=false, snapshots=true),
[ERROR] central (https://repo.maven.apache.org/maven2, releases=true, snapshots=false)
{noformat}

To cure this, you can either manually install protobuf per the instructions (although 3.11.4 is missing ARM64 atomics and will not compile without a backport of a patch from a later protobuf version) or upgrade the version. Version 3.17.3 was mentioned elsewhere as a version which has an ARM64 EXE available in Maven central. I did a manual install and encountered the any.proto problem. Going through this again (first I `rm -rf ~/.m2/repository/com/google`), it seems just the version change is enough. I have updated the PRs.
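The second cure -- upgrading the version -- matches the first bullet of the issue description ("Upgrade internal.protobuf.version to 3.17.3"). As a sketch only (the exact property location in the HBase poms is an assumption here), the change amounts to bumping one Maven property so that the protobuf-maven-plugin resolves a protoc release that publishes an osx-aarch_64 binary:

```xml
<!-- Sketch of the version bump in the pom that defines the property.
     internal.protobuf.version is the property named in the issue description;
     3.17.3 is the version the comment above identifies as having an ARM64
     protoc EXE on Maven Central. -->
<properties>
  <internal.protobuf.version>3.17.3</internal.protobuf.version>
</properties>
```

With this in place no manual `install:install-file` step is needed, since the matching `com.google.protobuf:protoc:exe:osx-aarch_64` artifact can be downloaded from Maven Central directly.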
[GitHub] [hbase] Apache-HBase commented on pull request #3851: HBASE-26286: Add support for specifying store file tracker when restoring or cloning snapshot
Apache-HBase commented on pull request #3851: URL: https://github.com/apache/hbase/pull/3851#issuecomment-988073118

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:---:|---:|:---|:---|
| +0 :ok: | reexec | 0m 29s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +0 :ok: | prototool | 0m 0s | prototool was not available. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ HBASE-26067 Compile Tests _ |
| +0 :ok: | mvndep | 0m 35s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 54s | HBASE-26067 passed |
| +1 :green_heart: | compile | 7m 7s | HBASE-26067 passed |
| +1 :green_heart: | checkstyle | 2m 44s | HBASE-26067 passed |
| +1 :green_heart: | spotbugs | 9m 11s | HBASE-26067 passed |
| -0 :warning: | patch | 2m 11s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 8s | the patch passed |
| +1 :green_heart: | compile | 7m 48s | the patch passed |
| +1 :green_heart: | cc | 7m 48s | the patch passed |
| +1 :green_heart: | javac | 7m 48s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 11s | The patch passed checkstyle in hbase-protocol-shaded |
| +1 :green_heart: | checkstyle | 0m 30s | The patch passed checkstyle in hbase-client |
| +1 :green_heart: | checkstyle | 1m 8s | hbase-server: The patch generated 0 new + 130 unchanged - 9 fixed = 130 total (was 139) |
| +1 :green_heart: | checkstyle | 0m 44s | The patch passed checkstyle in hbase-thrift |
| +1 :green_heart: | checkstyle | 0m 11s | The patch passed checkstyle in hbase-shell |
| +1 :green_heart: | rubocop | 0m 14s | There were no new rubocop issues. |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 22m 20s | Patch does not cause any errors with Hadoop 3.1.2 3.2.2 3.3.1. |
| +1 :green_heart: | hbaseprotoc | 3m 23s | the patch passed |
| +1 :green_heart: | spotbugs | 10m 16s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 57s | The patch does not generate ASF License warnings. |
| | | 87m 5s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3851/5/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3851 |
| JIRA Issue | HBASE-26286 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile cc hbaseprotoc prototool rubocop |
| uname | Linux 72dd3791d597 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | HBASE-26067 / 4aa3f47aa2 |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| Max. process+thread count | 96 (vs. ulimit of 3) |
| modules | C: hbase-protocol-shaded hbase-client hbase-server hbase-thrift hbase-shell U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3851/5/console |
| versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 rubocop=0.80.0 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Comment Edited] (HBASE-25792) Filter out o.a.hadoop.thirdparty building shaded jars
[ https://issues.apache.org/jira/browse/HBASE-25792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17454729#comment-17454729 ]

Bryan Beaudreault edited comment on HBASE-25792 at 12/7/21, 4:12 PM:
---------------------------------------------------------------------

[~stack] I just stumbled across this and I think it may not be the correct solution. I wonder if you can give more context around the decision. I think it may have been more appropriate to add these classes to the allow list. Here's my reasoning:

I tried creating a downstream project which depends on {{hbase-shaded-client}}. I'd expect this to Just Work, but I get the following error when constructing a Configuration object:

{noformat}
Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/thirdparty/com/google/common/base/Preconditions
	at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:430)
	at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:443)
	at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:525)
{noformat}

Looking at the {{hbase-shaded-client}} output fat jar, it does indeed include {{org/apache/hadoop/conf/Configuration}} as I'd expect. If you decompile that Configuration, you'll see that it imports {{org.apache.hadoop.thirdparty.com.google.common.base.Preconditions}}. But due to this JIRA, we do not include that Preconditions class in the fat jar.

One way to work around this on the client side is to explicitly add a dependency on hadoop-shaded-guava, but that is problematic because it requires the end user to keep track of the appropriate version to include over time. It also only solves for guava, when there is also protobuf and potentially other thirdparty artifacts that might cause issues over time. It also poses an issue for environments (like mine) which use maven-dependency-plugin to ensure that there are no unused, declared dependencies in a project.

Thoughts on reverting this and instead adding to the allow list? If you agree I can create a new Jira to track.

> Filter out o.a.hadoop.thirdparty building shaded jars
> -----------------------------------------------------
>
>                 Key: HBASE-25792
>                 URL: https://issues.apache.org/jira/browse/HBASE-25792
>             Project: HBase
>          Issue Type: Bug
>          Components: shading
>    Affects Versions: 3.0.0-alpha-1, 2.5.0, 2.4.3
>           Reporter: Michael Stack
>           Assignee: Michael Stack
>           Priority: Major
>            Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.3
>
> Hadoop 3.3.1 (unreleased currently) shades guava. The shaded guava then trips the check in our shading that tries to exclude hadoop bits from the fat jars we build.
> For the issue to trigger, need to build against tip of hadoop branch-3.3. You then get this complaint:
> {code}
> [INFO] --- exec-maven-plugin:1.6.0:exec (check-jar-contents) @ hbase-shaded-check-invariants ---
> [ERROR] Found artifact with unexpected contents: '/Users/stack/.m2/repository/org/apache/hbase/hbase-shaded-mapreduce/2.3.6-SNAPSHOT/hbase-shaded-mapreduce-2.3.6-SNAPSHOT.jar'
> Please check the following and either correct the build or update
> the allowed
[jira] [Commented] (HBASE-26487) Run some tests to verify the new region replication framework
[ https://issues.apache.org/jira/browse/HBASE-26487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454724#comment-17454724 ] Duo Zhang commented on HBASE-26487:
-----------------------------------

I've done the first test; there is no big difference between the old and the new implementations in write performance. I ran PE 3 times against each.

For the old implementation, the result:
{noformat}
Round 1: latency 158us, qps 31453
Round 2: latency 159us, qps 31353
Round 3: latency 162us, qps 30084
Avg: latency 159.7us, qps 30963.3
{noformat}

For the new implementation, the result:
{noformat}
Round 1: latency 150us, qps 33025
Round 2: latency 146us, qps 32949
Round 3: latency 150us, qps 33987
Avg: latency 148.7us, qps 33320.3
{noformat}

The difference in latency and qps is about 7%, which is not very big, and the new implementation is a bit faster.

> Run some tests to verify the new region replication framework
> -------------------------------------------------------------
>
>                 Key: HBASE-26487
>                 URL: https://issues.apache.org/jira/browse/HBASE-26487
>             Project: HBase
>          Issue Type: Sub-task
>          Components: integration tests, test
>            Reporter: Duo Zhang
>            Assignee: Duo Zhang
>            Priority: Major
>
> Make sure there are no big bugs before merging back.

-- This message was sent by Atlassian Jira (v8.20.1#820001)
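As a sanity check on the PE numbers above, here is a small stand-alone sketch (plain Java; the class and method names are made up for illustration, only the per-round values are taken from the comment) that reproduces the averages and the roughly 7% delta:

```java
// Hypothetical helper for the PE results quoted above; only the round
// values come from the comment, everything else is illustrative.
public class PerfDiff {
    // Arithmetic mean of the per-round measurements.
    static double avg(double... rounds) {
        double sum = 0;
        for (double r : rounds) sum += r;
        return sum / rounds.length;
    }

    // Relative change of the new value against the old baseline, in percent.
    static double pctDiff(double oldVal, double newVal) {
        return (newVal - oldVal) / oldVal * 100.0;
    }

    public static void main(String[] args) {
        double oldQps = avg(31453, 31353, 30084); // old implementation rounds
        double newQps = avg(33025, 32949, 33987); // new implementation rounds
        System.out.printf("old avg qps=%.1f, new avg qps=%.1f, diff=%.1f%%%n",
                oldQps, newQps, pctDiff(oldQps, newQps));
    }
}
```

Running this prints averages of about 30963.3 and 33320.3 qps and a delta of about 7.6%, consistent with the "about 7%" observation.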
[GitHub] [hbase] bsglz opened a new pull request #3925: HBASE-26027 The calling of HTable.batch blocked at AsyncRequestFutureImpl.waitUntilDone caused by ArrayStoreException
bsglz opened a new pull request #3925: URL: https://github.com/apache/hbase/pull/3925

This is a new PR replacing #3419.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (HBASE-26539) The default rpc timeout 200ms is too small for replicating meta edits
[ https://issues.apache.org/jira/browse/HBASE-26539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-26539.
-------------------------------
Fix Version/s: HBASE-26233
 Hadoop Flags: Reviewed
   Resolution: Fixed

> The default rpc timeout 200ms is too small for replicating meta edits
> ---------------------------------------------------------------------
>
>                 Key: HBASE-26539
>                 URL: https://issues.apache.org/jira/browse/HBASE-26539
>             Project: HBase
>          Issue Type: Sub-task
>          Components: read replicas
>            Reporter: Duo Zhang
>            Assignee: Duo Zhang
>            Priority: Major
>             Fix For: HBASE-26233
>
> For most meta edits, we will call refreshStoreFiles, which is time consuming.
> When running tests in HBASE-26487, it is very easy to time out and cause the
> replicating to pause for a while.
> I think for replicating meta edits, we should have a larger timeout value.

-- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (HBASE-26538) Should find a way to clear the replication queue for a legacy region_replica_replication peer
[ https://issues.apache.org/jira/browse/HBASE-26538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-26538.
-------------------------------
Fix Version/s: HBASE-26233
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Should find a way to clear the replication queue for a legacy
> region_replica_replication peer
> --------------------------------------------------------------
>
>                 Key: HBASE-26538
>                 URL: https://issues.apache.org/jira/browse/HBASE-26538
>             Project: HBase
>          Issue Type: Sub-task
>          Components: read replicas, Replication
>            Reporter: Duo Zhang
>            Assignee: Duo Zhang
>            Priority: Major
>             Fix For: HBASE-26233
>
> During a rolling upgrade, we will delete the legacy region_replica_replication
> peer. But since the old region servers still use this peer for replicating,
> we cannot delete all the replication queues.
> We need to find a way to deal with these legacy replication queues after
> upgrading.

-- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [hbase] frostruan commented on pull request #3716: HBASE-26323 introduce a SnapshotProcedure
frostruan commented on pull request #3716: URL: https://github.com/apache/hbase/pull/3716#issuecomment-988001728 yes, I can. Thanks so much. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache9 commented on pull request #3716: HBASE-26323 introduce a SnapshotProcedure
Apache9 commented on pull request #3716: URL: https://github.com/apache/hbase/pull/3716#issuecomment-987998207 Email sent, could you please check whether you can see the email? Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] BukrosSzabolcs commented on pull request #3851: HBASE-26286: Add support for specifying store file tracker when restoring or cloning snapshot
BukrosSzabolcs commented on pull request #3851: URL: https://github.com/apache/hbase/pull/3851#issuecomment-987994714 Fixed the unit test and the error prone part. Hopefully fixed the conflicts too. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] frostruan commented on pull request #3716: HBASE-26323 introduce a SnapshotProcedure
frostruan commented on pull request #3716: URL: https://github.com/apache/hbase/pull/3716#issuecomment-987989218

Really, thanks for your patience @Apache9. Anyway, here is what I want to say:

Hi all,

As we all know, the snapshot feature in HBase currently has a few limitations, so I want to propose a proc-v2 implementation of snapshot. Here are some related links:

jira: https://issues.apache.org/jira/browse/HBASE-26323
design doc: https://docs.google.com/document/d/1Il_PB1SenXGr1-mmCIWEogxEMeGZe2fpuN3bMbjqiGI/edit
the initial implementation: https://github.com/apache/hbase/pull/3920

If you are interested, please take a look in your free time. Looking forward to your advice and feedback. Thanks.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache9 commented on pull request #3716: HBASE-26323 introduce a SnapshotProcedure
Apache9 commented on pull request #3716: URL: https://github.com/apache/hbase/pull/3716#issuecomment-987979639

No, there is no such limitation. The only possible condition is whether you have subscribed to the mailing list... Anyway, you could send the content to me first and then I can help post it to the mailing list... Thanks

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-26524) Support remove coprocessor by class name via alter table command
[ https://issues.apache.org/jira/browse/HBASE-26524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454665#comment-17454665 ] Hudson commented on HBASE-26524:
--------------------------------
Results for branch master [build #462 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/]: (x) *{color:red}-1 overall{color}* details (if available):
(/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Support remove coprocessor by class name via alter table command
> ----------------------------------------------------------------
>
>                 Key: HBASE-26524
>                 URL: https://issues.apache.org/jira/browse/HBASE-26524
>             Project: HBase
>          Issue Type: Improvement
>          Components: Coprocessors, shell
>    Affects Versions: 3.0.0-alpha-2
>            Reporter: Tak-Lon (Stephen) Wu
>            Assignee: Tak-Lon (Stephen) Wu
>            Priority: Major
>              Labels: incompatible, incompatibleChange
>             Fix For: 3.0.0-alpha-2
>
> With the shell, when an operator wants to remove a table coprocessor, the flow is to:
> 1. first use {{desc}} to find the mapping of coprocessor$#, e.g.
> coprocessor$1, where # is the ordinal number under which the coprocessor was
> internally added to the table attributes
> 2. issue {{table_att_unset}} with the target `coprocessor$#` that maps to a
> value that includes the unique class name.
> This task is to simplify the flow: if the operator knows the exact class name
> of the added coprocessor, a new sub-method of {{alter}} lets the operator
> remove it with only the class name.
> NOTE that this logic has been added behind the scenes at
> [TableDescriptorBuilder#removeCoprocessor|https://github.com/apache/hbase/blob/358c4dc9022c507ee0159c1d4916aba41d42cde8/hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableDescriptorBuilder.java#L1537-L1565]
> for removing `ConstraintProcessor`, and we are just exposing this logic
> with a new method on the {{alter}} command.

-- This message was sent by Atlassian Jira (v8.20.1#820001)
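The matching idea behind the linked TableDescriptorBuilder#removeCoprocessor can be sketched as follows. This is a simplified stand-alone model, not the actual HBase code: the map stands in for the table descriptor's attributes, and the `org.example.*` class names and the `|class|priority|` value format are illustrative.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified model of removing a coprocessor attribute by class name.
// Table attributes map keys like "coprocessor$1" to values that embed the
// coprocessor class name (alongside jar path, priority, and arguments).
public class CoprocessorAttrs {

    // Drop every coprocessor$N entry whose value mentions the class name,
    // so the operator does not need to know the N in coprocessor$N.
    static void removeByClassName(Map<String, String> attrs, String className) {
        attrs.entrySet().removeIf(e ->
                e.getKey().startsWith("coprocessor$")
                        && e.getValue().contains(className));
    }

    public static void main(String[] args) {
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("coprocessor$1", "|org.example.MyObserver|1001|");
        attrs.put("coprocessor$2", "|org.example.OtherObserver|1002|");

        removeByClassName(attrs, "org.example.MyObserver");
        System.out.println(attrs.keySet()); // only coprocessor$2 remains
    }
}
```

The point of the design is that the class name, not the ordinal attribute key, becomes the handle the operator works with.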
[jira] [Commented] (HBASE-26462) Should persist restoreAcl flag in the procedure state for CloneSnapshotProcedure and RestoreSnapshotProcedure
[ https://issues.apache.org/jira/browse/HBASE-26462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454662#comment-17454662 ] Hudson commented on HBASE-26462: Results for branch master [build #462 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Should persist restoreAcl flag in the procedure state for > CloneSnapshotProcedure and RestoreSnapshotProcedure > - > > Key: HBASE-26462 > URL: https://issues.apache.org/jira/browse/HBASE-26462 > Project: HBase > Issue Type: Bug > Components: proc-v2, snapshots >Reporter: Duo Zhang >Assignee: LiangJun He >Priority: Critical > Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.9 > > > Found this when reviewing HBASE-26454. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (HBASE-26527) ArrayIndexOutOfBoundsException in KeyValueUtil.copyToNewKeyValue()
[ https://issues.apache.org/jira/browse/HBASE-26527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454663#comment-17454663 ] Hudson commented on HBASE-26527:
--------------------------------
Results for branch master [build #462 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/]: (x) *{color:red}-1 overall{color}* details (if available):
(/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> ArrayIndexOutOfBoundsException in KeyValueUtil.copyToNewKeyValue()
> ------------------------------------------------------------------
>
>                 Key: HBASE-26527
>                 URL: https://issues.apache.org/jira/browse/HBASE-26527
>             Project: HBase
>          Issue Type: Bug
>          Components: wal
>    Affects Versions: 2.2.7, 3.0.0-alpha-2
>            Reporter: Istvan Toth
>            Assignee: Istvan Toth
>            Priority: Major
>             Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.9
>
> While investigating a Phoenix crash, I've found a possible problem in
> KeyValueUtil.
> When using Phoenix, we need to configure (at least for older versions)
> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec as a WAL codec
> in HBase.
> This codec will eventually serialize standard (not Phoenix-specific) WAL
> entries to the WAL file, and internally converts the Cell objects to
> KeyValue objects by building a new byte[].
> This fails with an ArrayIndexOutOfBoundsException, because we allocate a
> byte[] the size of Cell.getSerializedSize(), and it seems that we are
> processing a Cell that does not actually serialize the column family and
> later fields.
> However, we are building a traditional KeyValue object for serialization,
> which does serialize them, hence we run out of bytes.
> I think that since we are writing a KeyValue, we should not rely on the
> getSerializedSize() method of the source cell, but rather calculate the
> backing array size based on how KeyValue expects its data to be serialized.
> The stack trace for reference:
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 9787
> 	at org.apache.hadoop.hbase.util.Bytes.putByte(Bytes.java:502)
> 	at org.apache.hadoop.hbase.KeyValueUtil.appendKeyTo(KeyValueUtil.java:142)
> 	at org.apache.hadoop.hbase.KeyValueUtil.appendToByteArray(KeyValueUtil.java:156)
> 	at org.apache.hadoop.hbase.KeyValueUtil.copyToNewByteArray(KeyValueUtil.java:133)
> 	at org.apache.hadoop.hbase.KeyValueUtil.copyToNewKeyValue(KeyValueUtil.java:97)
> 	at org.apache.phoenix.util.PhoenixKeyValueUtil.maybeCopyCell(PhoenixKeyValueUtil.java:214)
> 	at org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec$IndexKeyValueEncoder.write(IndexedWALEditCodec.java:218)
> 	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.append(ProtobufLogWriter.java:59)
> 	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:294)
> 	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:65)
> 	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendEntry(AbstractFSWAL.java:931)
> 	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1075)
> 	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:964)
> 	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:873)
> 	at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:129)
> 	at java.lang.Thread.run(Thread.java:748)
> {noformat}
> Note that I am still not sure exactly what triggers this bug; one possibility
> is org.apache.hadoop.hbase.ByteBufferKeyOnlyKeyValue

-- This message was sent by Atlassian Jira (v8.20.1#820001)
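The failure mode described above (a destination buffer sized by one accounting of the cell while the writer serializes more fields) reduces to a small plain-Java sketch. The names below mimic the HBase ones, but this is an illustration of the mechanism, not the real implementation:

```java
// Illustrative only: a writer that emits more bytes than the size estimate
// used to allocate the buffer overruns the array, the same shape of failure
// as KeyValueUtil.copyToNewKeyValue() with a mismatched getSerializedSize().
public class BufferOverrun {

    // Mimics org.apache.hadoop.hbase.util.Bytes.putByte: write one byte and
    // return the next offset; overruns surface as ArrayIndexOutOfBoundsException.
    static int putByte(byte[] buf, int offset, byte b) {
        buf[offset] = b;
        return offset + 1;
    }

    // Returns true if all fields fit in a buffer of the estimated size.
    static boolean serializeWithEstimate(int estimatedSize, byte[]... fields) {
        byte[] out = new byte[estimatedSize];
        int off = 0;
        try {
            for (byte[] field : fields) {
                for (byte b : field) {
                    off = putByte(out, off, b);
                }
            }
            return true;
        } catch (ArrayIndexOutOfBoundsException e) {
            return false; // ran out of bytes, as in the reported stack trace
        }
    }

    public static void main(String[] args) {
        byte[] key = {1, 2, 3, 4};
        byte[] familyAndLater = {5, 6}; // fields the short estimate did not count
        // Estimate covers only the key, but the writer also emits the rest:
        System.out.println(serializeWithEstimate(key.length, key, familyAndLater));
        // Correct estimate counts every field the writer actually serializes:
        System.out.println(serializeWithEstimate(key.length + familyAndLater.length, key, familyAndLater));
    }
}
```

This mirrors the proposed fix: size the array from what the KeyValue writer will emit, not from the source cell's own size estimate.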
[jira] [Commented] (HBASE-26529) Document HBASE-26524 to section of Dynamic Unloading
[ https://issues.apache.org/jira/browse/HBASE-26529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454664#comment-17454664 ] Hudson commented on HBASE-26529: Results for branch master [build #462 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/462/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Document HBASE-26524 to section of Dynamic Unloading > > > Key: HBASE-26529 > URL: https://issues.apache.org/jira/browse/HBASE-26529 > Project: HBase > Issue Type: Task > Components: Coprocessors, shell >Affects Versions: 2.5.0, 3.0.0-alpha-2 >Reporter: Tak-Lon (Stephen) Wu >Assignee: Tak-Lon (Stephen) Wu >Priority: Minor > Fix For: 3.0.0-alpha-2 > > > HBASE-26524 has merged, and we need to update the documentation such that > operator knows there is a method of {{table_remove_coprocessor}} for > {{alter}} command -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Work started] (HBASE-26027) The calling of HTable.batch blocked at AsyncRequestFutureImpl.waitUntilDone caused by ArrayStoreException
[ https://issues.apache.org/jira/browse/HBASE-26027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on HBASE-26027 started by Zheng Wang.

> The calling of HTable.batch blocked at AsyncRequestFutureImpl.waitUntilDone caused by ArrayStoreException
>
> Key: HBASE-26027
> URL: https://issues.apache.org/jira/browse/HBASE-26027
> Project: HBase
> Issue Type: Bug
> Components: Client
> Affects Versions: 2.2.7, 2.3.5, 2.4.4
> Reporter: Zheng Wang
> Assignee: Zheng Wang
> Priority: Major
> Fix For: 2.5.0, 2.3.8, 2.4.9
>
> The batch API of HTable takes a parameter named results to store results or exceptions; its type is Object[].
> If the user passes an array of a narrower type, e.g. org.apache.hadoop.hbase.client.Result[], and we need to store an exception into it for some reason, an ArrayStoreException occurs in AsyncRequestFutureImpl.updateResult. AsyncRequestFutureImpl.decActionCounter is then skipped, so AsyncRequestFutureImpl.waitUntilDone gets stuck checking actionsInProgress again and again, forever.
> It would be better to add a cutoff calculated from operationTimeout instead of depending only on the value of actionsInProgress.
> Note this issue only affects 2.x; in 3.x the implementation has been refactored.
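The trap described above is plain Java array covariance rather than anything HBase-specific. A minimal stand-in sketch (the Result class below is a placeholder for org.apache.hadoop.hbase.client.Result, not HBase code):

```java
// Sketch: a Result[] can be passed where Object[] is expected (Java arrays
// are covariant), but storing a non-Result element fails only at runtime
// with ArrayStoreException -- the failure mode behind HBASE-26027.
public class ArrayStoreDemo {
    // placeholder for org.apache.hadoop.hbase.client.Result
    static class Result {}

    public static void main(String[] args) {
        Result[] results = new Result[1];
        Object[] slots = results; // compiles fine: arrays are covariant
        try {
            // mimics updateResult storing an exception into the caller's array
            slots[0] = new RuntimeException("stand-in for DoNotRetryIOException");
        } catch (ArrayStoreException e) {
            // in HBase this unwinds past decActionCounter, so waitUntilDone
            // never sees the in-progress action count reach zero
            System.out.println("ArrayStoreException"); // prints ArrayStoreException
        }
    }
}
```

This is also why the hang only bites callers who pass a typed array such as `new Result[gets.size()]`; an actual `Object[]` can hold the exception and the counter is decremented normally.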
> How to reproduce:
> 1: add a sleep in RSRpcServices.multi to mock a slow response
> {code:java}
> try {
>   Thread.sleep(2000);
> } catch (InterruptedException e) {
>   e.printStackTrace();
> }
> {code}
> 2: set timeouts in the config
> {code:java}
> conf.set("hbase.rpc.timeout", "2000");
> conf.set("hbase.client.operation.timeout", "6000");
> {code}
> 3: call the batch API
> {code:java}
> Table table = HbaseUtil.getTable("test");
> byte[] cf = Bytes.toBytes("f");
> byte[] c = Bytes.toBytes("c1");
> List<Get> gets = new ArrayList<>();
> for (int i = 0; i < 10; i++) {
>   byte[] rk = Bytes.toBytes("rk-" + i);
>   Get get = new Get(rk);
>   get.addColumn(cf, c);
>   gets.add(get);
> }
> Result[] results = new Result[gets.size()];
> table.batch(gets, results);
> {code}
> The log will look like below:
> {code:java}
> [ERROR] [2021/06/22 23:23:00,676] hconnection-0x6b927fb-shared-pool3-t1 - id=1 error for test processing localhost,16020,1624343786295
> java.lang.ArrayStoreException: org.apache.hadoop.hbase.DoNotRetryIOException
>   at org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.updateResult(AsyncRequestFutureImpl.java:1242)
>   at org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.trySetResultSimple(AsyncRequestFutureImpl.java:1087)
>   at org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.setError(AsyncRequestFutureImpl.java:1021)
>   at org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.manageError(AsyncRequestFutureImpl.java:683)
>   at org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.receiveGlobalFailure(AsyncRequestFutureImpl.java:716)
>   at org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.access$1500(AsyncRequestFutureImpl.java:69)
>   at org.apache.hadoop.hbase.client.AsyncRequestFutureImpl$SingleServerRequestRunnable.run(AsyncRequestFutureImpl.java:219)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
>   at java.util.concurrent.FutureTask.run(FutureTask.java)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> [INFO ] [2021/06/22 23:23:10,375] main - #1, waiting for 10 actions to finish on table: test
> [INFO ] [2021/06/22 23:23:20,378] main - #1, waiting for 10 actions to finish on table: test
> [INFO ] [2021/06/22 23:23:30,384] main - #1, waiting for 10 actions to finish on table: test
> [INFO ] [2021/06/22 23:23:40,387] main - #1, waiting for 10 actions to finish on table: test
> [INFO ] [2021/06/22 23:23:50,397] main - #1, waiting for 10 actions to finish on table: test
> [INFO ] [2021/06/22 23:24:00,400] main - #1, waiting for 10 actions to finish on table: test
> [INFO ] [2021/06/22 23:24:10,408] main - #1, waiting for 10 actions to finish on table: test
> [INFO ] [2021/06/22 23:24:20,413] main - #1, waiting for 10 actions to finish on table: test
> {code}
[GitHub] [hbase] Apache9 commented on pull request #3922: HBASE-26541 hbase-protocol-shaded not buildable on M1 MacOSX
Apache9 commented on pull request #3922: URL: https://github.com/apache/hbase/pull/3922#issuecomment-987924716

I replied on JIRA; we need more information on the failure message, since we do at least have any.proto in the shaded protobuf jar...

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-26541) hbase-protocol-shaded not buildable on M1 MacOSX
[ https://issues.apache.org/jira/browse/HBASE-26541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454651#comment-17454651 ]

Duo Zhang commented on HBASE-26541:
---

And what is the problem with using protobuf 3.11.4? The version number is just for protoc, I think. Are the java files generated by protoc not compatible with the protobuf version in hbase-protobuf-shaded? Strange...

> hbase-protocol-shaded not buildable on M1 MacOSX
>
> Key: HBASE-26541
> URL: https://issues.apache.org/jira/browse/HBASE-26541
> Project: HBase
> Issue Type: Bug
> Affects Versions: 2.5.0, 2.4.8
> Environment: Apache Maven 3.8.3 (ff8e977a158738155dc465c6a97ffaf31982d739)
> Java version: 1.8.0_312, vendor: Azul Systems, Inc., runtime: /Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home/jre
> OS name: "mac os x", version: "12.0.1", arch: "aarch64", family: "mac"
> Reporter: Andrew Kyle Purtell
> Assignee: Andrew Kyle Purtell
> Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-2
>
> I needed to make some changes to get branch-2's hbase-protocol-shaded building on an M1 mac.
> - Upgrade internal.protobuf.version to 3.17.3.
> - any.proto include not found. Get it from github.com/google/protobuf and add it.
> - Warbucks rule fails because of any.proto. Disable warbucks just in hbase-protobuf-shaded.
[jira] [Commented] (HBASE-26541) hbase-protocol-shaded not buildable on M1 MacOSX
[ https://issues.apache.org/jira/browse/HBASE-26541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454649#comment-17454649 ]

Duo Zhang commented on HBASE-26541:
---

We do have any.proto in our hbase-shaded-protobuf; you can unzip hbase-shaded-protobuf.jar and check the contents, it is under the google/protobuf directory. Could you please provide more information about the failure on the M1 mac? Paste the maven output?
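The check Duo Zhang suggests can also be scripted instead of unzipping by hand. A self-contained sketch (it builds a tiny stand-in jar so it runs anywhere; against the real artifact you would pass the path to hbase-shaded-protobuf.jar instead):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.util.jar.JarFile;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class JarEntryCheck {
    // A jar is a zip archive, so a simple entry lookup confirms whether
    // google/protobuf/any.proto is bundled.
    static boolean hasEntry(String jarPath, String entry) throws Exception {
        try (JarFile jar = new JarFile(jarPath)) {
            return jar.getEntry(entry) != null;
        }
    }

    public static void main(String[] args) throws Exception {
        // Build a stand-in jar containing the entry so the sketch is
        // self-contained; substitute the real shaded jar path in practice.
        File jar = File.createTempFile("demo", ".jar");
        try (ZipOutputStream out = new ZipOutputStream(new FileOutputStream(jar))) {
            out.putNextEntry(new ZipEntry("google/protobuf/any.proto"));
            out.closeEntry();
        }
        System.out.println(hasEntry(jar.getPath(), "google/protobuf/any.proto")); // prints true
    }
}
```

Pasting the boolean (or the `unzip -l` listing) into the ticket would answer the "is any.proto really missing" question directly.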
[jira] [Commented] (HBASE-26233) The region replication framework should not be built upon the general replication framework
[ https://issues.apache.org/jira/browse/HBASE-26233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454502#comment-17454502 ]

Hudson commented on HBASE-26233:

Results for branch HBASE-26233 [build #6 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/HBASE-26233/6/]: (x) *{color:red}-1 overall{color}*

details (if available):
(/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/HBASE-26233/6/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/HBASE-26233/6/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/HBASE-26233/6/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> The region replication framework should not be built upon the general replication framework
>
> Key: HBASE-26233
> URL: https://issues.apache.org/jira/browse/HBASE-26233
> Project: HBase
> Issue Type: Umbrella
> Components: read replicas
> Reporter: Duo Zhang
> Assignee: Duo Zhang
> Priority: Major
>
> At least at the source path, where we track the edits, we should not make region replication rely on the general replication framework.
> The difficulty in switching to a table-based storage is that the WAL system and the replication system highly depend on each other. There will be a cyclic dependency if we want to store replication peer and queue data in an hbase table.
> And after HBASE-18070, even the meta wal provider will be integrated with the replication system, which makes things more difficult.
> But in general, for region replication it is not a big deal to lose some edits, since a flush can fix everything, which means we do not need such a heavy tracking system as in the general replication framework.
> We should find a more lightweight way to do region replication.
[jira] [Assigned] (HBASE-26409) `MAXIMUN_KEY_LENGTH` changed from 0.20.0 to 0.20.2
[ https://issues.apache.org/jira/browse/HBASE-26409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viraj Jasani reassigned HBASE-26409:

Assignee: thrylokya

> `MAXIMUN_KEY_LENGTH` changed from 0.20.0 to 0.20.2
>
> Key: HBASE-26409
> URL: https://issues.apache.org/jira/browse/HBASE-26409
> Project: HBase
> Issue Type: Bug
> Affects Versions: 0.20.0, 0.20.2
> Reporter: Yongkang Li
> Assignee: thrylokya
> Priority: Minor
>
> In HFile.java, there is a constant called `MAXIMUM_KEY_LENGTH`, which is used to check the validity of the key. However, its value changed from 64 * 1024 to Integer.MAX_VALUE. Therefore, I wonder whether it might throw an exception when checking the key after downgrading from 0.20.2 to 0.20.0.
[GitHub] [hbase] nkalmar commented on a change in pull request #3872: HBASE-26340 - fix RegionSizeCalculator getLEngth to bytes instead of …
nkalmar commented on a change in pull request #3872: URL: https://github.com/apache/hbase/pull/3872#discussion_r763734266

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
## @@ -1460,6 +1466,10 @@ RegionLoad createRegionLoad(final HRegion r, RegionLoad.Builder regionLoadBldr,
       totalStaticIndexSizeKB += (int) (store.getTotalStaticIndexSize() / 1024);
       totalStaticBloomSizeKB += (int) (store.getTotalStaticBloomSize() / 1024);
     }
+    // HBASE-26340 Fix false "0" size under 1MB
+    if (storefileSizeMB < 1 && nonEmptyStoreExist) {

Review comment: That works, sure; I think we are thinking the same thing as when I said above that we could use a long to track "storefileSize(MB)(Temp)". No need for Temp though, just remove MB, but at the end we set this as MB just like before. I'll make the change today.
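The false "0" size under discussion comes from integer division: summing store file sizes in whole megabytes floors any region with under 1 MB of data to 0. A sketch of the issue and of the fix shape in the diff above (the field names storefileSizeMB and nonEmptyStoreExist follow that diff; the helper methods here are hypothetical, not the actual HRegionServer code):

```java
public class RegionSizeRounding {
    // naive conversion: a 512 KB store floors to 0 MB
    static int naiveMB(long sizeBytes) {
        return (int) (sizeBytes / 1024 / 1024);
    }

    // shape of the HBASE-26340 fix: track bytes in a long, and if a
    // non-empty store would round down to 0 MB, report 1 MB instead
    static int fixedMB(long sizeBytes) {
        int storefileSizeMB = (int) (sizeBytes / 1024 / 1024);
        boolean nonEmptyStoreExist = sizeBytes > 0;
        if (storefileSizeMB < 1 && nonEmptyStoreExist) {
            return 1;
        }
        return storefileSizeMB;
    }

    public static void main(String[] args) {
        System.out.println(naiveMB(512L * 1024)); // prints 0
        System.out.println(fixedMB(512L * 1024)); // prints 1
    }
}
```

Keeping the running total in a long (bytes) and converting only when the RegionLoad is built, as the reviewers discuss, avoids accumulating the rounding error per store.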