[GitHub] [hbase] Apache-HBase commented on pull request #1257: HBASE 23887 Up to 3x increase BlockCache performance

2020-06-09 Thread GitBox


Apache-HBase commented on pull request #1257:
URL: https://github.com/apache/hbase/pull/1257#issuecomment-641748908


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 17s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 20s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m 32s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 58s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  hbase-server: The patch 
generated 0 new + 9 unchanged - 2 fixed = 9 total (was 11)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  13m 23s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   2m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 13s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  39m  8s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/35/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1257 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti 
checkstyle |
   | uname | Linux 798069912a71 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 7b396e9b8c |
   | Max. process+thread count | 84 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/35/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] saintstack commented on a change in pull request #1774: HBASE-24389 Introduce new master rpc methods to locate meta region through root region

2020-06-09 Thread GitBox


saintstack commented on a change in pull request #1774:
URL: https://github.com/apache/hbase/pull/1774#discussion_r437876487



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##
@@ -3921,4 +3996,85 @@ public MetaRegionLocationCache getMetaRegionLocationCache() {
   public RSGroupInfoManager getRSGroupInfoManager() {
     return rsGroupInfoManager;
   }
+
+  public RegionLocations locateMeta(byte[] row, RegionLocateType locateType) throws IOException {
+    if (locateType == RegionLocateType.AFTER) {
+      // as we know the exact row after us, so we can just create the new row, and use the same
+      // algorithm to locate it.
+      row = Arrays.copyOf(row, row.length + 1);
+      locateType = RegionLocateType.CURRENT;
+    }
+    Scan scan =
+      MetaTableAccessor.createLocateRegionScan(TableName.META_TABLE_NAME, row, locateType, 1);
+    try (RegionScanner scanner = masterRegion.getScanner(scan)) {
+      boolean moreRows;
+      List<Cell> cells = new ArrayList<>();
+      do {
+        moreRows = scanner.next(cells);
+        if (cells.isEmpty()) {
+          continue;
+        }
+        Result result = Result.create(cells);
+        cells.clear();
+        RegionLocations locs = MetaTableAccessor.getRegionLocations(result);
+        if (locs == null || locs.getDefaultRegionLocation() == null) {
+          LOG.warn("No location found when locating meta region with row='{}', locateType={}",
+            Bytes.toStringBinary(row), locateType);
+          return null;
+        }
+        HRegionLocation loc = locs.getDefaultRegionLocation();
+        RegionInfo info = loc.getRegion();
+        if (info == null) {
+          LOG.warn("HRegionInfo is null when locating meta region with row='{}', locateType={}",
+            Bytes.toStringBinary(row), locateType);
+          return null;
+        }
+        if (info.isSplitParent()) {
+          continue;
+        }
+        return locs;
+      } while (moreRows);
+      LOG.warn("No location available when locating meta region with row='{}', locateType={}",
+        Bytes.toStringBinary(row), locateType);
+      return null;
+    }
+  }
+
+  public List<RegionLocations> getAllMetaRegionLocations(boolean excludeOfflinedSplitParents)
+    throws IOException {
+    Scan scan = new Scan().addFamily(HConstants.CATALOG_FAMILY);
+    List<RegionLocations> list = new ArrayList<>();
+    try (RegionScanner scanner = masterRegion.getScanner(scan)) {
+      boolean moreRows;
+      List<Cell> cells = new ArrayList<>();
+      do {
+        moreRows = scanner.next(cells);
+        if (cells.isEmpty()) {
+          continue;
+        }
+        Result result = Result.create(cells);
+        cells.clear();
+        RegionLocations locs = MetaTableAccessor.getRegionLocations(result);
+        if (locs == null) {
+          LOG.warn("No locations in {}", result);
+          continue;
+        }
+        HRegionLocation loc = locs.getRegionLocation();
+        if (loc == null) {
+          LOG.warn("No non null location in {}", result);
+          continue;
+        }
+        RegionInfo info = loc.getRegion();
+        if (info == null) {
+          LOG.warn("No serialized RegionInfo in {}", result);
+          continue;
+        }
+        if (excludeOfflinedSplitParents && info.isSplitParent()) {
+          continue;
+        }
+        list.add(locs);
+      } while (moreRows);
+    }
+    return list;
+  }

Review comment:
   Can be follow-on.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] saintstack commented on a change in pull request #1774: HBASE-24389 Introduce new master rpc methods to locate meta region through root region

2020-06-09 Thread GitBox


saintstack commented on a change in pull request #1774:
URL: https://github.com/apache/hbase/pull/1774#discussion_r437876145



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/CreateTableProcedure.java
##
@@ -365,8 +365,6 @@ protected static void moveTempDirectoryToHBaseRoot(
       final List<RegionInfo> regions) throws IOException {
     assert (regions != null && regions.size() > 0) : "expected at least 1 region, got " + regions;
 
-    ProcedureSyncWait.waitMetaRegions(env);

Review comment:
   Ok. Work to do here.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] saintstack commented on a change in pull request #1774: HBASE-24389 Introduce new master rpc methods to locate meta region through root region

2020-06-09 Thread GitBox


saintstack commented on a change in pull request #1774:
URL: https://github.com/apache/hbase/pull/1774#discussion_r437875533



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterServices.java
##
@@ -552,4 +553,12 @@ default SplitWALManager getSplitWALManager(){
    * @return The state of the load balancer, or false if the load balancer isn't defined.
    */
   boolean isBalancerOn();
+
+  /**
+   * Get locations for all meta regions.
+   * @param excludeOfflinedSplitParents don't return split parents
+   * @return The locations of all the meta regions
+   */
+  List<RegionLocations> getAllMetaRegionLocations(boolean excludeOfflinedSplitParents)

Review comment:
   Ok. Helps. I dislike this parameter. It has the smell of 'more' params 
being needed as we progress... but perhaps not... We have a method like this 
already over in the Meta Accessor.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] saintstack commented on pull request #1774: HBASE-24389 Introduce new master rpc methods to locate meta region through root region

2020-06-09 Thread GitBox


saintstack commented on pull request #1774:
URL: https://github.com/apache/hbase/pull/1774#issuecomment-641736719


   Hopefully we can do work up front so that new clients do not have to take a 
big pause downloading MBs when joining a big cluster.
   
   `There is no paging support in the client facing API right?`
   
   We do have 'paging' in the client API: Scans (I talked of a 
paging/iterating/cursor/scan-like API, I thought?).
   
   Here I'm advocating reuse and existing models -- e.g. the RegionLocator 
Interface seems to have your two APIs already, as you note, so why wouldn't we 
use it in ConnectionRegistry instead -- rather than new API and new PBs. The 
response seems to be that it's all internal so it's going to be fine; sure, we 
can make it work, but if there's an opportunity for reuse, it could be better.
   
   We can add prefetch to the PB to 'improve' performance, but the API is still 
one-at-a-time; the consumer has to do the juggling to figure out what it should 
pass in as the next 'row', instead of just next'ing to get the next Region.
   
   I get your bit about Scan being too much to expose. That seems good. But I 
think we should keep the model of iterating across Table results here; it 
doesn't have to be full-on Scan.
   
   Master doesn't expose Scan Interface so I suppose we can't reuse Scan RPC.
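   
   For concreteness, a minimal consumer-side sketch of that juggling (Java; 
`SingleLocationLookup` and `locateMetaRegionAfter` are hypothetical stand-ins 
for a one-location-at-a-time RPC, not APIs from this PR; a RegionLocator-style 
interface would hide this whole loop behind `getAllRegionLocations()`):
   
   ```java
   import java.io.IOException;
   import java.util.ArrayList;
   import java.util.List;
   import org.apache.hadoop.hbase.HConstants;
   import org.apache.hadoop.hbase.HRegionLocation;
   import org.apache.hadoop.hbase.util.Bytes;

   public final class MetaLocationPagingSketch {

     /** Hypothetical stand-in for a one-location-at-a-time master RPC. */
     public interface SingleLocationLookup {
       HRegionLocation locateMetaRegionAfter(byte[] row) throws IOException;
     }

     /** The consumer-side "juggling": derive the next probe row from each returned region. */
     public static List<HRegionLocation> fetchAll(SingleLocationLookup lookup) throws IOException {
       List<HRegionLocation> all = new ArrayList<>();
       byte[] row = HConstants.EMPTY_START_ROW;
       while (true) {
         HRegionLocation loc = lookup.locateMetaRegionAfter(row);
         if (loc == null) {
           break;
         }
         all.add(loc);
         byte[] endKey = loc.getRegion().getEndKey();
         if (Bytes.equals(endKey, HConstants.EMPTY_END_ROW)) {
           break; // reached the last meta region
         }
         row = endKey; // the caller, not the API, figures out what the next 'row' is
       }
       return all;
     }
   }
   ```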
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (HBASE-23887) BlockCache performance improve by reduce eviction rate

2020-06-09 Thread Danil Lipovoy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17128349#comment-17128349
 ] 

Danil Lipovoy edited comment on HBASE-23887 at 6/10/20, 5:40 AM:
-

Is this OK for the summary doc?
 Sorry for the many mistakes, my English is quite bad. I hope someone will 
correct the text.

—

Sometimes we read much more data than can fit into the BlockCache, and this 
causes a high rate of evictions.

This in turn leads to heavy Garbage Collector work: a lot of blocks are put 
into the BlockCache but never read, while a lot of CPU resources are spent 
cleaning them out.

!BlockCacheEvictionProcess.gif!

(I will update the parameter name in the gif later)

We can avoid this situation via the following parameters:

*hbase.lru.cache.heavy.eviction.count.limit* - sets how many times the eviction 
process has to run before we start skipping puts into the BlockCache. By default 
it is 2147483647, which effectively disables this performance feature: eviction 
runs about every 5-10 seconds (depending on the workload) and 
2147483647 * 10 / 60 / 60 / 24 / 365 = 680 years, so only after that time would 
it start to work. We can set this parameter to 0 and have the feature working 
right away.

But if we sometimes have short reads of the same data and sometimes long-term 
reads, we can separate them with this parameter.

For example, if we know that our short reads usually last about 1 minute, we can 
set the parameter to about 10, so the feature is enabled only for long, massive 
reads (after ~100 seconds). So when we do short reads and want all of that data 
in the cache, we will have it (except for ordinary eviction, of course). When we 
do long-term heavy reads, the feature is enabled after some time and brings 
better performance.
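
As a minimal sketch of the example above (assuming the standard HBase 
Configuration API; the integer type for this property is my assumption):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public final class HeavyEvictionCountLimitExample {
  public static Configuration configure() {
    Configuration conf = HBaseConfiguration.create();
    // Eviction runs roughly every 5-10 seconds, so a limit of 10 means the feature
    // only kicks in after ~100 seconds of sustained heavy eviction (long massive reads),
    // while short reads (~1 minute) keep caching 100% of blocks.
    conf.setInt("hbase.lru.cache.heavy.eviction.count.limit", 10);
    return conf;
  }
}
{code}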

 

*hbase.lru.cache.heavy.eviction.mb.size.limit* - sets how many bytes we would 
like to be put into the BlockCache (and evicted from it) per period. The feature 
tries to reach this value and maintain it. Don't set it too small, because that 
leads to a premature exit from this mode. For powerful CPUs (about 20-40 physical 
cores) it could be about 400-500 MB; for an average system (~10 cores), 
200-300 MB; some weak systems (2-5 cores) may be fine with 50-100 MB.

How it works: we set the limit, and after each ~10-second period we calculate 
how many bytes were freed:

Overhead (%) = Freed Bytes Sum (MB) * 100 / Limit (MB) - 100

For example, if we set the limit to 500 and 2000 MB were evicted, the overhead 
is:

2000 * 100 / 500 - 100 = 300%

The feature then reduces the percentage of data blocks being cached, bringing 
the evicted bytes closer to 100% of the limit (500 MB). It is a kind of 
auto-scaling.

If fewer bytes were freed than the limit, we get a negative overhead; for 
example, if 200 MB were freed:

200 * 100 / 500 - 100 = -60%

The feature then increases the percentage of cached blocks, again bringing the 
evicted bytes closer to 100% of the limit (500 MB).
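
A tiny sketch of the overhead formula above, with both worked examples (plain 
Java, just the arithmetic, not the patch itself):

{code:java}
public final class EvictionOverheadExample {

  /** Overhead (%) = Freed Bytes Sum (MB) * 100 / Limit (MB) - 100, as described above. */
  static long overheadPercent(long freedMb, long limitMb) {
    return freedMb * 100 / limitMb - 100;
  }

  public static void main(String[] args) {
    // Evicted far more than the limit: positive overhead, so cache fewer data blocks.
    System.out.println(overheadPercent(2000, 500)); // 300
    // Evicted less than the limit: negative overhead, so cache more data blocks again.
    System.out.println(overheadPercent(200, 500));  // -60
  }
}
{code}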

We can see the current situation in the RegionServer log:

_BlockCache evicted (MB): 0, overhead (%): -100, heavy eviction counter: 0, 
current caching DataBlock (%): 100_ <- no eviction, 100% of blocks are cached

_BlockCache evicted (MB): 2000, overhead (%): 300, heavy eviction counter: 1, 
current caching DataBlock (%): 97_ <- eviction begins, caching of blocks is 
reduced

This helps to tune your system and find out which value works best. Don't try 
to reach 0% overhead, it is impossible. An overhead of 30-100% is quite good and 
prevents a premature exit from this mode.

 

*hbase.lru.cache.heavy.eviction.overhead.coefficient* - sets how fast we want to 
get the result. If we know our reads will be heavy for a long time, we don't 
have to wait and can increase the coefficient to get good performance sooner. 
But if we aren't sure, we can change it slowly, which also helps prevent a 
premature exit from this mode. So, a higher coefficient gives better performance 
when heavy reading is stable, while a lower coefficient adapts better when the 
read pattern keeps changing.

For example, if we set the coefficient to 0.01, the overhead (see above) is 
multiplied by 0.01 and the result is the number of percentage points by which 
caching of blocks is reduced. For example, if the overhead is 300% and the 
coefficient is 0.01, the percentage of cached blocks is reduced by 3%.

Similar logic applies when the overhead is negative (overshooting). Maybe it is 
just a short-term fluctuation, so we try to stay in this mode; this helps avoid 
a premature exit during short-term fluctuations. The backpressure has simple 
logic: the more overshooting, the more blocks are cached.
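
Putting the pieces together, here is a simplified sketch of the adjustment step 
described above (illustrative only; the field name, clamping, and rounding are 
my assumptions, not the code in the patch):

{code:java}
public final class CachingPercentAdjustmentSketch {

  private int cacheDataBlockPercent = 100; // start by caching 100% of data blocks

  /**
   * Called after each eviction period. A positive overhead shrinks the caching percent,
   * a negative overhead (overshooting) grows it back: more overshooting, more caching.
   */
  void adjust(long freedMb, long limitMb, double coefficient) {
    long overheadPercent = freedMb * 100 / limitMb - 100;
    int change = (int) (overheadPercent * coefficient); // e.g. 300% * 0.01 = 3
    cacheDataBlockPercent -= change;                    // subtracting a negative value grows it
    // Clamp to a sane range (an assumption for this sketch).
    cacheDataBlockPercent = Math.max(1, Math.min(100, cacheDataBlockPercent));
  }
}
{code}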

 

!image-2020-06-08-18-35-48-366.png!

 

Finally, how reducing the percentage of cached blocks works. Imagine we have a 
very small cache, where only 1 block fits, and we are trying to read 3 blocks 
with offsets:

124

198

223

Without the feature, or when *hbase.lru.cache.heavy.eviction.count.limit* = 
2147483647, we will put the blocks:

12

[GitHub] [hbase] Apache-HBase commented on pull request #1881: HBASE-24529 hbase.rs.evictblocksonclose is not honored when removing …

2020-06-09 Thread GitBox


Apache-HBase commented on pull request #1881:
URL: https://github.com/apache/hbase/pull/1881#issuecomment-641733406


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 19s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 35s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m 23s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 53s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  14m 24s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   2m 35s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 14s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  40m 52s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1881/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1881 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti 
checkstyle |
   | uname | Linux 400da6607bc7 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 7b396e9b8c |
   | Max. process+thread count | 95 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1881/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase-native-client] bharathv commented on a change in pull request #3: HBASE-24400: Download folly, wangle, zookeeper, and protobuf

2020-06-09 Thread GitBox


bharathv commented on a change in pull request #3:
URL: https://github.com/apache/hbase-native-client/pull/3#discussion_r437867365



##
File path: CMakeLists.txt
##
@@ -162,12 +284,29 @@ add_custom_target(
 linter
 COMMAND ${CMAKE_SOURCE_DIR}/bin/cpplint.sh)
 # Copy the version.h file in before doing anything
-add_custom_target (
-   copy_version_h
-COMMAND "${CMAKE_CURRENT_SOURCE_DIR}/bin/copy-version.sh"
-)
-add_dependencies(hbaseclient-static copy_version_h)
-add_dependencies(hbaseclient-shared copy_version_h)
+if (NOT BUILD_HBASE)
+   add_custom_target (
+   copy_version_h
+   COMMAND "${CMAKE_CURRENT_SOURCE_DIR}/bin/copy-version.sh"
+   )
+   
+   add_dependencies(hbaseclient-static copy_version_h)
+   add_dependencies(hbaseclient-shared copy_version_h)
+
+endif(NOT BUILD_HBASE)

Review comment:
   Fair point, agree!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase-native-client] bharathv commented on a change in pull request #3: HBASE-24400: Download folly, wangle, zookeeper, and protobuf

2020-06-09 Thread GitBox


bharathv commented on a change in pull request #3:
URL: https://github.com/apache/hbase-native-client/pull/3#discussion_r437833597



##
File path: .gitignore
##
@@ -33,7 +33,7 @@ simple-client
 # CMake temporary files
 CMakeCache.txt
 CMakeFiles
-*.cmake
+#*.cmake

Review comment:
   This should be undone?

##
File path: CMakeLists.txt
##
@@ -25,15 +25,119 @@ set(PROJECT_VERSION_PATCH 0)
 set(BUILD_SHARED_LIBS ON)
 ## set our cmake module path
 list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
+include(CMakeDependentOption)
+include(CheckIncludeFile)
+include(ExternalProject)
+include(DownloadProject)
+include(ExecuteMaven)
+include(CheckCXXCompilerFlag)
+
+
+option(DOWNLOAD_DEPENDENCIES "Downloads and builds all dependencies locally " 
OFF)
+option(HBASE_TARGET_TAG "HBase tag to be used if HBASE_HOME is not set" 
"e5345b3a7c32c6a80394319c17540b84c8fe66ba")

Review comment:
   nit: use a human readable tag?

##
File path: CMakeLists.txt
##
@@ -25,15 +25,108 @@ set(PROJECT_VERSION_PATCH 0)
 set(BUILD_SHARED_LIBS ON)
 ## set our cmake module path
 list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
+include(CMakeDependentOption)
+include(CheckIncludeFile)
+include(ExternalProject)
+include(DownloadProject)
+include(CheckCXXCompilerFlag)
+
+
+option(BUILD_LOCAL_DEPENDENCIES "Downloads and builds all dependencies locally 
" OFF)
+option(HBASE_TARGET_TAG "Downloads and builds all dependencies locally" 
"e5345b3a7c32c6a80394319c17540b84c8fe66ba")
+option(BUILD_HBASE  "Builds Hbase " OFF)
+
+
+
+# Includes
+
+   
+if (WIN32)
+   set(BYPRODUCT_SUFFIX ".lib" CACHE STRING "" FORCE)
+   set(BYPRODUCT_SHARED_SUFFIX ".lib" CACHE STRING "" FORCE)
+   set(BYPRODUCT_PREFIX "" CACHE STRING "" FORCE)
+   set(BUILD_ARGS " -GVisual Studio 15 2017")
+else()
+   set(BYPRODUCT_PREFIX "lib" CACHE STRING "" FORCE)
+   set(BYPRODUCT_SHARED_SUFFIX ".so" CACHE STRING "" FORCE)
+   set(BYPRODUCT_SUFFIX ".a" CACHE STRING "" FORCE)
+endif()
+   
+
+
 ## include the Protobuf generation code
 include(ProtobufGen)
+include(DownloadFolly)
+include(DownloadWangle)
+include(DownloadZookeeper)
+
+if (BUILD_LOCAL_DEPENDENCIES)
+   ## we want to find the system protoc
+   download_project(PROJ Protobuf
+IS_AUTOGEN
+GIT_REPOSITORY 
"https://github.com/protocolbuffers/protobuf.git";
+GIT_TAG "3.5.1.1")
+
+   set(PROTOBUF_DIR "${Protobuf_BINARY_DIR}" CACHE STRING "" FORCE)
+   
+   add_library(Protobuf STATIC IMPORTED)
+   set_target_properties(Protobuf PROPERTIES IMPORTED_LOCATION 
"${Protobuf_BINARY_DIR}/lib/libprotobuf.a" )
+   set(PROTOBUF_LIBS "${Protobuf_BINARY_DIR}/lib/libprotobuf.a" 
"${Protobuf_BINARY_DIR}/lib/libprotoc.a" CACHE STRING "" FORCE)
+   set(PROTOBUF_INCLUDE_DIRS "${Protobuf_BINARY_DIR}/include" CACHE STRING 
"" FORCE)
+   add_dependencies(Protobuf Protobuf-download)
+   set(PROTOBUF_FOUND TRUE CACHE STRING "" FORCE)
+   
+   set(PROTOBUF_PROTOC_EXECUTABLE "${Protobuf_BINARY_DIR}/bin/protoc" 
CACHE STRING "" FORCE)
+   ## Add CMAKE_MODULE_PATHS
+   
+   list(APPEND CMAKE_MODULE_PATH 
"${CMAKE_CURRENT_SOURCE_DIR}/cmake/zookeeper/local")
+   list(APPEND CMAKE_MODULE_PATH 
"${CMAKE_CURRENT_SOURCE_DIR}/cmake/protobuf/local")
+   list(APPEND CMAKE_MODULE_PATH 
"${CMAKE_CURRENT_SOURCE_DIR}/cmake/folly/local")
+   list(APPEND CMAKE_MODULE_PATH 
"${CMAKE_CURRENT_SOURCE_DIR}/cmake/wangle/local")
+
+   ## Build Apache HBase components that are necessary for this project
+
+   if( BUILD_HBASE )
+   ## Download Apache HBase, and build hbase-common so that we can 
have a targeted build of version.h
+   download_project(PROJ apachehbase
+   IS_MAVEN
+   MAVEN_DIR "hbase-common"
+   GIT_REPOSITORY "https://github.com/apache/hbase.git";
+   GIT_TAG "${HBASE_TARGET_TAG}")
+
+   
include_directories("${CMAKE_CURRENT_BINARY_DIR}/apachehbase-src/hbase-common/target/generated-sources/native/")
+   endif(BUILD_HBASE)
+else()
+   list(APPEND CMAKE_MODULE_PATH 
"${CMAKE_CURRENT_SOURCE_DIR}/cmake/zookeeper/system")
+   list(APPEND CMAKE_MODULE_PATH 
"${CMAKE_CURRENT_SOURCE_DIR}/cmake/folly/system")
+   list(APPEND CMAKE_MODULE_PATH 
"${CMAKE_CURRENT_SOURCE_DIR}/cmake/wangle/system")
+endif(BUILD_LOCAL_DEPENDENCIES)
+
+
+## Validate that we have C++ 14 support
+
+
+set(CMAKE_CXX_STANDARD 14)
+set(CMAKE_CXX_STANDARD_REQUIRED ON)
+set(CMAKE_CXX_EXTENSIONS OFF)
+
+
+CHECK_CXX_COMPILER_FLAG("-std=c++14" COMPILER_SUPPORTS_CXX14)
+CHECK_CXX_COMPILER_FLAG("-std=c++0x" COMPILER_SUPPORTS_CXX0X)
 if(COMPILER_SUPPORTS_CXX14)
 set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=

[GitHub] [hbase] brfrn169 opened a new pull request #1881: HBASE-24529 hbase.rs.evictblocksonclose is not honored when removing …

2020-06-09 Thread GitBox


brfrn169 opened a new pull request #1881:
URL: https://github.com/apache/hbase/pull/1881


   …compacted files and closing the storefiles



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-24529) hbase.rs.evictblocksonclose is not honored when removing compacted files and closing the storefiles

2020-06-09 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HBASE-24529:
-
Description: 
Currently, when removing compacted files and closing the storefiles, RS always 
does evict block caches for the store files. It should honor 
hbase.rs.evictblocksonclose:
https://github.com/apache/hbase/blob/7b396e9b8ca93361de6a6c4bc8a40442db77c4da/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java#L2744
https://github.com/apache/hbase/blob/7b396e9b8ca93361de6a6c4bc8a40442db77c4da/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java#L625

  was:
Currently, when removing compacted files and closing the storefiles, RS always 
does evict block caches for the store files. It should honor 
hbase.rs.evictblocksonclose:
https://github.com/apache/hbase/blob/7b396e9b8ca93361de6a6c4bc8a40442db77c4da/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java#L2744



> hbase.rs.evictblocksonclose is not honored when removing compacted files and 
> closing the storefiles
> ---
>
> Key: HBASE-24529
> URL: https://issues.apache.org/jira/browse/HBASE-24529
> Project: HBase
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
>
> Currently, when removing compacted files and closing the storefiles, RS 
> always does evict block caches for the store files. It should honor 
> hbase.rs.evictblocksonclose:
> https://github.com/apache/hbase/blob/7b396e9b8ca93361de6a6c4bc8a40442db77c4da/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java#L2744
> https://github.com/apache/hbase/blob/7b396e9b8ca93361de6a6c4bc8a40442db77c4da/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java#L625



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-24529) hbase.rs.evictblocksonclose is not honored when removing compacted files and closing the storefiles

2020-06-09 Thread Toshihiro Suzuki (Jira)
Toshihiro Suzuki created HBASE-24529:


 Summary: hbase.rs.evictblocksonclose is not honored when removing 
compacted files and closing the storefiles
 Key: HBASE-24529
 URL: https://issues.apache.org/jira/browse/HBASE-24529
 Project: HBase
  Issue Type: Bug
Reporter: Toshihiro Suzuki
Assignee: Toshihiro Suzuki


Currently, when removing compacted files and closing the storefiles, RS always 
does evict block caches for the store files. It should honor 
hbase.rs.evictblocksonclose:
https://github.com/apache/hbase/blob/7b396e9b8ca93361de6a6c4bc8a40442db77c4da/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java#L2744
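
For illustration, a minimal sketch of consulting the setting (the property name 
is the one from this issue and CacheConfig#shouldEvictOnClose is the existing 
accessor; the surrounding wiring here is illustrative, not the actual fix):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;

public final class EvictOnCloseSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setBoolean("hbase.rs.evictblocksonclose", false);
    CacheConfig cacheConf = new CacheConfig(conf);
    // Code paths that close store files (e.g. when removing compacted files) would
    // consult the configured value instead of always evicting:
    boolean evictOnClose = cacheConf.shouldEvictOnClose(); // false here
    System.out.println("evict blocks on close: " + evictOnClose);
  }
}
{code}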




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] saintstack commented on a change in pull request #1866: HBASE-24517 AssignmentManager.start should add meta region to ServerS…

2020-06-09 Thread GitBox


saintstack commented on a change in pull request #1866:
URL: https://github.com/apache/hbase/pull/1866#discussion_r436981312



##
File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/assignment/TestAssignmentManagerLoadMetaRegionState.java
##
@@ -0,0 +1,76 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.assignment;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.MediumTests;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+@Category({ MasterTests.class, MediumTests.class })
+public class TestAssignmentManagerLoadMetaRegionState {
+
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+    HBaseClassTestRule.forClass(TestAssignmentManagerLoadMetaRegionState.class);
+
+  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();
+
+  @BeforeClass
+  public static void setUp() throws Exception {
+    UTIL.startMiniCluster(1);
+  }
+
+  @AfterClass
+  public static void tearDown() throws IOException {
+    UTIL.shutdownMiniCluster();
+  }
+
+  @Test
+  public void testRestart() throws InterruptedException, IOException {
+    ServerName sn = UTIL.getMiniHBaseCluster().getRegionServer(0).getServerName();
+    AssignmentManager am = UTIL.getMiniHBaseCluster().getMaster().getAssignmentManager();
+    Set<RegionInfo> regions = new HashSet<>(am.getRegionsOnServer(sn));
+
+    UTIL.getMiniHBaseCluster().stopMaster(0).join();
+    HMaster newMaster = UTIL.getMiniHBaseCluster().startMaster().getMaster();
+    UTIL.waitFor(3, () -> newMaster.isInitialized());
+
+    am = UTIL.getMiniHBaseCluster().getMaster().getAssignmentManager();
+    List<RegionInfo> newRegions = am.getRegionsOnServer(sn);

Review comment:
   How does the lock interfere here?

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
##
@@ -231,17 +231,15 @@ public void start() throws IOException, KeeperException {
       RegionState regionState = MetaTableLocator.getMetaRegionState(zkw);
       RegionStateNode regionNode =
         regionStates.getOrCreateRegionStateNode(RegionInfoBuilder.FIRST_META_REGIONINFO);
-      regionNode.lock();

Review comment:
   We need a note here? Normally, we want to lock RegionNode because we do 
not want concurrent modifications happening. I see this is happening early in 
startup so should be safe. I do notice that the RpcServer is up by the time we 
get to here but Master is not active yet so we can't get requests at this point?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] bharathv commented on a change in pull request #1879: HBASE-24525 [branch-1] Support ZooKeeper 3.6.0+

2020-06-09 Thread GitBox


bharathv commented on a change in pull request #1879:
URL: https://github.com/apache/hbase/pull/1879#discussion_r437846013



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperMainServer.java
##
@@ -66,8 +66,14 @@ public HACK_UNTIL_ZOOKEEPER_1897_ZooKeeperMain(String[] args)
      * @throws IOException
      * @throws InterruptedException
      */
-    void runCmdLine() throws KeeperException, IOException, InterruptedException {
-      processCmd(this.cl);
+    void runCmdLine() throws IOException, InterruptedException {
+      try {
+        processCmd(this.cl);
+      } catch (IOException | InterruptedException e) {
+        throw e;
+      } catch (Exception e) {

Review comment:
   Looks like the root cause is this: 
https://issues.apache.org/jira/browse/ZOOKEEPER-3760. I was looking at 
branch-3.6, but the release bits of 3.6.0 didn't include this patch.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Reopened] (HBASE-24517) AssignmentManager.start should add meta region to ServerStateNode

2020-06-09 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reopened HBASE-24517:
---

Reopen for applying addendum.

> AssignmentManager.start should add meta region to ServerStateNode
> -
>
> Key: HBASE-24517
> URL: https://issues.apache.org/jira/browse/HBASE-24517
> Project: HBase
>  Issue Type: Bug
>  Components: amv2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.6
>
>
> In AssignmentManager.start, we will load the meta region state and location 
> from zk and create the RegionStateNode, but we forget to call 
> regionStates.addRegionToServer to add the region to the region server.
> Found this when implementing HBASE-24390. As in HBASE-24390, we will remove 
> RegionInfoBuilder.FIRST_META_REGIONINFO so in SCP, we need to use the 
> getRegionsOnServer instead of RegionInfoBuilder.FIRST_META_REGIONINFO when 
> assigning meta, so the bug becomes a real problem.
> Though it is not a big problem for SCP on current 2.x and master branches, 
> it is a high-risk bug. For example, in AssignmentManager.submitServerCrash, 
> we now use the RegionStateNode of meta regions to determine whether a given 
> region server carries meta regions, but it is also valid to test through the 
> ServerStateNode's region list. If we later change this method to use 
> ServerStateNode, it will cause a very serious data loss bug.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] zhaoyim commented on a change in pull request #746: HBASE-23195 FSDataInputStreamWrapper unbuffer can NOT invoke the clas…

2020-06-09 Thread GitBox


zhaoyim commented on a change in pull request #746:
URL: https://github.com/apache/hbase/pull/746#discussion_r437817573



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/FSDataInputStreamWrapper.java
##
@@ -270,39 +267,23 @@ public void unbuffer() {
       if (this.instanceOfCanUnbuffer == null) {
         // To ensure we compute whether the stream is instance of CanUnbuffer only once.
         this.instanceOfCanUnbuffer = false;
-        Class[] streamInterfaces = streamClass.getInterfaces();
-        for (Class c : streamInterfaces) {
-          if (c.getCanonicalName().toString().equals("org.apache.hadoop.fs.CanUnbuffer")) {
-            try {
-              this.unbuffer = streamClass.getDeclaredMethod("unbuffer");
-            } catch (NoSuchMethodException | SecurityException e) {
-              if (isLogTraceEnabled) {
-                LOG.trace("Failed to find 'unbuffer' method in class " + streamClass
-                    + " . So there may be a TCP socket connection "
-                    + "left open in CLOSE_WAIT state.", e);
-              }
-              return;
-            }
-            this.instanceOfCanUnbuffer = true;
-            break;
-          }
+        if (wrappedStream instanceof CanUnbuffer) {
+          this.unbuffer = (CanUnbuffer) wrappedStream;
+          this.instanceOfCanUnbuffer = true;
         }
       }
       if (this.instanceOfCanUnbuffer) {

Review comment:
   @joshelser  If `stream` is null, the `unbuffer()` method does nothing: all 
the operations are inside the `if (stream != null)` check, and 
`instanceOfCanUnbuffer` is only used in `unbuffer()`, so it will NOT hit the 
NPE.
   
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-23202) ExportSnapshot (import) will fail if copying files to root directory takes longer than cleaner TTL

2020-06-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129931#comment-17129931
 ] 

Hudson commented on HBASE-23202:


Results for branch branch-2
[build #2699 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> ExportSnapshot (import) will fail if copying files to root directory takes 
> longer than cleaner TTL
> --
>
> Key: HBASE-23202
> URL: https://issues.apache.org/jira/browse/HBASE-23202
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 3.0.0-alpha-1, 1.5.0, 2.2.1, 1.4.11, 2.1.7
>Reporter: Zach York
>Assignee: Guangxu Cheng
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
>
> HBASE-17330 removed the checking of the snapshot .tmp directory when 
> determining which files are candidates for deletes. It appears that in the 
> latest branches, this isn't an issue for taking a snapshot as it checks 
> whether a snapshot is in progress via the SnapshotManager.
> However, when using the ExportSnapshot tool to import a snapshot into a 
> cluster, it will first copy the snapshot manifest into 
> /.snapshot/.tmp/ [1], copies the files, and then renames the 
> snapshot manifest to the final snapshot directory. If the copyFiles job takes 
> longer than the cleaner TTL, the ExportSnapshot job will fail because HFiles 
> will get deleted before the snapshot is committed to the final directory. 
> The ExportSnapshot tool already has a functionality to skipTmp and write the 
> manifest directly to the final location. However, this has unintended 
> consequences such as the snapshot appearing to the user before it is usable. 
> So it looks like we will have to bring back the tmp directory check to avoid 
> this situation.
> [1] 
> https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java#L1029



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24340) PerformanceEvaluation options should not mandate any specific order

2020-06-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129932#comment-17129932
 ] 

Hudson commented on HBASE-24340:


Results for branch branch-2
[build #2699 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> PerformanceEvaluation options should not mandate any specific order
> ---
>
> Key: HBASE-24340
> URL: https://issues.apache.org/jira/browse/HBASE-24340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Anoop Sam John
>Assignee: Sambit Mohapatra
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.4.0
>
>
> During parsing of options, there are some validations. One such is checking 
> whether autoFlush = false AND multiPut > 0. This validation code mandates an 
> order: autoFlush=true must be specified before multiPut=x in the PE command.
> {code}
> final String multiPut = "--multiPut=";
>   if (cmd.startsWith(multiPut)) {
> opts.multiPut = Integer.parseInt(cmd.substring(multiPut.length()));
> if (!opts.autoFlush && opts.multiPut > 0) {
>   throw new IllegalArgumentException("autoFlush must be true when 
> multiPut is more than 0");
> }
> continue;
>   }
> {code}
> The 'autoFlush' default value is false, so if multiPut is specified prior to 
> autoFlush in the PE command, we end up throwing IllegalArgumentException.
> The other validations don't seem to have this issue. Still, it would be better 
> to move all the validations together into a private method and call it once 
> parsing is over.
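
A minimal sketch of that suggested refactor (names are illustrative; the real 
options live in PerformanceEvaluation's TestOptions):

{code:java}
// Sketch: parse all options first, then validate once, so option order no longer matters.
final class OptionParsingSketch {
  boolean autoFlush = false; // PE default
  int multiPut = 0;

  void parse(String[] args) {
    for (String cmd : args) {
      if (cmd.startsWith("--autoFlush=")) {
        autoFlush = Boolean.parseBoolean(cmd.substring("--autoFlush=".length()));
      } else if (cmd.startsWith("--multiPut=")) {
        multiPut = Integer.parseInt(cmd.substring("--multiPut=".length()));
      }
    }
    validate(); // single call after the whole command line has been parsed
  }

  private void validate() {
    if (!autoFlush && multiPut > 0) {
      throw new IllegalArgumentException("autoFlush must be true when multiPut is more than 0");
    }
  }
}
{code}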



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24441) CacheConfig details logged at Store open is not really useful

2020-06-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129934#comment-17129934
 ] 

Hudson commented on HBASE-24441:


Results for branch branch-2
[build #2699 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> CacheConfig details logged at Store open is not really useful
> -
>
> Key: HBASE-24441
> URL: https://issues.apache.org/jira/browse/HBASE-24441
> Project: HBase
>  Issue Type: Improvement
>  Components: logging, regionserver
>Affects Versions: 3.0.0-alpha-1
>Reporter: Anoop Sam John
>Assignee: song XinCun
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0
>
>
> The CacheConfig constructor logs 'this' object at INFO level. This log comes 
> during Store open (as the CacheConfig instance for that store is created). 
> Since the log is emitted from CacheConfig only, we don't get to know which 
> region:store it is for, so it is not a really useful log.
> {code}
> blockCache=org.apache.hadoop.hbase.io.hfile.CombinedBlockCache@7bc02941, 
> cacheDataOnRead=true, cacheDataOnWrite=true, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> {code}
> This log also keeps coming during every compaction, because during compaction 
> we create a new CacheConfig based on the HStore-level CacheConfig object. 
> We can avoid emitting this log on every compaction.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24367) ScheduledChore log elapsed timespan in a human-friendly format

2020-06-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129935#comment-17129935
 ] 

Hudson commented on HBASE-24367:


Results for branch branch-2
[build #2699 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> ScheduledChore log elapsed timespan in a human-friendly format
> --
>
> Key: HBASE-24367
> URL: https://issues.apache.org/jira/browse/HBASE-24367
> Project: HBase
>  Issue Type: Task
>  Components: master, regionserver
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Andrew Kyle Purtell
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.6
>
>
> I noticed this in a log line,
> {noformat}
> 2020-04-23 18:31:14,183 INFO org.apache.hadoop.hbase.ScheduledChore: 
> host-a.example.com,16000,1587577999888-ClusterStatusChore average execution 
> time: 68488258 ns.
> {noformat}
> I'm not sure if there's a case when elapsed time in nanoseconds is meaningful 
> for these background chores, but we could do a little work before printing 
> the number and time unit to truncate precision down to something a little 
> more intuitive for operators. This number purports to be an average, so a 
> high level of precision isn't necessarily meaningful.
> Separately, or while we're here, if we think an operator really cares about 
> the performance of this chore, we should print a histogram of elapsed times, 
> rather than an opaque average.
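
A small sketch of the kind of precision truncation meant here (plain Java; how 
ScheduledChore actually formats its message is not shown):

{code:java}
import java.util.concurrent.TimeUnit;

public final class HumanFriendlyElapsed {

  /** Render an averaged elapsed time with a coarser, operator-friendly unit. */
  static String humanReadable(long nanos) {
    if (nanos >= TimeUnit.SECONDS.toNanos(1)) {
      return TimeUnit.NANOSECONDS.toSeconds(nanos) + " sec";
    }
    if (nanos >= TimeUnit.MILLISECONDS.toNanos(1)) {
      return TimeUnit.NANOSECONDS.toMillis(nanos) + " ms";
    }
    return nanos + " ns";
  }

  public static void main(String[] args) {
    System.out.println(humanReadable(68488258L)); // "68 ms" instead of "68488258 ns"
  }
}
{code}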



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24468) Add region info when log meessages in HStore.

2020-06-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129933#comment-17129933
 ] 

Hudson commented on HBASE-24468:


Results for branch branch-2
[build #2699 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2699/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Add region info when log meessages in HStore.
> -
>
> Key: HBASE-24468
> URL: https://issues.apache.org/jira/browse/HBASE-24468
> Project: HBase
>  Issue Type: Improvement
>  Components: logging, regionserver
>Affects Versions: 3.0.0-alpha-1
>Reporter: song XinCun
>Assignee: song XinCun
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0
>
>
> Some log messages do not include region info when logging; we need to add it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-24184) Backport HBASE-23896 to branch-1

2020-06-09 Thread tianhang tang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tianhang tang updated HBASE-24184:
--
Summary: Backport HBASE-23896 to branch-1  (was: listSnapshots returns 
empty when just use simple acl but not use authentication)

> Backport HBASE-23896 to branch-1
> 
>
> Key: HBASE-24184
> URL: https://issues.apache.org/jira/browse/HBASE-24184
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: tianhang tang
>Assignee: tianhang tang
>Priority: Minor
>
> For the owner of a snapshot (not a global admin user), list_snapshots currently 
> returns empty if I just use simple ACLs for authorization but do not use 
> authentication.
> The code in AccessController.preListSnapshot:
> {code:java}
> if (SnapshotDescriptionUtils.isSnapshotOwner(snapshot, user)) {
> // list it, if user is the owner of snapshot
> AuthResult result = AuthResult.allow("listSnapshot " + snapshot.getName(),
> "Snapshot owner check allowed", user, null, null, null);
> accessChecker.logResult(result);
> }{code}
> And SnapshotManager.takeSnapshotInternal:
> {code:java}
> if (User.isHBaseSecurityEnabled(master.getConfiguration()) && user != null) {
>   builder.setOwner(user.getShortName());
> }
> {code}
> User.isHBaseSecurityEnabled:
> {code:java}
> public static boolean isHBaseSecurityEnabled(Configuration conf) {
>   return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY));
> }
> {code}
> So I think the owner set by setOwner is used for authorization, not 
> authentication; SnapshotManager should not set the owner only when 
> hbase.security.authentication = kerberos, because that causes list_snapshots 
> to return empty when I just use simple ACLs.
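
A minimal sketch of the suggested change in SnapshotManager.takeSnapshotInternal 
(an assumption about the eventual patch, mirroring the fragment quoted above, 
not committed code):

{code:java}
// Record the snapshot owner whenever a user is present, instead of only when
// hbase.security.authentication = kerberos.
if (user != null) {
  builder.setOwner(user.getShortName());
}
{code}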



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24184) Backport HBASE-23896 to branch-1

2020-06-09 Thread tianhang tang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129930#comment-17129930
 ] 

tianhang tang commented on HBASE-24184:
---

[~zghao] done : )

> Backport HBASE-23896 to branch-1
> 
>
> Key: HBASE-24184
> URL: https://issues.apache.org/jira/browse/HBASE-24184
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: tianhang tang
>Assignee: tianhang tang
>Priority: Minor
>
> For the owner of a snapshot (not a global admin user), list_snapshots currently 
> returns empty if I just use simple ACLs for authorization but do not use 
> authentication.
> The code in AccessController.preListSnapshot:
> {code:java}
> if (SnapshotDescriptionUtils.isSnapshotOwner(snapshot, user)) {
> // list it, if user is the owner of snapshot
> AuthResult result = AuthResult.allow("listSnapshot " + snapshot.getName(),
> "Snapshot owner check allowed", user, null, null, null);
> accessChecker.logResult(result);
> }{code}
> And SnapshotManager.takeSnapshotInternal:
> {code:java}
> if (User.isHBaseSecurityEnabled(master.getConfiguration()) && user != null) {
>   builder.setOwner(user.getShortName());
> }
> {code}
> User.isHBaseSecurityEnabled:
> {code:java}
> public static boolean isHBaseSecurityEnabled(Configuration conf) {
>   return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY));
> }
> {code}
> So I think the owner set by setOwner is used for authorization, not 
> authentication; SnapshotManager should not set the owner only when 
> hbase.security.authentication = kerberos, because that causes list_snapshots 
> to return empty when I just use simple ACLs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #1670: HBASE-24337 Backport HBASE-23968 to branch-2

2020-06-09 Thread GitBox


Apache-HBase commented on pull request #1670:
URL: https://github.com/apache/hbase/pull/1670#issuecomment-641667381


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 48s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  6s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 37s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   5m  6s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 11s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m  3s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 134m 41s |  hbase-server in the patch passed.  
|
   |  |   | 158m 35s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1670/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1670 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux f6160410eb93 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / b67f896954 |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1670/1/testReport/
 |
   | Max. process+thread count | 3487 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1670/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-24517) AssignmentManager.start should add meta region to ServerStateNode

2020-06-09 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129918#comment-17129918
 ] 

Duo Zhang commented on HBASE-24517:
---

{quote}
You pushed but there was an outstanding question on the PR and a nit that two 
folks suggested addressing.
{quote}

Let me check. I just saw two approvals and no change requests...

> AssignmentManager.start should add meta region to ServerStateNode
> -
>
> Key: HBASE-24517
> URL: https://issues.apache.org/jira/browse/HBASE-24517
> Project: HBase
>  Issue Type: Bug
>  Components: amv2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.6
>
>
> In AssignmentManager.start, we will load the meta region state and location 
> from zk and create the RegionStateNode, but we forget to call 
> regionStates.addRegionToServer to add the region to the region server.
> Found this when implementing HBASE-24390. In HBASE-24390 we will remove 
> RegionInfoBuilder.FIRST_META_REGIONINFO, so in SCP we need to use 
> getRegionsOnServer instead of RegionInfoBuilder.FIRST_META_REGIONINFO when 
> assigning meta, and then the bug becomes a real problem.
> Though it is not a big problem for SCP on the current 2.x and master branches, 
> it is a high-risk bug. For example, in AssignmentManager.submitServerCrash we 
> currently use the RegionStateNode of the meta regions to determine whether the 
> given region server carries meta regions. But it would also be valid to test 
> this via the ServerStateNode's region list. If we later change that method to 
> use ServerStateNode, it would cause a very serious data-loss bug.
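Purely as an illustration of the missing step described above (method names here are assumptions and may not match the actual AssignmentManager/RegionStates API exactly; this is not the committed fix):

{code:java}
// Hypothetical sketch: after loading meta's state and location from ZK and
// creating its RegionStateNode, also register the region on the hosting
// server's ServerStateNode so that getRegionsOnServer(...) can see it.
RegionStateNode metaNode = regionStates.getOrCreateRegionStateNode(metaRegionInfo);
metaNode.setRegionLocation(metaServerName);
regionStates.addRegionToServer(metaNode);  // the step the description says is missing
{code}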



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #1879: HBASE-24525 [branch-1] Support ZooKeeper 3.6.0+

2020-06-09 Thread GitBox


Apache-HBase commented on pull request #1879:
URL: https://github.com/apache/hbase/pull/1879#issuecomment-641658595


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   6m 54s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-1 Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 24s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   8m  0s |  branch-1 passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  branch-1 passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  compile  |   1m  8s |  branch-1 passed with JDK 
v1.7.0_262  |
   | +1 :green_heart: |  checkstyle  |   2m 13s |  branch-1 passed  |
   | +1 :green_heart: |  shadedjars  |   3m  1s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  branch-1 passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  branch-1 passed with JDK 
v1.7.0_262  |
   | +0 :ok: |  spotbugs  |   2m 41s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 16s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 54s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  the patch passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  javac  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  the patch passed with JDK 
v1.7.0_262  |
   | +1 :green_heart: |  javac  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  hbase-client: The patch 
generated 0 new + 62 unchanged - 1 fixed = 62 total (was 63)  |
   | +1 :green_heart: |  checkstyle  |   1m 33s |  The patch passed checkstyle 
in hbase-server  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 2 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedjars  |   2m 59s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   4m 35s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2.  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  the patch passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  the patch passed with JDK 
v1.7.0_262  |
   | +1 :green_heart: |  findbugs  |   4m 17s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 40s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 129m 22s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 57s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 188m  7s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1879/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1879 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 8f82409227b7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1879/out/precommit/personality/provided.sh
 |
   | git revision | branch-1 / b6598cc |
   | Default Java | 1.7.0_262 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_252 
/usr/lib/jvm/zulu-7-amd64:1.7.0_262 |
   | whitespace | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1879/2/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1879/2/testReport/
 |
   | Max. process+thread count | 4053 (vs. ulimit of 1) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1879/2/console |
   | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to th

[jira] [Commented] (HBASE-24184) listSnapshots returns empty when just use simple acl but not use authentication

2020-06-09 Thread Guanghao Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129914#comment-17129914
 ] 

Guanghao Zhang commented on HBASE-24184:


How about changing this to "Backport HBASE-23896 to branch-1"?

> listSnapshots returns empty when just use simple acl but not use 
> authentication
> ---
>
> Key: HBASE-24184
> URL: https://issues.apache.org/jira/browse/HBASE-24184
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: tianhang tang
>Assignee: tianhang tang
>Priority: Minor
>
> For the owner of snapshots (not a global admin user), list_snapshots currently 
> returns empty if I just use simple ACLs for authorization but do not use 
> authentication.
> The code in AccessController.preListSnapshot:
> {code:java}
> if (SnapshotDescriptionUtils.isSnapshotOwner(snapshot, user)) {
>   // list it, if user is the owner of snapshot
>   AuthResult result = AuthResult.allow("listSnapshot " + snapshot.getName(),
>     "Snapshot owner check allowed", user, null, null, null);
>   accessChecker.logResult(result);
> }{code}
> And SnapshotManager.takeSnapshotInternal:
> {code:java}
> if (User.isHBaseSecurityEnabled(master.getConfiguration()) && user != null) {
>   builder.setOwner(user.getShortName());
> }
> {code}
> User.isHBaseSecurityEnabled:
> {code:java}
> public static boolean isHBaseSecurityEnabled(Configuration conf) {
>   return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY));
> }
> {code}
> So I think the owner set via setOwner is used for authorization, not 
> authentication. SnapshotManager should not set the owner only when 
> hbase.security.authentication = kerberos, because that causes listSnapshots 
> to return empty when I just use simple ACLs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24403) FsDelegationToken should cache Token

2020-06-09 Thread Guanghao Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129913#comment-17129913
 ] 

Guanghao Zhang commented on HBASE-24403:


[~wuchang1989] Can you prepare a new patch for branch-2.x? It has conflicts when 
cherry-picked to branch-2. Thanks. 

> FsDelegationToken should cache Token
> 
>
> Key: HBASE-24403
> URL: https://issues.apache.org/jira/browse/HBASE-24403
> Project: HBase
>  Issue Type: Bug
>Reporter: wuchang
>Assignee: wuchang
>Priority: Major
> Attachments: 24403.patch
>
>
> When doing a bulk load, we find that FsDelegationToken acquires a NameNode 
> delegation token for every single file. The comment on 
> acquireDelegationToken() says it first tries to find a token in the cache, 
> but the newly requested token is never put into the cache, so the cache is 
> still empty for the following request.
> When there are many files to bulk load, these token requests put a heavy 
> load on the NameNode.
>  
> {code:java}
> public void acquireDelegationToken(final FileSystem fs)
>     throws IOException {
>   if (userProvider.isHadoopSecurityEnabled()) {
>     this.fs = fs;
>     userToken = userProvider.getCurrent().getToken("HDFS_DELEGATION_TOKEN",
>       fs.getCanonicalServiceName());
>     if (userToken == null) {
>       hasForwardedToken = false;
>       try {
>         userToken = fs.getDelegationToken(renewer);
>       } catch (NullPointerException npe) {
>         // we need to handle NullPointerException in case HADOOP-10009 is missing
>         LOG.error("Failed to get token for " + renewer);
>       }
>     } else {
>       hasForwardedToken = true;
>       LOG.info("Use the existing token: " + userToken);
>     }
>   }
> }{code}
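As a rough sketch of the caching idea described above (not the committed patch; whether User exposes an addToken helper with this exact signature is an assumption here):

{code:java}
userToken = fs.getDelegationToken(renewer);
if (userToken != null) {
  // Hypothetical: cache the newly acquired token with the current user's
  // credentials so the next getToken(...) lookup hits the cache instead of
  // issuing another request to the NameNode.
  userProvider.getCurrent().addToken(userToken);
}
{code}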



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-24403) FsDelegationToken should cache Token

2020-06-09 Thread Guanghao Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-24403:
---
Fix Version/s: 2.2.6
   2.3.0
   3.0.0-alpha-1

> FsDelegationToken should cache Token
> 
>
> Key: HBASE-24403
> URL: https://issues.apache.org/jira/browse/HBASE-24403
> Project: HBase
>  Issue Type: Bug
>Reporter: wuchang
>Assignee: wuchang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.6
>
> Attachments: 24403.patch
>
>
> When doing a bulk load, we find that FsDelegationToken acquires a NameNode 
> delegation token for every single file. The comment on 
> acquireDelegationToken() says it first tries to find a token in the cache, 
> but the newly requested token is never put into the cache, so the cache is 
> still empty for the following request.
> When there are many files to bulk load, these token requests put a heavy 
> load on the NameNode.
>  
> {code:java}
> public void acquireDelegationToken(final FileSystem fs)
>     throws IOException {
>   if (userProvider.isHadoopSecurityEnabled()) {
>     this.fs = fs;
>     userToken = userProvider.getCurrent().getToken("HDFS_DELEGATION_TOKEN",
>       fs.getCanonicalServiceName());
>     if (userToken == null) {
>       hasForwardedToken = false;
>       try {
>         userToken = fs.getDelegationToken(renewer);
>       } catch (NullPointerException npe) {
>         // we need to handle NullPointerException in case HADOOP-10009 is missing
>         LOG.error("Failed to get token for " + renewer);
>       }
>     } else {
>       hasForwardedToken = true;
>       LOG.info("Use the existing token: " + userToken);
>     }
>   }
> }{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] infraio merged pull request #1743: HBASE-24403 FsDelegationToken Should Cache Token After Acquired A New One

2020-06-09 Thread GitBox


infraio merged pull request #1743:
URL: https://github.com/apache/hbase/pull/1743


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (HBASE-24403) FsDelegationToken should cache Token

2020-06-09 Thread Guanghao Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang reassigned HBASE-24403:
--

Assignee: wuchang

> FsDelegationToken should cache Token
> 
>
> Key: HBASE-24403
> URL: https://issues.apache.org/jira/browse/HBASE-24403
> Project: HBase
>  Issue Type: Bug
>Reporter: wuchang
>Assignee: wuchang
>Priority: Major
> Attachments: 24403.patch
>
>
> When doing a bulk load, we find that FsDelegationToken acquires a NameNode 
> delegation token for every single file. The comment on 
> acquireDelegationToken() says it first tries to find a token in the cache, 
> but the newly requested token is never put into the cache, so the cache is 
> still empty for the following request.
> When there are many files to bulk load, these token requests put a heavy 
> load on the NameNode.
>  
> {code:java}
> public void acquireDelegationToken(final FileSystem fs)
>     throws IOException {
>   if (userProvider.isHadoopSecurityEnabled()) {
>     this.fs = fs;
>     userToken = userProvider.getCurrent().getToken("HDFS_DELEGATION_TOKEN",
>       fs.getCanonicalServiceName());
>     if (userToken == null) {
>       hasForwardedToken = false;
>       try {
>         userToken = fs.getDelegationToken(renewer);
>       } catch (NullPointerException npe) {
>         // we need to handle NullPointerException in case HADOOP-10009 is missing
>         LOG.error("Failed to get token for " + renewer);
>       }
>     } else {
>       hasForwardedToken = true;
>       LOG.info("Use the existing token: " + userToken);
>     }
>   }
> }{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] apurtell commented on a change in pull request #1879: HBASE-24525 [branch-1] Support ZooKeeper 3.6.0+

2020-06-09 Thread GitBox


apurtell commented on a change in pull request #1879:
URL: https://github.com/apache/hbase/pull/1879#discussion_r437788463



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperMainServer.java
##
@@ -66,8 +66,14 @@ public HACK_UNTIL_ZOOKEEPER_1897_ZooKeeperMain(String[] args)
  * @throws IOException
  * @throws InterruptedException
  */
-void runCmdLine() throws KeeperException, IOException, 
InterruptedException {
-  processCmd(this.cl);
+void runCmdLine() throws IOException, InterruptedException {
+  try {
+processCmd(this.cl);
+  } catch (IOException | InterruptedException e) {
+throw e;
+  } catch (Exception e) {

Review comment:
   [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.8.1:compile (default-compile) 
on project hbase-server: Compilation failure
   [ERROR] 
/Users/apurtell/src/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperMainServer.java:[70,17]
 unreported exception org.apache.zookeeper.cli.CliException; must be caught or 
declared to be thrown





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] apurtell commented on a change in pull request #1879: HBASE-24525 [branch-1] Support ZooKeeper 3.6.0+

2020-06-09 Thread GitBox


apurtell commented on a change in pull request #1879:
URL: https://github.com/apache/hbase/pull/1879#discussion_r437787274



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperMainServer.java
##
@@ -66,8 +66,14 @@ public HACK_UNTIL_ZOOKEEPER_1897_ZooKeeperMain(String[] args)
  * @throws IOException
  * @throws InterruptedException
  */
-void runCmdLine() throws KeeperException, IOException, 
InterruptedException {
-  processCmd(this.cl);
+void runCmdLine() throws IOException, InterruptedException {
+  try {
+processCmd(this.cl);
+  } catch (IOException | InterruptedException e) {
+throw e;
+  } catch (Exception e) {

Review comment:
   This is a port of internal work; let me remove this hunk and check again.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1880: HBASE-24144 Update docs from master

2020-06-09 Thread GitBox


Apache-HBase commented on pull request #1880:
URL: https://github.com/apache/hbase/pull/1880#issuecomment-641641239


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 20s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 41s |  branch-2 passed  |
   | +1 :green_heart: |  mvnsite  |  16m 53s |  branch-2 passed  |
   | +0 :ok: |  refguide  |   6m 53s |  branch has no errors when building the 
reference guide. See footer for rendered docs, which you should manually 
inspect.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 13s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  12m 18s |  the patch passed  |
   | -0 :warning: |  whitespace  |   0m  0s |  The patch has 49 line(s) that 
end in whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +0 :ok: |  refguide  |   7m  7s |  patch has no errors when building the 
reference guide. See footer for rendered docs, which you should manually 
inspect.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 14s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  54m 32s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1880/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1880 |
   | Optional Tests | dupname asflicense refguide mvnsite |
   | uname | Linux 3d5884f49987 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / b67f896954 |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1880/1/artifact/yetus-general-check/output/branch-site/book.html
 |
   | whitespace | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1880/1/artifact/yetus-general-check/output/whitespace-eol.txt
 |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1880/1/artifact/yetus-general-check/output/patch-site/book.html
 |
   | Max. process+thread count | 89 (vs. ulimit of 12500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1880/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (HBASE-24528) Improve balancer decision observability

2020-06-09 Thread Andrew Kyle Purtell (Jira)
Andrew Kyle Purtell created HBASE-24528:
---

 Summary: Improve balancer decision observability
 Key: HBASE-24528
 URL: https://issues.apache.org/jira/browse/HBASE-24528
 Project: HBase
  Issue Type: New Feature
  Components: Admin, Balancer, shell, UI
Reporter: Andrew Kyle Purtell


We provide detailed INFO and DEBUG level logging of balancer decision factors, 
outcome, and reassignment planning, as well as similarly detailed logging of 
the resulting assignment manager activity. However, an operator may need to 
perform online and interactive observation, debugging, or performance analysis 
of current balancer activity. Scraping and correlating the many log lines 
resulting from a balancer execution is labor intensive and has a lot of latency 
(order of ~minutes to acquire and index, order of ~minutes to correlate). 

The balancer should maintain a rolling window of history, e.g. the last 100 
region move plans, or last 1000 region move plans submitted to the assignment 
manager. This history should include decision factor details and weights and 
costs. The rsgroups balancer may be able to provide fairly simple decision 
factors, like for example "this table was reassigned to that regionserver 
group". The underlying or vanilla stochastic balancer on the other hand, after 
a walk over random assignment plans, will have considered a number of cost 
functions with various inputs (locality, load, etc.) and multipliers, including 
custom cost functions. We can devise an extensible class structure that 
represents explanations for balancer decisions, and for each region move plan 
that is actually submitted to the assignment manager, we can keep the 
explanations of all relevant decision factors alongside the other details of 
the assignment plan like the region name, and the source and destination 
regionservers. 

This history should be available via API for use by new shell commands and 
admin UI widgets.

The new shell commands and UI widgets can unpack the representation of balancer 
decision components into human readable output. 
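As a purely hypothetical sketch of what one entry in such a rolling history could carry (class and field names invented here, not an existing HBase type):

{code:java}
// Hypothetical illustration of a balancer decision record kept in a bounded history.
public final class BalancerDecision {
  private final String regionName;
  private final String sourceServer;
  private final String destinationServer;
  private final String explanation;   // e.g. cost function summary or rsgroup rule
  private final double initialCost;
  private final double finalCost;
  private final long timestampMillis;

  public BalancerDecision(String regionName, String sourceServer, String destinationServer,
      String explanation, double initialCost, double finalCost, long timestampMillis) {
    this.regionName = regionName;
    this.sourceServer = sourceServer;
    this.destinationServer = destinationServer;
    this.explanation = explanation;
    this.initialCost = initialCost;
    this.finalCost = finalCost;
    this.timestampMillis = timestampMillis;
  }
  // getters omitted for brevity
}
{code}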



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-11288) Splittable Meta

2020-06-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129893#comment-17129893
 ] 

Hudson commented on HBASE-11288:


Results for branch HBASE-11288.splittable-meta
[build #6 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-11288.splittable-meta/6/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-11288.splittable-meta/6/General_20Nightly_20Build_20Report/]






(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-11288.splittable-meta/6/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-11288.splittable-meta/6/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Splittable Meta
> ---
>
> Key: HBASE-11288
> URL: https://issues.apache.org/jira/browse/HBASE-11288
> Project: HBase
>  Issue Type: Umbrella
>  Components: meta
>Reporter: Francis Christopher Liu
>Assignee: Francis Christopher Liu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-24527) Improve region housekeeping status observability

2020-06-09 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-24527:

Description: 
We provide a coarse grained admin API and associated shell command for 
determining the compaction status of a table:

{noformat}
hbase(main):001:0> help "compaction_state"
Here is some help for this command:
 Gets compaction status (MAJOR, MAJOR_AND_MINOR, MINOR, NONE) for a table:
 hbase> compaction_state 'ns1:t1'
 hbase> compaction_state 't1'
{noformat}

We also log  compaction activity, including a compaction journal at completion, 
via log4j to whatever log aggregation solution is available in production.  

This is not sufficient for online and interactive observation, debugging, or 
performance analysis of current compaction activity. In this kind of activity 
an operator is attempting to observe and analyze compaction activity in real 
time. Log aggregation and presentation solutions have typical latencies (end to 
end visibility of log lines on the order of ~minutes) which make that not 
possible today.

We don't offer any API or tools for directly interrogating split and merge 
activity in real time. Some indirect knowledge of split or merge activity can 
be inferred from RIT information via ClusterStatus. It can also be scraped, 
with some difficulty, from the debug servlet. 

We should have new APIs and shell commands, and perhaps also new admin UI 
views, for

at regionserver scope:
* listing the current state of a regionserver's compaction, split, and merge 
tasks and threads
* counting (simple view) and listing (detailed view) a regionserver's 
compaction queues
* listing a region's currently compacting, splitting, or merging status

at master scope, aggregations of the above detailed information into:
* listing the active compaction tasks and threads for a given table, the 
extension of _compaction_state_ with a new detailed view
* listing the active split or merge tasks and threads for a given table's 
regions

Compaction detail should include the names of the effective engine and policy 
classes, and the results and timestamp of the last compaction selection 
evaluation. Split and merge detail should include the names of the effective 
policy classes and the result of the last split or merge criteria evaluation. 

  was:
We provide a coarse grained admin API and associated shell command for 
determining the compaction status of a table:

{noformat}
hbase(main):001:0> help "compaction_state"
Here is some help for this command:
 Gets compaction status (MAJOR, MAJOR_AND_MINOR, MINOR, NONE) for a table:
 hbase> compaction_state 'ns1:t1'
 hbase> compaction_state 't1'
{noformat}

We also log  compaction activity, including a compaction journal at completion, 
via log4j to whatever log aggregation solution is available in production.  

This is not sufficient for online and interactive observation, debugging, or 
performance analysis of current compaction activity. In this kind of activity 
an operator is attempting to observe and analyze compaction activity in real 
time. Log aggregation and presentation solutions have typical latencies (end to 
end visibility of log lines on the order of ~minutes) which make that not 
possible today.

We don't offer any API or tools for directly interrogating split and merge 
activity in real time. Some indirect knowledge of split or merge activity can 
be inferred from RIT information via ClusterStatus. It can also be scraped, 
with some difficulty, from the debug servlet. 

We should have new APIs and shell commands, and perhaps also new admin UI 
views, for

at regionserver scope:
* listing the current state of a regionserver's compaction, split, and merge 
tasks and threads
* counting (simple view) and listing (detailed view) a regionserver's 
compaction queues
* listing a region's currently compacting, splitting, or merging status

at master scope, aggregations of the above detailed information into:
* listing the active compaction tasks and threads for a given table, the 
extension of _compaction_state_ with a new detailed view
* listing the active split or merge tasks and threads for a given table's 
regions


> Improve region housekeeping status observability
> 
>
> Key: HBASE-24527
> URL: https://issues.apache.org/jira/browse/HBASE-24527
> Project: HBase
>  Issue Type: New Feature
>  Components: Admin, Compaction, shell, UI
>Reporter: Andrew Kyle Purtell
>Priority: Major
>
> We provide a coarse grained admin API and associated shell command for 
> determining the compaction status of a table:
> {noformat}
> hbase(main):001:0> help "compaction_state"
> Here is some help for this command:
>  Gets compaction status (MAJOR, MAJOR_AND_MINOR, MINOR, NONE) for a table:
> 

[jira] [Updated] (HBASE-24527) Improve region housekeeping status observability

2020-06-09 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-24527:

Description: 
We provide a coarse grained admin API and associated shell command for 
determining the compaction status of a table:

{noformat}
hbase(main):001:0> help "compaction_state"
Here is some help for this command:
 Gets compaction status (MAJOR, MAJOR_AND_MINOR, MINOR, NONE) for a table:
 hbase> compaction_state 'ns1:t1'
 hbase> compaction_state 't1'
{noformat}

We also log  compaction activity, including a compaction journal at completion, 
via log4j to whatever log aggregation solution is available in production.  

This is not sufficient for online and interactive observation, debugging, or 
performance analysis of current compaction activity. In this kind of activity 
an operator is attempting to observe and analyze compaction activity in real 
time. Log aggregation and presentation solutions have typical latencies (end to 
end visibility of log lines on the order of ~minutes) which make that not 
possible today.

We don't offer any API or tools for directly interrogating split and merge 
activity in real time. Some indirect knowledge of split or merge activity can 
be inferred from RIT information via ClusterStatus. It can also be scraped, 
with some difficulty, from the debug servlet. 

We should have new APIs and shell commands, and perhaps also new admin UI 
views, for

at regionserver scope:
* listing the current state of a regionserver's compaction, split, and merge 
tasks and threads
* counting (simple view) and listing (detailed view) a regionserver's 
compaction queues
* listing a region's currently compacting, splitting, or merging status

at master scope, aggregations of the above detailed information into:
* listing the active compaction tasks and threads for a given table, the 
extension of _compaction_state_ with a new detailed view
* listing the active split or merge tasks and threads for a given table's 
regions

  was:
We provide a coarse grained admin API and associated shell command for 
determining the compaction status of a table:

{noformat}
hbase(main):001:0> help "compaction_state"
Here is some help for this command:
 Gets compaction status (MAJOR, MAJOR_AND_MINOR, MINOR, NONE) for a table:
 hbase> compaction_state 'ns1:t1'
 hbase> compaction_state 't1'
{noformat}

We also log  compaction activity, including a compaction journal at completion, 
via log4j to whatever log aggregation solution is available in production.  

This is not sufficient for online and interactive observation, debugging, or 
performance analysis of current compaction activity. In this kind of activity 
an operator is attempting to observe and analyze compaction activity in real 
time. Log aggregation and presentation solutions have typical latencies (end to 
end visibility of log lines on the order of ~minutes) which make that not 
possible today.

We don't offer any API or tools for directly interrogating split and merge 
activity in real time. Some indirect knowledge of split or merge activity can 
be inferred from RIT information via ClusterStatus. 

We should have new APIs and shell commands, and perhaps also new admin UI 
views, for

at regionserver scope:
* listing the current state of a regionserver's compaction, split, and merge 
tasks and threads
* counting (simple view) and listing (detailed view) a regionserver's 
compaction queues
* listing a region's currently compacting, splitting, or merging status

at master scope, aggregations of the above detailed information into:
* listing the active compaction tasks and threads for a given table, the 
extension of _compaction_state_ with a new detailed view
* listing the active split or merge tasks and threads for a given table's 
regions


> Improve region housekeeping status observability
> 
>
> Key: HBASE-24527
> URL: https://issues.apache.org/jira/browse/HBASE-24527
> Project: HBase
>  Issue Type: New Feature
>  Components: Admin, Compaction, shell, UI
>Reporter: Andrew Kyle Purtell
>Priority: Major
>
> We provide a coarse grained admin API and associated shell command for 
> determining the compaction status of a table:
> {noformat}
> hbase(main):001:0> help "compaction_state"
> Here is some help for this command:
>  Gets compaction status (MAJOR, MAJOR_AND_MINOR, MINOR, NONE) for a table:
>  hbase> compaction_state 'ns1:t1'
>  hbase> compaction_state 't1'
> {noformat}
> We also log  compaction activity, including a compaction journal at 
> completion, via log4j to whatever log aggregation solution is available in 
> production.  
> This is not sufficient for online and interactive observation, debugging, or 
> performance analysis of current compaction activity

[jira] [Created] (HBASE-24527) Improve region housekeeping status observability

2020-06-09 Thread Andrew Kyle Purtell (Jira)
Andrew Kyle Purtell created HBASE-24527:
---

 Summary: Improve region housekeeping status observability
 Key: HBASE-24527
 URL: https://issues.apache.org/jira/browse/HBASE-24527
 Project: HBase
  Issue Type: New Feature
  Components: Admin, Compaction, shell, UI
Reporter: Andrew Kyle Purtell


We provide a coarse grained admin API and associated shell command for 
determining the compaction status of a table:

{noformat}
hbase(main):001:0> help "compaction_state"
Here is some help for this command:
 Gets compaction status (MAJOR, MAJOR_AND_MINOR, MINOR, NONE) for a table:
 hbase> compaction_state 'ns1:t1'
 hbase> compaction_state 't1'
{noformat}

We also log  compaction activity, including a compaction journal at completion, 
via log4j to whatever log aggregation solution is available in production.  

This is not sufficient for online and interactive observation, debugging, or 
performance analysis of current compaction activity. In this kind of activity 
an operator is attempting to observe and analyze compaction activity in real 
time. Log aggregation and presentation solutions have typical latencies (end to 
end visibility of log lines on the order of ~minutes) which make that not 
possible today.

We don't offer any API or tools for directly interrogating split and merge 
activity in real time. Some indirect knowledge of split or merge activity can 
be inferred from RIT information via ClusterStatus. 

We should have new APIs and shell commands, and perhaps also new admin UI 
views, for

at regionserver scope:
* listing the current state of a regionserver's compaction, split, and merge 
tasks and threads
* counting (simple view) and listing (detailed view) a regionserver's 
compaction queues
* listing a region's currently compacting, splitting, or merging status

at master scope, aggregations of the above detailed information into:
* listing the active compaction tasks and threads for a given table, the 
extension of _compaction_state_ with a new detailed view
* listing the active split or merge tasks and threads for a given table's 
regions



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #1670: HBASE-24337 Backport HBASE-23968 to branch-2

2020-06-09 Thread GitBox


Apache-HBase commented on pull request #1670:
URL: https://github.com/apache/hbase/pull/1670#issuecomment-641632737


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 37s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  branch-2 passed  |
   | +1 :green_heart: |  spotbugs  |   1m 56s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 12s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  11m 20s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   2m  4s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 15s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  32m 20s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1670/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1670 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti 
checkstyle |
   | uname | Linux 7d0b993fc168 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / b67f896954 |
   | Max. process+thread count | 94 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1670/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase-operator-tools] ndimiduk commented on a change in pull request #64: HBASE-23927 recommit the fix with updates for PR review feedback

2020-06-09 Thread GitBox


ndimiduk commented on a change in pull request #64:
URL: 
https://github.com/apache/hbase-operator-tools/pull/64#discussion_r437772726



##
File path: hbase-hbck2/src/test/java/org/apache/hbase/TestHBCK2.java
##
@@ -70,6 +73,8 @@
   private static final TableName TABLE_NAME = 
TableName.valueOf(TestHBCK2.class.getSimpleName());
   private static final TableName REGION_STATES_TABLE_NAME = TableName.
 valueOf(TestHBCK2.class.getSimpleName() + "-REGIONS_STATES");
+  private final static String ASSIGNS = "assigns";

Review comment:
   thanks!

##
File path: hbase-hbck2/src/main/java/org/apache/hbase/HBCK2.java
##
@@ -294,7 +300,30 @@ int setRegionState(ClusterConnection connection, String 
region,
   return null;
 }
 boolean overrideFlag = commandLine.hasOption(override.getOpt());
-return hbck.assigns(commandLine.getArgList(), overrideFlag);
+
+List argList = commandLine.getArgList();
+if (!commandLine.hasOption(inputFile.getOpt())) {
+  return hbck.assigns(argList, overrideFlag);
+} else {

Review comment:
   nit: the else block is redundant given the early `return` on the previous line.
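   For instance, a sketch of the shape being suggested (variable names such as `hbck`, `inputFile`, and `overrideFlag` are taken from the surrounding patch):
   
   ```java
   List<String> argList = commandLine.getArgList();
   if (!commandLine.hasOption(inputFile.getOpt())) {
     return hbck.assigns(argList, overrideFlag);
   }
   // No else needed: the early return above means the file-reading path
   // can simply continue here at the same indentation level.
   ```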

##
File path: hbase-hbck2/src/test/java/org/apache/hbase/TestHBCK2.java
##
@@ -127,32 +132,39 @@ public void testAssigns() throws IOException {
 getRegionStates().getRegionState(ri.getEncodedName());
 LOG.info("RS: {}", rs.toString());
   }
-  List regionStrs =
-  
regions.stream().map(RegionInfo::getEncodedName).collect(Collectors.toList());
-  String [] regionStrsArray = regionStrs.toArray(new String[] {});
+  String [] regionStrsArray  =
+  
regions.stream().map(RegionInfo::getEncodedName).collect(Collectors.toList())
+  .toArray(new String[] {});

Review comment:
   You can skip the `collect` and go directly 
[`toArray`](https://docs.oracle.com/javase/8/docs/api/java/util/stream/Stream.html#toArray-java.util.function.IntFunction-).
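   For example (sketch only):
   
   ```java
   String[] regionStrsArray =
       regions.stream().map(RegionInfo::getEncodedName).toArray(String[]::new);
   ```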

##
File path: hbase-hbck2/src/main/java/org/apache/hbase/HBCK2.java
##
@@ -294,7 +300,30 @@ int setRegionState(ClusterConnection connection, String 
region,
   return null;
 }
 boolean overrideFlag = commandLine.hasOption(override.getOpt());
-return hbck.assigns(commandLine.getArgList(), overrideFlag);
+
+List argList = commandLine.getArgList();
+if (!commandLine.hasOption(inputFile.getOpt())) {
+  return hbck.assigns(argList, overrideFlag);
+} else {
+  List assignmentList = new ArrayList<>();
+  for (String filePath : argList) {
+try {
+  File file = new File(filePath);
+  FileReader fileReader = new FileReader(file);
+  BufferedReader bufferedReader = new BufferedReader(fileReader);
+  String regionName = bufferedReader.readLine().trim();
+  while (regionName != null) {

Review comment:
   Sorry I wasn't more specific before. How about something like 
[IOUtils.lineIterator](http://commons.apache.org/proper/commons-io/javadocs/api-release/org/apache/commons/io/IOUtils.html#lineIterator-java.io.InputStream-java.nio.charset.Charset-)?
 That way there are no infinite loops and no null checks, just a simple for 
loop over an iterator.
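   Something along these lines, as a sketch (assumes commons-io on the classpath; imports of FileInputStream, IOUtils, LineIterator, and StandardCharsets omitted; `filePath` and `assignmentList` come from the surrounding patch):
   
   ```java
   try (InputStream in = new FileInputStream(filePath)) {
     // Iterate over the encoded region names in the input file, one per line.
     LineIterator lines = IOUtils.lineIterator(in, StandardCharsets.UTF_8);
     while (lines.hasNext()) {
       String regionName = lines.next().trim();
       if (!regionName.isEmpty()) {
         assignmentList.add(regionName);
       }
     }
   }
   ```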

##
File path: hbase-hbck2/src/main/java/org/apache/hbase/HBCK2.java
##
@@ -441,16 +470,20 @@ private static void 
usageAddFsRegionsMissingInMeta(PrintWriter writer) {
   }
 
   private static void usageAssigns(PrintWriter writer) {
-writer.println(" " + ASSIGNS + " [OPTIONS] ...");
+writer.println(" " + ASSIGNS + " [OPTIONS] 
...");
 writer.println("   Options:");
 writer.println("-o,--override  override ownership by another 
procedure");
+writer.println("-i,--inputFiles  take one or more files of encoded 
region names");

Review comment:
   Yikes! I'm surprised our CLI parsing library doesn't handle generating a 
help message from the arguments on our behalf :(





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1880: HBASE-24144 Update docs from master

2020-06-09 Thread GitBox


Apache-HBase commented on pull request #1880:
URL: https://github.com/apache/hbase/pull/1880#issuecomment-641627049


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 20s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  7s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   ||| _ Other Tests _ |
   |  |   |   3m 52s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1880/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1880 |
   | Optional Tests |  |
   | uname | Linux 48d35a65e3e1 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / b67f896954 |
   | Max. process+thread count | 47 (vs. ulimit of 12500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1880/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1880: HBASE-24144 Update docs from master

2020-06-09 Thread GitBox


Apache-HBase commented on pull request #1880:
URL: https://github.com/apache/hbase/pull/1880#issuecomment-641626709


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  9s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  7s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   ||| _ Other Tests _ |
   |  |   |   2m 32s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1880/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1880 |
   | Optional Tests |  |
   | uname | Linux 3093744415de 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 
11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / b67f896954 |
   | Max. process+thread count | 49 (vs. ulimit of 12500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1880/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] saintstack commented on pull request #1737: HBASE-24382 Flush partial stores of region filtered by seqId when arc…

2020-06-09 Thread GitBox


saintstack commented on pull request #1737:
URL: https://github.com/apache/hbase/pull/1737#issuecomment-641626808


   FlushPolicy does seem to cut across what you are trying to do here, where you 
only flush a subset of stores, even though, reading the original issue, it also 
seems to be aiming at flushing a subset only: https://issues.apache.org/jira/browse/HBASE-10201 
   
   Should we change FlushPolicy to accommodate your need?
   
   If we flush all regardless of the policy, that sounds like a problem.
   
   Thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase-operator-tools] clarax commented on pull request #61: taking one or more files for regions for assigns

2020-06-09 Thread GitBox


clarax commented on pull request #61:
URL: 
https://github.com/apache/hbase-operator-tools/pull/61#issuecomment-641625372


   > Please pull the cleanup into its own PR
   
   Thank you. Closed and opened a new PR 
https://github.com/apache/hbase-operator-tools/pull/64



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] saintstack commented on a change in pull request #1737: HBASE-24382 Flush partial stores of region filtered by seqId when arc…

2020-06-09 Thread GitBox


saintstack commented on a change in pull request #1737:
URL: https://github.com/apache/hbase/pull/1737#discussion_r437769058



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
##
@@ -58,8 +60,8 @@ protected void scheduleFlush(String encodedRegionName) {
 encodedRegionName, r);
   return;
 }
-// force flushing all stores to clean old logs
-requester.requestFlush(r, true, FlushLifeCycleTracker.DUMMY);
+// force flushing specified stores to clean old logs
+requester.requestFlush(r, false, families, FlushLifeCycleTracker.DUMMY);

Review comment:
   Are you sure? You changed the force flag from true to false on the line above.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Work started] (HBASE-24144) Update docs from master

2020-06-09 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-24144 started by Nick Dimiduk.

> Update docs from master
> ---
>
> Key: HBASE-24144
> URL: https://issues.apache.org/jira/browse/HBASE-24144
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
>
> Take a pass updating the docs. Have a look at what's on branch-2.2 and add 
> whatever updates we need from master. Consider refreshing branch-2 as well, 
> since it's been a while.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase-operator-tools] clarax opened a new pull request #64: HBASE-23927 recommit the fix with updates for PR review feedback

2020-06-09 Thread GitBox


clarax opened a new pull request #64:
URL: https://github.com/apache/hbase-operator-tools/pull/64


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] ndimiduk opened a new pull request #1880: HBASE-24144 Update docs from master

2020-06-09 Thread GitBox


ndimiduk opened a new pull request #1880:
URL: https://github.com/apache/hbase/pull/1880


   Bring back documentation from master branch (9ef17c2784), using
   
   ```
   $ git checkout master -- src/main/asciidoc/
   $ git checkout master -- src/site/asciidoc/
   ```
   
   Followed up with a commit to revert changes for synchronous replication, 
which is a feature only on master at this time. What other docs changes need to 
be dropped from this commit? I fear some content cannot be simply dropped because 
the language won't make sense.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-24444) Should shutdown mini cluster after class in TestMetaAssignmentWithStopMaster

2020-06-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129865#comment-17129865
 ] 

Hudson commented on HBASE-2:


Results for branch branch-2.2
[build #889 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/889/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/889//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/889//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/889//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Should shutdown mini cluster after class in TestMetaAssignmentWithStopMaster
> 
>
> Key: HBASE-2
> URL: https://issues.apache.org/jira/browse/HBASE-2
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Duo Zhang
>Assignee: wenfeiyi666
>Priority: Minor
>  Labels: trivial
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.6
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24117) Shutdown AssignmentManager before ProcedureExecutor may cause SCP to accidentally skip assigning a region

2020-06-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129867#comment-17129867
 ] 

Hudson commented on HBASE-24117:


Results for branch branch-2.2
[build #889 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/889/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/889//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/889//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/889//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Shutdown AssignmentManager before ProcedureExecutor may cause SCP to 
> accidentally skip assigning a region
> -
>
> Key: HBASE-24117
> URL: https://issues.apache.org/jira/browse/HBASE-24117
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Reporter: Michael Stack
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.6
>
> Attachments: 
> org.apache.hadoop.hbase.master.assignment.TestCloseRegionWhileRSCrash-output.txt
>
>
> I saw this on TestCloseRegionWithRSCrash. The Region 
> 788a516d1f86af98e0a16bcc1afe4fa1 was being moved to RS  
> example.com,62652,1586032098445 just after it was killed. The Move Close 
> fails because the RS has no node in the Master. The Move then tries to 
> 'confirm' the close, but that fails because there is no remote RS. We then 
> wait in this state until an operator or some other procedure intervenes to 
> 'fix' the state. Normally a ServerCrashProcedure would do the job, but in this 
> test the Master is restarted after the RS is killed, a condition we do not accommodate.
> Let me attach the test log.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24367) ScheduledChore log elapsed timespan in a human-friendly format

2020-06-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129868#comment-17129868
 ] 

Hudson commented on HBASE-24367:


Results for branch branch-2.2
[build #889 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/889/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/889//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/889//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/889//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> ScheduledChore log elapsed timespan in a human-friendly format
> --
>
> Key: HBASE-24367
> URL: https://issues.apache.org/jira/browse/HBASE-24367
> Project: HBase
>  Issue Type: Task
>  Components: master, regionserver
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Andrew Kyle Purtell
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.6
>
>
> I noticed this in a log line,
> {noformat}
> 2020-04-23 18:31:14,183 INFO org.apache.hadoop.hbase.ScheduledChore: 
> host-a.example.com,16000,1587577999888-ClusterStatusChore average execution 
> time: 68488258 ns.
> {noformat}
> I'm not sure if there's a case when elapsed time in nanoseconds is meaningful 
> for these background chores, but we could do a little work before printing 
> the number and time unit to truncate precision down to something a little 
> more intuitive for operators. This number purports to be an average, so a 
> high level of precision isn't necessarily meaningful.
> Separately, or while we're here, if we think an operator really cares about 
> the performance of this chore, we should print a histogram of elapsed times, 
> rather than an opaque average.
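
One way to do the truncation, as a minimal self-contained sketch (not the 
actual patch; there may already be a pretty-printer utility in the code base 
that fits better):

{noformat}
import java.util.concurrent.TimeUnit;

public class HumanReadableElapsed {
  /** Render a nanosecond duration at a precision operators can read at a glance. */
  static String humanReadable(long nanos) {
    if (nanos < TimeUnit.MICROSECONDS.toNanos(1)) {
      return nanos + " ns";
    } else if (nanos < TimeUnit.MILLISECONDS.toNanos(1)) {
      return TimeUnit.NANOSECONDS.toMicros(nanos) + " us";
    } else if (nanos < TimeUnit.SECONDS.toNanos(1)) {
      return TimeUnit.NANOSECONDS.toMillis(nanos) + " ms";
    } else if (nanos < TimeUnit.MINUTES.toNanos(1)) {
      return String.format("%.2f s", nanos / 1_000_000_000.0);
    }
    return String.format("%.2f min", nanos / 60_000_000_000.0);
  }

  public static void main(String[] args) {
    // The value from the log line above, 68488258 ns, prints as "68 ms".
    System.out.println(humanReadable(68488258L));
  }
}
{noformat}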



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24517) AssignmentManager.start should add meta region to ServerStateNode

2020-06-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129866#comment-17129866
 ] 

Hudson commented on HBASE-24517:


Results for branch branch-2.2
[build #889 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/889/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/889//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/889//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/889//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> AssignmentManager.start should add meta region to ServerStateNode
> -
>
> Key: HBASE-24517
> URL: https://issues.apache.org/jira/browse/HBASE-24517
> Project: HBase
>  Issue Type: Bug
>  Components: amv2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.6
>
>
> In AssignmentManager.start, we load the meta region state and location from 
> zk and create the RegionStateNode, but we forget to call 
> regionStates.addRegionToServer to add the region to its region server.
> Found this while implementing HBASE-24390. In HBASE-24390 we will remove 
> RegionInfoBuilder.FIRST_META_REGIONINFO, so in SCP we need to use 
> getRegionsOnServer instead of RegionInfoBuilder.FIRST_META_REGIONINFO when 
> assigning meta, and at that point the bug becomes a real problem.
> Though it is not a big problem for SCP on the current 2.x and master branches, 
> it is a high-risk bug. For example, in AssignmentManager.submitServerCrash we 
> currently use the RegionStateNode of the meta regions to determine whether the 
> given region server carries meta regions, but it would be equally valid to 
> test via the ServerStateNode's region list. If we later change this method to 
> use ServerStateNode, it would cause a very serious data-loss bug.
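
For illustration only, a minimal sketch of the missing step; apart from 
regionStates.addRegionToServer, which the description names, the surrounding 
method shape and helper calls are assumptions, not the committed patch:

{noformat}
// Sketch only: helper names other than regionStates.addRegionToServer are
// assumptions about the surrounding AssignmentManager/RegionStates code.
private void registerMetaLoadedFromZk(RegionState metaState) {
  RegionStateNode metaNode =
      regionStates.getOrCreateRegionStateNode(metaState.getRegion());
  metaNode.setRegionLocation(metaState.getServerName());
  if (metaState.getServerName() != null) {
    // The forgotten call: record the meta region in the hosting server's
    // ServerStateNode so the server's region list reports it too.
    regionStates.addRegionToServer(metaNode);
  }
}
{noformat}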



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-18659) Use HDFS ACL to give user the ability to read snapshot directly on HDFS

2020-06-09 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-18659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129863#comment-17129863
 ] 

Nick Dimiduk commented on HBASE-18659:
--

Looks like this has basically been implemented and applied on 2.3. Any 
objection to moving the last subtask out as an independent task so that we can 
close this parent issue as resolved against 2.3.0?

> Use HDFS ACL to give user the ability to read snapshot directly on HDFS
> ---
>
> Key: HBASE-18659
> URL: https://issues.apache.org/jira/browse/HBASE-18659
> Project: HBase
>  Issue Type: New Feature
>Reporter: Duo Zhang
>Assignee: Yi Mei
>Priority: Major
>
> On the dev meetup notes in Shenzhen after HBaseCon Asia, there is a topic 
> about the permission to read hfiles on HDFS directly.
> {quote}
> For client-side scanner going against hfiles directly; is there a means of 
> being able to pass the permissions from hbase to hdfs?
> {quote}
> And at Xiaomi we also face the same problem. {{SnapshotScanner}} is much 
> faster and consumes fewer resources, but only the super user has the ability 
> to read hfiles directly on HDFS.
> So here we want to use HDFS ACL to address this problem.
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#ACLs_File_System_API
> The basic idea is to set the ACL and default ACL on the ns/table/cf 
> directories on HDFS for the users who have permission to read the table in 
> HBase.
> Suggestions are welcomed.
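
To make the idea concrete, here is a minimal sketch against the stock HDFS ACL 
API (FileSystem#modifyAclEntries); the directory path and user name are 
placeholders, and the real feature may grant permissions differently:

{noformat}
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

public class GrantSnapshotReadAcl {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Placeholder: a cf directory under the HBase root dir.
    Path cfDir = new Path("/hbase/data/default/usertable/cf");
    // Placeholder: a user who has been granted read on the table in HBase.
    String user = "analyst";

    // The ACCESS entry lets the user traverse/list the existing directory; the
    // DEFAULT entry is inherited by files created under it later (flushes,
    // compactions), so newly written hfiles stay readable.
    List<AclEntry> aclSpec = Arrays.asList(
        new AclEntry.Builder().setScope(AclEntryScope.ACCESS)
            .setType(AclEntryType.USER).setName(user)
            .setPermission(FsAction.READ_EXECUTE).build(),
        new AclEntry.Builder().setScope(AclEntryScope.DEFAULT)
            .setType(AclEntryType.USER).setName(user)
            .setPermission(FsAction.READ_EXECUTE).build());
    fs.modifyAclEntries(cfDir, aclSpec);
  }
}
{noformat}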



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-22625) document use scan snapshot feature

2020-06-09 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-22625:
-
Fix Version/s: 3.0.0-alpha-1

> document use scan snapshot feature
> -
>
> Key: HBASE-22625
> URL: https://issues.apache.org/jira/browse/HBASE-22625
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
> Fix For: 3.0.0-alpha-1
>
>
> Add the design doc in dev-support/design-docs and describe the feature in 
> the reference guide.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-24005) Document maven invocation with JDK11

2020-06-09 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HBASE-24005.
--
Fix Version/s: 3.0.0-alpha-1
   Resolution: Fixed

> Document maven invocation with JDK11
> 
>
> Key: HBASE-24005
> URL: https://issues.apache.org/jira/browse/HBASE-24005
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.0.0-alpha-1
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0-alpha-1
>
>
> This is not obvious at the moment. Add some docs to ease dev setup.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] ndimiduk commented on pull request #1871: HBASE-24005 Document maven invocation with JDK11

2020-06-09 Thread GitBox


ndimiduk commented on pull request #1871:
URL: https://github.com/apache/hbase/pull/1871#issuecomment-641616500


   Thanks for the reviews!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] ndimiduk merged pull request #1871: HBASE-24005 Document maven invocation with JDK11

2020-06-09 Thread GitBox


ndimiduk merged pull request #1871:
URL: https://github.com/apache/hbase/pull/1871


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase-operator-tools] huaxiangsun merged pull request #63: Revert "HBASE-23927 HBCK takes one or more files for assigns"

2020-06-09 Thread GitBox


huaxiangsun merged pull request #63:
URL: https://github.com/apache/hbase-operator-tools/pull/63


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1257: HBASE 23887 Up to 3x increase BlockCache performance

2020-06-09 Thread GitBox


Apache-HBase commented on pull request #1257:
URL: https://github.com/apache/hbase/pull/1257#issuecomment-641614369


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   3m 51s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 11s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 35s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 48s |  hbase-server in master failed.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 50s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 13s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 46s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 49s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  49m  4s |  hbase-server in the patch failed.  |
   |  |   |  81m 44s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/34/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1257 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux b7ba20c08443 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 474d200daa |
   | Default Java | 2020-01-14 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/34/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/34/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/34/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/34/testReport/
 |
   | Max. process+thread count | 618 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/34/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1257: HBASE 23887 Up to 3x increase BlockCache performance

2020-06-09 Thread GitBox


Apache-HBase commented on pull request #1257:
URL: https://github.com/apache/hbase/pull/1257#issuecomment-641612849


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   3m 51s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 57s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 17s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  3s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  0s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  0s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m  9s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  48m 10s |  hbase-server in the patch failed.  |
   |  |   |  77m  6s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/34/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1257 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux fc98ac150f9b 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 474d200daa |
   | Default Java | 1.8.0_232 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/34/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/34/testReport/
 |
   | Max. process+thread count | 725 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/34/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase-operator-tools] huaxiangsun opened a new pull request #63: Revert "HBASE-23927 HBCK takes one or more files for assigns"

2020-06-09 Thread GitBox


huaxiangsun opened a new pull request #63:
URL: https://github.com/apache/hbase-operator-tools/pull/63


   Reverts apache/hbase-operator-tools#62 because the commit message does not 
contain the correct JIRA info.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase-operator-tools] huaxiangsun commented on pull request #62: HBASE-23927 HBCK takes one or more files for assigns

2020-06-09 Thread GitBox


huaxiangsun commented on pull request #62:
URL: 
https://github.com/apache/hbase-operator-tools/pull/62#issuecomment-641611553


   Sorry @busbey, I did not notice that the JIRA id was missing. I will revert 
and ask @clarax to resubmit the patch with the correct JIRA info.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-12187) Review in source the paper "Simple Testing Can Prevent Most Critical Failures"

2020-06-09 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-12187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129842#comment-17129842
 ] 

Michael Stack commented on HBASE-12187:
---

2.4.0 was released 11 days ago 
https://github.com/google/error-prone/releases/tag/v2.4.0

> Review in source the paper "Simple Testing Can Prevent Most Critical Failures"
> --
>
> Key: HBASE-12187
> URL: https://issues.apache.org/jira/browse/HBASE-12187
> Project: HBase
>  Issue Type: Bug
>Reporter: Michael Stack
>Priority: Critical
> Attachments: HBASE-12187.patch, abortInOvercatch.warnings.txt, 
> emptyCatch.warnings.txt, todoInCatch.warnings.txt
>
>
> Review the helpful paper 
> https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-yuan.pdf
> It describes 'catastrophic failures', especially issues where exceptions are 
> thrown but not properly handled. Their static analysis tool Aspirator turns 
> up a bunch of the obvious offenders (let's add it to test-patch.sh alongside 
> findbugs?). This issue is about going through the code base making sub-issues 
> to root out these and others (Don't we have the test described in figure #6 
> already? I thought we did? If we don't, we need to add it).
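
For anyone who has not read the paper, the class of bug Aspirator flags is the 
swallowed exception. A small illustration, not taken from our code base:

{noformat}
import java.io.Closeable;
import java.io.IOException;

public class CatchBlockExamples {
  // Anti-pattern the paper calls out: the failure is silently discarded, so
  // callers proceed as if the close succeeded. Empty or TODO-only catch
  // blocks are exactly what Aspirator reports.
  static void closeQuietly(Closeable c) {
    try {
      c.close();
    } catch (IOException e) {
      // TODO
    }
  }

  // One acceptable shape: record the failure and let the caller decide.
  static void closeLoudly(Closeable c) throws IOException {
    try {
      c.close();
    } catch (IOException e) {
      System.err.println("close failed: " + e); // LOG.warn(...) in real code
      throw e;
    }
  }
}
{noformat}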



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] bharathv commented on a change in pull request #1879: HBASE-24525 [branch-1] Support ZooKeeper 3.6.0+

2020-06-09 Thread GitBox


bharathv commented on a change in pull request #1879:
URL: https://github.com/apache/hbase/pull/1879#discussion_r437747899



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperMainServer.java
##
@@ -66,8 +66,14 @@ public HACK_UNTIL_ZOOKEEPER_1897_ZooKeeperMain(String[] args)
  * @throws IOException
  * @throws InterruptedException
  */
-void runCmdLine() throws KeeperException, IOException, 
InterruptedException {
-  processCmd(this.cl);
+void runCmdLine() throws IOException, InterruptedException {
+  try {
+processCmd(this.cl);
+  } catch (IOException | InterruptedException e) {
+throw e;
+  } catch (Exception e) {

Review comment:
   > ZK 3.6 throws another type of checked exception, which causes a 
compilation error
   
   I didn't see it here [1], hence my question. Maybe I missed something.
   
   [1] 
https://github.com/apache/zookeeper/blob/branch-3.6/zookeeper-server/src/main/java/org/apache/zookeeper/ZooKeeperMain.java#L380

##
File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java
##
@@ -85,14 +85,20 @@ public static void main(String[] args) {
   }
 
   private static void runZKServer(QuorumPeerConfig zkConfig) throws 
UnknownHostException, IOException {

Review comment:
   Ah, okay. I missed the AdminServerException in the signature; works for me 
(wfm) to keep it as-is.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] apurtell commented on pull request #1879: HBASE-24525 [branch-1] Support ZooKeeper 3.6.0+

2020-06-09 Thread GitBox


apurtell commented on pull request #1879:
URL: https://github.com/apache/hbase/pull/1879#issuecomment-641601976


   Another update to fix checkstyle nits. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] apurtell commented on pull request #1879: HBASE-24525 [branch-1] Support ZooKeeper 3.6.0+

2020-06-09 Thread GitBox


apurtell commented on pull request #1879:
URL: https://github.com/apache/hbase/pull/1879#issuecomment-641601299


   @bharathv I attempted to address your feedback with comments. Please let me 
know if this works for you or if more should be done.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1879: HBASE-24525 [branch-1] Support ZooKeeper 3.6.0+

2020-06-09 Thread GitBox


Apache-HBase commented on pull request #1879:
URL: https://github.com/apache/hbase/pull/1879#issuecomment-641601238


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   7m 14s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-1 Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 25s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   8m  0s |  branch-1 passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  branch-1 passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  compile  |   1m  9s |  branch-1 passed with JDK 
v1.7.0_262  |
   | +1 :green_heart: |  checkstyle  |   2m 16s |  branch-1 passed  |
   | +1 :green_heart: |  shadedjars  |   3m  0s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  branch-1 passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  branch-1 passed with JDK 
v1.7.0_262  |
   | +0 :ok: |  spotbugs  |   2m 39s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 14s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 58s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  the patch passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  javac  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  the patch passed with JDK 
v1.7.0_262  |
   | +1 :green_heart: |  javac  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  hbase-client: The patch 
generated 0 new + 62 unchanged - 1 fixed = 62 total (was 63)  |
   | -1 :x: |  checkstyle  |   1m 37s |  hbase-server: The patch generated 1 
new + 14 unchanged - 0 fixed = 15 total (was 14)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   2m 48s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   4m 36s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  the patch passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  the patch passed with JDK 
v1.7.0_262  |
   | +1 :green_heart: |  findbugs  |   4m 24s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 41s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 126m 50s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 57s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 186m 13s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1879/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1879 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 0b95a1f29939 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1879/out/precommit/personality/provided.sh
 |
   | git revision | branch-1 / b6598cc |
   | Default Java | 1.7.0_262 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_252 
/usr/lib/jvm/zulu-7-amd64:1.7.0_262 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1879/1/artifact/out/diff-checkstyle-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1879/1/testReport/
 |
   | Max. process+thread count | 4485 (vs. ulimit of 1) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1879/1/console |
   | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[GitHub] [hbase] Apache-HBase commented on pull request #1257: HBASE 23887 Up to 3x increase BlockCache performance

2020-06-09 Thread GitBox


Apache-HBase commented on pull request #1257:
URL: https://github.com/apache/hbase/pull/1257#issuecomment-641597316


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  5s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 42s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m 11s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 41s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  hbase-server: The patch 
generated 0 new + 9 unchanged - 2 fixed = 9 total (was 11)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  12m 13s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   2m 16s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 12s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  35m 22s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/34/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1257 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti 
checkstyle |
   | uname | Linux 735a780cf9b1 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 
11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 474d200daa |
   | Max. process+thread count | 84 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/34/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] apurtell commented on a change in pull request #1879: HBASE-24525 [branch-1] Support ZooKeeper 3.6.0+

2020-06-09 Thread GitBox


apurtell commented on a change in pull request #1879:
URL: https://github.com/apache/hbase/pull/1879#discussion_r437738544



##
File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestHQuorumPeer.java
##
@@ -111,25 +115,39 @@
 QuorumPeerConfig config = new QuorumPeerConfig();
 config.parseProperties(properties);
 
-assertEquals(this.dataDir.toString(), config.getDataDir());
+assertEquals(this.dataDir.toString(), config.getDataDir().toString());
 assertEquals(2181, config.getClientPortAddress().getPort());
 Map servers = config.getServers();
 assertEquals(3, servers.size());
 assertTrue(servers.containsKey(Long.valueOf(0)));
 QuorumServer server = servers.get(Long.valueOf(0));
-assertEquals("localhost", server.addr.getHostName());
+assertEquals("localhost", getHostName(server));
 
 // Override with system property.
 System.setProperty("hbase.master.hostname", "foo.bar");
 is = new ByteArrayInputStream(s.getBytes());
 properties = ZKConfig.parseZooCfg(conf, is);
 assertEquals("foo.bar:2888:3888", properties.get("server.0"));
-
 config.parseProperties(properties);
 
 servers = config.getServers();
 server = servers.get(Long.valueOf(0));
-assertEquals("foo.bar", server.addr.getHostName());
+assertEquals("foo.bar", getHostName(server));
+  }
+
+  private static String getHostName(QuorumServer server) throws Exception {
+String hostname;
+switch (server.addr.getClass().getName()) {

Review comment:
   I can add a comment. I think the cross-version issues are clear enough 
by this resort to reflection :-( 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] apurtell commented on a change in pull request #1879: HBASE-24525 [branch-1] Support ZooKeeper 3.6.0+

2020-06-09 Thread GitBox


apurtell commented on a change in pull request #1879:
URL: https://github.com/apache/hbase/pull/1879#discussion_r437738171



##
File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java
##
@@ -85,14 +85,20 @@ public static void main(String[] args) {
   }
 
   private static void runZKServer(QuorumPeerConfig zkConfig) throws 
UnknownHostException, IOException {

Review comment:
   > UnknownHostException is also an IOE
   
   Might be true but the method signature here was what it was and there is no 
need to fix it. I have little opinion either way other than a preference to 
minimize change. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] apurtell commented on a change in pull request #1879: HBASE-24525 [branch-1] Support ZooKeeper 3.6.0+

2020-06-09 Thread GitBox


apurtell commented on a change in pull request #1879:
URL: https://github.com/apache/hbase/pull/1879#discussion_r437737505



##
File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java
##
@@ -85,14 +85,20 @@ public static void main(String[] args) {
   }
 
   private static void runZKServer(QuorumPeerConfig zkConfig) throws 
UnknownHostException, IOException {

Review comment:
   
   The current signature proposes we throw IOException or UnknownHostException. 
ZK 3.6 throws another type of checked exception, which causes a compilation 
error. Therefore I catch that and potentially others, and wrap it into an IOE.
   
   Adding the new checked exception to the signature would cause a compilation 
problem with 3.4, so that is not possible due to this legacy.
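
   Concretely, the shape of the change is something like the sketch below; 
doRun() is a placeholder for whatever the method actually does, so this is not 
the exact branch-1 code:

   // Keep the legacy `throws UnknownHostException, IOException` signature
   // while tolerating the extra checked exception ZK 3.6 can throw.
   private static void runZKServer(QuorumPeerConfig zkConfig)
       throws UnknownHostException, IOException {
     try {
       doRun(zkConfig);              // placeholder for the real body
     } catch (IOException e) {
       throw e;                      // already fits the legacy signature
     } catch (Exception e) {
       throw new IOException(e);     // wraps the 3.6-only checked exception
                                     // without naming it, so 3.4 still compiles
     }
   }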





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] apurtell commented on a change in pull request #1879: HBASE-24525 [branch-1] Support ZooKeeper 3.6.0+

2020-06-09 Thread GitBox


apurtell commented on a change in pull request #1879:
URL: https://github.com/apache/hbase/pull/1879#discussion_r437737134



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperMainServer.java
##
@@ -66,8 +66,14 @@ public HACK_UNTIL_ZOOKEEPER_1897_ZooKeeperMain(String[] args)
  * @throws IOException
  * @throws InterruptedException
  */
-void runCmdLine() throws KeeperException, IOException, 
InterruptedException {
-  processCmd(this.cl);
+void runCmdLine() throws IOException, InterruptedException {
+  try {
+processCmd(this.cl);
+  } catch (IOException | InterruptedException e) {
+throw e;
+  } catch (Exception e) {

Review comment:
   The current signature proposes we throw IOException or 
InterruptedException. ZK 3.6 throws another type of checked exception, which 
causes a compilation error. Therefore I catch that and potentially others, and 
wrap it into an IOE. 
   
   Adding the new checked exception to the signature would cause a compilation 
problem with 3.4, so that is not possible due to this legacy.

##
File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java
##
@@ -85,14 +85,20 @@ public static void main(String[] args) {
   }
 
   private static void runZKServer(QuorumPeerConfig zkConfig) throws 
UnknownHostException, IOException {

Review comment:
   
   
   The current signature proposes we throw IOException or UnknownHostException. 
ZK 3.6 throws another type of checked exception, which causes a compilation 
error. Therefore I catch that and potentially others, and wrap them into an IOE.
   
   Adding the new checked exception to the signature would cause a compilation 
problem with 3.4, so that is not possible due to this legacy.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1871: HBASE-24005 Document maven invocation with JDK11

2020-06-09 Thread GitBox


Apache-HBase commented on pull request #1871:
URL: https://github.com/apache/hbase/pull/1871#issuecomment-641584181


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 50s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 45s |  master passed  |
   | +0 :ok: |  refguide  |   4m 49s |  branch has no errors when building the 
reference guide. See footer for rendered docs, which you should manually 
inspect.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 18s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +0 :ok: |  refguide  |   4m 55s |  patch has no errors when building the 
reference guide. See footer for rendered docs, which you should manually 
inspect.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 18s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  20m 28s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1871/5/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1871 |
   | Optional Tests | dupname asflicense refguide |
   | uname | Linux 399b51985d86 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 474d200daa |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1871/5/artifact/yetus-general-check/output/branch-site/book.html
 |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1871/5/artifact/yetus-general-check/output/patch-site/book.html
 |
   | Max. process+thread count | 79 (vs. ulimit of 12500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1871/5/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-12187) Review in source the paper "Simple Testing Can Prevent Most Critical Failures"

2020-06-09 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-12187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129804#comment-17129804
 ] 

Michael Stack commented on HBASE-12187:
---

We are currently on 2.3.4. Could update.

> Review in source the paper "Simple Testing Can Prevent Most Critical Failures"
> --
>
> Key: HBASE-12187
> URL: https://issues.apache.org/jira/browse/HBASE-12187
> Project: HBase
>  Issue Type: Bug
>Reporter: Michael Stack
>Priority: Critical
> Attachments: HBASE-12187.patch, abortInOvercatch.warnings.txt, 
> emptyCatch.warnings.txt, todoInCatch.warnings.txt
>
>
> Review the helpful paper 
> https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-yuan.pdf
> It describes 'catastrophic failures', especially issues where exceptions are 
> thrown but not properly handled. Their static analysis tool Aspirator turns 
> up a bunch of the obvious offenders (let's add it to test-patch.sh alongside 
> findbugs?). This issue is about going through the code base making sub-issues 
> to root out these and others (Don't we have the test described in figure #6 
> already? I thought we did? If we don't, we need to add it).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #1871: HBASE-24005 Document maven invocation with JDK11

2020-06-09 Thread GitBox


Apache-HBase commented on pull request #1871:
URL: https://github.com/apache/hbase/pull/1871#issuecomment-641566105


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 51s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   ||| _ Other Tests _ |
   |  |   |   3m  0s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1871/5/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1871 |
   | Optional Tests |  |
   | uname | Linux f5fa1888d91c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 474d200daa |
   | Max. process+thread count | 47 (vs. ulimit of 12500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1871/5/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] ndimiduk commented on a change in pull request #1791: HBASE-23202 ExportSnapshot (import) will fail if copying files to roo…

2020-06-09 Thread GitBox


ndimiduk commented on a change in pull request #1791:
URL: https://github.com/apache/hbase/pull/1791#discussion_r437717024



##
File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/snapshot/TestSnapshotFileCacheWithDifferentWorkingDir.java
##
@@ -0,0 +1,64 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.snapshot;
+
+import java.io.File;
+import java.nio.file.Paths;
+import java.util.UUID;
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils;
+import org.apache.hadoop.hbase.testclassification.LargeTests;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.experimental.categories.Category;
+
+/**
+ * Test that we correctly reload the cache, filter directories, etc.
+ * while the temporary directory is on a different file system than the root 
directory
+ */
+@Category({MasterTests.class, LargeTests.class})
+public class TestSnapshotFileCacheWithDifferentWorkingDir extends 
TestSnapshotFileCache {

Review comment:
   Yep, no problem. Thanks for addressing my other concerns :)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1871: HBASE-24005 Document maven invocation with JDK11

2020-06-09 Thread GitBox


Apache-HBase commented on pull request #1871:
URL: https://github.com/apache/hbase/pull/1871#issuecomment-641565361


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 28s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   ||| _ Other Tests _ |
   |  |   |   1m 32s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1871/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1871 |
   | Optional Tests |  |
   | uname | Linux b60c6cd54cc6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 474d200daa |
   | Max. process+thread count | 45 (vs. ulimit of 12500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1871/5/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-24510) Remove HBaseTestCase and GenericTestUtils

2020-06-09 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129794#comment-17129794
 ] 

Nick Dimiduk commented on HBASE-24510:
--

bq. IIRC the assumption is, the code under test directory are private by 
default, unless we explicitly set to IA.Public, like HBTU.

I agree with you. The trouble is we haven't done a good job of making HBTU 
independently consumable, without all the other stuff in {{src/test}}. It could 
be that these classes are no longer reachable from the APIs of HBTU (certainly 
{{HBaseTestCase}} shouldn't be; I'm not as sure about {{GenericTestUtils}}).

My preference is that we reboot the testing support we provide to downstreamers 
before we start deleting utility classes that have been around for years. We 
would need a jar with classes in {{src/main}} that provide the functionality of 
HTU and whatever other supporting functions are required. Those classes would 
be {{IA.Public}} with a clear compatibility contract. Once we have that, 
everything else can be deprecated and removed.

I don't know if we have a Jira for that specific effort. I've seen a number of 
tickets like this one that are trying to chip away at the problem.

> Remove HBaseTestCase and GenericTestUtils
> -
>
> Key: HBASE-24510
> URL: https://issues.apache.org/jira/browse/HBASE-24510
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0-alpha-1
>
>
> It is still a junit3 style test base, let's remove it.
> GenericTestUtils is also useless, remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] ndimiduk commented on a change in pull request #1871: HBASE-24005 Document maven invocation with JDK11

2020-06-09 Thread GitBox


ndimiduk commented on a change in pull request #1871:
URL: https://github.com/apache/hbase/pull/1871#discussion_r437707270



##
File path: src/main/asciidoc/_chapters/developer.adoc
##
@@ -397,28 +396,109 @@ mvn clean install -DskipTests
 See the <> section in 
<>
 
 [[maven.build.hadoop]]
- Building against various hadoop versions.
+ Building against various Hadoop versions
+
+HBase supports building against Apache Hadoop versions: 2.y and 3.y (early 
release artifacts).
+Exactly which version of Hadoop is used by default varies by release branch. 
See the section
+<> for the complete breakdown of supported Hadoop version by 
HBase release.
+
+The mechanism for selecting a Hadoop version at build time is identical across 
all releases. Which
+version of Hadoop is default varies. We manage Hadoop major version selection 
by way of Maven
+profiles. Due to the peculiarities of Maven profile mutual exclusion, the 
profile that builds
+against a particular Hadoop version is activated by setting a property, *not* 
the usual profile
+activation. Hadoop version profile activation is summarized by the following 
table.
+
+.Hadoop Profile Activation by HBase Release
+[cols="3*^.^", options="header"]
+|===
+| | Hadoop2 Activation | Hadoop3 Activation
+| HBase 1.3+ | _active by default_ | `-Dhadoop.profile=3.0`
+| HBase 3.0+ | _not supported_ | _active by default_
+|===
+
+[WARNING]
+
+Please note that where a profile is active by default, `hadoop.profile` must 
NOT be provided.
+
+
+Once the Hadoop major version profile is activated, the exact Hadoop version 
can be
+specified by overriding the appropriate property value. For Hadoop2 versions, 
the property name
+is `hadoop-two.version`. With Hadoop3 versions, the property name is 
`hadoop-three.version`.
 
-HBase supports building against Apache Hadoop versions: 2.y and 3.y (early 
release artifacts). By default we build against Hadoop 2.x.
+.Example 1, Building HBase 1.7 against Hadoop 2.10.0
 
-To build against a specific release from the Hadoop 2.y line, set e.g. 
`-Dhadoop-two.version=2.6.3`.
+For example, to build HBase 1.7 against Hadoop 2.10.0, the profile is set for 
Hadoop2 by default,
+so only `hadoop-two.version` must be specified:
 
 [source,bourne]
 
-mvn -Dhadoop-two.version=2.6.3 ...
+git checkout branch-1
+mvn -Dhadoop-two.version=2.10.0 ...
 
 
-To change the major release line of Hadoop we build against, add a 
hadoop.profile property when you invoke +mvn+:
+.Example 2, Building HBase 2.3 against Hadoop 3.3.0-SNAPSHOT

Review comment:
   @busbey updated the language around the SNAPSHOT dependency, let me know 
if this satisfies.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1257: HBASE 23887 Up to 3x increase BlockCache performance

2020-06-09 Thread GitBox


Apache-HBase commented on pull request #1257:
URL: https://github.com/apache/hbase/pull/1257#issuecomment-641558578


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 18s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 41s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 21s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 42s |  hbase-server in master failed.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 23s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 40s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  47m 32s |  hbase-server in the patch failed.  |
   |  |   |  75m 41s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/33/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1257 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 40931749bed6 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 474d200daa |
   | Default Java | 2020-01-14 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/33/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/33/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/33/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/33/testReport/
 |
   | Max. process+thread count | 612 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/33/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1257: HBASE 23887 Up to 3x increase BlockCache performance

2020-06-09 Thread GitBox


Apache-HBase commented on pull request #1257:
URL: https://github.com/apache/hbase/pull/1257#issuecomment-641556810


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  7s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  4s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m  7s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 43s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 57s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m  4s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  47m  2s |  hbase-server in the patch failed.  |
   |  |   |  72m 32s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/33/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1257 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 86bb2a20a189 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 
11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 474d200daa |
   | Default Java | 1.8.0_232 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/33/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/33/testReport/
 |
   | Max. process+thread count | 795 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/33/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-12187) Review in source the paper "Simple Testing Can Prevent Most Critical Failures"

2020-06-09 Thread Ding Yuan (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-12187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129775#comment-17129775
 ] 

Ding Yuan commented on HBASE-12187:
---

[~mattf] the empty catch block rule is now part of [error-prone's newest 
release (v2.4.0)|https://github.com/google/error-prone/releases/tag/v2.4.0]. 
This is by far the most important rule among Aspirator's rules (vast majority 
of the bugs fall under this category). Try it :)
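
For readers who have not run the tool: the pattern this rule targets is an exception caught and then silently dropped. A minimal, self-contained illustration (class and method names are invented for the example, not taken from HBase):

{code}
import java.io.IOException;

public class EmptyCatchExample {
  // The anti-pattern: the failure vanishes, leaving no trace for operators.
  static void closeQuietlyBad(AutoCloseable c) {
    try {
      c.close();
    } catch (Exception e) {
    }
  }

  // A minimal improvement: if the exception really is ignorable, say so and record it.
  static void closeQuietlyBetter(AutoCloseable c) {
    try {
      c.close();
    } catch (Exception e) {
      System.err.println("Ignoring failure while closing resource: " + e);
    }
  }

  public static void main(String[] args) {
    closeQuietlyBad(() -> { throw new IOException("boom"); });
    closeQuietlyBetter(() -> { throw new IOException("boom"); });
  }
}
{code}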

> Review in source the paper "Simple Testing Can Prevent Most Critical Failures"
> --
>
> Key: HBASE-12187
> URL: https://issues.apache.org/jira/browse/HBASE-12187
> Project: HBase
>  Issue Type: Bug
>Reporter: Michael Stack
>Priority: Critical
> Attachments: HBASE-12187.patch, abortInOvercatch.warnings.txt, 
> emptyCatch.warnings.txt, todoInCatch.warnings.txt
>
>
> Review the helpful paper 
> https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-yuan.pdf
> It describes 'catastrophic failures', especially issues where exceptions are 
> thrown but not properly handled.  Their static analysis tool Aspirator turns 
> up a bunch of the obvious offenders (Lets add to test-patch.sh alongside 
> findbugs?).  This issue is about going through code base making sub-issues to 
> root out these and others (Don't we have the test described in figure #6 
> already? I thought we did?  If we don't, need to add).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24523) Include target hostname/ip/port in `ipc.ServerNotRunningYetException`

2020-06-09 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129774#comment-17129774
 ] 

Nick Dimiduk commented on HBASE-24523:
--

I am not. Please help yourself.

> Include target hostname/ip/port in `ipc.ServerNotRunningYetException`
> -
>
> Key: HBASE-24523
> URL: https://issues.apache.org/jira/browse/HBASE-24523
> Project: HBase
>  Issue Type: Task
>  Components: IPC/RPC
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Priority: Major
>
> We get almost 100 lines of exception backtrace along with this exception, but 
> the identity of the server with which the client is trying to communicate is 
> not included. For example,
> {noformat}
> 2020-06-06 00:35:37,123 WARN  [ChaosMonkey] client.ConnectionImplementation: 
> Checking master connection
> org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: 
> org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not 
> running yet
> at 
> org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2901)
> at 
> org.apache.hadoop.hbase.master.MasterRpcServices.isMasterRunning(MasterRpcServices.java:1178)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
> ...
> {noformat}
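
A generic sketch of the ask, not the actual HBase IPC code (all names below are invented): when rethrowing a remote failure, attach the target address so the backtrace identifies the peer.

{code}
import java.io.IOException;
import java.net.InetSocketAddress;

public class RemoteCallExample {
  interface Call<T> {
    T run() throws IOException;
  }

  // Wrap the remote exception with the host/port we were calling, so operators
  // do not have to guess which server produced the long backtrace.
  static <T> T callWithTarget(InetSocketAddress target, Call<T> call) throws IOException {
    try {
      return call.run();
    } catch (IOException e) {
      throw new IOException("Call to " + target.getHostString() + ":" + target.getPort()
          + " failed: " + e.getMessage(), e);
    }
  }

  public static void main(String[] args) throws IOException {
    InetSocketAddress target = new InetSocketAddress("hbasedn139.example.com", 16020);
    System.out.println(callWithTarget(target, () -> "ok"));
  }
}
{code}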



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase-operator-tools] ndimiduk commented on a change in pull request #62: HBCK takes one or more files for assigns

2020-06-09 Thread GitBox


ndimiduk commented on a change in pull request #62:
URL: 
https://github.com/apache/hbase-operator-tools/pull/62#discussion_r437699655



##
File path: hbase-hbck2/src/test/java/org/apache/hbase/TestHBCK2.java
##
@@ -266,6 +277,29 @@ public void testFormatReportMissingInMetaOneMissing() 
throws IOException {
 assertTrue(result.contains(expectedResult));
   }
 
+  private void unassigns(List<RegionInfo> regions, String[] regionStrsArray) throws IOException {

Review comment:
   nice refactor.

##
File path: hbase-hbck2/src/main/java/org/apache/hbase/HBCK2.java
##
@@ -294,7 +301,29 @@ int setRegionState(ClusterConnection connection, String 
region,
   return null;
 }
 boolean overrideFlag = commandLine.hasOption(override.getOpt());
-return hbck.assigns(commandLine.getArgList(), overrideFlag);
+
+List<String> argList = commandLine.getArgList();
+if (!commandLine.hasOption(inputFile.getOpt())) {
+  return hbck.assigns(argList, overrideFlag);
+} else {
+  List<String> assignmentList = new ArrayList<>();
+  for (String filePath : argList) {
+try {
+  File file = new File(filePath);
+  FileReader fileReader = new FileReader(file);
+  BufferedReader bufferedReader = new BufferedReader(fileReader);
+  String regionName;
+  while ((regionName = bufferedReader.readLine()) != null) {

Review comment:
   doesn't checkstyle complain about this while-loop construct? Oh, I see 
we have no precommit job for this project :(

##
File path: hbase-hbck2/src/main/java/org/apache/hbase/HBCK2.java
##
@@ -109,7 +111,10 @@
   private static final String ADD_MISSING_REGIONS_IN_META_FOR_TABLES =
 "addFsRegionsMissingInMeta";
   private static final String REPORT_MISSING_REGIONS_IN_META = 
"reportMissingRegionsInMeta";
+
   static final String EXTRA_REGIONS_IN_META = "extraRegionsInMeta";
+  static final String ASSIGNS = "assigns";

Review comment:
   nit: I prefer constants used in APIs/CLIs to remain private, and the 
tests to duplicate the values. That way, it's easier to notice a breaking 
change when refactoring existing code -- the test that duplicates the value 
(rather than referring to the constant in the source class) will start breaking.
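
A hedged illustration of that convention (classes invented, not the real HBCK2 sources): the production class keeps the verb private, and the test re-types the literal instead of importing the constant, so an accidental rename surfaces as a test failure.

```java
// Production side: the CLI verb stays private to the tool class.
final class ToolCommands {
  private static final String ASSIGNS = "assigns";

  static boolean isAssigns(String verb) {
    return ASSIGNS.equals(verb);
  }
}

// Test side: duplicate the literal on purpose. If the verb is ever renamed,
// this check breaks and flags the user-facing incompatibility.
final class ToolCommandsTest {
  public static void main(String[] args) {
    if (!ToolCommands.isAssigns("assigns")) {
      throw new AssertionError("CLI verb 'assigns' changed -- breaking change for users");
    }
  }
}
```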

##
File path: hbase-hbck2/src/test/java/org/apache/hbase/TestHBCK2.java
##
@@ -127,32 +130,40 @@ public void testAssigns() throws IOException {
 getRegionStates().getRegionState(ri.getEncodedName());
 LOG.info("RS: {}", rs.toString());
   }
-  List regionStrs =
-  
regions.stream().map(RegionInfo::getEncodedName).collect(Collectors.toList());
-  String [] regionStrsArray = regionStrs.toArray(new String[] {});
+  String [] regionStrsArray  =
+  
regions.stream().map(RegionInfo::getEncodedName).collect(Collectors.toList())
+  .toArray(new String[] {});
+
   try (ClusterConnection connection = this.hbck2.connect(); Hbck hbck = 
connection.getHbck()) {
-List pids = this.hbck2.unassigns(hbck, regionStrsArray);
-waitOnPids(pids);
-for (RegionInfo ri : regions) {
-  RegionState rs = 
TEST_UTIL.getHBaseCluster().getMaster().getAssignmentManager().
-  getRegionStates().getRegionState(ri.getEncodedName());
-  LOG.info("RS: {}", rs.toString());
-  assertTrue(rs.toString(), rs.isClosed());
-}
-pids = this.hbck2.assigns(hbck, regionStrsArray);
+unassigns(regions, regionStrsArray);
+List pids = this.hbck2.assigns(hbck, regionStrsArray);
 waitOnPids(pids);
-for (RegionInfo ri : regions) {
-  RegionState rs = 
TEST_UTIL.getHBaseCluster().getMaster().getAssignmentManager().
-  getRegionStates().getRegionState(ri.getEncodedName());
-  LOG.info("RS: {}", rs.toString());
-  assertTrue(rs.toString(), rs.isOpened());
-}
+validateOpen(regions);
 // What happens if crappy region list passed?
 pids = this.hbck2.assigns(hbck, Arrays.stream(new String[]{"a", "some 
rubbish name"}).
-collect(Collectors.toList()).toArray(new String[]{}));
+collect(Collectors.toList()).toArray(new String[]{}));
 for (long pid : pids) {
   
assertEquals(org.apache.hadoop.hbase.procedure2.Procedure.NO_PROC_ID, pid);
 }
+
+// test input files
+unassigns(regions, regionStrsArray);
+String testFile = "inputForAssignsTest";
+FileOutputStream output = new FileOutputStream(testFile, false);
+for (String regionStr : regionStrsArray) {
+  output.write((regionStr + System.lineSeparator()).getBytes());
+}
+output.close();
+String result = testRunWithArgs(new String[] {HBCK2.ASSIGNS, "-i", 
testFile});
+Scanner scanner = new Scanner(result).useDelimiter("[\\D]+");
+pids = new ArrayList<>();
+while (scanner.hasNext(

[jira] [Updated] (HBASE-24367) ScheduledChore log elapsed timespan in a human-friendly format

2020-06-09 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-24367:
-
Fix Version/s: (was: 2.4.0)
   (was: 2.3.3)
   2.3.0

> ScheduledChore log elapsed timespan in a human-friendly format
> --
>
> Key: HBASE-24367
> URL: https://issues.apache.org/jira/browse/HBASE-24367
> Project: HBase
>  Issue Type: Task
>  Components: master, regionserver
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Andrew Kyle Purtell
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.6
>
>
> I noticed this in a log line,
> {noformat}
> 2020-04-23 18:31:14,183 INFO org.apache.hadoop.hbase.ScheduledChore: 
> host-a.example.com,16000,1587577999888-ClusterStatusChore average execution 
> time: 68488258 ns.
> {noformat}
> I'm not sure if there's a case when elapsed time in nanoseconds is meaningful 
> for these background chores, but we could do a little work before printing 
> the number and time unit to truncate precision down to something a little 
> more intuitive for operators. This number purports to be an average, so a 
> high level of precision isn't necessarily meaningful.
> Separately, or while we're here, if we think an operator really cares about 
> the performance of this chore, we should print a histogram of elapsed times, 
> rather than an opaque average.
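
A minimal sketch of the precision truncation the description asks for (class and method names invented; the eventual HBase fix may differ):

{code}
import java.util.concurrent.TimeUnit;

public class HumanDuration {
  // Render a nanosecond measurement at a coarser, operator-friendly precision.
  static String humanize(long nanos) {
    if (nanos < TimeUnit.MICROSECONDS.toNanos(1)) {
      return nanos + " ns";
    } else if (nanos < TimeUnit.MILLISECONDS.toNanos(1)) {
      return TimeUnit.NANOSECONDS.toMicros(nanos) + " us";
    } else if (nanos < TimeUnit.SECONDS.toNanos(1)) {
      return TimeUnit.NANOSECONDS.toMillis(nanos) + " ms";
    }
    return String.format("%.2f s", nanos / 1_000_000_000.0);
  }

  public static void main(String[] args) {
    // The 68488258 ns from the log line above prints as "68 ms".
    System.out.println(humanize(68488258L));
  }
}
{code}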



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24526) Deadlock executing assign meta procedure

2020-06-09 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129770#comment-17129770
 ] 

Michael Stack commented on HBASE-24526:
---

These went in last night; seem related:

{code}
commit 4486a565b5cd9b9304701bc24c0f7d30cf174711
Author: Duo Zhang 
Date:   Tue Jun 9 11:07:16 2020 +0800

HBASE-24117 Shutdown AssignmentManager before ProcedureExecutor may cause 
SCP to accidentally skip assigning a region (#1865)

Signed-off-by: Michael Stack 

commit dd1010c15d1737d6f83497ef56e4dad09d80ac74
Author: Duo Zhang 
Date:   Tue Jun 9 08:14:00 2020 +0800

HBASE-24517 AssignmentManager.start should add meta region to 
ServerStateNode (#1866)

Signed-off-by: Viraj Jasani 
Signed-off-by: Wellington Ramos Chevreuil 
{code}

> Deadlock executing assign meta procedure
> 
>
> Key: HBASE-24526
> URL: https://issues.apache.org/jira/browse/HBASE-24526
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, Region Assignment
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Priority: Critical
>
> I have what appears to be a deadlock while assigning meta. During recovery, 
> master creates the assign procedure for meta, and immediately marks meta as 
> assigned in zookeeper. It then creates the subprocedure to open meta on the 
> target region. However, the PEWorker pool is full of procedures that are 
> stuck, I think because their calls to update meta are going nowhere. For what 
> it's worth, the balancer is running concurrently, and has calculated a plan 
> size of 41.
> From the master log,
> {noformat}
> 2020-06-06 00:34:07,314 INFO 
> org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure: 
> Starting pid=17802, ppid=17801, 
> state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; 
> TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; 
> state=OPEN, location=null; forceNewPlan=true, retain=false
> 2020-06-06 00:34:07,465 INFO 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator: Setting hbase:meta 
> (replicaId=0) location in ZooKeeper as 
> hbasedn139.example.com,16020,1591403576247
> 2020-06-06 00:34:07,466 INFO 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Initialized 
> subprocedures=[{pid=17803, ppid=17802, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
> {noformat}
> {{pid=17803}} is not mentioned again. hbasedn139 never receives an 
> {{openRegion}} RPC.
> Meanwhile, additional procedures are scheduled and picked up by workers, each 
> getting "stuck". I see log lines for all 16 PEWorker threads, saying that 
> they are stuck.
> {noformat}
> 2020-06-06 00:34:07,961 INFO 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler: Took xlock 
> for pid=17804, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
> TransitRegionStateProcedure table=IntegrationTestBigLinkedList, 
> region=54f4f6c0e921e6d25e6043cba79c09aa, REOPEN/MOVE
> 2020-06-06 00:34:07,961 INFO 
> org.apache.hadoop.hbase.master.assignment.RegionStateStore: pid=17804 
> updating hbase:meta row=54f4f6c0e921e6d25e6043cba79c09aa, 
> regionState=CLOSING, regionLocation=hbasedn046.example.com,16020,1591402383956
> ...
> 2020-06-06 00:34:22,295 WARN 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Worker stuck 
> PEWorker-16(pid=17804), run time 14.3340 sec
> ...
> 2020-06-06 00:34:27,295 WARN 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Worker stuck 
> PEWorker-16(pid=17804), run time 19.3340 sec
> ...
> {noformat}
> The cluster stays in this state, with PEWorker thread stuck for upwards of 15 
> minutes. Eventually master starts logging
> {noformat}
> 2020-06-06 00:50:18,033 INFO 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl: Call exception, 
> tries=30, retries=31, started=970072 ms ago, cancelled=false, msg=Call queue 
> is full on hbasedn139.example.com,16020,1591403576247, too many items queued 
> ?, details=row 
> 'IntegrationTestBigLinkedList,,1591398987965.54f4f6c0e921e6d25e6043cba79c09aa.'
>  on table 'hbase:meta' at region=hbase:meta,,1.
> 1588230740, hostname=hbasedn139.example.com,16020,1591403576247, seqNum=-1, 
> see https://s.apache.org/timeout
> {noformat}
> The master never recovers on its own.
> I'm not sure how common this condition might be. This popped after about 20 
> total hours of running ITBLL with ServerKillingMonkey.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-24521) Reenable the TestExportSnapshot family of tests

2020-06-09 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-24521:
-
Affects Version/s: 2.3.0
   3.0.0-alpha-1

> Reenable the TestExportSnapshot family of tests
> ---
>
> Key: HBASE-24521
> URL: https://issues.apache.org/jira/browse/HBASE-24521
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha-1, 2.3.0
>Reporter: Michael Stack
>Priority: Major
>
> The YARN history server can take a while to start. The wait is hardcoded at one 
> minute before everything is killed, and on loaded servers startup can take that 
> long. We cannot disable the history server in tests, nor can we configure the 
> mini YARN cluster to give the history server more time to start. Once this is 
> addressed, and the YARN history server is given more time or disabled, reenable 
> the TestExportSnapshot* family of tests. See parent issue for details.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24521) Reenable the TestExportSnapshot family of tests

2020-06-09 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129769#comment-17129769
 ] 

Nick Dimiduk commented on HBASE-24521:
--

Filed a ticket back with YARN to see about getting things fixed.

> Reenable the TestExportSnapshot family of tests
> ---
>
> Key: HBASE-24521
> URL: https://issues.apache.org/jira/browse/HBASE-24521
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Michael Stack
>Priority: Major
>
> The YARN history server can take a while to start. The wait is hardcoded at one 
> minute before everything is killed, and on loaded servers startup can take that 
> long. We cannot disable the history server in tests, nor can we configure the 
> mini YARN cluster to give the history server more time to start. Once this is 
> addressed, and the YARN history server is given more time or disabled, reenable 
> the TestExportSnapshot* family of tests. See parent issue for details.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24526) Deadlock executing assign meta procedure

2020-06-09 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129760#comment-17129760
 ] 

Nick Dimiduk commented on HBASE-24526:
--

I have a couple observations, I propose each here for discussion:
# Master should not set meta in ZooKeeper until after the region open on a 
region server has been acknowledged. This would allow other procedures 
involving meta edits to short-circuit abort for reschedule, since they would 
know they cannot make progress in this state.
# The PEWorker pool should prioritize operations involving meta (and other 
system tables). I presume there's some kind of queuing mechanism here, which 
may be a false assumption.
# Work dispatched to a PEWorker thread should not be permitted to hold that 
thread indefinitely. We should have an external mechanism, or something in the 
PEWorker's run loop, that interrupts procedure execution after a time limit. 
Procedures are designed to be durable and resumable, so this shouldn't impact 
correctness.
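
On the third point, a generic sketch of time-boxing worker threads with plain java.util.concurrent primitives (this is not the ProcedureExecutor API; the time limit and names are invented):

{code}
import java.util.concurrent.*;

public class TimeBoxedWorker {
  private static final long TIME_LIMIT_SECONDS = 30; // illustrative budget only

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(1);
    Future<?> work = pool.submit(() -> {
      try {
        // Stand-in for a procedure step that can block indefinitely.
        Thread.sleep(TimeUnit.MINUTES.toMillis(10));
      } catch (InterruptedException e) {
        // A durable, resumable procedure would persist its state and reschedule here.
        Thread.currentThread().interrupt();
      }
    });
    try {
      work.get(TIME_LIMIT_SECONDS, TimeUnit.SECONDS);
    } catch (TimeoutException e) {
      work.cancel(true); // interrupt instead of letting the task hold the worker slot
    } finally {
      pool.shutdown();
    }
  }
}
{code}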

> Deadlock executing assign meta procedure
> 
>
> Key: HBASE-24526
> URL: https://issues.apache.org/jira/browse/HBASE-24526
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, Region Assignment
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Priority: Critical
>
> I have what appears to be a deadlock while assigning meta. During recovery, 
> master creates the assign procedure for meta, and immediately marks meta as 
> assigned in zookeeper. It then creates the subprocedure to open meta on the 
> target region. However, the PEWorker pool is full of procedures that are 
> stuck, I think because their calls to update meta are going nowhere. For what 
> it's worth, the balancer is running concurrently, and has calculated a plan 
> size of 41.
> From the master log,
> {noformat}
> 2020-06-06 00:34:07,314 INFO 
> org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure: 
> Starting pid=17802, ppid=17801, 
> state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; 
> TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; 
> state=OPEN, location=null; forceNewPlan=true, retain=false
> 2020-06-06 00:34:07,465 INFO 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator: Setting hbase:meta 
> (replicaId=0) location in ZooKeeper as 
> hbasedn139.example.com,16020,1591403576247
> 2020-06-06 00:34:07,466 INFO 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Initialized 
> subprocedures=[{pid=17803, ppid=17802, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
> {noformat}
> {{pid=17803}} is not mentioned again. hbasedn139 never receives an 
> {{openRegion}} RPC.
> Meanwhile, additional procedures are scheduled and picked up by workers, each 
> getting "stuck". I see log lines for all 16 PEWorker threads, saying that 
> they are stuck.
> {noformat}
> 2020-06-06 00:34:07,961 INFO 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler: Took xlock 
> for pid=17804, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
> TransitRegionStateProcedure table=IntegrationTestBigLinkedList, 
> region=54f4f6c0e921e6d25e6043cba79c09aa, REOPEN/MOVE
> 2020-06-06 00:34:07,961 INFO 
> org.apache.hadoop.hbase.master.assignment.RegionStateStore: pid=17804 
> updating hbase:meta row=54f4f6c0e921e6d25e6043cba79c09aa, 
> regionState=CLOSING, regionLocation=hbasedn046.example.com,16020,1591402383956
> ...
> 2020-06-06 00:34:22,295 WARN 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Worker stuck 
> PEWorker-16(pid=17804), run time 14.3340 sec
> ...
> 2020-06-06 00:34:27,295 WARN 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Worker stuck 
> PEWorker-16(pid=17804), run time 19.3340 sec
> ...
> {noformat}
> The cluster stays in this state, with PEWorker thread stuck for upwards of 15 
> minutes. Eventually master starts logging
> {noformat}
> 2020-06-06 00:50:18,033 INFO 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl: Call exception, 
> tries=30, retries=31, started=970072 ms ago, cancelled=false, msg=Call queue 
> is full on hbasedn139.example.com,16020,1591403576247, too many items queued 
> ?, details=row 
> 'IntegrationTestBigLinkedList,,1591398987965.54f4f6c0e921e6d25e6043cba79c09aa.'
>  on table 'hbase:meta' at region=hbase:meta,,1.
> 1588230740, hostname=hbasedn139.example.com,16020,1591403576247, seqNum=-1, 
> see https://s.apache.org/timeout
> {noformat}
> The master never recovers on its own.
> I'm not sure how common this condition might be. This popped after about 20 
> total hours of running ITBLL with ServerKillingMonkey.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24526) Deadlock executing assign meta procedure

2020-06-09 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129756#comment-17129756
 ] 

Nick Dimiduk commented on HBASE-24526:
--

bq. Is there reason to believe this is new for 2.3, or we just haven't gotten 
evidence that it's elsewhere yet?

I don't have an assessment one way or the other. I suspect it's possible on 
anything running procV2 (PEWorker pool getting completely consumed), but it may 
only be possible in combination with the details of this 
{{TransitionRegionStateProcedure}} and how it handles meta. I was running with 
branch-2.3, {{02c099d566}}.

> Deadlock executing assign meta procedure
> 
>
> Key: HBASE-24526
> URL: https://issues.apache.org/jira/browse/HBASE-24526
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, Region Assignment
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Priority: Critical
>
> I have what appears to be a deadlock while assigning meta. During recovery, 
> master creates the assign procedure for meta, and immediately marks meta as 
> assigned in zookeeper. It then creates the subprocedure to open meta on the 
> target region. However, the PEWorker pool is full of procedures that are 
> stuck, I think because their calls to update meta are going nowhere. For what 
> it's worth, the balancer is running concurrently, and has calculated a plan 
> size of 41.
> From the master log,
> {noformat}
> 2020-06-06 00:34:07,314 INFO 
> org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure: 
> Starting pid=17802, ppid=17801, 
> state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; 
> TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; 
> state=OPEN, location=null; forceNewPlan=true, retain=false
> 2020-06-06 00:34:07,465 INFO 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator: Setting hbase:meta 
> (replicaId=0) location in ZooKeeper as 
> hbasedn139.example.com,16020,1591403576247
> 2020-06-06 00:34:07,466 INFO 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Initialized 
> subprocedures=[{pid=17803, ppid=17802, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
> {noformat}
> {{pid=17803}} is not mentioned again. hbasedn139 never receives an 
> {{openRegion}} RPC.
> Meanwhile, additional procedures are scheduled and picked up by workers, each 
> getting "stuck". I see log lines for all 16 PEWorker threads, saying that 
> they are stuck.
> {noformat}
> 2020-06-06 00:34:07,961 INFO 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler: Took xlock 
> for pid=17804, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
> TransitRegionStateProcedure table=IntegrationTestBigLinkedList, 
> region=54f4f6c0e921e6d25e6043cba79c09aa, REOPEN/MOVE
> 2020-06-06 00:34:07,961 INFO 
> org.apache.hadoop.hbase.master.assignment.RegionStateStore: pid=17804 
> updating hbase:meta row=54f4f6c0e921e6d25e6043cba79c09aa, 
> regionState=CLOSING, regionLocation=hbasedn046.example.com,16020,1591402383956
> ...
> 2020-06-06 00:34:22,295 WARN 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Worker stuck 
> PEWorker-16(pid=17804), run time 14.3340 sec
> ...
> 2020-06-06 00:34:27,295 WARN 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Worker stuck 
> PEWorker-16(pid=17804), run time 19.3340 sec
> ...
> {noformat}
> The cluster stays in this state, with PEWorker thread stuck for upwards of 15 
> minutes. Eventually master starts logging
> {noformat}
> 2020-06-06 00:50:18,033 INFO 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl: Call exception, 
> tries=30, retries=31, started=970072 ms ago, cancelled=false, msg=Call queue 
> is full on hbasedn139.example.com,16020,1591403576247, too many items queued 
> ?, details=row 
> 'IntegrationTestBigLinkedList,,1591398987965.54f4f6c0e921e6d25e6043cba79c09aa.'
>  on table 'hbase:meta' at region=hbase:meta,,1.
> 1588230740, hostname=hbasedn139.example.com,16020,1591403576247, seqNum=-1, 
> see https://s.apache.org/timeout
> {noformat}
> The master never recovers on its own.
> I'm not sure how common this condition might be. This popped after about 20 
> total hours of running ITBLL with ServerKillingMonkey.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] bharathv commented on a change in pull request #1879: HBASE-24525 [branch-1] Support ZooKeeper 3.6.0+

2020-06-09 Thread GitBox


bharathv commented on a change in pull request #1879:
URL: https://github.com/apache/hbase/pull/1879#discussion_r437681253



##
File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java
##
@@ -85,14 +85,20 @@ public static void main(String[] args) {
   }
 
   private static void runZKServer(QuorumPeerConfig zkConfig) throws 
UnknownHostException, IOException {

Review comment:
   UnknownHostException is also an IOE, so I think you can just remove 
UnknownHostException from the method signature and all is good? No need for 
try/catch/wrap blocks?
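
For reference, a tiny sketch of the simplification being suggested, relying only on the standard-library fact that UnknownHostException extends IOException (the resolver method below is invented):

```java
import java.io.IOException;
import java.net.InetAddress;

public class SignatureExample {
  // Declaring both exceptions is redundant: UnknownHostException is an IOException,
  // so `throws IOException` alone covers every caller.
  static InetAddress resolve(String host) throws IOException {
    return InetAddress.getByName(host); // may throw UnknownHostException, still an IOException
  }

  public static void main(String[] args) throws IOException {
    System.out.println(resolve("localhost"));
  }
}
```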

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperMainServer.java
##
@@ -66,8 +66,14 @@ public HACK_UNTIL_ZOOKEEPER_1897_ZooKeeperMain(String[] args)
  * @throws IOException
  * @throws InterruptedException
  */
-void runCmdLine() throws KeeperException, IOException, 
InterruptedException {
-  processCmd(this.cl);
+void runCmdLine() throws IOException, InterruptedException {
+  try {
+processCmd(this.cl);
+  } catch (IOException | InterruptedException e) {
+throw e;
+  } catch (Exception e) {

Review comment:
   Why this? Any unchecked exception is propagated as-is?

##
File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestHQuorumPeer.java
##
@@ -111,25 +115,39 @@
 QuorumPeerConfig config = new QuorumPeerConfig();
 config.parseProperties(properties);
 
-assertEquals(this.dataDir.toString(), config.getDataDir());
+assertEquals(this.dataDir.toString(), config.getDataDir().toString());
 assertEquals(2181, config.getClientPortAddress().getPort());
 Map<Long, QuorumServer> servers = config.getServers();
 assertEquals(3, servers.size());
 assertTrue(servers.containsKey(Long.valueOf(0)));
 QuorumServer server = servers.get(Long.valueOf(0));
-assertEquals("localhost", server.addr.getHostName());
+assertEquals("localhost", getHostName(server));
 
 // Override with system property.
 System.setProperty("hbase.master.hostname", "foo.bar");
 is = new ByteArrayInputStream(s.getBytes());
 properties = ZKConfig.parseZooCfg(conf, is);
 assertEquals("foo.bar:2888:3888", properties.get("server.0"));
-
 config.parseProperties(properties);
 
 servers = config.getServers();
 server = servers.get(Long.valueOf(0));
-assertEquals("foo.bar", server.addr.getHostName());
+assertEquals("foo.bar", getHostName(server));
+  }
+
+  private static String getHostName(QuorumServer server) throws Exception {
+String hostname;
+switch (server.addr.getClass().getName()) {

Review comment:
   nit: I think this is for cross-version compatibility; a quick comment would 
be nice. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1257: HBASE 23887 Up to 3x increase BlockCache performance

2020-06-09 Thread GitBox


Apache-HBase commented on pull request #1257:
URL: https://github.com/apache/hbase/pull/1257#issuecomment-641538051


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 30s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m  0s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 24s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  hbase-server: The patch 
generated 0 new + 9 unchanged - 2 fixed = 9 total (was 11)  |
   | -0 :warning: |  whitespace  |   0m  0s |  The patch has 3 line(s) that end 
in whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  hadoopcheck  |  10m 59s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   2m  4s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 14s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  32m  3s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/33/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1257 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti 
checkstyle |
   | uname | Linux e93e1a43029e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 474d200daa |
   | whitespace | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/33/artifact/yetus-general-check/output/whitespace-eol.txt
 |
   | Max. process+thread count | 94 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/33/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-24526) Deadlock executing assign meta procedure

2020-06-09 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129749#comment-17129749
 ] 

Sean Busbey commented on HBASE-24526:
-

Is there reason to believe this is new for 2.3, or we just haven't gotten 
evidence that it's elsewhere yet?

> Deadlock executing assign meta procedure
> 
>
> Key: HBASE-24526
> URL: https://issues.apache.org/jira/browse/HBASE-24526
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, Region Assignment
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Priority: Critical
>
> I have what appears to be a deadlock while assigning meta. During recovery, 
> master creates the assign procedure for meta, and immediately marks meta as 
> assigned in zookeeper. It then creates the subprocedure to open meta on the 
> target region. However, the PEWorker pool is full of procedures that are 
> stuck, I think because their calls to update meta are going nowhere. For what 
> it's worth, the balancer is running concurrently, and has calculated a plan 
> size of 41.
> From the master log,
> {noformat}
> 2020-06-06 00:34:07,314 INFO 
> org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure: 
> Starting pid=17802, ppid=17801, 
> state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; 
> TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; 
> state=OPEN, location=null; forceNewPlan=true, retain=false
> 2020-06-06 00:34:07,465 INFO 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator: Setting hbase:meta 
> (replicaId=0) location in ZooKeeper as 
> hbasedn139.example.com,16020,1591403576247
> 2020-06-06 00:34:07,466 INFO 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Initialized 
> subprocedures=[{pid=17803, ppid=17802, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
> {noformat}
> {{pid=17803}} is not mentioned again. hbasedn139 never receives an 
> {{openRegion}} RPC.
> Meanwhile, additional procedures are scheduled and picked up by workers, each 
> getting "stuck". I see log lines for all 16 PEWorker threads, saying that 
> they are stuck.
> {noformat}
> 2020-06-06 00:34:07,961 INFO 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler: Took xlock 
> for pid=17804, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
> TransitRegionStateProcedure table=IntegrationTestBigLinkedList, 
> region=54f4f6c0e921e6d25e6043cba79c09aa, REOPEN/MOVE
> 2020-06-06 00:34:07,961 INFO 
> org.apache.hadoop.hbase.master.assignment.RegionStateStore: pid=17804 
> updating hbase:meta row=54f4f6c0e921e6d25e6043cba79c09aa, 
> regionState=CLOSING, regionLocation=hbasedn046.example.com,16020,1591402383956
> ...
> 2020-06-06 00:34:22,295 WARN 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Worker stuck 
> PEWorker-16(pid=17804), run time 14.3340 sec
> ...
> 2020-06-06 00:34:27,295 WARN 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Worker stuck 
> PEWorker-16(pid=17804), run time 19.3340 sec
> ...
> {noformat}
> The cluster stays in this state, with PEWorker thread stuck for upwards of 15 
> minutes. Eventually master starts logging
> {noformat}
> 2020-06-06 00:50:18,033 INFO 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl: Call exception, 
> tries=30, retries=31, started=970072 ms ago, cancelled=false, msg=Call queue 
> is full on hbasedn139.example.com,16020,1591403576247, too many items queued 
> ?, details=row 
> 'IntegrationTestBigLinkedList,,1591398987965.54f4f6c0e921e6d25e6043cba79c09aa.'
>  on table 'hbase:meta' at region=hbase:meta,,1.
> 1588230740, hostname=hbasedn139.example.com,16020,1591403576247, seqNum=-1, 
> see https://s.apache.org/timeout
> {noformat}
> The master never recovers on its own.
> I'm not sure how common this condition might be. This popped after about 20 
> total hours of running ITBLL with ServerKillingMonkey.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #1257: HBASE 23887 Up to 3x increase BlockCache performance

2020-06-09 Thread GitBox


Apache-HBase commented on pull request #1257:
URL: https://github.com/apache/hbase/pull/1257#issuecomment-641521867


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  7s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 46s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 27s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 41s |  hbase-server in master failed.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 10s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 25s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 41s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 190m  3s |  hbase-server in the patch passed.  
|
   |  |   | 218m 44s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.11 Server=19.03.11 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/32/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1257 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux f8163abd7f83 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 
11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 15ddded26b |
   | Default Java | 2020-01-14 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/32/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/32/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/32/testReport/
 |
   | Max. process+thread count | 3543 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1257/32/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-24526) Deadlock executing assign meta procedure

2020-06-09 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-24526:
-
Description: 
I have what appears to be a deadlock while assigning meta. During recovery, 
master creates the assign procedure for meta, and immediately marks meta as 
assigned in zookeeper. It then creates the subprocedure to open meta on the 
target region. However, the PEWorker pool is full of procedures that are stuck, 
I think because their calls to update meta are going nowhere. For what it's 
worth, the balancer is running concurrently, and has calculated a plan size of 
41.

From the master log,

{noformat}
2020-06-06 00:34:07,314 INFO 
org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure: Starting 
pid=17802, ppid=17801, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; 
TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; 
state=OPEN, location=null; forceNewPlan=true, retain=false
2020-06-06 00:34:07,465 INFO 
org.apache.hadoop.hbase.zookeeper.MetaTableLocator: Setting hbase:meta 
(replicaId=0) location in ZooKeeper as 
hbasedn139.example.com,16020,1591403576247
2020-06-06 00:34:07,466 INFO 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Initialized 
subprocedures=[{pid=17803, ppid=17802, state=RUNNABLE; 
org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
{noformat}

{{pid=17803}} is not mentioned again. hbasedn139 never receives an 
{{openRegion}} RPC.

Meanwhile, additional procedures are scheduled and picked up by workers, each 
getting "stuck". I see log lines for all 16 PEWorker threads, saying that they 
are stuck.

{noformat}
2020-06-06 00:34:07,961 INFO 
org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler: Took xlock 
for pid=17804, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=IntegrationTestBigLinkedList, 
region=54f4f6c0e921e6d25e6043cba79c09aa, REOPEN/MOVE
2020-06-06 00:34:07,961 INFO 
org.apache.hadoop.hbase.master.assignment.RegionStateStore: pid=17804 updating 
hbase:meta row=54f4f6c0e921e6d25e6043cba79c09aa, regionState=CLOSING, 
regionLocation=hbasedn046.example.com,16020,1591402383956
...
2020-06-06 00:34:22,295 WARN 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Worker stuck 
PEWorker-16(pid=17804), run time 14.3340 sec
...
2020-06-06 00:34:27,295 WARN 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Worker stuck 
PEWorker-16(pid=17804), run time 19.3340 sec
...
{noformat}

The cluster stays in this state, with PEWorker thread stuck for upwards of 15 
minutes. Eventually master starts logging

{noformat}
2020-06-06 00:50:18,033 INFO 
org.apache.hadoop.hbase.client.RpcRetryingCallerImpl: Call exception, tries=30, 
retries=31, started=970072 ms ago, cancelled=false, msg=Call queue is full on 
hbasedn139.example.com,16020,1591403576247, too many items queued ?, 
details=row 
'IntegrationTestBigLinkedList,,1591398987965.54f4f6c0e921e6d25e6043cba79c09aa.' 
on table 'hbase:meta' at region=hbase:meta,,1.
1588230740, hostname=hbasedn139.example.com,16020,1591403576247, seqNum=-1, see 
https://s.apache.org/timeout
{noformat}

The master never recovers on its own.

I'm not sure how common this condition might be. This popped after about 20 
total hours of running ITBLL with ServerKillingMonkey.

  was:
I have what appears to be a deadlock while assigning meta. During recovery, 
master creates the assign procedure for meta, and immediately marks meta as 
assigned in zookeeper. It then creates the subprocedure to open meta on the 
target region. However, the PEWorker pool is full of procedures that are stuck, 
I think because their calls to update meta are going nowhere. For what it's 
worth, the balancer is running concurrently, and has calculated a plan size of 
41.

From the master log,

{noformat}
2020-06-06 00:34:07,314 INFO 
org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure: Starting 
pid=17802, ppid=17801, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; 
TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; 
state=OPEN, location=null; forceNewPlan=true, retain=false
2020-06-06 00:34:07,465 INFO 
org.apache.hadoop.hbase.zookeeper.MetaTableLocator: Setting hbase:meta 
(replicaId=0) location in ZooKeeper as 
hbasedn139.example.com,16020,1591403576247
2020-06-06 00:34:07,466 INFO 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Initialized 
subprocedures=[{pid=17803, ppid=17802, state=RUNNABLE; 
org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
{noformat}

{{pid=17803 }} is not mentioned again. hbasedn139 never receives an 
{{openRegion}} RPC.

Meanwhile, additional procedures are scheduled and picked up by workers, each 
getting "stuck". I see log lines for all 16 PEWorker threads, saying that they 
are stuck.

{noformat}
2020-06-06 00:34:07,961 INFO 
org.apa

[jira] [Created] (HBASE-24526) Deadlock executing assign meta procedure

2020-06-09 Thread Nick Dimiduk (Jira)
Nick Dimiduk created HBASE-24526:


 Summary: Deadlock executing assign meta procedure
 Key: HBASE-24526
 URL: https://issues.apache.org/jira/browse/HBASE-24526
 Project: HBase
  Issue Type: Bug
  Components: proc-v2, Region Assignment
Affects Versions: 2.3.0
Reporter: Nick Dimiduk


I have what appears to be a deadlock while assigning meta. During recovery, 
master creates the assign procedure for meta, and immediately marks meta as 
assigned in zookeeper. It then creates the subprocedure to open meta on the 
target region. However, the PEWorker pool is full of procedures that are stuck, 
I think because their calls to update meta are going nowhere. For what it's 
worth, the balancer is running concurrently, and has calculated a plan size of 
41.

From the master log,

{noformat}
2020-06-06 00:34:07,314 INFO 
org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure: Starting 
pid=17802, ppid=17801, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; 
TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; 
state=OPEN, location=null; forceNewPlan=true, retain=false
2020-06-06 00:34:07,465 INFO 
org.apache.hadoop.hbase.zookeeper.MetaTableLocator: Setting hbase:meta 
(replicaId=0) location in ZooKeeper as 
hbasedn139.example.com,16020,1591403576247
2020-06-06 00:34:07,466 INFO 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Initialized 
subprocedures=[{pid=17803, ppid=17802, state=RUNNABLE; 
org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
{noformat}

{{pid=17803 }} is not mentioned again. hbasedn139 never receives an 
{{openRegion}} RPC.

Meanwhile, additional procedures are scheduled and picked up by workers, each 
getting "stuck". I see log lines for all 16 PEWorker threads, saying that they 
are stuck.

{noformat}
2020-06-06 00:34:07,961 INFO 
org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler: Took xlock 
for pid=17804, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=IntegrationTestBigLinkedList, 
region=54f4f6c0e921e6d25e6043cba79c09aa, REOPEN/MOVE
2020-06-06 00:34:07,961 INFO 
org.apache.hadoop.hbase.master.assignment.RegionStateStore: pid=17804 updating 
hbase:meta row=54f4f6c0e921e6d25e6043cba79c09aa, regionState=CLOSING, 
regionLocation=hbasedn046.example.com,16020,1591402383956
...
2020-06-06 00:34:22,295 WARN 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Worker stuck 
PEWorker-16(pid=17804), run time 14.3340 sec
...
2020-06-06 00:34:27,295 WARN 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Worker stuck 
PEWorker-16(pid=17804), run time 19.3340 sec
...
{noformat}

The cluster stays in this state, with PEWorker thread stuck for upwards of 15 
minutes. Eventually master starts logging

{noformat}
2020-06-06 00:50:18,033 INFO 
org.apache.hadoop.hbase.client.RpcRetryingCallerImpl: Call exception, tries=30, 
retries=31, started=970072 ms ago, cancelled=false, msg=Call queue is full on 
hbasedn139.example.com,16020,1591403576247, too many items queued ?, 
details=row 
'IntegrationTestBigLinkedList,,1591398987965.54f4f6c0e921e6d25e6043cba79c09aa.' 
on table 'hbase:meta' at region=hbase:meta,,1.
1588230740, hostname=hbasedn139.example.com,16020,1591403576247, seqNum=-1, see 
https://s.apache.org/timeout
{noformat}

The master never recovers on its own.

I'm not sure how common this condition might be. This popped after about 20 
total hours of running ITBLL with ServerKillingMonkey.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] apurtell commented on a change in pull request #1879: HBASE-24525 [branch-1] Support ZooKeeper 3.6.0+

2020-06-09 Thread GitBox


apurtell commented on a change in pull request #1879:
URL: https://github.com/apache/hbase/pull/1879#discussion_r437643117



##
File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
##
@@ -1811,9 +1811,10 @@ private static Op toZooKeeperOp(ZooKeeperWatcher zkw, 
ZKUtilOp op)
*/
   public static void multiOrSequential(ZooKeeperWatcher zkw, List<ZKUtilOp> ops,
   boolean runSequentialOnMultiFailure) throws KeeperException {
-if (ops == null) return;
+if (ops == null || ops.isEmpty()) {

Review comment:
   ZooKeeper 3.6 throws NPE if the multi list is empty





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] apurtell commented on a change in pull request #1879: HBASE-24525 [branch-1] Support ZooKeeper 3.6.0+

2020-06-09 Thread GitBox


apurtell commented on a change in pull request #1879:
URL: https://github.com/apache/hbase/pull/1879#discussion_r437643393



##
File path: 
hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon
##
@@ -294,7 +294,7 @@ AssignmentManager assignmentManager = 
master.getAssignmentManager();
 
 
 ZooKeeper Client Version
-<% org.apache.zookeeper.Version.getVersion() %>, 
revision=<% org.apache.zookeeper.Version.getRevision() %>
+<% org.apache.zookeeper.Version.getVersion() %>

Review comment:
   3.6.0 removed getRevision; 3.6.1 puts it back. Do we need it? I think: no





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



