ujjawal4046 commented on code in PR #4533:
URL: https://github.com/apache/hbase/pull/4533#discussion_r950849504
##########
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestOpenRegionFailedMemoryLeak.java:
##########
@@ -60,6 +63,14 @@ public class TestOpenRegionFailedMemoryLeak {
private static HBaseTestingUtil TEST_UTIL = new HBaseTestingUtil();
+ @BeforeClass
+ public static void startCluster() throws Exception {
+ Configuration conf = TEST_UTIL.getConfiguration();
+
+ // Enable sanity check for coprocessor
+ conf.setBoolean(TableDescriptorChecker.TABLE_SANITY_CHECKS, true);
Review Comment:
Oh, it's because this test checks that region open [fails due to an invalid loaded coprocessor at line
90](https://github.com/apache/hbase/pull/4533/files#diff-38ced1e200aafb2db3ee4f242681b31ee6a379f31ca238bd42ac86a7eaf46463R90)
below. Before this change, that failure always happened, since coprocessor
loading was validated independently of the sanity check. After this change,
the validation is gated on the sanity-check config ([which is always false in
hbase-server's test
hbase-site.xml](https://github.com/apache/hbase/blob/master/hbase-server/src/test/resources/hbase-site.xml#L154)),
so we need to explicitly enable the sanity check in the test setup.
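To illustrate the gating described above, here is a minimal, self-contained sketch (not the actual HBase code; the `Map`-based config stand-in and method names are hypothetical) of validation that only runs when the sanity-check flag is enabled, defaulting to off as in the test `hbase-site.xml`:

```java
import java.util.HashMap;
import java.util.Map;

public class SanityCheckGateSketch {
  // Key name as in HBase; the rest of this class is an illustrative stand-in.
  static final String TABLE_SANITY_CHECKS = "hbase.table.sanity.checks";

  // Stand-in for a Hadoop Configuration boolean lookup with a default.
  static boolean getBoolean(Map<String, String> conf, String key, boolean dflt) {
    String v = conf.get(key);
    return v == null ? dflt : Boolean.parseBoolean(v);
  }

  // Returns whether coprocessor validation actually ran.
  static boolean maybeCheckCoprocessors(Map<String, String> conf) {
    if (!getBoolean(conf, TABLE_SANITY_CHECKS, false)) {
      return false; // skipped: the flag defaults to false in the test config
    }
    // ... coprocessor class validation would happen here ...
    return true;
  }

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    System.out.println(maybeCheckCoprocessors(conf));  // false: check skipped
    conf.put(TABLE_SANITY_CHECKS, "true");             // what @BeforeClass does
    System.out.println(maybeCheckCoprocessors(conf));  // true: check runs
  }
}
```

This is why the `@BeforeClass` in the diff sets the flag to `true`: without it, the validation never runs and the expected region-open failure never occurs.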
##########
hbase-server/src/main/java/org/apache/hadoop/hbase/util/TableDescriptorChecker.java:
##########
@@ -185,11 +175,6 @@ public static void sanityCheck(final Configuration c,
final TableDescriptor td)
warnOrThrowExceptionForFailure(logWarn, message, null);
Review Comment:
There are multiple checks based on the logWarn variable before this line
(e.g. [at line
92](https://github.com/apache/hbase/pull/4533/files#diff-e83f41eef4aa6cbdb757de3bcbd8c976a9068fea4849b4a52966ddad7a8018daR92),
and at lines 106, 127, and 134 above).
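For context, the warn-or-throw pattern these checks share can be sketched as follows. This is a simplified, self-contained illustration (the real `warnOrThrowExceptionForFailure` in `TableDescriptorChecker` throws an HBase-specific exception, not `IllegalStateException`):

```java
public class WarnOrThrowSketch {
  // When logWarn is true, only log the failure; otherwise fail hard.
  static void warnOrThrowExceptionForFailure(boolean logWarn, String message) {
    if (logWarn) {
      System.out.println("WARN: " + message);
      return;
    }
    throw new IllegalStateException(message);
  }

  public static void main(String[] args) {
    // logWarn = true: the check is lenient and only warns.
    warnOrThrowExceptionForFailure(true, "invalid coprocessor specification");

    // logWarn = false: the same check becomes fatal.
    boolean threw = false;
    try {
      warnOrThrowExceptionForFailure(false, "invalid coprocessor specification");
    } catch (IllegalStateException e) {
      threw = true;
    }
    System.out.println(threw);
  }
}
```

Because several earlier checks in `sanityCheck` already branch on the same logWarn value, removing only this one call changes behavior inconsistently across the checks.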
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]