[ https://issues.apache.org/jira/browse/HBASE-19343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16407291#comment-16407291 ]
Hadoop QA commented on HBASE-19343:
-----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 45s | Docker mode activated. |
|| Prechecks ||
| 0 | findbugs | 0m 3s | Findbugs executables are not available. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| branch-1 Compile Tests ||
| +1 | mvninstall | 8m 46s | branch-1 passed |
| -1 | compile | 0m 26s | hbase-server in branch-1 failed with JDK v1.8.0_163. |
| -1 | compile | 0m 16s | hbase-server in branch-1 failed with JDK v1.7.0_171. |
| +1 | checkstyle | 1m 23s | branch-1 passed |
| +1 | shadedjars | 3m 58s | branch has no errors when building our shaded downstream artifacts. |
| +1 | javadoc | 0m 32s | branch-1 passed with JDK v1.8.0_163 |
| +1 | javadoc | 0m 36s | branch-1 passed with JDK v1.7.0_171 |
|| Patch Compile Tests ||
| +1 | mvninstall | 1m 34s | the patch passed |
| -1 | compile | 0m 14s | hbase-server in the patch failed with JDK v1.8.0_163. |
| -1 | javac | 0m 14s | hbase-server in the patch failed with JDK v1.8.0_163. |
| -1 | compile | 0m 15s | hbase-server in the patch failed with JDK v1.7.0_171. |
| -1 | javac | 0m 15s | hbase-server in the patch failed with JDK v1.7.0_171. |
| +1 | checkstyle | 1m 18s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 2m 33s | patch has no errors when building our shaded downstream artifacts. |
| -1 | hadoopcheck | 3m 38s | The patch causes 44 errors with Hadoop v2.4.1. |
| -1 | hadoopcheck | 4m 36s | The patch causes 44 errors with Hadoop v2.5.2. |
| +1 | javadoc | 0m 32s | the patch passed with JDK v1.8.0_163 |
| +1 | javadoc | 0m 40s | the patch passed with JDK v1.7.0_171 |
|| Other Tests ||
| -1 | unit | 139m 5s | hbase-server in the patch failed. |
| +1 | asflicense | 0m 26s | The patch does not generate ASF License warnings. |
| | | 167m 48s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.TestMasterBalanceThrottling |
| | hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles |
| | hadoop.hbase.replication.regionserver.TestGlobalThrottler |
| | hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint |
| | hadoop.hbase.mapreduce.TestLoadIncrementalHFiles |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:36a7029 |
| JIRA Issue | HBASE-19343 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12915369/HBASE-19343-branch-1-v2.patch |
| Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 41856beac078 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | branch-1 / 764798d |
| maven | version: Apache Maven 3.0.5 |
| Default Java | 1.7.0_171 |
| Multi-JDK versions | /usr/lib/jvm/java-8-openjdk-amd64:1.8.0_163 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_171 |
| compile | https://builds.apache.org/job/PreCommit-HBASE-Build/12044/artifact/patchprocess/branch-compile-hbase-server-jdk1.8.0_163.txt |
| compile | https://builds.apache.org/job/PreCommit-HBASE-Build/12044/artifact/patchprocess/branch-compile-hbase-server-jdk1.7.0_171.txt |
| compile | https://builds.apache.org/job/PreCommit-HBASE-Build/12044/artifact/patchprocess/patch-compile-hbase-server-jdk1.8.0_163.txt |
| javac | https://builds.apache.org/job/PreCommit-HBASE-Build/12044/artifact/patchprocess/patch-compile-hbase-server-jdk1.8.0_163.txt |
| compile | https://builds.apache.org/job/PreCommit-HBASE-Build/12044/artifact/patchprocess/patch-compile-hbase-server-jdk1.7.0_171.txt |
| javac | https://builds.apache.org/job/PreCommit-HBASE-Build/12044/artifact/patchprocess/patch-compile-hbase-server-jdk1.7.0_171.txt |
| hadoopcheck | https://builds.apache.org/job/PreCommit-HBASE-Build/12044/artifact/patchprocess/patch-javac-2.4.1.txt |
| hadoopcheck | https://builds.apache.org/job/PreCommit-HBASE-Build/12044/artifact/patchprocess/patch-javac-2.5.2.txt |
| unit | https://builds.apache.org/job/PreCommit-HBASE-Build/12044/artifact/patchprocess/patch-unit-hbase-server.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/12044/testReport/ |
| Max. process+thread count | 3590 (vs. ulimit of 10000) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/12044/console |
| Powered by | Apache Yetus 0.7.0 http://yetus.apache.org |

This message was automatically generated.
> Restore snapshot makes parent split region online
> --------------------------------------------------
>
>                 Key: HBASE-19343
>                 URL: https://issues.apache.org/jira/browse/HBASE-19343
>             Project: HBase
>          Issue Type: Bug
>          Components: snapshots
>            Reporter: Pankaj Kumar
>            Assignee: Pankaj Kumar
>            Priority: Major
>             Fix For: 1.5.0
>
>         Attachments: HBASE-19343-branch-1-v2.patch, HBASE-19343-branch-1.patch, Snapshot.jpg
>
>
> Restoring a snapshot brings the parent of a split region back online, as shown in the attached snapshot.
>
> Steps to reproduce
> ==================
> 1. Create a table
> 2. Insert a few records into the table
> 3. Flush the table
> 4. Split the table
> 5. Create a snapshot before the catalog janitor clears the parent region entry from meta
> 6. Restore the snapshot
>
> The problem is visible in the meta entries.
>
> Meta content before restore snapshot:
> {noformat}
> t1,,1511537529449.077a12b0b3c91b053fa95223635f9543.
>   column=info:regioninfo, timestamp=1511537565964, value={ENCODED => 077a12b0b3c91b053fa95223635f9543, NAME => 't1,,1511537529449.077a12b0b3c91b053fa95223635f9543.', STARTKEY => '', ENDKEY => '', OFFLINE => true, SPLIT => true}
> t1,,1511537529449.077a12b0b3c91b053fa95223635f9543.
>   column=info:seqnumDuringOpen, timestamp=1511537530107, value=\x00\x00\x00\x00\x00\x00\x00\x02
> t1,,1511537529449.077a12b0b3c91b053fa95223635f9543.
>   column=info:server, timestamp=1511537530107, value=host-xx:16020
> t1,,1511537529449.077a12b0b3c91b053fa95223635f9543.
>   column=info:serverstartcode, timestamp=1511537530107, value=1511537511523
> t1,,1511537529449.077a12b0b3c91b053fa95223635f9543.
>   column=info:splitA, timestamp=1511537565964, value={ENCODED => 3c7c866d4df370c586131a4cbe0ef6a8, NAME => 't1,,1511537565718.3c7c866d4df370c586131a4cbe0ef6a8.', STARTKEY => '', ENDKEY => 'm'}
> t1,,1511537529449.077a12b0b3c91b053fa95223635f9543.
>   column=info:splitB, timestamp=1511537565964, value={ENCODED => dc7facd824c85b94e5bf6a2e6b5f5efc, NAME => 't1,m,1511537565718.dc7facd824c85b94e5bf6a2e6b5f5efc.', STARTKEY => 'm', ENDKEY => ''}
> t1,,1511537565718.3c7c866d4df370c586131a4cbe0ef6a8.
>   column=info:regioninfo, timestamp=1511537566075, value={ENCODED => 3c7c866d4df370c586131a4cbe0ef6a8, NAME => 't1,,1511537565718.3c7c866d4df370c586131a4cbe0ef6a8.', STARTKEY => '', ENDKEY => 'm'}
> t1,,1511537565718.3c7c866d4df370c586131a4cbe0ef6a8.
>   column=info:seqnumDuringOpen, timestamp=1511537566075, value=\x00\x00\x00\x00\x00\x00\x00\x02
> t1,,1511537565718.3c7c866d4df370c586131a4cbe0ef6a8.
>   column=info:server, timestamp=1511537566075, value=host-xx:16020
> t1,,1511537565718.3c7c866d4df370c586131a4cbe0ef6a8.
>   column=info:serverstartcode, timestamp=1511537566075, value=1511537511523
> t1,m,1511537565718.dc7facd824c85b94e5bf6a2e6b5f5efc.
>   column=info:regioninfo, timestamp=1511537566069, value={ENCODED => dc7facd824c85b94e5bf6a2e6b5f5efc, NAME => 't1,m,1511537565718.dc7facd824c85b94e5bf6a2e6b5f5efc.', STARTKEY => 'm', ENDKEY => ''}
> t1,m,1511537565718.dc7facd824c85b94e5bf6a2e6b5f5efc.
>   column=info:seqnumDuringOpen, timestamp=1511537566069, value=\x00\x00\x00\x00\x00\x00\x00\x08
> t1,m,1511537565718.dc7facd824c85b94e5bf6a2e6b5f5efc.
>   column=info:server, timestamp=1511537566069, value=host-xx:16020
> t1,m,1511537565718.dc7facd824c85b94e5bf6a2e6b5f5efc.
>   column=info:serverstartcode, timestamp=1511537566069, value=1511537511523
> {noformat}
>
> Meta content after restore snapshot:
> {noformat}
> t1,,1511537529449.077a12b0b3c91b053fa95223635f9543.
>   column=info:regioninfo, timestamp=1511537667635, value={ENCODED => 077a12b0b3c91b053fa95223635f9543, NAME => 't1,,1511537529449.077a12b0b3c91b053fa95223635f9543.', STARTKEY => '', ENDKEY => ''}
> t1,,1511537529449.077a12b0b3c91b053fa95223635f9543.
>   column=info:seqnumDuringOpen, timestamp=1511537667635, value=\x00\x00\x00\x00\x00\x00\x00\x0A
> t1,,1511537529449.077a12b0b3c91b053fa95223635f9543.
>   column=info:server, timestamp=1511537667635, value=host-xx:16020
> t1,,1511537529449.077a12b0b3c91b053fa95223635f9543.
>   column=info:serverstartcode, timestamp=1511537667635, value=1511537511523
> t1,,1511537565718.3c7c866d4df370c586131a4cbe0ef6a8.
>   column=info:regioninfo, timestamp=1511537667598, value={ENCODED => 3c7c866d4df370c586131a4cbe0ef6a8, NAME => 't1,,1511537565718.3c7c866d4df370c586131a4cbe0ef6a8.', STARTKEY => '', ENDKEY => 'm'}
> t1,,1511537565718.3c7c866d4df370c586131a4cbe0ef6a8.
>   column=info:seqnumDuringOpen, timestamp=1511537667598, value=\x00\x00\x00\x00\x00\x00\x00\x0B
> t1,,1511537565718.3c7c866d4df370c586131a4cbe0ef6a8.
>   column=info:server, timestamp=1511537667598, value=host-xx:16020
> t1,,1511537565718.3c7c866d4df370c586131a4cbe0ef6a8.
>   column=info:serverstartcode, timestamp=1511537667598, value=1511537511523
> t1,m,1511537565718.dc7facd824c85b94e5bf6a2e6b5f5efc.
>   column=info:regioninfo, timestamp=1511537667621, value={ENCODED => dc7facd824c85b94e5bf6a2e6b5f5efc, NAME => 't1,m,1511537565718.dc7facd824c85b94e5bf6a2e6b5f5efc.', STARTKEY => 'm', ENDKEY => ''}
> t1,m,1511537565718.dc7facd824c85b94e5bf6a2e6b5f5efc.
>   column=info:seqnumDuringOpen, timestamp=1511537667621, value=\x00\x00\x00\x00\x00\x00\x00\x0D
> t1,m,1511537565718.dc7facd824c85b94e5bf6a2e6b5f5efc.
>   column=info:server, timestamp=1511537667621, value=host-xx:16020
> t1,m,1511537565718.dc7facd824c85b94e5bf6a2e6b5f5efc.
>   column=info:serverstartcode, timestamp=1511537667621, value=1511537511523
> {noformat}
>
> Root Cause:
> We don't update the region split information in the .regioninfo file in HDFS, but while restoring the snapshot we set the regioninfo based on the .regioninfo entries:
> {code}
> // Identify which region are still available and which not.
> // NOTE: we rely upon the region name as: "table name, start key, end key"
> List<HRegionInfo> tableRegions = getTableRegions();
> if (tableRegions != null) {
>   monitor.rethrowException();
>   for (HRegionInfo regionInfo: tableRegions) {
>     String regionName = regionInfo.getEncodedName();
>     if (regionNames.contains(regionName)) {
>       LOG.info("region to restore: " + regionName);
>       regionNames.remove(regionName);
>       metaChanges.addRegionToRestore(regionInfo);
>     } else {
>       LOG.info("region to remove: " + regionName);
>       metaChanges.addRegionToRemove(regionInfo);
>     }
>   }
> }
> {code}
> Here getTableRegions() reads the region list from the .regioninfo files in HDFS.
>
> There can be two solutions:
> 1. Set the regioninfo based on the snapshot-manifest details.
> 2. Update the .regioninfo file after a region split.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
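The decision logic quoted in the root-cause analysis can be sketched in a self-contained way. This is a minimal model, not the actual RestoreSnapshotHelper code: the class and method names below (RestoreSketch, RegionInfo, classify) are hypothetical, and RegionInfo stands in for HRegionInfo with only the fields the bug concerns. The point it illustrates: because the snapshot was taken before the catalog janitor cleaned up, the split parent's encoded name is in the snapshot, so the parent is classified "restore" exactly like its daughters, and the OFFLINE/SPLIT state known only to meta is lost.

```java
import java.util.*;

// Minimal model of the restore-time classification in the quoted snippet.
public class RestoreSketch {
    // Hypothetical stand-in for HRegionInfo; keeps only what matters here.
    public static final class RegionInfo {
        public final String encodedName;
        public final boolean offline; // true in meta for a split parent
        public final boolean split;   // ...but NOT recorded in .regioninfo on HDFS
        public RegionInfo(String encodedName, boolean offline, boolean split) {
            this.encodedName = encodedName;
            this.offline = offline;
            this.split = split;
        }
    }

    // Mirrors the loop in the {code} block: regions found on HDFS whose
    // encoded name is also in the snapshot are restored; the rest removed.
    public static Map<String, List<String>> classify(List<RegionInfo> hdfsRegions,
                                                     Set<String> snapshotRegionNames) {
        Set<String> remaining = new HashSet<>(snapshotRegionNames);
        List<String> toRestore = new ArrayList<>();
        List<String> toRemove = new ArrayList<>();
        for (RegionInfo ri : hdfsRegions) {
            if (remaining.remove(ri.encodedName)) {
                toRestore.add(ri.encodedName);   // addRegionToRestore(...)
            } else {
                toRemove.add(ri.encodedName);    // addRegionToRemove(...)
            }
        }
        Map<String, List<String>> result = new HashMap<>();
        result.put("restore", toRestore);
        result.put("remove", toRemove);
        return result;
    }

    public static void main(String[] args) {
        // Parent region (offline+split in meta) and its two daughters; the
        // snapshot predates the janitor cleanup, so all three names are in it.
        List<RegionInfo> hdfsRegions = Arrays.asList(
            new RegionInfo("077a12b0b3c91b053fa95223635f9543", true, true),
            new RegionInfo("3c7c866d4df370c586131a4cbe0ef6a8", false, false),
            new RegionInfo("dc7facd824c85b94e5bf6a2e6b5f5efc", false, false));
        Set<String> snapshotRegionNames = new HashSet<>(Arrays.asList(
            "077a12b0b3c91b053fa95223635f9543",
            "3c7c866d4df370c586131a4cbe0ef6a8",
            "dc7facd824c85b94e5bf6a2e6b5f5efc"));

        // The split parent lands in the "restore" bucket like any other
        // region, which is why it comes back online after the restore.
        Map<String, List<String>> result = classify(hdfsRegions, snapshotRegionNames);
        System.out.println("restore=" + result.get("restore"));
        System.out.println("remove=" + result.get("remove"));
    }
}
```

Either proposed fix breaks this symmetry: solution 1 would take the region state from the snapshot manifest instead of the bare .regioninfo, and solution 2 would make the on-disk .regioninfo carry the OFFLINE/SPLIT flags so the classification above could see them.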