[
https://issues.apache.org/jira/browse/HBASE-18381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16087869#comment-16087869
]
Hadoop QA commented on HBASE-18381:
-----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s{color} | {color:blue} The patch file was not named according to hbase's naming conventions. Please see https://yetus.apache.org/documentation/0.4.0/precommit-patchnames for instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 53s{color} | {color:red} hbase-server in master has 9 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 31m 54s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 148m 8s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}195m 24s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-18381 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12877333/18381.v2.txt |
| Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux c73c1044b6c7 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 79a702d |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
| findbugs | https://builds.apache.org/job/PreCommit-HBASE-Build/7657/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/7657/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/7657/console |
| Powered by | Apache Yetus 0.4.0 http://yetus.apache.org |
This message was automatically generated.
> HBase regionserver crashes when reading MOB file with column qualifier >64MB
> ----------------------------------------------------------------------------
>
> Key: HBASE-18381
> URL: https://issues.apache.org/jira/browse/HBASE-18381
> Project: HBase
> Issue Type: Bug
> Components: regionserver
> Affects Versions: 2.0.0-alpha-1
> Environment: HBase 1.2.0-cdh5.10.0
> Reporter: Daniel Jelinski
> Assignee: Ted Yu
> Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 18381.v1.txt, 18381.v2.txt
>
>
> After putting a cell with a 64 MB column qualifier into a MOB-enabled table, the
> region server crashes when flushing data. Subsequent WAL replay attempts also
> result in region server crashes.
> Gist of code used to create the table (the significance of the {{1 << 26}}
> qualifier size is discussed in the notes after the stack trace below):
> {code}
> private String table = "poisonPill";
> private byte[] familyBytes = Bytes.toBytes("cf");
>
> private void createTable(Connection conn) throws IOException {
>     Admin hbase_admin = conn.getAdmin();
>     HTableDescriptor htable = new HTableDescriptor(TableName.valueOf(table));
>     HColumnDescriptor hfamily = new HColumnDescriptor(familyBytes);
>     hfamily.setMobEnabled(true);
>     htable.setConfiguration("hfile.format.version", "3");
>     htable.addFamily(hfamily);
>     hbase_admin.createTable(htable);
> }
>
> private void killTable(Connection conn) throws IOException {
>     Table tbl = conn.getTable(TableName.valueOf(table));
>     byte[] data = new byte[1 << 26]; // 64 MB column qualifier
>     byte[] smalldata = new byte[0];
>     Put put = new Put(Bytes.toBytes("1"));
>     put.addColumn(familyBytes, data, smalldata);
>     tbl.put(put);
> }
> {code}
> Region server logs (redacted):
> {noformat}
> 2017-07-11 09:34:53,747 INFO org.apache.hadoop.hbase.regionserver.HRegion: Flushing 1/1 column families, memstore=64.00 MB; WAL is null, using passed sequenceid=7
> 2017-07-11 09:34:53,757 WARN org.apache.hadoop.hbase.io.hfile.HFileWriterV2: A minimum HFile version of 3 is required to support cell attributes/tags. Consider setting hfile.format.version accordingly.
> 2017-07-11 09:34:54,504 INFO org.apache.hadoop.hbase.mob.DefaultMobStoreFlusher: Flushed, sequenceid=7, memsize=67109096, hasBloomFilter=true, into tmp file hdfs://sandbox/hbase/data/default/poisonPill/f82e20f32302dfdd95c89ecc3be5a211/.tmp/7858d223eddd4199ad220fc77bb612eb
> 2017-07-11 09:34:54,694 ERROR org.apache.hadoop.hbase.regionserver.HStore: Failed to open store file : hdfs://sandbox/hbase/data/default/poisonPill/f82e20f32302dfdd95c89ecc3be5a211/.tmp/7858d223eddd4199ad220fc77bb612eb, keeping it in tmp location
> org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile Trailer from file hdfs://sandbox/hbase/data/default/poisonPill/f82e20f32302dfdd95c89ecc3be5a211/.tmp/7858d223eddd4199ad220fc77bb612eb
>   at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>   at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1105)
>   at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:265)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:404)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:509)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:499)
>   at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:675)
>   at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:667)
>   at org.apache.hadoop.hbase.regionserver.HStore.validateStoreFile(HStore.java:1746)
>   at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:942)
>   at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2299)
>   at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2372)
>   at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2102)
>   at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:4139)
>   at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:3934)
>   at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:828)
>   at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:799)
>   at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6480)
>   at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6441)
>   at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6412)
>   at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6368)
>   at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6319)
>   at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:362)
>   at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large. May be malicious. Use CodedInputStream.setSizeLimit() to increase the size limit.
>   at com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
>   at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
>   at com.google.protobuf.CodedInputStream.isAtEnd(CodedInputStream.java:701)
>   at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:99)
>   at org.apache.hadoop.hbase.protobuf.generated.HFileProtos$FileInfoProto.<init>(HFileProtos.java:82)
>   at org.apache.hadoop.hbase.protobuf.generated.HFileProtos$FileInfoProto.<init>(HFileProtos.java:46)
>   at org.apache.hadoop.hbase.protobuf.generated.HFileProtos$FileInfoProto$1.parsePartialFrom(HFileProtos.java:135)
>   at org.apache.hadoop.hbase.protobuf.generated.HFileProtos$FileInfoProto$1.parsePartialFrom(HFileProtos.java:130)
>   at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:200)
>   at com.google.protobuf.AbstractParser.parsePartialDelimitedFrom(AbstractParser.java:241)
>   at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:253)
>   at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:259)
>   at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:49)
>   at org.apache.hadoop.hbase.protobuf.generated.HFileProtos$FileInfoProto.parseDelimitedFrom(HFileProtos.java:297)
>   at org.apache.hadoop.hbase.io.hfile.HFile$FileInfo.read(HFile.java:752)
>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:161)
>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV3.<init>(HFileReaderV3.java:77)
>   at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:487)
>   ... 28 more
> {noformat}
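Editorial note on the failure mechanism, inferred from the trace above rather than quoted from the patch: the HFile file-info block is parsed as a single length-delimited FileInfoProto, and among its entries is the file's last key, which embeds the full column qualifier. protobuf-java's CodedInputStream caps any single message at 64 MB by default, so a {{1 << 26}}-byte qualifier is exactly the poison size. A minimal sketch of the arithmetic:

{code}
// Why 1 << 26 bytes trips the parser (illustrative; names are not from the patch).
public class SizeLimitArithmetic {
    public static void main(String[] args) {
        long qualifierBytes = 1L << 26;  // 67,108,864 bytes: the repro's qualifier
        long protobufLimit  = 64L << 20; // 67,108,864 bytes: CodedInputStream's default limit
        // The qualifier alone already consumes the entire per-message budget, so a
        // FileInfoProto that embeds it (e.g. via the last key) cannot be parsed.
        System.out.println(qualifierBytes >= protobufLimit); // prints: true
    }
}
{code}

This also matches the flush log above: memsize=67109096 is the 64 MB qualifier plus a few hundred bytes of row, family, and bookkeeping overhead.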
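The exception message itself points at CodedInputStream.setSizeLimit() as the escape hatch. For illustration only, a hedged sketch of how a reader could lift the limit when parsing one delimited message, using only the stock protobuf-java API; whether the attached 18381.v2.txt takes this route or instead rejects oversized qualifiers is not shown here:

{code}
import java.io.IOException;
import java.io.InputStream;

import com.google.protobuf.CodedInputStream;
import com.google.protobuf.Message;
import com.google.protobuf.Parser;

// Hedged sketch: parse a single length-delimited protobuf message with a
// raised size limit instead of CodedInputStream's 64 MB default.
public final class LargeDelimitedParse {
    public static <T extends Message> T parseDelimited(Parser<T> parser, InputStream in)
            throws IOException {
        CodedInputStream cis = CodedInputStream.newInstance(in);
        cis.setSizeLimit(Integer.MAX_VALUE); // lift the 64 MB default
        int size = cis.readRawVarint32();    // delimited format: varint length prefix
        int oldLimit = cis.pushLimit(size);  // confine parsing to this one message
        T msg = parser.parseFrom(cis);
        cis.popLimit(oldLimit);
        return msg;
    }
}
{code}

A caller could pass, say, HFileProtos.FileInfoProto's generated parser together with the file-info input stream; raising the limit trades the crash for the memory cost of actually materializing the oversized key.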
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)