[ https://issues.apache.org/jira/browse/HBASE-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15093528#comment-15093528 ]

Hadoop QA commented on HBASE-15085:
-----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 36s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s {color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 56s {color} | {color:red} hbase-server in master has 83 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s {color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s {color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 4m 13s {color} | {color:red} Patch generated 1 new checkstyle issue in hbase-server (total was 18, now 19). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 26s {color} | {color:red} Patch causes 11 errors with Hadoop v2.4.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 46s {color} | {color:red} Patch causes 11 errors with Hadoop v2.4.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 6s {color} | {color:red} Patch causes 11 errors with Hadoop v2.5.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 28s {color} | {color:red} Patch causes 11 errors with Hadoop v2.5.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 49s {color} | {color:red} Patch causes 11 errors with Hadoop v2.5.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 8m 13s {color} | {color:red} Patch causes 11 errors with Hadoop v2.6.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 9m 37s {color} | {color:red} Patch causes 11 errors with Hadoop v2.6.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 59s {color} | {color:red} Patch causes 11 errors with Hadoop v2.6.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 12m 23s {color} | {color:red} Patch causes 11 errors with Hadoop v2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 46s {color} | {color:red} hbase-server in the patch failed with JDK v1.8.0. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 14s {color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 9s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 113m 30s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0 Timed out junit tests | org.apache.hadoop.hbase.master.normalizer.TestSimpleRegionNormalizerOnCluster |
|   | org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer |
|   | org.apache.hadoop.hbase.master.TestRestartCluster |
|   | org.apache.hadoop.hbase.master.handler.TestTableDescriptorModification |
|   | org.apache.hadoop.hbase.master.handler.TestCreateTableHandler |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12781750/HBASE-15085-v4.patch |
| JIRA Issue | HBASE-15085 |
| Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 8ee9158 |
| findbugs | v3.0.0 |
| findbugs | https://builds.apache.org/job/PreCommit-HBASE-Build/75/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html |
| checkstyle | https://builds.apache.org/job/PreCommit-HBASE-Build/75/artifact/patchprocess/diff-checkstyle-hbase-server.txt |
| unit | https://builds.apache.org/job/PreCommit-HBASE-Build/75/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0.txt |
| unit | https://builds.apache.org/job/PreCommit-HBASE-Build/75/artifact/patchprocess/patch-unit-hbase-server-jdk1.7.0_79.txt |
| unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/75/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0.txt |
| JDK v1.7.0_79 Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/75/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Max memory used | 438MB |
| Powered by | Apache Yetus 0.1.0   http://yetus.apache.org |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/75/console |


This message was automatically generated.



> IllegalStateException was thrown when scanning on bulkloaded HFiles
> -------------------------------------------------------------------
>
>                 Key: HBASE-15085
>                 URL: https://issues.apache.org/jira/browse/HBASE-15085
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.98.12, 1.1.2
>         Environment: HBase-0.98.12 & Hadoop-2.6.0 & JDK1.7
> HBase-1.1.2 & Hadoop-2.6.0 & JDK1.7
>            Reporter: Victor Xu
>            Assignee: Victor Xu
>              Labels: hfile
>         Attachments: HBASE-15085-0.98-v1.patch, HBASE-15085-0.98-v2.patch, 
> HBASE-15085-0.98-v3.patch, HBASE-15085-0.98-v4.patch, 
> HBASE-15085-0.98-v4.patch, HBASE-15085-branch-1.0-v1.patch, 
> HBASE-15085-branch-1.1-v1.patch, HBASE-15085-branch-1.2-v1.patch, 
> HBASE-15085-v1.patch, HBASE-15085-v2.patch, HBASE-15085-v3.patch, 
> HBASE-15085-v4.patch, HBASE-15085-v4.patch, HBASE-15085-v4.patch, 
> HBASE-15085-v4.patch
>
>
> IllegalStateException was thrown when we scanned from an HFile which was bulk 
> loaded several minutes ago, as shown below:
> {code}
> 2015-12-16 22:20:54,456 ERROR com.taobao.kart.coprocessor.server.KartCoprocessor: icbu_ae_ws_product,/0055,1450275490479.6a6a700f465ad074287fed720c950f7c. batchNotify exception
> java.lang.IllegalStateException: EncodedScanner works only on encoded data blocks
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1042)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:1093)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
>         at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
>         at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:188)
>         at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1879)
>         at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:4068)
>         at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2029)
>         at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2015)
>         at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1992)
> {code}
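> The exception comes from a sanity check in the scanner: because the file's 
> FileInfo declares an encoding, the reader instantiates an EncodedScanner, and 
> that scanner refuses any block whose on-disk type is not ENCODED_DATA. 
> Paraphrased from the 0.98-era HFileReaderV2 (a sketch, not the verbatim source):
> {code}
> // HFileReaderV2.EncodedScannerV2 (paraphrased): this scanner is only chosen
> // when FileInfo declares a data block encoding, and it checks that every
> // block it loads was actually written as ENCODED_DATA.
> private void updateCurrentBlock(HFileBlock newBlock) {
>   block = newBlock;
>   if (block.getBlockType() != BlockType.ENCODED_DATA) {
>     throw new IllegalStateException(
>         "EncodedScanner works only on encoded data blocks");
>   }
>   // ... otherwise decode the block with the encoder identified by the
>   // encoding id stored in the block itself ...
> }
> {code}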
> I used the 'hbase hfile' command to analyse the meta and block info of the 
> HFile, and found that even though DATA_BLOCK_ENCODING was 'DIFF' in the 
> FileInfo, the actual data blocks were written without any encoding algorithm 
> (the BlockType was 'DATA', not 'ENCODED_DATA'):
> {code}
> Fileinfo:
>     BLOOM_FILTER_TYPE = ROW
>     BULKLOAD_SOURCE_TASK = attempt_1442077249005_606706_r_000012_0
>     BULKLOAD_TIMESTAMP = \x00\x00\x01R\x12$\x13\x12
>     DATA_BLOCK_ENCODING = DIFF
> ...
> DataBlock Header:
> HFileBlock [ fileOffset=0 headerSize()=33 blockType=DATA onDiskSizeWithoutHeader=65591 uncompressedSizeWithoutHeader=65571 prevBlockOffset=-1 isUseHBaseChecksum()=true checksumType=CRC32 bytesPerChecksum=16384 onDiskDataSizeWithHeader=65604 getOnDiskSizeWithHeader()=65624 totalChecksumBytes()=20 isUnpacked()=true buf=[ java.nio.HeapByteBuffer[pos=0 lim=65624 cap=65657], array().length=65657, arrayOffset()=0 ] dataBeginsWith=\x00\x00\x003\x00\x00\x00\x0A\x00\x10/0008:1000000008\x01dprod fileContext=HFileContext [ usesHBaseChecksum=true checksumType=CRC32 bytesPerChecksum=16384 blocksize=65536 encoding=NONE includesMvcc=true includesTags=false compressAlgo=NONE compressTags=false cryptoContext=[ cipher=NONE keyHash=NONE ] ] ]
> {code}
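> The mismatch can also be reproduced programmatically: merely opening a scanner 
> on such a file triggers the exception, because the reader trusts the FileInfo 
> encoding when choosing its scanner type. A minimal sketch, assuming the 
> 0.98-era reader API:
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.io.hfile.CacheConfig;
> import org.apache.hadoop.hbase.io.hfile.HFile;
> import org.apache.hadoop.hbase.io.hfile.HFileScanner;
> 
> public class CheckBulkloadedHFile {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = HBaseConfiguration.create();
>     Path path = new Path(args[0]); // path of the suspect bulkloaded HFile
>     FileSystem fs = path.getFileSystem(conf);
>     HFile.Reader reader = HFile.createReader(fs, path, new CacheConfig(conf), conf);
>     reader.loadFileInfo();
>     // reports what the FileInfo claims, e.g. DIFF on the corrupted file
>     System.out.println("declared encoding: " + reader.getDataBlockEncoding());
>     HFileScanner scanner = reader.getScanner(false, false);
>     try {
>       scanner.seekTo(); // throws IllegalStateException if the blocks are unencoded
>       System.out.println("blocks are consistent with the declared encoding");
>     } catch (IllegalStateException e) {
>       System.out.println("mismatch detected: " + e.getMessage());
>     } finally {
>       reader.close();
>     }
>   }
> }
> {code}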
> The data block encoding in the file info was not consistent with the one in 
> the data blocks, which means something must have gone wrong during the 
> bulkload process. After debugging each step of the bulkload, I found that 
> LoadIncrementalHFiles had a bug when loading an HFile into a split region:
> {code}
>   /**
>    * Copy half of an HFile into a new HFile.
>    */
>   private static void copyHFileHalf(
>       Configuration conf, Path inFile, Path outFile, Reference reference,
>       HColumnDescriptor familyDescriptor)
>   throws IOException {
>     FileSystem fs = inFile.getFileSystem(conf);
>     CacheConfig cacheConf = new CacheConfig(conf);
>     HalfStoreFileReader halfReader = null;
>     StoreFile.Writer halfWriter = null;
>     try {
>       halfReader = new HalfStoreFileReader(fs, inFile, cacheConf, reference, conf);
>       Map<byte[], byte[]> fileInfo = halfReader.loadFileInfo();
>       int blocksize = familyDescriptor.getBlocksize();
>       Algorithm compression = familyDescriptor.getCompression();
>       BloomType bloomFilterType = familyDescriptor.getBloomFilterType();
>       // use CF's DATA_BLOCK_ENCODING to initialize HFile writer
>       HFileContext hFileContext = new HFileContextBuilder()
>                                   .withCompression(compression)
>                                   .withChecksumType(HStore.getChecksumType(conf))
>                                   .withBytesPerCheckSum(HStore.getBytesPerChecksum(conf))
>                                   .withBlockSize(blocksize)
>                                   .withDataBlockEncoding(familyDescriptor.getDataBlockEncoding())
>                                   .build();
>       halfWriter = new StoreFile.WriterBuilder(conf, cacheConf, fs)
>               .withFilePath(outFile)
>               .withBloomType(bloomFilterType)
>               .withFileContext(hFileContext)
>               .build();
>       HFileScanner scanner = halfReader.getScanner(false, false, false);
>       scanner.seekTo();
>       do {
>         KeyValue kv = KeyValueUtil.ensureKeyValue(scanner.getKeyValue());
>         halfWriter.append(kv);
>       } while (scanner.next());
>       // force encoding setting with the original HFile's file info
>       for (Map.Entry<byte[],byte[]> entry : fileInfo.entrySet()) {
>         if (shouldCopyHFileMetaKey(entry.getKey())) {
>           halfWriter.appendFileInfo(entry.getKey(), entry.getValue());
>         }
>       }
>     } finally {
>       if (halfWriter != null) halfWriter.close();
>       if (halfReader != null) halfReader.close(cacheConf.shouldEvictOnClose());
>     }
>   }
> {code}
> As shown above, when an HFile with DIFF encoding is bulkloaded into a split 
> region whose CF's DATA_BLOCK_ENCODING is NONE, the two new half-HFiles end up 
> with inconsistent encodings: the writer produces unencoded blocks, but the 
> copied file info still claims DIFF.
> Conversely, it is fine when the splitting region's DATA_BLOCK_ENCODING is DIFF 
> and the bulkloaded HFile's is NONE: the original bulkloaded HFile never writes 
> an encoding entry into its meta (NoOpDataBlockEncoder.saveMetadata() is empty), 
> so copyHFileHalf() does not overwrite the encoding of the two generated 
> half-files. Their meta info stays consistent with their block headers (both 
> DIFF), and no exception is thrown when scanning these files.
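> A minimal sketch of one possible fix (not necessarily what the attached 
> patches do) is to stop copying the encoding key in shouldCopyHFileMetaKey(), 
> so the copied meta can no longer contradict the encoding the half-file writer 
> actually used (the string literal below stands in for the DATA_BLOCK_ENCODING 
> constant defined in HFileDataBlockEncoder):
> {code}
> // Hedged sketch: exclude DATA_BLOCK_ENCODING from the meta keys copied in
> // copyHFileHalf(), leaving the value the new writer recorded untouched.
> private static boolean shouldCopyHFileMetaKey(byte[] key) {
>   // skip the encoding key; the half-file writer records its own value
>   if (Bytes.equals(key, Bytes.toBytes("DATA_BLOCK_ENCODING"))) {
>     return false;
>   }
>   return !HFile.isReservedFileInfoKey(key);
> }
> {code}
> Alternatively, the writer's HFileContext could be built with the source 
> reader's encoding instead of the column family's, which would make the two 
> half-files preserve the source file's encoding rather than the CF default.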



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
