Re: Region compaction failed

2017-01-13 Thread Ted Yu
w.r.t. #2, I did a quick search for bloom-related fixes.

I found HBASE-13123, but that fix was already in 1.0.2.

Planning to spend more time on this in the next few days.
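
For context, the "Non-increasing Bloom keys" error is an ordering invariant
enforced while writing the new file: keys appended to a ROW bloom must be
non-decreasing. A simplified sketch of the check (paraphrased from the
StoreFile$Writer.appendGeneralBloomfilter frame in the trace; not the exact
1.0.2 source):

import java.io.IOException;

import org.apache.hadoop.hbase.util.Bytes;

public class BloomOrderCheck {
  private byte[] lastBloomKey;

  void append(byte[] row) throws IOException {
    // For a ROW bloom the key is the row itself; a row that compares
    // smaller than the previously appended key means the compactor fed
    // cells out of order.
    if (lastBloomKey != null && Bytes.compareTo(row, lastBloomKey) < 0) {
      throw new IOException("Non-increasing Bloom keys: "
          + Bytes.toStringBinary(row) + " after "
          + Bytes.toStringBinary(lastBloomKey));
    }
    lastBloomKey = row;
  }
}

In other words, like the column-order error in #1, it points at out-of-order
cells reaching the writer rather than at the bloom code itself.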

On Fri, Jan 13, 2017 at 5:29 PM, Pankaj kr <pankaj...@huawei.com> wrote:

> Thanks, Ted, for replying.
>
> Actually, the issue happened in a production environment and there are many
> HFiles in that store (we can't get the file). Since we don't log the name of
> the corrupted file, is there any way to find out which file it is?
>
> Block encoding is "NONE", the bloom filter is "ROW", compression is
> "Snappy", and durability is SKIP_WAL.
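>
> (For clarity, a sketch of the descriptor setup those settings correspond
> to; the table and family names below are placeholders:)
>
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Durability;
> import org.apache.hadoop.hbase.io.compress.Compression;
> import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
> import org.apache.hadoop.hbase.regionserver.BloomType;
>
> public class SchemaSketch {
>   static HTableDescriptor build() {
>     // Placeholder names; the settings mirror the description above.
>     HTableDescriptor table = new HTableDescriptor(TableName.valueOf("T"));
>     HColumnDescriptor cf = new HColumnDescriptor("CF");
>     cf.setDataBlockEncoding(DataBlockEncoding.NONE);     // block encoding NONE
>     cf.setBloomFilterType(BloomType.ROW);                // ROW bloom filter
>     cf.setCompressionType(Compression.Algorithm.SNAPPY); // Snappy compression
>     table.addFamily(cf);
>     table.setDurability(Durability.SKIP_WAL);            // SKIP_WAL durability
>     return table;
>   }
> }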
>
>
> Regards,
> Pankaj
>
>
> -Original Message-
> From: Ted Yu [mailto:yuzhih...@gmail.com]
> Sent: Friday, January 13, 2017 10:30 PM
> To: d...@hbase.apache.org
> Cc: user@hbase.apache.org
> Subject: Re: Region compaction failed
>
> In the second case, the error happened while writing the hfile. Can you
> track down the path of the new file so that further investigation can be
> done?
>
> Does the table use any encoding?
>
> Thanks
>
> > On Jan 13, 2017, at 2:47 AM, Pankaj kr <pankaj...@huawei.com> wrote:
> >
> > Hi,
> >
> > We met a weird issue in our production environment.
> >
> > Region compaction is always failing with the following errors:
> >
> > 1.
> > 2017-01-10 02:19:10,427 | ERROR | regionserver/RS-HOST/RS-IP:PORT-longCompactions-1483858654825 | Compaction failed Request = regionName=., storeName=XYZ, fileCount=6, fileSize=100.7 M (3.2 M, 20.8 M, 15.1 M, 20.9 M, 21.0 M, 19.7 M), priority=-5, time=1747414906352088 | org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:562)
> > java.io.IOException: ScanWildcardColumnTracker.checkColumn ran into a column actually smaller than the previous column:  XXX
> >     at org.apache.hadoop.hbase.regionserver.ScanWildcardColumnTracker.checkVersions(ScanWildcardColumnTracker.java:114)
> >     at org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:457)
> >     at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:551)
> >     at org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:328)
> >     at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:104)
> >     at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:133)
> >     at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1243)
> >     at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1895)
> >     at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:546)
> >     at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:583)
> >     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >     at java.util.concurrent.ThreadPoolExecuto
> >
> > 2.
> > 2017-01-10 02:33:53,009 | ERROR | regionserver/RS-HOST/RS-IP:PORT-longCompactions-1483686810953 | Compaction failed Request = regionName=YY, storeName=ABC, fileCount=6, fileSize=125.3 M (20.9 M, 20.9 M, 20.9 M, 20.9 M, 20.9 M, 20.9 M), priority=-68, time=1748294500157323 | org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:562)
> > java.io.IOException: Non-increasing Bloom keys: XX after 
> >     at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.appendGeneralBloomfilter(StoreFile.java:911)
> >     at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:947)
> >     at org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:337)
> >     at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:104)
> >     at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:133)
> >     at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1243)
> >     at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1895)

Re: Region compaction failed

2017-01-13 Thread Ted Yu
In the second case, the error happened while writing the hfile. Can you track
down the path of the new file so that further investigation can be done?
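
If it helps: in 1.x the compactor writes the new hfile under the region's
.tmp directory before committing it, so the in-progress file can be spotted
there. A rough sketch (the region directory path is a placeholder argument):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ListCompactionTmp {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    FileSystem fs = FileSystem.get(conf);
    // args[0]: region dir, e.g. /hbase/data/<ns>/<table>/<encoded-region>
    Path tmpDir = new Path(args[0], ".tmp");
    if (fs.exists(tmpDir)) {
      // Files still sitting here are compaction output in progress.
      for (FileStatus f : fs.listStatus(tmpDir)) {
        System.out.println(f.getPath() + " len=" + f.getLen());
      }
    }
  }
}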

Does the table use any encoding?
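
One quick way to check, besides describe in the HBase shell (a minimal sketch
against the 1.x client API; "THE_TABLE" is a placeholder for the affected
table):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ShowFamilySettings {
  public static void main(String[] args) throws Exception {
    try (Connection conn =
             ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Print encoding, bloom, and compression for each column family.
      for (HColumnDescriptor cf : admin
          .getTableDescriptor(TableName.valueOf("THE_TABLE"))
          .getColumnFamilies()) {
        System.out.println(cf.getNameAsString()
            + " encoding=" + cf.getDataBlockEncoding()
            + " bloom=" + cf.getBloomFilterType()
            + " compression=" + cf.getCompressionType());
      }
    }
  }
}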

Thanks

> On Jan 13, 2017, at 2:47 AM, Pankaj kr wrote:
> 
> Hi,
> 
> We met a weird issue in our production environment.
> 
> Region compaction is always failing with the following errors:
> 
> 1.
> 2017-01-10 02:19:10,427 | ERROR | regionserver/RS-HOST/RS-IP:PORT-longCompactions-1483858654825 | Compaction failed Request = regionName=., storeName=XYZ, fileCount=6, fileSize=100.7 M (3.2 M, 20.8 M, 15.1 M, 20.9 M, 21.0 M, 19.7 M), priority=-5, time=1747414906352088 | org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:562)
> java.io.IOException: ScanWildcardColumnTracker.checkColumn ran into a column actually smaller than the previous column:  XXX
>     at org.apache.hadoop.hbase.regionserver.ScanWildcardColumnTracker.checkVersions(ScanWildcardColumnTracker.java:114)
>     at org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:457)
>     at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:551)
>     at org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:328)
>     at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:104)
>     at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:133)
>     at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1243)
>     at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1895)
>     at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:546)
>     at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:583)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecuto
> 
> 2.
> 2017-01-10 02:33:53,009 | ERROR | regionserver/RS-HOST/RS-IP:PORT-longCompactions-1483686810953 | Compaction failed Request = regionName=YY, storeName=ABC, fileCount=6, fileSize=125.3 M (20.9 M, 20.9 M, 20.9 M, 20.9 M, 20.9 M, 20.9 M), priority=-68, time=1748294500157323 | org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:562)
> java.io.IOException: Non-increasing Bloom keys: XX after 
>     at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.appendGeneralBloomfilter(StoreFile.java:911)
>     at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:947)
>     at org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:337)
>     at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:104)
>     at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:133)
>     at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1243)
>     at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1895)
>     at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:546)
>     at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:583)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> 
> HBase version: 1.0.2
> 
> We have verified all the HFiles in the store using HFilePrettyPrinter with
> "-k" (checkrow); all reports are normal. A full scan also succeeds.
> We don't have access to the actual data, and the customer may not agree to
> share it.
> 
> Has anyone faced this issue? Any pointers would be much appreciated.
> 
> Thanks & Regards,
> Pankaj