To update this thread, this was caused by a bug: HBASE-9648. --Tom
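
For anyone hitting the same symptom, the key lines in the logs below are the CompactSelection entries: a store file whose maxTimeStamp is -1 (i.e. an empty file) always compares as older than the TTL cutoff, so it is selected as "expired", the single-file compaction writes another empty file, and that file is immediately selected again. A minimal sketch of that selection logic, using invented class and field names purely for illustration (this is not the actual HBase source):

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only -- NOT the actual HBase code. It shows why an
// empty store file (maxTimeStamp == -1) is re-selected as "expired" on
// every pass, which is the loop HBASE-9648 addresses.
public class ExpiredSelectionSketch {

  static class FileInfo {              // hypothetical stand-in for a store file
    final String name;
    final long maxTimestamp;           // -1 when the file contains no cells
    FileInfo(String name, long maxTimestamp) {
      this.name = name;
      this.maxTimestamp = maxTimestamp;
    }
  }

  // Files whose newest cell is older than (now - ttl) are treated as expired.
  // maxTimestamp == -1 is always below the cutoff, so an empty file never escapes.
  static List<FileInfo> selectExpired(List<FileInfo> files, long now, long ttlMs) {
    long maxExpiredTimestamp = now - ttlMs;  // the "max expired timestamp" seen in the log
    List<FileInfo> expired = new ArrayList<FileInfo>();
    for (FileInfo f : files) {
      if (f.maxTimestamp < maxExpiredTimestamp) {
        expired.add(f);
      }
    }
    return expired;
  }

  public static void main(String[] args) {
    long ttlMs = 5L * 24 * 60 * 60 * 1000;   // assume a TTL of a few days on the family
    List<FileInfo> files = new ArrayList<FileInfo>();
    files.add(new FileInfo("885989d2a3be4a2a8d1e2df86b023f09", -1L));

    // Each "compaction" of the expired empty file just yields another empty
    // file, so the same store is selected again on the very next pass.
    for (int pass = 1; pass <= 3; pass++) {
      List<FileInfo> expired = selectExpired(files, System.currentTimeMillis(), ttlMs);
      System.out.println("pass " + pass + ": re-selecting " + expired.get(0).name);
    }
  }
}

Run as-is, the sketch prints the same file name on every pass, which mirrors the back-to-back "Compaction complete" entries on the status page below.
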
On Sat, Sep 21, 2013 at 9:49 AM, Tom Brown <[email protected]> wrote:

> I am still receiving thousands of these log messages for the same region within a very short time frame. I have read the compaction documentation, but have not been able to reconcile the documentation with the behavior I'm seeing, so I'm starting to think it's a bug.
>
> To help diagnose further, I have included a recent example, hot off the server (see below).
>
> Thanks,
> Tom Brown
>
> This particular region (c5f15027ae1d4aa1d5b6046aea6f63a4) is about 800MB, comprised of 25 store files. Given that, I could reasonably expect up to 25 messages for the region. However, there were at least 388 (I didn't look past the 60-second cutoff in the region server status page), and by the time I finished this email there were over 300 new entries, so I can only assume it continued full-throttle the whole time.
>
> Here is a small snippet from the region server status page. You'll notice they're all identical except for the timestamp:
>
> Sat Sep 21 09:19:10 MDT 2013 Compacting d in compound4,pub-aid\x09CBD46B77\x09\x80\x00\x00\x00\x00\x05\xBB/\x80\x00\x01>\xDD1\xAB`,1371513244597.c5f15027ae1d4aa1d5b6046aea6f63a4. COMPLETE (since 0sec ago) Compaction complete (since 0sec ago)
> Sat Sep 21 09:19:10 MDT 2013 Compacting d in compound4,pub-aid\x09CBD46B77\x09\x80\x00\x00\x00\x00\x05\xBB/\x80\x00\x01>\xDD1\xAB`,1371513244597.c5f15027ae1d4aa1d5b6046aea6f63a4. COMPLETE (since 0sec ago) Compaction complete (since 0sec ago)
> Sat Sep 21 09:19:10 MDT 2013 Compacting d in compound4,pub-aid\x09CBD46B77\x09\x80\x00\x00\x00\x00\x05\xBB/\x80\x00\x01>\xDD1\xAB`,1371513244597.c5f15027ae1d4aa1d5b6046aea6f63a4. COMPLETE (since 0sec ago) Compaction complete (since 0sec ago)
> Sat Sep 21 09:19:10 MDT 2013 Compacting d in compound4,pub-aid\x09CBD46B77\x09\x80\x00\x00\x00\x00\x05\xBB/\x80\x00\x01>\xDD1\xAB`,1371513244597.c5f15027ae1d4aa1d5b6046aea6f63a4. COMPLETE (since 0sec ago) Compaction complete (since 0sec ago)
>
> Here is a snippet of the region server log during this time. For each item in the region server status, there are 13 lines in the log file (claiming to have created a new store file?).
>
> 2013-09-21 09:19:10,717 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://hdpmgr001.pse.movenetworks.com:8020/hbase/compound4/c5f15027ae1d4aa1d5b6046aea6f63a4/.tmp/885989d2a3be4a2a8d1e2df86b023f09)
> 2013-09-21 09:19:10,746 INFO org.apache.hadoop.hbase.regionserver.StoreFile: HFile Bloom filter type for 885989d2a3be4a2a8d1e2df86b023f09: NONE, but ROW specified in column family configuration
> 2013-09-21 09:19:10,746 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hdpmgr001.pse.movenetworks.com:8020/hbase/compound4/c5f15027ae1d4aa1d5b6046aea6f63a4/.tmp/885989d2a3be4a2a8d1e2df86b023f09 to hdfs://hdpmgr001.pse.movenetworks.com:8020/hbase/compound4/c5f15027ae1d4aa1d5b6046aea6f63a4/d/885989d2a3be4a2a8d1e2df86b023f09
> 2013-09-21 09:19:10,784 INFO org.apache.hadoop.hbase.regionserver.StoreFile: HFile Bloom filter type for 885989d2a3be4a2a8d1e2df86b023f09: NONE, but ROW specified in column family configuration
> 2013-09-21 09:19:10,803 INFO org.apache.hadoop.hbase.regionserver.Store: Completed compaction of 1 file(s) in d of compound4,pub-aid\x09CBD46B77\x09\x80\x00\x00\x00\x00\x05\xBB/\x80\x00\x01>\xDD1\xAB`,1371513244597.c5f15027ae1d4aa1d5b6046aea6f63a4. into 885989d2a3be4a2a8d1e2df86b023f09, size=491.0; total size for store is 815.0m
> 2013-09-21 09:19:10,804 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=compound4,pub-aid\x09CBD46B77\x09\x80\x00\x00\x00\x00\x05\xBB/\x80\x00\x01>\xDD1\xAB`,1371513244597.c5f15027ae1d4aa1d5b6046aea6f63a4., storeName=d, fileCount=1, fileSize=491.0, priority=-18, time=16739078447048764; duration=0sec
> 2013-09-21 09:19:10,804 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactSelection: Deleting the expired store file by compaction: hdfs://hdpmgr001.pse.movenetworks.com:8020/hbase/compound4/c5f15027ae1d4aa1d5b6046aea6f63a4/d/885989d2a3be4a2a8d1e2df86b023f09 whose maxTimeStamp is -1 while the max expired timestamp is 1371136750804
> 2013-09-21 09:19:10,804 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on d in region compound4,pub-aid\x09CBD46B77\x09\x80\x00\x00\x00\x00\x05\xBB/\x80\x00\x01>\xDD1\xAB`,1371513244597.c5f15027ae1d4aa1d5b6046aea6f63a4.
> 2013-09-21 09:19:10,804 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 1 file(s) in d of compound4,pub-aid\x09CBD46B77\x09\x80\x00\x00\x00\x00\x05\xBB/\x80\x00\x01>\xDD1\xAB`,1371513244597.c5f15027ae1d4aa1d5b6046aea6f63a4. into tmpdir=hdfs://hdpmgr001.pse.movenetworks.com:8020/hbase/compound4/c5f15027ae1d4aa1d5b6046aea6f63a4/.tmp, seqid=20650509441, totalSize=491.0
> 2013-09-21 09:19:10,818 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultReplication
> 2013-09-21 09:19:10,818 INFO org.apache.hadoop.hbase.util.FSUtils: FileSystem doesn't support getDefaultBlockSize
> 2013-09-21 09:19:10,824 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://hdpmgr001.pse.movenetworks.com:8020/hbase/compound4/c5f15027ae1d4aa1d5b6046aea6f63a4/.tmp/89ab4a09d8c44b9983439fdb3fb4497a: ROW, CompoundBloomFilterWriter
> 2013-09-21 09:19:10,824 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://hdpmgr001.pse.movenetworks.com:8020/hbase/compound4/c5f15027ae1d4aa1d5b6046aea6f63a4/.tmp/89ab4a09d8c44b9983439fdb3fb4497a: CompoundBloomFilterWriter
> 2013-09-21 09:19:10,841 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://hdpmgr001.pse.movenetworks.com:8020/hbase/compound4/c5f15027ae1d4aa1d5b6046aea6f63a4/.tmp/89ab4a09d8c44b9983439fdb3fb4497a)
> 2013-09-21 09:19:10,871 INFO org.apache.hadoop.hbase.regionserver.StoreFile: HFile Bloom filter type for 89ab4a09d8c44b9983439fdb3fb4497a: NONE, but ROW specified in column family configuration
> 2013-09-21 09:19:10,871 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://hdpmgr001.pse.movenetworks.com:8020/hbase/compound4/c5f15027ae1d4aa1d5b6046aea6f63a4/.tmp/89ab4a09d8c44b9983439fdb3fb4497a to hdfs://hdpmgr001.pse.movenetworks.com:8020/hbase/compound4/c5f15027ae1d4aa1d5b6046aea6f63a4/d/89ab4a09d8c44b9983439fdb3fb4497a
> 2013-09-21 09:19:10,949 INFO org.apache.hadoop.hbase.regionserver.StoreFile: HFile Bloom filter type for 89ab4a09d8c44b9983439fdb3fb4497a: NONE, but ROW specified in column family configuration
> 2013-09-21 09:19:10,969 INFO org.apache.hadoop.hbase.regionserver.Store: Completed compaction of 1 file(s) in d of compound4,pub-aid\x09CBD46B77\x09\x80\x00\x00\x00\x00\x05\xBB/\x80\x00\x01>\xDD1\xAB`,1371513244597.c5f15027ae1d4aa1d5b6046aea6f63a4. into 89ab4a09d8c44b9983439fdb3fb4497a, size=491.0; total size for store is 815.0m
> 2013-09-21 09:19:10,969 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=compound4,pub-aid\x09CBD46B77\x09\x80\x00\x00\x00\x00\x05\xBB/\x80\x00\x01>\xDD1\xAB`,1371513244597.c5f15027ae1d4aa1d5b6046aea6f63a4., storeName=d, fileCount=1, fileSize=491.0, priority=-18, time=16739078567419356; duration=0sec
> 2013-09-21 09:19:10,969 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactSelection: Deleting the expired store file by compaction: hdfs://hdpmgr001.pse.movenetworks.com:8020/hbase/compound4/c5f15027ae1d4aa1d5b6046aea6f63a4/d/89ab4a09d8c44b9983439fdb3fb4497a whose maxTimeStamp is -1 while the max expired timestamp is 1371136750969
> 2013-09-21 09:19:10,970 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on d in region compound4,pub-aid\x09CBD46B77\x09\x80\x00\x00\x00\x00\x05\xBB/\x80\x00\x01>\xDD1\xAB`,1371513244597.c5f15027ae1d4aa1d5b6046aea6f63a4.
> 2013-09-21 09:19:10,970 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 1 file(s) in d of compound4,pub-aid\x09CBD46B77\x09\x80\x00\x00\x00\x00\x05\xBB/\x80\x00\x01>\xDD1\xAB`,1371513244597.c5f15027ae1d4aa1d5b6046aea6f63a4. into tmpdir=hdfs://hdpmgr001.pse.movenetworks.com:8020/hbase/compound4/c5f15027ae1d4aa1d5b6046aea6f63a4/.tmp, seqid=20650509441, totalSize=491.0
>
> ---------------------
>
> On Wed, Sep 4, 2013 at 11:27 AM, Tom Brown <[email protected]> wrote:
>
>> No, just one column family (called "d", not surprisingly).
>>
>> --Tom
>>
>> On Wed, Sep 4, 2013 at 9:54 AM, Jimmy Xiang <[email protected]> wrote:
>>
>>> Here "d" should be the column family being compacted.
>>> Do you have 3-5 column families of the same region being compacted?
>>>
>>> On Wed, Sep 4, 2013 at 8:36 AM, Tom Brown <[email protected]> wrote:
>>>
>>> > Is it normal to receive 3-5 distinct "Compaction Complete" statuses for the same region each second? For any individual region, it continuously generates "Compacting d in {theregion}... Compaction Complete" statuses for minutes or hours.
>>> >
>>> > In that status message, what is "d"?
>>> >
>>> > --Tom
>>> >
>>> > On Wed, Sep 4, 2013 at 6:21 AM, Frank Chow <[email protected]> wrote:
>>> >
>>> > > Hi Tom,
>>> > > The parameter below may help reduce the number of compactions:
>>> > >
>>> > > <property>
>>> > >   <name>hbase.hstore.compactionThreshold</name>
>>> > >   <value>3</value>
>>> > >   <description>
>>> > >     If more than this number of HStoreFiles in any one HStore
>>> > >     (one HStoreFile is written per flush of memstore) then a compaction
>>> > >     is run to rewrite all HStoreFiles files as one. Larger numbers
>>> > >     put off compaction but when it runs, it takes longer to complete.
>>> > >   </description>
>>> > > </property>
>>> > >
>>> > > Make the value larger; if there are too many small files in the region dir, this may help a lot.
>>> > >
>>> > > Frank Chow
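
A note on Frank's suggestion: the property is normally raised cluster-wide in hbase-site.xml (followed by a region server restart), but it can also be attempted per table via the table descriptor. The sketch below shows one way to do that with the 0.94-era Java client; whether the region server honors a table-level override of this key depends on the HBase version, so treat that as an assumption to verify. The table name "compound4" is taken from the logs above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: raise the minor-compaction trigger for one table by storing the
// property on the table descriptor. Verify on your HBase version that the
// region server actually reads table-level overrides for this key.
public class RaiseCompactionThreshold {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      byte[] table = Bytes.toBytes("compound4");
      HTableDescriptor desc = admin.getTableDescriptor(table);
      // Larger value = more store files allowed before a minor compaction runs.
      desc.setValue("hbase.hstore.compactionThreshold", "6");
      admin.disableTable(table);
      admin.modifyTable(table, desc);
      admin.enableTable(table);
    } finally {
      admin.close();
    }
  }
}

Either way, this tuning would not have cured the loop in this thread, which turned out to be HBASE-9648 (repeated "expired" compactions of an empty store file) rather than ordinary threshold-driven compactions.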
