[ https://issues.apache.org/jira/browse/HBASE-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13776868#comment-13776868 ]

Jean-Marc Spaggiari commented on HBASE-9648:
--------------------------------------------

Here are the details for the file which was causing the issue.

{code}
jmspaggiari@t430s:~/workspace/hbase-0.94.10$ bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -m -s -v -f fca0882dc7624342a8f4fce4b89420ff
13/09/24 14:54:09 INFO util.ChecksumType: Checksum can use java.util.zip.CRC32
Scanning -> fca0882dc7624342a8f4fce4b89420ff
13/09/24 14:54:09 INFO hfile.CacheConfig: Allocating LruBlockCache with maximum size 247.9m
13/09/24 14:54:09 ERROR metrics.SchemaMetrics: Inconsistent configuration. Previous configuration for using table name in metrics: true, new configuration: false
13/09/24 14:54:09 WARN metrics.SchemaConfigured: Could not determine table and column family of the HFile path fca0882dc7624342a8f4fce4b89420ff. Expecting at least 5 path components.
13/09/24 14:54:09 WARN snappy.LoadSnappy: Snappy native library is available
13/09/24 14:54:09 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/09/24 14:54:09 INFO snappy.LoadSnappy: Snappy native library loaded
13/09/24 14:54:09 INFO compress.CodecPool: Got brand-new decompressor
Block index size as per heapsize: 336
reader=fca0882dc7624342a8f4fce4b89420ff,
    compression=snappy,
    cacheConf=CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false],
    firstKey=null,
    lastKey=null,
    avgKeyLen=0,
    avgValueLen=0,
    entries=0,
    length=491
Trailer:
    fileinfoOffset=56,
    loadOnOpenDataOffset=0,
    dataIndexCount=0,
    metaIndexCount=0,
    totalUncomressedBytes=489,
    entryCount=0,
    compressionCodec=SNAPPY,
    uncompressedDataIndexSize=0,
    numDataIndexLevels=1,
    firstDataBlockOffset=-1,
    lastDataBlockOffset=0,
    comparatorClassName=org.apache.hadoop.hbase.KeyValue$KeyComparator,
    majorVersion=2,
    minorVersion=0
Fileinfo:
    DATA_BLOCK_ENCODING = NONE
    DELETE_FAMILY_COUNT = \x00\x00\x00\x00\x00\x00\x00\x00
    EARLIEST_PUT_TS = \x7F\xFF\xFF\xFF\xFF\xFF\xFF\xFF
    MAJOR_COMPACTION_KEY = \x00
    MAX_SEQ_ID_KEY = 19978535453
    TIMERANGE = -1....-1
    hfile.AVG_KEY_LEN = 0
    hfile.AVG_VALUE_LEN = 0
Unable to retrieve the midkey
Bloom filter:
    Not present
Delete Family Bloom filter:
    Not present
Stats:
no data available for statistics
Scanned kv count -> 0
{code}
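
The key fields above are entries=0 and TIMERANGE = -1....-1: the file contains no KeyValues, so its tracked maximum timestamp never moved off the initial -1. Here is a minimal plain-Java sketch (not the actual HBase code; the TTL value is an assumed example) of why such a file always passes a TTL-expiry check:

{code}
// Plain-Java sketch, not HBase code: an empty HFile always looks expired.
public class EmptyFileExpiryCheck {
    public static void main(String[] args) {
        long maxTimestamp = -1L;         // from "TIMERANGE = -1....-1" in the dump above
        long ttl = 24L * 60 * 60 * 1000; // assumed example TTL of one day, in ms
        long now = System.currentTimeMillis();
        // Expiry test: everything in the file is older than (now - ttl).
        // With maxTimestamp == -1 this holds for any positive TTL.
        boolean expired = maxTimestamp < now - ttl;
        System.out.println("empty file considered expired: " + expired); // prints true
    }
}
{code}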
                
> collection of one expired storefile causes it to be replaced by another expired storefile
> -----------------------------------------------------------------------------------------
>
>                 Key: HBASE-9648
>                 URL: https://issues.apache.org/jira/browse/HBASE-9648
>             Project: HBase
>          Issue Type: Bug
>          Components: Compaction
>            Reporter: Sergey Shelukhin
>
> There's a shortcut in compaction selection that selects expired store files
> so they can be quickly deleted.
> However, there's also code that ensures we write at least one file, to
> preserve the max seqnum. This new empty file is presumably itself "expired",
> because it contains no data.
> So it's collected again, and so on (see the sketch below).
> This affects 0.94, and probably also 0.96.
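
To make the cycle concrete, here is a self-contained sketch of the loop (class and method names are illustrative stand-ins, not the actual HBase 0.94 compaction-selection API): the expiry shortcut collects the empty file, the seqnum-preserving step writes an empty replacement, and the replacement immediately qualifies as expired again.

{code}
import java.util.ArrayList;
import java.util.List;

public class ExpiredStoreFileLoop {
    // Stand-in for a store file: only the two fields the loop cares about.
    static final class FileStub {
        final long maxTimestamp; // -1 when the file holds no KeyValues
        final long maxSeqId;     // must survive compaction so seqnums stay correct
        FileStub(long maxTimestamp, long maxSeqId) {
            this.maxTimestamp = maxTimestamp;
            this.maxSeqId = maxSeqId;
        }
    }

    // The expiry shortcut: an empty file (maxTimestamp == -1) always matches.
    static boolean isExpired(FileStub f, long now, long ttl) {
        return f.maxTimestamp < now - ttl;
    }

    public static void main(String[] args) {
        final long ttl = 1000L;
        List<FileStub> store = new ArrayList<FileStub>();
        // Like the dumped file: entries=0, MAX_SEQ_ID_KEY = 19978535453.
        store.add(new FileStub(-1L, 19978535453L));

        for (int round = 1; round <= 3; round++) {
            FileStub f = store.get(0);
            if (isExpired(f, System.currentTimeMillis(), ttl)) {
                store.remove(0);                          // collect the expired file...
                store.add(new FileStub(-1L, f.maxSeqId)); // ...and write an empty replacement
                System.out.println("round " + round
                    + ": replaced one expired empty file with another");
            }
        }
    }
}
{code}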
