[jira] [Updated] (HBASE-5920) New Compactions Logic can silently prevent user-initiated compactions from occurring

2012-05-18 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-5920:
-

Attachment: 5290-094.txt

Here is what I applied to 0.94.  It's the trunk patch with some minor fixup in 
HRegionServer.

 New Compactions Logic can silently prevent user-initiated compactions from 
 occurring
 

 Key: HBASE-5920
 URL: https://issues.apache.org/jira/browse/HBASE-5920
 Project: HBase
  Issue Type: Bug
  Components: client, regionserver
Affects Versions: 0.92.1
Reporter: Derek Wollenstein
Priority: Minor
  Labels: compaction
 Attachments: 5290-094.txt, HBASE-5920-0.92.1-1.patch, 
 HBASE-5920-0.92.1-2.patch, HBASE-5920-0.92.1.patch, HBASE-5920-trunk-1.patch, 
 HBASE-5920-trunk.patch


 There seem to be some tuning settings under which manually triggered major 
 compactions will silently do nothing.
 The relevant logic is in Store.java, in the function
   List<StoreFile> compactSelection(List<StoreFile> candidates)
 When a user manually triggers a compaction, it follows the same logic as a 
 normal compaction check.  When a user manually triggers a major compaction, 
 something similar happens.  Putting this all together:
 1. If a user triggers a major compaction, this is checked against a max-files 
 threshold (hbase.hstore.compaction.max). If the number of storefiles to 
 compact is > max files, then we downgrade to a minor compaction.
 2. If we are in a minor compaction, we do the following checks:
    a. If the file is less than a minimum size 
 (hbase.hstore.compaction.min.size), we automatically include it.
    b. Otherwise, we check how its size compares to the next largest size, 
 based on hbase.hstore.compaction.ratio.
    c. If the number of files included is less than a minimum count 
 (hbase.hstore.compaction.min), then we don't compact.
 In many of the exit strategies, we aren't seeing an error message.
 The net-net of this is that if we have a mix of very large and very small 
 files, we may end up having too many files to do a major compact, but too few 
 files to do a minor compact.
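
To make the flow above easier to follow, here is a minimal toy model of steps 1 through 2c as described. It is not the actual Store.compactSelection() code: the class, method, and field names are invented, sizes are plain longs, and the defaults are the 0.92-era values of the hbase.hstore.compaction.* settings as described in this report.

    import java.util.ArrayList;
    import java.util.List;

    /** Toy model of the selection flow above -- NOT the real Store.compactSelection(). */
    public class CompactionSelectionSketch {
      long minCompactSize = 128L * 1024 * 1024;  // hbase.hstore.compaction.min.size (128M here)
      double ratio = 1.2;                        // hbase.hstore.compaction.ratio (default 1.2)
      int minFilesToCompact = 3;                 // hbase.hstore.compaction.min (default 3)
      int maxFilesToCompact = 10;                // hbase.hstore.compaction.max (default 10)

      List<Long> select(List<Long> sizes, boolean userRequestedMajor) {
        // Step 1: a requested major compaction covering more than max files
        // is silently downgraded to a minor one.
        if (userRequestedMajor && sizes.size() <= maxFilesToCompact) {
          return sizes;  // stays major: compact every file
        }

        // Step 2: minor-compaction selection.
        List<Long> selected = new ArrayList<Long>();
        for (int i = 0; i < sizes.size(); i++) {
          long size = sizes.get(i);
          long sumOfLaterFiles = 0;
          for (int j = i + 1; j < sizes.size(); j++) {
            sumOfLaterFiles += sizes.get(j);
          }
          // 2a: files below the minimum size are always eligible.
          // 2b: larger files must not dwarf the rest (the ratio check).
          if (size < minCompactSize || size <= sumOfLaterFiles * ratio) {
            selected.add(size);
          }
        }

        // 2c: too few eligible files -- nothing gets compacted, and in 0.92
        // this exit can be completely silent.
        if (selected.size() < minFilesToCompact) {
          return new ArrayList<Long>();
        }
        return selected;
      }
    }

The key point is the first branch: the user's "major" intent is dropped before any of the minor-compaction checks run, with nothing logged about the downgrade.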
 I'm trying to go through and see if I'm understanding things correctly, but 
 this seems like the bug.
 To put it another way:
 2012-05-02 20:09:36,389 DEBUG 
 org.apache.hadoop.hbase.regionserver.CompactSplitThread: Large Compaction 
 requested: 
 regionName=str,44594594594594592,1334939064521.f7aed25b55d4d7988af763bede9ce74e.,
  store
 Name=c, fileCount=15, fileSize=1.5g (20.2k, 362.5m, 155.3k, 3.0m, 30.7k, 
 361.2m, 6.9m, 4.7m, 14.7k, 363.4m, 30.9m, 3.2m, 7.3k, 362.9m, 23.5m), 
 priority=-9, time=3175046817624398; Because: Recursive enqueue; 
 compaction_queue=(59:0), split_queue=0
 When we had a minimum compaction size of 128M and default settings for 
 hbase.hstore.compaction.min, hbase.hstore.compaction.max, and 
 hbase.hstore.compaction.ratio, we were not getting a compaction to run even if we ran
 major_compact 
 'str,44594594594594592,1334939064521.f7aed25b55d4d7988af763bede9ce74e.' from 
 the ruby shell.  Note that we had many tiny store files (20k, 155k, 3m, 30k, ...) 
 and several large store files (362.5m, 361.2m, 363.4m, 362.9m).  I think the bimodal 
 nature of the sizes prevented us from doing a compaction.
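
As a rough back-of-the-envelope check (purely illustrative; the real selection walks the files in sequence order, so the exact numbers differ), feeding rounded versions of the sizes from the log line above through the toy selector sketched earlier shows both effects: the file count trips the downgrade, and the ratio check throws out the large files.

    import java.util.Arrays;
    import java.util.List;

    public class Hbase5920Numbers {
      public static void main(String[] args) {
        long k = 1024, m = 1024 * k;
        // Rounded versions of the fifteen file sizes from the log line above.
        List<Long> sizes = Arrays.asList(
            20 * k, 362 * m, 155 * k, 3 * m, 31 * k, 361 * m, 7 * m, 5 * m,
            15 * k, 363 * m, 31 * m, 3 * m, 7 * k, 363 * m, 24 * m);

        // 15 files > hbase.hstore.compaction.max (10): the requested major
        // compaction is downgraded to a minor one.
        System.out.println("fileCount = " + sizes.size());

        // The small files together are only ~73m, so 1.2 x that (under 90m) is
        // far below each ~362m file -- the ratio check excludes every large
        // file from the minor selection.
        long smallTotal = 0;
        for (long s : sizes) {
          if (s < 128 * m) smallTotal += s;
        }
        System.out.println("small-file total = " + (smallTotal / m) + "m, "
            + "ratio bar = " + (long) (smallTotal * 1.2 / m) + "m");
      }
    }

Even if a minor compaction of the remaining small files does run, the four ~362m files are never rewritten, so from the user's point of view the requested major compaction never happens.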
 I'm not 100% sure where this errored out because when I manually triggered a 
 compaction, I did not see
 '  // if we don't have enough files to compact, just wait 
    if (filesToCompact.size() < this.minFilesToCompact) {
      if (LOG.isDebugEnabled()) {
        LOG.debug("Skipped compaction of " + this.storeNameStr
          + ".  Only " + (end - start) + " file(s) of size "
          + StringUtils.humanReadableInt(totalSize)
          + " have met compaction criteria.");
      }
    }
 ' 
 being printed in the logs (and I know DEBUG logging was enabled because I saw 
 this elsewhere).  
 I'd be happy with better error messages when we decide not to compact for 
 user-initiated compactions.
 I'd also like to see some override that says a user-triggered major compaction 
 always occurs, but maybe that's a bad idea for other reasons.
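
For reference, the same manual trigger can be issued through the Java client API instead of the shell. A minimal sketch, assuming a reachable cluster and the 0.92-era HBaseAdmin API: the call is asynchronous and returns no status, which is why the only place a silent downgrade can show up is the region server log.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class TriggerMajorCompaction {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        // Equivalent of the shell's major_compact '<table or region name>':
        // the request is queued on the region server and this call returns
        // immediately, with no indication if the compaction is later downgraded.
        admin.majorCompact(
            "str,44594594594594592,1334939064521.f7aed25b55d4d7988af763bede9ce74e.");
      }
    }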

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5920) New Compactions Logic can silently prevent user-initiated compactions from occurring

2012-05-18 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-5920:
-


Committed to 0.92, 0.94, and trunk.  Thanks for the patch, Derek.





[jira] [Updated] (HBASE-5920) New Compactions Logic can silently prevent user-initiated compactions from occurring

2012-05-16 Thread Derek Wollenstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Wollenstein updated HBASE-5920:
-

Status: Open  (was: Patch Available)





[jira] [Updated] (HBASE-5920) New Compactions Logic can silently prevent user-initiated compactions from occurring

2012-05-16 Thread Derek Wollenstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Wollenstein updated HBASE-5920:
-

Attachment: (was: HBASE-5920.patch)





[jira] [Updated] (HBASE-5920) New Compactions Logic can silently prevent user-initiated compactions from occurring

2012-05-16 Thread Derek Wollenstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Wollenstein updated HBASE-5920:
-

Attachment: HBASE-5920-0.92.1.patch

Creating a new version of the patch with svn





[jira] [Updated] (HBASE-5920) New Compactions Logic can silently prevent user-initiated compactions from occurring

2012-05-16 Thread Derek Wollenstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Wollenstein updated HBASE-5920:
-

Release Note: 
This patch makes the following changes:
1) Trace-level logging whenever a compaction is requested
2) Debug-level logging whenever the requested compaction type is changed
3) If a user requests a major compaction, it will stay a major compaction 
even if it violates hbase.hstore.compaction.max (easy to take this part out)
3a) If a user-initiated major compaction requires too many files to be 
compacted, this will log an error.
4) Migrates utility functions from HBaseTestCase (deprecated?) to 
HBaseTestingUtility to ease testing compaction behavior in TestCompaction
  Status: Patch Available  (was: Open)

Trying one more time -- I was just using diff rather than svn diff last time. 


[jira] [Updated] (HBASE-5920) New Compactions Logic can silently prevent user-initiated compactions from occurring

2012-05-16 Thread Derek Wollenstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Wollenstein updated HBASE-5920:
-

Attachment: HBASE-5920-0.92.1-1.patch

changing location again





[jira] [Updated] (HBASE-5920) New Compactions Logic can silently prevent user-initiated compactions from occurring

2012-05-16 Thread Derek Wollenstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Wollenstein updated HBASE-5920:
-

Attachment: HBASE-5920-trunk.patch

Thanks @zhihong - I've re-created my patch against trunk.  I've also removed 
some of the refactoring of unit tests since it looks like HBaseTestCase is 
continuing to see expanded use (and I don't want to mix a test refactoring with 
a compaction change)





[jira] [Updated] (HBASE-5920) New Compactions Logic can silently prevent user-initiated compactions from occurring

2012-05-16 Thread Derek Wollenstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Wollenstein updated HBASE-5920:
-

Attachment: (was: HBASE-5920-trunk.patch)





[jira] [Updated] (HBASE-5920) New Compactions Logic can silently prevent user-initiated compactions from occurring

2012-05-16 Thread Derek Wollenstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Wollenstein updated HBASE-5920:
-

Attachment: HBASE-5920-trunk.patch

That was odd -- the patch in the Jenkins log was empty -- trying to upload again.





[jira] [Updated] (HBASE-5920) New Compactions Logic can silently prevent user-initiated compactions from occurring

2012-05-15 Thread Derek Wollenstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Wollenstein updated HBASE-5920:
-

Status: Patch Available  (was: Open)

This patch makes the following changes:
1) Trace-level logging whenever a compaction is requested
2) Debug-level logging whenever the requested compaction type is changed
3) If a user requests a major compaction, it will stay a major compaction 
even if it violates hbase.hstore.compaction.max (easy to take this part out)
  3a) If a user-initiated major compaction requires too many files to be 
compacted, this will log an error (see the sketch below).
4) Migrates utility functions from HBaseTestCase (deprecated?) to 
HBaseTestingUtility to ease testing compaction behavior in TestCompaction
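
A rough sketch of what items 3 and 3a describe -- this is not the patch itself; the class, method, and logger usage below are invented for illustration:

    import java.util.List;
    import java.util.logging.Logger;

    /** Illustrative only -- shows the intent of items 3/3a, not the actual patch. */
    public class ForcedMajorCompactionSketch {
      private static final Logger LOG = Logger.getLogger("ForcedMajorCompactionSketch");
      int maxFilesToCompact = 10;  // hbase.hstore.compaction.max

      List<Long> select(List<Long> candidateSizes, boolean userRequestedMajor) {
        if (userRequestedMajor) {
          if (candidateSizes.size() > maxFilesToCompact) {
            // Old behavior: silently fall through to minor selection.
            // New behavior: keep the major compaction and log the situation.
            LOG.severe("User-requested major compaction of " + candidateSizes.size()
                + " files exceeds hbase.hstore.compaction.max=" + maxFilesToCompact
                + "; compacting all of them anyway");
          }
          return candidateSizes;  // stays a major compaction
        }
        return minorSelection(candidateSizes);
      }

      private List<Long> minorSelection(List<Long> candidateSizes) {
        // Normal ratio/min/max checks would go here (see the earlier sketch).
        return candidateSizes;
      }
    }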


[jira] [Updated] (HBASE-5920) New Compactions Logic can silently prevent user-initiated compactions from occurring

2012-05-15 Thread Derek Wollenstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Wollenstein updated HBASE-5920:
-

Attachment: HBASE-5920.patch

I'm not sure if this is being done correctly, but I've provided a patch that 
seems to do the right thing here.  It includes a large change area because of 
the refactoring of HBaseTestCase utilities into HBaseTestingUtility.
