[jira] [Commented] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-26 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464501#comment-13464501
 ] 

Ted Yu commented on HBASE-6871:
---

I assume this bug needs to be fixed in 0.92 branch as well.

> HFileBlockIndex Write Error BlockIndex in HFile V2
> --
>
> Key: HBASE-6871
> URL: https://issues.apache.org/jira/browse/HBASE-6871
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.94.1
> Environment: redhat 5u4
>Reporter: Fenng Wang
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 428a400628ae412ca45d39fce15241fd.hfile, 6871.txt, 
> 787179746cc347ce9bb36f1989d17419.hfile, 
> 960a026ca370464f84903ea58114bc75.hfile, 
> d0026fa8d59b4df291718f59dd145aad.hfile, D5703.1.patch, D5703.2.patch, 
> D5703.3.patch, D5703.4.patch, D5703.5.patch, hbase-6871-0.94.patch, 
> ImportHFile.java, test_hfile_block_index.sh
>
>
> After writing some data, both compaction and scan operations fail; the 
> exception message is below:
> 2012-09-18 06:32:26,227 ERROR 
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: 
> Compaction failed 
> regionName=hfile_test,,1347778722498.d220df43fb9d8af4633bd7f547613f9e., 
> storeName=page_info, fileCount=7, fileSize=1.3m (188.0k, 188.0k, 188.0k, 
> 188.0k, 188.0k, 185.8k, 223.3k), priority=9, 
> time=45826250816757428java.io.IOException: Could not reseek 
> StoreFileScanner[HFileScanner for reader 
> reader=hdfs://hadoopdev1.cm6:9000/hbase/hfile_test/d220df43fb9d8af4633bd7f547613f9e/page_info/b0f6118f58de47ad9d87cac438ee0895,
>  compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] 
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], 
> firstKey=http://com.truereligionbrandjeans.www/Womens_Dresses/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/4010.html/page_info:anchor_sig/1347764439449/DeleteColumn,
>  lastKey=http://com.trura.www//page_info:page_type/1347763395089/Put, 
> avgKeyLen=776, avgValueLen=4, entries=12853, length=228611, 
> cur=http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/1347764003865/Put/vlen=1/ts=0]
>  to key 
> http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/OLDEST_TIMESTAMP/Minimum/vlen=0/ts=0
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:178)
> 
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
> 
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:299)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:244)
> 
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)
> 
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:402)
> at 
> org.apache.hadoop.hbase.regionserver.Store.compactStore(Store.java:1570)  
>   
> at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:997) 
>
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1216)
> at 
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:250)
> 
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> 
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: Expected block type LEAF_INDEX, but got 
> INTERMEDIATE_INDEX: blockType=INTERMEDIATE_INDEX, 
> onDiskSizeWithoutHeader=8514, uncompressedSizeWithoutHeader=131837, 
> prevBlockOffset=-1, 
> dataBeginsWith=\x00\x00\x00\x9B\x00\x00\x00\x00\x00\x00\x03#\x00\x00\x050\x00\x00\x08\xB7\x00\x00\x0Cr\x00\x00\x0F\xFA\x00\x00\x120,
>  fileOffset=218942at 
> org.apache.hadoop.hbase.io.
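
In the trace above, the reseek appears to follow a block-index entry that lands on an 
INTERMEDIATE_INDEX block where a LEAF_INDEX block was expected, and the reader's 
block-type check turns that into the IOException shown. Below is a hypothetical, 
self-contained sketch of that kind of check; the enum and class names are illustrative 
assumptions, not the actual HFileReaderV2 code.

{code}
import java.io.IOException;

// Illustrative only: a minimal stand-in for the reader-side block type check.
enum BlockType { DATA, LEAF_INDEX, INTERMEDIATE_INDEX, ROOT_INDEX }

public class BlockTypeCheckSketch {
  // Throws when the block found on disk is not the type the index led us to expect.
  static void validateBlockType(BlockType actual, BlockType expected) throws IOException {
    if (actual != expected) {
      throw new IOException("Expected block type " + expected + ", but got " + actual);
    }
  }

  public static void main(String[] args) throws IOException {
    // A reseek that follows a bad index entry ends up in this situation:
    validateBlockType(BlockType.INTERMEDIATE_INDEX, BlockType.LEAF_INDEX);
  }
}
{code}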

[jira] [Commented] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464496#comment-13464496
 ] 

Lars Hofhansl commented on HBASE-6871:
--

Patch looks good (as far as I can tell, I'll trust Mikhail on the initial 
version).
I'll make a 0.94 patch tomorrow.

> HFileBlockIndex Write Error BlockIndex in HFile V2
> --
>
> Key: HBASE-6871
> URL: https://issues.apache.org/jira/browse/HBASE-6871
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.94.1
> Environment: redhat 5u4
>Reporter: Fenng Wang
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 428a400628ae412ca45d39fce15241fd.hfile, 6871.txt, 
> 787179746cc347ce9bb36f1989d17419.hfile, 
> 960a026ca370464f84903ea58114bc75.hfile, 
> d0026fa8d59b4df291718f59dd145aad.hfile, D5703.1.patch, D5703.2.patch, 
> D5703.3.patch, D5703.4.patch, D5703.5.patch, hbase-6871-0.94.patch, 
> ImportHFile.java, test_hfile_block_index.sh
>
>
> After writing some data, both compaction and scan operations fail; the 
> exception message is below:
> 2012-09-18 06:32:26,227 ERROR 
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: 
> Compaction failed 
> regionName=hfile_test,,1347778722498.d220df43fb9d8af4633bd7f547613f9e., 
> storeName=page_info, fileCount=7, fileSize=1.3m (188.0k, 188.0k, 188.0k, 
> 188.0k, 188.0k, 185.8k, 223.3k), priority=9, 
> time=45826250816757428java.io.IOException: Could not reseek 
> StoreFileScanner[HFileScanner for reader 
> reader=hdfs://hadoopdev1.cm6:9000/hbase/hfile_test/d220df43fb9d8af4633bd7f547613f9e/page_info/b0f6118f58de47ad9d87cac438ee0895,
>  compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] 
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], 
> firstKey=http://com.truereligionbrandjeans.www/Womens_Dresses/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/4010.html/page_info:anchor_sig/1347764439449/DeleteColumn,
>  lastKey=http://com.trura.www//page_info:page_type/1347763395089/Put, 
> avgKeyLen=776, avgValueLen=4, entries=12853, length=228611, 
> cur=http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/1347764003865/Put/vlen=1/ts=0]
>  to key 
> http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/OLDEST_TIMESTAMP/Minimum/vlen=0/ts=0
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:178)
> 
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
> 
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:299)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:244)
> 
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)
> 
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:402)
> at 
> org.apache.hadoop.hbase.regionserver.Store.compactStore(Store.java:1570)  
>   
> at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:997) 
>
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1216)
> at 
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:250)
> 
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> 
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: Expected block type LEAF_INDEX, but got 
> INTERMEDIATE_INDEX: blockType=INTERMEDIATE_INDEX, 
> onDiskSizeWithoutHeader=8514, uncompressedSizeWithoutHeader=131837, 
> prevBlockOffset=-1, 
> dataBeginsWith=\x00\x00\x00\x9B\x00\x00\x00\x00\x00\x00\x03#\x00\x00\x050\x00\x00\x08\xB7\x00\x00\x0Cr\x00\x00\x0F\xFA\x00

[jira] [Commented] (HBASE-6875) Remove commons-httpclient, -component, and up versions on other jars (remove unused repository)

2012-09-26 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464493#comment-13464493
 ] 

Ted Yu commented on HBASE-6875:
---

I wonder if the following build error is related to this patch (trunk build 
3383):
{code}
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR]   The project org.apache.hbase:hbase-server:0.95-SNAPSHOT ( has 2 errors
[ERROR] 'dependencies.dependency.version' for 
commons-configuration:commons-configuration:jar is missing. @ line 318, column 
17
[ERROR] 'dependencies.dependency.version' for 
commons-httpclient:commons-httpclient:jar is missing. @ line 330, column 17
{code}

> Remove commons-httpclient, -component, and up versions on other jars (remove 
> unused repository)
> ---
>
> Key: HBASE-6875
> URL: https://issues.apache.org/jira/browse/HBASE-6875
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.96.0
>Reporter: stack
>Assignee: stack
> Fix For: 0.96.0
>
> Attachments: pom.txt
>
>




[jira] [Commented] (HBASE-6879) Add HBase Code Template

2012-09-26 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464492#comment-13464492
 ] 

Jesse Yates commented on HBASE-6879:


Yeah, it's not setting for me either. I've never seen this not work on older 
versions. Can you get any template to work?

> Add HBase Code Template
> ---
>
> Key: HBASE-6879
> URL: https://issues.apache.org/jira/browse/HBASE-6879
> Project: HBase
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Attachments: HBase Code Template.xml
>
>
> Add a standard code template to go along with the code formatter for HBase. 
> This helps make sure people have the correct license and general commenting 
> for auto-generated elements.



[jira] [Commented] (HBASE-6876) Clean up WARNs and log messages around startup

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464487#comment-13464487
 ] 

Hudson commented on HBASE-6876:
---

Integrated in HBase-TRUNK #3383 (See 
[https://builds.apache.org/job/HBase-TRUNK/3383/])
HBASE-6876 Clean up WARNs and log messages around startup; REAPPLY 
(Revision 1390848)
HBASE-6876 Clean up WARNs and log messages around startup; REVERT OF OVERCOMMIT 
(Revision 1390847)
HBASE-6876 Clean up WARNs and log messages around startup (Revision 1390846)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcEngine.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java

stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcEngine.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java

stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcEngine.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java


> Clean up WARNs and log messages around startup
> --
>
> Key: HBASE-6876
> URL: https://issues.apache.org/jira/browse/HBASE-6876
> Project: HBase
>  Issue Type: Improvement
>Reporter: stack
>Assignee: stack
> Fix For: 0.96.0
>
> Attachments: logging2.txt, logging.txt
>
>
> I was looking at our startup messages and some of the 'normal' messages are a 
> bit frightening at face value.



[jira] [Commented] (HBASE-6875) Remove commons-httpclient, -component, and up versions on other jars (remove unused repository)

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464486#comment-13464486
 ] 

Hudson commented on HBASE-6875:
---

Integrated in HBase-TRUNK #3383 (See 
[https://builds.apache.org/job/HBase-TRUNK/3383/])
HBASE-6875 Remove commons-httpclient, -component, and up versions on other 
jars (remove unused repository) (Revision 1390858)

 Result = FAILURE
stack : 
Files : 
* /hbase/trunk/pom.xml


> Remove commons-httpclient, -component, and up versions on other jars (remove 
> unused repository)
> ---
>
> Key: HBASE-6875
> URL: https://issues.apache.org/jira/browse/HBASE-6875
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.96.0
>Reporter: stack
>Assignee: stack
> Fix For: 0.96.0
>
> Attachments: pom.txt
>
>




[jira] [Updated] (HBASE-6875) Remove commons-httpclient, -component, and up versions on other jars (remove unused repository)

2012-09-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6875:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Release Note: Removed unused libs commons-httpclient and 
commons-component.  Upped commons-codec to 1.7 from 1.4, commons-io from 2.1 to 
2.4, commons-lang from 2.5 to 2.6, jruby from 1.6.5 to 1.6.8 (1.7 jruby is 14M, 
1.6 is 10M), mockito-all from 1.9 to 2.4.1, zookeeper from 3.4.3 to 3.4.4
   Status: Resolved  (was: Patch Available)

Committed to trunk.

> Remove commons-httpclient, -component, and up versions on other jars (remove 
> unused repository)
> ---
>
> Key: HBASE-6875
> URL: https://issues.apache.org/jira/browse/HBASE-6875
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.96.0
>Reporter: stack
>Assignee: stack
> Fix For: 0.96.0
>
> Attachments: pom.txt
>
>




[jira] [Commented] (HBASE-6878) DistributerLogSplit can fail to resubmit a task done if there is an exception during the log archiving

2012-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464471#comment-13464471
 ] 

stack commented on HBASE-6878:
--

Under what circumstances will taskFinisher not return Status.DONE?  If we get 
an IOE splitting the log, it seems.  And that could happen within the timeout... 
and the task won't be resubmitted if we pass in CHECK?  I don't know this code 
well.  Jimmy, do you have an opinion?  Let me try pinging Prakash.


> DistributerLogSplit can fail to resubmit a task done if there is an exception 
> during the log archiving
> --
>
> Key: HBASE-6878
> URL: https://issues.apache.org/jira/browse/HBASE-6878
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: nkeywal
>Priority: Minor
>
> The code in SplitLogManager# getDataSetWatchSuccess is:
> {code}
> if (slt.isDone()) {
>   LOG.info("task " + path + " entered state: " + slt.toString());
>   if (taskFinisher != null && !ZKSplitLog.isRescanNode(watcher, path)) {
>     if (taskFinisher.finish(slt.getServerName(),
>         ZKSplitLog.getFileName(path)) == Status.DONE) {
>       setDone(path, SUCCESS);
>     } else {
>       resubmitOrFail(path, CHECK);
>     }
>   } else {
>     setDone(path, SUCCESS);
>   }
> {code}
>   resubmitOrFail(path, CHECK);
> should be 
>   resubmitOrFail(path, FORCE);
> Without it, the task won't be resubmitted if the delay is not reached, and 
> the task will be marked as failed.
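
To make the CHECK vs. FORCE distinction concrete, here is a hypothetical, 
self-contained sketch; the class names and delay value are illustrative 
assumptions, not the actual SplitLogManager implementation. With CHECK, a 
resubmit attempted before the retry delay has elapsed is skipped and the task 
ends up marked as failed; FORCE resubmits regardless of the delay.

{code}
// Illustrative only: CHECK skips a resubmit attempted before the retry delay
// has elapsed, FORCE resubmits regardless.
enum ResubmitDirective { CHECK, FORCE }

class SplitTask {
  long lastUpdate = System.currentTimeMillis();
  int attempts = 0;
}

public class ResubmitSketch {
  static final long RESUBMIT_DELAY_MS = 60000;  // made-up delay

  static boolean resubmit(SplitTask task, ResubmitDirective directive) {
    long sinceUpdate = System.currentTimeMillis() - task.lastUpdate;
    if (directive == ResubmitDirective.CHECK && sinceUpdate < RESUBMIT_DELAY_MS) {
      return false;  // skipped: the caller would then mark the task as failed
    }
    task.attempts++;  // handed out again for another attempt
    task.lastUpdate = System.currentTimeMillis();
    return true;
  }

  public static void main(String[] args) {
    SplitTask task = new SplitTask();
    System.out.println(resubmit(task, ResubmitDirective.CHECK));  // false
    System.out.println(resubmit(task, ResubmitDirective.FORCE));  // true
  }
}
{code}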



[jira] [Updated] (HBASE-6876) Clean up WARNs and log messages around startup

2012-09-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6876:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
   Status: Resolved  (was: Patch Available)

Small log message fixes.  Committing to trunk (Committed once then had to 
revert and reapply because I committed more than this patch by mistake).

> Clean up WARNs and log messages around startup
> --
>
> Key: HBASE-6876
> URL: https://issues.apache.org/jira/browse/HBASE-6876
> Project: HBase
>  Issue Type: Improvement
>Reporter: stack
>Assignee: stack
> Fix For: 0.96.0
>
> Attachments: logging2.txt, logging.txt
>
>
> I was looking at our startup messages and some of the 'normal' messages are a 
> bit frightening at face value.



[jira] [Commented] (HBASE-6880) Failure in assigning root causes system hang

2012-09-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464468#comment-13464468
 ] 

ramkrishna.s.vasudevan commented on HBASE-6880:
---

@Jimmy
I think the problem that you are mentioning here is the one that I found in 
HBASE-6698 in one of the QA builds.
Once the RS registers to the master we tend to assignRoot, and there we fail if 
the 'started' flag is not set in the RPC layer.
Should we go as far as the assign flow and then retry the assign? Could there be 
any other better way?  I understand that HBASE-6881 tries to solve this.  What 
do you think?

> Failure in assigning root causes system hang
> 
>
> Key: HBASE-6880
> URL: https://issues.apache.org/jira/browse/HBASE-6880
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>
> In looking into a TestReplication failure, I found out that sometimes assignRoot 
> could fail, for example when the RS is not serving traffic yet.  In this case, the 
> master will keep waiting for root to be available, which could never happen.
>  
> We need to gracefully terminate the master if root is not assigned properly.



[jira] [Commented] (HBASE-6854) Deletion of SPLITTING node on split rollback should clear the region from RIT

2012-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464465#comment-13464465
 ] 

stack commented on HBASE-6854:
--

Looking at the patch, why do we do this:

{code}
-  LOG.debug("Ephemeral node deleted, regionserver crashed?, " +
-"clearing from RIT; rs=" + rs);
+  LOG.debug("Ephemeral node deleted, regionserver crashed?, offlining 
the region"
+  + rs.getRegion() + "clearing from RIT; rs=" + rs);
{code}

The lone 'rs' will become rs.toString().  rs is a RegionState.  When I look at 
RegionState.toString(), it includes the region name, so why add rs.getRegion()?

The above can be fixed on commit; otherwise, this looks like a good bug fix.

> Deletion of SPLITTING node on split rollback should clear the region from RIT
> -
>
> Key: HBASE-6854
> URL: https://issues.apache.org/jira/browse/HBASE-6854
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
> Fix For: 0.94.3
>
> Attachments: HBASE-6854.patch
>
>
> If a failure happens in split before OFFLINING_PARENT, we tend to roll back 
> the split, including deleting the znodes created.
> On deletion of the RS_ZK_SPLITTING node we get a callback but do not remove 
> the region from RIT. We need to remove it from RIT; the SSH logic is well 
> guarded anyway in case the delete event comes from an RS-down scenario.



[jira] [Commented] (HBASE-6876) Clean up WARNs and log messages around startup

2012-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464461#comment-13464461
 ] 

Hadoop QA commented on HBASE-6876:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12546798/logging2.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 hadoop2.0.  The patch compiles against the hadoop 2.0 profile.

-1 javadoc.  The javadoc tool appears to have generated 140 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 6 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2938//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2938//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2938//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2938//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2938//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2938//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2938//console

This message is automatically generated.

> Clean up WARNs and log messages around startup
> --
>
> Key: HBASE-6876
> URL: https://issues.apache.org/jira/browse/HBASE-6876
> Project: HBase
>  Issue Type: Improvement
>Reporter: stack
>Assignee: stack
> Attachments: logging2.txt, logging.txt
>
>
> I was looking at our startup messages and some of the 'normal' messages are a 
> bit frightening at face value.



[jira] [Updated] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6871:
-

Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

> HFileBlockIndex Write Error BlockIndex in HFile V2
> --
>
> Key: HBASE-6871
> URL: https://issues.apache.org/jira/browse/HBASE-6871
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.94.1
> Environment: redhat 5u4
>Reporter: Fenng Wang
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 428a400628ae412ca45d39fce15241fd.hfile, 6871.txt, 
> 787179746cc347ce9bb36f1989d17419.hfile, 
> 960a026ca370464f84903ea58114bc75.hfile, 
> d0026fa8d59b4df291718f59dd145aad.hfile, D5703.1.patch, D5703.2.patch, 
> D5703.3.patch, D5703.4.patch, D5703.5.patch, hbase-6871-0.94.patch, 
> ImportHFile.java, test_hfile_block_index.sh
>
>
> After writing some data, both compaction and scan operations fail; the 
> exception message is below:
> 2012-09-18 06:32:26,227 ERROR 
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: 
> Compaction failed 
> regionName=hfile_test,,1347778722498.d220df43fb9d8af4633bd7f547613f9e., 
> storeName=page_info, fileCount=7, fileSize=1.3m (188.0k, 188.0k, 188.0k, 
> 188.0k, 188.0k, 185.8k, 223.3k), priority=9, 
> time=45826250816757428java.io.IOException: Could not reseek 
> StoreFileScanner[HFileScanner for reader 
> reader=hdfs://hadoopdev1.cm6:9000/hbase/hfile_test/d220df43fb9d8af4633bd7f547613f9e/page_info/b0f6118f58de47ad9d87cac438ee0895,
>  compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] 
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], 
> firstKey=http://com.truereligionbrandjeans.www/Womens_Dresses/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/4010.html/page_info:anchor_sig/1347764439449/DeleteColumn,
>  lastKey=http://com.trura.www//page_info:page_type/1347763395089/Put, 
> avgKeyLen=776, avgValueLen=4, entries=12853, length=228611, 
> cur=http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/1347764003865/Put/vlen=1/ts=0]
>  to key 
> http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/OLDEST_TIMESTAMP/Minimum/vlen=0/ts=0
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:178)
> 
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
> 
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:299)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:244)
> 
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)
> 
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:402)
> at 
> org.apache.hadoop.hbase.regionserver.Store.compactStore(Store.java:1570)  
>   
> at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:997) 
>
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1216)
> at 
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:250)
> 
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> 
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: Expected block type LEAF_INDEX, but got 
> INTERMEDIATE_INDEX: blockType=INTERMEDIATE_INDEX, 
> onDiskSizeWithoutHeader=8514, uncompressedSizeWithoutHeader=131837, 
> prevBlockOffset=-1, 
> dataBeginsWith=\x00\x00\x00\x9B\x00\x00\x00\x00\x00\x00\x03#\x00\x00\x050\x00\x00\x08\xB7\x00\x00\x0Cr\x00\x00\x0F\xFA\x00\x00\x120,
>  fileOffset=218942at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.validateBlockType(HFileReaderV2.java:

[jira] [Updated] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6871:
-

Attachment: 6871.txt

Forward port of Mikhail's 0.89-fb patch.  Minor changes: using getDataTestDir 
instead of getTestDir in the test, and I added categorization (Small -- it ran 
in 3 seconds when I tested it).

> HFileBlockIndex Write Error BlockIndex in HFile V2
> --
>
> Key: HBASE-6871
> URL: https://issues.apache.org/jira/browse/HBASE-6871
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.94.1
> Environment: redhat 5u4
>Reporter: Fenng Wang
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 428a400628ae412ca45d39fce15241fd.hfile, 6871.txt, 
> 787179746cc347ce9bb36f1989d17419.hfile, 
> 960a026ca370464f84903ea58114bc75.hfile, 
> d0026fa8d59b4df291718f59dd145aad.hfile, D5703.1.patch, D5703.2.patch, 
> D5703.3.patch, D5703.4.patch, D5703.5.patch, hbase-6871-0.94.patch, 
> ImportHFile.java, test_hfile_block_index.sh
>
>
> After writing some data, both compaction and scan operations fail; the 
> exception message is below:
> 2012-09-18 06:32:26,227 ERROR 
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: 
> Compaction failed 
> regionName=hfile_test,,1347778722498.d220df43fb9d8af4633bd7f547613f9e., 
> storeName=page_info, fileCount=7, fileSize=1.3m (188.0k, 188.0k, 188.0k, 
> 188.0k, 188.0k, 185.8k, 223.3k), priority=9, 
> time=45826250816757428java.io.IOException: Could not reseek 
> StoreFileScanner[HFileScanner for reader 
> reader=hdfs://hadoopdev1.cm6:9000/hbase/hfile_test/d220df43fb9d8af4633bd7f547613f9e/page_info/b0f6118f58de47ad9d87cac438ee0895,
>  compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] 
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], 
> firstKey=http://com.truereligionbrandjeans.www/Womens_Dresses/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/4010.html/page_info:anchor_sig/1347764439449/DeleteColumn,
>  lastKey=http://com.trura.www//page_info:page_type/1347763395089/Put, 
> avgKeyLen=776, avgValueLen=4, entries=12853, length=228611, 
> cur=http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/1347764003865/Put/vlen=1/ts=0]
>  to key 
> http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/OLDEST_TIMESTAMP/Minimum/vlen=0/ts=0
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:178)
> 
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
> 
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:299)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:244)
> 
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)
> 
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:402)
> at 
> org.apache.hadoop.hbase.regionserver.Store.compactStore(Store.java:1570)  
>   
> at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:997) 
>
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1216)
> at 
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:250)
> 
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> 
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: Expected block type LEAF_INDEX, but got 
> INTERMEDIATE_INDEX: blockType=INTERMEDIATE_INDEX, 
> onDiskSizeWithoutHeader=8514, uncompressedSizeWithoutHeader=131837, 
> prevBlockOffset=-1, 
> dataBeginsWith=\x00\x00\x00\x9B\x00\x00\x00\x00\x00\x00\x03#\x00\x00\x050\x00\x00\x08\xB7\x00\x00\x0Cr\

[jira] [Commented] (HBASE-6854) Deletion of SPLITTING node on split rollback should clear the region from RIT

2012-09-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464443#comment-13464443
 ] 

ramkrishna.s.vasudevan commented on HBASE-6854:
---

Found the problem. It is a testcase-related issue.  The RIT that is obtained in 
the testcase should be updated every time.
Will update the patch and commit it once I reach home.


> Deletion of SPLITTING node on split rollback should clear the region from RIT
> -
>
> Key: HBASE-6854
> URL: https://issues.apache.org/jira/browse/HBASE-6854
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
> Fix For: 0.94.3
>
> Attachments: HBASE-6854.patch
>
>
> If a failure happens in split before OFFLINING_PARENT, we tend to roll back 
> the split, including deleting the znodes created.
> On deletion of the RS_ZK_SPLITTING node we get a callback but do not remove 
> the region from RIT. We need to remove it from RIT; the SSH logic is well 
> guarded anyway in case the delete event comes from an RS-down scenario.



[jira] [Commented] (HBASE-5071) HFile has a possible cast issue.

2012-09-26 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464431#comment-13464431
 ] 

Harsh J commented on HBASE-5071:


We could close it with a note that it does not affect HFileV2.

> HFile has a possible cast issue.
> 
>
> Key: HBASE-5071
> URL: https://issues.apache.org/jira/browse/HBASE-5071
> Project: HBase
>  Issue Type: Bug
>  Components: HFile, io
>Affects Versions: 0.90.0
>Reporter: Harsh J
>  Labels: hfile
>
> HBASE-3040 introduced this line originally in HFile.Reader#loadFileInfo(...):
> {code}
> int allIndexSize = (int)(this.fileSize - this.trailer.dataIndexOffset - 
> FixedFileTrailer.trailerSize());
> {code}
> Which on trunk today, for HFile v1 is:
> {code}
> int sizeToLoadOnOpen = (int) (fileSize - trailer.getLoadOnOpenDataOffset() -
> trailer.getTrailerSize());
> {code}
> This computed (and cast) integer is then used to build an array of the same 
> size. But if fileSize is very large (>> Integer.MAX_VALUE), then there's an 
> easy chance this can go negative at some point and spew out exceptions such 
> as:
> {code}
> java.lang.NegativeArraySizeException 
> at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readAllIndex(HFile.java:805) 
> at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:832) 
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.loadFileInfo(StoreFile.java:1003)
>  
> at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:382) 
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:438)
>  
> at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:267) 
> at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:209) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2088)
>  
> at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:358) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647) 
> {code}
> Did we accidentally limit single region sizes this way?
> (Unsure about HFile v2's structure so far, so do not know if v2 has the same 
> issue.)
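
As a concrete illustration of the overflow described above, the following 
self-contained snippet uses made-up numbers (an assumed 3 GB file and arbitrary 
offsets, for demonstration only) to show the int cast wrapping to a negative 
value and new byte[...] then throwing NegativeArraySizeException.

{code}
public class CastOverflowSketch {
  public static void main(String[] args) {
    long fileSize = 3L * 1024 * 1024 * 1024;  // 3 GB, i.e. larger than Integer.MAX_VALUE
    long dataIndexOffset = 1024;              // made-up offset for demonstration
    long trailerSize = 212;                   // made-up trailer size

    // The long difference is fine, but the int cast wraps around to a negative value.
    long asLong = fileSize - dataIndexOffset - trailerSize;
    int allIndexSize = (int) asLong;
    System.out.println(asLong + " -> " + allIndexSize);  // positive long, negative int

    byte[] buf = new byte[allIndexSize];      // throws NegativeArraySizeException
  }
}
{code}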



[jira] [Commented] (HBASE-6853) IllegalArgument Exception is thrown when an empty region is spliitted.

2012-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464427#comment-13464427
 ] 

stack commented on HBASE-6853:
--

[~ram_krish] Yes.  +1 on HBASE-6853_splitfailure.patch

> IllegalArgument Exception is thrown when an empty region is spliitted.
> --
>
> Key: HBASE-6853
> URL: https://issues.apache.org/jira/browse/HBASE-6853
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.92.1, 0.94.1
>Reporter: ramkrishna.s.vasudevan
> Attachments: HBASE-6853_2_splitsuccess.patch, 
> HBASE-6853_splitfailure.patch
>
>
> This is w.r.t. a mail sent to the dev mailing list.
> Empty region split should be handled gracefully.  Either we should not allow 
> the split to happen if we know that the region is empty, or we should allow 
> the split to happen by setting the number of threads for the thread pool 
> executor to 1.
> {code}
> int nbFiles = hstoreFilesToSplit.size();
> ThreadFactoryBuilder builder = new ThreadFactoryBuilder();
> builder.setNameFormat("StoreFileSplitter-%1$d");
> ThreadFactory factory = builder.build();
> ThreadPoolExecutor threadPool =
>   (ThreadPoolExecutor) Executors.newFixedThreadPool(nbFiles, factory);
> List<Future<Void>> futures = new ArrayList<Future<Void>>(nbFiles);
> {code}
> Here nbFiles needs to be a positive, non-zero value.
>  
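
A self-contained illustration of why nbFiles must be positive: 
Executors.newFixedThreadPool(0) throws IllegalArgumentException because the 
pool's maximum size must be greater than zero. The Math.max clamp shown matches 
the suggestion above to use at least one thread, but it is only one possible 
guard, not the committed fix.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EmptySplitPoolSketch {
  public static void main(String[] args) {
    int nbFiles = 0;  // an empty region has no store files to split
    try {
      Executors.newFixedThreadPool(nbFiles);  // maximum pool size must be > 0
    } catch (IllegalArgumentException e) {
      System.out.println("split would fail here: " + e);
    }

    // Clamping to at least one thread avoids the exception for the empty case.
    ExecutorService pool = Executors.newFixedThreadPool(Math.max(nbFiles, 1));
    pool.shutdown();
  }
}
{code}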



[jira] [Commented] (HBASE-6879) Add HBase Code Template

2012-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464424#comment-13464424
 ] 

stack commented on HBASE-6879:
--

Does it work for you?  I tried doing an import under code templates and it 
didn't seem to show.  The file doesn't seem to have the Apache license in it 
for new files?

> Add HBase Code Template
> ---
>
> Key: HBASE-6879
> URL: https://issues.apache.org/jira/browse/HBASE-6879
> Project: HBase
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Attachments: HBase Code Template.xml
>
>
> Add a standard code template to go along with the code formatter for HBase. 
> This helps make sure people have the correct license and general commenting 
> for auto-generated elements.



[jira] [Updated] (HBASE-6876) Clean up WARNs and log messages around startup

2012-09-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6876:
-

Status: Patch Available  (was: Open)

> Clean up WARNs and log messages around startup
> --
>
> Key: HBASE-6876
> URL: https://issues.apache.org/jira/browse/HBASE-6876
> Project: HBase
>  Issue Type: Improvement
>Reporter: stack
>Assignee: stack
> Attachments: logging2.txt, logging.txt
>
>
> I was looking at our startup messages and some of the 'normal' messages are a 
> bit frightening at face value.



[jira] [Updated] (HBASE-6876) Clean up WARNs and log messages around startup

2012-09-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6876:
-

Status: Open  (was: Patch Available)

> Clean up WARNs and log messages around startup
> --
>
> Key: HBASE-6876
> URL: https://issues.apache.org/jira/browse/HBASE-6876
> Project: HBase
>  Issue Type: Improvement
>Reporter: stack
>Assignee: stack
> Attachments: logging2.txt, logging.txt
>
>
> I was looking at our startup messages and some of the 'normal' messages are a 
> bit frightening at face value.



[jira] [Updated] (HBASE-6876) Clean up WARNs and log messages around startup

2012-09-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6876:
-

Attachment: logging2.txt

Rebase

> Clean up WARNs and log messages around startup
> --
>
> Key: HBASE-6876
> URL: https://issues.apache.org/jira/browse/HBASE-6876
> Project: HBase
>  Issue Type: Improvement
>Reporter: stack
>Assignee: stack
> Attachments: logging2.txt, logging.txt
>
>
> I was looking at our startup messages and some of the 'normal' messages are a 
> bit frightening at face value.



[jira] [Commented] (HBASE-6881) All regionservers are marked offline even there is still one up

2012-09-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464406#comment-13464406
 ] 

ramkrishna.s.vasudevan commented on HBASE-6881:
---

@Jimmy
I typed my comment yesterday but I think it got lost for some reason.
In HBASE-6438 we tried to handle the scenario where a region that is slow in 
processing could transition the node to OPENING, and handleRegion could update 
the in-memory state to OPENING just after the assign retry had changed the 
state to OFFLINE.  That is where we tried to handle HBASE-6438 with some 
special handling.
Rajesh has commented on what I missed out yesterday.

> All regionservers are marked offline even there is still one up
> ---
>
> Key: HBASE-6881
> URL: https://issues.apache.org/jira/browse/HBASE-6881
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: trunk-6881.patch
>
>
> {noformat}
> +RegionPlan newPlan = plan;
> +if (!regionAlreadyInTransitionException) {
> +  // Force a new plan and reassign. Will return null if no servers.
> +  newPlan = getRegionPlan(state, plan.getDestination(), true);
> +}
> +if (newPlan == null) {
>this.timeoutMonitor.setAllRegionServersOffline(true);
>LOG.warn("Unable to find a viable location to assign region " +
>  state.getRegion().getRegionNameAsString());
> {noformat}
> Here, when newPlan is null, plan.getDestination() could be up actually.



[jira] [Commented] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-26 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464390#comment-13464390
 ] 

Phabricator commented on HBASE-6871:


mbautin has closed the revision "[jira] [HBASE-6871] [89-fb] Block index 
corruption test case and fix".

CHANGED PRIOR TO COMMIT
  https://reviews.facebook.net/D5703?vs=18759&id=18765#differential-review-toc

REVISION DETAIL
  https://reviews.facebook.net/D5703

COMMIT
  https://reviews.facebook.net/rHBASEEIGHTNINEFBBRANCH1390819

To: lhofhansl, Kannan, Liyin, stack, JIRA, mbautin


> HFileBlockIndex Write Error BlockIndex in HFile V2
> --
>
> Key: HBASE-6871
> URL: https://issues.apache.org/jira/browse/HBASE-6871
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.94.1
> Environment: redhat 5u4
>Reporter: Fenng Wang
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 428a400628ae412ca45d39fce15241fd.hfile, 
> 787179746cc347ce9bb36f1989d17419.hfile, 
> 960a026ca370464f84903ea58114bc75.hfile, 
> d0026fa8d59b4df291718f59dd145aad.hfile, D5703.1.patch, D5703.2.patch, 
> D5703.3.patch, D5703.4.patch, D5703.5.patch, hbase-6871-0.94.patch, 
> ImportHFile.java, test_hfile_block_index.sh
>
>
> After writing some data, both compaction and scan operations fail; the 
> exception message is below:
> 2012-09-18 06:32:26,227 ERROR 
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: 
> Compaction failed 
> regionName=hfile_test,,1347778722498.d220df43fb9d8af4633bd7f547613f9e., 
> storeName=page_info, fileCount=7, fileSize=1.3m (188.0k, 188.0k, 188.0k, 
> 188.0k, 188.0k, 185.8k, 223.3k), priority=9, 
> time=45826250816757428java.io.IOException: Could not reseek 
> StoreFileScanner[HFileScanner for reader 
> reader=hdfs://hadoopdev1.cm6:9000/hbase/hfile_test/d220df43fb9d8af4633bd7f547613f9e/page_info/b0f6118f58de47ad9d87cac438ee0895,
>  compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] 
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], 
> firstKey=http://com.truereligionbrandjeans.www/Womens_Dresses/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/4010.html/page_info:anchor_sig/1347764439449/DeleteColumn,
>  lastKey=http://com.trura.www//page_info:page_type/1347763395089/Put, 
> avgKeyLen=776, avgValueLen=4, entries=12853, length=228611, 
> cur=http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/1347764003865/Put/vlen=1/ts=0]
>  to key 
> http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/OLDEST_TIMESTAMP/Minimum/vlen=0/ts=0
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:178)
> 
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
> 
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:299)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:244)
> 
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)
> 
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:402)
> at 
> org.apache.hadoop.hbase.regionserver.Store.compactStore(Store.java:1570)  
>   
> at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:997) 
>
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1216)
> at 
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:250)
> 
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> 
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: Expected block type LEAF_INDEX, but got 
> INTERMEDIATE_IN

[jira] [Commented] (HBASE-6679) RegionServer aborts due to race between compaction and split

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464381#comment-13464381
 ] 

Hudson commented on HBASE-6679:
---

Integrated in HBase-TRUNK #3382 (See 
[https://builds.apache.org/job/HBase-TRUNK/3382/])
HBASE-6679 RegionServer aborts due to race between compaction and split 
(Revision 1390781)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java


> RegionServer aborts due to race between compaction and split
> 
>
> Key: HBASE-6679
> URL: https://issues.apache.org/jira/browse/HBASE-6679
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.92.3, 0.94.3, 0.96.0
>
> Attachments: 6679-1.094.patch, 6679-1.patch, 
> rs-crash-parallel-compact-split.log
>
>
> In our nightlies, we have seen RS aborts due to compaction and split racing. 
> Original parent file gets deleted after the compaction, and hence, the 
> daughters don't find the parent data file. The RS kills itself when this 
> happens. Will attach a snippet of the relevant RS logs.



[jira] [Commented] (HBASE-6881) All regionservers are marked offline even there is still one up

2012-09-26 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464378#comment-13464378
 ] 

rajeshbabu commented on HBASE-6881:
---

@Jimmy,
bq.For the removed code, it is not needed, because if 
regionAlreadyInTransition, we already transition the state to offline state in 
the exception handling part.
There is a possibility that the state can be changed to OPENING in handleRegion 
after setting OFFLINE in the exception handling part. That's why we are not 
aborting the master even when the state is not offline/closed. We were able to 
reproduce this scenario while testing HBASE-6438. Please correct me if I am wrong.

> All regionservers are marked offline even there is still one up
> ---
>
> Key: HBASE-6881
> URL: https://issues.apache.org/jira/browse/HBASE-6881
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: trunk-6881.patch
>
>
> {noformat}
> +RegionPlan newPlan = plan;
> +if (!regionAlreadyInTransitionException) {
> +  // Force a new plan and reassign. Will return null if no servers.
> +  newPlan = getRegionPlan(state, plan.getDestination(), true);
> +}
> +if (newPlan == null) {
>this.timeoutMonitor.setAllRegionServersOffline(true);
>LOG.warn("Unable to find a viable location to assign region " +
>  state.getRegion().getRegionNameAsString());
> {noformat}
> Here, when newPlan is null, plan.getDestination() could be up actually.



[jira] [Commented] (HBASE-6679) RegionServer aborts due to race between compaction and split

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464379#comment-13464379
 ] 

Hudson commented on HBASE-6679:
---

Integrated in HBase-0.94 #491 (See 
[https://builds.apache.org/job/HBase-0.94/491/])
HBASE-6679 RegionServer aborts due to race between compaction and split 
(Revision 1390783)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java


> RegionServer aborts due to race between compaction and split
> 
>
> Key: HBASE-6679
> URL: https://issues.apache.org/jira/browse/HBASE-6679
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.92.3, 0.94.3, 0.96.0
>
> Attachments: 6679-1.094.patch, 6679-1.patch, 
> rs-crash-parallel-compact-split.log
>
>
> In our nightlies, we have seen RS aborts due to compaction and split racing. 
> Original parent file gets deleted after the compaction, and hence, the 
> daughters don't find the parent data file. The RS kills itself when this 
> happens. Will attach a snippet of the relevant RS logs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-26 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464350#comment-13464350
 ] 

Phabricator commented on HBASE-6871:


mbautin has commented on the revision "[jira] [HBASE-6871] [89-fb] Block index 
corruption test case and fix".

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java:988 I 
could have thrown AssertionErrors, but I think these exceptions are more 
specific. If the index writer is configured as single-level-only, then the 
operation of writing inline blocks is unsupported. If curInlineChunk is null, 
that means this function has already been called with closing=true, and calling 
it again in this state is illegal.
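
For readers of this thread, a minimal, self-contained sketch of the two guards 
being discussed is below. The singleLevelOnly and curInlineChunk names are taken 
from the comment above, but the class and method bodies are illustrative only 
and are not the D5703 patch.

{code}
import java.util.ArrayList;
import java.util.List;

class BlockIndexWriterSketch {
  private final boolean singleLevelOnly;
  private List<byte[]> curInlineChunk = new ArrayList<>();

  BlockIndexWriterSketch(boolean singleLevelOnly) {
    this.singleLevelOnly = singleLevelOnly;
  }

  void writeInlineBlock() {
    if (singleLevelOnly) {
      // A single-level-only index writer never emits inline (intermediate) blocks.
      throw new UnsupportedOperationException(
          "Inline blocks are not supported for a single-level block index");
    }
    if (curInlineChunk == null) {
      // curInlineChunk is nulled out once the writer has been called with
      // closing=true; calling this method again afterwards is illegal.
      throw new IllegalStateException("Block index writer is already closed");
    }
    // ... serialize curInlineChunk to the output here ...
  }

  void close() {
    curInlineChunk = null; // marks the writer as closed
  }
}
{code}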

REVISION DETAIL
  https://reviews.facebook.net/D5703

BRANCH
  repro_interm_index_bug_v7

To: lhofhansl, Kannan, Liyin, stack, JIRA, mbautin


> HFileBlockIndex Write Error BlockIndex in HFile V2
> --
>
> Key: HBASE-6871
> URL: https://issues.apache.org/jira/browse/HBASE-6871
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.94.1
> Environment: redhat 5u4
>Reporter: Fenng Wang
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: 428a400628ae412ca45d39fce15241fd.hfile, 
> 787179746cc347ce9bb36f1989d17419.hfile, 
> 960a026ca370464f84903ea58114bc75.hfile, 
> d0026fa8d59b4df291718f59dd145aad.hfile, D5703.1.patch, D5703.2.patch, 
> D5703.3.patch, D5703.4.patch, D5703.5.patch, hbase-6871-0.94.patch, 
> ImportHFile.java, test_hfile_block_index.sh
>
>
> After writing some data, compaction and scan operation both failure, the 
> exception message is below:
> 2012-09-18 06:32:26,227 ERROR 
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: 
> Compaction failed 
> regionName=hfile_test,,1347778722498.d220df43fb9d8af4633bd7f547613f9e., 
> storeName=page_info, fileCount=7, fileSize=1.3m (188.0k, 188.0k, 188.0k, 
> 188.0k, 188.0k, 185.8k, 223.3k), priority=9, 
> time=45826250816757428java.io.IOException: Could not reseek 
> StoreFileScanner[HFileScanner for reader 
> reader=hdfs://hadoopdev1.cm6:9000/hbase/hfile_test/d220df43fb9d8af4633bd7f547613f9e/page_info/b0f6118f58de47ad9d87cac438ee0895,
>  compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] 
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], 
> firstKey=http://com.truereligionbrandjeans.www/Womens_Dresses/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/4010.html/page_info:anchor_sig/1347764439449/DeleteColumn,
>  lastKey=http://com.trura.www//page_info:page_type/1347763395089/Put, 
> avgKeyLen=776, avgValueLen=4, entries=12853, length=228611, 
> cur=http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/1347764003865/Put/vlen=1/ts=0]
>  to key 
> http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/OLDEST_TIMESTAMP/Minimum/vlen=0/ts=0
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:178)
> 
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
> 
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:299)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:244)
> 
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)
> 
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:402)
> at 
> org.apache.hadoop.hbase.regionserver.Store.compactStore(Store.java:1570)  
>   
> at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:997) 
>
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1216)
> at 
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:250)
> 
> at 
> java.util.concurrent.ThreadPoolExecu

[jira] [Commented] (HBASE-6702) ResourceChecker refinement

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464332#comment-13464332
 ] 

Hudson commented on HBASE-6702:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #194 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/194/])
HBASE-6702  ResourceChecker refinement (Revision 1390433)

 Result = FAILURE
nkeywal : 
Files : 
* /hbase/trunk/hbase-common/pom.xml
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/IntegrationTests.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/LargeTests.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/MediumTests.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/ResourceCheckerJUnitListener.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/SmallTests.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestBytes.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestLoadTestKVGenerator.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestThreads.java
* /hbase/trunk/hbase-it/pom.xml
* /hbase/trunk/hbase-server/pom.xml
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/IntegrationTests.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/LargeTests.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/MediumTests.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/ResourceCheckerJUnitRule.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/ServerResourceCheckerJUnitListener.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/SmallTests.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestAcidGuarantees.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestClusterBootOrder.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestCompare.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestDrainingServer.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestFSTableDescriptorForceCreation.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestFullLogReconstruction.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestGlobalMemStoreSize.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestHBaseTestingUtility.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestHRegionLocation.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestHServerAddress.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestHServerInfo.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestInfoServers.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestMultiVersions.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestSerialization.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestServerName.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/catalog/TestCatalogTracker.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/catalog/TestCatalogTrackerOnCluster.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/catalog/TestMetaMigrationConvertingToPB.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/catalog/TestMetaReaderEditor.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/catalog/TestMetaReaderEditorNoCluster.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAttributes.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFakeKeyInFilter.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestGet.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTablePool.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTableUtil.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hb

[jira] [Commented] (HBASE-6884) Update documentation on unit tests

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464330#comment-13464330
 ] 

Hudson commented on HBASE-6884:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #194 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/194/])
HBASE-6884 Update documentation on unit tests (Revision 1390687)
HBASE-6884 Update documentation on unit tests (Revision 1390648)

 Result = FAILURE
stack : 
Files : 
* /hbase/trunk/src/docbkx/developer.xml

stack : 
Files : 
* /hbase/trunk/src/docbkx/developer.xml


> Update documentation on unit tests
> --
>
> Key: HBASE-6884
> URL: https://issues.apache.org/jira/browse/HBASE-6884
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 6884-addendum.txt, 6884.txt
>
>
> Points to address:
> - we no longer have JUnit rules in the tests
> - we should document how to run the tests faster.
> - some stuff is not used (running only a category) and should be removed from 
> the doc imho.
> Below the proposal:
> --
> 15.6.2. Unit Tests
> HBase unit tests are subdivided into three categories: small, medium and 
> large, with corresponding JUnit categories: SmallTests, MediumTests, 
> LargeTests. JUnit categories are denoted using java annotations and look like 
> this in your unit test code.
> ...
> @Category(SmallTests.class)
> public class TestHRegionInfo {
>   @Test
>   public void testCreateHRegionInfoName() throws Exception {
> // ...
>   }
> }
> The above example shows how to mark a test as belonging to the small 
> category. HBase uses a patched maven surefire plugin and maven profiles to 
> implement its unit test characterizations. 
> 15.6.2.4. Running tests
> Below we describe how to run the HBase junit categories.
> 15.6.2.4.1. Default: small and medium category tests
> Running
> mvn test
> will execute all small tests in a single JVM (no fork) and then medium tests 
> in a separate JVM for each test instance. Medium tests are NOT executed if 
> there is an error in a small test. Large tests are NOT executed. There is one 
> report for small tests, and one report for medium tests if they are executed.
> 15.6.2.4.2. Running all tests
> Running
> mvn test -P runAllTests
> will execute small tests in a single JVM then medium and large tests in a 
> separate JVM for each test. Medium and large tests are NOT executed if there 
> is an error in a small test. Large tests are NOT executed if there is an 
> error in a small or medium test. There is one report for small tests, and one 
> report for medium and large tests if they are executed
> 15.6.2.4.3. Running a single test or all tests in a package
> To run an individual test, e.g. MyTest, do
> mvn test -P localTests -Dtest=MyTest
> You can also pass multiple, individual tests as a comma-delimited list:
> mvn test -P localTests -Dtest=MyTest1,MyTest2,MyTest3
> You can also pass a package, which will run all tests under the package:
> mvn test -P localTests -Dtest=org.apache.hadoop.hbase.client.*
> The -P localTests will remove the JUnit category effect (without this 
> specific profile, the categories are taken into account). Each JUnit test is 
> executed in a separate JVM (a fork per test class). There is no 
> parallelization when the localTests profile is set. You will see a new message 
> at the end of the report: "[INFO] Tests are skipped". It's harmless.
> 15.6.2.4.4. Running tests faster
> [replace previous chapter]
> By default, mvn test -P runAllTests runs 5 tests in parallel. This can be 
> increased on many developer machines. Consider that you can run 2 tests in 
> parallel per core and that you need about 2 GB of memory per test. Hence, on 
> an 8 core, 24 GB box, the CPUs would allow 16 tests in parallel, but the 
> available memory limits you to about 12.
> The setting is:
> mvn test -P runAllTests -Dsurefire.secondPartThreadCount=12
> To increase the speed, you can also use a ramdisk. You will need 2 GB of 
> memory to run all the tests. You will also need to delete the files between 
> two test runs.
> The typical way to configure a ramdisk on Linux is:
> sudo mkdir /ram2G
> sudo mount -t tmpfs -o size=2048M tmpfs /ram2G
> You can then use it to run all HBase tests with the command:
> mvn test -P runAllTests -Dsurefire.secondPartThreadCount=8 
> -Dtest.build.data.basedirectory=/ram2G

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6885) Typo in the Javadoc for close method of HTableInterface class

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464331#comment-13464331
 ] 

Hudson commented on HBASE-6885:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #194 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/194/])
HBASE-6885 Typo in the Javadoc for close method of HTableInterface class 
(Revision 1390673)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java


> Typo in the Javadoc for close method of HTableInterface class
> -
>
> Key: HBASE-6885
> URL: https://issues.apache.org/jira/browse/HBASE-6885
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.94.1
>Reporter: Jingguo Yao
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: HTableInterface-HBASE-6885.patch
>
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> "help" in "Releases any resources help or pending changes in internal 
> buffers" should be "held".

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5961) New standard HBase code formatter

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464329#comment-13464329
 ] 

Hudson commented on HBASE-5961:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #194 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/194/])
HBASE-5961 New standard HBase code formatter; ADDENDUM -- ADD LICENSE 
(Revision 1390713)
HBASE-5961 New standard HBase code formatter; ADDENDUM -- ADD RAT EXCLUSION FOR 
ECLIPSE FORMATTER FILE (Revision 1390703)

 Result = FAILURE
stack : 
Files : 
* /hbase/trunk/dev-support/hbase_eclipse_formatter.xml
* /hbase/trunk/pom.xml

stack : 
Files : 
* /hbase/trunk/pom.xml


> New standard HBase code formatter
> -
>
> Key: HBASE-5961
> URL: https://issues.apache.org/jira/browse/HBASE-5961
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.96.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 5961-addendum2.txt, 5961-addendum.txt, 
> HBase-Formmatter.xml
>
>
> There is currently no good way of passing out the formatter currently 
> considered the 'standard' in HBase. The standard Apache formatter is actually 
> not very close to what we consider 'good'/'pretty' code. Further, it's not 
> trivial to get a good formatter set up.
> Proposing two things: 
> 1) Add a formatter to the dev tools and call out the formatter usage in the 
> docs.
> 2) Move to a 'better' formatter that is not the standard Apache formatter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-6679) RegionServer aborts due to race between compaction and split

2012-09-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-6679.
--

   Resolution: Fixed
Fix Version/s: 0.96.0
   0.94.3
 Hadoop Flags: Reviewed

Committed to 0.92, 0.94 and trunk.  Thanks for the patch Devaraj.

> RegionServer aborts due to race between compaction and split
> 
>
> Key: HBASE-6679
> URL: https://issues.apache.org/jira/browse/HBASE-6679
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.92.3, 0.94.3, 0.96.0
>
> Attachments: 6679-1.094.patch, 6679-1.patch, 
> rs-crash-parallel-compact-split.log
>
>
> In our nightlies, we have seen RS aborts due to compaction and split racing. 
> Original parent file gets deleted after the compaction, and hence, the 
> daughters don't find the parent data file. The RS kills itself when this 
> happens. Will attach a snippet of the relevant RS logs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6679) RegionServer aborts due to race between compaction and split

2012-09-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6679:
-

Attachment: 6679-1.094.patch

0.94 version

> RegionServer aborts due to race between compaction and split
> 
>
> Key: HBASE-6679
> URL: https://issues.apache.org/jira/browse/HBASE-6679
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.92.3
>
> Attachments: 6679-1.094.patch, 6679-1.patch, 
> rs-crash-parallel-compact-split.log
>
>
> In our nightlies, we have seen RS aborts due to compaction and split racing. 
> Original parent file gets deleted after the compaction, and hence, the 
> daughters don't find the parent data file. The RS kills itself when this 
> happens. Will attach a snippet of the relevant RS logs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6889) Ignore source control files with apache-rat

2012-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464310#comment-13464310
 ] 

Hadoop QA commented on HBASE-6889:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12546771/hbase-6889-mvn-v0.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 hadoop2.0.  The patch compiles against the hadoop 2.0 profile.

-1 javadoc.  The javadoc tool appears to have generated 140 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 6 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2937//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2937//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2937//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2937//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2937//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2937//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2937//console

This message is automatically generated.

> Ignore source control files with apache-rat
> ---
>
> Key: HBASE-6889
> URL: https://issues.apache.org/jira/browse/HBASE-6889
> Project: HBase
>  Issue Type: Bug
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Attachments: hbase-6889-mvn-v0.patch
>
>
> Running 'mvn apache-rat:check' locally causes a failure because it finds the 
> source control files, making it hard to check that you didn't include a file 
> without a source header.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6881) All regionservers are marked offline even there is still one up

2012-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464305#comment-13464305
 ] 

stack commented on HBASE-6881:
--

Thanks for the explanation.

bq. For the removed code, it is not needed, because if 
regionAlreadyInTransition, we already transition the state to offline state in 
the exception handling part.

Is the above comment supposed to relate to this code?

{code}
 if (!hijack && !state.isClosed() && !state.isOffline()) {
-  if (!regionAlreadyInTransitionException ) {
-String msg = "Unexpected state : " + state + " .. Cannot transit it to 
OFFLINE.";
-this.server.abort(msg, new IllegalStateException(msg));
-return -1;
-  } else {
-LOG.debug("Unexpected state : " + state
-+ " but retrying to assign because 
RegionAlreadyInTransitionException.");
-  }
+  String msg = "Unexpected state : " + state + " .. Cannot transit it to 
OFFLINE.";
+  this.server.abort(msg, new IllegalStateException(msg));
+  return -1;
 }
{code}

If so, I'm not sure I follow. The above code is a change which has us always 
abort if the state is not closed or offline. The old code did that EXCEPT in 
the case where regionAlreadyInTransitionException is set. You've removed this 
latter special handling.

Or are you saying that there is no need for this special exception handling 
because we've successfully off-lined the region at #1498 in the patched version 
up on RB at this line: {code}currentState = regionStates.updateRegionState({code}?

I buy the other comments. I see how excluding a server could be a problem. 
This is a necessary fix. I see that we preserve the basic idea of hbase-6384, 
which is to keep the old plan if RegionAlreadyInTransition (and now also if the 
server is not ready).



> All regionservers are marked offline even there is still one up
> ---
>
> Key: HBASE-6881
> URL: https://issues.apache.org/jira/browse/HBASE-6881
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: trunk-6881.patch
>
>
> {noformat}
> +RegionPlan newPlan = plan;
> +if (!regionAlreadyInTransitionException) {
> +  // Force a new plan and reassign. Will return null if no servers.
> +  newPlan = getRegionPlan(state, plan.getDestination(), true);
> +}
> +if (newPlan == null) {
>this.timeoutMonitor.setAllRegionServersOffline(true);
>LOG.warn("Unable to find a viable location to assign region " +
>  state.getRegion().getRegionNameAsString());
> {noformat}
> Here, when newPlan is null, plan.getDestination() could be up actually.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6679) RegionServer aborts due to race between compaction and split

2012-09-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464299#comment-13464299
 ] 

Lars Hofhansl commented on HBASE-6679:
--

I assume 0.94 and trunk have the issue, right?

> RegionServer aborts due to race between compaction and split
> 
>
> Key: HBASE-6679
> URL: https://issues.apache.org/jira/browse/HBASE-6679
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.92.3
>
> Attachments: 6679-1.patch, rs-crash-parallel-compact-split.log
>
>
> In our nightlies, we have seen RS aborts due to compaction and split racing. 
> Original parent file gets deleted after the compaction, and hence, the 
> daughters don't find the parent data file. The RS kills itself when this 
> happens. Will attach a snippet of the relevant RS logs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5071) HFile has a possible cast issue.

2012-09-26 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464300#comment-13464300
 ] 

Chris Trezzo commented on HBASE-5071:
-

It seems like this might not be a problem in HFileV2. Should we just close this 
issue?

> HFile has a possible cast issue.
> 
>
> Key: HBASE-5071
> URL: https://issues.apache.org/jira/browse/HBASE-5071
> Project: HBase
>  Issue Type: Bug
>  Components: HFile, io
>Affects Versions: 0.90.0
>Reporter: Harsh J
>  Labels: hfile
>
> HBASE-3040 introduced this line originally in HFile.Reader#loadFileInfo(...):
> {code}
> int allIndexSize = (int)(this.fileSize - this.trailer.dataIndexOffset - 
> FixedFileTrailer.trailerSize());
> {code}
> Which on trunk today, for HFile v1 is:
> {code}
> int sizeToLoadOnOpen = (int) (fileSize - trailer.getLoadOnOpenDataOffset() -
> trailer.getTrailerSize());
> {code}
> This computed (and casted) integer is then used to build an array of the same 
> size. But if fileSize is very large (>> Integer.MAX_VALUE), then there's an 
> easy chance this can go negative at some point and spew out exceptions such 
> as:
> {code}
> java.lang.NegativeArraySizeException 
> at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readAllIndex(HFile.java:805) 
> at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:832) 
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.loadFileInfo(StoreFile.java:1003)
>  
> at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:382) 
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:438)
>  
> at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:267) 
> at org.apache.hadoop.hbase.regionserver.Store.(Store.java:209) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2088)
>  
> at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:358) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647) 
> {code}
> Did we accidentally limit single region sizes this way?
> (Unsure about HFile v2's structure so far, so do not know if v2 has the same 
> issue.)
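
As an aside for readers, the narrowing cast above can be reproduced with a tiny 
self-contained example; the sizes below are made up and the class is purely 
illustrative.

{code}
public class CastOverflowSketch {
  public static void main(String[] args) {
    long fileSize = 3L * 1024 * 1024 * 1024; // a 3 GB HFile, i.e. larger than Integer.MAX_VALUE
    long loadOnOpenDataOffset = 1024L;       // made-up offset
    long trailerSize = 212L;                 // made-up trailer size
    int sizeToLoadOnOpen = (int) (fileSize - loadOnOpenDataOffset - trailerSize);
    System.out.println(sizeToLoadOnOpen);    // prints a negative value
    // new byte[sizeToLoadOnOpen] would then throw NegativeArraySizeException,
    // matching the stack trace quoted above.
  }
}
{code}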

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5071) HFile has a possible cast issue.

2012-09-26 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464296#comment-13464296
 ] 

Chris Trezzo commented on HBASE-5071:
-

[~mikhail] Do you want to have a look?

> HFile has a possible cast issue.
> 
>
> Key: HBASE-5071
> URL: https://issues.apache.org/jira/browse/HBASE-5071
> Project: HBase
>  Issue Type: Bug
>  Components: HFile, io
>Affects Versions: 0.90.0
>Reporter: Harsh J
>  Labels: hfile
>
> HBASE-3040 introduced this line originally in HFile.Reader#loadFileInfo(...):
> {code}
> int allIndexSize = (int)(this.fileSize - this.trailer.dataIndexOffset - 
> FixedFileTrailer.trailerSize());
> {code}
> Which on trunk today, for HFile v1 is:
> {code}
> int sizeToLoadOnOpen = (int) (fileSize - trailer.getLoadOnOpenDataOffset() -
> trailer.getTrailerSize());
> {code}
> This computed (and casted) integer is then used to build an array of the same 
> size. But if fileSize is very large (>> Integer.MAX_VALUE), then there's an 
> easy chance this can go negative at some point and spew out exceptions such 
> as:
> {code}
> java.lang.NegativeArraySizeException 
> at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readAllIndex(HFile.java:805) 
> at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:832) 
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.loadFileInfo(StoreFile.java:1003)
>  
> at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:382) 
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:438)
>  
> at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:267) 
> at org.apache.hadoop.hbase.regionserver.Store.(Store.java:209) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2088)
>  
> at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:358) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647) 
> {code}
> Did we accidentally limit single region sizes this way?
> (Unsure about HFile v2's structure so far, so do not know if v2 has the same 
> issue.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5071) HFile has a possible cast issue.

2012-09-26 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo updated HBASE-5071:


Component/s: HFile

> HFile has a possible cast issue.
> 
>
> Key: HBASE-5071
> URL: https://issues.apache.org/jira/browse/HBASE-5071
> Project: HBase
>  Issue Type: Bug
>  Components: HFile, io
>Affects Versions: 0.90.0
>Reporter: Harsh J
>  Labels: hfile
>
> HBASE-3040 introduced this line originally in HFile.Reader#loadFileInfo(...):
> {code}
> int allIndexSize = (int)(this.fileSize - this.trailer.dataIndexOffset - 
> FixedFileTrailer.trailerSize());
> {code}
> Which on trunk today, for HFile v1 is:
> {code}
> int sizeToLoadOnOpen = (int) (fileSize - trailer.getLoadOnOpenDataOffset() -
> trailer.getTrailerSize());
> {code}
> This computed (and casted) integer is then used to build an array of the same 
> size. But if fileSize is very large (>> Integer.MAX_VALUE), then there's an 
> easy chance this can go negative at some point and spew out exceptions such 
> as:
> {code}
> java.lang.NegativeArraySizeException 
> at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readAllIndex(HFile.java:805) 
> at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:832) 
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.loadFileInfo(StoreFile.java:1003)
>  
> at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:382) 
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:438)
>  
> at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:267) 
> at org.apache.hadoop.hbase.regionserver.Store.(Store.java:209) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2088)
>  
> at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:358) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661) 
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647) 
> {code}
> Did we accidentally limit single region sizes this way?
> (Unsure about HFile v2's structure so far, so do not know if v2 has the same 
> issue.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6679) RegionServer aborts due to race between compaction and split

2012-09-26 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-6679:
---

Attachment: 6679-1.patch

Patch as per the previous comment.

> RegionServer aborts due to race between compaction and split
> 
>
> Key: HBASE-6679
> URL: https://issues.apache.org/jira/browse/HBASE-6679
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.92.3
>
> Attachments: 6679-1.patch, rs-crash-parallel-compact-split.log
>
>
> In our nightlies, we have seen RS aborts due to compaction and split racing. 
> Original parent file gets deleted after the compaction, and hence, the 
> daughters don't find the parent data file. The RS kills itself when this 
> happens. Will attach a snippet of the relevant RS logs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6611) Forcing region state offline cause double assignment

2012-09-26 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464287#comment-13464287
 ] 

Jimmy Xiang commented on HBASE-6611:


Posted a patch on RB: https://reviews.apache.org/r/7305/.  Please review.

I did some performance testing and found the async zookeeper node offline is a 
big performance win, so it is kept. Without this patch, it took around 290 
seconds to bulk assign 10,339 regions to 4 region servers. With this patch, it 
took around 300 seconds. However, without async zookeeper node offline, it took 
around 400 seconds.

As for force-closing regions, that path is not touched and still works as 
expected.

> Forcing region state offline cause double assignment
> 
>
> Key: HBASE-6611
> URL: https://issues.apache.org/jira/browse/HBASE-6611
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.96.0
>
>
> In assigning a region, the assignment manager forces the region state offline 
> if it is not. This could cause double assignment; for example, if the region 
> is already assigned and in the Open state, you should not just change its 
> state to Offline and assign it again.
> I think this could be the root cause for all double assignments IF the region 
> state is reliable.
> After this loophole is closed, TestHBaseFsck should come up with a different 
> way to create some assignment inconsistencies, for example, by calling a 
> region server to open a region directly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6055) Snapshots in HBase 0.96

2012-09-26 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464283#comment-13464283
 ] 

Jesse Yates commented on HBASE-6055:


{quote}
Another semi-unrelated note... currently we keep full logs files, and the 
restore needs to split them (see the restore code SnapshotLogSplitter, 
https://github.com/matteobertozzi/hbase/blob/snapshot-dev/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/restore/RestoreSnapshotHelper.java#L398)
Can we move this logic at the end of the take snapshot operation and split the 
logs in .snapshot/region/recover.edits?
{quote}

If we move it into the snapshot operation, then that will slow down the overall 
operation and make it more difficult to reason about how long a snapshot 
'should' take. In particular, this becomes difficult because we want to give 
the client firm time bounds, but the log splitting is not time bounded (AFAIK). 
 

An alternative would be to have a background snapshot-log-splitter task that 
just goes through and splits logs for snapshots. It would basically comb through 
the snapshot directory, looking for snapshots. If it finds one it hasn't seen, 
it starts doing the current log splitting on that snapshot (which looks 
basically like the root directory of hbase - less the ROOT and META tables - so 
it should be almost, if not entirely, drop-in usable). When the logs are split, 
we would have to do a little extra checking to make sure that we don't restore 
a snapshot mid-split, or that, if we do, it is handled properly.
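
A rough, self-contained sketch of that background task idea follows. The class 
and method names are invented, it glosses over the mid-split restore check 
mentioned above, and it is not part of any posted patch.

{code}
import java.io.File;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class SnapshotLogSplitterSketch {
  private final File snapshotDir;
  private final Set<String> seen = new HashSet<>();
  private final ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();

  SnapshotLogSplitterSketch(File snapshotDir) {
    this.snapshotDir = snapshotDir;
  }

  void start() {
    // Periodically comb the snapshot directory for snapshots we have not processed yet.
    pool.scheduleWithFixedDelay(this::scanOnce, 0, 60, TimeUnit.SECONDS);
  }

  private void scanOnce() {
    File[] snapshots = snapshotDir.listFiles(File::isDirectory);
    if (snapshots == null) {
      return;
    }
    for (File snapshot : snapshots) {
      if (seen.add(snapshot.getName())) {
        splitLogsFor(snapshot); // stand-in for the existing log-splitting code
      }
    }
  }

  private void splitLogsFor(File snapshot) {
    System.out.println("splitting logs for snapshot " + snapshot.getName());
  }
}
{code}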

> Snapshots in HBase 0.96
> ---
>
> Key: HBASE-6055
> URL: https://issues.apache.org/jira/browse/HBASE-6055
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: hbase-6055, 0.96.0
>
> Attachments: Snapshots in HBase.docx
>
>
> Continuation of HBASE-50 for the current trunk. Since the implementation has 
> drastically changed, opening as a new ticket.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6679) RegionServer aborts due to race between compaction and split

2012-09-26 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464280#comment-13464280
 ] 

Devaraj Das commented on HBASE-6679:


bq. I suppose we could make the reference volatile so all threads catch the 
update.

Yeah, [~stack], I have the same opinion - we can close the issue with a fix that 
makes the reference volatile (and that'd justify my hours of debugging).
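
For context, a minimal sketch of what the volatile fix buys is below; the class 
and field names are hypothetical and this is not the committed HBASE-6679 change.

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class VolatileStoreFilesSketch {
  // Without volatile, the split thread could keep seeing a stale reference after
  // the compaction thread swaps in the post-compaction file list.
  private volatile List<String> storeFiles = Collections.emptyList();

  // Compaction thread: publish the new list with a single volatile write.
  void replaceStoreFiles(List<String> newFiles) {
    storeFiles = Collections.unmodifiableList(new ArrayList<>(newFiles));
  }

  // Split thread: a volatile read is guaranteed to observe the latest published list.
  List<String> currentStoreFiles() {
    return storeFiles;
  }
}
{code}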

bq. But you can't see how the two threads can run concurrently? (not to say it 
not possible)

At least from the regionserver logs it is evident that this didn't happen. From 
the code, the compactions and splits happen in executors, where the split 
happens in an executor with a thread pool of at most one thread. Once a 
compaction completes, the executor fires off a request for split (that may or 
may not happen based on checks done within the request handler). The compaction 
executor doesn't wait for the split to complete, and so technically, it's 
possible that split & compaction could be running in parallel. But at a finer 
granularity, there are locks being taken at different points in split/compact 
(and the important places are protected with HRegion.lock). There are also 
checks for things like HRegion.writeState that are checked/set at places in 
compaction/split.

So IMHO things are wired together okay (but yeah, usual disclaimer - may have 
missed something :-) )

bq. Good on you Deva.

You too :-)

> RegionServer aborts due to race between compaction and split
> 
>
> Key: HBASE-6679
> URL: https://issues.apache.org/jira/browse/HBASE-6679
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.92.3
>
> Attachments: rs-crash-parallel-compact-split.log
>
>
> In our nightlies, we have seen RS aborts due to compaction and split racing. 
> Original parent file gets deleted after the compaction, and hence, the 
> daughters don't find the parent data file. The RS kills itself when this 
> happens. Will attach a snippet of the relevant RS logs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6055) Snapshots in HBase 0.96

2012-09-26 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464276#comment-13464276
 ] 

Jesse Yates commented on HBASE-6055:


bq. When you're talking about hfiles, you are referring to the log files right? 
I've a bit of confusion reading your comment, because the log files are 
sequence files. Anyway...

Oops, typing tired. Yeah, I meant hlogs the entire time.

{quote}
The logs in /hbase/.logs are split (new files are created in 
region/recover.edits), and if you look at HRegion.replayRecoveredEditsIfAny(), 
the content of recover.edits is removed as soon as the edits are applied. 
Removed, not archived. And this means that as soon as the table goes online, 
the snapshot doesn't have a way to read those files.

but as you've said, the original (full) log is still available during split, 
but moved to the archive (.oldlogs) as soon as the split is done.

This means that if you see files in recover.edits, you should have the full 
logs in /hbase/.logs folder. And you can keep a reference to them, as you do 
for the online snapshot
{quote}

Keeping all the logs in .oldlogs as well as .logs will cover a LOT more hlogs 
than are necessary to restore the table. Better would be to just reference 
all the files in the recovered.edits directory, but I worry that there will 
probably be some race conditions (especially in cases where a server is brought 
up and down multiple times). Easier just seems to be to remove the log file 
when all the recovered.edits are finished. For instance, we could use the 
FileLink stuff Matteo is working on to ref-count that hlog and only delete it 
when the last 'reference' (or file derived from that hlog) is gone from the 
recovered.edits directory.
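
A minimal, self-contained sketch of that ref-counting idea is below; the names 
are hypothetical and this is not the FileLink implementation Matteo is working on.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

class HLogRefCounterSketch {
  private final Map<String, AtomicInteger> refs = new ConcurrentHashMap<>();

  // Called once per recovered.edits file derived from the given hlog.
  void addReference(String hlogPath) {
    refs.computeIfAbsent(hlogPath, p -> new AtomicInteger()).incrementAndGet();
  }

  // Called when one derived recovered.edits file has been fully applied and removed.
  void release(String hlogPath) {
    AtomicInteger count = refs.get(hlogPath);
    if (count != null && count.decrementAndGet() == 0) {
      refs.remove(hlogPath);
      deleteHLog(hlogPath); // safe: no snapshot still needs this hlog
    }
  }

  private void deleteHLog(String hlogPath) {
    System.out.println("deleting " + hlogPath); // stand-in for the real deletion
  }
}
{code}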

> Snapshots in HBase 0.96
> ---
>
> Key: HBASE-6055
> URL: https://issues.apache.org/jira/browse/HBASE-6055
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: hbase-6055, 0.96.0
>
> Attachments: Snapshots in HBase.docx
>
>
> Continuation of HBASE-50 for the current trunk. Since the implementation has 
> drastically changed, opening as a new ticket.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6889) Ignore source control files with apache-rat

2012-09-26 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HBASE-6889:
---

Status: Patch Available  (was: Open)

> Ignore source control files with apache-rat
> ---
>
> Key: HBASE-6889
> URL: https://issues.apache.org/jira/browse/HBASE-6889
> Project: HBase
>  Issue Type: Bug
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Attachments: hbase-6889-mvn-v0.patch
>
>
> Running 'mvn apache-rat:check' locally causes a failure because it finds the 
> source control files, making it hard to check that you didn't include a file 
> without a source header.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-6889) Ignore source control files with apache-rat

2012-09-26 Thread Jesse Yates (JIRA)
Jesse Yates created HBASE-6889:
--

 Summary: Ignore source control files with apache-rat
 Key: HBASE-6889
 URL: https://issues.apache.org/jira/browse/HBASE-6889
 Project: HBase
  Issue Type: Bug
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: hbase-6889-mvn-v0.patch

Running 'mvn apache-rat:check' locally causes a failure because it finds the 
source control files, making it hard to check that you didn't include a file 
without a source header.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6889) Ignore source control files with apache-rat

2012-09-26 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HBASE-6889:
---

Attachment: hbase-6889-mvn-v0.patch

Attaching patch that excludes .git and .svn folders from rat:check. Works 
locally.

> Ignore source control files with apache-rat
> ---
>
> Key: HBASE-6889
> URL: https://issues.apache.org/jira/browse/HBASE-6889
> Project: HBase
>  Issue Type: Bug
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Attachments: hbase-6889-mvn-v0.patch
>
>
> Running 'mvn apache-rat:check' locally causes a failure because it finds the 
> source control files, making it hard to check that you didn't include a file 
> without a source header.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5961) New standard HBase code formatter

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464270#comment-13464270
 ] 

Hudson commented on HBASE-5961:
---

Integrated in HBase-TRUNK #3381 (See 
[https://builds.apache.org/job/HBase-TRUNK/3381/])
HBASE-5961 New standard HBase code formatter; ADDENDUM -- ADD LICENSE 
(Revision 1390713)
HBASE-5961 New standard HBase code formatter; ADDENDUM -- ADD RAT EXCLUSION FOR 
ECLIPSE FORMATTER FILE (Revision 1390703)

 Result = FAILURE
stack : 
Files : 
* /hbase/trunk/dev-support/hbase_eclipse_formatter.xml
* /hbase/trunk/pom.xml

stack : 
Files : 
* /hbase/trunk/pom.xml


> New standard HBase code formatter
> -
>
> Key: HBASE-5961
> URL: https://issues.apache.org/jira/browse/HBASE-5961
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.96.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 5961-addendum2.txt, 5961-addendum.txt, 
> HBase-Formmatter.xml
>
>
> There is currently no good way of passing out the formatter currently 
> considered the 'standard' in HBase. The standard Apache formatter is actually 
> not very close to what we consider 'good'/'pretty' code. Further, it's not 
> trivial to get a good formatter set up.
> Proposing two things: 
> 1) Add a formatter to the dev tools and call out the formatter usage in the 
> docs.
> 2) Move to a 'better' formatter that is not the standard Apache formatter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6885) Typo in the Javadoc for close method of HTableInterface class

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464272#comment-13464272
 ] 

Hudson commented on HBASE-6885:
---

Integrated in HBase-TRUNK #3381 (See 
[https://builds.apache.org/job/HBase-TRUNK/3381/])
HBASE-6885 Typo in the Javadoc for close method of HTableInterface class 
(Revision 1390673)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java


> Typo in the Javadoc for close method of HTableInterface class
> -
>
> Key: HBASE-6885
> URL: https://issues.apache.org/jira/browse/HBASE-6885
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.94.1
>Reporter: Jingguo Yao
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: HTableInterface-HBASE-6885.patch
>
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> "help" in "Releases any resources help or pending changes in internal 
> buffers" should be "held".

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6884) Update documentation on unit tests

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464271#comment-13464271
 ] 

Hudson commented on HBASE-6884:
---

Integrated in HBase-TRUNK #3381 (See 
[https://builds.apache.org/job/HBase-TRUNK/3381/])
HBASE-6884 Update documentation on unit tests (Revision 1390687)
HBASE-6884 Update documentation on unit tests (Revision 1390648)

 Result = FAILURE
stack : 
Files : 
* /hbase/trunk/src/docbkx/developer.xml

stack : 
Files : 
* /hbase/trunk/src/docbkx/developer.xml


> Update documentation on unit tests
> --
>
> Key: HBASE-6884
> URL: https://issues.apache.org/jira/browse/HBASE-6884
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 6884-addendum.txt, 6884.txt
>
>
> Points to address:
> - we no longer have JUnit rules in the tests
> - we should document how to run the tests faster.
> - some stuff is not used (running only a category) and should be removed from 
> the doc imho.
> Below the proposal:
> --
> 15.6.2. Unit Tests
> HBase unit tests are subdivided into three categories: small, medium and 
> large, with corresponding JUnit categories: SmallTests, MediumTests, 
> LargeTests. JUnit categories are denoted using java annotations and look like 
> this in your unit test code.
> ...
> @Category(SmallTests.class)
> public class TestHRegionInfo {
>   @Test
>   public void testCreateHRegionInfoName() throws Exception {
> // ...
>   }
> }
> The above example shows how to mark a test as belonging to the small 
> category. HBase uses a patched maven surefire plugin and maven profiles to 
> implement its unit test characterizations. 
> 15.6.2.4. Running tests
> Below we describe how to run the HBase junit categories.
> 15.6.2.4.1. Default: small and medium category tests
> Running
> mvn test
> will execute all small tests in a single JVM (no fork) and then medium tests 
> in a separate JVM for each test instance. Medium tests are NOT executed if 
> there is an error in a small test. Large tests are NOT executed. There is one 
> report for small tests, and one report for medium tests if they are executed.
> 15.6.2.4.2. Running all tests
> Running
> mvn test -P runAllTests
> will execute small tests in a single JVM then medium and large tests in a 
> separate JVM for each test. Medium and large tests are NOT executed if there 
> is an error in a small test. Large tests are NOT executed if there is an 
> error in a small or medium test. There is one report for small tests, and one 
> report for medium and large tests if they are executed
> 15.6.2.4.3. Running a single test or all tests in a package
> To run an individual test, e.g. MyTest, do
> mvn test -P localTests -Dtest=MyTest
> You can also pass multiple, individual tests as a comma-delimited list:
> mvn test -P localTests -Dtest=MyTest1,MyTest2,MyTest3
> You can also pass a package, which will run all tests under the package:
> mvn test -P localTests -Dtest=org.apache.hadoop.hbase.client.*
> The -P localTests will remove the JUnit category effect (without this 
> specific profile, the categories are taken into account). Each JUnit test is 
> executed in a separate JVM (a fork per test class). There is no 
> parallelization when the localTests profile is set. You will see a new message 
> at the end of the report: "[INFO] Tests are skipped". It's harmless.
> 15.6.2.4.4. Running tests faster
> [replace previous chapter]
> By default, mvn test -P runAllTests runs 5 tests in parallel. This can be 
> increased on many developer machines. Consider that you can run 2 tests in 
> parallel per core and that you need about 2 GB of memory per test. Hence, on 
> an 8 core, 24 GB box, the CPUs would allow 16 tests in parallel, but the 
> available memory limits you to about 12.
> The setting is:
> mvn test -P runAllTests -Dsurefire.secondPartThreadCount=12
> To increase the speed, you can also use a ramdisk. You will need 2 GB of 
> memory to run all the tests. You will also need to delete the files between 
> two test runs.
> The typical way to configure a ramdisk on Linux is:
> sudo mkdir /ram2G
> sudo mount -t tmpfs -o size=2048M tmpfs /ram2G
> You can then use it to run all HBase tests with the command:
> mvn test -P runAllTests -Dsurefire.secondPartThreadCount=8 
> -Dtest.build.data.basedirectory=/ram2G

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-26 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464265#comment-13464265
 ] 

Phabricator commented on HBASE-6871:


lhofhansl has commented on the revision "[jira] [HBASE-6871] [89-fb] Block 
index corruption test case and fix".

  Can't pretend to fully understand what's going on in the test, but if it 
reproduces the problem, that's perfect. Thanks for doing this, Mikhail.

  The fix is different from the one proposed on the issue.

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java:988 Are 
these expected exceptional cases?
  If not, should we have asserts instead?

REVISION DETAIL
  https://reviews.facebook.net/D5703

BRANCH
  repro_interm_index_bug_v7

To: lhofhansl, Kannan, Liyin, stack, JIRA, mbautin



[jira] [Commented] (HBASE-6881) All regionservers are marked offline even there is still one up

2012-09-26 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464262#comment-13464262
 ] 

Jimmy Xiang commented on HBASE-6881:


Thanks for the review.  I put it on RB for easy review:  
https://reviews.apache.org/r/7303/

For the removed code, it is not needed because, if regionAlreadyInTransition, 
we already transition the region to the offline state in the exception 
handling part.

For the final param, sure, I will use a local one. The state changes because we 
transition the region state in this method.

As to the comment: we don't exclude the server of the original plan. Even if we 
force a new plan, it is possible to get the original server back since the 
server is randomly selected. If there is only one region server, as in most 
unit tests, the new plan will be the same as the existing plan. The reason we 
don't want to exclude the original server is that it could be the only server 
up at that time (as in most unit tests). If we exclude it, newPlan will be null 
and all servers will be marked offline. If the region is ROOT, that leads to 
HBASE-6880 and a hanging unit test.





> All regionservers are marked offline even there is still one up
> ---
>
> Key: HBASE-6881
> URL: https://issues.apache.org/jira/browse/HBASE-6881
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: trunk-6881.patch
>
>
> {noformat}
> +RegionPlan newPlan = plan;
> +if (!regionAlreadyInTransitionException) {
> +  // Force a new plan and reassign. Will return null if no servers.
> +  newPlan = getRegionPlan(state, plan.getDestination(), true);
> +}
> +if (newPlan == null) {
>this.timeoutMonitor.setAllRegionServersOffline(true);
>LOG.warn("Unable to find a viable location to assign region " +
>  state.getRegion().getRegionNameAsString());
> {noformat}
> Here, when newPlan is null, plan.getDestination() could be up actually.



[jira] [Updated] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-26 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HBASE-6871:
---

Attachment: D5703.5.patch

mbautin updated the revision "[jira] [HBASE-6871] [89-fb] Block index 
corruption test case and fix".
Reviewers: lhofhansl, Kannan, Liyin, stack, JIRA

  Adding the bugfix.

REVISION DETAIL
  https://reviews.facebook.net/D5703

AFFECTED FILES
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
  
src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java

To: lhofhansl, Kannan, Liyin, stack, JIRA, mbautin



[jira] [Updated] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-26 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HBASE-6871:
---

Attachment: D5703.4.patch

mbautin updated the revision "[jira] [HBASE-6871] [89-fb] Test case to 
reproduce block index corruption".
Reviewers: lhofhansl, Kannan, Liyin, stack, JIRA

  Fixing confusing loop

REVISION DETAIL
  https://reviews.facebook.net/D5703

AFFECTED FILES
  
src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java

To: lhofhansl, Kannan, Liyin, stack, JIRA, mbautin



[jira] [Updated] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-26 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HBASE-6871:
---

Attachment: D5703.3.patch

mbautin updated the revision "[jira] [HBASE-6871] [89-fb] Test case to 
reproduce block index corruption".
Reviewers: lhofhansl, Kannan, Liyin, stack, JIRA

  Fix the comment

REVISION DETAIL
  https://reviews.facebook.net/D5703

AFFECTED FILES
  
src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java

To: lhofhansl, Kannan, Liyin, stack, JIRA, mbautin



[jira] [Updated] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-26 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HBASE-6871:
---

Attachment: D5703.2.patch

mbautin updated the revision "[jira] [HBASE-6871] [89-fb] Test case to 
reproduce block index corruption".
Reviewers: lhofhansl, Kannan, Liyin, stack, JIRA

  Addressing Michael's comments.

REVISION DETAIL
  https://reviews.facebook.net/D5703

AFFECTED FILES
  
src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java

To: lhofhansl, Kannan, Liyin, stack, JIRA, mbautin



[jira] [Commented] (HBASE-6881) All regionservers are marked offline even there is still one up

2012-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464240#comment-13464240
 ] 

stack commented on HBASE-6881:
--

This patch seems to be undoing hbase-6438?  It's ok to remove this bit added by 
hbase-6438?

{code}
-  if (!regionAlreadyInTransitionException ) {
-String msg = "Unexpected state : " + state + " .. Cannot transit it to 
OFFLINE.";
-this.server.abort(msg, new IllegalStateException(msg));
-return -1;
-  } else {
-LOG.debug("Unexpected state : " + state
-+ " but retrying to assign because 
RegionAlreadyInTransitionException.");
-  }
{code}

I suppose you are tightening up our state handling but the above seemed like a 
legit exception over in hbase-6438?


Why change the final on the param?

{code}
-  private void assign(final HRegionInfo region, final RegionState state,
+  private void assign(final HRegionInfo region, RegionState state,
{code}

You update state inside the method? You have to?  Can you make a local variable 
instead and leave the passed param final? Might make it easier to read?

Here, what does the comment mean?

{code}
+  // The new plan could be the same as the existing plan.
+  newPlan = getRegionPlan(state, true);
{code}

We could get the same plan again?  Is that ok?  It could be going back to the 
same server again since we don't pass the old destination server from the 
previous plan?  Is that OK?





[jira] [Commented] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-26 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464208#comment-13464208
 ] 

Phabricator commented on HBASE-6871:


stack has commented on the revision "[jira] [HBASE-6871] [89-fb] Test case to 
reproduce block index corruption".

  Nice test Mikhail.   You might consider explaining in a comment why 7 kvs of 
the type in the test bring on the issue (unless you think it's fine just 
pointing at the issue).

REVISION DETAIL
  https://reviews.facebook.net/D5703

BRANCH
  repro_interm_index_bug

To: lhofhansl, Kannan, Liyin, stack, JIRA, mbautin



[jira] [Commented] (HBASE-3896) Make AssignmentManager standalone testable by having its constructor take Interfaces rather than a CatalogTracker and a ServerManager

2012-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464199#comment-13464199
 ] 

stack commented on HBASE-3896:
--

[~cody.mar...@gmail.com] Look for other issues w/ the noob label.  Thanks for 
your work getting this issue closed.

> Make AssignmentManager standalone testable by having its constructor take 
> Interfaces rather than a CatalogTracker and a ServerManager
> -
>
> Key: HBASE-3896
> URL: https://issues.apache.org/jira/browse/HBASE-3896
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: Cody Marcel
>
> If we could stand up an instance of AssignmentManager, a core fat class that 
> has a bunch of critical logic managing state transitions, then it'd be easier 
> writing unit tests around its logic.  Currently it's hard because it takes a 
> ServerManager and a CatalogTracker, but a little bit of work could turn these 
> into Interfaces.  SM looks easy to do.  Changing CT into an Interface instead 
> might ripple a little through the code base but it'd probably be well worth 
> it.



[jira] [Updated] (HBASE-6888) HBase scripts ignore any HBASE_OPTS set in the environment

2012-09-26 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-6888:
--

Fix Version/s: 0.96.0
 Assignee: Aditya Kishore
Affects Version/s: 0.96.0
   0.94.0
   Status: Patch Available  (was: Open)

Submitting patch for trunk.

> HBase scripts ignore any HBASE_OPTS set in the environment
> --
>
> Key: HBASE-6888
> URL: https://issues.apache.org/jira/browse/HBASE-6888
> Project: HBase
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.94.0, 0.96.0
>Reporter: Aditya Kishore
>Assignee: Aditya Kishore
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: HBASE-6888_trunk.patch
>
>
> hbase-env.sh, which is sourced by hbase-config.sh (itself eventually sourced 
> by the main 'hbase' script), defines HBASE_OPTS from scratch, ignoring any 
> previous value set in the environment.
> This prevents passing additional JVM parameters to HBase programs 
> (shell, hbck, etc.) launched through these scripts.



[jira] [Updated] (HBASE-6888) HBase scripts ignore any HBASE_OPTS set in the environment

2012-09-26 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-6888:
--

Attachment: HBASE-6888_trunk.patch



[jira] [Created] (HBASE-6888) HBase scripts ignore any HBASE_OPTS set in the environment

2012-09-26 Thread Aditya Kishore (JIRA)
Aditya Kishore created HBASE-6888:
-

 Summary: HBase scripts ignore any HBASE_OPTS set in the environment
 Key: HBASE-6888
 URL: https://issues.apache.org/jira/browse/HBASE-6888
 Project: HBase
  Issue Type: Bug
  Components: scripts
Reporter: Aditya Kishore
Priority: Minor
 Attachments: HBASE-6888_trunk.patch

hbase-env.sh, which is sourced by hbase-config.sh (itself eventually sourced by 
the main 'hbase' script), defines HBASE_OPTS from scratch, ignoring any 
previous value set in the environment.

This prevents passing additional JVM parameters to HBase programs (shell, 
hbck, etc.) launched through these scripts.
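
A hedged sketch of the kind of hbase-env.sh change that would address this 
(illustrative only, not the attached patch; the GC flag is just a placeholder 
for whatever defaults the script sets):

{code}
# Before: any HBASE_OPTS exported by the caller is silently discarded
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"

# After: append the defaults to whatever the environment already provides
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC"
{code}

With the second form, something like HBASE_OPTS="-Xmx4g" bin/hbase shell would 
reach the JVM instead of being overwritten.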



[jira] [Updated] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-26 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6871:
-

Fix Version/s: 0.96.0
   0.94.3


[jira] [Commented] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-26 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464150#comment-13464150
 ] 

Phabricator commented on HBASE-6871:


Kannan has accepted the revision "[jira] [HBASE-6871] [89-fb] Test case to 
reproduce block index corruption".

  nice test!

REVISION DETAIL
  https://reviews.facebook.net/D5703

BRANCH
  repro_interm_index_bug

To: lhofhansl, Kannan, Liyin, stack, JIRA, mbautin



[jira] [Resolved] (HBASE-3896) Make AssignmentManager standalone testable by having its constructor take Interfaces rather than a CatalogTracker and a ServerManager

2012-09-26 Thread Cody Marcel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cody Marcel resolved HBASE-3896.


Resolution: Won't Fix



[jira] [Work stopped] (HBASE-6886) Extract Interface from ServerManager

2012-09-26 Thread Cody Marcel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-6886 stopped by Cody Marcel.

> Extract Interface from ServerManager
> 
>
> Key: HBASE-6886
> URL: https://issues.apache.org/jira/browse/HBASE-6886
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Cody Marcel
>Assignee: Cody Marcel
>  Labels: noob
>
> Making a subtask for ServerManager to keep changelists smaller



[jira] [Resolved] (HBASE-6886) Extract Interface from ServerManager

2012-09-26 Thread Cody Marcel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cody Marcel resolved HBASE-6886.


Resolution: Won't Fix

see parent comments



[jira] [Work stopped] (HBASE-3896) Make AssignmentManager standalone testable by having its constructor take Interfaces rather than a CatalogTracker and a ServerManager

2012-09-26 Thread Cody Marcel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-3896 stopped by Cody Marcel.



[jira] [Commented] (HBASE-3896) Make AssignmentManager standalone testable by having its constructor take Interfaces rather than a CatalogTracker and a ServerManager

2012-09-26 Thread Cody Marcel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464144#comment-13464144
 ] 

Cody Marcel commented on HBASE-3896:


I still think it would be an overall cleaner solution to have interfaces and 
true mock implementations that can be reused elsewhere, but I don't have strong 
opinions on it. Any advantage would be minimal at best. I was mainly looking 
for something easy to get my feet wet. I will close this.



[jira] [Updated] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-26 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HBASE-6871:
---

Attachment: D5703.1.patch

mbautin requested code review of "[jira] [HBASE-6871] [89-fb] Test case to 
reproduce block index corruption".
Reviewers: lhofhansl, Kannan, Liyin, stack, JIRA

  A small test case to reproduce an incorrect HFile generated when an inline 
index chunk is promoted to root chunk under some circumstances.

TEST PLAN
  Run test, ensure that it fails without the fix

REVISION DETAIL
  https://reviews.facebook.net/D5703

AFFECTED FILES
  
src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java

MANAGE HERALD DIFFERENTIAL RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/13389/

To: lhofhansl, Kannan, Liyin, stack, JIRA, mbautin



[jira] [Assigned] (HBASE-6788) Convert AuthenticationProtocol to protocol buffer service

2012-09-26 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling reassigned HBASE-6788:


Assignee: Gary Helmling

> Convert AuthenticationProtocol to protocol buffer service
> -
>
> Key: HBASE-6788
> URL: https://issues.apache.org/jira/browse/HBASE-6788
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Gary Helmling
>Assignee: Gary Helmling
>Priority: Blocker
> Fix For: 0.96.0
>
>
> With coprocessor endpoints now exposed as protobuf defined services, we 
> should convert over all of our built-in endpoints to PB services.
> AccessControllerProtocol was converted as part of HBASE-5448, but the 
> authentication token provider still needs to be changed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-26 Thread Mikhail Bautin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464138#comment-13464138
 ] 

Mikhail Bautin commented on HBASE-6871:
---

Actually, the non-root index format is more space-efficient than the root index 
format, but this bug appears to happen when keys are sufficiently large (such as 
the long URLs in this case) to push both of these sizes beyond the chunk size at 
the same time. I have a unit test that reproduces this and will upload a patch 
soon.

> HFileBlockIndex Write Error BlockIndex in HFile V2
> --
>
> Key: HBASE-6871
> URL: https://issues.apache.org/jira/browse/HBASE-6871
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.94.1
> Environment: redhat 5u4
>Reporter: Fenng Wang
>Priority: Critical
> Attachments: 428a400628ae412ca45d39fce15241fd.hfile, 
> 787179746cc347ce9bb36f1989d17419.hfile, 
> 960a026ca370464f84903ea58114bc75.hfile, 
> d0026fa8d59b4df291718f59dd145aad.hfile, hbase-6871-0.94.patch, 
> ImportHFile.java, test_hfile_block_index.sh
>
>
> After writing some data, compaction and scan operation both failure, the 
> exception message is below:
> 2012-09-18 06:32:26,227 ERROR 
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: 
> Compaction failed 
> regionName=hfile_test,,1347778722498.d220df43fb9d8af4633bd7f547613f9e., 
> storeName=page_info, fileCount=7, fileSize=1.3m (188.0k, 188.0k, 188.0k, 
> 188.0k, 188.0k, 185.8k, 223.3k), priority=9, 
> time=45826250816757428java.io.IOException: Could not reseek 
> StoreFileScanner[HFileScanner for reader 
> reader=hdfs://hadoopdev1.cm6:9000/hbase/hfile_test/d220df43fb9d8af4633bd7f547613f9e/page_info/b0f6118f58de47ad9d87cac438ee0895,
>  compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] 
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], 
> firstKey=http://com.truereligionbrandjeans.www/Womens_Dresses/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/4010.html/page_info:anchor_sig/1347764439449/DeleteColumn,
>  lastKey=http://com.trura.www//page_info:page_type/1347763395089/Put, 
> avgKeyLen=776, avgValueLen=4, entries=12853, length=228611, 
> cur=http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/1347764003865/Put/vlen=1/ts=0]
>  to key 
> http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/OLDEST_TIMESTAMP/Minimum/vlen=0/ts=0
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:178)
> 
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
> 
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:299)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:244)
> 
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)
> 
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:402)
> at 
> org.apache.hadoop.hbase.regionserver.Store.compactStore(Store.java:1570)  
>   
> at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:997) 
>
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1216)
> at 
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:250)
> 
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> 
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: Expected block type LEAF_INDEX, but got 
> INTERMEDIATE_INDEX: blockType=INTERMEDIATE_INDEX, 
> onDiskSizeWithoutHeader=8514, uncompressedSizeWithoutHeader=131837, 
> prevBlockOffset=-1, 
> dataBeginsWith=\x00\x00\x00\x9B\x00\x00\x00

[jira] [Commented] (HBASE-3896) Make AssignmentManager standalone testable by having its constructor take Interfaces rather than a CatalogTracker and a ServerManager

2012-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464141#comment-13464141
 ] 

stack commented on HBASE-3896:
--

bq. Mockito already can just mock those out (eg. ServerManager manager = 
Mockito.mock(ServerManager.class)), so another interface isn't really all that 
necessary.

That is true (I think it is done this way in TestAM).

[~cody.mar...@gmail.com] I think we should close this issue given Jesse's 
reasoning.  This issue was filed in May of 2011 by me when I probably made my 
first attempt at a standalone AM and failed, thinking I needed SM and CT to be 
Interfaces.  Later in the year, I wrote the first TestAssignmentManager 
implementation, which stands up an AM w/o its Master wrapping using Mockito.  
Therein I did the trick Jesse suggests of getting around the need for an 
Interface by doing ServerManager manager = Mockito.mock(ServerManager.class).  
I think I should have closed this issue at that time, having done a workaround 
(and Jesse makes a good argument that making Interfaces of SM and CT won't help 
elsewhere).  What do you think?
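
For context, the trick looks roughly like this (a sketch only; the 
AssignmentManager constructor arguments shown are placeholders, not the real 
signature in trunk, see TestAssignmentManager for the actual setup):

{code}
// Mock the collaborators directly instead of carving out new Interfaces.
Server server = Mockito.mock(Server.class);
ServerManager serverManager = Mockito.mock(ServerManager.class);
CatalogTracker catalogTracker = Mockito.mock(CatalogTracker.class);
Mockito.when(server.getConfiguration()).thenReturn(HBaseConfiguration.create());

// Stand up an AM without a running Master around it; the trailing nulls stand
// in for the balancer/executor arguments and are illustrative only.
AssignmentManager am =
    new AssignmentManager(server, serverManager, catalogTracker, null, null);
{code}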

> Make AssignmentManager standalone testable by having its constructor take 
> Interfaces rather than a CatalogTracker and a ServerManager
> -
>
> Key: HBASE-3896
> URL: https://issues.apache.org/jira/browse/HBASE-3896
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: Cody Marcel
>
> If we could stand up an instance of AssignmentManager, a core fat class that 
> has a bunch of critical logic managing state transitions, then it'd be easier 
> writing unit tests around its logic.  Currently it's hard because it takes a 
> ServerManager and a CatalogTracker, but a little bit of work could turn these 
> into Interfaces.  SM looks easy to do.  Changing CT into an Interface instead 
> might ripple a little through the code base but it'd probably be well worth 
> it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5961) New standard HBase code formatter

2012-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464131#comment-13464131
 ] 

stack commented on HBASE-5961:
--

Added addendum2

> New standard HBase code formatter
> -
>
> Key: HBASE-5961
> URL: https://issues.apache.org/jira/browse/HBASE-5961
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.96.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 5961-addendum2.txt, 5961-addendum.txt, 
> HBase-Formmatter.xml
>
>
> There is currently no good way of distributing the formatter that is currently 
> the 'standard' in HBase. The standard Apache formatter is actually not very 
> close to what we consider 'good'/'pretty' code. Further, it's not trivial to 
> get a good formatter set up.
> Proposing two things: 
> 1) Add a formatter to the dev tools and call out its usage in the docs
> 2) Move to a 'better' formatter that is not the standard Apache formatter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5961) New standard HBase code formatter

2012-09-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-5961:
-

Attachment: 5961-addendum2.txt

Add license and remove the exclusion as per Cody's +1

> New standard HBase code formatter
> -
>
> Key: HBASE-5961
> URL: https://issues.apache.org/jira/browse/HBASE-5961
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.96.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 5961-addendum2.txt, 5961-addendum.txt, 
> HBase-Formmatter.xml
>
>
> There is currently no good way of distributing the formatter that is currently 
> the 'standard' in HBase. The standard Apache formatter is actually not very 
> close to what we consider 'good'/'pretty' code. Further, it's not trivial to 
> get a good formatter set up.
> Proposing two things: 
> 1) Add a formatter to the dev tools and call out its usage in the docs
> 2) Move to a 'better' formatter that is not the standard Apache formatter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5961) New standard HBase code formatter

2012-09-26 Thread Cody Marcel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464124#comment-13464124
 ] 

Cody Marcel commented on HBASE-5961:


+1 to adding the licences.

> New standard HBase code formatter
> -
>
> Key: HBASE-5961
> URL: https://issues.apache.org/jira/browse/HBASE-5961
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.96.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 5961-addendum.txt, HBase-Formmatter.xml
>
>
> There is currently no good way of distributing the formatter that is currently 
> the 'standard' in HBase. The standard Apache formatter is actually not very 
> close to what we consider 'good'/'pretty' code. Further, it's not trivial to 
> get a good formatter set up.
> Proposing two things: 
> 1) Add a formatter to the dev tools and call out its usage in the docs
> 2) Move to a 'better' formatter that is not the standard Apache formatter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5961) New standard HBase code formatter

2012-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464118#comment-13464118
 ] 

stack commented on HBASE-5961:
--

Committed exclusion addendum (I suppose I could equally well have added the 
Apache license...)

> New standard HBase code formatter
> -
>
> Key: HBASE-5961
> URL: https://issues.apache.org/jira/browse/HBASE-5961
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.96.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 5961-addendum.txt, HBase-Formmatter.xml
>
>
> There is currently no good way of distributing the formatter that is currently 
> the 'standard' in HBase. The standard Apache formatter is actually not very 
> close to what we consider 'good'/'pretty' code. Further, it's not trivial to 
> get a good formatter set up.
> Proposing two things: 
> 1) Add a formatter to the dev tools and call out its usage in the docs
> 2) Move to a 'better' formatter that is not the standard Apache formatter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-3896) Make AssignmentManager standalone testable by having its constructor take Interfaces rather than a CatalogTracker and a ServerManager

2012-09-26 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464117#comment-13464117
 ] 

Jesse Yates commented on HBASE-3896:


bq. Other implementations would be mocks that implement the SM and CT 
Interfaces?

Mockito already can just mock those out (eg. ServerManager manager = 
Mockito.mock(ServerManager.class)), so another interface isn't really all that 
necessary. 

Just my $0.02 :)

> Make AssignmentManager standalone testable by having its constructor take 
> Interfaces rather than a CatalogTracker and a ServerManager
> -
>
> Key: HBASE-3896
> URL: https://issues.apache.org/jira/browse/HBASE-3896
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: Cody Marcel
>
> If we could stand up an instance of AssignmentManager, a core fat class that 
> has a bunch of critical logic managing state transitions, then it'd be easier 
> writing unit tests around its logic.  Currently it's hard because it takes a 
> ServerManager and a CatalogTracker, but a little bit of work could turn these 
> into Interfaces.  SM looks easy to do.  Changing CT into an Interface instead 
> might ripple a little through the code base but it'd probably be well worth 
> it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5961) New standard HBase code formatter

2012-09-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-5961:
-

Attachment: 5961-addendum.txt

Addendum excluding formatter file from rat check

> New standard HBase code formatter
> -
>
> Key: HBASE-5961
> URL: https://issues.apache.org/jira/browse/HBASE-5961
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.96.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 5961-addendum.txt, HBase-Formmatter.xml
>
>
> There is currently no good way of distributing the formatter that is currently 
> the 'standard' in HBase. The standard Apache formatter is actually not very 
> close to what we consider 'good'/'pretty' code. Further, it's not trivial to 
> get a good formatter set up.
> Proposing two things: 
> 1) Add a formatter to the dev tools and call out its usage in the docs
> 2) Move to a 'better' formatter that is not the standard Apache formatter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6679) RegionServer aborts due to race between compaction and split

2012-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464107#comment-13464107
 ] 

stack commented on HBASE-6679:
--

bq. Yes.. but that still could be exposed to the problems of memory 
inconsistencies when multiple threads are accessing the object in 
unsynchronized/non-volatile ways, no?

We swap in the new list when compaction completes.  I suppose we could make the 
reference volatile so all threads catch the update.

But you can't see how the two threads can run concurrently? (Not to say it is 
not possible.)

I'd say close this issue if you don't have the logs to disprove that it is a 
case of double assignment, which is a more plausible explanation to my mind.

Good on you Deva.
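
A minimal sketch of the volatile swap being floated here (names are 
illustrative, not the actual Store internals):

{code}
import java.util.Collections;
import java.util.List;

class StoreFileList<T> {
  // The reference is replaced wholesale when a compaction completes; volatile
  // guarantees a concurrently running split thread sees the new list.
  private volatile List<T> files = Collections.emptyList();

  void swapOnCompactionComplete(List<T> compacted) {
    files = Collections.unmodifiableList(compacted);  // one atomic reference write
  }

  List<T> snapshot() {
    return files;  // lists are never mutated in place, so reads need no lock
  }
}
{code}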

> RegionServer aborts due to race between compaction and split
> 
>
> Key: HBASE-6679
> URL: https://issues.apache.org/jira/browse/HBASE-6679
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.92.3
>
> Attachments: rs-crash-parallel-compact-split.log
>
>
> In our nightlies, we have seen RS aborts due to compaction and split racing. 
> Original parent file gets deleted after the compaction, and hence, the 
> daughters don't find the parent data file. The RS kills itself when this 
> happens. Will attach a snippet of the relevant RS logs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-6887) Convert security-related shell commands to use PB-based AccessControlService

2012-09-26 Thread Gary Helmling (JIRA)
Gary Helmling created HBASE-6887:


 Summary: Convert security-related shell commands to use PB-based 
AccessControlService
 Key: HBASE-6887
 URL: https://issues.apache.org/jira/browse/HBASE-6887
 Project: HBase
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.96.0
Reporter: Gary Helmling


The security-related HBase shell commands (grant, revoke, user_permission) are 
still using the old CoprocessorProtocol-based AccessControllerProtocol endpoint 
for dynamic RPC.  These need to be converted to use the protocol buffer based 
AccessControlService interface added in HBASE-5448.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6435) Reading WAL files after a recovery leads to time lost in HDFS timeouts when using dead datanodes

2012-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464100#comment-13464100
 ] 

stack commented on HBASE-6435:
--

So not a requirement but a strong suggestion?

Yeah, we should discuss on dev.

> Reading WAL files after a recovery leads to time lost in HDFS timeouts when 
> using dead datanodes
> 
>
> Key: HBASE-6435
> URL: https://issues.apache.org/jira/browse/HBASE-6435
> Project: HBase
>  Issue Type: Improvement
>  Components: master, regionserver
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
> Fix For: 0.96.0
>
> Attachments: 6435.unfinished.patch, 6435.v10.patch, 6435.v10.patch, 
> 6435.v12.patch, 6435.v12.patch, 6435.v12.patch, 6435-v12.txt, 6435.v13.patch, 
> 6435.v14.patch, 6435.v2.patch, 6435.v7.patch, 6435.v8.patch, 6435.v9.patch, 
> 6435.v9.patch, 6535.v11.patch
>
>
> HBase writes a Write-Ahead-Log to recover from hardware failure. This log is 
> written on HDFS.
> Through ZooKeeper, HBase gets informed usually in 30s that it should start 
> the recovery process. 
> This means reading the Write-Ahead-Log to replay the edits on the other 
> servers.
> In standard deployments, HBase processes (regionservers) are deployed on the 
> same boxes as the datanodes.
> It means that when the box stops, we've actually lost one of the edits, as we 
> lost both the regionserver and the datanode.
> As HDFS marks a node as dead after ~10 minutes, it appears as available when 
> we try to read the blocks to recover. As such, we are delaying the recovery 
> process by 60 seconds as the read will usually fail with a socket timeout. If 
> the file is still opened for writing, it adds an extra 20s + a risk of losing 
> edits if we connect with ipc to the dead DN.
> Possible solutions are:
> - shorter dead datanodes detection by the NN. Requires a NN code change.
> - better dead datanodes management in DFSClient. Requires a DFS code change.
> - NN customisation to write the WAL files on another DN instead of the local 
> one.
> - reordering the blocks returned by the NN on the client side to put the 
> blocks on the same DN as the dead RS at the end of the priority queue. 
> Requires a DFS code change or a kind of workaround.
> The solution retained is the last one. Compared to what was discussed on the 
> mailing list, the proposed patch will not modify HDFS source code but adds a 
> proxy. This is for two reasons:
> - Some HDFS functions managing block ordering are static 
> (MD5MD5CRC32FileChecksum). Implementing the hook in the DFSClient would 
> require implementing the fix only partially, changing the DFS interface to 
> make this function non-static, or making the hook static. None of these 
> solutions is very clean. 
> - Adding a proxy allows putting all the code in HBase, simplifying dependency 
> management.
> Nevertheless, it would be better to have this in HDFS. But this solution 
> allows targeting the latest version only, and this could allow minimal 
> interface changes such as non-static methods.
> Moreover, writing the blocks to a non-local DN would be an even better 
> solution long term.
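
The retained option is essentially a client-side reorder. A small sketch of the 
idea in plain Java (no real DFSClient types here; the actual patch wires this 
into a proxy around the HDFS client):

{code}
import java.util.ArrayList;
import java.util.List;

final class WalReplicaOrder {
  // Given the replica hosts the NN returned for a WAL block, push the host of
  // the dead regionserver (whose co-located datanode is very likely dead too)
  // to the end, so the recovery read tries the healthy replicas first.
  static List<String> deprioritize(List<String> replicaHosts, String deadRsHost) {
    List<String> healthyFirst = new ArrayList<String>();
    List<String> suspect = new ArrayList<String>();
    for (String host : replicaHosts) {
      if (host.equals(deadRsHost)) {
        suspect.add(host);
      } else {
        healthyFirst.add(host);
      }
    }
    healthyFirst.addAll(suspect);
    return healthyFirst;
  }
}
{code}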

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6884) Update documentation on unit tests

2012-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464086#comment-13464086
 ] 

stack commented on HBASE-6884:
--

Committed addendum.  Thanks, N.  That's clearer.

> Update documentation on unit tests
> --
>
> Key: HBASE-6884
> URL: https://issues.apache.org/jira/browse/HBASE-6884
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 6884-addendum.txt, 6884.txt
>
>
> Points to address:
> - we don't have anymore JUnit rules in the tests
> - we should document how to run the test faster.
> - some stuff is not used (run only a category) and should be removed from the 
> doc imho.
> Below the proposal:
> --
> 15.6.2. Unit Tests
> HBase unit tests are subdivided into three categories: small, medium and 
> large, with corresponding JUnit categories: SmallTests, MediumTests, 
> LargeTests. JUnit categories are denoted using java annotations and look like 
> this in your unit test code.
> ...
> @Category(SmallTests.class)
> public class TestHRegionInfo {
>   @Test
>   public void testCreateHRegionInfoName() throws Exception {
> // ...
>   }
> }
> The above example shows how to mark a test as belonging to the small 
> category. HBase uses a patched maven surefire plugin and maven profiles to 
> implement its unit test characterizations. 
> 15.6.2.4. Running tests
> Below we describe how to run the HBase junit categories.
> 15.6.2.4.1. Default: small and medium category tests
> Running
> mvn test
> will execute all small tests in a single JVM (no fork) and then medium tests 
> in a separate JVM for each test instance. Medium tests are NOT executed if 
> there is an error in a small test. Large tests are NOT executed. There is one 
> report for small tests, and one report for medium tests if they are executed.
> 15.6.2.4.2. Running all tests
> Running
> mvn test -P runAllTests
> will execute small tests in a single JVM then medium and large tests in a 
> separate JVM for each test. Medium and large tests are NOT executed if there 
> is an error in a small test. Large tests are NOT executed if there is an 
> error in a small or medium test. There is one report for small tests, and one 
> report for medium and large tests if they are executed
> 15.6.2.4.3. Running a single test or all tests in a package
> To run an individual test, e.g. MyTest, do
> mvn test -P localTests -Dtest=MyTest
> You can also pass multiple, individual tests as a comma-delimited list:
> mvn test -P localTests -Dtest=MyTest1,MyTest2,MyTest3
> You can also pass a package, which will run all tests under the package:
> mvn test -P localTests -Dtest=org.apache.hadoop.hbase.client.*
> The -P localTests will remove the JUnit category effect (without this 
> specific profile, the categories are taken into account). Each junit tests is 
> executed in a separate JVM (A fork per test class). There is no 
> parallelization when localTests profile is set. You will see a new message at 
> the end of the report: "[INFO] Tests are skipped". It's harmless.
> 15.6.2.4.4. Running test faster
> [replace previous chapter]
> By default, mvn test -P runAllTests runs 5 tests in parallel. It can be 
> increased for many developper machine. Consider that you can have 2 tests in 
> parallel per core, and you need about 2Gb of memory per test. Hence, if you 
> have a 8 cores and 24Gb box, you can have 16 tests in parallel.
> The setting is:
> mvn test -P runAllTests -Dsurefire.secondPartThreadCount=12
> To increase the speed, you can as well use a ramdisk. You will need 2Gb of 
> memory to run all the test. You will also need to delete the files between 
> two test run.
> The typical way to configure a ramdisk on Linux is:
> sudo mkdir /ram2G
> sudo mount -t tmpfs -o size=2048M tmpfs /ram2G
> You can then use it to run all HBase tests with the command:
> mvn test -P runAllTests -Dsurefire.secondPartThreadCount=8 
> -Dtest.build.data.basedirectory=/ram2G

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5844) Delete the region servers znode after a regions server crash

2012-09-26 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464093#comment-13464093
 ] 

nkeywal commented on HBASE-5844:


It's strange, I didn't reproduce it. I should, because it seems logical. Will 
look into it and create jiras.
Anyway, there are bugs around this scenario. For example, when it fails we now 
have a new pid file, but this pid does not match the process. This is true in 
0.90 as well. If there is no process, the error for the stop (in 0.96) will be 
??no regionserver to stop because kill -0 of pid 49938 failed with status 1??. 
If another process took this id (yes it should not happen often), the kill will 
succeed.

> Delete the region servers znode after a regions server crash
> 
>
> Key: HBASE-5844
> URL: https://issues.apache.org/jira/browse/HBASE-5844
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, scripts
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
> Fix For: 0.96.0
>
> Attachments: 5844.v1.patch, 5844.v2.patch, 5844.v3.patch, 
> 5844.v3.patch, 5844.v4.patch
>
>
> Today, if the region server crashes, its znode is not deleted in ZooKeeper, 
> so the recovery process will start only after a timeout, usually 30s.
> By deleting the znode in the start script, we remove this delay and the 
> recovery starts immediately.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6884) Update documentation on unit tests

2012-09-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6884:
-

Attachment: 6884-addendum.txt

Addendum w/ Nkeywal's edit.

> Update documentation on unit tests
> --
>
> Key: HBASE-6884
> URL: https://issues.apache.org/jira/browse/HBASE-6884
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 6884-addendum.txt, 6884.txt
>
>
> Points to address:
> - we don't have anymore JUnit rules in the tests
> - we should document how to run the test faster.
> - some stuff is not used (run only a category) and should be removed from the 
> doc imho.
> Below the proposal:
> --
> 15.6.2. Unit Tests
> HBase unit tests are subdivided into three categories: small, medium and 
> large, with corresponding JUnit categories: SmallTests, MediumTests, 
> LargeTests. JUnit categories are denoted using java annotations and look like 
> this in your unit test code.
> ...
> @Category(SmallTests.class)
> public class TestHRegionInfo {
>   @Test
>   public void testCreateHRegionInfoName() throws Exception {
> // ...
>   }
> }
> The above example shows how to mark a test as belonging to the small 
> category. HBase uses a patched maven surefire plugin and maven profiles to 
> implement its unit test characterizations. 
> 15.6.2.4. Running tests
> Below we describe how to run the HBase junit categories.
> 15.6.2.4.1. Default: small and medium category tests
> Running
> mvn test
> will execute all small tests in a single JVM (no fork) and then medium tests 
> in a separate JVM for each test instance. Medium tests are NOT executed if 
> there is an error in a small test. Large tests are NOT executed. There is one 
> report for small tests, and one report for medium tests if they are executed.
> 15.6.2.4.2. Running all tests
> Running
> mvn test -P runAllTests
> will execute small tests in a single JVM then medium and large tests in a 
> separate JVM for each test. Medium and large tests are NOT executed if there 
> is an error in a small test. Large tests are NOT executed if there is an 
> error in a small or medium test. There is one report for small tests, and one 
> report for medium and large tests if they are executed
> 15.6.2.4.3. Running a single test or all tests in a package
> To run an individual test, e.g. MyTest, do
> mvn test -P localTests -Dtest=MyTest
> You can also pass multiple, individual tests as a comma-delimited list:
> mvn test -P localTests -Dtest=MyTest1,MyTest2,MyTest3
> You can also pass a package, which will run all tests under the package:
> mvn test -P localTests -Dtest=org.apache.hadoop.hbase.client.*
> The -P localTests will remove the JUnit category effect (without this 
> specific profile, the categories are taken into account). Each junit tests is 
> executed in a separate JVM (A fork per test class). There is no 
> parallelization when localTests profile is set. You will see a new message at 
> the end of the report: "[INFO] Tests are skipped". It's harmless.
> 15.6.2.4.4. Running test faster
> [replace previous chapter]
> By default, mvn test -P runAllTests runs 5 tests in parallel. It can be 
> increased for many developper machine. Consider that you can have 2 tests in 
> parallel per core, and you need about 2Gb of memory per test. Hence, if you 
> have a 8 cores and 24Gb box, you can have 16 tests in parallel.
> The setting is:
> mvn test -P runAllTests -Dsurefire.secondPartThreadCount=12
> To increase the speed, you can as well use a ramdisk. You will need 2Gb of 
> memory to run all the test. You will also need to delete the files between 
> two test run.
> The typical way to configure a ramdisk on Linux is:
> sudo mkdir /ram2G
> sudo mount -t tmpfs -o size=2048M tmpfs /ram2G
> You can then use it to run all HBase tests with the command:
> mvn test -P runAllTests -Dsurefire.secondPartThreadCount=8 
> -Dtest.build.data.basedirectory=/ram2G

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6846) BitComparator bug - ArrayIndexOutOfBoundsException

2012-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464080#comment-13464080
 ] 

stack commented on HBASE-6846:
--

Should we test length > 1 before subtracting 1 from it?
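
If I read the report right, the fix is to bound the loop by the comparator 
value's length rather than by the whole buffer's, along the lines the reporter 
suggests below (a sketch, not necessarily the attached patch). Note that with 
this bound a zero length simply skips the loop and returns 1, so no extra 
length check seems needed:

{code}
@Override
public int compareTo(byte[] value, int offset, int length) {
  if (length != this.value.length) {
    return 1;
  }
  int b = 0;
  // Iterate over the comparator's own value length (== length after the check
  // above), not over the full length of the KeyValue buffer passed in.
  for (int i = length - 1; i >= 0 && b == 0; i--) {
    switch (bitOperator) {
      case AND:
        b = (this.value[i] & value[i + offset]) & 0xff;
        break;
      case OR:
        b = (this.value[i] | value[i + offset]) & 0xff;
        break;
      case XOR:
        b = (this.value[i] ^ value[i + offset]) & 0xff;
        break;
    }
  }
  return b == 0 ? 1 : 0;
}
{code}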

> BitComparator bug - ArrayIndexOutOfBoundsException
> --
>
> Key: HBASE-6846
> URL: https://issues.apache.org/jira/browse/HBASE-6846
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.1
> Environment: HBase 0.94.1 + Hadoop 2.0.0-cdh4.0.1
>Reporter: Lucian George Iordache
> Attachments: HBASE-6846.patch
>
>
> The HBase 0.94.1 BitComparator introduced a bug in the method "compareTo":
> @Override
>   public int compareTo(byte[] value, int offset, int length) {
> if (length != this.value.length) {
>   return 1;
> }
> int b = 0;
> //Iterating backwards is faster because we can quit after one non-zero 
> byte.
> for (int i = value.length - 1; i >= 0 && b == 0; i--) {
>   switch (bitOperator) {
> case AND:
>   b = (this.value[i] & value[i+offset]) & 0xff;
>   break;
> case OR:
>   b = (this.value[i] | value[i+offset]) & 0xff;
>   break;
> case XOR:
>   b = (this.value[i] ^ value[i+offset]) & 0xff;
>   break;
>   }
> }
> return b == 0 ? 1 : 0;
>   }
> I've encountered this problem when using a BitComparator with a configured 
> this.value.length=8, and in the HBase table there were KeyValues with 
> keyValue.getBuffer().length=207911 bytes. In this case:
> for (int i = 207910; i >= 0 && b == 0; i--) {
>   switch (bitOperator) {
> case AND:
>   b = (this.value[207910] ... ==> ArrayIndexOutOfBoundsException
>   break;
> That loop should use:
>   for (int i = length - 1; i >= 0 && b == 0; i--) { (or this.value.length.)
> Should I provide a patch for correcting the problem?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6884) Update documentation on unit tests

2012-09-26 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464069#comment-13464069
 ] 

nkeywal commented on HBASE-6884:


Actually I was not sure how to say it clearly and simply; that's why I took 
this example :-)

So I would propose:
{noformat}
It can be increased on a developer's machine. Allowing that you can have 2
tests in parallel per core, and that you need about 2Gb of memory per test
process, if you have an 8-core, 24Gb box, you can have 12 tests in parallel:
the number of cores would allow 16 tests, but the available memory limits it
to 12 (that is, 24/2).
To run all tests with 12 in parallel, do this:
mvn test -P runAllTests -Dsurefire.secondPartThreadCount=12
{noformat}
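
Put as arithmetic, the limit the text is describing is simply:

{code}
int cores = 8, memoryGb = 24;
int byCpu = 2 * cores;                     // 16 tests
int byMemory = memoryGb / 2;               // 12 tests, about 2Gb per forked test JVM
int parallel = Math.min(byCpu, byMemory);  // 12, hence -Dsurefire.secondPartThreadCount=12
{code}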


> Update documentation on unit tests
> --
>
> Key: HBASE-6884
> URL: https://issues.apache.org/jira/browse/HBASE-6884
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 6884.txt
>
>
> Points to address:
> - we don't have anymore JUnit rules in the tests
> - we should document how to run the test faster.
> - some stuff is not used (run only a category) and should be removed from the 
> doc imho.
> Below the proposal:
> --
> 15.6.2. Unit Tests
> HBase unit tests are subdivided into three categories: small, medium and 
> large, with corresponding JUnit categories: SmallTests, MediumTests, 
> LargeTests. JUnit categories are denoted using java annotations and look like 
> this in your unit test code.
> ...
> @Category(SmallTests.class)
> public class TestHRegionInfo {
>   @Test
>   public void testCreateHRegionInfoName() throws Exception {
> // ...
>   }
> }
> The above example shows how to mark a test as belonging to the small 
> category. HBase uses a patched maven surefire plugin and maven profiles to 
> implement its unit test characterizations. 
> 15.6.2.4. Running tests
> Below we describe how to run the HBase junit categories.
> 15.6.2.4.1. Default: small and medium category tests
> Running
> mvn test
> will execute all small tests in a single JVM (no fork) and then medium tests 
> in a separate JVM for each test instance. Medium tests are NOT executed if 
> there is an error in a small test. Large tests are NOT executed. There is one 
> report for small tests, and one report for medium tests if they are executed.
> 15.6.2.4.2. Running all tests
> Running
> mvn test -P runAllTests
> will execute small tests in a single JVM then medium and large tests in a 
> separate JVM for each test. Medium and large tests are NOT executed if there 
> is an error in a small test. Large tests are NOT executed if there is an 
> error in a small or medium test. There is one report for small tests, and one 
> report for medium and large tests if they are executed
> 15.6.2.4.3. Running a single test or all tests in a package
> To run an individual test, e.g. MyTest, do
> mvn test -P localTests -Dtest=MyTest
> You can also pass multiple, individual tests as a comma-delimited list:
> mvn test -P localTests -Dtest=MyTest1,MyTest2,MyTest3
> You can also pass a package, which will run all tests under the package:
> mvn test -P localTests -Dtest=org.apache.hadoop.hbase.client.*
> The -P localTests will remove the JUnit category effect (without this 
> specific profile, the categories are taken into account). Each junit tests is 
> executed in a separate JVM (A fork per test class). There is no 
> parallelization when localTests profile is set. You will see a new message at 
> the end of the report: "[INFO] Tests are skipped". It's harmless.
> 15.6.2.4.4. Running test faster
> [replace previous chapter]
> By default, mvn test -P runAllTests runs 5 tests in parallel. It can be 
> increased for many developper machine. Consider that you can have 2 tests in 
> parallel per core, and you need about 2Gb of memory per test. Hence, if you 
> have a 8 cores and 24Gb box, you can have 16 tests in parallel.
> The setting is:
> mvn test -P runAllTests -Dsurefire.secondPartThreadCount=12
> To increase the speed, you can as well use a ramdisk. You will need 2Gb of 
> memory to run all the test. You will also need to delete the files between 
> two test run.
> The typical way to configure a ramdisk on Linux is:
> sudo mkdir /ram2G
> sudo mount -t tmpfs -o size=2048M tmpfs /ram2G
> You can then use it to run all HBase tests with the command:
> mvn test -P runAllTests -Dsurefire.secondPartThreadCount=8 
> -Dtest.build.data.basedirectory=/ram2G

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com

[jira] [Updated] (HBASE-6885) Typo in the Javadoc for close method of HTableInterface class

2012-09-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6885:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk.  Thanks for the patch, Jingguo.

> Typo in the Javadoc for close method of HTableInterface class
> -
>
> Key: HBASE-6885
> URL: https://issues.apache.org/jira/browse/HBASE-6885
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.94.1
>Reporter: Jingguo Yao
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: HTableInterface-HBASE-6885.patch
>
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> "help" in "Releases any resources help or pending changes in internal 
> buffers" should be "held".

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-3896) Make AssignmentManager standalone testable by having its constructor take Interfaces rather than a CatalogTracker and a ServerManager

2012-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464063#comment-13464063
 ] 

stack commented on HBASE-3896:
--

bq. I don't think that abstracting out the serverManger and catalogtracker has 
any real value, at the moment. I don't think there are other implementations of 
those classes, so pulling out an interface only makes things more complicated, 
not less. 

Other implementations would be mocks that implement the SM and CT Interfaces?

Otherwise, appreciate the interjection.  Good input.

> Make AssignmentManager standalone testable by having its constructor take 
> Interfaces rather than a CatalogTracker and a ServerManager
> -
>
> Key: HBASE-3896
> URL: https://issues.apache.org/jira/browse/HBASE-3896
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: Cody Marcel
>
> If we could stand up an instance of AssignmentManager, a core fat class that 
> has a bunch of critical logic managing state transitions, then it'd be easier 
> writing unit tests around its logic.  Currently it's hard because it takes a 
> ServerManager and a CatalogTracker, but a little bit of work could turn these 
> into Interfaces.  SM looks easy to do.  Changing CT into an Interface instead 
> might ripple a little through the code base but it'd probably be well worth 
> it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-26 Thread Mikhail Bautin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464056#comment-13464056
 ] 

Mikhail Bautin commented on HBASE-6871:
---

[~feng wang]: nice catch! My understanding of this bug is somewhat different 
from yours, though. There are no new data blocks being written after 
shouldWriteBlock(true) is called when closing an HFileWriterV2. What I think is 
happening is that a new block gets written and a new entry is added to the root 
index right before writeInlineBlocks(true) is called in HFileWriterV2.close. If 
the "closing" parameter was set to false, a new leaf-level block would have 
been written at this point. However, as you described, the current index chunk 
gets promoted to a root index and then split into intermediate index blocks, 
which is incorrect, because in this case there are no leaf index blocks.

I think your patch fixes this particular issue, but another issue remains. We 
use different formats for non-root and root index blocks, and the non-root 
index format is slightly more space-efficient. Thus, it is still possible that 
the current index chunk is promoted to become the root index chunk because its 
"non-root" size is under the configured index chunk size, but its "root" size 
turns out to be above the configured index chunk size, so we create 
intermediate blocks in writeIndexBlocks and run into the same problem.

I think the correct solution is that once we decide to promote the index chunk 
to the root chunk, we should never split it into intermediate blocks. That will 
fix both issues. I will try to come up with a test and a patch today.
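
To make the remaining failure mode concrete, a small illustration 
(getNonRootSize/getRootSize mirror the BlockIndexChunk accessors, but the 
surrounding logic is paraphrased here, not the actual writer code):

{code}
final class RootChunkOverflowCheck {
  // The inline check measures the chunk in the compact non-root format, yet the
  // chunk that survives until close() is written in the (larger) root format.
  static boolean promotedChunkStillOverflows(int nonRootSize, int rootSize, int maxChunkSize) {
    boolean passedInlineCheck = nonRootSize <= maxChunkSize; // never flushed as a LEAF_INDEX block
    boolean overflowsAsRoot   = rootSize > maxChunkSize;     // would still be split in writeIndexBlocks
    return passedInlineCheck && overflowsAsRoot;             // the case that must not trigger a split
  }
}
{code}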


> HFileBlockIndex Write Error BlockIndex in HFile V2
> --
>
> Key: HBASE-6871
> URL: https://issues.apache.org/jira/browse/HBASE-6871
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.94.1
> Environment: redhat 5u4
>Reporter: Fenng Wang
>Priority: Critical
> Attachments: 428a400628ae412ca45d39fce15241fd.hfile, 
> 787179746cc347ce9bb36f1989d17419.hfile, 
> 960a026ca370464f84903ea58114bc75.hfile, 
> d0026fa8d59b4df291718f59dd145aad.hfile, hbase-6871-0.94.patch, 
> ImportHFile.java, test_hfile_block_index.sh
>
>
> After writing some data, compaction and scan operation both failure, the 
> exception message is below:
> 2012-09-18 06:32:26,227 ERROR 
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: 
> Compaction failed 
> regionName=hfile_test,,1347778722498.d220df43fb9d8af4633bd7f547613f9e., 
> storeName=page_info, fileCount=7, fileSize=1.3m (188.0k, 188.0k, 188.0k, 
> 188.0k, 188.0k, 185.8k, 223.3k), priority=9, 
> time=45826250816757428java.io.IOException: Could not reseek 
> StoreFileScanner[HFileScanner for reader 
> reader=hdfs://hadoopdev1.cm6:9000/hbase/hfile_test/d220df43fb9d8af4633bd7f547613f9e/page_info/b0f6118f58de47ad9d87cac438ee0895,
>  compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] 
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], 
> firstKey=http://com.truereligionbrandjeans.www/Womens_Dresses/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/4010.html/page_info:anchor_sig/1347764439449/DeleteColumn,
>  lastKey=http://com.trura.www//page_info:page_type/1347763395089/Put, 
> avgKeyLen=776, avgValueLen=4, entries=12853, length=228611, 
> cur=http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/1347764003865/Put/vlen=1/ts=0]
>  to key 
> http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/OLDEST_TIMESTAMP/Minimum/vlen=0/ts=0
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:178)
> 
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
> 
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:299)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:244

[jira] [Commented] (HBASE-6884) Update documentation on unit tests

2012-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464057#comment-13464057
 ] 

stack commented on HBASE-6884:
--

I don't get it still (pardon me):

{code}
It can be increased on a developer's machine. Allowing that you can have 2
tests in parallel per core, and you need about 2Gb of memory per test,
if you have a 8 cores and 24Gb box, you can have 16 tests in parallel.
To run 16 in parallel, do this:
mvn test -P runAllTests -Dsurefire.secondPartThreadCount=12.
{code}

So, the thread count is 12 on the command line because of the 24GB of memory, 
but the text leading up to it says 16.  Is it the 16 in the text that I should 
change to 12?


> Update documentation on unit tests
> --
>
> Key: HBASE-6884
> URL: https://issues.apache.org/jira/browse/HBASE-6884
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 6884.txt
>
>
> Points to address:
> - we don't have anymore JUnit rules in the tests
> - we should document how to run the test faster.
> - some stuff is not used (run only a category) and should be removed from the 
> doc imho.
> Below the proposal:
> --
> 15.6.2. Unit Tests
> HBase unit tests are subdivided into three categories: small, medium and 
> large, with corresponding JUnit categories: SmallTests, MediumTests, 
> LargeTests. JUnit categories are denoted using java annotations and look like 
> this in your unit test code.
> ...
> @Category(SmallTests.class)
> public class TestHRegionInfo {
>   @Test
>   public void testCreateHRegionInfoName() throws Exception {
> // ...
>   }
> }
> The above example shows how to mark a test as belonging to the small 
> category. HBase uses a patched maven surefire plugin and maven profiles to 
> implement its unit test characterizations. 
> 15.6.2.4. Running tests
> Below we describe how to run the HBase junit categories.
> 15.6.2.4.1. Default: small and medium category tests
> Running
> mvn test
> will execute all small tests in a single JVM (no fork) and then medium tests 
> in a separate JVM for each test instance. Medium tests are NOT executed if 
> there is an error in a small test. Large tests are NOT executed. There is one 
> report for small tests, and one report for medium tests if they are executed.
> 15.6.2.4.2. Running all tests
> Running
> mvn test -P runAllTests
> will execute small tests in a single JVM then medium and large tests in a 
> separate JVM for each test. Medium and large tests are NOT executed if there 
> is an error in a small test. Large tests are NOT executed if there is an 
> error in a small or medium test. There is one report for small tests, and one 
> report for medium and large tests if they are executed
> 15.6.2.4.3. Running a single test or all tests in a package
> To run an individual test, e.g. MyTest, do
> mvn test -P localTests -Dtest=MyTest
> You can also pass multiple, individual tests as a comma-delimited list:
> mvn test -P localTests -Dtest=MyTest1,MyTest2,MyTest3
> You can also pass a package, which will run all tests under the package:
> mvn test -P localTests -Dtest=org.apache.hadoop.hbase.client.*
> The -P localTests will remove the JUnit category effect (without this 
> specific profile, the categories are taken into account). Each junit tests is 
> executed in a separate JVM (A fork per test class). There is no 
> parallelization when localTests profile is set. You will see a new message at 
> the end of the report: "[INFO] Tests are skipped". It's harmless.
> 15.6.2.4.4. Running test faster
> [replace previous chapter]
> By default, mvn test -P runAllTests runs 5 tests in parallel. It can be 
> increased for many developper machine. Consider that you can have 2 tests in 
> parallel per core, and you need about 2Gb of memory per test. Hence, if you 
> have a 8 cores and 24Gb box, you can have 16 tests in parallel.
> The setting is:
> mvn test -P runAllTests -Dsurefire.secondPartThreadCount=12
> To increase the speed, you can as well use a ramdisk. You will need 2Gb of 
> memory to run all the test. You will also need to delete the files between 
> two test run.
> The typical way to configure a ramdisk on Linux is:
> sudo mkdir /ram2G
> sudo mount -t tmpfs -o size=2048M tmpfs /ram2G
> You can then use it to run all HBase tests with the command:
> mvn test -P runAllTests -Dsurefire.secondPartThreadCount=8 
> -Dtest.build.data.basedirectory=/ram2G

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6884) Update documentation on unit tests

2012-09-26 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464051#comment-13464051
 ] 

nkeywal commented on HBASE-6884:


It's 12 because you have 24 Gb of memory.
It's the text before it that is unclear: "Consider that you can have 2 tests in 
parallel per core, and you need about 2Gb of memory per test.".

> Update documentation on unit tests
> --
>
> Key: HBASE-6884
> URL: https://issues.apache.org/jira/browse/HBASE-6884
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 6884.txt
>
>
> Points to address:
> - we don't have anymore JUnit rules in the tests
> - we should document how to run the test faster.
> - some stuff is not used (run only a category) and should be removed from the 
> doc imho.
> Below the proposal:
> --
> 15.6.2. Unit Tests
> HBase unit tests are subdivided into three categories: small, medium and 
> large, with corresponding JUnit categories: SmallTests, MediumTests, 
> LargeTests. JUnit categories are denoted using java annotations and look like 
> this in your unit test code.
> ...
> @Category(SmallTests.class)
> public class TestHRegionInfo {
>   @Test
>   public void testCreateHRegionInfoName() throws Exception {
> // ...
>   }
> }
> The above example shows how to mark a test as belonging to the small 
> category. HBase uses a patched maven surefire plugin and maven profiles to 
> implement its unit test characterizations. 
> 15.6.2.4. Running tests
> Below we describe how to run the HBase junit categories.
> 15.6.2.4.1. Default: small and medium category tests
> Running
> mvn test
> will execute all small tests in a single JVM (no fork) and then medium tests 
> in a separate JVM for each test instance. Medium tests are NOT executed if 
> there is an error in a small test. Large tests are NOT executed. There is one 
> report for small tests, and one report for medium tests if they are executed.
> 15.6.2.4.2. Running all tests
> Running
> mvn test -P runAllTests
> will execute small tests in a single JVM then medium and large tests in a 
> separate JVM for each test. Medium and large tests are NOT executed if there 
> is an error in a small test. Large tests are NOT executed if there is an 
> error in a small or medium test. There is one report for small tests, and one 
> report for medium and large tests if they are executed
> 15.6.2.4.3. Running a single test or all tests in a package
> To run an individual test, e.g. MyTest, do
> mvn test -P localTests -Dtest=MyTest
> You can also pass multiple, individual tests as a comma-delimited list:
> mvn test -P localTests -Dtest=MyTest1,MyTest2,MyTest3
> You can also pass a package, which will run all tests under the package:
> mvn test -P localTests -Dtest=org.apache.hadoop.hbase.client.*
> The -P localTests profile will remove the JUnit category effect (without this 
> specific profile, the categories are taken into account). Each JUnit test is 
> executed in a separate JVM (a fork per test class). There is no 
> parallelization when the localTests profile is set. You will see a new message at 
> the end of the report: "[INFO] Tests are skipped". It's harmless.
> 15.6.2.4.4. Running tests faster
> [replace previous chapter]
> By default, mvn test -P runAllTests runs 5 tests in parallel. This can be 
> increased on many developer machines. Consider that you can have 2 tests in 
> parallel per core, and you need about 2Gb of memory per test. Hence, if you 
> have an 8-core, 24Gb box, you can have 16 tests in parallel.
> The setting is:
> mvn test -P runAllTests -Dsurefire.secondPartThreadCount=12
> To increase the speed, you can also use a ramdisk. You will need 2Gb of 
> memory to run all the tests. You will also need to delete the files between 
> two test runs.
> The typical way to configure a ramdisk on Linux is:
> sudo mkdir /ram2G
> sudo mount -t tmpfs -o size=2048M tmpfs /ram2G
> You can then use it to run all HBase tests with the command:
> mvn test -P runAllTests -Dsurefire.secondPartThreadCount=8 
> -Dtest.build.data.basedirectory=/ram2G

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6702) ResourceChecker refinement

2012-09-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6702:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I committed the resource checker doc as part of the HBASE-6884 commit.  Thanks 
for the nice cleanup nkeywal.

> ResourceChecker refinement
> --
>
> Key: HBASE-6702
> URL: https://issues.apache.org/jira/browse/HBASE-6702
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.96.0
>Reporter: Jesse Yates
>Assignee: nkeywal
>Priority: Critical
> Fix For: 0.96.0
>
> Attachments: 6702.v1.patch, 6702.v4.patch, 6702.v5.patch
>
>
> This was based on some discussion from HBASE-6234.
> The ResourceChecker was added by N. Keywal to help resolve some Hadoop QA 
> issues, but has since not been widely utilized. Further, with modularization we 
> have had to drop the ResourceChecker from the tests that are moved into the 
> hbase-common module, because bringing the ResourceChecker up to hbase-common 
> would involve bringing all its dependencies (which are quite far reaching).
> The question then is, what should we do with it? Get rid of it? Refactor and 
> reuse? 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-6884) Update documentation on unit tests

2012-09-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-6884.
--

  Resolution: Fixed
Assignee: nkeywal
Hadoop Flags: Reviewed

Committed the patch to trunk.  Thanks, nkeywal.

> Update documentation on unit tests
> --
>
> Key: HBASE-6884
> URL: https://issues.apache.org/jira/browse/HBASE-6884
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Assignee: nkeywal
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 6884.txt
>
>
> Points to address:
> - we no longer have JUnit rules in the tests
> - we should document how to run the tests faster.
> - some stuff is not used (running only a category) and should be removed from the 
> doc imho.
> Below is the proposal:
> --
> 15.6.2. Unit Tests
> HBase unit tests are subdivided into three categories: small, medium and 
> large, with corresponding JUnit categories: SmallTests, MediumTests, 
> LargeTests. JUnit categories are denoted using java annotations and look like 
> this in your unit test code.
> ...
> @Category(SmallTests.class)
> public class TestHRegionInfo {
>   @Test
>   public void testCreateHRegionInfoName() throws Exception {
> // ...
>   }
> }
> The above example shows how to mark a test as belonging to the small 
> category. HBase uses a patched maven surefire plugin and maven profiles to 
> implement its unit test characterizations. 
> 15.6.2.4. Running tests
> Below we describe how to run the HBase junit categories.
> 15.6.2.4.1. Default: small and medium category tests
> Running
> mvn test
> will execute all small tests in a single JVM (no fork) and then medium tests 
> in a separate JVM for each test instance. Medium tests are NOT executed if 
> there is an error in a small test. Large tests are NOT executed. There is one 
> report for small tests, and one report for medium tests if they are executed.
> 15.6.2.4.2. Running all tests
> Running
> mvn test -P runAllTests
> will execute small tests in a single JVM then medium and large tests in a 
> separate JVM for each test. Medium and large tests are NOT executed if there 
> is an error in a small test. Large tests are NOT executed if there is an 
> error in a small or medium test. There is one report for small tests, and one 
> report for medium and large tests if they are executed.
> 15.6.2.4.3. Running a single test or all tests in a package
> To run an individual test, e.g. MyTest, do
> mvn test -P localTests -Dtest=MyTest
> You can also pass multiple, individual tests as a comma-delimited list:
> mvn test -P localTests -Dtest=MyTest1,MyTest2,MyTest3
> You can also pass a package, which will run all tests under the package:
> mvn test -P localTests -Dtest=org.apache.hadoop.hbase.client.*
> The -P localTests profile will remove the JUnit category effect (without this 
> specific profile, the categories are taken into account). Each JUnit test is 
> executed in a separate JVM (a fork per test class). There is no 
> parallelization when the localTests profile is set. You will see a new message at 
> the end of the report: "[INFO] Tests are skipped". It's harmless.
> 15.6.2.4.4. Running tests faster
> [replace previous chapter]
> By default, mvn test -P runAllTests runs 5 tests in parallel. This can be 
> increased on many developer machines. Consider that you can have 2 tests in 
> parallel per core, and you need about 2Gb of memory per test. Hence, if you 
> have an 8-core, 24Gb box, you can have 16 tests in parallel.
> The setting is:
> mvn test -P runAllTests -Dsurefire.secondPartThreadCount=12
> To increase the speed, you can also use a ramdisk. You will need 2Gb of 
> memory to run all the tests. You will also need to delete the files between 
> two test runs.
> The typical way to configure a ramdisk on Linux is:
> sudo mkdir /ram2G
> sudo mount -t tmpfs -o size=2048M tmpfs /ram2G
> You can then use it to run all HBase tests with the command:
> mvn test -P runAllTests -Dsurefire.secondPartThreadCount=8 
> -Dtest.build.data.basedirectory=/ram2G

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6884) Update documentation on unit tests

2012-09-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6884:
-

Attachment: 6884.txt

Patch for the doc that adds in nkeywal's stuff below and adds a Resource 
Checker section -- how it works.  nkeywal, I changed 
this -Dsurefire.secondPartThreadCount=12... to be 
-Dsurefire.secondPartThreadCount=16.  Is that right?  12 does not seem to agree 
w/ the text that precedes it.

> Update documentation on unit tests
> --
>
> Key: HBASE-6884
> URL: https://issues.apache.org/jira/browse/HBASE-6884
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.96.0
>Reporter: nkeywal
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: 6884.txt
>
>
> Points to address:
> - we no longer have JUnit rules in the tests
> - we should document how to run the tests faster.
> - some stuff is not used (running only a category) and should be removed from the 
> doc imho.
> Below is the proposal:
> --
> 15.6.2. Unit Tests
> HBase unit tests are subdivided into three categories: small, medium and 
> large, with corresponding JUnit categories: SmallTests, MediumTests, 
> LargeTests. JUnit categories are denoted using java annotations and look like 
> this in your unit test code.
> ...
> @Category(SmallTests.class)
> public class TestHRegionInfo {
>   @Test
>   public void testCreateHRegionInfoName() throws Exception {
> // ...
>   }
> }
> The above example shows how to mark a test as belonging to the small 
> category. HBase uses a patched maven surefire plugin and maven profiles to 
> implement its unit test characterizations. 
> 15.6.2.4. Running tests
> Below we describe how to run the HBase junit categories.
> 15.6.2.4.1. Default: small and medium category tests
> Running
> mvn test
> will execute all small tests in a single JVM (no fork) and then medium tests 
> in a separate JVM for each test instance. Medium tests are NOT executed if 
> there is an error in a small test. Large tests are NOT executed. There is one 
> report for small tests, and one report for medium tests if they are executed.
> 15.6.2.4.2. Running all tests
> Running
> mvn test -P runAllTests
> will execute small tests in a single JVM then medium and large tests in a 
> separate JVM for each test. Medium and large tests are NOT executed if there 
> is an error in a small test. Large tests are NOT executed if there is an 
> error in a small or medium test. There is one report for small tests, and one 
> report for medium and large tests if they are executed.
> 15.6.2.4.3. Running a single test or all tests in a package
> To run an individual test, e.g. MyTest, do
> mvn test -P localTests -Dtest=MyTest
> You can also pass multiple, individual tests as a comma-delimited list:
> mvn test -P localTests -Dtest=MyTest1,MyTest2,MyTest3
> You can also pass a package, which will run all tests under the package:
> mvn test -P localTests -Dtest=org.apache.hadoop.hbase.client.*
> The -P localTests profile will remove the JUnit category effect (without this 
> specific profile, the categories are taken into account). Each JUnit test is 
> executed in a separate JVM (a fork per test class). There is no 
> parallelization when the localTests profile is set. You will see a new message at 
> the end of the report: "[INFO] Tests are skipped". It's harmless.
> 15.6.2.4.4. Running tests faster
> [replace previous chapter]
> By default, mvn test -P runAllTests runs 5 tests in parallel. This can be 
> increased on many developer machines. Consider that you can have 2 tests in 
> parallel per core, and you need about 2Gb of memory per test. Hence, if you 
> have an 8-core, 24Gb box, you can have 16 tests in parallel.
> The setting is:
> mvn test -P runAllTests -Dsurefire.secondPartThreadCount=12
> To increase the speed, you can also use a ramdisk. You will need 2Gb of 
> memory to run all the tests. You will also need to delete the files between 
> two test runs.
> The typical way to configure a ramdisk on Linux is:
> sudo mkdir /ram2G
> sudo mount -t tmpfs -o size=2048M tmpfs /ram2G
> You can then use it to run all HBase tests with the command:
> mvn test -P runAllTests -Dsurefire.secondPartThreadCount=8 
> -Dtest.build.data.basedirectory=/ram2G

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-3896) Make AssignmentManager standalone testable by having its constructor take Interfaces rather than a CatalogTracker and a ServerManager

2012-09-26 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463995#comment-13463995
 ] 

Jesse Yates commented on HBASE-3896:


I don't think that abstracting out the ServerManager and CatalogTracker has any 
real value at the moment. I don't think there are other implementations of 
those classes, so pulling out an interface only makes things more complicated, 
not less. If there were other implementations, for different use cases, then it 
would make sense to put it all behind an interface to swap them out easily. 

I'm +1 on stack's original comment though that mocking out the classes passed 
into the AssignmentManager (which is already pretty well set up for testing 
since it can take in all its dependencies) is enough to make the assignment 
manager easy to test.
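
For what it's worth, the mocking route is roughly the sketch below (hypothetical 
only -- it assumes Mockito is on the test classpath and the usual package 
locations for ServerManager and CatalogTracker, and it leaves out the actual 
AssignmentManager constructor call since its argument list is whatever trunk 
currently has):

import org.apache.hadoop.hbase.catalog.CatalogTracker;
import org.apache.hadoop.hbase.master.ServerManager;
import org.mockito.Mockito;

public class AssignmentManagerMockingSketch {
  void sketch() {
    // Build mock collaborators instead of standing up real ones.
    ServerManager serverManager = Mockito.mock(ServerManager.class);
    CatalogTracker catalogTracker = Mockito.mock(CatalogTracker.class);
    // Pass the mocks into new AssignmentManager(...) and assert on its state
    // transitions, without a running cluster behind them.
  }
}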

However, I do think there is value in making the ServerManager more 
composition-based. Cody mentioned (offline) functions like stop that could be more cleanly 
abstracted not only here, but around the codebase (I'm thinking you have a 
StopBuilder that takes in a bunch of things to stop and then builds your 
'stopper' so you can close everything out easily; see the rough sketch below).  I'm all for adding a couple 
new classes to help break out the functions of the SM some more, but that 
shouldn't happen just to hide functionality from certain parts of the code 
(e.g. HMaster shouldn't know about function X, so we make another interface 
that doesn't include it, which leads to a really complex inheritance hierarchy 
that is hard to reason about and makes the concrete classes even harder to 
read).
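
Very roughly, the builder I'm imagining could look like this (a hypothetical 
sketch only -- none of these classes or names exist in HBase today):

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: collect everything that needs stopping, then hand back a
// single "stopper" that shuts it all down with one call.
interface Stopper {
  void stop(String why);
}

class StopBuilder {
  private final List<Stopper> parts = new ArrayList<Stopper>();

  StopBuilder add(Stopper part) {   // register anything that needs to be stopped
    parts.add(part);
    return this;
  }

  Stopper build() {                 // one object that stops every registered part
    return new Stopper() {
      public void stop(String why) {
        for (Stopper part : parts) {
          part.stop(why);
        }
      }
    };
  }
}

The master (or anything else) would then do something like 
new StopBuilder().add(a).add(b).build() once and call stop("shutting down") on 
the result, instead of knowing how to stop each piece itself.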

TL;DR anything to make the SM smaller (break out functionality) and 
composition-based would be great, but the root of this ticket should be solvable via 
mocking. At the very least, if we still end up doing this, maybe rename the 
ticket?

> Make AssignmentManager standalone testable by having its constructor take 
> Interfaces rather than a CatalogTracker and a ServerManager
> -
>
> Key: HBASE-3896
> URL: https://issues.apache.org/jira/browse/HBASE-3896
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: Cody Marcel
>
> If we could stand up an instance of AssignmentManager, a core fat class that 
> has a bunch of critical logic managing state transitions, then it'd be easier 
> writing unit tests around its logic.  Currently it's hard because it takes a 
> ServerManager and a CatalogTracker, but a little bit of work could turn these 
> into Interfaces.  SM looks easy to do.  Changing CT into an Interface instead 
> might ripple a little through the code base but it'd probably be well worth 
> it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Work started] (HBASE-6886) Extract Interface from ServerManager

2012-09-26 Thread Cody Marcel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-6886 started by Cody Marcel.

> Extract Interface from ServerManager
> 
>
> Key: HBASE-6886
> URL: https://issues.apache.org/jira/browse/HBASE-6886
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Cody Marcel
>Assignee: Cody Marcel
>  Labels: noob
>
> Making a subtask for ServerManager to keep changelists smaller

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-3896) Make AssignmentManager standalone testable by having its constructor take Interfaces rather than a CatalogTracker and a ServerManager

2012-09-26 Thread Cody Marcel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463968#comment-13463968
 ] 

Cody Marcel commented on HBASE-3896:


I am making subtasks for this to break the changes up into smaller change lists.

> Make AssignmentManager standalone testable by having its constructor take 
> Interfaces rather than a CatalogTracker and a ServerManager
> -
>
> Key: HBASE-3896
> URL: https://issues.apache.org/jira/browse/HBASE-3896
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: Cody Marcel
>
> If we could stand up an instance of AssignmentManager, a core fat class that 
> has a bunch of critical logic managing state transitions, then it'd be easier 
> writing unit tests around its logic.  Currently it's hard because it takes a 
> ServerManager and a CatalogTracker, but a little bit of work could turn these 
> into Interfaces.  SM looks easy to do.  Changing CT into an Interface instead 
> might ripple a little through the code base but it'd probably be well worth 
> it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Work started] (HBASE-3896) Make AssignmentManager standalone testable by having its constructor take Interfaces rather than a CatalogTracker and a ServerManager

2012-09-26 Thread Cody Marcel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-3896 started by Cody Marcel.

> Make AssignmentManager standalone testable by having its constructor take 
> Interfaces rather than a CatalogTracker and a ServerManager
> -
>
> Key: HBASE-3896
> URL: https://issues.apache.org/jira/browse/HBASE-3896
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: Cody Marcel
>
> If we could stand up an instance of AssignmentManager, a core fat class that 
> has a bunch of critical logic managing state transitions, then it'd be easier 
> writing unit tests around its logic.  Currently it's hard because it takes a 
> ServerManager and a CatalogTracker, but a little bit of work could turn these 
> into Interfaces.  SM looks easy to do.  Changing CT into an Interface instead 
> might ripple a little through the code base but it'd probably be well worth 
> it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6886) Extract Interface from ServerManager

2012-09-26 Thread Cody Marcel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cody Marcel updated HBASE-6886:
---

Assignee: Cody Marcel

> Extract Interface from ServerManager
> 
>
> Key: HBASE-6886
> URL: https://issues.apache.org/jira/browse/HBASE-6886
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Cody Marcel
>Assignee: Cody Marcel
>  Labels: noob
>
> Making a subtask for ServerManager to keep changelists smaller

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

