[jira] [Updated] (HBASE-6875) Remove commons-httpclient, -component, and up versions on other jars (remove unused repository)

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6875:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Release Note: Removed unused libs commons-httpclient and 
commons-component.  Upped commons-codec from 1.4 to 1.7, commons-io from 2.1 to 
2.4, commons-lang from 2.5 to 2.6, jruby from 1.6.5 to 1.6.8 (1.7 jruby is 14M, 
1.6 is 10M), mockito-all from 1.9 to 2.4.1, zookeeper from 3.4.3 to 3.4.4
   Status: Resolved  (was: Patch Available)

Committed to trunk.

 Remove commons-httpclient, -component, and up versions on other jars (remove 
 unused repository)
 ---

 Key: HBASE-6875
 URL: https://issues.apache.org/jira/browse/HBASE-6875
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 0.96.0
Reporter: stack
Assignee: stack
 Fix For: 0.96.0

 Attachments: pom.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6876) Clean up WARNs and log messages around startup

2012-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464487#comment-13464487
 ] 

Hudson commented on HBASE-6876:
---

Integrated in HBase-TRUNK #3383 (See 
[https://builds.apache.org/job/HBase-TRUNK/3383/])
HBASE-6876 Clean up WARNs and log messages around startup; REAPPLY 
(Revision 1390848)
HBASE-6876 Clean up WARNs and log messages around startup; REVERT OF OVERCOMMIT 
(Revision 1390847)
HBASE-6876 Clean up WARNs and log messages around startup (Revision 1390846)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcEngine.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java

stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcEngine.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java

stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcEngine.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java


 Clean up WARNs and log messages around startup
 --

 Key: HBASE-6876
 URL: https://issues.apache.org/jira/browse/HBASE-6876
 Project: HBase
  Issue Type: Improvement
Reporter: stack
Assignee: stack
 Fix For: 0.96.0

 Attachments: logging2.txt, logging.txt


 I was looking at our startup messages and some of the 'normal' messages are a 
 bit frightening at face value.



[jira] [Commented] (HBASE-6875) Remove commons-httpclient, -component, and up versions on other jars (remove unused repository)

2012-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464486#comment-13464486
 ] 

Hudson commented on HBASE-6875:
---

Integrated in HBase-TRUNK #3383 (See 
[https://builds.apache.org/job/HBase-TRUNK/3383/])
HBASE-6875 Remove commons-httpclient, -component, and up versions on other 
jars (remove unused repository) (Revision 1390858)

 Result = FAILURE
stack : 
Files : 
* /hbase/trunk/pom.xml





[jira] [Commented] (HBASE-6879) Add HBase Code Template

2012-09-27 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464492#comment-13464492
 ] 

Jesse Yates commented on HBASE-6879:


Yeah, it's not setting for me either. I've never seen this not work on older 
versions. Can you get any template to work?

 Add HBase Code Template
 ---

 Key: HBASE-6879
 URL: https://issues.apache.org/jira/browse/HBASE-6879
 Project: HBase
  Issue Type: Bug
  Components: build, documentation
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: HBase Code Template.xml


 Add a standard code template to do along with the code formatter for HBase. 
 This helps make sure people have the correct license and general commenting 
 for auto-generated elements.



[jira] [Commented] (HBASE-6875) Remove commons-httpclient, -component, and up versions on other jars (remove unused repository)

2012-09-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464493#comment-13464493
 ] 

Ted Yu commented on HBASE-6875:
---

I wonder if the following build error is related to this patch (trunk build 
3383):
{code}
[ERROR] The build could not read 1 project - [Help 1]
[ERROR]
[ERROR]   The project org.apache.hbase:hbase-server:0.95-SNAPSHOT 
(https://builds.apache.org/job/HBase-TRUNK/ws/trunk/hbase-server/pom.xml) has 
2 errors
[ERROR] 'dependencies.dependency.version' for 
commons-configuration:commons-configuration:jar is missing. @ line 318, column 
17
[ERROR] 'dependencies.dependency.version' for 
commons-httpclient:commons-httpclient:jar is missing. @ line 330, column 17
{code}
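
That error usually means hbase-server still declares commons-configuration and 
commons-httpclient while no parent dependencyManagement supplies a version for 
them any longer. A hedged sketch of the two usual remedies, assuming nothing 
about the actual patch: either drop the now-orphaned dependency entries from 
hbase-server/pom.xml, or restore managed versions in the parent pom, along these 
lines (the version numbers below are illustrative assumptions, not taken from 
the patch):

```xml
<!-- Sketch only: restore managed versions in the parent pom so child modules
     may omit <version>. Version values are illustrative assumptions. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>commons-configuration</groupId>
      <artifactId>commons-configuration</artifactId>
      <version>1.6</version>
    </dependency>
    <dependency>
      <groupId>commons-httpclient</groupId>
      <artifactId>commons-httpclient</artifactId>
      <version>3.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```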




[jira] [Commented] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-27 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464496#comment-13464496
 ] 

Lars Hofhansl commented on HBASE-6871:
--

Patch looks good (as far as I can tell, I'll trust Mikhail on the initial 
version).
I'll make a 0.94 patch tomorrow.

 HFileBlockIndex Write Error BlockIndex in HFile V2
 --

 Key: HBASE-6871
 URL: https://issues.apache.org/jira/browse/HBASE-6871
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.1
 Environment: redhat 5u4
Reporter: Fenng Wang
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: 428a400628ae412ca45d39fce15241fd.hfile, 6871.txt, 
 787179746cc347ce9bb36f1989d17419.hfile, 
 960a026ca370464f84903ea58114bc75.hfile, 
 d0026fa8d59b4df291718f59dd145aad.hfile, D5703.1.patch, D5703.2.patch, 
 D5703.3.patch, D5703.4.patch, D5703.5.patch, hbase-6871-0.94.patch, 
 ImportHFile.java, test_hfile_block_index.sh


 After writing some data, both compaction and scan operations fail; the 
 exception message is below:
 2012-09-18 06:32:26,227 ERROR 
 org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: 
 Compaction failed 
 regionName=hfile_test,,1347778722498.d220df43fb9d8af4633bd7f547613f9e., 
 storeName=page_info, fileCount=7, fileSize=1.3m (188.0k, 188.0k, 188.0k, 
 188.0k, 188.0k, 185.8k, 223.3k), priority=9, 
 time=45826250816757428
 java.io.IOException: Could not reseek 
 StoreFileScanner[HFileScanner for reader 
 reader=hdfs://hadoopdev1.cm6:9000/hbase/hfile_test/d220df43fb9d8af4633bd7f547613f9e/page_info/b0f6118f58de47ad9d87cac438ee0895,
  compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] 
 [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
 [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], 
 firstKey=http://com.truereligionbrandjeans.www/Womens_Dresses/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/4010.html/page_info:anchor_sig/1347764439449/DeleteColumn,
  lastKey=http://com.trura.www//page_info:page_type/1347763395089/Put, 
 avgKeyLen=776, avgValueLen=4, entries=12853, length=228611, 
 cur=http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/1347764003865/Put/vlen=1/ts=0]
  to key 
 http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/OLDEST_TIMESTAMP/Minimum/vlen=0/ts=0
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:178)
 
 at 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
 
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:299)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:244)
 
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)
 
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:402)
 at 
 org.apache.hadoop.hbase.regionserver.Store.compactStore(Store.java:1570)  
   
 at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:997) 

 at 
 org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1216)
 at 
 org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:250)
 
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: Expected block type LEAF_INDEX, but got 
 INTERMEDIATE_INDEX: blockType=INTERMEDIATE_INDEX, 
 onDiskSizeWithoutHeader=8514, uncompressedSizeWithoutHeader=131837, 
 prevBlockOffset=-1, 
 dataBeginsWith=\x00\x00\x00\x9B\x00\x00\x00\x00\x00\x00\x03#\x00\x00\x050\x00\x00\x08\xB7\x00\x00\x0Cr\x00\x00\x0F\xFA\x00\x00\x120,
  fileOffset=218942at 
 

[jira] [Commented] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464501#comment-13464501
 ] 

Ted Yu commented on HBASE-6871:
---

I assume this bug needs to be fixed in 0.92 branch as well.

 

[jira] [Commented] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-27 Thread Fenng Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464533#comment-13464533
 ] 

Fenng Wang commented on HBASE-6871:
---

[~mikhail]: Your explanation is helpful to me. My patch only solves this 
particular issue; the difference between the root size and non-root size of a 
BlockIndexChunk could trigger this bug again, so I will use the new patch to 
resolve my issue completely. Thank you!
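
For readers following the stack trace, here is a hedged, simplified sketch 
(names and structure assumed, not the actual HBase source) of the kind of 
defensive check in HFileReaderV2.validateBlockType that produces the exception 
quoted above: the index hands the reader an offset, and if the index was 
written incorrectly, the block found at that offset has the wrong type.

```java
import java.io.IOException;

// Simplified sketch, not the actual HBase source: the defensive check that
// yields "Expected block type LEAF_INDEX, but got INTERMEDIATE_INDEX" when a
// mis-written index points the reader at a block of the wrong type.
public class BlockTypeCheck {
    enum BlockType { DATA, LEAF_INDEX, INTERMEDIATE_INDEX, ROOT_INDEX }

    static void validateBlockType(BlockType actual, BlockType expected)
            throws IOException {
        if (actual != expected) {
            throw new IOException(
                "Expected block type " + expected + ", but got " + actual);
        }
    }

    public static void main(String[] args) {
        try {
            // A leaf-index read lands on an intermediate-index block.
            validateBlockType(BlockType.INTERMEDIATE_INDEX, BlockType.LEAF_INDEX);
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```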

 

[jira] [Updated] (HBASE-6805) Extend co-processor framework to provide observers for filter operations

2012-09-27 Thread Cheng Hao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cheng Hao updated HBASE-6805:
-

Attachment: extend_coprocessor.patch

Please check the attached patch. Hope it makes more sense.

 Extend co-processor framework to provide observers for filter operations
 

 Key: HBASE-6805
 URL: https://issues.apache.org/jira/browse/HBASE-6805
 Project: HBase
  Issue Type: Sub-task
  Components: Coprocessors
Affects Versions: 0.96.0
Reporter: Jason Dai
 Attachments: extend_coprocessor.patch


 There are several filter operations (e.g., filterKeyValue, filterRow, 
 transform, etc.) at the region server side that either exclude KVs from the 
 returned results, or transform the returned KV. We need to provide observers 
 (e.g., preFilterKeyValue and postFilterKeyValue) for these operations in the 
 same way as the observers for other data access operations (e.g., preGet and 
 postGet). This extension is needed to support DOT (e.g., extracting 
 individual fields from the document in the observers before passing them to 
 the related filter operations).



[jira] [Commented] (HBASE-6679) RegionServer aborts due to race between compaction and split

2012-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464631#comment-13464631
 ] 

Hudson commented on HBASE-6679:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #195 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/195/])
HBASE-6679 RegionServer aborts due to race between compaction and split 
(Revision 1390781)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java


 RegionServer aborts due to race between compaction and split
 

 Key: HBASE-6679
 URL: https://issues.apache.org/jira/browse/HBASE-6679
 Project: HBase
  Issue Type: Bug
Reporter: Devaraj Das
Assignee: Devaraj Das
 Fix For: 0.92.3, 0.94.3, 0.96.0

 Attachments: 6679-1.094.patch, 6679-1.patch, 
 rs-crash-parallel-compact-split.log


 In our nightlies, we have seen RS aborts due to compaction and split racing. 
 The original parent file gets deleted after the compaction, and hence the 
 daughters don't find the parent data file. The RS kills itself when this 
 happens. Will attach a snippet of the relevant RS logs.



[jira] [Commented] (HBASE-6875) Remove commons-httpclient, -component, and up versions on other jars (remove unused repository)

2012-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464632#comment-13464632
 ] 

Hudson commented on HBASE-6875:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #195 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/195/])
HBASE-6875 Remove commons-httpclient, -component, and up versions on other 
jars (remove unused repository) (Revision 1390858)

 Result = FAILURE
stack : 
Files : 
* /hbase/trunk/pom.xml





[jira] [Commented] (HBASE-6876) Clean up WARNs and log messages around startup

2012-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464633#comment-13464633
 ] 

Hudson commented on HBASE-6876:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #195 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/195/])
HBASE-6876 Clean up WARNs and log messages around startup; REAPPLY 
(Revision 1390848)
HBASE-6876 Clean up WARNs and log messages around startup; REVERT OF OVERCOMMIT 
(Revision 1390847)
HBASE-6876 Clean up WARNs and log messages around startup (Revision 1390846)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcEngine.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java

stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcEngine.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java

stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcEngine.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java





[jira] [Commented] (HBASE-6784) TestCoprocessorScanPolicy is sometimes flaky when run locally

2012-09-27 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464802#comment-13464802
 ] 

ramkrishna.s.vasudevan commented on HBASE-6784:
---

@Lars
I was thinking that the millisecond diff was not working fine on Windows, seeing 
HBASE-6833.  I have a question here: why has it never failed in Jenkins, yet it 
frequently fails when run on Windows?  Just wanted to know the reason for the 
delay that happens in the timestamp updating.  Thanks Lars.

 TestCoprocessorScanPolicy is sometimes flaky when run locally
 -

 Key: HBASE-6784
 URL: https://issues.apache.org/jira/browse/HBASE-6784
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.94.2, 0.96.0

 Attachments: 6784.txt


 The problem is not seen in the Jenkins build.  
 When we run TestCoprocessorScanPolicy.testBaseCases locally or in our 
 internal Jenkins, we tend to get random failures.  The reason is that the 2 puts 
 we do here sometimes get the same timestamp.  This leads to 
 improper scan results, as the version check tends to skip one of the rows, 
 seeing the timestamps to be the same. Marking this as minor.  As we are trying to 
 solve testcase-related failures, just raising this in case we need to resolve 
 this also.
 For eg,
 Both the puts are getting the time
 {code}
 time 1347635287360
 time 1347635287360
 {code}
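One way to make such a test deterministic is to hand out timestamps that are guaranteed to be strictly increasing even when the OS clock has coarse resolution. The sketch below is purely illustrative (it is not the code from the attached 6784.txt patch, and the `MonotonicClock` name is hypothetical):

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative helper: returns strictly increasing timestamps even when two
// back-to-back reads of System.currentTimeMillis() yield the same millisecond,
// which is the collision described in this report.
public class MonotonicClock {
    private static final AtomicLong last = new AtomicLong();

    public static long nextTimestamp() {
        while (true) {
            long now = System.currentTimeMillis();
            long prev = last.get();
            long next = Math.max(now, prev + 1);  // bump if the clock did not advance
            if (last.compareAndSet(prev, next)) {
                return next;
            }
        }
    }

    public static void main(String[] args) {
        long t1 = nextTimestamp();
        long t2 = nextTimestamp();
        // Unlike two raw currentTimeMillis() calls, these can never collide.
        System.out.println(t2 > t1);  // prints "true"
    }
}
```

Setting explicit timestamps on the two Puts (from such a helper) removes the dependence on timer resolution, on Windows or otherwise.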

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6784) TestCoprocessorScanPolicy is sometimes flaky when run locally

2012-09-27 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464828#comment-13464828
 ] 

Lars Hofhansl commented on HBASE-6784:
--

@Ram:
Most Unixes have good timers (I know Linux does), so it is rare that the two 
Puts here receive the same timestamp (but it is still possible, so the test was 
still bad).

The last time I played with timers on Windows I found that the timer resolution 
there is about 10ms (but that was many years ago and might be better now).


 TestCoprocessorScanPolicy is sometimes flaky when run locally
 -

 Key: HBASE-6784
 URL: https://issues.apache.org/jira/browse/HBASE-6784
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.94.2, 0.96.0

 Attachments: 6784.txt


 The problem is not seen in the jenkins build.  
 When we run TestCoprocessorScanPolicy.testBaseCases locally or on our 
 internal jenkins we tend to get random failures.  The reason is that the two puts 
 we do here sometimes get the same timestamp.  This leads to improper scan 
 results, as the version check tends to skip one of the rows on seeing the 
 timestamp to be the same.  Marking this as minor; as we are trying to solve 
 testcase-related failures, just raising this in case we need to resolve 
 this also.
 For eg,
 Both the puts are getting the time
 {code}
 time 1347635287360
 time 1347635287360
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6853) IllegalArgumentException is thrown when an empty region is split.

2012-09-27 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-6853:
--

Status: Patch Available  (was: Open)

Hadoop QA trial

 IllegalArgumentException is thrown when an empty region is split.
 --

 Key: HBASE-6853
 URL: https://issues.apache.org/jira/browse/HBASE-6853
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.1, 0.92.1
Reporter: ramkrishna.s.vasudevan
 Attachments: HBASE-6853_2_splitsuccess.patch, 
 HBASE-6853_splitfailure.patch


 This is w.r.t a mail sent in the dev mail list.
 Empty region split should be handled gracefully.  Either we should not allow 
 the split to happen if we know that the region is empty or we should allow 
 the split to happen by setting the no of threads to the thread pool executor 
 as 1.
 {code}
 int nbFiles = hstoreFilesToSplit.size();
 ThreadFactoryBuilder builder = new ThreadFactoryBuilder();
 builder.setNameFormat("StoreFileSplitter-%1$d");
 ThreadFactory factory = builder.build();
 ThreadPoolExecutor threadPool =
   (ThreadPoolExecutor) Executors.newFixedThreadPool(nbFiles, factory);
 List<Future<Void>> futures = new ArrayList<Future<Void>>(nbFiles);
 {code}
 Here nbFiles needs to be a positive (non-zero) value.
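The failure mode is easy to reproduce in isolation: `Executors.newFixedThreadPool` rejects a pool size of zero, so an empty region (zero store files) throws IllegalArgumentException. A minimal sketch of the guard suggested above (the `SplitPoolSizing`/`poolFor` names are hypothetical, not the committed fix):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch: Executors.newFixedThreadPool requires nThreads >= 1,
// so the pool size must be clamped when a region has no store files.
public class SplitPoolSizing {
    static ExecutorService poolFor(int nbFiles) {
        // Clamp to 1 so an empty region still gets a (trivially idle) pool.
        return Executors.newFixedThreadPool(Math.max(1, nbFiles));
    }

    public static void main(String[] args) {
        boolean threw = false;
        try {
            Executors.newFixedThreadPool(0);  // what the unguarded code does
        } catch (IllegalArgumentException e) {
            threw = true;
        }
        System.out.println(threw);            // prints "true": the reported failure

        ExecutorService pool = poolFor(0);    // the guarded version succeeds
        System.out.println(pool != null);     // prints "true"
        pool.shutdown();
    }
}
```

The alternative mentioned in the description, refusing the split entirely for an empty region, avoids creating the pool at all.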
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HBASE-6854) Deletion of SPLITTING node on split rollback should clear the region from RIT

2012-09-27 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan reassigned HBASE-6854:
-

Assignee: ramkrishna.s.vasudevan

 Deletion of SPLITTING node on split rollback should clear the region from RIT
 -

 Key: HBASE-6854
 URL: https://issues.apache.org/jira/browse/HBASE-6854
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.94.3

 Attachments: HBASE-6854.patch, HBASE-6854.patch


 If a failure happens in the split before OFFLINING_PARENT, we tend to roll back 
 the split, including deleting the znodes created.
 On deletion of the RS_ZK_SPLITTING node we get a callback but are not 
 removing the region from RIT. We need to remove it from RIT; the SSH logic is in 
 any case well guarded should the delete event come due to an RS-down scenario.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6854) Deletion of SPLITTING node on split rollback should clear the region from RIT

2012-09-27 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-6854:
--

Attachment: HBASE-6854.patch

This is what I committed.

 Deletion of SPLITTING node on split rollback should clear the region from RIT
 -

 Key: HBASE-6854
 URL: https://issues.apache.org/jira/browse/HBASE-6854
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.94.3

 Attachments: HBASE-6854.patch, HBASE-6854.patch


 If a failure happens in the split before OFFLINING_PARENT, we tend to roll back 
 the split, including deleting the znodes created.
 On deletion of the RS_ZK_SPLITTING node we get a callback but are not 
 removing the region from RIT. We need to remove it from RIT; the SSH logic is in 
 any case well guarded should the delete event come due to an RS-down scenario.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-6854) Deletion of SPLITTING node on split rollback should clear the region from RIT

2012-09-27 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan resolved HBASE-6854.
---

  Resolution: Fixed
Hadoop Flags: Reviewed

Committed to 0.94.
Thanks for the review Stack and Bijieshan

 Deletion of SPLITTING node on split rollback should clear the region from RIT
 -

 Key: HBASE-6854
 URL: https://issues.apache.org/jira/browse/HBASE-6854
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.94.3

 Attachments: HBASE-6854.patch, HBASE-6854.patch


 If a failure happens in the split before OFFLINING_PARENT, we tend to roll back 
 the split, including deleting the znodes created.
 On deletion of the RS_ZK_SPLITTING node we get a callback but are not 
 removing the region from RIT. We need to remove it from RIT; the SSH logic is in 
 any case well guarded should the delete event come due to an RS-down scenario.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6784) TestCoprocessorScanPolicy is sometimes flaky when run locally

2012-09-27 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464844#comment-13464844
 ] 

ramkrishna.s.vasudevan commented on HBASE-6784:
---

Ok Thank you :)

 TestCoprocessorScanPolicy is sometimes flaky when run locally
 -

 Key: HBASE-6784
 URL: https://issues.apache.org/jira/browse/HBASE-6784
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.94.2, 0.96.0

 Attachments: 6784.txt


 The problem is not seen in the jenkins build.  
 When we run TestCoprocessorScanPolicy.testBaseCases locally or on our 
 internal jenkins we tend to get random failures.  The reason is that the two puts 
 we do here sometimes get the same timestamp.  This leads to improper scan 
 results, as the version check tends to skip one of the rows on seeing the 
 timestamp to be the same.  Marking this as minor; as we are trying to solve 
 testcase-related failures, just raising this in case we need to resolve 
 this also.
 For eg,
 Both the puts are getting the time
 {code}
 time 1347635287360
 time 1347635287360
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6880) Failure in assigning root causes system hang

2012-09-27 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464865#comment-13464865
 ] 

Jimmy Xiang commented on HBASE-6880:


@Ram, are you going to fix this in HBASE-6698?  If so, we can close this as a 
duplicate.

HBASE-6881 is just partially fixing this issue, by making it happen a 
little less often.

I was thinking we should let assignRoot return something to indicate if it is 
successful.  If not,
there is no point to wait for it any more.  We can retry several times.  If it 
still doesn't
work, then abort the master, instead of hanging there forever. No retry and 
fail fast is also ok
with me, which may be cleaner in some sense.

Even if assignRoot does return something saying the assign is going on, it may not 
succeed.  So
we also need to make sure the timeout monitor can fix it.



 Failure in assigning root causes system hang
 

 Key: HBASE-6880
 URL: https://issues.apache.org/jira/browse/HBASE-6880
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang

 In looking into a TestReplication failure, I found out that assignRoot can 
 sometimes fail, for example when the RS is not serving traffic yet.  In this 
 case, the master will keep waiting for root to be available, which might never 
 happen.
  
 Need to gracefully terminate the master if root is not assigned properly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HBASE-6880) Failure in assigning root causes system hang

2012-09-27 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464865#comment-13464865
 ] 

Jimmy Xiang edited comment on HBASE-6880 at 9/28/12 3:45 AM:
-

@Ram, are you going to fix this in HBASE-6698?  If so, we can close this as a 
duplicate.

HBASE-6881 is just partially fixing this issue, by making it happen a 
little less often.

I was thinking we should let assignRoot return something to indicate if it is 
successful.  If not,
there is no point to wait for it any more.  We can retry several times.  If it 
still doesn't
work, then abort the master, instead of hanging there forever. No retry and 
fail fast is also ok
with me, which may be cleaner in some sense.

Even if assignRoot does return something saying the assign is going on, it may not 
succeed.  So
we also need to make sure the timeout monitor can fix it.



  was (Author: jxiang):
@Ram, are you going to fix this in HBASE-6698?  If so, we can close this as 
a duplicate.

HBASE-6881 is just partially fixing this issue, by making the issue happens a 
little less.

I was thinking we should let assignRoot return something to indicate if it is 
successful.  If not,
there is no point to wait for it any more.  We can retry several times.  If it 
still doesn't
work, then abort the master, instead of hanging there forever. No retry and 
fail fast is also ok
with me, which may be cleaner in some sense.

Even it assignRoot does return something say the assign is going on, it may not 
succeed.  So
we also need to make sure the timeout monitor can fix it.


  
 Failure in assigning root causes system hang
 

 Key: HBASE-6880
 URL: https://issues.apache.org/jira/browse/HBASE-6880
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang

 In looking into a TestReplication failure, I found out that assignRoot can 
 sometimes fail, for example when the RS is not serving traffic yet.  In this 
 case, the master will keep waiting for root to be available, which might never 
 happen.
  
 Need to gracefully terminate the master if root is not assigned properly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6878) DistributedLogSplit can fail to resubmit a task done if there is an exception during the log archiving

2012-09-27 Thread Prakash Khemani (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464874#comment-13464874
 ] 

Prakash Khemani commented on HBASE-6878:


The logic to indefinitely retry a failing log-splitting task is not inside 
SplitLogManager. SplitLogManager will retry a task a finite number of times. If 
that fails, it is the outer Master layers that retry indefinitely. The reason 
for this behavior is to allow building tools around distributed log splitting: if 
distributed log splitting were being used by a tool, you wouldn't want it 
to retry indefinitely.

So the behavior outlined in this bug report is correct. But this behavior 
shouldn't lead to any bug.

(There are only a few places in SplitLogManager where it resubmits the task 
forcefully, disregarding the retry limit. I think the only two cases are when a 
region server (splitlogworker) dies and when a splitlogworker resigns from 
the task (i.e. gives up the task even though there were no failures))

 DistributedLogSplit can fail to resubmit a task done if there is an exception 
 during the log archiving
 --

 Key: HBASE-6878
 URL: https://issues.apache.org/jira/browse/HBASE-6878
 Project: HBase
  Issue Type: Bug
  Components: master
Reporter: nkeywal
Priority: Minor

 The code in SplitLogManager#getDataSetWatchSuccess is:
 {code}
 if (slt.isDone()) {
   LOG.info("task " + path + " entered state: " + slt.toString());
   if (taskFinisher != null && !ZKSplitLog.isRescanNode(watcher, path)) {
     if (taskFinisher.finish(slt.getServerName(),
         ZKSplitLog.getFileName(path)) == Status.DONE) {
       setDone(path, SUCCESS);
     } else {
       resubmitOrFail(path, CHECK);
     }
   } else {
     setDone(path, SUCCESS);
   }
 {code}
   resubmitOrFail(path, CHECK);
 should be 
   resubmitOrFail(path, FORCE);
 Without it, the task won't be resubmitted if the delay is not reached, and 
 the task will be marked as failed.
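The CHECK-versus-FORCE distinction the report turns on can be modeled as a toy decision function (the names are borrowed from the report but the logic here is simplified and hypothetical, not SplitLogManager's actual implementation): a CHECK resubmit respects the retry limit and the minimum delay between attempts, while FORCE skips both guards.

```java
// Toy model of the resubmit decision described in this report: with CHECK,
// a failed finish() whose delay has not yet elapsed is refused and the task
// ends up marked failed; FORCE would resubmit it regardless.
public class ResubmitPolicy {
    enum Directive { CHECK, FORCE }

    static boolean shouldResubmit(Directive d, int attempts, int retryLimit,
                                  long msSinceLastAttempt, long minDelayMs) {
        if (d == Directive.FORCE) {
            return true;  // forced resubmits disregard the guards
        }
        return attempts < retryLimit && msSinceLastAttempt >= minDelayMs;
    }

    public static void main(String[] args) {
        // Delay not yet reached: CHECK refuses, FORCE resubmits anyway.
        System.out.println(shouldResubmit(Directive.CHECK, 1, 3, 100, 1000));  // prints "false"
        System.out.println(shouldResubmit(Directive.FORCE, 1, 3, 100, 1000));  // prints "true"
    }
}
```

This matches the later comment in the thread that forced resubmits are reserved for cases such as a dead or resigning splitlogworker.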

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6881) All regionservers are marked offline even there is still one up

2012-09-27 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464875#comment-13464875
 ] 

Jimmy Xiang commented on HBASE-6881:


Really good question.  It is great to work with you guys and the community.
My understanding is that you three all have concerns about the removed code.

I think it is ok to remove it.  As stack pointed out, I think #1498 already 
transitions the state to offline.
handleRegion cannot transition the state now since we already hold the lock 
for this region, and will keep
the lock till the assign returns.

I will look into it more and think about whether there is a better way to handle 
RegionAlreadyInTransition.



 All regionservers are marked offline even there is still one up
 ---

 Key: HBASE-6881
 URL: https://issues.apache.org/jira/browse/HBASE-6881
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Attachments: trunk-6881.patch


 {noformat}
 +RegionPlan newPlan = plan;
 +if (!regionAlreadyInTransitionException) {
 +  // Force a new plan and reassign. Will return null if no servers.
 +  newPlan = getRegionPlan(state, plan.getDestination(), true);
 +}
 +if (newPlan == null) {
this.timeoutMonitor.setAllRegionServersOffline(true);
LOG.warn("Unable to find a viable location to assign region " +
  state.getRegion().getRegionNameAsString());
 {noformat}
 Here, when newPlan is null, plan.getDestination() could actually still be up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-6890) Colorize test-patch results that goes to JIRA as a comment

2012-09-27 Thread Harsh J (JIRA)
Harsh J created HBASE-6890:
--

 Summary: Colorize test-patch results that goes to JIRA as a comment
 Key: HBASE-6890
 URL: https://issues.apache.org/jira/browse/HBASE-6890
 Project: HBase
  Issue Type: Improvement
  Components: build
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Attachments: HBASE-6890.patch

If HBase wishes its JIRAs to have colorized results similar to those now observed 
on HADOOP/HDFS/MAPREDUCE, like for example HADOOP-8845, then please feel 
free to use the attached patch and have fun :)

On the HADOOP side the patch went in via HADOOP-8838 and HADOOP-8840.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6890) Colorize test-patch results that goes to JIRA as a comment

2012-09-27 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HBASE-6890:
---

Attachment: HBASE-6890.patch

 Colorize test-patch results that goes to JIRA as a comment
 --

 Key: HBASE-6890
 URL: https://issues.apache.org/jira/browse/HBASE-6890
 Project: HBase
  Issue Type: Improvement
  Components: build
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Attachments: HBASE-6890.patch


 If HBase wishes its JIRAs to have colorized results similar to those now observed 
 on HADOOP/HDFS/MAPREDUCE, like for example HADOOP-8845, then please feel 
 free to use the attached patch and have fun :)
 On the HADOOP side the patch went in via HADOOP-8838 and HADOOP-8840.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6890) Colorize test-patch results that goes to JIRA as a comment

2012-09-27 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HBASE-6890:
---

Status: Patch Available  (was: Open)

 Colorize test-patch results that goes to JIRA as a comment
 --

 Key: HBASE-6890
 URL: https://issues.apache.org/jira/browse/HBASE-6890
 Project: HBase
  Issue Type: Improvement
  Components: build
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Attachments: HBASE-6890.patch


 If HBase wishes its JIRAs to have colorized results similar to those now observed 
 on HADOOP/HDFS/MAPREDUCE, like for example HADOOP-8845, then please feel 
 free to use the attached patch and have fun :)
 On the HADOOP side the patch went in via HADOOP-8838 and HADOOP-8840.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6853) IllegalArgumentException is thrown when an empty region is split.

2012-09-27 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-6853:
--

Attachment: HBASE-6853.patch

Patch for 0.94.  Wanted to rebase the patch for trunk but am getting some 
compilation errors.  Will do it after resolving them.

 IllegalArgumentException is thrown when an empty region is split.
 --

 Key: HBASE-6853
 URL: https://issues.apache.org/jira/browse/HBASE-6853
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.1, 0.94.1
Reporter: ramkrishna.s.vasudevan
 Attachments: HBASE-6853_2_splitsuccess.patch, HBASE-6853.patch, 
 HBASE-6853_splitfailure.patch


 This is w.r.t a mail sent in the dev mail list.
 Empty region split should be handled gracefully.  Either we should not allow 
 the split to happen if we know that the region is empty or we should allow 
 the split to happen by setting the no of threads to the thread pool executor 
 as 1.
 {code}
 int nbFiles = hstoreFilesToSplit.size();
 ThreadFactoryBuilder builder = new ThreadFactoryBuilder();
 builder.setNameFormat("StoreFileSplitter-%1$d");
 ThreadFactory factory = builder.build();
 ThreadPoolExecutor threadPool =
   (ThreadPoolExecutor) Executors.newFixedThreadPool(nbFiles, factory);
 List<Future<Void>> futures = new ArrayList<Future<Void>>(nbFiles);
 {code}
 Here the nbFiles needs to be a non zero positive value.
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6854) Deletion of SPLITTING node on split rollback should clear the region from RIT

2012-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464892#comment-13464892
 ] 

Hudson commented on HBASE-6854:
---

Integrated in HBase-0.94 #492 (See 
[https://builds.apache.org/job/HBase-0.94/492/])
HBASE-6854 Deletion of SPLITTING node on split rollback should clear the 
region from RIT (Ram)

Submitted by:Ram
Reviewed by:Bijieshan, Stack (Revision 1391074)

 Result = FAILURE
ramkrishna : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java


 Deletion of SPLITTING node on split rollback should clear the region from RIT
 -

 Key: HBASE-6854
 URL: https://issues.apache.org/jira/browse/HBASE-6854
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.94.3

 Attachments: HBASE-6854.patch, HBASE-6854.patch


 If a failure happens in the split before OFFLINING_PARENT, we tend to roll back 
 the split, including deleting the znodes created.
 On deletion of the RS_ZK_SPLITTING node we get a callback but are not 
 removing the region from RIT. We need to remove it from RIT; the SSH logic is in 
 any case well guarded should the delete event come due to an RS-down scenario.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6880) Failure in assigning root causes system hang

2012-09-27 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464907#comment-13464907
 ] 

ramkrishna.s.vasudevan commented on HBASE-6880:
---

@Jimmy
I did not raise any issue to fix that.  So let this JIRA be there to fix this.
bq.HBASE-6881 is just partially fixing this issue, by making the issue happens 
a little less.
Yes, right

Seeing the HMaster startup code and the RS startup code, retry should be the 
ideal way.  
If we add retry logic in the assign code on ServerNotRunningYetException, will it 
affect normal regions also?
 

 Failure in assigning root causes system hang
 

 Key: HBASE-6880
 URL: https://issues.apache.org/jira/browse/HBASE-6880
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang

 In looking into a TestReplication failure, I found out that assignRoot can 
 sometimes fail, for example when the RS is not serving traffic yet.  In this 
 case, the master will keep waiting for root to be available, which might never 
 happen.
  
 Need to gracefully terminate the master if root is not assigned properly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HBASE-6880) Failure in assigning root causes system hang

2012-09-27 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464907#comment-13464907
 ] 

ramkrishna.s.vasudevan edited comment on HBASE-6880 at 9/28/12 4:39 AM:


@Jimmy
I did not raise any issue to fix that.  So let this JIRA be there to fix this.
bq.HBASE-6881 is just partially fixing this issue, by making the issue happens 
a little less.
Yes, right

Seeing the HMaster startup code and the RS startup code, retry should be the 
ideal way.  
If we add retry logic in the assign code on ServerNotRunningYetException, will it 
affect normal regions also?
 

  was (Author: ram_krish):
@Jimmy
I did not raise any issue to fix that.  So let this JIRA be there to fix this.
bq.HBASE-6881 is just partially fixing this issue, by making the issue happens 
a little less.
Yes, right

Seeing the HMaster startup code and the RS start up code, retry should be the 
ideal way.  
Adding a retry logic in the assign code on ServerNotRunningYetException will be 
it affect normal regions also?
 
  
 Failure in assigning root causes system hang
 

 Key: HBASE-6880
 URL: https://issues.apache.org/jira/browse/HBASE-6880
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang

 In looking into a TestReplication failure, I found out that assignRoot can 
 sometimes fail, for example when the RS is not serving traffic yet.  In this 
 case, the master will keep waiting for root to be available, which might never 
 happen.
  
 Need to gracefully terminate the master if root is not assigned properly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6611) Forcing region state offline cause double assignment

2012-09-27 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464908#comment-13464908
 ] 

ramkrishna.s.vasudevan commented on HBASE-6611:
---

@Jimmy
Will review this tomorrow or over the weekend.  Nice work Jimmy.  

 Forcing region state offline cause double assignment
 

 Key: HBASE-6611
 URL: https://issues.apache.org/jira/browse/HBASE-6611
 Project: HBase
  Issue Type: Bug
  Components: master
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0


 In assigning a region, assignment manager forces the region state offline if 
 it is not. This could cause double assignment, for example, if the region is 
 already assigned and in the Open state, you should not just change its state 
 to Offline, and assign it again.
 I think this could be the root cause for all double assignments IF the region 
 state is reliable.
 After this loophole is closed, TestHBaseFsck should come up a different way 
 to create some assignment inconsistencies, for example, calling region server 
 to open a region directly. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6875) Remove commons-httpclient, -component, and up versions on other jars (remove unused repository)

2012-09-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464913#comment-13464913
 ] 

stack commented on HBASE-6875:
--

I committed the addendum.

 Remove commons-httpclient, -component, and up versions on other jars (remove 
 unused repository)
 ---

 Key: HBASE-6875
 URL: https://issues.apache.org/jira/browse/HBASE-6875
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 0.96.0
Reporter: stack
Assignee: stack
 Fix For: 0.96.0

 Attachments: addendum.txt, pom.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6875) Remove commons-httpclient, -component, and up versions on other jars (remove unused repository)

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6875:
-

Attachment: addendum.txt

[~ted_yu] It is.  I messed up the application last night.  Fixing w/ this addendum.

 Remove commons-httpclient, -component, and up versions on other jars (remove 
 unused repository)
 ---

 Key: HBASE-6875
 URL: https://issues.apache.org/jira/browse/HBASE-6875
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 0.96.0
Reporter: stack
Assignee: stack
 Fix For: 0.96.0

 Attachments: addendum.txt, pom.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6853) IllegalArgumentException is thrown when an empty region is split.

2012-09-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464914#comment-13464914
 ] 

stack commented on HBASE-6853:
--

[~ram_krish] Sorry Ram.  I broke the build.  Fixed now.

 IllegalArgumentException is thrown when an empty region is split.
 --

 Key: HBASE-6853
 URL: https://issues.apache.org/jira/browse/HBASE-6853
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.1, 0.94.1
Reporter: ramkrishna.s.vasudevan
 Attachments: HBASE-6853_2_splitsuccess.patch, HBASE-6853.patch, 
 HBASE-6853_splitfailure.patch


 This is w.r.t a mail sent in the dev mail list.
 Empty region split should be handled gracefully.  Either we should not allow 
 the split to happen if we know that the region is empty or we should allow 
 the split to happen by setting the no of threads to the thread pool executor 
 as 1.
 {code}
 int nbFiles = hstoreFilesToSplit.size();
 ThreadFactoryBuilder builder = new ThreadFactoryBuilder();
 builder.setNameFormat("StoreFileSplitter-%1$d");
 ThreadFactory factory = builder.build();
 ThreadPoolExecutor threadPool =
   (ThreadPoolExecutor) Executors.newFixedThreadPool(nbFiles, factory);
 List<Future<Void>> futures = new ArrayList<Future<Void>>(nbFiles);
 {code}
 Here nbFiles must be a positive, non-zero value; Executors.newFixedThreadPool 
 throws an IllegalArgumentException when given 0.
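A minimal self-contained sketch of why the block above fails on an empty region, and of the one-line guard the description suggests (the class and helper names here are hypothetical, not from the patch):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class SplitPoolGuard {
    // Hypothetical helper: newFixedThreadPool requires nThreads >= 1,
    // so clamp the store-file count before sizing the pool.
    static int poolSize(int nbFiles) {
        return Math.max(nbFiles, 1);
    }

    public static void main(String[] args) {
        // Executors.newFixedThreadPool(0) rejects a zero-sized pool.
        boolean threw = false;
        try {
            Executors.newFixedThreadPool(0);
        } catch (IllegalArgumentException e) {
            threw = true;
        }
        System.out.println("zero threads rejected: " + threw);

        // With the guard, an empty region (nbFiles == 0) still gets a valid pool.
        ThreadPoolExecutor pool =
            (ThreadPoolExecutor) Executors.newFixedThreadPool(poolSize(0));
        System.out.println("guarded pool size: " + pool.getCorePoolSize());
        pool.shutdown();
    }
}
```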
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464931#comment-13464931
 ] 

stack commented on HBASE-6871:
--

[~feng wang] Does this patch fix your issue?  If you run your script that 
manufactures this condition, with this patch in place, do we no longer throw 
the exception reported?   Thanks.

 HFileBlockIndex Write Error BlockIndex in HFile V2
 --

 Key: HBASE-6871
 URL: https://issues.apache.org/jira/browse/HBASE-6871
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.1
 Environment: redhat 5u4
Reporter: Fenng Wang
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: 428a400628ae412ca45d39fce15241fd.hfile, 6871.txt, 
 787179746cc347ce9bb36f1989d17419.hfile, 
 960a026ca370464f84903ea58114bc75.hfile, 
 d0026fa8d59b4df291718f59dd145aad.hfile, D5703.1.patch, D5703.2.patch, 
 D5703.3.patch, D5703.4.patch, D5703.5.patch, hbase-6871-0.94.patch, 
 ImportHFile.java, test_hfile_block_index.sh


 After writing some data, compaction and scan operations both fail; the 
 exception message is below:
 2012-09-18 06:32:26,227 ERROR 
 org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: 
 Compaction failed 
 regionName=hfile_test,,1347778722498.d220df43fb9d8af4633bd7f547613f9e., 
 storeName=page_info, fileCount=7, fileSize=1.3m (188.0k, 188.0k, 188.0k, 
 188.0k, 188.0k, 185.8k, 223.3k), priority=9, 
 time=45826250816757428
 java.io.IOException: Could not reseek 
 StoreFileScanner[HFileScanner for reader 
 reader=hdfs://hadoopdev1.cm6:9000/hbase/hfile_test/d220df43fb9d8af4633bd7f547613f9e/page_info/b0f6118f58de47ad9d87cac438ee0895,
  compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] 
 [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
 [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], 
 firstKey=http://com.truereligionbrandjeans.www/Womens_Dresses/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/4010.html/page_info:anchor_sig/1347764439449/DeleteColumn,
  lastKey=http://com.trura.www//page_info:page_type/1347763395089/Put, 
 avgKeyLen=776, avgValueLen=4, entries=12853, length=228611, 
 cur=http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/1347764003865/Put/vlen=1/ts=0]
  to key 
 http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/OLDEST_TIMESTAMP/Minimum/vlen=0/ts=0
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:178)
 
 at 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
 
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:299)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:244)
 
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)
 
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:402)
 at 
 org.apache.hadoop.hbase.regionserver.Store.compactStore(Store.java:1570)  
   
 at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:997) 

 at 
 org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1216)
 at 
 org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:250)
 
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: Expected block type LEAF_INDEX, but got 
 INTERMEDIATE_INDEX: blockType=INTERMEDIATE_INDEX, 
 onDiskSizeWithoutHeader=8514, uncompressedSizeWithoutHeader=131837, 
 prevBlockOffset=-1, 
 dataBeginsWith=\x00\x00\x00\x9B\x00\x00\x00\x00\x00\x00\x03#\x00\x00\x050\x00\x00\x08\xB7\x00\x00\x0Cr\x00\x00\x0F\xFA\x00\x00\x120,
  

[jira] [Commented] (HBASE-6880) Failure in assigning root causes system hang

2012-09-27 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464934#comment-13464934
 ] 

Jimmy Xiang commented on HBASE-6880:


Cool.  As to ServerNotRunningYetException, HBASE-6881 depends on the normal 
retry mechanism.
We can discuss it more in HBASE-6881.

 Failure in assigning root causes system hang
 

 Key: HBASE-6880
 URL: https://issues.apache.org/jira/browse/HBASE-6880
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang

 In looking into a TestReplication failure, I found that assignRoot can 
 sometimes fail, for example when the RS is not yet serving traffic.  In this 
 case, the master will keep waiting for root to become available, which may 
 never happen.
  
 We need to gracefully terminate the master if root is not assigned properly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6439) Ignore .archive directory as a table

2012-09-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464951#comment-13464951
 ] 

stack commented on HBASE-6439:
--

bq. Yeah, but its technically configurable, so that's not so nice...

Let's undo the fact that it's configurable.

[~sameerv] Want to add a link to your review here?  Thanks.

 Ignore .archive directory as a table
 

 Key: HBASE-6439
 URL: https://issues.apache.org/jira/browse/HBASE-6439
 Project: HBase
  Issue Type: Bug
  Components: io, regionserver
Affects Versions: 0.96.0
Reporter: Jesse Yates
Assignee: Sameer Vaishampayan
  Labels: newbie

 From a recent test run:
 {quote}
 2012-07-22 02:27:30,699 WARN  [IPC Server handler 0 on 47087] 
 util.FSTableDescriptors(168): The following folder is in HBase's root 
 directory and doesn't contain a table descriptor, do consider deleting it: 
 .archive
 {quote}
 With the addition of HBASE-5547, table-level folders are no longer all table 
 folders. FSTableDescriptors then needs to have a 'gold list' that we can 
 update with directories that aren't tables, so this kind of thing doesn't 
 show up in the logs.
 Currently, we have the following block:
 {quote}
 invocations++;
 if (HTableDescriptor.ROOT_TABLEDESC.getNameAsString().equals(tablename)) {
   cachehits++;
   return HTableDescriptor.ROOT_TABLEDESC;
 }
 if (HTableDescriptor.META_TABLEDESC.getNameAsString().equals(tablename)) {
   cachehits++;
   return HTableDescriptor.META_TABLEDESC;
 }
 {quote}
 to handle special cases, but that's a bit clunky and not clean in terms of 
 table-level directories that need to be ignored.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6829) [WINDOWS] Tests should ensure that HLog is closed

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6829:
-

Status: Patch Available  (was: Open)

Trying against hadoopqa

 [WINDOWS] Tests should ensure that HLog is closed
 -

 Key: HBASE-6829
 URL: https://issues.apache.org/jira/browse/HBASE-6829
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Attachments: hbase-6829_v1-0.94.patch, hbase-6829_v1-trunk.patch, 
 hbase-6829_v2-0.94.patch, hbase-6829_v2-trunk.patch


 TestCacheOnWriteInSchema and TestCompactSelection fail with 
 {code}
 java.io.IOException: Target HLog directory already exists: 
 ./target/test-data/2d814e66-75d3-4c1b-92c7-a49d9972e8fd/TestCacheOnWriteInSchema/logs
   at org.apache.hadoop.hbase.regionserver.wal.HLog.init(HLog.java:385)
   at org.apache.hadoop.hbase.regionserver.wal.HLog.init(HLog.java:316)
   at 
 org.apache.hadoop.hbase.regionserver.TestCacheOnWriteInSchema.setUp(TestCacheOnWriteInSchema.java:162)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6875) Remove commons-httpclient, -component, and up versions on other jars (remove unused repository)

2012-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464956#comment-13464956
 ] 

Hudson commented on HBASE-6875:
---

Integrated in HBase-TRUNK #3384 (See 
[https://builds.apache.org/job/HBase-TRUNK/3384/])
HBASE-6875 Remove commons-httpclient, -component, and up versions on other 
jars (remove unused repository) (Revision 1391136)

 Result = FAILURE
stack : 
Files : 
* /hbase/trunk/hbase-server/pom.xml


 Remove commons-httpclient, -component, and up versions on other jars (remove 
 unused repository)
 ---

 Key: HBASE-6875
 URL: https://issues.apache.org/jira/browse/HBASE-6875
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 0.96.0
Reporter: stack
Assignee: stack
 Fix For: 0.96.0

 Attachments: addendum.txt, pom.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6610) HFileLink: Hardlink alternative for snapshot restore

2012-09-27 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-6610:
---

Attachment: HBASE-6610-v8.patch

 HFileLink: Hardlink alternative for snapshot restore
 

 Key: HBASE-6610
 URL: https://issues.apache.org/jira/browse/HBASE-6610
 Project: HBase
  Issue Type: Sub-task
  Components: io
Affects Versions: 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
  Labels: snapshot
 Fix For: 0.96.0

 Attachments: HBASE-6610-v1.patch, HBASE-6610-v2.patch, 
 HBASE-6610-v3.patch, HBASE-6610-v5.patch, HBASE-6610-v6.patch, 
 HBASE-6610-v7.patch, HBASE-6610-v8.patch


 To avoid copying data during snapshot restore, we need to introduce an 
 HFileLink that allows referencing a file that can be in its original path 
 (/hbase/table/region/cf/hfile) or, if the file is archived, in the archive 
 directory (/hbase/.archive/table/region/cf/hfile).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6071) getRegionServerWithRetires, should log unsuccessful attempts and exceptions.

2012-09-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464972#comment-13464972
 ] 

stack commented on HBASE-6071:
--

Patch looks fine.  I'd change the below to throw an InterruptedIOException:

{code}
+logAndGiveUp(new IOException("Giving up after tries=" + tries, e), 
    tries, exceptions);
{code}

... but I can do that on commit.

Are you running this change?  Does it work for you?

 getRegionServerWithRetires, should log unsuccessful attempts and exceptions.
 

 Key: HBASE-6071
 URL: https://issues.apache.org/jira/browse/HBASE-6071
 Project: HBase
  Issue Type: Improvement
  Components: Client, IPC/RPC
Affects Versions: 0.92.0, 0.94.0
Reporter: Igal Shilman
Priority: Minor
  Labels: client, ipc
 Attachments: HBASE-6071.patch, HBASE-6071.v2.patch, 
 HBASE-6071.v3.patch, HBASE-6071.v4.patch, HBASE-6071.v5.patch, 
 HConnectionManager_HBASE-6071-0.90.0.patch, lease-exception.txt


 HConnectionImplementation.getRegionServerWithRetries might terminate with an 
 exception other than a DoNotRetryIOException, thus silently dropping 
 exceptions from previous attempts.
 [~ted_yu] suggested 
 ([here|http://mail-archives.apache.org/mod_mbox/hbase-user/201205.mbox/%3CCAFebPXBq9V9BVdzRTNr-MB3a1Lz78SZj6gvP6On0b%2Bajt9StAg%40mail.gmail.com%3E])
  adding a log message inside the catch block describing the exception type 
 and details.
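A minimal sketch of the logging the issue asks for, with a plain Callable standing in for the real RPC call (the class and method names are hypothetical, not HBase's actual implementation): each failed attempt is logged rather than silently dropped, and the final exception carries the last cause.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;

public class RetryWithLogging {
    // Hypothetical stand-in for getRegionServerWithRetries: retries the
    // callable, logging every failed attempt instead of dropping it, and
    // attaches the last cause when giving up.
    static <T> T callWithRetries(Callable<T> callable, int maxTries)
            throws IOException {
        List<Throwable> exceptions = new ArrayList<>();
        for (int tries = 1; tries <= maxTries; tries++) {
            try {
                return callable.call();
            } catch (Exception e) {
                exceptions.add(e);
                // Log the attempt so earlier failures are not silently lost.
                System.out.println("attempt " + tries + " failed: " + e.getMessage());
            }
        }
        throw new IOException("Giving up after tries=" + maxTries
                + ", prior exceptions=" + exceptions.size(),
                exceptions.get(exceptions.size() - 1));
    }
}
```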

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6439) Ignore .archive directory as a table

2012-09-27 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464980#comment-13464980
 ] 

Jesse Yates commented on HBASE-6439:


[~stack] +1 that sounds good

Sameer would you mind rolling that into your patch?

 Ignore .archive directory as a table
 

 Key: HBASE-6439
 URL: https://issues.apache.org/jira/browse/HBASE-6439
 Project: HBase
  Issue Type: Bug
  Components: io, regionserver
Affects Versions: 0.96.0
Reporter: Jesse Yates
Assignee: Sameer Vaishampayan
  Labels: newbie

 From a recent test run:
 {quote}
 2012-07-22 02:27:30,699 WARN  [IPC Server handler 0 on 47087] 
 util.FSTableDescriptors(168): The following folder is in HBase's root 
 directory and doesn't contain a table descriptor, do consider deleting it: 
 .archive
 {quote}
 With the addition of HBASE-5547, table-level folders are no longer all table 
 folders. FSTableDescriptors then needs to have a 'gold list' that we can 
 update with directories that aren't tables, so this kind of thing doesn't 
 show up in the logs.
 Currently, we have the following block:
 {quote}
 invocations++;
 if (HTableDescriptor.ROOT_TABLEDESC.getNameAsString().equals(tablename)) {
   cachehits++;
   return HTableDescriptor.ROOT_TABLEDESC;
 }
 if (HTableDescriptor.META_TABLEDESC.getNameAsString().equals(tablename)) {
   cachehits++;
   return HTableDescriptor.META_TABLEDESC;
 }
 {quote}
 to handle special cases, but that's a bit clunky and not clean in terms of 
 table-level directories that need to be ignored.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira



[jira] [Commented] (HBASE-6827) [WINDOWS] TestScannerTimeout fails expecting a timeout

2012-09-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464983#comment-13464983
 ] 

stack commented on HBASE-6827:
--

+1 on commit

 [WINDOWS] TestScannerTimeout fails expecting a timeout
 --

 Key: HBASE-6827
 URL: https://issues.apache.org/jira/browse/HBASE-6827
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: hbase-6827_v1-0.94.patch, hbase-6827_v1-trunk.patch


 TestScannerTimeout.test2481() fails with:
 {code}
 java.lang.AssertionError: We should be timing out
   at org.junit.Assert.fail(Assert.java:93)
   at 
 org.apache.hadoop.hbase.client.TestScannerTimeout.test2481(TestScannerTimeout.java:117)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6820) [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is closed upon shutdown()

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6820:
-

Status: Patch Available  (was: Open)

+1 on commit.  Passing by hadoopqa to make sure it doesn't break anything 
unexpectedly.

 [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is closed upon 
 shutdown()
 --

 Key: HBASE-6820
 URL: https://issues.apache.org/jira/browse/HBASE-6820
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Attachments: hbase-6820_v1-0.94.patch, hbase-6820_v1-trunk.patch


 MiniZookeeperCluster.shutdown() shuts down the ZookeeperServer and 
 NIOServerCnxnFactory. However, MiniZookeeperCluster uses a deprecated 
 ZookeeperServer constructor, which in turn constructs its own FileTxnSnapLog, 
 and ZKDatabase. Since ZookeeperServer.shutdown() does not close() the 
 ZKDatabase, we have to explicitly close it in MiniZookeeperCluster.shutdown().
 Tests affected by this are
 {code}
 TestSplitLogManager
 TestSplitLogWorker
 TestOfflineMetaRebuildBase
 TestOfflineMetaRebuildHole
 TestOfflineMetaRebuildOverlap
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6829) [WINDOWS] Tests should ensure that HLog is closed

2012-09-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464993#comment-13464993
 ] 

stack commented on HBASE-6829:
--

Patch looks a little brittle.  We seem to depend on there forever being only 
one test in here that uses the new hlog and region data members.  The new data 
members are initialized in a test but closed in the @After.  At least test 
that the hlog and region data members are non-null?  Should the regions and 
hlog be created in a @Before?

 [WINDOWS] Tests should ensure that HLog is closed
 -

 Key: HBASE-6829
 URL: https://issues.apache.org/jira/browse/HBASE-6829
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Attachments: hbase-6829_v1-0.94.patch, hbase-6829_v1-trunk.patch, 
 hbase-6829_v2-0.94.patch, hbase-6829_v2-trunk.patch


 TestCacheOnWriteInSchema and TestCompactSelection fail with 
 {code}
 java.io.IOException: Target HLog directory already exists: 
 ./target/test-data/2d814e66-75d3-4c1b-92c7-a49d9972e8fd/TestCacheOnWriteInSchema/logs
   at org.apache.hadoop.hbase.regionserver.wal.HLog.init(HLog.java:385)
   at org.apache.hadoop.hbase.regionserver.wal.HLog.init(HLog.java:316)
   at 
 org.apache.hadoop.hbase.regionserver.TestCacheOnWriteInSchema.setUp(TestCacheOnWriteInSchema.java:162)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6828) [WINDOWS] TestMemoryBoundedLogMessageBuffer failures

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6828:
-

Status: Patch Available  (was: Open)

+1 on commit if this hadoopqa run does not uncover something odd

 [WINDOWS] TestMemoryBoundedLogMessageBuffer failures
 

 Key: HBASE-6828
 URL: https://issues.apache.org/jira/browse/HBASE-6828
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Attachments: hbase-6828_v1-0.94.patch, hbase-6828_v1-trunk.patch


 TestMemoryBoundedLogMessageBuffer fails because of a suspected \n line ending 
 difference.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6832) [WINDOWS] Tests should use explicit timestamp for Puts, and not rely on implicit RS timing

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6832:
-

Status: Patch Available  (was: Open)

+1 on commit if this patch passes hadoopqa.

 [WINDOWS] Tests should use explicit timestamp for Puts, and not rely on 
 implicit RS timing  
 

 Key: HBASE-6832
 URL: https://issues.apache.org/jira/browse/HBASE-6832
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Attachments: hbase-6832_v1-0.94.patch, hbase-6832_v1-trunk.patch


 TestRegionObserverBypass.testMulti() fails with 
 {code}
 java.lang.AssertionError: expected:1 but was:0
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.failNotEquals(Assert.java:647)
   at org.junit.Assert.assertEquals(Assert.java:128)
   at org.junit.Assert.assertEquals(Assert.java:472)
   at org.junit.Assert.assertEquals(Assert.java:456)
   at 
 org.apache.hadoop.hbase.coprocessor.TestRegionObserverBypass.checkRowAndDelete(TestRegionObserverBypass.java:173)
   at 
 org.apache.hadoop.hbase.coprocessor.TestRegionObserverBypass.testMulti(TestRegionObserverBypass.java:166)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6439) Ignore .archive directory as a table

2012-09-27 Thread Sameer Vaishampayan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13465000#comment-13465000
 ] 

Sameer Vaishampayan commented on HBASE-6439:


Here's the review:

https://reviews.apache.org/r/7225/

[~jesse_yates] [~saint@gmail.com] - Say what?  I am misunderstanding this 
bug then.  I figured the log lines appear because some directories are not 
deletable, and that the directory was to be made configurable instead of a 
constant.  Aren't some archive dirs created under region dirs, as in 
HFileArchiveUtil's getStoreArchivePath method, and therefore not a constant?

 Ignore .archive directory as a table
 

 Key: HBASE-6439
 URL: https://issues.apache.org/jira/browse/HBASE-6439
 Project: HBase
  Issue Type: Bug
  Components: io, regionserver
Affects Versions: 0.96.0
Reporter: Jesse Yates
Assignee: Sameer Vaishampayan
  Labels: newbie

 From a recent test run:
 {quote}
 2012-07-22 02:27:30,699 WARN  [IPC Server handler 0 on 47087] 
 util.FSTableDescriptors(168): The following folder is in HBase's root 
 directory and doesn't contain a table descriptor, do consider deleting it: 
 .archive
 {quote}
 With the addition of HBASE-5547, table-level folders are no longer all table 
 folders. FSTableDescriptors then needs to have a 'gold list' that we can 
 update with directories that aren't tables, so this kind of thing doesn't 
 show up in the logs.
 Currently, we have the following block:
 {quote}
 invocations++;
 if (HTableDescriptor.ROOT_TABLEDESC.getNameAsString().equals(tablename)) {
   cachehits++;
   return HTableDescriptor.ROOT_TABLEDESC;
 }
 if (HTableDescriptor.META_TABLEDESC.getNameAsString().equals(tablename)) {
   cachehits++;
   return HTableDescriptor.META_TABLEDESC;
 }
 {quote}
 to handle special cases, but that's a bit clunky and not clean in terms of 
 table-level directories that need to be ignored.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6439) Ignore .archive directory as a table

2012-09-27 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13465002#comment-13465002
 ] 

Jesse Yates commented on HBASE-6439:


yeah, it was configurable, but stack was saying that we should just make 
.archive the directory where we always archive files.  This patch then 
becomes: removing the configuration element, fixing all the places that look 
in the conf for the archive directory (more than getStoreArchivePath, but not 
too many places), adding a constant for the .archive directory and then, 
finally, updating the non-table-dirs constant.

Make sense?
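A sketch of the "non-table-dirs constant" idea under discussion (the class name and the exact directory list are illustrative assumptions, not the committed patch): FSTableDescriptors could check a fixed whitelist before warning about a missing table descriptor.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class NonTableDirs {
    // Hypothetical constant: top-level directories under the HBase root
    // that are known not to be tables, so directory scans can skip them
    // instead of warning about a missing table descriptor.
    static final Set<String> NON_TABLE_DIRS = new HashSet<>(
        Arrays.asList(".archive", ".logs", ".oldlogs", ".tmp"));

    // A directory is a candidate table dir only if it is not whitelisted.
    static boolean isTableDir(String name) {
        return !NON_TABLE_DIRS.contains(name);
    }
}
```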

 Ignore .archive directory as a table
 

 Key: HBASE-6439
 URL: https://issues.apache.org/jira/browse/HBASE-6439
 Project: HBase
  Issue Type: Bug
  Components: io, regionserver
Affects Versions: 0.96.0
Reporter: Jesse Yates
Assignee: Sameer Vaishampayan
  Labels: newbie

 From a recent test run:
 {quote}
 2012-07-22 02:27:30,699 WARN  [IPC Server handler 0 on 47087] 
 util.FSTableDescriptors(168): The following folder is in HBase's root 
 directory and doesn't contain a table descriptor, do consider deleting it: 
 .archive
 {quote}
 With the addition of HBASE-5547, table-level folders are no longer all table 
 folders. FSTableDescriptors then needs to have a 'gold list' that we can 
 update with directories that aren't tables, so this kind of thing doesn't 
 show up in the logs.
 Currently, we have the following block:
 {quote}
 invocations++;
 if (HTableDescriptor.ROOT_TABLEDESC.getNameAsString().equals(tablename)) {
   cachehits++;
   return HTableDescriptor.ROOT_TABLEDESC;
 }
 if (HTableDescriptor.META_TABLEDESC.getNameAsString().equals(tablename)) {
   cachehits++;
   return HTableDescriptor.META_TABLEDESC;
 }
 {quote}
 to handle special cases, but that's a bit clunky and not clean in terms of 
 table-level directories that need to be ignored.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6610) HFileLink: Hardlink alternative for snapshot restore

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6610:
-

Status: Open  (was: Patch Available)

 HFileLink: Hardlink alternative for snapshot restore
 

 Key: HBASE-6610
 URL: https://issues.apache.org/jira/browse/HBASE-6610
 Project: HBase
  Issue Type: Sub-task
  Components: io
Affects Versions: 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
  Labels: snapshot
 Fix For: 0.96.0

 Attachments: HBASE-6610-v1.patch, HBASE-6610-v2.patch, 
 HBASE-6610-v3.patch, HBASE-6610-v5.patch, HBASE-6610-v6.patch, 
 HBASE-6610-v7.patch, HBASE-6610-v8.patch


 To avoid copying data during snapshot restore, we need to introduce an 
 HFileLink that allows referencing a file that can be in its original path 
 (/hbase/table/region/cf/hfile) or, if the file is archived, in the archive 
 directory (/hbase/.archive/table/region/cf/hfile).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6610) HFileLink: Hardlink alternative for snapshot restore

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6610:
-

Status: Patch Available  (was: Open)

Trying v8 against hadoopqa

 HFileLink: Hardlink alternative for snapshot restore
 

 Key: HBASE-6610
 URL: https://issues.apache.org/jira/browse/HBASE-6610
 Project: HBase
  Issue Type: Sub-task
  Components: io
Affects Versions: 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
  Labels: snapshot
 Fix For: 0.96.0

 Attachments: HBASE-6610-v1.patch, HBASE-6610-v2.patch, 
 HBASE-6610-v3.patch, HBASE-6610-v5.patch, HBASE-6610-v6.patch, 
 HBASE-6610-v7.patch, HBASE-6610-v8.patch


 To avoid copying data during snapshot restore, we need to introduce an 
 HFileLink that allows referencing a file that can be in its original path 
 (/hbase/table/region/cf/hfile) or, if the file is archived, in the archive 
 directory (/hbase/.archive/table/region/cf/hfile).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6601) TestImportExport failing against Hadoop 2

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6601:
-

Status: Open  (was: Patch Available)

 TestImportExport failing against Hadoop 2
 -

 Key: HBASE-6601
 URL: https://issues.apache.org/jira/browse/HBASE-6601
 Project: HBase
  Issue Type: Bug
Reporter: Scott Forman
 Attachments: HBASE-6601-0.94-changeFunctSign.patch, 
 HBASE-6601-0.94.patch, HBASE-6601-0.94.patch, 
 HBASE-6601-trunk-changeFunctSign.patch, 
 HBASE-6601-trunk-changeFunctSign.patch, HBASE-6601-trunk.patch, 
 HBASE-6601-trunk.patch, HBASE-6601-trunk.patch


 TestImportExport.testSimpleCase is failing with the following exception:
 java.lang.reflect.UndeclaredThrowableException
   at 
 org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl.unwrapAndThrowException(YarnRemoteExceptionPBImpl.java:135)
   at 
 org.apache.hadoop.yarn.api.impl.pb.client.ClientRMProtocolPBClientImpl.getNewApplication(ClientRMProtocolPBClientImpl.java:134)
   at 
 org.apache.hadoop.mapred.ResourceMgrDelegate.getNewJobID(ResourceMgrDelegate.java:181)
   at org.apache.hadoop.mapred.YARNRunner.getNewJobID(YARNRunner.java:214)
   at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:337)
   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1216)
   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1213)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1213)
   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1234)
   at 
 org.apache.hadoop.hbase.mapreduce.TestImportExport.testSimpleCase(TestImportExport.java:114)
 
 Caused by: com.google.protobuf.ServiceException: java.net.ConnectException: 
 Call From asurus.iridiant.net/50.23.172.109 to 0.0.0.0:8032 failed on 
 connection exception: java.net.ConnectException: Connection refused; For more 
 details see:  http://wiki.apache.org/hadoop/ConnectionRefused
 
 The problem is that a connection to the YARN resource manager is being made 
 at the default address (0.0.0.0:8032) instead of the actual address that it 
 is listening on.  
 This test creates two miniclusters, one for map reduce and one for hbase, and 
 each minicluster has its own Configuration.  The Configuration for the map 
 reduce minicluster has the correct resource manager address, while the 
 Configuration for the hbase minicluster has the default resource manager 
 address.  Since the test is using only the Configuration from the hbase 
 minicluster, it sees the default address for the resource manager, instead of 
 the actual address.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6601) TestImportExport failing against Hadoop 2

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6601:
-

Attachment: HBASE-6601-trunk.patch

Reapplying so hadoopqa picks up trunk rather than 0.94 patch

 TestImportExport failing against Hadoop 2
 -

 Key: HBASE-6601
 URL: https://issues.apache.org/jira/browse/HBASE-6601
 Project: HBase
  Issue Type: Bug
Reporter: Scott Forman
 Attachments: HBASE-6601-0.94-changeFunctSign.patch, 
 HBASE-6601-0.94.patch, HBASE-6601-0.94.patch, 
 HBASE-6601-trunk-changeFunctSign.patch, 
 HBASE-6601-trunk-changeFunctSign.patch, HBASE-6601-trunk.patch, 
 HBASE-6601-trunk.patch, HBASE-6601-trunk.patch


 TestImportExport.testSimpleCase is failing with the following exception:
 java.lang.reflect.UndeclaredThrowableException
   at 
 org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl.unwrapAndThrowException(YarnRemoteExceptionPBImpl.java:135)
   at 
 org.apache.hadoop.yarn.api.impl.pb.client.ClientRMProtocolPBClientImpl.getNewApplication(ClientRMProtocolPBClientImpl.java:134)
   at 
 org.apache.hadoop.mapred.ResourceMgrDelegate.getNewJobID(ResourceMgrDelegate.java:181)
   at org.apache.hadoop.mapred.YARNRunner.getNewJobID(YARNRunner.java:214)
   at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:337)
   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1216)
   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1213)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1213)
   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1234)
   at 
 org.apache.hadoop.hbase.mapreduce.TestImportExport.testSimpleCase(TestImportExport.java:114)
 
 Caused by: com.google.protobuf.ServiceException: java.net.ConnectException: 
 Call From asurus.iridiant.net/50.23.172.109 to 0.0.0.0:8032 failed on 
 connection exception: java.net.ConnectException: Connection refused; For more 
 details see:  http://wiki.apache.org/hadoop/ConnectionRefused
 
 The problem is that a connection to the YARN resource manager is being made 
 at the default address (0.0.0.0:8032) instead of the actual address that it 
 is listening on.  
 This test creates two miniclusters, one for map reduce and one for hbase, and 
 each minicluster has its own Configuration.  The Configuration for the map 
 reduce minicluster has the correct resource manager address, while the 
 Configuration for the hbase minicluster has the default resource manager 
 address.  Since the test is using only the Configuration from the hbase 
 minicluster, it sees the default address for the resource manager, instead of 
 the actual address.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6601) TestImportExport failing against Hadoop 2

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6601:
-

Status: Patch Available  (was: Open)

 TestImportExport failing against Hadoop 2
 -

 Key: HBASE-6601
 URL: https://issues.apache.org/jira/browse/HBASE-6601
 Project: HBase
  Issue Type: Bug
Reporter: Scott Forman
 Attachments: HBASE-6601-0.94-changeFunctSign.patch, 
 HBASE-6601-0.94.patch, HBASE-6601-0.94.patch, 
 HBASE-6601-trunk-changeFunctSign.patch, 
 HBASE-6601-trunk-changeFunctSign.patch, HBASE-6601-trunk.patch, 
 HBASE-6601-trunk.patch, HBASE-6601-trunk.patch


 TestImportExport.testSimpleCase is failing with the following exception:
 java.lang.reflect.UndeclaredThrowableException
   at 
 org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl.unwrapAndThrowException(YarnRemoteExceptionPBImpl.java:135)
   at 
 org.apache.hadoop.yarn.api.impl.pb.client.ClientRMProtocolPBClientImpl.getNewApplication(ClientRMProtocolPBClientImpl.java:134)
   at 
 org.apache.hadoop.mapred.ResourceMgrDelegate.getNewJobID(ResourceMgrDelegate.java:181)
   at org.apache.hadoop.mapred.YARNRunner.getNewJobID(YARNRunner.java:214)
   at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:337)
   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1216)
   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1213)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1213)
   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1234)
   at 
 org.apache.hadoop.hbase.mapreduce.TestImportExport.testSimpleCase(TestImportExport.java:114)
 
 Caused by: com.google.protobuf.ServiceException: java.net.ConnectException: 
 Call From asurus.iridiant.net/50.23.172.109 to 0.0.0.0:8032 failed on 
 connection exception: java.net.ConnectException: Connection refused; For more 
 details see:  http://wiki.apache.org/hadoop/ConnectionRefused
 
 The problem is that a connection to the YARN resource manager is being made 
 at the default address (0.0.0.0:8032) instead of the actual address that it 
 is listening on.  
 This test creates two miniclusters, one for map reduce and one for hbase, and 
 each minicluster has its own Configuration.  The Configuration for the map 
 reduce minicluster has the correct resource manager address, while the 
 Configuration for the hbase minicluster has the default resource manager 
 address.  Since the test is using only the Configuration from the hbase 
 minicluster, it sees the default address for the resource manager, instead of 
 the actual address.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6826) [WINDOWS] TestFromClientSide failures

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6826:
-

Status: Patch Available  (was: Open)

+1 on commit.  Running by hadoopqa just in case.

 [WINDOWS] TestFromClientSide failures
 -

 Key: HBASE-6826
 URL: https://issues.apache.org/jira/browse/HBASE-6826
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Attachments: hbase-6826_v1-0.94.patch, hbase-6826_v1-trunk.patch


 The following tests fail for TestFromClientSide: 
 {code}
 testPoolBehavior()
 testClientPoolRoundRobin()
 testClientPoolThreadLocal()
 {code}
 The first test fails because it (wrongly) assumes that ThreadPoolExecutor 
 can reclaim the thread immediately. 
 The second and third tests seem to fail because the Puts to the table do 
 not specify an explicit timestamp, but on Windows, consecutive calls to put 
 happen to finish in the same millisecond, so the resulting mutations have 
 the same timestamp and thus there is only one version of the cell value.  
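 The collapse to one version is easy to model outside HBase (a sketch, using a plain TreeMap keyed by timestamp as a stand-in for a cell's version map, not the real HBase data structures):

```java
import java.util.TreeMap;

public class TimestampCollisionDemo {
    public static void main(String[] args) {
        // Stand-in for a cell's version map, keyed by timestamp.
        TreeMap<Long, String> versions = new TreeMap<>();

        // Implicit timestamps: on a fast machine two puts can land in the
        // same millisecond, so the second overwrites the first.
        long ts = System.currentTimeMillis();
        versions.put(ts, "value-1");
        versions.put(ts, "value-2");      // same timestamp -> one version survives
        System.out.println(versions.size()); // 1

        // Explicit, distinct timestamps keep both versions, which is what the
        // test fix does when it stops relying on implicit RS timing.
        versions.clear();
        versions.put(1L, "value-1");
        versions.put(2L, "value-2");
        System.out.println(versions.size()); // 2
    }
}
```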

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6829) [WINDOWS] Tests should ensure that HLog is closed

2012-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465016#comment-13465016
 ] 

Hadoop QA commented on HBASE-6829:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12546381/hbase-6829_v2-trunk.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified tests.

+1 hadoop2.0.  The patch compiles against the hadoop 2.0 profile.

-1 javadoc.  The javadoc tool appears to have generated 140 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 6 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.client.TestFromClientSideWithCoprocessor
  org.apache.hadoop.hbase.regionserver.TestAtomicOperation

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2943//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2943//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2943//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2943//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2943//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2943//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2943//console

This message is automatically generated.

 [WINDOWS] Tests should ensure that HLog is closed
 -

 Key: HBASE-6829
 URL: https://issues.apache.org/jira/browse/HBASE-6829
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Attachments: hbase-6829_v1-0.94.patch, hbase-6829_v1-trunk.patch, 
 hbase-6829_v2-0.94.patch, hbase-6829_v2-trunk.patch


 TestCacheOnWriteInSchema and TestCompactSelection fail with 
 {code}
 java.io.IOException: Target HLog directory already exists: 
 ./target/test-data/2d814e66-75d3-4c1b-92c7-a49d9972e8fd/TestCacheOnWriteInSchema/logs
   at org.apache.hadoop.hbase.regionserver.wal.HLog.init(HLog.java:385)
   at org.apache.hadoop.hbase.regionserver.wal.HLog.init(HLog.java:316)
   at 
 org.apache.hadoop.hbase.regionserver.TestCacheOnWriteInSchema.setUp(TestCacheOnWriteInSchema.java:162)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6822) [WINDOWS] MiniZookeeperCluster multiple daemons bind to the same port

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6822:
-

Status: Patch Available  (was: Open)

+1 on commit.  Running by hadoopqa to check

 [WINDOWS] MiniZookeeperCluster multiple daemons bind to the same port
 -

 Key: HBASE-6822
 URL: https://issues.apache.org/jira/browse/HBASE-6822
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: hbase-6822_v1-0.94.patch, hbase-6822_v1-trunk.patch


 TestHBaseTestingUtility.testMiniZooKeeper() tests whether the mini zk cluster 
 is working by launching 5 threads corresponding to zk servers. 
 NIOServerCnxnFactory.configure() configures the socket as:
 {code}
 this.ss = ServerSocketChannel.open();
 ss.socket().setReuseAddress(true);
 {code}
 setReuseAddress() is set because it allows the server to come back up and 
 bind to the same port before the socket is timed out by the kernel.
 On Windows, ServerSocket.setReuseAddress() behaves differently than on 
 Linux: it allows any process to bind to an already-bound port. This lets ZK 
 servers starting on the same node bind to the same port. 
 The following part of the patch at 
 https://issues.apache.org/jira/browse/HADOOP-8223 deals with this case for 
 Hadoop:
 {code}
 if(Shell.WINDOWS) {
 +  // result of setting the SO_REUSEADDR flag is different on Windows
 +  // http://msdn.microsoft.com/en-us/library/ms740621(v=vs.85).aspx
 +  // without this 2 NN's can start on the same machine and listen on 
 +  // the same port with indeterminate routing of incoming requests to 
 them
 +  ret.setReuseAddress(false);
 +}
 {code}
 We should do the same in Zookeeper (I'll open a ZOOK issue). But in the 
 meantime, we can fix the hbase tests to not rely on BindException to detect 
 bind errors. In particular, in MiniZKCluster.startup(), when starting more 
 than one server we already know that we have to increment the port number. 
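 The HADOOP-8223 pattern above can be sketched with the plain JDK socket API (the OS check reimplements Hadoop's Shell.WINDOWS so the snippet has no Hadoop dependency; port 0 lets the OS pick a free port):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class ReuseAddressSketch {
    // Same check Hadoop's Shell.WINDOWS performs, inlined here.
    static final boolean WINDOWS = System.getProperty("os.name").startsWith("Windows");

    public static void main(String[] args) throws IOException {
        ServerSocket ss = new ServerSocket();
        // On Windows, SO_REUSEADDR lets a second process bind an already-bound
        // port, so the fix disables it there; elsewhere it is kept, where it
        // only permits fast restarts on a port still in TIME_WAIT.
        ss.setReuseAddress(!WINDOWS);
        ss.bind(new InetSocketAddress("127.0.0.1", 0)); // 0: OS assigns a free port
        System.out.println("bound to port " + ss.getLocalPort());
        ss.close();
    }
}
```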

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6823) [WINDOWS] TestSplitTransaction fails due the Log handle is not released by a call to the DaughterOpener.start()

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6823:
-

Status: Patch Available  (was: Open)

+1 on commit.  Running by hadoopqa just in case.

 [WINDOWS] TestSplitTransaction fails due the Log handle is not released by a 
 call to the DaughterOpener.start()
 ---

 Key: HBASE-6823
 URL: https://issues.apache.org/jira/browse/HBASE-6823
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Attachments: hbase-6823_v1-0.94.patch, hbase-6823_v1-trunk.patch


 There are two unit test cases in HBase RegionServer test failed in the clean 
 up stage that failed to delete the files/folders created in the test. 
 testWholesomeSplit(org.apache.hadoop.hbase.regionserver.TestSplitTransaction):
  Failed delete of ./target/test-
 data/1c386abc-f159-492e-b21f-e89fab24d85b/org.apache.hadoop.hbase.regionserver.TestSplitTransaction/table/a588d813fd26280c2b42e93565ed960c
 testRollback(org.apache.hadoop.hbase.regionserver.TestSplitTransaction): 
 Failed delete of ./target/test-data/6
 1a1a14b-0cc9-4dd6-93fd-4dc021e2bfcc/org.apache.hadoop.hbase.regionserver.TestSplitTransaction/table/8090abc89528461fa284288c257662cd
 The root cause is triggered by a call to DaughterOpener.start() in 
 \src\hbase\src\main\java\org\apache\hadoop\hbase\regionserver\SplitTransaction.java
  (the openDaughters() function). It leaves open handles to the split 
 folder/file, causing the delete of the file/folder to fail on Windows.
 Windows does not allow deleting a file while there are open handles to it.
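 The failure mode can be modeled with plain java.io (a sketch; the file name prefix is made up): a delete attempted while a stream still holds the file open fails on Windows, and closing the handle first makes the delete reliable on every platform.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class OpenHandleDeleteDemo {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("split-daughter", ".tmp");
        FileOutputStream out = new FileOutputStream(f);
        out.write(42);

        // On Windows this delete fails while 'out' is open; on POSIX systems
        // it succeeds because the directory entry and the handle are independent.
        boolean deletedWhileOpen = f.delete();

        // Closing the handle first is what the test teardown needs from
        // openDaughters(): after close(), the delete succeeds everywhere.
        out.close();
        boolean deletedAfterClose = deletedWhileOpen || f.delete();
        System.out.println("deleted after close: " + deletedAfterClose);
    }
}
```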

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6816) [WINDOWS] line endings on checkout for .sh files

2012-09-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465021#comment-13465021
 ] 

stack commented on HBASE-6816:
--

You added wrong patch here I think.

 [WINDOWS] line endings on checkout for .sh files
 

 Key: HBASE-6816
 URL: https://issues.apache.org/jira/browse/HBASE-6816
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: hbase-16_v1.patch


 On code checkout from svn or git, we need to ensure that the line endings for 
 .sh files are LF, so that they work with cygwin. This is important for 
 getting src/saveVersion.sh to work. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6831) [WINDOWS] HBaseTestingUtility.expireSession() does not expire zookeeper session

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6831:
-

Status: Patch Available  (was: Open)

+1 on commit.  Running by hadoopqa just in case.

 [WINDOWS] HBaseTestingUtility.expireSession() does not expire zookeeper 
 session
 ---

 Key: HBASE-6831
 URL: https://issues.apache.org/jira/browse/HBASE-6831
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Attachments: hbase-6831_v1-0.94.patch, hbase-6831_v1-trunk.patch


 TestReplicationPeer fails because it forces the zookeeper session expiration 
 by calling HBaseTestingUtility.expireSession(), but that function fails to do 
 so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6829) [WINDOWS] Tests should ensure that HLog is closed

2012-09-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465024#comment-13465024
 ] 

stack commented on HBASE-6829:
--

These look unrelated.  Would suggest you run the tests locally again before 
commit to make sure that is indeed the case; otherwise, go for it Enis.

 [WINDOWS] Tests should ensure that HLog is closed
 -

 Key: HBASE-6829
 URL: https://issues.apache.org/jira/browse/HBASE-6829
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Attachments: hbase-6829_v1-0.94.patch, hbase-6829_v1-trunk.patch, 
 hbase-6829_v2-0.94.patch, hbase-6829_v2-trunk.patch


 TestCacheOnWriteInSchema and TestCompactSelection fail with 
 {code}
 java.io.IOException: Target HLog directory already exists: 
 ./target/test-data/2d814e66-75d3-4c1b-92c7-a49d9972e8fd/TestCacheOnWriteInSchema/logs
   at org.apache.hadoop.hbase.regionserver.wal.HLog.init(HLog.java:385)
   at org.apache.hadoop.hbase.regionserver.wal.HLog.init(HLog.java:316)
   at 
 org.apache.hadoop.hbase.regionserver.TestCacheOnWriteInSchema.setUp(TestCacheOnWriteInSchema.java:162)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-5071) HFile has a possible cast issue.

2012-09-27 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo resolved HBASE-5071.
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Release Note: This issue only effects HFileV1 and is not an issue for 
HFileV2.

 HFile has a possible cast issue.
 

 Key: HBASE-5071
 URL: https://issues.apache.org/jira/browse/HBASE-5071
 Project: HBase
  Issue Type: Bug
  Components: HFile, io
Affects Versions: 0.90.0
Reporter: Harsh J
  Labels: hfile
 Fix For: 0.96.0


 HBASE-3040 introduced this line originally in HFile.Reader#loadFileInfo(...):
 {code}
 int allIndexSize = (int)(this.fileSize - this.trailer.dataIndexOffset - 
 FixedFileTrailer.trailerSize());
 {code}
 Which on trunk today, for HFile v1 is:
 {code}
 int sizeToLoadOnOpen = (int) (fileSize - trailer.getLoadOnOpenDataOffset() -
 trailer.getTrailerSize());
 {code}
 This computed (and cast) integer is then used to build an array of the same 
 size. But if fileSize is very large (> Integer.MAX_VALUE), then there's an 
 easy chance this can go negative at some point and spew out exceptions such 
 as:
 {code}
 java.lang.NegativeArraySizeException 
 at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readAllIndex(HFile.java:805) 
 at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:832) 
 at 
 org.apache.hadoop.hbase.regionserver.StoreFile$Reader.loadFileInfo(StoreFile.java:1003)
  
 at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:382) 
 at 
 org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:438)
  
 at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:267) 
 at org.apache.hadoop.hbase.regionserver.Store.init(Store.java:209) 
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2088)
  
 at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:358) 
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661) 
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647) 
 {code}
 Did we accidentally limit single region sizes this way?
 (Unsure about HFile v2's structure so far, so do not know if v2 has the same 
 issue.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5071) HFile has a possible cast issue.

2012-09-27 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465028#comment-13465028
 ] 

Chris Trezzo commented on HBASE-5071:
-

Closed and left a comment in the release notes. Thanks Harsh J!

 HFile has a possible cast issue.
 

 Key: HBASE-5071
 URL: https://issues.apache.org/jira/browse/HBASE-5071
 Project: HBase
  Issue Type: Bug
  Components: HFile, io
Affects Versions: 0.90.0
Reporter: Harsh J
  Labels: hfile
 Fix For: 0.96.0


 HBASE-3040 introduced this line originally in HFile.Reader#loadFileInfo(...):
 {code}
 int allIndexSize = (int)(this.fileSize - this.trailer.dataIndexOffset - 
 FixedFileTrailer.trailerSize());
 {code}
 Which on trunk today, for HFile v1 is:
 {code}
 int sizeToLoadOnOpen = (int) (fileSize - trailer.getLoadOnOpenDataOffset() -
 trailer.getTrailerSize());
 {code}
 This computed (and cast) integer is then used to build an array of the same 
 size. But if fileSize is very large (> Integer.MAX_VALUE), then there's an 
 easy chance this can go negative at some point and spew out exceptions such 
 as:
 {code}
 java.lang.NegativeArraySizeException 
 at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readAllIndex(HFile.java:805) 
 at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:832) 
 at 
 org.apache.hadoop.hbase.regionserver.StoreFile$Reader.loadFileInfo(StoreFile.java:1003)
  
 at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:382) 
 at 
 org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:438)
  
 at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:267) 
 at org.apache.hadoop.hbase.regionserver.Store.init(Store.java:209) 
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2088)
  
 at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:358) 
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661) 
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647) 
 {code}
 Did we accidentally limit single region sizes this way?
 (Unsure about HFile v2's structure so far, so do not know if v2 has the same 
 issue.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5251) Some commands return 0 rows when > 0 rows were processed successfully

2012-09-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465029#comment-13465029
 ] 

stack commented on HBASE-5251:
--

[~sameerv] I took another look.  Does this patch not pervert the original 
intent of formatter?  Formatter was about how to format the results for a 
particular output, whether console, html for a webpage, or in some distant 
future, some fancy gui.  This patch changes formatter to instead hold the 
outputting context: i.e. admin or row output.  Maybe that's fine since we're 
not exploiting the original intent.  But maybe rather than remove formatter, 
we should introduce context beyond what we currently have -- footer, row, 
header -- to include whatever you need for outputting the count of rows.

Otherwise, nice patch.

 Some commands return 0 rows when > 0 rows were processed successfully
 ---

 Key: HBASE-5251
 URL: https://issues.apache.org/jira/browse/HBASE-5251
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 0.90.5
Reporter: David S. Wang
Assignee: Sameer Vaishampayan
Priority: Minor
  Labels: noob
 Attachments: patch7.diff, patch8.diff, patch9.diff


 From the hbase shell, I see this:
 hbase(main):049:0> scan 't1'
 ROW   COLUMN+CELL 
   
  r1   column=f1:c1, timestamp=1327104295560, value=value  
   
  r1   column=f1:c2, timestamp=1327104330625, value=value  
   
 1 row(s) in 0.0300 seconds
 hbase(main):050:0> deleteall 't1', 'r1'
 0 row(s) in 0.0080 seconds  <== I expected this to read 
 2 row(s)
 hbase(main):051:0> scan 't1'   
 ROW   COLUMN+CELL 
   
 0 row(s) in 0.0090 seconds
 I expected the deleteall command to return 1 row(s) instead of 0, because 1 
 row was deleted.  Similar behavior for delete and some other commands.  Some 
 commands such as put work fine.
 Looking at the ruby shell code, it seems that formatter.footer() is called 
 even for commands that will not actually increment the number of rows 
 reported, such as deletes.  Perhaps there should be a function similar to 
 formatter.footer() that does not print out @row_count.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6820) [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is closed upon shutdown()

2012-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465030#comment-13465030
 ] 

Hadoop QA commented on HBASE-6820:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12546135/hbase-6820_v1-trunk.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 hadoop2.0.  The patch compiles against the hadoop 2.0 profile.

-1 javadoc.  The javadoc tool appears to have generated 140 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 6 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.io.hfile.TestForceCacheImportantBlocks

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2945//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2945//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2945//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2945//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2945//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2945//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2945//console

This message is automatically generated.

 [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is closed upon 
 shutdown()
 --

 Key: HBASE-6820
 URL: https://issues.apache.org/jira/browse/HBASE-6820
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Attachments: hbase-6820_v1-0.94.patch, hbase-6820_v1-trunk.patch


 MiniZookeeperCluster.shutdown() shuts down the ZookeeperServer and 
 NIOServerCnxnFactory. However, MiniZookeeperCluster uses a deprecated 
 ZookeeperServer constructor, which in turn constructs its own FileTxnSnapLog, 
 and ZKDatabase. Since ZookeeperServer.shutdown() does not close() the 
 ZKDatabase, we have to explicitly close it in MiniZookeeperCluster.shutdown().
 Tests affected by this are
 {code}
 TestSplitLogManager
 TestSplitLogWorker
 TestOfflineMetaRebuildBase
 TestOfflineMetaRebuildHole
 TestOfflineMetaRebuildOverlap
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6832) [WINDOWS] Tests should use explicit timestamp for Puts, and not rely on implicit RS timing

2012-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465031#comment-13465031
 ] 

Hadoop QA commented on HBASE-6832:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12546129/hbase-6832_v1-trunk.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 hadoop2.0.  The patch compiles against the hadoop 2.0 profile.

-1 javadoc.  The javadoc tool appears to have generated 140 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 6 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2946//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2946//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2946//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2946//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2946//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2946//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2946//console

This message is automatically generated.

 [WINDOWS] Tests should use explicit timestamp for Puts, and not rely on 
 implicit RS timing  
 

 Key: HBASE-6832
 URL: https://issues.apache.org/jira/browse/HBASE-6832
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Attachments: hbase-6832_v1-0.94.patch, hbase-6832_v1-trunk.patch


 TestRegionObserverBypass.testMulti() fails with 
 {code}
 java.lang.AssertionError: expected:<1> but was:<0>
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.failNotEquals(Assert.java:647)
   at org.junit.Assert.assertEquals(Assert.java:128)
   at org.junit.Assert.assertEquals(Assert.java:472)
   at org.junit.Assert.assertEquals(Assert.java:456)
   at 
 org.apache.hadoop.hbase.coprocessor.TestRegionObserverBypass.checkRowAndDelete(TestRegionObserverBypass.java:173)
   at 
 org.apache.hadoop.hbase.coprocessor.TestRegionObserverBypass.testMulti(TestRegionObserverBypass.java:166)
 {code}
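The underlying failure mode is two writes landing in the same clock tick on a coarse timer. A minimal, self-contained sketch (a plain map stands in for one cell's version history; this is not HBase's API) shows why explicit timestamps make the test deterministic:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class ExplicitTimestampDemo {
    // Stand-in for one cell's version history: timestamp -> value;
    // a read returns the value with the highest timestamp.
    static NavigableMap<Long, String> cell = new TreeMap<>();

    static void put(long ts, String value) { cell.put(ts, value); }

    public static void main(String[] args) {
        // Implicit server-side timing: on a coarse clock (e.g. on Windows),
        // two writes can get the same millisecond and silently collapse.
        long now = 1348000000000L; // same "currentTimeMillis" for both writes
        put(now, "first");
        put(now, "second");
        System.out.println(cell.size()); // 1: "first" was overwritten

        // Explicit, strictly increasing timestamps keep both versions.
        cell.clear();
        put(1L, "first");
        put(2L, "second");
        System.out.println(cell.size());                 // 2
        System.out.println(cell.lastEntry().getValue()); // second
    }
}
```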

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6861) HFileOutputFormat set TIMERANGE wrongly

2012-09-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465032#comment-13465032
 ] 

stack commented on HBASE-6861:
--

You have a patch Eugene?  Thanks.

 HFileOutputFormat set TIMERANGE wrongly
 ---

 Key: HBASE-6861
 URL: https://issues.apache.org/jira/browse/HBASE-6861
 Project: HBase
  Issue Type: Bug
Reporter: Eugene Morozov
Priority: Minor

 In case if timestamps for KeyValues specified differently for different 
 column families, then TIMERANGEs of both HFiles would be wrong.
 Example (in pseudo code): my reducer has a condition
 if ( condition ) {
   keyValue = new KeyValue(.., CF1, .., timestamp, ..);
 } else {
   keyValue = new KeyValue(.., CF2, .., ..); // - no timestamp
 }
 context.write( keyValue );
 These two keyValues would be written into two different HFiles.
 But the code that actually does the write looks like the following:
   // we now have the proper HLog writer. full steam ahead
   kv.updateLatestStamp(this.now);
   trt.includeTimestamp(kv);
   wl.writer.append(kv);
 Basically, the two HFiles share the same TimeRangeTracker instance (trt), 
 which gives both of them the same TIMERANGE. That is definitely incorrect: 
 the first HFile must have TIMERANGE=timestamp...timestamp, because we do not 
 write any other timestamps to it, and by the same reasoning the other HFile 
 must have TIMERANGE=now...now.
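The sharing bug comes down to a few lines (Tracker below is a simplified stand-in for HBase's TimeRangeTracker, not the real class):

```java
public class TimeRangeShareDemo {
    // Simplified stand-in for TimeRangeTracker.
    static class Tracker {
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        void includeTimestamp(long ts) { min = Math.min(min, ts); max = Math.max(max, ts); }
        @Override public String toString() { return min + "..." + max; }
    }

    public static void main(String[] args) {
        // Bug shape: both column-family writers feed one shared tracker,
        // so both HFiles record the union of the two time ranges.
        Tracker shared = new Tracker();
        shared.includeTimestamp(100L); // CF1 cell with an explicit timestamp
        shared.includeTimestamp(900L); // CF2 cell stamped with "now"
        System.out.println(shared);    // 100...900 recorded for BOTH files

        // Fix shape: one tracker per writer, so each HFile's TIMERANGE
        // covers only the timestamps actually written to it.
        Tracker cf1 = new Tracker(), cf2 = new Tracker();
        cf1.includeTimestamp(100L);
        cf2.includeTimestamp(900L);
        System.out.println(cf1 + " " + cf2); // 100...100 900...900
    }
}
```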

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-27 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465033#comment-13465033
 ] 

Lars Hofhansl commented on HBASE-6871:
--

The trunk patch applies to 0.94 as well (with some offsets), so that's fine. I 
assume the same will be true for 0.92.


 HFileBlockIndex Write Error BlockIndex in HFile V2
 --

 Key: HBASE-6871
 URL: https://issues.apache.org/jira/browse/HBASE-6871
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.1
 Environment: redhat 5u4
Reporter: Fenng Wang
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: 428a400628ae412ca45d39fce15241fd.hfile, 6871.txt, 
 787179746cc347ce9bb36f1989d17419.hfile, 
 960a026ca370464f84903ea58114bc75.hfile, 
 d0026fa8d59b4df291718f59dd145aad.hfile, D5703.1.patch, D5703.2.patch, 
 D5703.3.patch, D5703.4.patch, D5703.5.patch, hbase-6871-0.94.patch, 
 ImportHFile.java, test_hfile_block_index.sh


 After writing some data, compaction and scan operation both failure, the 
 exception message is below:
 2012-09-18 06:32:26,227 ERROR 
 org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: 
 Compaction failed 
 regionName=hfile_test,,1347778722498.d220df43fb9d8af4633bd7f547613f9e., 
 storeName=page_info, fileCount=7, fileSize=1.3m (188.0k, 188.0k, 188.0k, 
 188.0k, 188.0k, 185.8k, 223.3k), priority=9, 
 time=45826250816757428
 java.io.IOException: Could not reseek 
 StoreFileScanner[HFileScanner for reader 
 reader=hdfs://hadoopdev1.cm6:9000/hbase/hfile_test/d220df43fb9d8af4633bd7f547613f9e/page_info/b0f6118f58de47ad9d87cac438ee0895,
  compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] 
 [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
 [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], 
 firstKey=http://com.truereligionbrandjeans.www/Womens_Dresses/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/4010.html/page_info:anchor_sig/1347764439449/DeleteColumn,
  lastKey=http://com.trura.www//page_info:page_type/1347763395089/Put, 
 avgKeyLen=776, avgValueLen=4, entries=12853, length=228611, 
 cur=http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/1347764003865/Put/vlen=1/ts=0]
  to key 
 http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/OLDEST_TIMESTAMP/Minimum/vlen=0/ts=0
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:178)
 
 at 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
 
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:299)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:244)
 
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)
 
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:402)
 at 
 org.apache.hadoop.hbase.regionserver.Store.compactStore(Store.java:1570)  
   
 at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:997) 

 at 
 org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1216)
 at 
 org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:250)
 
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: Expected block type LEAF_INDEX, but got 
 INTERMEDIATE_INDEX: blockType=INTERMEDIATE_INDEX, 
 onDiskSizeWithoutHeader=8514, uncompressedSizeWithoutHeader=131837, 
 prevBlockOffset=-1, 
 dataBeginsWith=\x00\x00\x00\x9B\x00\x00\x00\x00\x00\x00\x03#\x00\x00\x050\x00\x00\x08\xB7\x00\x00\x0Cr\x00\x00\x0F\xFA\x00\x00\x120,
  fileOffset=218942at 
 

[jira] [Commented] (HBASE-6491) add limit function at ClientScanner

2012-09-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465036#comment-13465036
 ] 

stack commented on HBASE-6491:
--

What if nbRows is 1M and the rows are 100MB each?  Can this not be done as an 
option on Scan?  Then client-side we look for the limit count and, as you 
next() through the results, we intercede and shut the scanner once we have 
hit the limit?
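The suggestion amounts to counting on the client and closing the scanner early, never asking the server for all nbRows at once. A self-contained sketch, with a plain iterator standing in for ClientScanner (not HBase's actual API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ScanLimitDemo {
    public static void main(String[] args) {
        // Stand-in for a scanner over a large table; rows may be huge, so we
        // simply stop calling next() once the client has seen `limit` rows.
        Iterator<String> scanner = Arrays.asList("r1", "r2", "r3", "r4", "r5").iterator();
        int limit = 3;
        List<String> results = new ArrayList<>();
        while (results.size() < limit && scanner.hasNext()) {
            results.add(scanner.next());
        }
        // A real implementation would call scanner.close() here.
        System.out.println(results); // [r1, r2, r3]
    }
}
```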

 add limit function at ClientScanner
 ---

 Key: HBASE-6491
 URL: https://issues.apache.org/jira/browse/HBASE-6491
 Project: HBase
  Issue Type: New Feature
  Components: Client
Affects Versions: 0.96.0
Reporter: ronghai.ma
Assignee: ronghai.ma
  Labels: patch
 Fix For: 0.96.0

 Attachments: ClientScanner.java, HBASE-6491.patch


 Add a new method in ClientScanner to implement a function like LIMIT in MySQL.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5498) Secure Bulk Load

2012-09-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465042#comment-13465042
 ] 

stack commented on HBASE-5498:
--

Can we have review of Francis's last patch set?  [~ram_krish], or [~lakshman], 
or [~andrew.purt...@gmail.com]  Thanks.

 Secure Bulk Load
 

 Key: HBASE-5498
 URL: https://issues.apache.org/jira/browse/HBASE-5498
 Project: HBase
  Issue Type: Improvement
  Components: security
Reporter: Francis Liu
Assignee: Francis Liu
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-5498_94.patch, HBASE-5498_94.patch, 
 HBASE-5498_draft_94.patch, HBASE-5498_draft.patch, HBASE-5498_trunk.patch


 Design doc: 
 https://cwiki.apache.org/confluence/display/HCATALOG/HBase+Secure+Bulk+Load
 Short summary:
 Security as it stands does not cover the bulkLoadHFiles() feature. Users 
 calling this method will bypass ACLs. Also loading is made more cumbersome in 
 a secure setting because of hdfs privileges. bulkLoadHFiles() moves the data 
 from user's directory to the hbase directory, which would require certain 
 write access privileges set.
 Our solution is to create a coprocessor which makes use of AuthManager to 
 verify if a user has write access to the table. If so, launches a MR job as 
 the hbase user to do the importing (ie rewrite from text to hfiles). One 
 tricky part this job will have to do is impersonate the calling user when 
 reading the input files. We can do this by expecting the user to pass an hdfs 
 delegation token as part of the secureBulkLoad() coprocessor call and extend 
 an inputformat to make use of that token. The output is written to a 
 temporary directory accessible only by hbase and then bulkloadHFiles() is 
 called.
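The flow described above — verify the caller's table write access, then perform the load as the service user into a staging directory — can be sketched as follows (all names are hypothetical, not the patch's actual API):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SecureBulkLoadSketch {
    // Hypothetical stand-in for AuthManager's write-permission check.
    static final Set<String> TABLE_WRITERS = new HashSet<>(Arrays.asList("alice"));

    static String secureBulkLoad(String caller, String userInputDir) {
        if (!TABLE_WRITERS.contains(caller)) {
            throw new SecurityException("no write access to table: " + caller);
        }
        // Real code would impersonate `caller` via an HDFS delegation token to
        // read userInputDir, rewrite it to HFiles as the hbase user, and then
        // call bulkLoadHFiles() on a staging dir only hbase can access.
        return "/hbase-staging/job-for-" + caller;
    }

    public static void main(String[] args) {
        System.out.println(secureBulkLoad("alice", "/user/alice/input"));
        try {
            secureBulkLoad("mallory", "/user/mallory/input");
        } catch (SecurityException e) {
            System.out.println("denied");
        }
    }
}
```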

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6881) All regionservers are marked offline even there is still one up

2012-09-27 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465043#comment-13465043
 ] 

Jimmy Xiang commented on HBASE-6881:


In bulk assign, if a region is in transition, we think it is already opened.
Can we do the same for single assignment?
We would need to check that the region is opening instead of closing, though.
Or should we go the other way and change bulk assign instead?

 All regionservers are marked offline even there is still one up
 ---

 Key: HBASE-6881
 URL: https://issues.apache.org/jira/browse/HBASE-6881
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Attachments: trunk-6881.patch


 {noformat}
 +RegionPlan newPlan = plan;
 +if (!regionAlreadyInTransitionException) {
 +  // Force a new plan and reassign. Will return null if no servers.
 +  newPlan = getRegionPlan(state, plan.getDestination(), true);
 +}
 +if (newPlan == null) {
this.timeoutMonitor.setAllRegionServersOffline(true);
LOG.warn("Unable to find a viable location to assign region " +
  state.getRegion().getRegionNameAsString());
 {noformat}
 Here, when newPlan is null, plan.getDestination() could be up actually.
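The bug shape — reporting "all regionservers offline" while the original destination is still live — can be modeled in a few lines (illustrative names, not the AssignmentManager API):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class AssignFallbackDemo {
    // freshPlan == null models getRegionPlan(...) finding no servers; the fix
    // direction is to fall back to the original destination if it is still
    // live, instead of marking all regionservers offline.
    static String chooseDestination(String original, String freshPlan, Set<String> liveServers) {
        if (freshPlan != null) return freshPlan;
        return liveServers.contains(original) ? original : null; // null => "all offline"
    }

    public static void main(String[] args) {
        Set<String> live = new HashSet<>(Arrays.asList("rs1"));
        // getRegionPlan returned null, but the old plan's destination is up:
        System.out.println(chooseDestination("rs1", null, live)); // rs1
        // Only when the destination is really gone do we report null:
        System.out.println(chooseDestination("rs2", null, live)); // null
    }
}
```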

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6881) All regionservers are marked offline even there is still one up

2012-09-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465049#comment-13465049
 ] 

stack commented on HBASE-6881:
--

Bulk assign is not to be copied in my opinion; it's lax, absent critical checks.

 All regionservers are marked offline even there is still one up
 ---

 Key: HBASE-6881
 URL: https://issues.apache.org/jira/browse/HBASE-6881
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Attachments: trunk-6881.patch


 {noformat}
 +RegionPlan newPlan = plan;
 +if (!regionAlreadyInTransitionException) {
 +  // Force a new plan and reassign. Will return null if no servers.
 +  newPlan = getRegionPlan(state, plan.getDestination(), true);
 +}
 +if (newPlan == null) {
this.timeoutMonitor.setAllRegionServersOffline(true);
LOG.warn("Unable to find a viable location to assign region " +
  state.getRegion().getRegionNameAsString());
 {noformat}
 Here, when newPlan is null, plan.getDestination() could be up actually.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5591) ThiftServerRunner.HBaseHandler.toBytes() is identical to Bytes.getBytes()

2012-09-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465054#comment-13465054
 ] 

stack commented on HBASE-5591:
--

This was committed to trunk a while back, but we'll leave it open until Scott 
grants Apache permission.

[~schen] Would you mind granting permission for this already-committed patch? 
Otherwise we'll have to remove it from trunk.  Thanks boss.

 ThiftServerRunner.HBaseHandler.toBytes() is identical to Bytes.getBytes()
 -

 Key: HBASE-5591
 URL: https://issues.apache.org/jira/browse/HBASE-5591
 Project: HBase
  Issue Type: Improvement
Reporter: Scott Chen
Assignee: Scott Chen
Priority: Trivial
 Fix For: 0.96.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HBASE-5591.D2355.1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5044) Clarify solution for problem described on http://hbase.apache.org/book/trouble.mapreduce.html

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-5044:
-

Fix Version/s: (was: 0.96.0)

 Clarify solution for problem described on 
 http://hbase.apache.org/book/trouble.mapreduce.html
 -

 Key: HBASE-5044
 URL: https://issues.apache.org/jira/browse/HBASE-5044
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Reporter: Eugene Koontz
Assignee: Eugene Koontz
Priority: Trivial
 Attachments: HBASE-5044.patch


 Add some documentation regarding how to fix the problem described on :
 http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath
 Should be some text like: 
 {quote}
 You should run your mapreduce job with your {{HADOOP_CLASSPATH}} set to 
 include the HBase jar and HBase's configured classpath. For example 
 (substitute your own hbase jar location for {{hbase-0.90.0-SNAPSHOT.jar}}):
 {quote}
 {code}
 HADOOP_CLASSPATH=${HBASE_HOME}/target/hbase-0.90.0-SNAPSHOT.jar:`${HBASE_HOME}/bin/hbase classpath` \
   ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/target/hbase-0.90.0-SNAPSHOT.jar rowcounter usertable
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4782) Consistency Check Utility for META Schemas

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-4782:
-

Fix Version/s: (was: 0.96.0)

 Consistency Check Utility for META Schemas
 --

 Key: HBASE-4782
 URL: https://issues.apache.org/jira/browse/HBASE-4782
 Project: HBase
  Issue Type: Improvement
Reporter: Nicolas Spiegelberg
Assignee: Madhuwanti Vaidya
Priority: Trivial
 Attachments: HBASE-4782.patch


 Adding a script to check if table descriptors for each region in META for a 
 given table are consistent.  The script compares the table descriptors in 
 META for all regions of a particular table to the table's descriptor (which 
 is just the descriptor for the first region).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4802) Disable show table metrics in bulk loader

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-4802:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk.  Thanks for patch Liyin.

 Disable show table metrics in bulk loader
 -

 Key: HBASE-4802
 URL: https://issues.apache.org/jira/browse/HBASE-4802
 Project: HBase
  Issue Type: Bug
Reporter: Nicolas Spiegelberg
Assignee: Liyin Tang
Priority: Trivial
 Fix For: 0.96.0

 Attachments: HBASE-4802.patch


 During bulk load, the Configuration object may be set to null.  This caused 
 an NPE in per-CF metrics because they consult the Configuration to determine 
 whether to show the table name.  Need to add a simple change to allow the 
 conf to be null and not specify the table name in that instance.
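The fix shape — tolerate a null configuration and simply omit the table name — can be sketched with java.util.Properties standing in for Hadoop's Configuration (the key name and helper are hypothetical):

```java
import java.util.Properties;

public class NullConfMetricsDemo {
    // Build a per-CF metric name; a null conf must not NPE -- it just
    // falls back to omitting the table name.
    static String metricName(Properties conf, String table, String cf) {
        boolean showTable = conf != null
            && Boolean.parseBoolean(conf.getProperty("metrics.showTableName", "true"));
        return (showTable ? "tbl." + table + "." : "") + "cf." + cf;
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(metricName(conf, "t1", "f1")); // tbl.t1.cf.f1
        System.out.println(metricName(null, "t1", "f1")); // cf.f1 -- no NPE
    }
}
```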

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-6871:
--

Attachment: 6871-hfile-index-0.92.txt

Patch for 0.92
When I ran the new test, I got the following:
{code}
2012-09-27 00:07:40,223 DEBUG [main] hfile.HFileWriterV2(192): Initialized with 
CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] 
[cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] 
[cacheEvictOnClose=false] [cacheCompressed=false]

Key: 
key0_4849505152535455565758596061626364656667686970717273747576777879808182838485868788899091929394959697989910010110210310410510610710810911012113114115116117118119120121122123124125126127128129130131132133134135136137138139140141142143144145146147

Key: 
key1_4849505152535455565758596061626364656667686970717273747576777879808182838485868788899091929394959697989910010110210310410510610710810911012113114115116117118119120121122123124125126127128129130131132133134135136137138139140141142143144145146147

Key: 
key2_4849505152535455565758596061626364656667686970717273747576777879808182838485868788899091929394959697989910010110210310410510610710810911012113114115116117118119120121122123124125126127128129130131132133134135136137138139140141142143144145146147

Key: 
key3_4849505152535455565758596061626364656667686970717273747576777879808182838485868788899091929394959697989910010110210310410510610710810911012113114115116117118119120121122123124125126127128129130131132133134135136137138139140141142143144145146147

2012-09-27 00:07:40,257 WARN  [main] 
fs.ChecksumFileSystem$ChecksumFSInputChecker(142): Problem opening checksum 
file: 
/Users/zhihyu/92hbase/target/test-data/647b36e2-aa0e-4a97-8443-989725a4c0df/TestHFileInlineToRootChunkConversion.hfile.
  Ignoring exception: java.io.EOFException
  at java.io.DataInputStream.readFully(DataInputStream.java:180)
  at java.io.DataInputStream.readFully(DataInputStream.java:152)
  at 
org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:134)
  at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
  at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:427)
  at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:438)
  at 
org.apache.hadoop.hbase.io.hfile.TestHFileInlineToRootChunkConversion.testWriteHFile(TestHFileInlineToRootChunkConversion.java:77)
{code}
Investigation is under way.

 HFileBlockIndex Write Error BlockIndex in HFile V2
 --

 Key: HBASE-6871
 URL: https://issues.apache.org/jira/browse/HBASE-6871
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.1
 Environment: redhat 5u4
Reporter: Fenng Wang
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: 428a400628ae412ca45d39fce15241fd.hfile, 
 6871-hfile-index-0.92.txt, 6871.txt, 787179746cc347ce9bb36f1989d17419.hfile, 
 960a026ca370464f84903ea58114bc75.hfile, 
 d0026fa8d59b4df291718f59dd145aad.hfile, D5703.1.patch, D5703.2.patch, 
 D5703.3.patch, D5703.4.patch, D5703.5.patch, hbase-6871-0.94.patch, 
 ImportHFile.java, test_hfile_block_index.sh


 After writing some data, compaction and scan operation both failure, the 
 exception message is below:
 2012-09-18 06:32:26,227 ERROR 
 org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: 
 Compaction failed 
 regionName=hfile_test,,1347778722498.d220df43fb9d8af4633bd7f547613f9e., 
 storeName=page_info, fileCount=7, fileSize=1.3m (188.0k, 188.0k, 188.0k, 
 188.0k, 188.0k, 185.8k, 223.3k), priority=9, 
 time=45826250816757428
 java.io.IOException: Could not reseek 
 StoreFileScanner[HFileScanner for reader 
 reader=hdfs://hadoopdev1.cm6:9000/hbase/hfile_test/d220df43fb9d8af4633bd7f547613f9e/page_info/b0f6118f58de47ad9d87cac438ee0895,
  compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] 
 [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
 [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], 
 

[jira] [Updated] (HBASE-6888) HBase scripts ignore any HBASE_OPTS set in the environment

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6888:
-

   Resolution: Fixed
Fix Version/s: 0.94.3
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk and 0.94.  Thanks for the patch Aditya.

Our scripts came from Hadoop, so I went to check theirs.  It looks like they 
now allow specifying OPTS in the env, so they agree w/ you Aditya.

 HBase scripts ignore any HBASE_OPTS set in the environment
 --

 Key: HBASE-6888
 URL: https://issues.apache.org/jira/browse/HBASE-6888
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.94.0, 0.96.0
Reporter: Aditya Kishore
Assignee: Aditya Kishore
Priority: Minor
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6888_trunk.patch


 hbase-env.sh, which is sourced by hbase-config.sh, which is in turn sourced 
 by the main 'hbase' script, defines HBASE_OPTS from scratch, ignoring any 
 previous value set in the environment.
 This prevents passing additional JVM parameters to HBase programs (shell, 
 hbck, etc.) launched through these scripts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6033) Adding some fuction to check if a table/region is in compaction

2012-09-27 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465098#comment-13465098
 ] 

Harsh J commented on HBASE-6033:


This was backported to 0.90, 0.92 and 0.94 releases. Can someone set the 
appropriate fix sub-versions for each?

 Adding some fuction to check if a table/region is in compaction
 ---

 Key: HBASE-6033
 URL: https://issues.apache.org/jira/browse/HBASE-6033
 Project: HBase
  Issue Type: New Feature
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: 6033-v7.txt, hbase-6033_v2.patch, hbase-6033_v3.patch, 
 hbase_6033_v5.patch, hbase_6033_v6.patch, table_ui.png


 This feature will be helpful to find out if a major compaction is going on.
 We can show if it is in any minor compaction too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6828) [WINDOWS] TestMemoryBoundedLogMessageBuffer failures

2012-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465101#comment-13465101
 ] 

Hadoop QA commented on HBASE-6828:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12546131/hbase-6828_v1-trunk.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 hadoop2.0.  The patch compiles against the hadoop 2.0 profile.

-1 javadoc.  The javadoc tool appears to have generated 140 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 6 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2947//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2947//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2947//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2947//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2947//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2947//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2947//console

This message is automatically generated.

 [WINDOWS] TestMemoryBoundedLogMessageBuffer failures
 

 Key: HBASE-6828
 URL: https://issues.apache.org/jira/browse/HBASE-6828
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Attachments: hbase-6828_v1-0.94.patch, hbase-6828_v1-trunk.patch


 TestMemoryBoundedLogMessageBuffer fails because of a suspected \n line ending 
 difference.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6882) Thrift IOError should include exception class

2012-09-27 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465103#comment-13465103
 ] 

Phabricator commented on HBASE-6882:


mbautin has closed the revision [jira] [HBASE-6882] [89-fb] Thrift IOError 
should include exception class.

CHANGED PRIOR TO COMMIT
  https://reviews.facebook.net/D5679?vs=18669&id=18843#differential-review-toc

REVISION DETAIL
  https://reviews.facebook.net/D5679

COMMIT
  https://reviews.facebook.net/rHBASEEIGHTNINEFBBRANCH1391221

To: Liyin, Karthik, aaiyer, chip, JIRA, mbautin


 Thrift IOError should include exception class
 -

 Key: HBASE-6882
 URL: https://issues.apache.org/jira/browse/HBASE-6882
 Project: HBase
  Issue Type: Improvement
Reporter: Mikhail Bautin
Assignee: Mikhail Bautin
 Attachments: D5679.1.patch


 Return exception class as part of IOError thrown from the Thrift proxy or the 
 embedded Thrift server in the regionserver.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6033) Adding some fuction to check if a table/region is in compaction

2012-09-27 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465107#comment-13465107
 ] 

Jimmy Xiang commented on HBASE-6033:


The backport was done with HBASE-6124.  I just linked it here.

 Adding some fuction to check if a table/region is in compaction
 ---

 Key: HBASE-6033
 URL: https://issues.apache.org/jira/browse/HBASE-6033
 Project: HBase
  Issue Type: New Feature
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: 6033-v7.txt, hbase-6033_v2.patch, hbase-6033_v3.patch, 
 hbase_6033_v5.patch, hbase_6033_v6.patch, table_ui.png


 This feature will be helpful to find out if a major compaction is going on.
 We can show if it is in any minor compaction too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-6871:
--

Fix Version/s: 0.92.3
   Status: Open  (was: Patch Available)

 HFileBlockIndex Write Error BlockIndex in HFile V2
 --

 Key: HBASE-6871
 URL: https://issues.apache.org/jira/browse/HBASE-6871
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.1
 Environment: redhat 5u4
Reporter: Fenng Wang
Priority: Critical
 Fix For: 0.92.3, 0.94.3, 0.96.0

 Attachments: 428a400628ae412ca45d39fce15241fd.hfile, 
 6871-hfile-index-0.92.txt, 6871-hfile-index-0.92-v2.txt, 6871.txt, 
 787179746cc347ce9bb36f1989d17419.hfile, 
 960a026ca370464f84903ea58114bc75.hfile, 
 d0026fa8d59b4df291718f59dd145aad.hfile, D5703.1.patch, D5703.2.patch, 
 D5703.3.patch, D5703.4.patch, D5703.5.patch, hbase-6871-0.94.patch, 
 ImportHFile.java, test_hfile_block_index.sh


 After writing some data, both compaction and scan operations fail; the 
 exception message is below:
 2012-09-18 06:32:26,227 ERROR 
 org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: 
 Compaction failed 
 regionName=hfile_test,,1347778722498.d220df43fb9d8af4633bd7f547613f9e., 
 storeName=page_info, fileCount=7, fileSize=1.3m (188.0k, 188.0k, 188.0k, 
 188.0k, 188.0k, 185.8k, 223.3k), priority=9, 
 time=45826250816757428java.io.IOException: Could not reseek 
 StoreFileScanner[HFileScanner for reader 
 reader=hdfs://hadoopdev1.cm6:9000/hbase/hfile_test/d220df43fb9d8af4633bd7f547613f9e/page_info/b0f6118f58de47ad9d87cac438ee0895,
  compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] 
 [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
 [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], 
 firstKey=http://com.truereligionbrandjeans.www/Womens_Dresses/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/4010.html/page_info:anchor_sig/1347764439449/DeleteColumn,
  lastKey=http://com.trura.www//page_info:page_type/1347763395089/Put, 
 avgKeyLen=776, avgValueLen=4, entries=12853, length=228611, 
 cur=http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/1347764003865/Put/vlen=1/ts=0]
  to key 
 http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/OLDEST_TIMESTAMP/Minimum/vlen=0/ts=0
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:178)
 
 at 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
 
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:299)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:244)
 
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)
 
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:402)
 at 
 org.apache.hadoop.hbase.regionserver.Store.compactStore(Store.java:1570)  
   
 at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:997) 

 at 
 org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1216)
 at 
 org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:250)
 
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: Expected block type LEAF_INDEX, but got 
 INTERMEDIATE_INDEX: blockType=INTERMEDIATE_INDEX, 
 onDiskSizeWithoutHeader=8514, uncompressedSizeWithoutHeader=131837, 
 prevBlockOffset=-1, 
 dataBeginsWith=\x00\x00\x00\x9B\x00\x00\x00\x00\x00\x00\x03#\x00\x00\x050\x00\x00\x08\xB7\x00\x00\x0Cr\x00\x00\x0F\xFA\x00\x00\x120,
  fileOffset=218942at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.validateBlockType(HFileReaderV2.java:378)
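The check that fails in the trace above can be illustrated in plain Java. This is a minimal sketch (the enum and method here are stand-ins, not the actual HFileReaderV2 code): the reader expects a LEAF_INDEX block at the offset recorded by the parent index, but an INTERMEDIATE_INDEX block was written there instead.

```java
import java.io.IOException;

public class ValidateBlockType {
    enum BlockType { DATA, LEAF_INDEX, INTERMEDIATE_INDEX }

    // Stand-in for HFileReaderV2.validateBlockType: reject a block whose
    // on-disk type does not match what the index said should be there.
    static void validateBlockType(BlockType expected, BlockType actual)
            throws IOException {
        if (expected != actual) {
            throw new IOException("Expected block type " + expected
                + ", but got " + actual);
        }
    }

    public static void main(String[] args) {
        try {
            validateBlockType(BlockType.LEAF_INDEX, BlockType.INTERMEDIATE_INDEX);
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```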
   

[jira] [Updated] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-6871:
--

Attachment: 6871-hfile-index-0.92-v2.txt

Patch v2 passes the new test.

Running test suite for 0.92

 HFileBlockIndex Write Error BlockIndex in HFile V2
 --

 Key: HBASE-6871
 URL: https://issues.apache.org/jira/browse/HBASE-6871
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.1
 Environment: redhat 5u4
Reporter: Fenng Wang
Priority: Critical
 Fix For: 0.92.3, 0.94.3, 0.96.0

 Attachments: 428a400628ae412ca45d39fce15241fd.hfile, 
 6871-hfile-index-0.92.txt, 6871-hfile-index-0.92-v2.txt, 6871.txt, 
 787179746cc347ce9bb36f1989d17419.hfile, 
 960a026ca370464f84903ea58114bc75.hfile, 
 d0026fa8d59b4df291718f59dd145aad.hfile, D5703.1.patch, D5703.2.patch, 
 D5703.3.patch, D5703.4.patch, D5703.5.patch, hbase-6871-0.94.patch, 
 ImportHFile.java, test_hfile_block_index.sh



[jira] [Commented] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13465125#comment-13465125
 ] 

Ted Yu commented on HBASE-6871:
---

All HFile related tests passed for 0.92 patch v2:
{code}
Running org.apache.hadoop.hbase.io.hfile.TestHFile
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.824 sec
Running org.apache.hadoop.hbase.io.hfile.TestHFileBlock
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.716 sec
Running org.apache.hadoop.hbase.io.hfile.TestHFileBlockIndex
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.972 sec
Running org.apache.hadoop.hbase.io.hfile.TestHFileInlineToRootChunkConversion
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.489 sec
Running org.apache.hadoop.hbase.io.hfile.TestHFilePerformance
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.803 sec
Running org.apache.hadoop.hbase.io.hfile.TestHFileReaderV1
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.472 sec
Running org.apache.hadoop.hbase.io.hfile.TestHFileSeek
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.701 sec
Running org.apache.hadoop.hbase.io.hfile.TestHFileWriterV2
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.823 sec
Running org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 178.295 sec
{code}

 HFileBlockIndex Write Error BlockIndex in HFile V2
 --

 Key: HBASE-6871
 URL: https://issues.apache.org/jira/browse/HBASE-6871
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.1
 Environment: redhat 5u4
Reporter: Fenng Wang
Priority: Critical
 Fix For: 0.92.3, 0.94.3, 0.96.0

 Attachments: 428a400628ae412ca45d39fce15241fd.hfile, 
 6871-hfile-index-0.92.txt, 6871-hfile-index-0.92-v2.txt, 6871.txt, 
 787179746cc347ce9bb36f1989d17419.hfile, 
 960a026ca370464f84903ea58114bc75.hfile, 
 d0026fa8d59b4df291718f59dd145aad.hfile, D5703.1.patch, D5703.2.patch, 
 D5703.3.patch, D5703.4.patch, D5703.5.patch, hbase-6871-0.94.patch, 
 ImportHFile.java, test_hfile_block_index.sh


 After writing some data, compaction and scan operation both failure, the 
 exception message is below:
 2012-09-18 06:32:26,227 ERROR 
 org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: 
 Compaction failed 
 regionName=hfile_test,,1347778722498.d220df43fb9d8af4633bd7f547613f9e., 
 storeName=page_info, fileCount=7, fileSize=1.3m (188.0k, 188.0k, 188.0k, 
 188.0k, 188.0k, 185.8k, 223.3k), priority=9, 
 time=45826250816757428java.io.IOException: Could not reseek 
 StoreFileScanner[HFileScanner for reader 
 reader=hdfs://hadoopdev1.cm6:9000/hbase/hfile_test/d220df43fb9d8af4633bd7f547613f9e/page_info/b0f6118f58de47ad9d87cac438ee0895,
  compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] 
 [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] 
 [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], 
 firstKey=http://com.truereligionbrandjeans.www/Womens_Dresses/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/4010.html/page_info:anchor_sig/1347764439449/DeleteColumn,
  lastKey=http://com.trura.www//page_info:page_type/1347763395089/Put, 
 avgKeyLen=776, avgValueLen=4, entries=12853, length=228611, 
 cur=http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/1347764003865/Put/vlen=1/ts=0]
  to key 
 http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/OLDEST_TIMESTAMP/Minimum/vlen=0/ts=0
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:178)
 
 at 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
 
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:299)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:244)
 
 at 
 

[jira] [Commented] (HBASE-6831) [WINDOWS] HBaseTestingUtility.expireSession() does not expire zookeeper session

2012-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13465133#comment-13465133
 ] 

Hadoop QA commented on HBASE-6831:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12546117/hbase-6831_v1-trunk.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 hadoop2.0.  The patch compiles against the hadoop 2.0 profile.

-1 javadoc.  The javadoc tool appears to have generated 140 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 6 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestFromClientSide
  org.apache.hadoop.hbase.master.TestSplitLogManager

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2948//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2948//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2948//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2948//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2948//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2948//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2948//console

This message is automatically generated.

 [WINDOWS] HBaseTestingUtility.expireSession() does not expire zookeeper 
 session
 ---

 Key: HBASE-6831
 URL: https://issues.apache.org/jira/browse/HBASE-6831
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Attachments: hbase-6831_v1-0.94.patch, hbase-6831_v1-trunk.patch


 TestReplicationPeer fails because it forces zookeeper session expiration 
 by calling HBaseTestingUtility.expireSession(), but that function fails to do 
 so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6826) [WINDOWS] TestFromClientSide failures

2012-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13465134#comment-13465134
 ] 

Hadoop QA commented on HBASE-6826:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12546126/hbase-6826_v1-trunk.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2951//console

This message is automatically generated.

 [WINDOWS] TestFromClientSide failures
 -

 Key: HBASE-6826
 URL: https://issues.apache.org/jira/browse/HBASE-6826
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Attachments: hbase-6826_v1-0.94.patch, hbase-6826_v1-trunk.patch


 The following tests fail for TestFromClientSide: 
 {code}
 testPoolBehavior()
 testClientPoolRoundRobin()
 testClientPoolThreadLocal()
 {code}
 The first test fails because it (wrongly) assumes that ThreadPoolExecutor 
 can reclaim the thread immediately. 
 The second and third tests seem to fail because the Puts to the table do 
 not specify an explicit timestamp, but on Windows consecutive calls to put 
 happen to finish in the same millisecond, so the resulting mutations have 
 the same timestamp and thus there is only one version of the cell value.  
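The timestamp collision can be reproduced in plain Java with no HBase dependency. This is a minimal sketch: cell versions are keyed by timestamp, so two writes that land in the same millisecond collapse into a single version, which is the behavior the failing tests hit on Windows.

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class TimestampCollision {
    // Apply two puts with the given timestamps and report how many
    // versions of the cell remain.
    static int versionsAfterTwoPuts(long ts1, long ts2) {
        NavigableMap<Long, String> versions = new TreeMap<>();
        versions.put(ts1, "value-1");
        versions.put(ts2, "value-2");
        return versions.size();
    }

    public static void main(String[] args) {
        long ts = 1348700000000L;                             // hypothetical clock value
        System.out.println(versionsAfterTwoPuts(ts, ts));     // 1: same millisecond
        System.out.println(versionsAfterTwoPuts(ts, ts + 1)); // 2: distinct timestamps
    }
}
```

Giving each Put an explicit, distinct timestamp (as the patch presumably does) sidesteps the clock-resolution issue.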

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6881) All regionservers are marked offline even there is still one up

2012-09-27 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13465145#comment-13465145
 ] 

Jimmy Xiang commented on HBASE-6881:


That's fine with me, agreed.


 All regionservers are marked offline even there is still one up
 ---

 Key: HBASE-6881
 URL: https://issues.apache.org/jira/browse/HBASE-6881
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Attachments: trunk-6881.patch


 {noformat}
 +RegionPlan newPlan = plan;
 +if (!regionAlreadyInTransitionException) {
 +  // Force a new plan and reassign. Will return null if no servers.
 +  newPlan = getRegionPlan(state, plan.getDestination(), true);
 +}
 +if (newPlan == null) {
this.timeoutMonitor.setAllRegionServersOffline(true);
    LOG.warn("Unable to find a viable location to assign region " +
      state.getRegion().getRegionNameAsString());
 {noformat}
 Here, when newPlan is null, plan.getDestination() could actually still be up.
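The point can be sketched in plain Java (illustrative names only, not the actual AssignmentManager code): before flagging every regionserver as offline, also check whether the original plan's destination is still among the live servers.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class OfflineCheck {
    // A null/absent new plan means no *other* viable server was found,
    // but the old destination may still be up.
    static boolean allServersOffline(boolean newPlanFound, String destination,
                                     Set<String> onlineServers) {
        return !newPlanFound && !onlineServers.contains(destination);
    }

    public static void main(String[] args) {
        Set<String> online = new HashSet<>(Arrays.asList("rs1,60020", "rs2,60020"));
        System.out.println(allServersOffline(false, "rs1,60020", online)); // false
        System.out.println(allServersOffline(false, "rs9,60020", online)); // true
    }
}
```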

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2

2012-09-27 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6871:
-

Attachment: 6871-0.94.txt

0.94 patch. Passes the new test.

 HFileBlockIndex Write Error BlockIndex in HFile V2
 --

 Key: HBASE-6871
 URL: https://issues.apache.org/jira/browse/HBASE-6871
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.1
 Environment: redhat 5u4
Reporter: Fenng Wang
Priority: Critical
 Fix For: 0.92.3, 0.94.3, 0.96.0

 Attachments: 428a400628ae412ca45d39fce15241fd.hfile, 6871-0.94.txt, 
 6871-hfile-index-0.92.txt, 6871-hfile-index-0.92-v2.txt, 6871.txt, 
 787179746cc347ce9bb36f1989d17419.hfile, 
 960a026ca370464f84903ea58114bc75.hfile, 
 d0026fa8d59b4df291718f59dd145aad.hfile, D5703.1.patch, D5703.2.patch, 
 D5703.3.patch, D5703.4.patch, D5703.5.patch, hbase-6871-0.94.patch, 
 ImportHFile.java, test_hfile_block_index.sh



[jira] [Commented] (HBASE-6888) HBase scripts ignore any HBASE_OPTS set in the environment

2012-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13465151#comment-13465151
 ] 

Hudson commented on HBASE-6888:
---

Integrated in HBase-TRUNK #3386 (See 
[https://builds.apache.org/job/HBase-TRUNK/3386/])
HBASE-6888 HBase scripts ignore any HBASE_OPTS set in the environment 
(Revision 1391211)

 Result = FAILURE
stack : 
Files : 
* /hbase/trunk/conf/hbase-env.sh


 HBase scripts ignore any HBASE_OPTS set in the environment
 --

 Key: HBASE-6888
 URL: https://issues.apache.org/jira/browse/HBASE-6888
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.94.0, 0.96.0
Reporter: Aditya Kishore
Assignee: Aditya Kishore
Priority: Minor
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6888_trunk.patch


 hbase-env.sh, which is sourced by hbase-config.sh (which is eventually sourced 
 by the main 'hbase' script), defines HBASE_OPTS from scratch, ignoring any 
 previous value set in the environment.
 This prevents passing additional JVM parameters to HBase programs 
 (shell, hbck, etc.) launched through these scripts.
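The fix amounts to appending to the caller's value instead of redefining it. A minimal sketch of the pattern in hbase-env.sh (the flag values here are only examples, not what the patch adds):

```shell
# Append to any HBASE_OPTS already set in the caller's environment instead of
# redefining the variable from scratch.
export HBASE_OPTS="$HBASE_OPTS -ea -XX:+HeapDumpOnOutOfMemoryError"
```

With this, `HBASE_OPTS="-Xmx4g" hbase shell` keeps both the user's `-Xmx4g` and the script's defaults.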

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-4802) Disable show table metrics in bulk loader

2012-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13465150#comment-13465150
 ] 

Hudson commented on HBASE-4802:
---

Integrated in HBase-TRUNK #3386 (See 
[https://builds.apache.org/job/HBase-TRUNK/3386/])
HBASE-4802 Disable show table metrics in bulk loader (Revision 1391206)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaConfigured.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaMetrics.java


 Disable show table metrics in bulk loader
 -

 Key: HBASE-4802
 URL: https://issues.apache.org/jira/browse/HBASE-4802
 Project: HBase
  Issue Type: Bug
Reporter: Nicolas Spiegelberg
Assignee: Liyin Tang
Priority: Trivial
 Fix For: 0.96.0

 Attachments: HBASE-4802.patch


 During bulk load, the Configuration object may be set to null.  This caused 
 an NPE in per-CF metrics because it consults the Configuration to determine 
 whether to show the table name.  Need to add a simple change to allow the conf 
 to be null and not specify the table name in that instance.
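The guard described above can be sketched in plain Java (names are illustrative; a Map stands in for org.apache.hadoop.conf.Configuration): when the conf is null, fall back to omitting the table name instead of dereferencing the null reference.

```java
import java.util.HashMap;
import java.util.Map;

public class NullConfGuard {
    // conf stands in for org.apache.hadoop.conf.Configuration; the property
    // key is a hypothetical example.
    static boolean showTableName(Map<String, String> conf) {
        if (conf == null) {
            return false;           // bulk load path: no conf, so skip the name
        }
        return Boolean.parseBoolean(
            conf.getOrDefault("hbase.metrics.showTableName", "true"));
    }

    public static void main(String[] args) {
        System.out.println(showTableName(null));            // false, no NPE
        System.out.println(showTableName(new HashMap<>())); // true (default)
    }
}
```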

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6611) Forcing region state offline cause double assignment

2012-09-27 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13465153#comment-13465153
 ] 

Jimmy Xiang commented on HBASE-6611:


Cool, thanks.

There is one problem with the patch I am still thinking about.

In the bulk assignment, I keep the ZK node offline calls async for performance 
reasons.  However, that depends on the zk event thread's callback to know whether 
all nodes have been created.  If the single event thread is blocked on any 
lock held by the bulk assigner, there will be a deadlock.

What should we do about this?

Instead of async ZK node offline, I am thinking of using an executor service to 
set ZK nodes offline synchronously, so that we don't degrade performance too much.
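The proposed alternative can be sketched in plain Java: rather than depending on the single ZK event thread's callbacks, fan the (simulated) "create offline node" calls out to a worker pool and block on the futures in the caller. All names here are illustrative, not the actual AssignmentManager/ZKAssign API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SyncOfflineNodes {
    static int offlineAll(int regionCount) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Boolean>> results = new ArrayList<>();
        for (int i = 0; i < regionCount; i++) {
            // stand-in for a synchronous createNodeOffline(region) call
            results.add(pool.submit(() -> true));
        }
        int created = 0;
        try {
            for (Future<Boolean> f : results) {
                if (f.get()) created++;  // caller blocks here, not the ZK event thread
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        pool.shutdown();
        return created;
    }

    public static void main(String[] args) {
        System.out.println(offlineAll(10)); // 10
    }
}
```

Because the waiting happens on the bulk assigner's own threads, the ZK event thread never has to deliver a callback while the assigner holds a lock.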



 Forcing region state offline cause double assignment
 

 Key: HBASE-6611
 URL: https://issues.apache.org/jira/browse/HBASE-6611
 Project: HBase
  Issue Type: Bug
  Components: master
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0


 In assigning a region, the assignment manager forces the region state offline if 
 it is not. This could cause double assignment; for example, if the region is 
 already assigned and in the Open state, you should not just change its state 
 to Offline and assign it again.
 I think this could be the root cause of all double assignments IF the region 
 state is reliable.
 After this loophole is closed, TestHBaseFsck should come up with a different way 
 to create some assignment inconsistencies, for example, calling the region server 
 to open a region directly. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6888) HBase scripts ignore any HBASE_OPTS set in the environment

2012-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13465155#comment-13465155
 ] 

Hudson commented on HBASE-6888:
---

Integrated in HBase-0.94 #494 (See 
[https://builds.apache.org/job/HBase-0.94/494/])
HBASE-6888 HBase scripts ignore any HBASE_OPTS set in the environment 
(Revision 1391214)

 Result = FAILURE
stack : 
Files : 
* /hbase/branches/0.94/conf/hbase-env.sh


 HBase scripts ignore any HBASE_OPTS set in the environment
 --

 Key: HBASE-6888
 URL: https://issues.apache.org/jira/browse/HBASE-6888
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.94.0, 0.96.0
Reporter: Aditya Kishore
Assignee: Aditya Kishore
Priority: Minor
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6888_trunk.patch


 hbase-env.sh, which is sourced by hbase-config.sh (which is eventually sourced 
 by the main 'hbase' script), defines HBASE_OPTS from scratch, ignoring any 
 previous value set in the environment.
 This prevents passing additional JVM parameters to HBase programs 
 (shell, hbck, etc.) launched through these scripts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-3737) HTable - delete(List&lt;Delete&gt;) doesn't use writebuffer

2012-09-27 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13465160#comment-13465160
 ] 

Lars Hofhansl commented on HBASE-3737:
--

This is still the case in trunk.

Also, looking at the delete(List&lt;Delete&gt;) code, the passed list gets modified 
and will contain those Deletes that failed to execute. The client 
presumably has to check and retry. I doubt anybody is doing that.

put(List&lt;Put&gt;) is similar (but worse, IMHO). The call to the put method happily 
returns even when there are leftover Puts in the write buffer.
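The check-and-retry pattern the comment implies can be sketched in plain Java. This is a simulation with strings standing in for real Delete objects, not the HTable code itself: the batch call removes the successful entries from the passed list, so anything left in it afterwards failed and must be retried by the caller.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class DeleteRetry {
    // Stand-in for HTable.delete(List<Delete>): succeeds for everything
    // except the "bad" entry, and removes successes from the passed list.
    static void delete(List<String> deletes) {
        Iterator<String> it = deletes.iterator();
        while (it.hasNext()) {
            if (!it.next().equals("bad")) {
                it.remove();        // success: removed from the caller's list
            }
        }
    }

    // The client-side pattern: keep retrying whatever is left in the list.
    static List<String> deleteWithRetries(List<String> pending, int maxAttempts) {
        for (int i = 0; i < maxAttempts && !pending.isEmpty(); i++) {
            delete(pending);
        }
        return pending;             // leftovers are permanent failures
    }

    public static void main(String[] args) {
        List<String> pending = new ArrayList<>(Arrays.asList("a", "bad", "c"));
        System.out.println(deleteWithRetries(pending, 3)); // [bad]
    }
}
```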

 HTable - delete(List&lt;Delete&gt;) doesn't use writebuffer
 -

 Key: HBASE-3737
 URL: https://issues.apache.org/jira/browse/HBASE-3737
 Project: HBase
  Issue Type: Improvement
Reporter: Doug Meil
Priority: Minor

 I just realized that htable.delete(List&lt;Delete&gt;) doesn't use the write buffer 
 and processes the list immediately, but htable.put(List&lt;Put&gt;) does use the 
 write buffer (i.e., send when filled). Likewise, htable.delete(Delete) sends 
 immediately.
  
 Out of sheer curiosity, why?  With the 'batch' methods now in place, it seems 
 like it would be consistent for 'delete' and 'put' to use the write buffer 
 (assuming it is expanded to hold more than Puts), whereas the 'batch' methods 
 would process immediately.
 This isn't a huge issue, but it does seem a little inconsistent. 
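The asymmetry described above can be modeled in a few lines of self-contained Java. This is an illustrative sketch of the reported buffering behavior, not HBase's actual HTable implementation; the class and method names are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;

public class WriteBufferModel {
    private final List<String> buffer = new ArrayList<>();
    private final int capacity;
    int rpcs = 0;  // how many round-trips have gone to the server

    WriteBufferModel(int capacity) { this.capacity = capacity; }

    // put(List<Put>)-style: queued locally, sent only once the buffer fills.
    void put(List<String> puts) {
        buffer.addAll(puts);
        if (buffer.size() >= capacity) flush();
    }

    // delete(List<Delete>)-style: processed immediately, buffer bypassed.
    void delete(List<String> deletes) { rpcs++; }

    void flush() { buffer.clear(); rpcs++; }

    int buffered() { return buffer.size(); }

    public static void main(String[] args) {
        WriteBufferModel table = new WriteBufferModel(10);
        table.put(List.of("p1", "p2"));   // still client-side: no RPC yet
        table.delete(List.of("d1"));      // goes out immediately: 1 RPC
        System.out.println(table.buffered() + " buffered puts, " + table.rpcs + " rpcs");
        // prints "2 buffered puts, 1 rpcs"
    }
}
```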



[jira] [Updated] (HBASE-4258) Metrics panel for the Off Heap Cache

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-4258:
-

Fix Version/s: (was: 0.96.0)

Moving out of 0.96; move it back if you disagree.

 Metrics panel for the Off Heap Cache
 

 Key: HBASE-4258
 URL: https://issues.apache.org/jira/browse/HBASE-4258
 Project: HBase
  Issue Type: Sub-task
Reporter: Li Pi
Assignee: Li Pi
Priority: Minor
 Attachments: hbase-4258v1.txt


 Currently, stats are reported through the logs and configuration through an 
 XML file. There should be a better, more intuitive graphical interface for 
 this.
 We could also add metrics to the RS metrics page.



[jira] [Updated] (HBASE-4259) Investigate different memory allocation models for off heap caching.

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-4259:
-

Fix Version/s: (was: 0.96.0)

Moving out of 0.96; move it back if you disagree.

 Investigate different memory allocation models for off heap caching.
 

 Key: HBASE-4259
 URL: https://issues.apache.org/jira/browse/HBASE-4259
 Project: HBase
  Issue Type: Sub-task
Reporter: Li Pi
Assignee: Li Pi
Priority: Minor

 Currently, the off-heap cache uses Memcached's allocation model, which works 
 reasonably well, but other memory allocation models, such as fragmented 
 writes or buddy allocation, may be better suited to the task and require 
 less configuration from the user's perspective.



[jira] [Updated] (HBASE-4113) Add createAsync and splits by start and end key to the shell

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-4113:
-

Fix Version/s: (was: 0.96.0)

Moving out of 0.96; move it back if you disagree.

LarsG, if you are up for it, try your patch and commit it if it still works 
(you got two +1s on it).

One thought: could this issue have been done as an option on the create command?

Thanks.

 Add createAsync and splits by start and end key to the shell
 

 Key: HBASE-4113
 URL: https://issues.apache.org/jira/browse/HBASE-4113
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.92.0
Reporter: Lars George
Priority: Minor
 Attachments: HBASE-4113.patch, HBASE-4113-v2.patch






[jira] [Updated] (HBASE-4360) Maintain information on the time a RS went dead

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-4360:
-

Fix Version/s: (was: 0.96.0)

Moving out of 0.96; move it back if you disagree.

This is a good idea.  If a patch shows up soon, I will commit it; moving out 
for now since this is an improvement.

 Maintain information on the time a RS went dead
 ---

 Key: HBASE-4360
 URL: https://issues.apache.org/jira/browse/HBASE-4360
 Project: HBase
  Issue Type: Improvement
  Components: master
Affects Versions: 0.94.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor

 Just something that'd be generally helpful: maintain the DeadServer info 
 with the timestamp of when the server was last determined to be dead.
 It makes it easier to hunt through the logs, and I don't think it's too 
 expensive to maintain (one additional update per dead determination).



[jira] [Resolved] (HBASE-3731) NPE in HTable.getRegionsInfo()

2012-09-27 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-3731.
--

Resolution: Implemented

The mentioned code no longer exists in 0.96. Closing.

 NPE in HTable.getRegionsInfo()
 --

 Key: HBASE-3731
 URL: https://issues.apache.org/jira/browse/HBASE-3731
 Project: HBase
  Issue Type: Bug
Reporter: Liyin Tang

 In HTable.getRegionsInfo:
 {code}
 HRegionInfo info = Writables.getHRegionInfo(
     rowResult.getValue(HConstants.CATALOG_FAMILY,
         HConstants.REGIONINFO_QUALIFIER));
 {code}
 But rowResult.getValue() may return null, and Writables.getHRegionInfo will 
 throw a NullPointerException when the parameter is null.
 Two fixes here: 
 1) In Writables.getHRegionInfo(), check whether the data is null before 
 using data.length.
 2) In HTable.getRegionsInfo():
 {code}
 HRegionInfo info = Writables.getHRegionInfoOrNull(
     rowResult.getValue(HConstants.CATALOG_FAMILY,
         HConstants.REGIONINFO_QUALIFIER));
 if (info == null)
   return false;
 {code}
 Any thoughts?



[jira] [Updated] (HBASE-4410) FilterList.filterKeyValue can return suboptimal ReturnCodes

2012-09-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-4410:
-

Fix Version/s: (was: 0.96.0)

Moving an improvement out of 0.96; move it back if you disagree, JG.

 FilterList.filterKeyValue can return suboptimal ReturnCodes
 ---

 Key: HBASE-4410
 URL: https://issues.apache.org/jira/browse/HBASE-4410
 Project: HBase
  Issue Type: Improvement
  Components: Filters
Reporter: Jonathan Gray
Assignee: Jonathan Gray
Priority: Minor
 Attachments: HBASE-4410-v1.patch


 FilterList.filterKeyValue does not always return the most optimal ReturnCode 
 in both the AND and OR conditions.
 For example, if you have F1 AND F2 and F1 returns SKIP, the list immediately 
 returns SKIP.  However, if F2 would have returned NEXT_COL, NEXT_ROW, or 
 SEEK_NEXT_USING_HINT, we would actually be able to return the more optimal 
 ReturnCode from F2.
 For AND conditions, we can always pick the *most restrictive* return code.
 For OR conditions, we must always pick the *least restrictive* return code.
 This JIRA is to review the FilterList.filterKeyValue() method to try to make 
 it more optimal and to add a new unit test which verifies the correct 
 behavior.
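The most/least-restrictive rule above can be sketched as a comparison over an ordered enum. The ordering below is an assumption made for illustration; it is not the actual org.apache.hadoop.hbase.filter.Filter.ReturnCode definition or FilterList's real merging code.

```java
public class ReturnCodeMerge {
    // Assumed ordering from least restrictive (INCLUDE) to most restrictive.
    enum ReturnCode { INCLUDE, SKIP, NEXT_COL, NEXT_ROW, SEEK_NEXT_USING_HINT }

    // AND: a cell must pass every filter, so the strongest skip wins.
    static ReturnCode mergeAnd(ReturnCode a, ReturnCode b) {
        return a.ordinal() >= b.ordinal() ? a : b;
    }

    // OR: a cell passes if any filter accepts it, so the weakest code wins.
    static ReturnCode mergeOr(ReturnCode a, ReturnCode b) {
        return a.ordinal() <= b.ordinal() ? a : b;
    }

    public static void main(String[] args) {
        // F1 returns SKIP, F2 returns NEXT_ROW: under AND we may jump a whole row.
        System.out.println(mergeAnd(ReturnCode.SKIP, ReturnCode.NEXT_ROW)); // NEXT_ROW
        System.out.println(mergeOr(ReturnCode.SKIP, ReturnCode.NEXT_ROW));  // SKIP
    }
}
```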


