[jira] [Updated] (HBASE-5298) Add thrift metrics to thrift2

2012-02-08 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HBASE-5298:
---

Attachment: HBASE-5298.D1629.3.patch

sc updated the revision HBASE-5298 [jira] Add thrift metrics to thrift2.
Reviewers: tedyu, JIRA

  Fix TestCallQueue and remove a dead variable.

REVISION DETAIL
  https://reviews.facebook.net/D1629

AFFECTED FILES
  src/main/java/org/apache/hadoop/hbase/thrift/TBoundedThreadPoolServer.java
  src/main/java/org/apache/hadoop/hbase/thrift/ThriftMetrics.java
  src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
  src/main/java/org/apache/hadoop/hbase/thrift2/ThriftHBaseServiceHandler.java
  src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java
  src/test/java/org/apache/hadoop/hbase/thrift/TestCallQueue.java
  src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java
  src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandler.java


 Add thrift metrics to thrift2
 -

 Key: HBASE-5298
 URL: https://issues.apache.org/jira/browse/HBASE-5298
 Project: HBase
  Issue Type: Improvement
  Components: metrics, thrift
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: 5298-v3.txt, HBASE-5298.D1629.1.patch, 
 HBASE-5298.D1629.2.patch, HBASE-5298.D1629.3.patch


 We have added thrift metrics collection in HBASE-5186.
 It will be good to have them in thrift2 as well.
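
For background on the mechanism being ported: a common way to collect per-call latency for a Thrift handler is to wrap the handler interface in a reflective timing proxy. Below is a minimal, self-contained sketch of that technique only; the class and the printed output are illustrative, not the actual code of this patch or HBASE-5186.

{code}
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Wraps any handler interface and times each method call. A real
// implementation would report the latency to a metrics sink rather
// than printing it.
public final class TimingProxy implements InvocationHandler {
  private final Object delegate;

  private TimingProxy(Object delegate) {
    this.delegate = delegate;
  }

  @SuppressWarnings("unchecked")
  public static <T> T wrap(Class<T> iface, T delegate) {
    return (T) Proxy.newProxyInstance(iface.getClassLoader(),
        new Class<?>[] { iface }, new TimingProxy(delegate));
  }

  @Override
  public Object invoke(Object proxy, Method method, Object[] args)
      throws Throwable {
    long start = System.nanoTime();
    try {
      return method.invoke(delegate, args);
    } catch (InvocationTargetException e) {
      throw e.getCause();  // surface the handler's original exception
    } finally {
      long micros = (System.nanoTime() - start) / 1000;
      System.out.println(method.getName() + " took " + micros + "us");
    }
  }
}
{code}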

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5311) Allow inmemory Memstore compactions

2012-02-08 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13203364#comment-13203364
 ] 

Todd Lipcon commented on HBASE-5311:


The assumption with the code I was writing was a single flusher thread -- 
which could be enforced by making the flush/compact stuff synchronized on a 
separate lock. The idea is basically to allow the common case to run lockless 
and push all the effort to whoever wants to make updates to the data structure.
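
As a hypothetical sketch of the scheme Todd describes (assumptions, not his actual code): the common read/write path stays lockless on a concurrent structure, while a dedicated lock serializes flush/compact so at most one flusher thread ever restructures the data.

{code}
import java.util.Arrays;
import java.util.concurrent.ConcurrentSkipListMap;

// Toy "memstore": gets/puts take no global lock; all maintenance work
// is pushed onto the (single) thread doing the structural mutation.
public class LocklessMemstoreSketch {
  static final class Cell {
    final byte[] value;
    final long expiresAtMs;
    Cell(byte[] value, long expiresAtMs) {
      this.value = value;
      this.expiresAtMs = expiresAtMs;
    }
  }

  private final ConcurrentSkipListMap<byte[], Cell> active =
      new ConcurrentSkipListMap<>(Arrays::compare);
  private final Object flushLock = new Object();  // single-flusher guarantee

  public void put(byte[] key, Cell cell) {
    active.put(key, cell);   // lockless common case
  }

  public Cell get(byte[] key) {
    return active.get(key);  // lockless common case
  }

  public void compactInMemory(long nowMs) {
    synchronized (flushLock) {  // only one flusher/compactor at a time
      active.entrySet().removeIf(e -> e.getValue().expiresAtMs <= nowMs);
    }
  }
}
{code}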

 Allow inmemory Memstore compactions
 ---

 Key: HBASE-5311
 URL: https://issues.apache.org/jira/browse/HBASE-5311
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
 Attachments: InternallyLayeredMap.java


 Just like we periodically compact the StoreFiles we should also periodically 
 compact the MemStore.
 During these compactions we eliminate deleted cells, expired cells, cells to 
 be removed because of version count, etc., before we even do a memstore flush.
 Besides the optimization that we could get from this, it should also allow us 
 to remove the special handling of ICV, Increment, and Append (all of which 
 use upsert logic to avoid accumulating excessive cells in the Memstore).
 Not targeting this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5074) support checksums in HBase block cache

2012-02-08 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HBASE-5074:
---

Attachment: D1521.5.patch

dhruba updated the revision [jira] [HBASE-5074] Support checksums in HBase 
block cache.
Reviewers: mbautin

  Incorporated review comments from Ted.

REVISION DETAIL
  https://reviews.facebook.net/D1521

AFFECTED FILES
  src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
  src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransaction.java
  src/test/java/org/apache/hadoop/hbase/regionserver/CreateRandomStoreFile.java
  src/test/java/org/apache/hadoop/hbase/regionserver/handler/TestCloseRegionHandler.java
  src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java
  src/test/java/org/apache/hadoop/hbase/regionserver/HFileReadWriteTest.java
  src/test/java/org/apache/hadoop/hbase/regionserver/TestCompoundBloomFilter.java
  src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
  src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
  src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java
  src/test/java/org/apache/hadoop/hbase/io/hfile/TestChecksum.java
  src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java
  src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileReaderV1.java
  src/test/java/org/apache/hadoop/hbase/io/hfile/TestFixedFileTrailer.java
  src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java
  src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
  src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java
  src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
  src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java
  src/test/java/org/apache/hadoop/hbase/util/MockRegionServerServices.java
  src/main/java/org/apache/hadoop/hbase/HConstants.java
  src/main/java/org/apache/hadoop/hbase/util/ChecksumType.java
  src/main/java/org/apache/hadoop/hbase/util/HFileSystem.java
  src/main/java/org/apache/hadoop/hbase/util/ChecksumByteArrayOutputStream.java
  src/main/java/org/apache/hadoop/hbase/util/CompoundBloomFilter.java
  src/main/java/org/apache/hadoop/hbase/util/ChecksumFactory.java
  src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRegionHandler.java
  src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java
  src/main/java/org/apache/hadoop/hbase/regionserver/metrics/RegionServerMetrics.java
  src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
  src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
  src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
  src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
  src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
  src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoder.java
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
  src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java
  src/main/java/org/apache/hadoop/hbase/io/hfile/NoOpDataBlockEncoder.java
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV1.java
  src/main/java/org/apache/hadoop/hbase/io/hfile/FixedFileTrailer.java
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV1.java


 support checksums in HBase block cache
 --

 Key: HBASE-5074
 URL: https://issues.apache.org/jira/browse/HBASE-5074
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: dhruba borthakur
Assignee: dhruba borthakur
 Attachments: D1521.1.patch, D1521.1.patch, D1521.2.patch, 
 D1521.2.patch, D1521.3.patch, D1521.3.patch, D1521.4.patch, D1521.4.patch, 
 D1521.5.patch, D1521.5.patch


 The current implementation of HDFS stores the data in one block file and the 
 metadata(checksum) in another block file. This means that every read into the 
 HBase block cache actually consumes two disk iops, one to the datafile and 
 one to the checksum file. This is a major problem for scaling HBase, because 
 HBase is usually bottlenecked on the number of random disk iops that the 
 storage-hardware offers.
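
As a hedged illustration of the direction this issue takes (not the patch itself): if a checksum is computed at write time and stored inline with the block, a read can verify the data from the single buffer it already fetched, so the separate checksum-file iop disappears. The trailing-CRC layout below is an assumption made for the sketch.

{code}
import java.util.zip.CRC32;

// Append a CRC32 to the block at write time; verify it from the same
// buffer at read time, so data and checksum cost a single iop.
public class InlineChecksumSketch {
  static byte[] writeBlock(byte[] block) {
    CRC32 crc = new CRC32();
    crc.update(block, 0, block.length);
    long sum = crc.getValue();
    byte[] out = new byte[block.length + 8];
    System.arraycopy(block, 0, out, 0, block.length);
    for (int i = 0; i < 8; i++) {
      out[block.length + i] = (byte) (sum >>> (56 - 8 * i));
    }
    return out;
  }

  static boolean verifyBlock(byte[] stored) {
    int dataLen = stored.length - 8;
    CRC32 crc = new CRC32();
    crc.update(stored, 0, dataLen);
    long expected = 0;
    for (int i = 0; i < 8; i++) {
      expected = (expected << 8) | (stored[dataLen + i] & 0xFF);
    }
    return crc.getValue() == expected;
  }
}
{code}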

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5298) Add thrift metrics to thrift2

2012-02-08 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HBASE-5298:
---

Attachment: HBASE-5298.D1629.4.patch

sc updated the revision HBASE-5298 [jira] Add thrift metrics to thrift2.
Reviewers: tedyu, JIRA

  Fixed the inline comments

REVISION DETAIL
  https://reviews.facebook.net/D1629

AFFECTED FILES
  src/main/java/org/apache/hadoop/hbase/thrift/TBoundedThreadPoolServer.java
  src/main/java/org/apache/hadoop/hbase/thrift/ThriftMetrics.java
  src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
  src/main/java/org/apache/hadoop/hbase/thrift2/ThriftHBaseServiceHandler.java
  src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java
  src/test/java/org/apache/hadoop/hbase/thrift/TestCallQueue.java
  src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java
  src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandler.java


 Add thrift metrics to thrift2
 -

 Key: HBASE-5298
 URL: https://issues.apache.org/jira/browse/HBASE-5298
 Project: HBase
  Issue Type: Improvement
  Components: metrics, thrift
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: 5298-v3.txt, HBASE-5298.D1629.1.patch, 
 HBASE-5298.D1629.2.patch, HBASE-5298.D1629.3.patch, HBASE-5298.D1629.4.patch


 We have added thrift metrics collection in HBASE-5186.
 It will be good to have them in thrift2 as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5074) support checksums in HBase block cache

2012-02-08 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13203372#comment-13203372
 ] 

Phabricator commented on HBASE-5074:


dhruba has commented on the revision [jira] [HBASE-5074] Support checksums in 
HBase block cache.

  Todd: can you please re-review this one more time (at least to ensure that 
your earlier concerns are addressed)?

REVISION DETAIL
  https://reviews.facebook.net/D1521


 support checksums in HBase block cache
 --

 Key: HBASE-5074
 URL: https://issues.apache.org/jira/browse/HBASE-5074
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: dhruba borthakur
Assignee: dhruba borthakur
 Attachments: D1521.1.patch, D1521.1.patch, D1521.2.patch, 
 D1521.2.patch, D1521.3.patch, D1521.3.patch, D1521.4.patch, D1521.4.patch, 
 D1521.5.patch, D1521.5.patch


 The current implementation of HDFS stores the data in one block file and the 
 metadata(checksum) in another block file. This means that every read into the 
 HBase block cache actually consumes two disk iops, one to the datafile and 
 one to the checksum file. This is a major problem for scaling HBase, because 
 HBase is usually bottlenecked on the number of random disk iops that the 
 storage-hardware offers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HBASE-5075) regionserver crashed,and failover

2012-02-08 Thread 代志远 (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

代志远 resolved HBASE-5075.


   Resolution: Fixed
Fix Version/s: 0.92.1  (was: 0.90.4)

 regionserver crashed,and failover
 -

 Key: HBASE-5075
 URL: https://issues.apache.org/jira/browse/HBASE-5075
 Project: HBase
  Issue Type: Improvement
  Components: monitoring, regionserver, replication, zookeeper
Affects Versions: 0.90.4
Reporter: 代志远
 Fix For: 0.92.1


 When a regionserver crashes, it takes too long to notify the HMaster, and once 
 the HMaster knows about the regionserver's shutdown, it takes a long time to 
 recover the HLog's lease.
 HBase is an online DB, so availability is very important.
 My idea to improve availability: a monitor node checks the regionserver's pid; 
 if the pid no longer exists, the regionserver is considered down, its znode is 
 deleted, and the HLog file is force-closed.
 The check period could then be about 100ms.
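
For concreteness, a hypothetical sketch of the proposal (the ZooKeeper quorum, znode path, and the Linux /proc check are illustrative assumptions, not the reporter's actual patch):

{code}
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.zookeeper.ZooKeeper;

// Poll the regionserver pid every ~100ms; when it disappears, delete
// the server's znode so the master can start failover immediately.
public class RegionServerPidMonitor {
  public static void main(String[] args) throws Exception {
    long pid = Long.parseLong(args[0]);  // regionserver pid to watch
    String znode = args[1];              // e.g. the RS ephemeral znode path
    ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
    try {
      while (Files.exists(Paths.get("/proc/" + pid))) {  // Linux-only check
        Thread.sleep(100);                               // the ~100ms period
      }
      zk.delete(znode, -1);  // declare the RS dead; master begins recovery
      // a real implementation would also force-close the HLog, as described
    } finally {
      zk.close();
    }
  }
}
{code}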

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5074) support checksums in HBase block cache

2012-02-08 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13203399#comment-13203399
 ] 

Hadoop QA commented on HBASE-5074:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12513780/D1521.5.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 58 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated -132 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 160 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.io.hfile.TestHFileBlock

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/923//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/923//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/923//console

This message is automatically generated.

 support checksums in HBase block cache
 --

 Key: HBASE-5074
 URL: https://issues.apache.org/jira/browse/HBASE-5074
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: dhruba borthakur
Assignee: dhruba borthakur
 Attachments: D1521.1.patch, D1521.1.patch, D1521.2.patch, 
 D1521.2.patch, D1521.3.patch, D1521.3.patch, D1521.4.patch, D1521.4.patch, 
 D1521.5.patch, D1521.5.patch


 The current implementation of HDFS stores the data in one block file and the 
 metadata(checksum) in another block file. This means that every read into the 
 HBase block cache actually consumes two disk iops, one to the datafile and 
 one to the checksum file. This is a major problem for scaling HBase, because 
 HBase is usually bottlenecked on the number of random disk iops that the 
 storage-hardware offers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5298) Add thrift metrics to thrift2

2012-02-08 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13203406#comment-13203406
 ] 

Hadoop QA commented on HBASE-5298:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12513778/HBASE-5298.D1629.4.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 9 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated -136 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 156 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat
  org.apache.hadoop.hbase.mapred.TestTableMapReduce
  org.apache.hadoop.hbase.io.hfile.TestHFileBlock
  org.apache.hadoop.hbase.mapreduce.TestImportTsv

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/922//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/922//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/922//console

This message is automatically generated.

 Add thrift metrics to thrift2
 -

 Key: HBASE-5298
 URL: https://issues.apache.org/jira/browse/HBASE-5298
 Project: HBase
  Issue Type: Improvement
  Components: metrics, thrift
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: 5298-v3.txt, HBASE-5298.D1629.1.patch, 
 HBASE-5298.D1629.2.patch, HBASE-5298.D1629.3.patch, HBASE-5298.D1629.4.patch


 We have added thrift metrics collection in HBASE-5186.
 It will be good to have them in thrift2 as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5075) regionserver crashed,and failover

2012-02-08 Thread 代志远 (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13203405#comment-13203405
 ] 

代志远 commented on HBASE-5075:


HBase is an online DB; its availability is very important.
If a regionserver crashes, we can't wait too long to recover service,
so my patch improves the failover ability of HBase.

I come from Alipay in China; we run several HBase clusters online, including 
the second-largest cluster in China.

 regionserver crashed,and failover
 -

 Key: HBASE-5075
 URL: https://issues.apache.org/jira/browse/HBASE-5075
 Project: HBase
  Issue Type: Improvement
  Components: monitoring, regionserver, replication, zookeeper
Affects Versions: 0.90.4
Reporter: 代志远
 Fix For: 0.92.1


 When a regionserver crashes, it takes too long to notify the HMaster, and once 
 the HMaster knows about the regionserver's shutdown, it takes a long time to 
 recover the HLog's lease.
 HBase is an online DB, so availability is very important.
 My idea to improve availability: a monitor node checks the regionserver's pid; 
 if the pid no longer exists, the regionserver is considered down, its znode is 
 deleted, and the HLog file is force-closed.
 The check period could then be about 100ms.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Reopened] (HBASE-5075) regionserver crashed,and failover

2012-02-08 Thread 代志远 (Reopened) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

代志远 reopened HBASE-5075:



 regionserver crashed,and failover
 -

 Key: HBASE-5075
 URL: https://issues.apache.org/jira/browse/HBASE-5075
 Project: HBase
  Issue Type: Improvement
  Components: monitoring, regionserver, replication, zookeeper
Affects Versions: 0.92.1
Reporter: 代志远

 When a regionserver crashes, it takes too long to notify the HMaster, and once 
 the HMaster knows about the regionserver's shutdown, it takes a long time to 
 recover the HLog's lease.
 HBase is an online DB, so availability is very important.
 My idea to improve availability: a monitor node checks the regionserver's pid; 
 if the pid no longer exists, the regionserver is considered down, its znode is 
 deleted, and the HLog file is force-closed.
 The check period could then be about 100ms.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5075) regionserver crashed and failover

2012-02-08 Thread 代志远 (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

代志远 updated HBASE-5075:
---

Affects Version/s: 0.92.1  (was: 0.90.4)
    Fix Version/s: (was: 0.92.1)
          Summary: regionserver crashed and failover  (was: regionserver 
crashed,and failover)

 regionserver crashed and failover
 -

 Key: HBASE-5075
 URL: https://issues.apache.org/jira/browse/HBASE-5075
 Project: HBase
  Issue Type: Improvement
  Components: monitoring, regionserver, replication, zookeeper
Affects Versions: 0.92.1
Reporter: 代志远

 When a regionserver crashes, it takes too long to notify the HMaster, and once 
 the HMaster knows about the regionserver's shutdown, it takes a long time to 
 recover the HLog's lease.
 HBase is an online DB, so availability is very important.
 My idea to improve availability: a monitor node checks the regionserver's pid; 
 if the pid no longer exists, the regionserver is considered down, its znode is 
 deleted, and the HLog file is force-closed.
 The check period could then be about 100ms.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5075) regionserver crashed and failover

2012-02-08 Thread 代志远 (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

代志远 updated HBASE-5075:
---

Description: 
regionserver crashed,it is too long time to notify hmaster.when hmaster know 
regionserver's shutdown,it is long time to fetch the hlog's lease.
hbase is a online db, availability is very important.
i have a idea to improve availability, monitor node to check regionserver's 
pid.if this pid not exsits,i think the rs down,i will delete the znode,and 
force close the hlog file.
so the period maybe 100ms.


  was:
regionserver crashed,it is too long time to notify hmaster.when hmaster know 
regionserver's shutdown,it is long time to fetch the hlog's lease.
hbase is a online db,availability is very important.
i have a idea to improve availability,mintor node to check regionserver's 
pid.if this pid notexsits,i think the rs down,i will delete the znode,and force 
close the hlog file.
so the period maybe 100ms.



 regionserver crashed and failover
 -

 Key: HBASE-5075
 URL: https://issues.apache.org/jira/browse/HBASE-5075
 Project: HBase
  Issue Type: Improvement
  Components: monitoring, regionserver, replication, zookeeper
Affects Versions: 0.92.1
Reporter: 代志远

 When a regionserver crashes, it takes too long to notify the HMaster, and once 
 the HMaster knows about the regionserver's shutdown, it takes a long time to 
 recover the HLog's lease.
 HBase is an online DB, so availability is very important.
 My idea to improve availability: a monitor node checks the regionserver's pid; 
 if the pid no longer exists, the regionserver is considered down, its znode is 
 deleted, and the HLog file is force-closed.
 The check period could then be about 100ms.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5075) regionserver crashed and failover

2012-02-08 Thread junhua yang (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13203425#comment-13203425
 ] 

junhua yang commented on HBASE-5075:


Hi,
I think it is very important to shorten the recovery time.
Waiting for regionserver recovery is sometimes very long and not acceptable 
for an online service.
Lots of errors are thrown to clients, which affects customers.

So could you share your solution? And @stack, what do you think about the 
HBase failover story now?

Do you have any plan to improve it?


 regionserver crashed and failover
 -

 Key: HBASE-5075
 URL: https://issues.apache.org/jira/browse/HBASE-5075
 Project: HBase
  Issue Type: Improvement
  Components: monitoring, regionserver, replication, zookeeper
Affects Versions: 0.92.1
Reporter: 代志远

 When a regionserver crashes, it takes too long to notify the HMaster, and once 
 the HMaster knows about the regionserver's shutdown, it takes a long time to 
 recover the HLog's lease.
 HBase is an online DB, so availability is very important.
 My idea to improve availability: a monitor node checks the regionserver's pid; 
 if the pid no longer exists, the regionserver is considered down, its znode is 
 deleted, and the HLog file is force-closed.
 The check period could then be about 100ms.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5075) regionserver crashed and failover

2012-02-08 Thread 代志远 (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13203563#comment-13203563
 ] 

代志远 commented on HBASE-5075:


patch and development plan

 regionserver crashed and failover
 -

 Key: HBASE-5075
 URL: https://issues.apache.org/jira/browse/HBASE-5075
 Project: HBase
  Issue Type: Improvement
  Components: monitoring, regionserver, replication, zookeeper
Affects Versions: 0.92.1
Reporter: 代志远

 When a regionserver crashes, it takes too long to notify the HMaster, and once 
 the HMaster knows about the regionserver's shutdown, it takes a long time to 
 recover the HLog's lease.
 HBase is an online DB, so availability is very important.
 My idea to improve availability: a monitor node checks the regionserver's pid; 
 if the pid no longer exists, the regionserver is considered down, its znode is 
 deleted, and the HLog file is force-closed.
 The check period could then be about 100ms.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5221) bin/hbase script doesn't look for Hadoop jars in the right place in trunk layout

2012-02-08 Thread Jimmy Xiang (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13203675#comment-13203675
 ] 

Jimmy Xiang commented on HBASE-5221:


Since 0.23, Hadoop has reorganized its folder structure. The jars now live 
under the individual modules, like hdfs, mapreduce, util and so on (under 
share/hadoop).  The common one is under share/hadoop/common.

I am not very clear about the story behind it either.  Todd should know this 
much better.
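
For concreteness, a small hypothetical sketch of what locating the jars now involves (the Hadoop home path and the matching rule are illustrative):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Since the jars live under per-module directories (share/hadoop/common,
// share/hadoop/hdfs, ...), they must be found recursively rather than
// globbed from the distribution root.
public class FindHadoopJars {
  public static void main(String[] args) throws IOException {
    Path hadoopHome = Paths.get(args.length > 0 ? args[0] : "/opt/hadoop");
    try (Stream<Path> files = Files.walk(hadoopHome.resolve("share/hadoop"))) {
      files.filter(p -> {
              String name = p.getFileName().toString();
              return name.startsWith("hadoop-") && name.endsWith(".jar");
            })
           .forEach(System.out::println);  // classpath candidates
    }
  }
}
{code}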

 bin/hbase script doesn't look for Hadoop jars in the right place in trunk 
 layout
 

 Key: HBASE-5221
 URL: https://issues.apache.org/jira/browse/HBASE-5221
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0
Reporter: Todd Lipcon
Assignee: Jimmy Xiang
 Attachments: hbase-5221.txt


 Running against an 0.24.0-SNAPSHOT hadoop:
 ls: cannot access 
 /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-common*.jar: No such file or 
 directory
 ls: cannot access /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-hdfs*.jar: 
 No such file or directory
 ls: cannot access 
 /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-mapred*.jar: No such file or 
 directory
 The jars are rooted deeper in the hierarchy.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-3134) [replication] Add the ability to enable/disable streams

2012-02-08 Thread Teruyoshi Zenmyo (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Teruyoshi Zenmyo updated HBASE-3134:


Attachment: HBASE-3134.patch

The attached file is the updated patch, which has also been uploaded to the 
review board.

 [replication] Add the ability to enable/disable streams
 ---

 Key: HBASE-3134
 URL: https://issues.apache.org/jira/browse/HBASE-3134
 Project: HBase
  Issue Type: New Feature
  Components: replication
Reporter: Jean-Daniel Cryans
Assignee: Teruyoshi Zenmyo
Priority: Minor
  Labels: replication
 Fix For: 0.94.0

 Attachments: HBASE-3134.patch, HBASE-3134.patch, HBASE-3134.patch, 
 HBASE-3134.patch


 This jira was initially in the scope of HBASE-2201, but was pushed out since 
 it has low value compared to the required effort (and when want to ship 
 0.90.0 rather soonish).
 We need to design a way to enable/disable replication streams in a 
 determinate fashion.
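
For readers new to the thread, a hedged sketch of one way such a switch can work (not the attached patch): keep an enabled/disabled flag in a per-peer znode and have each replication source consult it before shipping edits. The znode layout below is illustrative.

{code}
import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Stores an ENABLED/DISABLED marker for one replication peer.
public class PeerStateSketch {
  private final ZooKeeper zk;
  private final String stateZnode;  // e.g. a per-peer ".../state" znode

  PeerStateSketch(ZooKeeper zk, String stateZnode) {
    this.zk = zk;
    this.stateZnode = stateZnode;
  }

  void setEnabled(boolean enabled) throws Exception {
    byte[] data = (enabled ? "ENABLED" : "DISABLED")
        .getBytes(StandardCharsets.UTF_8);
    if (zk.exists(stateZnode, false) == null) {
      zk.create(stateZnode, data, ZooDefs.Ids.OPEN_ACL_UNSAFE,
          CreateMode.PERSISTENT);
    } else {
      zk.setData(stateZnode, data, -1);
    }
  }

  // Replication sources would check this before shipping a batch of edits.
  boolean isEnabled() throws Exception {
    byte[] data = zk.getData(stateZnode, false, null);
    return "ENABLED".equals(new String(data, StandardCharsets.UTF_8));
  }
}
{code}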

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5221) bin/hbase script doesn't look for Hadoop jars in the right place in trunk layout

2012-02-08 Thread stack (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13203687#comment-13203687
 ] 

stack commented on HBASE-5221:
--

No worries Jimmy.  That'll do.  Thanks. Let me commit.

 bin/hbase script doesn't look for Hadoop jars in the right place in trunk 
 layout
 

 Key: HBASE-5221
 URL: https://issues.apache.org/jira/browse/HBASE-5221
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0
Reporter: Todd Lipcon
Assignee: Jimmy Xiang
 Attachments: hbase-5221.txt


 Running against an 0.24.0-SNAPSHOT hadoop:
 ls: cannot access 
 /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-common*.jar: No such file or 
 directory
 ls: cannot access /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-hdfs*.jar: 
 No such file or directory
 ls: cannot access 
 /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-mapred*.jar: No such file or 
 directory
 The jars are rooted deeper in the hierarchy.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5298) Add thrift metrics to thrift2

2012-02-08 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13203692#comment-13203692
 ] 

Phabricator commented on HBASE-5298:


tedyu has accepted the revision HBASE-5298 [jira] Add thrift metrics to 
thrift2.

REVISION DETAIL
  https://reviews.facebook.net/D1629


 Add thrift metrics to thrift2
 -

 Key: HBASE-5298
 URL: https://issues.apache.org/jira/browse/HBASE-5298
 Project: HBase
  Issue Type: Improvement
  Components: metrics, thrift
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: 5298-v3.txt, HBASE-5298.D1629.1.patch, 
 HBASE-5298.D1629.2.patch, HBASE-5298.D1629.3.patch, HBASE-5298.D1629.4.patch


 We have added thrift metrics collection in HBASE-5186.
 It will be good to have them in thrift2 as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5221) bin/hbase script doesn't look for Hadoop jars in the right place in trunk layout

2012-02-08 Thread stack (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-5221:
-

   Resolution: Fixed
Fix Version/s: 0.94.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk.  (Should we put it in 0.92?)  Thanks for the patch, Jimmy.

 bin/hbase script doesn't look for Hadoop jars in the right place in trunk 
 layout
 

 Key: HBASE-5221
 URL: https://issues.apache.org/jira/browse/HBASE-5221
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0
Reporter: Todd Lipcon
Assignee: Jimmy Xiang
 Fix For: 0.94.0

 Attachments: hbase-5221.txt


 Running against an 0.24.0-SNAPSHOT hadoop:
 ls: cannot access 
 /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-common*.jar: No such file or 
 directory
 ls: cannot access /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-hdfs*.jar: 
 No such file or directory
 ls: cannot access 
 /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-mapred*.jar: No such file or 
 directory
 The jars are rooted deeper in the hierarchy.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-3134) [replication] Add the ability to enable/disable streams

2012-02-08 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13203731#comment-13203731
 ] 

Hadoop QA commented on HBASE-3134:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12513828/HBASE-3134.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated -136 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 156 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestRegionRebalancing
  org.apache.hadoop.hbase.io.hfile.TestHFileBlock
  org.apache.hadoop.hbase.mapreduce.TestImportTsv
  org.apache.hadoop.hbase.mapred.TestTableMapReduce
  org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/924//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/924//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/924//console

This message is automatically generated.

 [replication] Add the ability to enable/disable streams
 ---

 Key: HBASE-3134
 URL: https://issues.apache.org/jira/browse/HBASE-3134
 Project: HBase
  Issue Type: New Feature
  Components: replication
Reporter: Jean-Daniel Cryans
Assignee: Teruyoshi Zenmyo
Priority: Minor
  Labels: replication
 Fix For: 0.94.0

 Attachments: HBASE-3134.patch, HBASE-3134.patch, HBASE-3134.patch, 
 HBASE-3134.patch


 This jira was initially in the scope of HBASE-2201, but was pushed out since 
 it has low value compared to the required effort (and when want to ship 
 0.90.0 rather soonish).
 We need to design a way to enable/disable replication streams in a 
 determinate fashion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-3134) [replication] Add the ability to enable/disable streams

2012-02-08 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13203737#comment-13203737
 ] 

Zhihong Yu commented on HBASE-3134:
---

@Teruyoshi:
Can you publish the URL for the review request on the review board?

 [replication] Add the ability to enable/disable streams
 ---

 Key: HBASE-3134
 URL: https://issues.apache.org/jira/browse/HBASE-3134
 Project: HBase
  Issue Type: New Feature
  Components: replication
Reporter: Jean-Daniel Cryans
Assignee: Teruyoshi Zenmyo
Priority: Minor
  Labels: replication
 Fix For: 0.94.0

 Attachments: HBASE-3134.patch, HBASE-3134.patch, HBASE-3134.patch, 
 HBASE-3134.patch


 This jira was initially in the scope of HBASE-2201, but was pushed out since 
 it has low value compared to the required effort (and when want to ship 
 0.90.0 rather soonish).
 We need to design a way to enable/disable replication streams in a 
 determinate fashion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5345) CheckAndPut doesn't work when value is empty byte[]

2012-02-08 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13203742#comment-13203742
 ] 

Zhihong Yu commented on HBASE-5345:
---

Integrated to 0.92 and TRUNK.

Thanks for the patch, Evert.

 CheckAndPut doesn't work when value is empty byte[]
 ---

 Key: HBASE-5345
 URL: https://issues.apache.org/jira/browse/HBASE-5345
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0
Reporter: Evert Arckens
Assignee: Evert Arckens
 Fix For: 0.94.0, 0.92.1

 Attachments: 5345-v2.txt, 5345.txt, 
 checkAndMutateEmpty-HBASE-5345.patch


 When a value contains an empty byte[] and a checkAndPut is then performed 
 with an empty byte[], the operation will fail.
 For example:
 Put put = new Put(row1);
 put.add(fam1, qf1, new byte[0]);
 table.put(put);
 put = new Put(row1);
 put.add(fam1, qf1, val1);
 table.checkAndPut(row1, fam1, qf1, new byte[0], put); // returns false
 I think this is related to HBASE-3793 and HBASE-3468.
 Note that you will also get into this situation when first putting a null 
 value ( put.add(fam1,qf1,null) ), as this value will then be regarded and 
 returned as an empty byte[] upon a get.
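
A runnable version of the snippet above, for anyone reproducing the report (the table, family, and qualifier names are illustrative; a reachable cluster and the 0.92-era client API are assumed):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class CheckAndPutEmptyValue {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "testtable");
    byte[] row = Bytes.toBytes("row1");
    byte[] fam = Bytes.toBytes("fam1");
    byte[] qf  = Bytes.toBytes("qf1");

    Put put = new Put(row);
    put.add(fam, qf, new byte[0]);  // store an empty value
    table.put(put);

    Put update = new Put(row);
    update.add(fam, qf, Bytes.toBytes("val1"));
    // Expected true, since the stored value is the empty byte[]; before
    // the fix this returned false.
    boolean ok = table.checkAndPut(row, fam, qf, new byte[0], update);
    System.out.println("checkAndPut succeeded: " + ok);
    table.close();
  }
}
{code}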

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5351) hbase completebulkload to a new table fails in a race

2012-02-08 Thread Jonathan Hsieh (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13203740#comment-13203740
 ] 

Jonathan Hsieh commented on HBASE-5351:
---

I buy that the patch should fix the problem, and I don't think we need a test 
here.  However, to prevent this problem in the future, can you update the 
javadoc comments in HBaseAdmin.createTableAsync to warn about this condition, 
alerting devs that use this method to make sure the table is available before 
instantiating the HTable?




 hbase completebulkload to a new table fails in a race
 -

 Key: HBASE-5351
 URL: https://issues.apache.org/jira/browse/HBASE-5351
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.94.0, 0.92.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan

 I have a test that tests vanilla use of importtsv with importtsv.bulk.output 
 option followed by completebulkload to a new table.
 This sometimes fails as follows:
 11/12/19 15:02:39 WARN client.HConnectionManager$HConnectionImplementation: 
 Encountered problems when prefetch META table:
 org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
 table: ml_items_copy, row=ml_items_copy,,99
 at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:157)
 at org.apache.hadoop.hbase.client.MetaScanner.access$000(MetaScanner.java:52)
 at org.apache.hadoop.hbase.client.MetaScanner$1.connect(MetaScanner.java:130)
 at org.apache.hadoop.hbase.client.MetaScanner$1.connect(MetaScanner.java:127)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager.execute(HConnectionManager.java:359)
 at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:127)
 at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:103)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:875)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:929)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:817)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:781)
 at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:247)
 at org.apache.hadoop.hbase.client.HTable.&lt;init&gt;(HTable.java:211)
 at org.apache.hadoop.hbase.client.HTable.&lt;init&gt;(HTable.java:171)
 at 
 org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.createTable(LoadIncrementalHFiles.java:673)
 at 
 org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:697)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:83)
 at 
 org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.main(LoadIncrementalHFiles.java:707)
 The race appears to be calling HbAdmin.createTableAsync(htd, keys) and then 
 creating an HTable object before that call has actually completed.
 The following change to 
 /src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java 
 appears to fix the problem, but I have not been able to reproduce the race 
 reliably, in order to write a test.
 {code}
 -HTable table = new HTable(this.cfg, tableName);
 -
 -HConnection conn = table.getConnection();
  int ctr = 0;
 -while (!conn.isTableAvailable(table.getTableName()) && (ctr < TABLE_CREATE_MAX_RETRIES)) {
 +while (!this.hbAdmin.isTableAvailable(tableName) && (ctr < TABLE_CREATE_MAX_RETRIES)) {
 {code}
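
For reference, a self-contained sketch of the workaround the description proposes: after createTableAsync, poll HBaseAdmin.isTableAvailable before instantiating the HTable (the retry bound and sleep below are illustrative, not the patch's constants).

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;

public class WaitForTable {
  static HTable openWhenAvailable(Configuration conf, String tableName)
      throws Exception {
    HBaseAdmin admin = new HBaseAdmin(conf);
    int ctr = 0;
    final int maxRetries = 20;  // illustrative bound
    while (!admin.isTableAvailable(tableName)) {
      if (++ctr > maxRetries) {
        throw new RuntimeException("table " + tableName + " not available");
      }
      Thread.sleep(1000);  // give createTableAsync time to finish
    }
    return new HTable(conf, tableName);  // safe: table is now in .META.
  }
}
{code}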

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5345) CheckAndPut doesn't work when value is empty byte[]

2012-02-08 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5345:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 CheckAndPut doesn't work when value is empty byte[]
 ---

 Key: HBASE-5345
 URL: https://issues.apache.org/jira/browse/HBASE-5345
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0
Reporter: Evert Arckens
Assignee: Evert Arckens
 Fix For: 0.94.0, 0.92.1

 Attachments: 5345-v2.txt, 5345.txt, 
 checkAndMutateEmpty-HBASE-5345.patch


 When a value contains an empty byte[] and a checkAndPut is then performed 
 with an empty byte[], the operation will fail.
 For example:
 Put put = new Put(row1);
 put.add(fam1, qf1, new byte[0]);
 table.put(put);
 put = new Put(row1);
 put.add(fam1, qf1, val1);
 table.checkAndPut(row1, fam1, qf1, new byte[0], put); // returns false
 I think this is related to HBASE-3793 and HBASE-3468.
 Note that you will also get into this situation when first putting a null 
 value ( put.add(fam1,qf1,null) ), as this value will then be regarded and 
 returned as an empty byte[] upon a get.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5298) Add thrift metrics to thrift2

2012-02-08 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13203746#comment-13203746
 ] 

Zhihong Yu commented on HBASE-5298:
---

Integrated to TRUNK.

Thanks for the patch, Scott.

 Add thrift metrics to thrift2
 -

 Key: HBASE-5298
 URL: https://issues.apache.org/jira/browse/HBASE-5298
 Project: HBase
  Issue Type: Improvement
  Components: metrics, thrift
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.94.0

 Attachments: 5298-v3.txt, HBASE-5298.D1629.1.patch, 
 HBASE-5298.D1629.2.patch, HBASE-5298.D1629.3.patch, HBASE-5298.D1629.4.patch


 We have added thrift metrics collection in HBASE-5186.
 It will be good to have them in thrift2 as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5298) Add thrift metrics to thrift2

2012-02-08 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5298:
--

Fix Version/s: 0.94.0

 Add thrift metrics to thrift2
 -

 Key: HBASE-5298
 URL: https://issues.apache.org/jira/browse/HBASE-5298
 Project: HBase
  Issue Type: Improvement
  Components: metrics, thrift
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.94.0

 Attachments: 5298-v3.txt, HBASE-5298.D1629.1.patch, 
 HBASE-5298.D1629.2.patch, HBASE-5298.D1629.3.patch, HBASE-5298.D1629.4.patch


 We have added thrift metrics collection in HBASE-5186.
 It will be good to have them in thrift2 as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-3134) [replication] Add the ability to enable/disable streams

2012-02-08 Thread Teruyoshi Zenmyo (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13203750#comment-13203750
 ] 

Teruyoshi Zenmyo commented on HBASE-3134:
-

Here is the URL.
https://reviews.apache.org/r/3686/


 [replication] Add the ability to enable/disable streams
 ---

 Key: HBASE-3134
 URL: https://issues.apache.org/jira/browse/HBASE-3134
 Project: HBase
  Issue Type: New Feature
  Components: replication
Reporter: Jean-Daniel Cryans
Assignee: Teruyoshi Zenmyo
Priority: Minor
  Labels: replication
 Fix For: 0.94.0

 Attachments: HBASE-3134.patch, HBASE-3134.patch, HBASE-3134.patch, 
 HBASE-3134.patch


 This jira was initially in the scope of HBASE-2201, but was pushed out since 
 it has low value compared to the required effort (and when want to ship 
 0.90.0 rather soonish).
 We need to design a way to enable/disable replication streams in a 
 determinate fashion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5330) TestCompactSelection - adding 2 test cases to testCompactionRatio

2012-02-08 Thread Doug Meil (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13203754#comment-13203754
 ] 

Doug Meil commented on HBASE-5330:
--

Thanks Nicholas.  Mind if I commit the test after I update it with these 
changes?

Regarding #2, "Should return [3:7] because it's NOT actually doing a major 
compaction" sounds like it should be a separate Jira (bug/improvement), 
correct?

 TestCompactSelection - adding 2 test cases to testCompactionRatio
 -

 Key: HBASE-5330
 URL: https://issues.apache.org/jira/browse/HBASE-5330
 Project: HBase
  Issue Type: Improvement
Reporter: Doug Meil
Assignee: Doug Meil
Priority: Minor
 Attachments: TestCompactSelection_hbase_5330.java.patch


 There were three existing assertions in TestCompactSelection 
 testCompactionRatio that did max # of files assertions...
 {code}
 assertEquals(maxFiles,
 
 store.compactSelection(sfCreate(7,6,5,4,3,2,1)).getFilesToCompact().size());
 {code}
 ... and for references ...
 {code}
 assertEquals(maxFiles,
   store.compactSelection(sfCreate(true, 7,6,5,4,3,2,1)).getFilesToCompact().size());
 {code}
  
 ... but they didn't assert against which StoreFiles got selected.  While the 
 number of StoreFiles is the same, the files selected are actually different, 
 and I thought that there should be explicit assertions showing that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5351) hbase completebulkload to a new table fails in a race

2012-02-08 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203757#comment-13203757
 ] 

Zhihong Yu commented on HBASE-5351:
---

@Gregory:
Please attach a patch.

 hbase completebulkload to a new table fails in a race
 -

 Key: HBASE-5351
 URL: https://issues.apache.org/jira/browse/HBASE-5351
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.94.0, 0.92.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan

 I have a test that tests vanilla use of importtsv with importtsv.bulk.output 
 option followed by completebulkload to a new table.
 This sometimes fails as follows:
 11/12/19 15:02:39 WARN client.HConnectionManager$HConnectionImplementation: 
 Encountered problems when prefetch META table:
 org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
 table: ml_items_copy, row=ml_items_copy,,99
 at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:157)
 at org.apache.hadoop.hbase.client.MetaScanner.access$000(MetaScanner.java:52)
 at org.apache.hadoop.hbase.client.MetaScanner$1.connect(MetaScanner.java:130)
 at org.apache.hadoop.hbase.client.MetaScanner$1.connect(MetaScanner.java:127)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager.execute(HConnectionManager.java:359)
 at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:127)
 at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:103)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:875)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:929)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:817)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:781)
 at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:247)
 at org.apache.hadoop.hbase.client.HTable.init(HTable.java:211)
 at org.apache.hadoop.hbase.client.HTable.init(HTable.java:171)
 at 
 org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.createTable(LoadIncrementalHFiles.java:673)
 at 
 org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:697)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:83)
 at 
 org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.main(LoadIncrementalHFiles.java:707)
 The race appears to be calling HbAdmin.createTableAsync(htd, keys) and then 
 creating an HTable object before that call has actually completed.
 The following change to 
 /src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java 
 appears to fix the problem, but I have not been able to reproduce the race 
 reliably, in order to write a test.
 {code}
 -HTable table = new HTable(this.cfg, tableName);
 -
 -HConnection conn = table.getConnection();
  int ctr = 0;
 -while (!conn.isTableAvailable(table.getTableName()) && (ctr < TABLE_CREATE_MA
 +while (!this.hbAdmin.isTableAvailable(tableName) && (ctr < TABLE_CREATE_MAX_R
 {code}
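 The change amounts to polling the master (via HBaseAdmin) for table 
 availability instead of asking a connection that was created before table 
 creation finished. A minimal sketch of that loop, assuming illustrative 
 values for the retry cap and sleep interval (the truncated constant names 
 above are kept as assumptions):
 {code}
 // Hedged sketch of the polling approach the diff suggests -- not the exact
 // patch. HBaseAdmin.isTableAvailable() asks the master directly, so it
 // can't race with a client-side connection created mid-table-creation.
 this.hbAdmin.createTableAsync(htd, keys);
 int ctr = 0;
 final int TABLE_CREATE_MAX_RETRIES = 100;   // assumed value
 while (!this.hbAdmin.isTableAvailable(tableName)
     && ctr < TABLE_CREATE_MAX_RETRIES) {
   Thread.sleep(1000);   // give createTableAsync time to finish
   ctr++;
 }
 if (ctr == TABLE_CREATE_MAX_RETRIES) {
   throw new IOException("Timed out waiting for table " + tableName
       + " to become available");
 }
 {code}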

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5311) Allow inmemory Memstore compactions

2012-02-08 Thread Lars Hofhansl (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203776#comment-13203776
 ] 

Lars Hofhansl commented on HBASE-5311:
--

Gotcha... Makes sense.

 Allow inmemory Memstore compactions
 ---

 Key: HBASE-5311
 URL: https://issues.apache.org/jira/browse/HBASE-5311
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
 Attachments: InternallyLayeredMap.java


 Just like we periodically compact the StoreFiles we should also periodically 
 compact the MemStore.
 During these compactions we eliminate deleted cells, expired cells, cells to 
 removed because of version count, etc, before we even do a memstore flush.
 Besides the optimization that we could get from this, it should also allow us 
 to remove the special handling of ICV, Increment, and Append (all of which 
 use upsert logic to avoid accumulating excessive cells in the Memstore).
 Not targeting this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5221) bin/hbase script doesn't look for Hadoop jars in the right place in trunk layout

2012-02-08 Thread Roman Shaposhnik (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203784#comment-13203784
 ] 

Roman Shaposhnik commented on HBASE-5221:
-

Guys, this fix will break certain Hadoop 0.23 layouts. The correct fix for this 
one will have to come out of HBASE-5286.

It would be really nice if folks could chime in on the JIRA that I quoted.

Also, I don't quite understand why this is a problem. For developers' builds, 
HBase bundles everything under lib and thus has no reason to look under the 
Hadoop installation tree. For packaged deployments (Bigtop, CDH) the layout of 
Hadoop is different anyway, which means this fix will actually break things.

 bin/hbase script doesn't look for Hadoop jars in the right place in trunk 
 layout
 

 Key: HBASE-5221
 URL: https://issues.apache.org/jira/browse/HBASE-5221
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0
Reporter: Todd Lipcon
Assignee: Jimmy Xiang
 Fix For: 0.94.0

 Attachments: hbase-5221.txt


 Running against an 0.24.0-SNAPSHOT hadoop:
 ls: cannot access 
 /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-common*.jar: No such file or 
 directory
 ls: cannot access /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-hdfs*.jar: 
 No such file or directory
 ls: cannot access 
 /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-mapred*.jar: No such file or 
 directory
 The jars are rooted deeper in the hierarchy.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-3537) [site] Make it so each page of manual allows users comment like mysql's manual does

2012-02-08 Thread stack (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203785#comment-13203785
 ] 

stack commented on HBASE-3537:
--

I don't seem to have noted here that I couldn't figure out how to make 
facebook comments include the page the comment was made on.  Here is another 
option we might embed: http://disqus.com/

 [site] Make it so each page of manual allows users comment like mysql's 
 manual does
 ---

 Key: HBASE-3537
 URL: https://issues.apache.org/jira/browse/HBASE-3537
 Project: HBase
  Issue Type: Improvement
Reporter: stack

 I like the way the mysql manuals allow users to comment on, improve, or 
 correct mysql manual pages.  We should have the same.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Reopened] (HBASE-5221) bin/hbase script doesn't look for Hadoop jars in the right place in trunk layout

2012-02-08 Thread stack (Reopened) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reopened HBASE-5221:
--


I reverted the change, Jimmy.  Should we close as won't-fix or as a dup of the 
issue that Roman refers to?

 bin/hbase script doesn't look for Hadoop jars in the right place in trunk 
 layout
 

 Key: HBASE-5221
 URL: https://issues.apache.org/jira/browse/HBASE-5221
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0
Reporter: Todd Lipcon
Assignee: Jimmy Xiang
 Fix For: 0.94.0

 Attachments: hbase-5221.txt


 Running against an 0.24.0-SNAPSHOT hadoop:
 ls: cannot access 
 /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-common*.jar: No such file or 
 directory
 ls: cannot access /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-hdfs*.jar: 
 No such file or directory
 ls: cannot access 
 /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-mapred*.jar: No such file or 
 directory
 The jars are rooted deeper in the hierarchy.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5221) bin/hbase script doesn't look for Hadoop jars in the right place in trunk layout

2012-02-08 Thread stack (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203801#comment-13203801
 ] 

stack commented on HBASE-5221:
--

@Roman Thanks for catching this.

 bin/hbase script doesn't look for Hadoop jars in the right place in trunk 
 layout
 

 Key: HBASE-5221
 URL: https://issues.apache.org/jira/browse/HBASE-5221
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0
Reporter: Todd Lipcon
Assignee: Jimmy Xiang
 Fix For: 0.94.0

 Attachments: hbase-5221.txt


 Running against an 0.24.0-SNAPSHOT hadoop:
 ls: cannot access 
 /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-common*.jar: No such file or 
 directory
 ls: cannot access /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-hdfs*.jar: 
 No such file or directory
 ls: cannot access 
 /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-mapred*.jar: No such file or 
 directory
 The jars are rooted deeper in the hierarchy.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5221) bin/hbase script doesn't look for Hadoop jars in the right place in trunk layout

2012-02-08 Thread Jimmy Xiang (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203807#comment-13203807
 ] 

Jimmy Xiang commented on HBASE-5221:


The problem is that when I run the hbase shell, it complains those files are 
missing and throws ClassNotFound for org.apache.hadoop.util.PlatformName.

We need to fix it. 

The script is already looking under the Hadoop installation tree, just in the 
wrong place.

I don't think this fix will break anything.  We can use this fix until 
HBASE-5286 is resolved.

 bin/hbase script doesn't look for Hadoop jars in the right place in trunk 
 layout
 

 Key: HBASE-5221
 URL: https://issues.apache.org/jira/browse/HBASE-5221
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0
Reporter: Todd Lipcon
Assignee: Jimmy Xiang
 Fix For: 0.94.0

 Attachments: hbase-5221.txt


 Running against an 0.24.0-SNAPSHOT hadoop:
 ls: cannot access 
 /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-common*.jar: No such file or 
 directory
 ls: cannot access /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-hdfs*.jar: 
 No such file or directory
 ls: cannot access 
 /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-mapred*.jar: No such file or 
 directory
 The jars are rooted deeper in the hierarchy.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5221) bin/hbase script doesn't look for Hadoop jars in the right place in trunk layout

2012-02-08 Thread Jimmy Xiang (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203810#comment-13203810
 ] 

Jimmy Xiang commented on HBASE-5221:


Ok, let me close it as a dup.  I'll probably just use the fix myself until 
HBASE-5286 is resolved.

 bin/hbase script doesn't look for Hadoop jars in the right place in trunk 
 layout
 

 Key: HBASE-5221
 URL: https://issues.apache.org/jira/browse/HBASE-5221
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0
Reporter: Todd Lipcon
Assignee: Jimmy Xiang
 Fix For: 0.94.0

 Attachments: hbase-5221.txt


 Running against an 0.24.0-SNAPSHOT hadoop:
 ls: cannot access 
 /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-common*.jar: No such file or 
 directory
 ls: cannot access /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-hdfs*.jar: 
 No such file or directory
 ls: cannot access 
 /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-mapred*.jar: No such file or 
 directory
 The jars are rooted deeper in the hierarchy.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HBASE-5221) bin/hbase script doesn't look for Hadoop jars in the right place in trunk layout

2012-02-08 Thread Jimmy Xiang (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang resolved HBASE-5221.


Resolution: Duplicate

 bin/hbase script doesn't look for Hadoop jars in the right place in trunk 
 layout
 

 Key: HBASE-5221
 URL: https://issues.apache.org/jira/browse/HBASE-5221
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0
Reporter: Todd Lipcon
Assignee: Jimmy Xiang
 Fix For: 0.94.0

 Attachments: hbase-5221.txt


 Running against an 0.24.0-SNAPSHOT hadoop:
 ls: cannot access 
 /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-common*.jar: No such file or 
 directory
 ls: cannot access /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-hdfs*.jar: 
 No such file or directory
 ls: cannot access 
 /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-mapred*.jar: No such file or 
 directory
 The jars are rooted deeper in the hierarchy.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4762) ROOT and META region never be assigned if IOE throws in verifyRootRegionLocation

2012-02-08 Thread stack (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203811#comment-13203811
 ] 

stack commented on HBASE-4762:
--

This should be less likely in TRUNK, right Ram, since we retry in TRUNK but 
not in 0.90.x?

 ROOT and META region never be assigned if IOE throws in 
 verifyRootRegionLocation
 

 Key: HBASE-4762
 URL: https://issues.apache.org/jira/browse/HBASE-4762
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.90.4
Reporter: mingjian
Assignee: mingjian
 Fix For: 0.90.7


 The patch in HBASE-3914 fixed the root region being assigned to two 
 regionservers. But it seems the root region will never be assigned if 
 verifyRootRegionLocation throws an IOE.
 Like the following master logs:
 {noformat}
 2011-10-19 19:13:34,873 ERROR org.apache.hadoop.hbase.executor.EventHandler: 
 Caught throwable while processing event M_META_SERVER_S
 HUTDOWN
 org.apache.hadoop.ipc.RemoteException: 
 org.apache.hadoop.hbase.ipc.ServerNotRunningException: Server is not running 
 yet
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1090)
 at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:771)
 at 
 org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:256)
 at $Proxy7.getRegionInfo(Unknown Source)
 at 
 org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRegionLocation(CatalogTracker.java:424)
 at 
 org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRootRegionLocation(CatalogTracker.java:471)
 at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.verifyAndAssignRoot(ServerShutdownHandler.java:90)
 at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:126)
 at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:151)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {noformat}
 After this, -ROOT-'s region won't be assigned, like this:
 {noformat}
 2011-10-19 19:18:40,000 DEBUG 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: 
 locateRegionInMeta parent
 Table=-ROOT-, metaLocation=address: dw79.kgb.sqa.cm4:60020, regioninfo: 
 -ROOT-,,0.70236052, attempt=0 of 10 failed; retrying after s
 leep of 1000 because: org.apache.hadoop.hbase.NotServingRegionException: 
 Region is not online: -ROOT-,,0
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:2771)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1802)
 at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:569)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1091)
 {noformat}
 So we should rewrite the verifyRootRegionLocation method.
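 One rewrite direction, sketched loosely (an illustration of the idea, not a 
 committed patch; catalogTracker, LOG, and the assignRoot() hook are assumed 
 collaborators): treat an IOE from the verification RPC as "could not verify", 
 so the shutdown handler falls through to reassignment instead of dying.
 {code}
 // Hedged sketch only. A RemoteException (which extends IOException) such as
 // the ServerNotRunningException above should mean "location not verified",
 // not "abort the M_META_SERVER_SHUTDOWN handler".
 boolean verified;
 try {
   verified = catalogTracker.verifyRootRegionLocation(timeout);
 } catch (IOException ioe) {
   LOG.warn("Failed verifying -ROOT- location; will reassign", ioe);
   verified = false;
 }
 if (!verified) {
   assignmentManager.assignRoot();   // assumed reassignment hook
 }
 {code}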

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5229) Provide basic building blocks for multi-row local transactions.

2012-02-08 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5229:
-

Status: Open  (was: Patch Available)

 Provide basic building blocks for multi-row local transactions.
 -

 Key: HBASE-5229
 URL: https://issues.apache.org/jira/browse/HBASE-5229
 Project: HBase
  Issue Type: New Feature
  Components: client, regionserver
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.0

 Attachments: 5229-endpoint.txt, 5229-multiRow-v2.txt, 
 5229-multiRow.txt, 5229-seekto-v2.txt, 5229-seekto.txt, 5229.txt


 In the final iteration, this issue provides a generalized, public 
 mutateRowsWithLocks method on HRegion, that can be used by coprocessors to 
 implement atomic operations efficiently.
 Coprocessors are already region aware, which makes this a good pairing of 
 APIs. This feature is by design not available to the client via the HTable 
 API.
 It took a long time to arrive at this and I apologize for the public exposure 
 of my (erratic in retrospect) thought processes.
 Was:
 HBase should provide basic building blocks for multi-row local transactions. 
 Local means that we do this by co-locating the data. Global (cross region) 
 transactions are not discussed here.
 After a bit of discussion two solutions have emerged:
 1. Keep the row-key for determining grouping and location and allow efficient 
 intra-row scanning. A client application would then model tables as 
 HBase-rows.
 2. Define a prefix-length in HTableDescriptor that defines a grouping of 
 rows. Regions will then never be split inside a grouping prefix.
 #1 is true to the current storage paradigm of HBase.
 #2 is true to the current client side API.
 I will explore these two with sample patches here.
 
 Was:
 As discussed (at length) on the dev mailing list with the HBASE-3584 and 
 HBASE-5203 committed, supporting atomic cross row transactions within a 
 region becomes simple.
 I am aware of the hesitation about the usefulness of this feature, but we 
 have to start somewhere.
 Let's use this jira for discussion, I'll attach a patch (with tests) 
 momentarily to make this concrete.
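 As a usage sketch of the primitive described above (hedged: the endpoint 
 scaffolding, the FAMILY/BALANCE/amount names, and the exact signature are 
 assumptions for illustration, not verbatim from the patch):
 {code}
 // Hypothetical coprocessor-side transfer between two rows of one region.
 List<Mutation> mutations = new ArrayList<Mutation>();
 Put debit = new Put(Bytes.toBytes("account-A"));
 debit.add(FAMILY, BALANCE, Bytes.toBytes(balanceA - amount));
 Put credit = new Put(Bytes.toBytes("account-B"));
 credit.add(FAMILY, BALANCE, Bytes.toBytes(balanceB + amount));
 mutations.add(debit);
 mutations.add(credit);

 // Lock both rows so the puts commit atomically; this only works when both
 // rows live in the same region -- hence "local" transactions.
 SortedSet<byte[]> rowsToLock = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
 rowsToLock.add(debit.getRow());
 rowsToLock.add(credit.getRow());

 HRegion region = ((RegionCoprocessorEnvironment) getEnvironment()).getRegion();
 region.mutateRowsWithLocks(mutations, rowsToLock);
 {code}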

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HBASE-5229) Provide basic building blocks for multi-row local transactions.

2012-02-08 Thread Lars Hofhansl (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-5229.
--

  Resolution: Fixed
Hadoop Flags: Reviewed

Committed to trunk. Thanks everyone for bearing with me.

 Provide basic building blocks for multi-row local transactions.
 -

 Key: HBASE-5229
 URL: https://issues.apache.org/jira/browse/HBASE-5229
 Project: HBase
  Issue Type: New Feature
  Components: client, regionserver
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.0

 Attachments: 5229-endpoint.txt, 5229-multiRow-v2.txt, 
 5229-multiRow.txt, 5229-seekto-v2.txt, 5229-seekto.txt, 5229.txt


 In the final iteration, this issue provides a generalized, public 
 mutateRowsWithLocks method on HRegion, that can be used by coprocessors to 
 implement atomic operations efficiently.
 Coprocessors are already region aware, which makes this a good pairing of 
 APIs. This feature is by design not available to the client via the HTable 
 API.
 It took a long time to arrive at this and I apologize for the public exposure 
 of my (erratic in retrospect) thought processes.
 Was:
 HBase should provide basic building blocks for multi-row local transactions. 
 Local means that we do this by co-locating the data. Global (cross region) 
 transactions are not discussed here.
 After a bit of discussion two solutions have emerged:
 1. Keep the row-key for determining grouping and location and allow efficient 
 intra-row scanning. A client application would then model tables as 
 HBase-rows.
 2. Define a prefix-length in HTableDescriptor that defines a grouping of 
 rows. Regions will then never be split inside a grouping prefix.
 #1 is true to the current storage paradigm of HBase.
 #2 is true to the current client side API.
 I will explore these two with sample patches here.
 
 Was:
 As discussed (at length) on the dev mailing list with the HBASE-3584 and 
 HBASE-5203 committed, supporting atomic cross row transactions within a 
 region becomes simple.
 I am aware of the hesitation about the usefulness of this feature, but we 
 have to start somewhere.
 Let's use this jira for discussion, I'll attach a patch (with tests) 
 momentarily to make this concrete.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5353) HA/Distributed HMaster via RegionServers

2012-02-08 Thread Jesse Yates (Created) (JIRA)
HA/Distributed HMaster via RegionServers


 Key: HBASE-5353
 URL: https://issues.apache.org/jira/browse/HBASE-5353
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver
Affects Versions: 0.94.0
Reporter: Jesse Yates
Priority: Minor


Currently, the HMaster node must be considered a 'special' node (single point 
of failure), meaning that the node must be protected more than the other 
commodity machines. It should be possible to instead have the HMaster be much 
more available, either in a distributed sense (meaning a big rewrite) or with 
multiple instances and automatic failover. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5350) Fix jamon generated package names

2012-02-08 Thread stack (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-5350:
-

Status: Patch Available  (was: Open)

Submitting patch

 Fix jamon generated package names
 -

 Key: HBASE-5350
 URL: https://issues.apache.org/jira/browse/HBASE-5350
 Project: HBase
  Issue Type: Bug
  Components: monitoring
Affects Versions: 0.92.0
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 0.94.0

 Attachments: jamon_HBASE-5350.patch


 Previously, jamon was creating the template files in org.apache.hbase, but 
 it should be org.apache.hadoop.hbase, so it's in line with the rest of the 
 source files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5229) Provide basic building blocks for multi-row local transactions.

2012-02-08 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5229:
-

Attachment: 5229-final.txt

Final patch for reference

 Provide basic building blocks for multi-row local transactions.
 -

 Key: HBASE-5229
 URL: https://issues.apache.org/jira/browse/HBASE-5229
 Project: HBase
  Issue Type: New Feature
  Components: client, regionserver
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.0

 Attachments: 5229-endpoint.txt, 5229-final.txt, 5229-multiRow-v2.txt, 
 5229-multiRow.txt, 5229-seekto-v2.txt, 5229-seekto.txt, 5229.txt


 In the final iteration, this issue provides a generalized, public 
 mutateRowsWithLocks method on HRegion, that can be used by coprocessors to 
 implement atomic operations efficiently.
 Coprocessors are already region aware, which makes this a good pairing of 
 APIs. This feature is by design not available to the client via the HTable 
 API.
 It took a long time to arrive at this and I apologize for the public exposure 
 of my (erratic in retrospect) thought processes.
 Was:
 HBase should provide basic building blocks for multi-row local transactions. 
 Local means that we do this by co-locating the data. Global (cross region) 
 transactions are not discussed here.
 After a bit of discussion two solutions have emerged:
 1. Keep the row-key for determining grouping and location and allow efficient 
 intra-row scanning. A client application would then model tables as 
 HBase-rows.
 2. Define a prefix-length in HTableDescriptor that defines a grouping of 
 rows. Regions will then never be split inside a grouping prefix.
 #1 is true to the current storage paradigm of HBase.
 #2 is true to the current client side API.
 I will explore these two with sample patches here.
 
 Was:
 As discussed (at length) on the dev mailing list with the HBASE-3584 and 
 HBASE-5203 committed, supporting atomic cross row transactions within a 
 region becomes simple.
 I am aware of the hesitation about the usefulness of this feature, but we 
 have to start somewhere.
 Let's use this jira for discussion, I'll attach a patch (with tests) 
 momentarily to make this concrete.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5353) HA/Distributed HMaster via RegionServers

2012-02-08 Thread Jesse Yates (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203829#comment-13203829
 ] 

Jesse Yates commented on HBASE-5353:


I was thinking about this and it seems like it wouldn't be that hard to have 
each of the regionservers doing leader election via ZK to select the one (or 
top 'n' rs) that would spin up master instances on their local machine. Those 
new masters could do their own leader election in ZK to determine who is the 
current 'official' HMaster, and the others would act as hot failovers. If a 
master dies, the next rs in the list would spin up a master instance, ensuring 
that we always have a certain number of hot masters (clearly cascading failure 
here is a problem, but if that happens, you have bigger problems). Clearly, 
running the master from the same JVM is probably a bad idea, but you could 
potentially even use the startup scripts to spin up a separate jvm with the 
master.

This also means some modification to the client, to keep track of the current 
master, but that should be fairly trivial, as it already has the zk connection 
(or can do a fail and lookup). 
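A bare-bones sketch of that election, using the standard ZooKeeper 
ephemeral-sequential recipe (paths, timeouts, and the parent-znode bootstrap 
are illustrative assumptions, nothing HBase ships today):

{code}
import java.util.Collections;
import java.util.List;
import org.apache.zookeeper.*;

// Each regionserver runs one of these; the holder of the lowest sequence
// number spins up (or promotes) a master. Assumes /master-election exists.
public class MasterElection implements Watcher {
  private final ZooKeeper zk;
  private String myZnode;

  public MasterElection(String quorum) throws Exception {
    zk = new ZooKeeper(quorum, 30000, this);
  }

  public void enter() throws Exception {
    myZnode = zk.create("/master-election/n_", new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
    check();
  }

  private void check() throws Exception {
    List<String> members = zk.getChildren("/master-election", false);
    Collections.sort(members);
    String me = myZnode.substring(myZnode.lastIndexOf('/') + 1);
    int idx = members.indexOf(me);
    if (idx == 0) {
      // we won: start a master here (same jvm, or via the startup scripts)
    } else {
      // watch only the node just ahead of us, so exactly one rs reacts to
      // each failure instead of a thundering herd re-electing on every death
      zk.exists("/master-election/" + members.get(idx - 1), this);
    }
  }

  @Override
  public void process(WatchedEvent event) {
    if (event.getType() == Event.EventType.NodeDeleted) {
      try { check(); } catch (Exception e) { /* session lost: re-enter */ }
    }
  }
}
{code}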

 HA/Distributed HMaster via RegionServers
 

 Key: HBASE-5353
 URL: https://issues.apache.org/jira/browse/HBASE-5353
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver
Affects Versions: 0.94.0
Reporter: Jesse Yates
Priority: Minor

 Currently, the HMaster node must be considered a 'special' node (single point 
 of failure), meaning that the node must be protected more than the other 
 commodity machines. It should be possible to instead have the HMaster be much 
 more available, either in a distributed sense (meaning a big rewrite) or with 
 multiple instances and automatic failover. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5354) Source to standalone deployment script

2012-02-08 Thread Jesse Yates (Created) (JIRA)
Source to standalone deployment script
--

 Key: HBASE-5354
 URL: https://issues.apache.org/jira/browse/HBASE-5354
 Project: HBase
  Issue Type: New Feature
  Components: build, scripts
Affects Versions: 0.94.0
Reporter: Jesse Yates
Assignee: Jesse Yates
Priority: Minor


Automating the testing of source code in a 'real' instance can be a bit of a 
pain, even getting it into standalone mode.
Steps you need to go through:
1) Build the project
2) Copy it to the deployment directory
3) Shutdown the current cluster (if it is running)
4) Untar the tar
5) Update the configs to point to a local data cluster
6) Startup the new deployment

Yeah, it's not super difficult, but it would be nice to just have a script to 
make it button push easy.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5313) Restructure hfiles layout for better compression

2012-02-08 Thread Nicolas Spiegelberg (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203836#comment-13203836
 ] 

Nicolas Spiegelberg commented on HBASE-5313:


Storing all keys together would just help on CPU, correct?  We wouldn't get any 
disk size savings or IO savings with the current approach.

 Restructure hfiles layout for better compression
 

 Key: HBASE-5313
 URL: https://issues.apache.org/jira/browse/HBASE-5313
 Project: HBase
  Issue Type: Improvement
  Components: io
Reporter: dhruba borthakur
Assignee: dhruba borthakur

 An HFile block contains a stream of key-values. Can we organize these kvs 
 on the disk in a better way so that we get much greater compression ratios?
 One option (thanks Prakash) is to store all the keys in the beginning of the 
 block (let's call this the key-section) and then store all their 
 corresponding values towards the end of the block. This will allow us to 
 not-even decompress the values when we are scanning and skipping over rows in 
 the block.
 Any other ideas? 
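 To make the proposed layout concrete, a toy serializer (pure illustration, 
 none of the real HFileBlock machinery; writeBlock is a hypothetical helper 
 assuming java.io.* and org.apache.hadoop.hbase.KeyValue imports): keys plus 
 their value lengths go in a front key-section, values go contiguously after 
 it, so a scanner can walk the key-section and skip rows without touching the 
 value-section.
 {code}
 // Illustrative sketch only.
 void writeBlock(List<KeyValue> kvs, OutputStream out) throws IOException {
   ByteArrayOutputStream keySection = new ByteArrayOutputStream();
   ByteArrayOutputStream valueSection = new ByteArrayOutputStream();
   DataOutputStream keys = new DataOutputStream(keySection);
   for (KeyValue kv : kvs) {
     keys.writeInt(kv.getKeyLength());
     keys.write(kv.getBuffer(), kv.getKeyOffset(), kv.getKeyLength());
     keys.writeInt(kv.getValueLength());  // lets readers derive value offsets
     valueSection.write(kv.getBuffer(), kv.getValueOffset(), kv.getValueLength());
   }
   // Each section could be compressed independently here, so scans decompress
   // the key-section alone and leave the value-section compressed.
   DataOutputStream block = new DataOutputStream(out);
   block.writeInt(kvs.size());
   block.writeInt(keySection.size());
   keySection.writeTo(block);
   valueSection.writeTo(block);
 }
 {code}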

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5354) Source to standalone deployment script

2012-02-08 Thread Jesse Yates (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HBASE-5354:
---

Attachment: bash_HBASE-5354.patch

Attaching a patch that does as described. Apply the patch and then run it from 
the hbase root directory as ./dev-support/deploy.sh -h to see the usage info.

 Source to standalone deployment script
 --

 Key: HBASE-5354
 URL: https://issues.apache.org/jira/browse/HBASE-5354
 Project: HBase
  Issue Type: New Feature
  Components: build, scripts
Affects Versions: 0.94.0
Reporter: Jesse Yates
Assignee: Jesse Yates
Priority: Minor
 Attachments: bash_HBASE-5354.patch


 Automating the testing of source code in a 'real' instance can be a bit of a 
 pain, even getting it into standalone mode.
 Steps you need to go through:
 1) Build the project
 2) Copy it to the deployment directory
 3) Shutdown the current cluster (if it is running)
 4) Untar the tar
 5) Update the configs to point to a local data cluster
 6) Startup the new deployment
 Yeah, it's not super difficult, but it would be nice to just have a script to 
 make it button push easy.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4336) Convert source tree into maven modules

2012-02-08 Thread Jesse Yates (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203847#comment-13203847
 ] 

Jesse Yates commented on HBASE-4336:


After sitting with stack on this and looking through the restructure101 stuff, 
doing the modularization first really isn't going to help anything here. What 
we need to do first is 'detangle' hbase - remove a lot of the dependencies 
between classes/packages that really shouldn't exist. This is going to mean a 
whole bunch of smaller tickets, doing piecemeal refactorings. Once we actually 
have the project nicely decoupled, then the modularization will actually make 
sense and go rather smoothly (as opposed to the mess it was the first time 
around). 

This is further accentuated by the fact that on 'finishing' the modularization, 
we had a bunch of classes split between client and shared, but basically 
nothing in the server or core modules because the server stuff was too tightly 
coupled to the client to be in its own, distinct module.

I'm going to start working through some of the pieces and just link the tickets 
back here for the refactorings. After it starts to look clean (arbitrarily 
defined), then we should go back to the modularization.

 Convert source tree into maven modules
 --

 Key: HBASE-4336
 URL: https://issues.apache.org/jira/browse/HBASE-4336
 Project: HBase
  Issue Type: Task
  Components: build
Reporter: Gary Helmling
Priority: Critical
 Fix For: 0.94.0


 When we originally converted the build to maven we had a single core module 
 defined, but later reverted this to a module-less build for the sake of 
 simplicity.
 It now looks like it's time to re-address this, as we have an actual need for 
 modules to:
 * provide a trimmed down client library that applications can make use of
 * more cleanly support building against different versions of Hadoop, in 
 place of some of the reflection machinations currently required
 * incorporate the secure RPC engine that depends on some secure Hadoop classes
 I propose we start simply by refactoring into two initial modules:
 * core - common classes and utilities, and client-side code and interfaces
 * server - master and region server implementations and supporting code
 This would also lay the groundwork for incorporating the HBase security 
 features that have been developed.  Once the module structure is in place, 
 security-related features could then be incorporated into a third module -- 
 security -- after normal review and approval.  The security module could 
 then depend on secure Hadoop, without modifying the dependencies of the rest 
 of the HBase code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5345) CheckAndPut doesn't work when value is empty byte[]

2012-02-08 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203858#comment-13203858
 ] 

Hudson commented on HBASE-5345:
---

Integrated in HBase-0.92 #273 (See 
[https://builds.apache.org/job/HBase-0.92/273/])
HBASE-5345  CheckAndPut doesn't work when value is empty byte[] (Evert 
Arckens)

tedyu : 
Files : 
* /hbase/branches/0.92/CHANGES.txt
* 
/hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java


 CheckAndPut doesn't work when value is empty byte[]
 ---

 Key: HBASE-5345
 URL: https://issues.apache.org/jira/browse/HBASE-5345
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0
Reporter: Evert Arckens
Assignee: Evert Arckens
 Fix For: 0.94.0, 0.92.1

 Attachments: 5345-v2.txt, 5345.txt, 
 checkAndMutateEmpty-HBASE-5345.patch


 When a value contains an empty byte[] and then a checkAndPut is performed 
 with an empty byte[], the operation will fail.
 For example:
 Put put = new Put(row1);
 put.add(fam1, qf1, new byte[0]);
 table.put(put);
 put = new Put(row1);
 put.add(fam1, qf1, val1);
 table.checkAndPut(row1, fam1, qf1, new byte[0], put); --> false
 I think this is related to HBASE-3793 and HBASE-3468.
 Note that you will also get into this situation when first putting a null 
 value ( put.add(fam1,qf1,null) ), as this value will then be regarded and 
 returned as an empty byte[] upon a get.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5350) Fix jamon generated package names

2012-02-08 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203870#comment-13203870
 ] 

Hadoop QA commented on HBASE-5350:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12513718/jamon_HBASE-5350.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated -136 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 156 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.io.hfile.TestHFileBlock
  org.apache.hadoop.hbase.mapreduce.TestImportTsv
  org.apache.hadoop.hbase.mapred.TestTableMapReduce
  org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/925//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/925//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/925//console

This message is automatically generated.

 Fix jamon generated package names
 -

 Key: HBASE-5350
 URL: https://issues.apache.org/jira/browse/HBASE-5350
 Project: HBase
  Issue Type: Bug
  Components: monitoring
Affects Versions: 0.92.0
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 0.94.0

 Attachments: jamon_HBASE-5350.patch


 Previously, jamon was creating the template files in org.apache.hbase, but 
 it should be org.apache.hadoop.hbase, so it's in line with the rest of the 
 source files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5353) HA/Distributed HMaster via RegionServers

2012-02-08 Thread stack (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203872#comment-13203872
 ] 

stack commented on HBASE-5353:
--

I'd say just run the master in-process w/ the regionserver.  Master doesn't do 
much (it used to be heavily loaded when we did log splitting, but that's 
distributed now or done on startup... but even then, it should be fine).

Client already tracks master location as you say though we need to undo 
this...and just have the client do a read of zk to find master location when it 
needs it.

Regards UI, we'd collapse it so that there'd be a single webapp rather than the 
two we have now.  There'd be a 'master' link.  If the current regionserver were 
not the master, the master link would redirect you to current master.

 HA/Distributed HMaster via RegionServers
 

 Key: HBASE-5353
 URL: https://issues.apache.org/jira/browse/HBASE-5353
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver
Affects Versions: 0.94.0
Reporter: Jesse Yates
Priority: Minor

 Currently, the HMaster node must be considered a 'special' node (single point 
 of failure), meaning that the node must be protected more than the other 
 commodity machines. It should be possible to instead have the HMaster be much 
 more available, either in a distributed sense (meaning a big rewrite) or with 
 multiple instances and automatic failover. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5347) GC free memory management in Level-1 Block Cache

2012-02-08 Thread Prakash Khemani (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203871#comment-13203871
 ] 

Prakash Khemani commented on HBASE-5347:


initial diff for feedback https://reviews.facebook.net/D1635

 GC free memory management in Level-1 Block Cache
 

 Key: HBASE-5347
 URL: https://issues.apache.org/jira/browse/HBASE-5347
 Project: HBase
  Issue Type: Improvement
Reporter: Prakash Khemani
Assignee: Prakash Khemani

 On eviction of a block from the block-cache, instead of waiting for the 
 garbage collecter to reuse its memory, reuse the block right away.
 This will require us to keep reference counts on the HFile blocks. Once we 
 have the reference counts in place we can do our own simple 
 blocks-out-of-slab allocation for the block-cache.
 This will help us with
 * reducing gc pressure, especially in the old generation
 * making it possible to have non-java-heap memory backing the HFile blocks
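 A bare-bones sketch of the reference-counting piece (illustrative names only; 
 the Slab hook is an assumption, not an existing HBase interface):
 {code}
 import java.nio.ByteBuffer;
 import java.util.concurrent.atomic.AtomicInteger;

 // A cached block whose backing slab slice is reused as soon as the last
 // reader releases it, instead of waiting for old-gen GC to reclaim it.
 class RefCountedBlock {
   interface Slab { void free(ByteBuffer slice); }   // assumed allocator hook

   private final ByteBuffer slice;                   // carved out of a big slab
   private final AtomicInteger refCount = new AtomicInteger(1);  // cache's ref

   RefCountedBlock(ByteBuffer slice) { this.slice = slice; }

   ByteBuffer retain() {          // every reader takes a reference
     refCount.incrementAndGet();  // real code must not resurrect a freed block
     return slice.duplicate();
   }

   void release(Slab slab) {      // readers and cache eviction both call this
     if (refCount.decrementAndGet() == 0) {
       slab.free(slice);          // memory reused right away, no GC pressure
     }
   }
 }
 {code}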

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5353) HA/Distributed HMaster via RegionServers

2012-02-08 Thread Jesse Yates (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203892#comment-13203892
 ] 

Jesse Yates commented on HBASE-5353:


{quote}
I'd say just run the master in-process w/ the regionserver. Master doesn't do 
much (It used to be heavily loaded when we did log splitting but thats 
distributed now or on startup... but even then, should be fine).
{quote}

I worry about putting too much in the same jvm: especially with a heavily 
loaded RS, you could be seriously hurt by jvm pauses once you up the heap size 
to accommodate the master (it could be bad too when you have the larger jvm 
but no master running). Since, initially, this is going to be enabled via a 
configuration option, another option would be to choose between starting it in 
the same JVM vs. outside the jvm; seems to me it would work either way.

{quote}
Client already tracks master location as you say though we need to undo 
this...and just have the client do a read of zk to find master location when it 
needs it.
{quote}

Talked with Lars H about doing this fix too - the client really doesn't need 
the long-running zk connection, but should just hit zk when it needs the 
master info. So that could be part of this ticket too. 

{quote}
Regards UI, we'd collapse it so that there'd be a single webapp rather than the 
two we have now.
{quote}

That only works if we go with the same-jvm approach. I would argue that it 
should have a RS link rather than a master link (smile). But for the initial 
patch I would say the ui stuff should be on hold until the actual 
implementation gets worked out.

 HA/Distributed HMaster via RegionServers
 

 Key: HBASE-5353
 URL: https://issues.apache.org/jira/browse/HBASE-5353
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver
Affects Versions: 0.94.0
Reporter: Jesse Yates
Priority: Minor

 Currently, the HMaster node must be considered a 'special' node (single point 
 of failure), meaning that the node must be protected more than the other 
 commodity machines. It should be possible to instead have the HMaster be much 
 more available, either in a distributed sense (meaning a big rewrite) or with 
 multiple instances and automatic failover. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5313) Restructure hfiles layout for better compression

2012-02-08 Thread Lars Hofhansl (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203899#comment-13203899
 ] 

Lars Hofhansl commented on HBASE-5313:
--

Presumably storing the keys together might lend itself to better compression.
Do we need to index values then? In that case we'd use up more space. Or how 
else would we find the value belonging to a key?
I suppose we could store the value length with each key; then, knowing we have 
the nth key, we could sum the value lengths of keys 1 to n-1 to find its value.
Or store the lengths with the values and scan the keys and values in parallel.
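The implicit-pointer arithmetic in that middle option is tiny; a sketch 
(valueLengths stands for the lengths read while walking the key-section):

{code}
// Hedged sketch: with values stored in key order, the nth value's offset in
// the value-section is just the sum of the value lengths of keys 0..n-1.
static int valueOffset(int[] valueLengths, int n) {
  int offset = 0;
  for (int i = 0; i < n; i++) {
    offset += valueLengths[i];   // skip over the values of earlier keys
  }
  return offset;                 // nth value starts here
}
{code}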

 Restructure hfiles layout for better compression
 

 Key: HBASE-5313
 URL: https://issues.apache.org/jira/browse/HBASE-5313
 Project: HBase
  Issue Type: Improvement
  Components: io
Reporter: dhruba borthakur
Assignee: dhruba borthakur

 An HFile block contains a stream of key-values. Can we organize these kvs 
 on the disk in a better way so that we get much greater compression ratios?
 One option (thanks Prakash) is to store all the keys in the beginning of the 
 block (let's call this the key-section) and then store all their 
 corresponding values towards the end of the block. This will allow us to 
 not-even decompress the values when we are scanning and skipping over rows in 
 the block.
 Any other ideas? 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5319) TRUNK broke since hbase-4218 went in? TestHFileBlock OOMEs

2012-02-08 Thread Lars Hofhansl (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203901#comment-13203901
 ] 

Lars Hofhansl commented on HBASE-5319:
--

@Mikhail: Are you planning to look at this? Ted pointed out that this might be 
a 0.94 blocker and I concur.


 TRUNK broke since hbase-4218 went in?  TestHFileBlock OOMEs
 ---

 Key: HBASE-5319
 URL: https://issues.apache.org/jira/browse/HBASE-5319
 Project: HBase
  Issue Type: Bug
Reporter: stack

 Check it out...https://builds.apache.org/job/HBase-TRUNK/  Mikhail, you might 
 know whats up.  Else, will have a looksee...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5353) HA/Distributed HMaster via RegionServers

2012-02-08 Thread Jimmy Xiang (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203909#comment-13203909
 ] 

Jimmy Xiang commented on HBASE-5353:


Another option is not to have a master at all: every region server can do the 
work a master currently does, just using ZK to coordinate them.
For example, once a region server dies, all the other region servers know 
about it and all try to run the dead-server cleanup, but only one will 
actually do it.  The drawback here is too much zk interaction.

 HA/Distributed HMaster via RegionServers
 

 Key: HBASE-5353
 URL: https://issues.apache.org/jira/browse/HBASE-5353
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver
Affects Versions: 0.94.0
Reporter: Jesse Yates
Priority: Minor

 Currently, the HMaster node must be considered a 'special' node (single point 
 of failure), meaning that the node must be protected more than the other 
 commodity machines. It should be possible to instead have the HMaster be much 
 more available, either in a distributed sense (meaning a big rewrite) or with 
 multiple instances and automatic failover. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5313) Restructure hfiles layout for better compression

2012-02-08 Thread Prakash Khemani (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203918#comment-13203918
 ] 

Prakash Khemani commented on HBASE-5313:


The values can be kept compressed in memory. We can uncompress them on
demand when writing out the key-values during rpc or compactions.

The key has to have a pointer to the values. The pointer can be implicit
and can be derived from value lengths if all the values are stored in the
same order as keys.

The value pointer has to be explicit if the values are stored in a
different order than the keys. We might want to write out the values in a
different order if we want to do per column compression. While writing out
the HFileBlock the following can be done - group all the values by their
column identifier, independently compress and write out each group of
values, go back to the keys and update the value pointers.




 Restructure hfiles layout for better compression
 

 Key: HBASE-5313
 URL: https://issues.apache.org/jira/browse/HBASE-5313
 Project: HBase
  Issue Type: Improvement
  Components: io
Reporter: dhruba borthakur
Assignee: dhruba borthakur

 An HFile block contains a stream of key-values. Can we organize these kvs 
 on the disk in a better way so that we get much greater compression ratios?
 One option (thanks Prakash) is to store all the keys in the beginning of the 
 block (let's call this the key-section) and then store all their 
 corresponding values towards the end of the block. This will allow us to 
 not-even decompress the values when we are scanning and skipping over rows in 
 the block.
 Any other ideas? 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5270) Handle potential data loss due to concurrent processing of processFaileOver and ServerShutdownHandler

2012-02-08 Thread stack (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203921#comment-13203921
 ] 

stack commented on HBASE-5270:
--

I was taking a look through HBASE-5179 and HBASE-4748 again, the two issues 
that spawned this one (Both are in synopsis about master failover with 
concurrent servershutdown handler running).  I have also been looking at 
HBASE-5344
[89-fb] Scan unassigned region directory on master failover.

HBASE-5179 starts out observing that we can miss edits if a server is 
discovered to be dead AFTER master failover has started splitting logs: we'll 
notice it's dead and assign out its regions before we've had a chance to split 
its logs.  The way fb deals with this in hbase-5344 is not to process zookeeper 
events that come in during master failover.  They queue them instead and only 
start in on the processing after the master is up.

Chunhui does something like this in his original patch by adding any server 
currently being processed by server shutdown to the list of regionservers whose 
logs we should not split.  The fb way of temporarily halting the callback 
processing seems more airtight.

HBASE-5179 is then extended to include, as in scope, the processing of servers 
carrying root and meta (hbase-4748) that crash during master failover.  We need 
to consider the cases where a server crashes AFTER master failover distributed 
log splitting has started but before we run the verifications of meta and root 
locations.

Currently we'll expire the server that is unresponsive when we go to verify 
root and meta locations.  The notion is that the meta regions will be assigned 
by the server shutdown handler.  The fb technique of turning off processing of zk 
events would mess with our existing handling code here -- but I'm not too 
confident the code is going to do the right thing since it has no tests of this 
predicament, and the scenarios look like they could be pretty varied (root is 
offline only, meta server has crashed only, a server with both root and meta 
has crashed, etc.).  In hbase-5344, fb will go query each regionserver for the 
regions it's currently hosting (and look in zk to see what rs are up).  Maybe 
we need some of this from 89-fb in trunk but I'm not clear on it just yet; 
would need more study of the current state of trunk and then of what is 
happening over in 89-fb.

One thing I think we should do to lessen the number of code paths we can take 
on failover is the long-talked-of purge of the root region.  This should 
cut down on the number of states we need to deal with and make failure 
states on failover easier to reason about.

 Handle potential data loss due to concurrent processing of processFaileOver 
 and ServerShutdownHandler
 -

 Key: HBASE-5270
 URL: https://issues.apache.org/jira/browse/HBASE-5270
 Project: HBase
  Issue Type: Sub-task
  Components: master
Reporter: Zhihong Yu
 Fix For: 0.94.0, 0.92.1


 This JIRA continues the effort from HBASE-5179. Starting with Stack's 
 comments about patches for 0.92 and TRUNK:
 Reviewing 0.92v17
 isDeadServerInProgress is a new public method in ServerManager but it does 
 not seem to be used anywhere.
 Does isDeadRootServerInProgress need to be public? Ditto for meta version.
 The method param names are not right: 'definitiveRootServer'; what is meant 
 by definitive? Do they need this qualifier?
 Is there anything in place to stop us expiring a server twice if its carrying 
 root and meta?
 What is the difference between asking the assignment manager isCarryingRoot and this 
 variable that is passed in? Should be doc'd at least. Ditto for meta.
 I think I've asked for this a few times - onlineServers needs to be 
 explained... either in javadoc or in comment. This is the param passed into 
 joinCluster. How does it arise? I think I know but am unsure. God love the 
 poor noob that comes awandering this code trying to make sense of it all.
 It looks like we get the list by trawling zk for regionserver znodes that 
 have not checked in. Don't we do this operation earlier in master setup? Are 
 we doing it again here?
 Though distributed log splitting is configured, with this patch we will do 
 single-process splitting in the master under some conditions.  It's not 
 explained in the code why we would do this.  Why do we think master log 
 splitting is 'high priority' when it could very well be slower?  Should we 
 only go this route if distributed splitting is not going on?  Do we know if 
 concurrent distributed log splitting and master splitting works?
 Why would we have dead servers in progress here in master startup? Because a 
 servershutdownhandler fired?
 This patch is different to the patch for 0.90.  Should go into trunk first 
 with tests, then 0.92.  Should it be in this issue?  This issue is really hard 
 to follow now.  Maybe this issue is for 0.90.x and a new issue for more work on 
 this trunk patch?
 This patch needs to have the v18 differences applied.

[jira] [Commented] (HBASE-5270) Handle potential data loss due to concurrent processing of processFaileOver and ServerShutdownHandler

2012-02-08 Thread stack (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203923#comment-13203923
 ] 

stack commented on HBASE-5270:
--

HBASE-3171 is the issue to purge root.

 Handle potential data loss due to concurrent processing of processFaileOver 
 and ServerShutdownHandler
 -

 Key: HBASE-5270
 URL: https://issues.apache.org/jira/browse/HBASE-5270
 Project: HBase
  Issue Type: Sub-task
  Components: master
Reporter: Zhihong Yu
 Fix For: 0.94.0, 0.92.1


 This JIRA continues the effort from HBASE-5179. Starting with Stack's 
 comments about patches for 0.92 and TRUNK:
 Reviewing 0.92v17
 isDeadServerInProgress is a new public method in ServerManager but it does 
 not seem to be used anywhere.
 Does isDeadRootServerInProgress need to be public? Ditto for meta version.
 The method param names are not right: 'definitiveRootServer'; what is meant 
 by definitive? Do they need this qualifier?
 Is there anything in place to stop us expiring a server twice if its carrying 
 root and meta?
 What is the difference between asking the assignment manager isCarryingRoot and this 
 variable that is passed in? Should be doc'd at least. Ditto for meta.
 I think I've asked for this a few times - onlineServers needs to be 
 explained... either in javadoc or in comment. This is the param passed into 
 joinCluster. How does it arise? I think I know but am unsure. God love the 
 poor noob that comes awandering this code trying to make sense of it all.
 It looks like we get the list by trawling zk for regionserver znodes that 
 have not checked in. Don't we do this operation earlier in master setup? Are 
 we doing it again here?
 Though distributed log splitting is configured, with this patch we will do 
 single-process splitting in the master under some conditions.  It's not 
 explained in the code why we would do this.  Why do we think master log 
 splitting is 'high priority' when it could very well be slower?  Should we 
 only go this route if distributed splitting is not going on?  Do we know if 
 concurrent distributed log splitting and master splitting works?
 Why would we have dead servers in progress here in master startup? Because a 
 servershutdownhandler fired?
 This patch is different to the patch for 0.90. Should go into trunk first 
 with tests, then 0.92. Should it be in this issue? This issue is really hard 
 to follow now. Maybe this issue is for 0.90.x and a new issue for more work on 
 this trunk patch?
 This patch needs to have the v18 differences applied.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5313) Restructure hfiles layout for better compression

2012-02-08 Thread He Yongqiang (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203935#comment-13203935
 ] 

He Yongqiang commented on HBASE-5313:
-

I suppose we could use the value length from the key: once we know we have the 
nth key, we can sum the value lengths of keys 1 to n-1 to find the value's offset.
Yes. The value length is stored in the key header. The key header is cheap and 
can always be decompressed without a big CPU cost.
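
In other words (a toy illustration of the implicit-pointer case, not actual
HFile code):

{code:java}
/** Toy illustration: with values laid out in key order, value pointers are
 *  implicit -- the offset of value n is the sum of the lengths of values
 *  0..n-1, which come cheaply from the key headers. */
class ImplicitPointer {
  static int valueOffset(int[] valueLengths, int n) {
    int offset = 0;
    for (int i = 0; i < n; i++) {
      offset += valueLengths[i];   // value length decoded from key i's header
    }
    return offset;
  }
}
{code}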

 Restructure hfiles layout for better compression
 

 Key: HBASE-5313
 URL: https://issues.apache.org/jira/browse/HBASE-5313
 Project: HBase
  Issue Type: Improvement
  Components: io
Reporter: dhruba borthakur
Assignee: dhruba borthakur

 An HFile block contains a stream of key-values. Can we organize these KVs 
 on disk in a better way so that we get much greater compression ratios?
 One option (thanks Prakash) is to store all the keys in the beginning of the 
 block (let's call this the key-section) and then store all their 
 corresponding values towards the end of the block. This will allow us to 
 not even decompress the values when we are scanning and skipping over rows in 
 the block.
 Any other ideas? 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-3071) Graceful decommissioning of a regionserver

2012-02-08 Thread Jonathan Hsieh (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-3071:
--

Fix Version/s: 0.90.3

Updated jira to include the 0.90.x version it was included in.

 Graceful decommissioning of a regionserver
 --

 Key: HBASE-3071
 URL: https://issues.apache.org/jira/browse/HBASE-3071
 Project: HBase
  Issue Type: Improvement
Reporter: stack
Assignee: stack
 Fix For: 0.90.3, 0.92.0

 Attachments: 3071-v5.txt, 3071.txt, 3701-v2.txt, 3701-v3.txt


 Currently if you stop a regionserver nicely, it'll put up its stopping flag 
 and then close all hosted regions.  While the stopping flag is in place all 
 region requests are rejected.  If this server was under load, closing could 
 take a while.  Only after all is closed is the master informed and it'll 
 restart assigning (in the old master, the master would get a report with a list 
 of all regions closed; in the new master the zk expiry is triggered and we'll 
 run the shutdown handler).
 At least in the new master, we have means of disabling the balancer, and then moving 
 the regions off the server one by one via HBaseAdmin methods -- we should 
 write a script to do this at least for rolling restarts -- but we need 
 something better.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5355) Compressed RPC's for HBase

2012-02-08 Thread Karthik Ranganathan (Created) (JIRA)
Compressed RPC's for HBase
--

 Key: HBASE-5355
 URL: https://issues.apache.org/jira/browse/HBASE-5355
 Project: HBase
  Issue Type: Improvement
  Components: ipc
Affects Versions: 0.89.20100924
Reporter: Karthik Ranganathan
Assignee: Karthik Ranganathan


Some applications need the ability to do large batched writes and reads from a 
remote MR cluster. These eventually get bottlenecked on the network. The 
results are also often quite compressible.

The aim here is to add the ability to do compressed calls to the server on both 
the send and receive paths.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5313) Restructure hfiles layout for better compression

2012-02-08 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204031#comment-13204031
 ] 

Todd Lipcon commented on HBASE-5313:


I'm curious what the expected compression gain would be. Has anyone tried 
rearranging an example of a production hfile block and recompressing to see 
the difference?

My thinking is that typical LZ-based compression (e.g. Snappy) uses a hashtable 
of up to 16K entries or so for common-substring identification. So I 
don't know that it would do a much better job with the common keys if 
they were all grouped at the front of the block - so long as the keyval pairs 
are less than a few hundred bytes apart, it should still find them OK.

Of course the other gains (storing large values compressed in RAM for example) 
seem good.
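
One cheap way to answer the question is to take the KVs from a real block,
compress them both interleaved and rearranged, and compare sizes. A rough
sketch of such an experiment, hedged in that it uses the JDK's Deflater rather
than Snappy (which is not in the JDK), so the hashtable behavior described
above will differ somewhat:

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.List;
import java.util.zip.Deflater;

/** Rough experiment: compare the compressed size of interleaved KVs vs. a
 *  keys-first/values-last rearrangement of the same block contents. */
class RearrangeExperiment {
  static int compressedSize(byte[] data) {
    Deflater d = new Deflater();
    d.setInput(data);
    d.finish();
    byte[] buf = new byte[4096];
    int total = 0;
    while (!d.finished()) {
      total += d.deflate(buf);   // count output bytes, discard them
    }
    d.end();
    return total;
  }

  static void compare(List<byte[]> keys, List<byte[]> values) throws IOException {
    ByteArrayOutputStream interleaved = new ByteArrayOutputStream();
    ByteArrayOutputStream rearranged = new ByteArrayOutputStream();
    for (int i = 0; i < keys.size(); i++) {   // key1 val1 key2 val2 ...
      interleaved.write(keys.get(i));
      interleaved.write(values.get(i));
    }
    for (byte[] k : keys) rearranged.write(k); // key1 key2 ... val1 val2 ...
    for (byte[] v : values) rearranged.write(v);
    System.out.printf("interleaved=%d rearranged=%d%n",
        compressedSize(interleaved.toByteArray()),
        compressedSize(rearranged.toByteArray()));
  }
}
{code}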

 Restructure hfiles layout for better compression
 

 Key: HBASE-5313
 URL: https://issues.apache.org/jira/browse/HBASE-5313
 Project: HBase
  Issue Type: Improvement
  Components: io
Reporter: dhruba borthakur
Assignee: dhruba borthakur

 An HFile block contains a stream of key-values. Can we organize these KVs 
 on disk in a better way so that we get much greater compression ratios?
 One option (thanks Prakash) is to store all the keys in the beginning of the 
 block (let's call this the key-section) and then store all their 
 corresponding values towards the end of the block. This will allow us to 
 not even decompress the values when we are scanning and skipping over rows in 
 the block.
 Any other ideas? 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5353) HA/Distributed HMaster via RegionServers

2012-02-08 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204036#comment-13204036
 ] 

Todd Lipcon commented on HBASE-5353:


Given that we already have failover support for the master, I'm skeptical that 
adding any complexity here is a good idea. If you want to colocate masters and 
RS, you can simply run a master process on a few of your RS nodes, and 
basically have the same behavior.

What's the compelling use case? The master is _not_ a SPOF since we already 
have hot failover support.

 HA/Distributed HMaster via RegionServers
 

 Key: HBASE-5353
 URL: https://issues.apache.org/jira/browse/HBASE-5353
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver
Affects Versions: 0.94.0
Reporter: Jesse Yates
Priority: Minor

 Currently, the HMaster node must be considered a 'special' node (single point 
 of failure), meaning that the node must be protected more than the other 
 commodity machines. It should be possible to instead have the HMaster be much 
 more available, either in a distributed sense (meaning a big rewrite) or with 
 multiple instances and automatic failover. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5353) HA/Distributed HMaster via RegionServers

2012-02-08 Thread Jesse Yates (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204043#comment-13204043
 ] 

Jesse Yates commented on HBASE-5353:


bq. have failover support for the master,

oh right. But this means you don't need to worry about where you run your 
master - the system takes care of all of that for you, making the startup 
process easier. Works well here b/c we have the master down to such a 
lightweight process.

 HA/Distributed HMaster via RegionServers
 

 Key: HBASE-5353
 URL: https://issues.apache.org/jira/browse/HBASE-5353
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver
Affects Versions: 0.94.0
Reporter: Jesse Yates
Priority: Minor

 Currently, the HMaster node must be considered a 'special' node (single point 
 of failure), meaning that the node must be protected more than the other 
 commodity machines. It should be possible to instead have the HMaster be much 
 more available, either in a distributed sense (meaning a big rewrite) or with 
 multiple instances and automatic failover. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5355) Compressed RPC's for HBase

2012-02-08 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204046#comment-13204046
 ] 

Todd Lipcon commented on HBASE-5355:


I've been mulling this over in the back of my head recently with regards to the 
work just getting started on adding extensible RPC. Here are a few thoughts:

A lot of our current lack of efficiency can be dealt with by simply avoiding 
multiple copies of the same data. The most egregious example: when we serialize 
columns, we serialize each KeyValue independently, even though they all share 
the same row key.
One potential solution I've been thinking about:
- introduce the concept of a constant pool which is associated with an RPC 
request or response. This pool would be serialized on the wire before the 
actual request/response and might be encoded like:
{code}
<total byte length of constant pool> <number of constants>
<constant 1 len> <constant 1 val>
<constant 2 len> <constant 2 val>
...
{code}
Then in the actual serialization of KeyValues, etc, we would not write out the 
data, but rather indexes into the constant pool.
The advantages to this kind of technique would be:
- pushing all of the data near each other in the packet would make any 
compression more beneficial (rather than interleaving compressible user data 
with less compressible encoded information)
- allows multiple parts of a request/response to reference the same byte arrays 
(eg multiple columns referring to the same row key)
- allows zero-copy implementations even if we use protobufs to encode the 
actual call/response

This idea might be orthogonal to the compression discussed above, but may be a 
cheaper (CPU-wise) way of getting a similar effect.
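
A minimal sketch of such a pool (the encoding and names are hypothetical,
following the byte layout above; real RPC integration would be a separate
matter):

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical constant pool: each distinct byte[] is written once; callers
 *  serialize KeyValues as integer indexes into the pool instead of raw bytes. */
class ConstantPoolSketch {
  private final Map<ByteBuffer, Integer> index = new LinkedHashMap<>();
  private final List<byte[]> constants = new ArrayList<>();

  /** Intern a byte[] (e.g. a row key) and return its pool index. */
  int intern(byte[] data) {
    return index.computeIfAbsent(ByteBuffer.wrap(data), k -> {
      constants.add(data);
      return constants.size() - 1;
    });
  }

  /** Wire format from above: total length, count, then (len, val) pairs. */
  void writeTo(DataOutputStream out) throws IOException {
    ByteArrayOutputStream body = new ByteArrayOutputStream();
    DataOutputStream b = new DataOutputStream(body);
    b.writeInt(constants.size());
    for (byte[] c : constants) {
      b.writeInt(c.length);
      b.write(c);
    }
    out.writeInt(body.size());   // total byte length of the constant pool
    body.writeTo(out);
  }
}
{code}

A KeyValue would then serialize as small integer indexes into the pool, so
multiple columns of the same row reference one copy of the row key in the
packet.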

 Compressed RPC's for HBase
 --

 Key: HBASE-5355
 URL: https://issues.apache.org/jira/browse/HBASE-5355
 Project: HBase
  Issue Type: Improvement
  Components: ipc
Affects Versions: 0.89.20100924
Reporter: Karthik Ranganathan
Assignee: Karthik Ranganathan

 Some applications need the ability to do large batched writes and reads from a 
 remote MR cluster. These eventually get bottlenecked on the network. The 
 results are also often quite compressible.
 The aim here is to add the ability to do compressed calls to the server on 
 both the send and receive paths.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5353) HA/Distributed HMaster via RegionServers

2012-02-08 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204048#comment-13204048
 ] 

Todd Lipcon commented on HBASE-5353:


bq. But this means you don't need to worry about where you run your master

Except it opens a new can of worms: where do you find the master UI? how do you 
monitor your master if it moves around? how do you easily find the master logs 
when it could be anywhere in the cluster?

If your goal is to automatically pick a system to run a master on, you could 
have your cluster management software do that, but I only see additional 
complexity being introduced if you add this to HBase proper.

 HA/Distributed HMaster via RegionServers
 

 Key: HBASE-5353
 URL: https://issues.apache.org/jira/browse/HBASE-5353
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver
Affects Versions: 0.94.0
Reporter: Jesse Yates
Priority: Minor

 Currently, the HMaster node must be considered a 'special' node (single point 
 of failure), meaning that the node must be protected more than the other 
 commodity machines. It should be possible to instead have the HMaster be much 
 more available, either in a distributed sense (meaning a big rewrite) or with 
 multiple instances and automatic failover. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5311) Allow inmemory Memstore compactions

2012-02-08 Thread Raghu Angadi (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204049#comment-13204049
 ] 

Raghu Angadi commented on HBASE-5311:
-

should 'state' be closed _before_ being updated?
{noformat}
state = newState;
oldState.gate.closeGateAndFlushThreads();
{noformat}

 Allow inmemory Memstore compactions
 ---

 Key: HBASE-5311
 URL: https://issues.apache.org/jira/browse/HBASE-5311
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
 Attachments: InternallyLayeredMap.java


 Just like we periodically compact the StoreFiles we should also periodically 
 compact the MemStore.
 During these compactions we eliminate deleted cells, expired cells, cells to 
 be removed because of version count, etc., before we even do a memstore flush.
 Besides the optimization that we could get from this, it should also allow us 
 to remove the special handling of ICV, Increment, and Append (all of which 
 use upsert logic to avoid accumulating excessive cells in the Memstore).
 Not targeting this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5356) region_mover.rb can hang if table region it belongs to is deleted.

2012-02-08 Thread Jonathan Hsieh (Created) (JIRA)
region_mover.rb can hang if table region it belongs to is deleted.
--

 Key: HBASE-5356
 URL: https://issues.apache.org/jira/browse/HBASE-5356
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0, 0.90.3, 0.94.0
Reporter: Jonathan Hsieh
Priority: Minor


I was testing the region_mover.rb script on a loaded hbase and noticed that it 
can hang (thus hanging graceful shutdown) if a region that it is attempting to 
move gets deleted (by a table delete operation).

Here's the start of the relevant stack dump
{code}
12/02/08 13:27:13 WARN client.HConnectionManager$HConnectionImplementation: 
Encountered problems when prefetch META table:
org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
table: TestLoadAndVerify_1328735001040, row=TestLoadAnd\
Verify_1328735001040,yC^P\xD7\x945\xD4,99
at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:136)
at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:95)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:64\
9)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:703\
)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:594)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.relocateRegion(HConnectionManager.java:565)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:416)
at 
org.apache.hadoop.hbase.client.ServerCallable.instantiateServer(ServerCallable.java:57)
at 
org.apache.hadoop.hbase.client.ScannerCallable.instantiateServer(ScannerCallable.java:63)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.\
java:1018)
at 
org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1104)
at 
org.apache.hadoop.hbase.client.HTable$ClientScanner.initialize(HTable.java:1027)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:535)
at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:525)
at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:380)
at 
org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:58)
at 
org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:137)
at 
usr.lib.hbase.bin.region_mover.method__7$RUBY$isSuccessfulScan(/usr/lib/hbase/bin/region_mover.rb:133)
at 
usr$lib$hbase$bin$region_mover#method__7$RUBY$isSuccessfulScan.call(usr$lib$hbase$bin$region_mover#method__7$RUBY$isSucces\
sfulScan:65535)
at 
usr$lib$hbase$bin$region_mover#method__7$RUBY$isSuccessfulScan.call(usr$lib$hbase$bin$region_mover#method__7$RUBY$isSucces\
sfulScan:65535)

{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5356) region_mover.rb can hang if table region it belongs to is deleted.

2012-02-08 Thread Jonathan Hsieh (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204052#comment-13204052
 ] 

Jonathan Hsieh commented on HBASE-5356:
---

Looks like we just need to properly catch TableNotFoundExceptions and continue. 
 

Also we should probably abort if it gets another kind of exception that it 
cannot handle.

A separate issue should probably update the graceful shutdown script so that it 
fails fast on unexpected failures as well.
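
Sketched in Java for concreteness (the script itself is JRuby; 'scanRegion'
below is a stand-in for the script's isSuccessfulScan probe, not its real
signature):

{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.TableNotFoundException;

/** Sketch of the proposed handling, with assumed names: skip regions whose
 *  table has vanished, and let any other IOException propagate so the caller
 *  can fail fast instead of hanging the graceful shutdown. */
class MoveRegionSketch {
  interface RegionProbe {
    void scanRegion(String regionName) throws IOException;  // small probe scan
  }

  /** Returns true if the region is live and safe to move, false to skip it. */
  static boolean shouldMove(RegionProbe probe, String regionName) throws IOException {
    try {
      probe.scanRegion(regionName);
      return true;
    } catch (TableNotFoundException e) {
      return false;   // table deleted underneath us: skip, don't hang
    }
    // any other IOException propagates to the caller, which should abort
  }
}
{code}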

 region_mover.rb can hang if table region it belongs to is deleted.
 --

 Key: HBASE-5356
 URL: https://issues.apache.org/jira/browse/HBASE-5356
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.3, 0.94.0, 0.92.0
Reporter: Jonathan Hsieh
Priority: Minor

 I was testing the region_mover.rb script on a loaded hbase and noticed that 
 it can hang (thus hanging graceful shutdown) if a region that it is 
 attempting to move gets deleted (by a table delete operation).
 Here's the start of the relevant stack dump
 {code}
 12/02/08 13:27:13 WARN client.HConnectionManager$HConnectionImplementation: 
 Encountered problems when prefetch META table:
 org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
 table: TestLoadAndVerify_1328735001040, row=TestLoadAnd\
 Verify_1328735001040,yC^P\xD7\x945\xD4,99
 at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:136)
 at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:95)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:64\
 9)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:703\
 )
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:594)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.relocateRegion(HConnectionManager.java:565)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:416)
 at 
 org.apache.hadoop.hbase.client.ServerCallable.instantiateServer(ServerCallable.java:57)
 at 
 org.apache.hadoop.hbase.client.ScannerCallable.instantiateServer(ScannerCallable.java:63)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.\
 java:1018)
 at 
 org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1104)
 at 
 org.apache.hadoop.hbase.client.HTable$ClientScanner.initialize(HTable.java:1027)
 at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:535)
 at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:525)
 at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:380)
 at 
 org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:58)
 at 
 org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:137)
 at 
 usr.lib.hbase.bin.region_mover.method__7$RUBY$isSuccessfulScan(/usr/lib/hbase/bin/region_mover.rb:133)
 at 
 usr$lib$hbase$bin$region_mover#method__7$RUBY$isSuccessfulScan.call(usr$lib$hbase$bin$region_mover#method__7$RUBY$isSucces\
 sfulScan:65535)
 at 
 usr$lib$hbase$bin$region_mover#method__7$RUBY$isSuccessfulScan.call(usr$lib$hbase$bin$region_mover#method__7$RUBY$isSucces\
 sfulScan:65535)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5353) HA/Distributed HMaster via RegionServers

2012-02-08 Thread Jesse Yates (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204051#comment-13204051
 ] 

Jesse Yates commented on HBASE-5353:


bq. where do you find the master UI? how do you monitor your master if it moves 
around? how do you easily find the master logs when it could be anywhere in the 
cluster?

The cluster knows about it, so you can have a link on the webui to the master 
or any of the region servers. As stack was saying above, the region server page 
would have a link to the master page. Same deal with the logs (or using 
something like the hbscan stuff from fbook).

bq. if your goal is to automatically pick a system to run a master on, you 
could have your cluster management software do that

True, but if those masters fail over, then your cluster management needs to be 
aware enough of that to provision more, on different servers; afaik, this is a 
pain to do in a really 'cluster aware' sense. This way, it's all handled under 
the covers.

 HA/Distributed HMaster via RegionServers
 

 Key: HBASE-5353
 URL: https://issues.apache.org/jira/browse/HBASE-5353
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver
Affects Versions: 0.94.0
Reporter: Jesse Yates
Priority: Minor

 Currently, the HMaster node must be considered a 'special' node (single point 
 of failure), meaning that the node must be protected more than the other 
 commodity machines. It should be possible to instead have the HMaster be much 
 more available, either in a distributed sense (meaning a big rewrite) or with 
 multiple instances and automatic failover. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5311) Allow inmemory Memstore compactions

2012-02-08 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204055#comment-13204055
 ] 

Todd Lipcon commented on HBASE-5311:


bq. should 'state' be closed before being updated?

I don't think it matters from a correctness standpoint. If we close the old 
state before updating the state variable, then all concurrent accessors 
beginning at this point will sit and spin while any previously running 
accessors finish their work. If we set the new state first, then any new 
accessors can immediately proceed so we don't cause any hiccup in access.

The semantics of this whole data structure are a little subtle though so we'll 
have to be careful to expose only a minimal API and clearly document where we 
might have strange causality relations, etc.
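
A bare-bones sketch of the set-then-close ordering (names are assumptions
loosely modeled on the discussion, not the attached InternallyLayeredMap's
actual API):

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

/** Minimal sketch: publish the new state first so new accessors proceed
 *  immediately, then close the old state's gate and wait for in-flight
 *  accessors to drain. */
class SwapSketch {
  static class State {
    final AtomicInteger inFlight = new AtomicInteger();  // accessors inside
    volatile boolean open = true;

    void closeGateAndFlushThreads() {
      open = false;                        // block new entries to this state
      while (inFlight.get() > 0) {
        Thread.yield();                    // wait for in-flight accessors
      }
    }
  }

  volatile State state = new State();

  void swap(State newState) {
    State oldState = state;
    state = newState;                      // new accessors proceed immediately
    oldState.closeGateAndFlushThreads();   // drain anyone still in the old state
  }
}
{code}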

 Allow inmemory Memstore compactions
 ---

 Key: HBASE-5311
 URL: https://issues.apache.org/jira/browse/HBASE-5311
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
 Attachments: InternallyLayeredMap.java


 Just like we periodically compact the StoreFiles we should also periodically 
 compact the MemStore.
 During these compactions we eliminate deleted cells, expired cells, cells to 
 be removed because of version count, etc., before we even do a memstore flush.
 Besides the optimization that we could get from this, it should also allow us 
 to remove the special handling of ICV, Increment, and Append (all of which 
 use upsert logic to avoid accumulating excessive cells in the Memstore).
 Not targeting this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5353) HA/Distributed HMaster via RegionServers

2012-02-08 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204058#comment-13204058
 ] 

Todd Lipcon commented on HBASE-5353:


bq. The cluster knows about it, so you can have a link on the webui to the 
master or any of the region servers

And each of the potential masters publishes metrics to ganglia, so if you want 
to find the master metrics, you have to hunt around in the ganglia graphs for 
which master was active at that time?
And any cron jobs or nagios alerts you write need to first call some HBase 
utility to find the active master's IP via ZK in order to get to it?

bq. True, but if those masters fail over, then your cluster management needs to 
be aware enough of that to provision more, on different servers

If you have two masters on separate racks, and you have any reasonable 
monitoring, then your ops team will restart or provision a new one when they 
fail. I've never ever heard of this kind of scenario being a major cause of 
downtime.


The whole thing seems like a bad idea to me. I won't -1, but consider me -0.5.

 HA/Distributed HMaster via RegionServers
 

 Key: HBASE-5353
 URL: https://issues.apache.org/jira/browse/HBASE-5353
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver
Affects Versions: 0.94.0
Reporter: Jesse Yates
Priority: Minor

 Currently, the HMaster node must be considered a 'special' node (single point 
 of failure), meaning that the node must be protected more than the other 
 commodity machines. It should be possible to instead have the HMaster be much 
 more available, either in a distributed sense (meaning a big rewrite) or with 
 multiple instances and automatic failover. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5349) Automagically tweak global memstore and block cache sizes based on workload

2012-02-08 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204070#comment-13204070
 ] 

Zhihong Yu commented on HBASE-5349:
---

We currently don't maintain a moving average of read/write requests per region 
server.
What should be an effective measure for determining a read-heavy vs. write-heavy 
workload?
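
One candidate measure, purely for illustration since the thread leaves this
open, is an exponentially weighted moving average of read and write request
counts, which needs no window of stored samples:

{code:java}
/** Illustrative only: EWMA of read and write request rates. alpha controls
 *  how quickly old traffic is forgotten. */
class WorkloadEwma {
  private final double alpha;        // e.g. 0.2: recent samples dominate
  private double readRate, writeRate;

  WorkloadEwma(double alpha) {
    this.alpha = alpha;
  }

  /** Feed per-interval request counts, e.g. once per metrics period. */
  void sample(long reads, long writes) {
    readRate  = alpha * reads  + (1 - alpha) * readRate;
    writeRate = alpha * writes + (1 - alpha) * writeRate;
  }

  /** Fraction of traffic that is reads, in [0, 1]. */
  double readFraction() {
    double total = readRate + writeRate;
    return total == 0 ? 0.5 : readRate / total;  // no data: call it balanced
  }
}
{code}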

 Automagically tweak global memstore and block cache sizes based on workload
 ---

 Key: HBASE-5349
 URL: https://issues.apache.org/jira/browse/HBASE-5349
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
 Fix For: 0.94.0


 Hypertable does a neat thing where it changes the size given to the CellCache 
 (our MemStores) and Block Cache based on the workload. If you need an image, 
 scroll down to the bottom of this link: 
 http://www.hypertable.com/documentation/architecture/
 That'd be one less thing to configure.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5311) Allow inmemory Memstore compactions

2012-02-08 Thread Raghu Angadi (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204074#comment-13204074
 ] 

Raghu Angadi commented on HBASE-5311:
-

Yeah, theoretically stale values are expected anyway. And at a higher layer the 
memstore might allow only one accessor per key.

 Allow inmemory Memstore compactions
 ---

 Key: HBASE-5311
 URL: https://issues.apache.org/jira/browse/HBASE-5311
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
 Attachments: InternallyLayeredMap.java


 Just like we periodically compact the StoreFiles we should also periodically 
 compact the MemStore.
 During these compactions we eliminate deleted cells, expired cells, cells to 
 be removed because of version count, etc., before we even do a memstore flush.
 Besides the optimization that we could get from this, it should also allow us 
 to remove the special handling of ICV, Increment, and Append (all of which 
 use upsert logic to avoid accumulating excessive cells in the Memstore).
 Not targeting this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5349) Automagically tweak global memstore and block cache sizes based on workload

2012-02-08 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204078#comment-13204078
 ] 

Zhihong Yu commented on HBASE-5349:
---

w.r.t. the measure for determining workload, should the measure be computed 
solely based on one region server?
Or should this measure be relative to the workload on other region servers?

 Automagically tweak global memstore and block cache sizes based on workload
 ---

 Key: HBASE-5349
 URL: https://issues.apache.org/jira/browse/HBASE-5349
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
 Fix For: 0.94.0


 Hypertable does a neat thing where it changes the size given to the CellCache 
 (our MemStores) and Block Cache based on the workload. If you need an image, 
 scroll down to the bottom of this link: 
 http://www.hypertable.com/documentation/architecture/
 That'd be one less thing to configure.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5349) Automagically tweak global memstore and block cache sizes based on workload

2012-02-08 Thread Jean-Daniel Cryans (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204087#comment-13204087
 ] 

Jean-Daniel Cryans commented on HBASE-5349:
---

Good question. I don't think looking at requests is good enough... instead we 
could look at how both are used and whether there's an adjustment to be made. For 
example, if you have a read-heavy workload then the memstores would not see a 
lot of usage... same with write-heavy, where the block cache would be close to empty.

Those two are clear-cut; for the workloads in between it gets a bit 
harder. Maybe at first we shouldn't even try to optimize them.

I think it should also be done incrementally: move 3-5% of the heap from 
one place to the other every few minutes until it settles.
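
A sketch of that incremental loop; every threshold and name below is made up
for the example, not settled design:

{code:java}
/** Illustration of the 3-5% incremental shuffle described above: every few
 *  minutes, move a small slice of heap from the underused side to the busy
 *  one until usage settles. */
class HeapRebalancerSketch {
  double memstoreFraction = 0.40;    // fraction of total heap
  double blockCacheFraction = 0.40;
  static final double STEP = 0.04;   // move ~4% of heap per adjustment
  static final double MIN = 0.10;    // never starve either side completely

  /** Usage args are the fraction of each area actually in use, in [0, 1]. */
  void adjust(double memstoreUsage, double blockCacheUsage) {
    if (memstoreUsage > 0.9 && blockCacheUsage < 0.5
        && blockCacheFraction - STEP >= MIN) {
      blockCacheFraction -= STEP;    // write-heavy: grow the memstores
      memstoreFraction += STEP;
    } else if (blockCacheUsage > 0.9 && memstoreUsage < 0.5
        && memstoreFraction - STEP >= MIN) {
      memstoreFraction -= STEP;      // read-heavy: grow the block cache
      blockCacheFraction += STEP;
    }
    // in-between workloads: deliberately do nothing, as suggested above
  }
}
{code}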

 Automagically tweak global memstore and block cache sizes based on workload
 ---

 Key: HBASE-5349
 URL: https://issues.apache.org/jira/browse/HBASE-5349
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
 Fix For: 0.94.0


 Hypertable does a neat thing where it changes the size given to the CellCache 
 (our MemStores) and Block Cache based on the workload. If you need an image, 
 scroll down to the bottom of this link: 
 http://www.hypertable.com/documentation/architecture/
 That'd be one less thing to configure.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5357) Use builder pattern in StoreFile, HFile, and HColumnDescriptor instantiation

2012-02-08 Thread Mikhail Bautin (Created) (JIRA)
Use builder pattern in StoreFile, HFile, and HColumnDescriptor instantiation


 Key: HBASE-5357
 URL: https://issues.apache.org/jira/browse/HBASE-5357
 Project: HBase
  Issue Type: Improvement
Reporter: Mikhail Bautin
Assignee: Mikhail Bautin


We have five ways to create an HFile writer, two ways to create a StoreFile 
writer, and the sets of parameters keep changing, creating a lot of confusion, 
especially when porting patches across branches. The same thing is happening to 
HColumnDescriptor. I think we should move to a builder pattern solution, e.g.

{code:java}
  HFileWriter w = HFile.getWriterBuilder(conf, some common args)
      .setParameter1(value1)
      .setParameter2(value2)
      ...
      .instantiate();
{code}

Each parameter setter being on its own line will make merges/cherry-picks work 
properly, we will not have to even mention default parameters again, and we can 
eliminate a dozen impossible-to-remember constructors.
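
For concreteness, a self-contained illustration of the pattern (the class and
parameter names are placeholders, not the eventual HFile API):

{code:java}
/** Placeholder names only; illustrates the builder shape being proposed. */
class HypotheticalWriter {
  private final int blockSize;
  private final String compression;

  private HypotheticalWriter(Builder b) {
    this.blockSize = b.blockSize;
    this.compression = b.compression;
  }

  static Builder newBuilder() {
    return new Builder();
  }

  static class Builder {
    private int blockSize = 64 * 1024;    // defaults live in exactly one place
    private String compression = "none";

    Builder setBlockSize(int blockSize) { this.blockSize = blockSize; return this; }
    Builder setCompression(String c)    { this.compression = c; return this; }

    HypotheticalWriter instantiate() { return new HypotheticalWriter(this); }
  }
}

// Usage: one setter per line, so cherry-picks across branches merge cleanly.
//   HypotheticalWriter w = HypotheticalWriter.newBuilder()
//       .setBlockSize(128 * 1024)
//       .setCompression("gz")
//       .instantiate();
{code}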


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5356) region_mover.rb can hang if table region it belongs to is deleted.

2012-02-08 Thread Jonathan Hsieh (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-5356:
--

Description: 
I was testing the region_mover.rb script on a loaded hbase and noticed that it 
can hang (thus hanging graceful shutdown) if a region that it is attempting to 
move gets deleted (by a table delete operation).

Here's the start of the relevant stack dump
{code}
12/02/08 13:27:13 WARN client.HConnectionManager$HConnectionImplementation: 
Encountered problems when prefetch META table:
org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
table: TestLoadAndVerify_1328735001040, row=TestLoadAnd\
Verify_1328735001040,yC^P\xD7\x945\xD4,99
at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:136)
at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:95)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:64\
9)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:703\
)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:594)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.relocateRegion(HConnectionManager.java:565)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:416)
at 
org.apache.hadoop.hbase.client.ServerCallable.instantiateServer(ServerCallable.java:57)
at 
org.apache.hadoop.hbase.client.ScannerCallable.instantiateServer(ScannerCallable.java:63)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.\
java:1018)
at 
org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1104)
at 
org.apache.hadoop.hbase.client.HTable$ClientScanner.initialize(HTable.java:1027)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:535)
at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:525)
at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:380)
at 
org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:58)
at 
org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:137)
at 
usr.lib.hbase.bin.region_mover.method__7$RUBY$isSuccessfulScan(/usr/lib/hbase/bin/region_mover.rb:133)
at 
usr$lib$hbase$bin$region_mover#method__7$RUBY$isSuccessfulScan.call(usr$lib$hbase$bin$region_mover#method__7$RUBY$isSucces\
sfulScan:65535)
at 
usr$lib$hbase$bin$region_mover#method__7$RUBY$isSuccessfulScan.call(usr$lib$hbase$bin$region_mover#method__7$RUBY$isSucces\
sfulScan:65535)
at 
org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:171)
at 
usr.lib.hbase.bin.region_mover.block_4$RUBY$__for__(/usr/lib/hbase/bin/region_mover.rb:326)
at 
usr$lib$hbase$bin$region_mover#block_4$RUBY$__for__.call(usr$lib$hbase$bin$region_mover#block_4$RUBY$__for__:65535)
at org.jruby.runtime.CompiledBlock.yield(CompiledBlock.java:133)
at org.jruby.runtime.BlockBody.call(BlockBody.java:73)
at org.jruby.runtime.Block.call(Block.java:89)
at org.jruby.RubyProc.call(RubyProc.java:268)
at org.jruby.RubyProc.call(RubyProc.java:228)
at org.jruby.RubyProc$i$0$0$call.call(RubyProc$i$0$0$call.gen:65535)
at 
org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:209)
at 
org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:205)
at 
org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:137)
at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:103)
at org.jruby.ast.WhileNode.interpret(WhileNode.java:131)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:103)
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
at 
org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at 
org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:169)
at 
org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:171)
at 
org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:272)
at 

[jira] [Commented] (HBASE-5074) support checksums in HBase block cache

2012-02-08 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204106#comment-13204106
 ] 

Phabricator commented on HBASE-5074:


todd has commented on the revision [jira] [HBASE-5074] Support checksums in 
HBase block cache.

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/HConstants.java:598 typo: verification

  and still not sure what true/false means here... would be better to clarify 
either here or in src/main/resources/hbase-default.xml if you anticipate users 
ever changing this.

  If I set it to false does that mean I get no checksumming? or hdfs 
checksumming as before? please update the comment

  src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java:41-43 I 
think this API would be cleaner with the following changes:
  - rather than use the constant HFileBlock.HEADER_SIZE below, make the API:

  appendChecksums(ChecksumByteArrayOutputStream baos,
      int dataOffset, int dataLen,
      ChecksumType checksumType,
      int bytesPerChecksum)

  where it would checksum the data between dataOffset and dataOffset + dataLen, 
and append it to the baos
  src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java:73 same 
here, I think it's better to take the offset as a parameter instead of assume 
HFileBlock.HEADER_SIZE
  src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java:84 if this 
is performance critical, use DataOutputBuffer, presized to the right size, and then 
return its underlying buffer directly to avoid a copy and realloc
  src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java:123 seems 
strange that this is inconsistent with the above -- if the block doesn't have a 
checksum, why is that differently handled than if the block is from a prior 
version which doesn't have a checksum?
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:100 typo 
re-enable
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:79-80 should 
clarify which part of the data is checksummed.
  As I read the code, only the non-header data (ie the user data) is 
checksummed. Is this correct?
  It seems to me like this is potentially dangerous -- eg a flipped bit in an 
hfile block header might cause the compressedDataSize field to be read as 2GB 
or something, in which case the faulty allocation could cause the server to 
OOME. I think we need a checksum on the hfile block header as well.
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:824 rename to 
doCompressionAndChecksumming, and update javadoc
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:815 I was a 
bit confused by this at first - I think it would be nice to add a comment here 
saying:
  // set the header for the uncompressed bytes (for cache-on-write)

  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:852 this weird 
difference between compressed and uncompressed case could be improved, I think:
  Why not make the uncompressedBytesWithHeader leave free space for the 
checksums at the end of the array, and have it generate the checksums into that 
space?
  Or change generateChecksums to take another array as an argument, rather than 
having it append to the same 'baos'?

  It's currently quite confusing that onDiskChecksum ends up empty in the 
compressed case, even though we _did_ write a checksum lumped in with the 
onDiskBytesWithHeader.

  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1375-1379 
Similar to above comment about the block headers, I think we need to do our own 
checksumming on the hfile metadata itself -- what about a corruption in the 
file header? Alternatively we could always use the checksummed stream when 
loading the file-wide header which is probably much simpler
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1545 confused 
by this - if we don't have an HFileSystem, then wouldn't we assume that the 
checksumming is done by the underlying dfs, and not use hbase checksums?
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1580 s/it 
never changes/because it is marked final/
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1588-1590 this 
isn't thread-safe: multiple threads might decrement and skip -1, causing it to 
never get re-enabled.
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1599 add 
comment here // checksum verification failed
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1620-1623 msg 
should include file path
  src/main/java/org/apache/hadoop/hbase/util/HFileSystem.java:53 typo: delegate
  src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java:3620 Given we 
have rsServices.getFileSystem, why do we need to also pass this in?

REVISION DETAIL
  https://reviews.facebook.net/D1521


 support checksums in HBase block cache
 --

  

[jira] [Created] (HBASE-5358) HBaseObjectWritable should be able to serialize generic arrays not defined previously

2012-02-08 Thread Enis Soztutar (Created) (JIRA)
HBaseObjectWritable should be able to serialize generic arrays not defined 
previously
-

 Key: HBASE-5358
 URL: https://issues.apache.org/jira/browse/HBASE-5358
 Project: HBase
  Issue Type: Improvement
  Components: coprocessors, io
Reporter: Enis Soztutar
Assignee: Enis Soztutar


HBaseObjectWritable can encode Writable[]s, but cannot encode A[] where A 
extends Writable. This becomes an issue, for example, when adding a coprocessor 
method which takes A[] (see HBASE-5352). 
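
One plausible encoding (an assumption, not the actual patch) is to write the
array's component class name once, then each element, so A[] round-trips even
when A was not registered ahead of time:

{code:java}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.lang.reflect.Array;
import org.apache.hadoop.io.Writable;

/** Assumed approach, not the HBASE-5358 patch itself: encode the component
 *  class name once, then the length and each element's fields. */
class GenericArraySketch {
  static void writeArray(DataOutput out, Writable[] arr) throws IOException {
    out.writeUTF(arr.getClass().getComponentType().getName());
    out.writeInt(arr.length);
    for (Writable w : arr) {
      w.write(out);
    }
  }

  @SuppressWarnings("unchecked")
  static <A extends Writable> A[] readArray(DataInput in) throws IOException {
    try {
      Class<A> component = (Class<A>) Class.forName(in.readUTF());
      int len = in.readInt();
      A[] arr = (A[]) Array.newInstance(component, len);
      for (int i = 0; i < len; i++) {
        A a = component.getDeclaredConstructor().newInstance(); // needs a no-arg ctor
        a.readFields(in);
        arr[i] = a;
      }
      return arr;
    } catch (ReflectiveOperationException e) {
      throw new IOException(e);
    }
  }
}
{code}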

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5074) support checksums in HBase block cache

2012-02-08 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204107#comment-13204107
 ] 

Phabricator commented on HBASE-5074:


todd has commented on the revision [jira] [HBASE-5074] Support checksums in 
HBase block cache.

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/HConstants.java:598 typo: verification

  and still not sure what true/false means here... would be better to clarify 
either here or in src/main/resources/hbase-default.xml if you anticipate users 
ever changing this.

  If I set it to false does that mean I get no checksumming? or hdfs 
checksumming as before? please update the comment

  src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java:41-43 I 
think this API would be cleaner with the following changes:
  - rather than use the constant HFileBlock.HEADER_SIZE below, make the API:

  appendChecksums(ChecksumByteArrayOutputStream baos,
      int dataOffset, int dataLen,
      ChecksumType checksumType,
      int bytesPerChecksum)

  where it would checksum the data between dataOffset and dataOffset + dataLen, 
and append it to the baos
  src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java:73 same 
here, I think it's better to take the offset as a parameter instead of assume 
HFileBlock.HEADER_SIZE
  src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java:84 if this 
is performance critical, use DataOutputBuffer, presized to the right size, and then 
return its underlying buffer directly to avoid a copy and realloc
  src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java:123 seems 
strange that this is inconsistent with the above -- if the block doesn't have a 
checksum, why is that differently handled than if the block is from a prior 
version which doesn't have a checksum?
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:100 typo 
re-enable
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:79-80 should 
clarify which part of the data is checksummed.
  As I read the code, only the non-header data (ie the user data) is 
checksummed. Is this correct?
  It seems to me like this is potentially dangerous -- eg a flipped bit in an 
hfile block header might cause the compressedDataSize field to be read as 2GB 
or something, in which case the faulty allocation could cause the server to 
OOME. I think we need a checksum on the hfile block header as well.
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:824 rename to 
doCompressionAndChecksumming, and update javadoc
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:815 I was a 
bit confused by this at first - I think it would be nice to add a comment here 
saying:
  // set the header for the uncompressed bytes (for cache-on-write)

  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:852 this weird 
difference between compressed and uncompressed case could be improved, I think:
  Why not make the uncompressedBytesWithHeader leave free space for the 
checksums at the end of the array, and have it generate the checksums into that 
space?
  Or change generateChecksums to take another array as an argument, rather than 
having it append to the same 'baos'?

  It's currently quite confusing that onDiskChecksum ends up empty in the 
compressed case, even though we _did_ write a checksum lumped in with the 
onDiskBytesWithHeader.

  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1375-1379 
Similar to above comment about the block headers, I think we need to do our own 
checksumming on the hfile metadata itself -- what about a corruption in the 
file header? Alternatively we could always use the checksummed stream when 
loading the file-wide header which is probably much simpler
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1545 confused 
by this - if we don't have an HFileSystem, then wouldn't we assume that the 
checksumming is done by the underlying dfs, and not use hbase checksums?
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1580 s/it 
never changes/because it is marked final/
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1588-1590 this 
isn't thread-safe: multiple threads might decrement and skip -1, causing it to 
never get re-enabled.
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1599 add 
comment here // checksum verification failed
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1620-1623 msg 
should include file path
  src/main/java/org/apache/hadoop/hbase/util/HFileSystem.java:53 typo: delegate
  src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java:3620 Given we 
have rsServices.getFileSystem, why do we need to also pass this in?
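
  As a concrete illustration of the appendChecksums shape suggested above, a 
minimal sketch (ChecksumByteArrayOutputStream and ChecksumType are the names 
used in this review; the getBuffer accessor and getChecksumObject factory on 
them are assumptions):

  import java.io.DataOutputStream;
  import java.io.IOException;
  import java.util.zip.Checksum;

  static void appendChecksums(ChecksumByteArrayOutputStream baos,
                              int dataOffset, int dataLen,
                              ChecksumType checksumType,
                              int bytesPerChecksum) throws IOException {
    byte[] data = baos.getBuffer();                   // assumed accessor
    Checksum sum = checksumType.getChecksumObject();  // assumed factory
    DataOutputStream out = new DataOutputStream(baos);
    // Checksum each bytesPerChecksum-sized chunk of [dataOffset,
    // dataOffset + dataLen) and append the 4-byte values to the same stream.
    for (int off = dataOffset; off < dataOffset + dataLen;
         off += bytesPerChecksum) {
      int len = Math.min(bytesPerChecksum, dataOffset + dataLen - off);
      sum.reset();
      sum.update(data, off, len);
      out.writeInt((int) sum.getValue());
    }
    out.flush();
  }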

REVISION DETAIL
  https://reviews.facebook.net/D1521


 support checksums in HBase block cache
 --

  

[jira] [Commented] (HBASE-5357) Use builder pattern in StoreFile, HFile, and HColumnDescriptor instantiation

2012-02-08 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204108#comment-13204108
 ] 

Todd Lipcon commented on HBASE-5357:


+1!

 Use builder pattern in StoreFile, HFile, and HColumnDescriptor instantiation
 

 Key: HBASE-5357
 URL: https://issues.apache.org/jira/browse/HBASE-5357
 Project: HBase
  Issue Type: Improvement
Reporter: Mikhail Bautin
Assignee: Mikhail Bautin

 We have five ways to create an HFile writer, two ways to create a StoreFile 
 writer, and the sets of parameters keep changing, creating a lot of 
 confusion, especially when porting patches across branches. The same thing is 
 happening to HColumnDescriptor. I think we should move to a builder pattern 
 solution, e.g.
 {code:java}
   HFileWriter w = HFile.getWriterBuilder(conf, some common args)
   .setParameter1(value1)
   .setParameter2(value2)
   ...
   .instantiate();
 {code}
 Each parameter setter being on its own line will make merges/cherry-pick 
 work properly, we will not have to even mention default parameters again, and 
 we can eliminate a dozen impossible-to-remember constructors.
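
 A minimal sketch of what such a builder could look like, to make the proposal 
 concrete. All names here (ExampleWriter, setBlockSize, and so on) are 
 illustrative assumptions, not the actual HFile API:
 {code:java}
 import org.apache.hadoop.conf.Configuration;

 // Hypothetical product class standing in for the real writer.
 class ExampleWriter {
   ExampleWriter(Configuration conf, int blockSize, String compression) {
     // ... open streams, write headers, etc.
   }
 }

 class ExampleWriterBuilder {
   private final Configuration conf;
   // Defaults are stated exactly once, so callers never repeat them.
   private int blockSize = 64 * 1024;
   private String compression = "none";

   ExampleWriterBuilder(Configuration conf) {
     this.conf = conf;
   }

   // Each setter returns the builder, so calls chain one per line,
   // which keeps merges and cherry-picks conflict-free.
   ExampleWriterBuilder setBlockSize(int blockSize) {
     this.blockSize = blockSize;
     return this;
   }

   ExampleWriterBuilder setCompression(String compression) {
     this.compression = compression;
     return this;
   }

   ExampleWriter build() {
     return new ExampleWriter(conf, blockSize, compression);
   }
 }
 {code}
 Usage then reads like the snippet above, e.g. new 
 ExampleWriterBuilder(conf).setBlockSize(128 * 1024).build();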

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HBASE-5356) region_mover.rb can hang if table region it belongs to is deleted.

2012-02-08 Thread Jonathan Hsieh (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh reassigned HBASE-5356:
-

Assignee: Jonathan Hsieh

 region_mover.rb can hang if table region it belongs to is deleted.
 --

 Key: HBASE-5356
 URL: https://issues.apache.org/jira/browse/HBASE-5356
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.3, 0.94.0, 0.92.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Minor

 I was testing the region_mover.rb script on a loaded hbase and noticed that 
 it can hang (thus hanging graceful shutdown) if a region that it is 
 attempting to move gets deleted (by a table delete operation).
 Here's the start of the relevant stack dump
 {code}
 12/02/08 13:27:13 WARN client.HConnectionManager$HConnectionImplementation: 
 Encountered problems when prefetch META table:
 org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
 table: TestLoadAndVerify_1328735001040, row=TestLoadAnd\
 Verify_1328735001040,yC^P\xD7\x945\xD4,99
 at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:136)
 at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:95)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:64\
 9)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:703\
 )
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:594)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.relocateRegion(HConnectionManager.java:565)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:416)
 at 
 org.apache.hadoop.hbase.client.ServerCallable.instantiateServer(ServerCallable.java:57)
 at 
 org.apache.hadoop.hbase.client.ScannerCallable.instantiateServer(ScannerCallable.java:63)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.\
 java:1018)
 at 
 org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1104)
 at 
 org.apache.hadoop.hbase.client.HTable$ClientScanner.initialize(HTable.java:1027)
 at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:535)
 at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:525)
 at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:380)
 at 
 org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:58)
 at 
 org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:137)
 at 
 usr.lib.hbase.bin.region_mover.method__7$RUBY$isSuccessfulScan(/usr/lib/hbase/bin/region_mover.rb:133)
 at 
 usr$lib$hbase$bin$region_mover#method__7$RUBY$isSuccessfulScan.call(usr$lib$hbase$bin$region_mover#method__7$RUBY$isSucces\
 sfulScan:65535)
 at 
 usr$lib$hbase$bin$region_mover#method__7$RUBY$isSuccessfulScan.call(usr$lib$hbase$bin$region_mover#method__7$RUBY$isSucces\
 sfulScan:65535)
 at 
 org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:171)
 at 
 usr.lib.hbase.bin.region_mover.block_4$RUBY$__for__(/usr/lib/hbase/bin/region_mover.rb:326)
 at 
 usr$lib$hbase$bin$region_mover#block_4$RUBY$__for__.call(usr$lib$hbase$bin$region_mover#block_4$RUBY$__for__:65535)
 at org.jruby.runtime.CompiledBlock.yield(CompiledBlock.java:133)
 at org.jruby.runtime.BlockBody.call(BlockBody.java:73)
 at org.jruby.runtime.Block.call(Block.java:89)
 at org.jruby.RubyProc.call(RubyProc.java:268)
 at org.jruby.RubyProc.call(RubyProc.java:228)
 at org.jruby.RubyProc$i$0$0$call.call(RubyProc$i$0$0$call.gen:65535)
 at 
 org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:209)
 at 
 org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:205)
 at 
 org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:137)
 at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
 at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:103)
 at org.jruby.ast.WhileNode.interpret(WhileNode.java:131)
 at 

[jira] [Created] (HBASE-5359) Alter in the shell can be too quick and return before the table is altered

2012-02-08 Thread Jean-Daniel Cryans (Created) (JIRA)
Alter in the shell can be too quick and return before the table is altered
--

 Key: HBASE-5359
 URL: https://issues.apache.org/jira/browse/HBASE-5359
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
 Fix For: 0.94.0, 0.92.1


This seems to be a recent change in behavior but I'm still not sure where it's 
coming from.

The shell is able to call HMaster.getAlterStatus before the TableEventHandler 
is able to call AM.setRegionsToReopen, so the returned status shows no pending 
regions. It means that the alter seems instantaneous although it's far from 
completed.
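
As a client-side illustration of the API involved, one could poll the alter 
status until no regions remain pending; because of the race described above, an 
immediate zero may be a false "done", so the real fix belongs on the server 
side. A sketch against the 0.92 client (the semantics of the returned Pair -- 
pending regions first, total regions second -- are assumed here):

{code:java}
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

public class WaitForAlter {
  // Polls getAlterStatus until the pending-region count drops to zero.
  public static void waitForAlter(HBaseAdmin admin, String table)
      throws Exception {
    byte[] name = Bytes.toBytes(table);
    Pair<Integer, Integer> status;
    do {
      Thread.sleep(1000);
      status = admin.getAlterStatus(name);
    } while (status.getFirst() > 0);  // first = regions yet to be reopened
  }
}
{code}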

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5357) Use builder pattern in StoreFile, HFile, and HColumnDescriptor instantiation

2012-02-08 Thread Mikhail Bautin (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Bautin updated HBASE-5357:
--

Description: 
We have five ways to create an HFile writer, two ways to create a StoreFile 
writer, and the sets of parameters keep changing, creating a lot of confusion, 
especially when porting patches across branches. The same thing is happening to 
HColumnDescriptor. I think we should move to a builder pattern solution, e.g.

{code:java}
  HFileWriter w = HFile.getWriterBuilder(conf, some common args)
  .setParameter1(value1)
  .setParameter2(value2)
  ...
  .build();
{code}

Each parameter setter being on its own line will make merges/cherry-pick work 
properly, we will not have to even mention default parameters again, and we can 
eliminate a dozen impossible-to-remember constructors.


  was:
We have five ways to create an HFile writer, two ways to create a StoreFile 
writer, and the sets of parameters keep changing, creating a lot of confusion, 
especially when porting patches across branches. The same thing is happening to 
HColumnDescriptor. I think we should move to a builder pattern solution, e.g.

{code:java}
  HFileWriter w = HFile.getWriterBuilder(conf, some common args)
  .setParameter1(value1)
  .setParameter2(value2)
  ...
  .instantiate();
{code}

Each parameter setter being on the same line will make merges/cherry-pick work 
properly, we will not have to even mention default parameters again, and we can 
eliminate a dozen impossible-to-remember constructors.



 Use builder pattern in StoreFile, HFile, and HColumnDescriptor instantiation
 

 Key: HBASE-5357
 URL: https://issues.apache.org/jira/browse/HBASE-5357
 Project: HBase
  Issue Type: Improvement
Reporter: Mikhail Bautin
Assignee: Mikhail Bautin

 We have five ways to create an HFile writer, two ways to create a StoreFile 
 writer, and the sets of parameters keep changing, creating a lot of 
 confusion, especially when porting patches across branches. The same thing is 
 happening to HColumnDescriptor. I think we should move to a builder pattern 
 solution, e.g.
 {code:java}
   HFileWriter w = HFile.getWriterBuilder(conf, some common args)
   .setParameter1(value1)
   .setParameter2(value2)
   ...
   .build();
 {code}
 Each parameter setter being on its own line will make merges/cherry-pick work 
 properly, we will not have to even mention default parameters again, and we 
 can eliminate a dozen impossible-to-remember constructors.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5353) HA/Distributed HMaster via RegionServers

2012-02-08 Thread stack (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204113#comment-13204113
 ] 

stack commented on HBASE-5353:
--

bq. Except it opens a new can of worms: where do you find the master UI? how do 
you monitor your master if it moves around? how do you easily find the master 
logs when it could be anywhere in the cluster?

It's not a new can of worms, right?  We have the above (mostly unsolved) 
problems now if you run with more than one master.

bq. And any cron jobs or nagios alerts you write need to first call some HBase 
utility to find the active master's IP via ZK in order to get to it?

They should be doing this now, if multiple masters?

If the master function were lightweight enough, it'd be kinda sweet having one 
daemon type only I'd think; there'd no longer be a need for special treatment 
of master.  Might be tricky having them running in the same JVM what w/ all the 
executors afloat and RPCs (I'd rather do all in the one JVM than have RS 
start/stop separate Master processes if we were going to go this route).

 HA/Distributed HMaster via RegionServers
 

 Key: HBASE-5353
 URL: https://issues.apache.org/jira/browse/HBASE-5353
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver
Affects Versions: 0.94.0
Reporter: Jesse Yates
Priority: Minor

 Currently, the HMaster node must be considered a 'special' node (single point 
 of failure), meaning that the node must be protected more than the other 
 commodity machines. It should be possible to instead have the HMaster be much 
 more available, either in a distributed sense (meaning a bit rewrite) or with 
 multiple instances and automatic failover. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5356) region_mover.rb can hang if table region it belongs to is deleted.

2012-02-08 Thread Jonathan Hsieh (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204116#comment-13204116
 ] 

Jonathan Hsieh commented on HBASE-5356:
---

Related issue -- if you create a new table and region mover had an old list, 
new regions get assigned to the region server we are trying to gracefully 
empty.



 region_mover.rb can hang if table region it belongs to is deleted.
 --

 Key: HBASE-5356
 URL: https://issues.apache.org/jira/browse/HBASE-5356
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.3, 0.94.0, 0.92.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Minor

 I was testing the region_mover.rb script on a loaded hbase and noticed that 
 it can hang (thus hanging graceful shutdown) if a region that it is 
 attempting to move gets deleted (by a table delete operation).
 Here's the start of the relevant stack dump
 {code}
 12/02/08 13:27:13 WARN client.HConnectionManager$HConnectionImplementation: 
 Encountered problems when prefetch META table:
 org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
 table: TestLoadAndVerify_1328735001040, row=TestLoadAnd\
 Verify_1328735001040,yC^P\xD7\x945\xD4,99
 at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:136)
 at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:95)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:64\
 9)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:703\
 )
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:594)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.relocateRegion(HConnectionManager.java:565)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:416)
 at 
 org.apache.hadoop.hbase.client.ServerCallable.instantiateServer(ServerCallable.java:57)
 at 
 org.apache.hadoop.hbase.client.ScannerCallable.instantiateServer(ScannerCallable.java:63)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.\
 java:1018)
 at 
 org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1104)
 at 
 org.apache.hadoop.hbase.client.HTable$ClientScanner.initialize(HTable.java:1027)
 at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:535)
 at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:525)
 at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:380)
 at 
 org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:58)
 at 
 org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:137)
 at 
 usr.lib.hbase.bin.region_mover.method__7$RUBY$isSuccessfulScan(/usr/lib/hbase/bin/region_mover.rb:133)
 at 
 usr$lib$hbase$bin$region_mover#method__7$RUBY$isSuccessfulScan.call(usr$lib$hbase$bin$region_mover#method__7$RUBY$isSucces\
 sfulScan:65535)
 at 
 usr$lib$hbase$bin$region_mover#method__7$RUBY$isSuccessfulScan.call(usr$lib$hbase$bin$region_mover#method__7$RUBY$isSucces\
 sfulScan:65535)
 at 
 org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:171)
 at 
 usr.lib.hbase.bin.region_mover.block_4$RUBY$__for__(/usr/lib/hbase/bin/region_mover.rb:326)
 at 
 usr$lib$hbase$bin$region_mover#block_4$RUBY$__for__.call(usr$lib$hbase$bin$region_mover#block_4$RUBY$__for__:65535)
 at org.jruby.runtime.CompiledBlock.yield(CompiledBlock.java:133)
 at org.jruby.runtime.BlockBody.call(BlockBody.java:73)
 at org.jruby.runtime.Block.call(Block.java:89)
 at org.jruby.RubyProc.call(RubyProc.java:268)
 at org.jruby.RubyProc.call(RubyProc.java:228)
 at org.jruby.RubyProc$i$0$0$call.call(RubyProc$i$0$0$call.gen:65535)
 at 
 org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:209)
 at 
 org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:205)
 at 
 org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:137)
 at 

[jira] [Commented] (HBASE-5353) HA/Distributed HMaster via RegionServers

2012-02-08 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204123#comment-13204123
 ] 

Todd Lipcon commented on HBASE-5353:


bq.  We have the above (mostly unsolved) problems now if you run with more than 
one master.
bq. They should be doing this now, if multiple masters?

Sort of - except when you have two masters, you just set up nagios alerts and 
metrics to point to both, and you only need to look in two places if you have 
an issue. If you have no idea where the master is, you have to hunt around the 
cluster to find it.

bq. If the master function were lightweight enough, it'd be kinda sweet having 
one daemon type only I'd think
Except we'd still have multiple daemon types, logically, it's just that they'd 
be collocated inside the same process, making logs harder to de-interleave, etc.


Plus, if your RS are all collocated with TTs and heavily loaded, then I 
wouldn't want to see the master running on one of them. I'd rather just tell 
ops these nodes run the important master daemons, please monitor them and any 
high utilization is problematic.

 HA/Distributed HMaster via RegionServers
 

 Key: HBASE-5353
 URL: https://issues.apache.org/jira/browse/HBASE-5353
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver
Affects Versions: 0.94.0
Reporter: Jesse Yates
Priority: Minor

 Currently, the HMaster node must be considered a 'special' node (single point 
 of failure), meaning that the node must be protected more than the other 
 commodity machines. It should be possible to instead have the HMaster be much 
 more available, either in a distributed sense (meaning a bit rewrite) or with 
 multiple instances and automatic failover. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5074) support checksums in HBase block cache

2012-02-08 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204134#comment-13204134
 ] 

Phabricator commented on HBASE-5074:


tedyu has commented on the revision [jira] [HBASE-5074] Support checksums in 
HBase block cache.

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1588-1590 It 
would be nice to make this part of the logic (re-enabling HBase checksumming) 
pluggable.
  Can be done in a follow-on JIRA.
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1600 Assertion 
may be disabled in production.
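
  For reference, one race-free shape for that re-enable countdown, which would 
also address the earlier thread-safety comment on these same lines (the class, 
field, and method names here are assumptions, not the patch's code):

  import java.util.concurrent.atomic.AtomicInteger;

  class ChecksumFailureBackoff {
    // Number of subsequent reads that should bypass HBase-level checksums.
    private final AtomicInteger remainingBypassReads = new AtomicInteger(0);

    void onChecksumFailure(int bypassReads) {
      remainingBypassReads.set(bypassReads);
    }

    boolean useHBaseChecksum() {
      while (true) {
        int current = remainingBypassReads.get();
        if (current <= 0) {
          return true;   // window over: verify checksums in HBase again
        }
        // compareAndSet prevents two threads from both decrementing past
        // zero, the bug a plain int counter would have.
        if (remainingBypassReads.compareAndSet(current, current - 1)) {
          return false;  // still inside the bypass window
        }
      }
    }
  }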

REVISION DETAIL
  https://reviews.facebook.net/D1521


 support checksums in HBase block cache
 --

 Key: HBASE-5074
 URL: https://issues.apache.org/jira/browse/HBASE-5074
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: dhruba borthakur
Assignee: dhruba borthakur
 Attachments: D1521.1.patch, D1521.1.patch, D1521.2.patch, 
 D1521.2.patch, D1521.3.patch, D1521.3.patch, D1521.4.patch, D1521.4.patch, 
 D1521.5.patch, D1521.5.patch


 The current implementation of HDFS stores the data in one block file and the 
 metadata(checksum) in another block file. This means that every read into the 
 HBase block cache actually consumes two disk iops, one to the datafile and 
 one to the checksum file. This is a major problem for scaling HBase, because 
 HBase is usually bottlenecked on the number of random disk iops that the 
 storage-hardware offers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5356) region_mover.rb can hang if table region it belongs to is deleted.

2012-02-08 Thread Jonathan Hsieh (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204137#comment-13204137
 ] 

Jonathan Hsieh commented on HBASE-5356:
---

Rephrase of previous comment: The region_mover script initially caches a list 
of regions to move. If the table is deleted after the cached list is gathered 
but before all regions are moved, the script can get stuck attempting to move a 
deleted region.

In a related but likely separate issue -- if a new presplit table is created 
as the region_mover is emptying an RS, the emptying RS is a candidate for new 
regions and will get some of the new regions.  Ideally the RS would be fenced 
off so that it does not get regions assigned to it, but this would require some 
ZK.  This seems less important because the majority of regions will be moved 
off the RS and the few new regions can rely on automatic failover to other 
RSs.
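
One way the scan check could sidestep the hang (sketched in Java terms, since 
region_mover.rb drives these same client classes through JRuby; everything 
other than HBaseAdmin.tableExists and TableNotFoundException is hypothetical):

{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.TableNotFoundException;
import org.apache.hadoop.hbase.client.HBaseAdmin;

class RegionMoveGuard {
  // Stand-in for the script's isSuccessfulScan check.
  static boolean isSuccessfulScan(String table, byte[] startKey)
      throws IOException {
    return true;
  }

  static boolean regionStillNeedsMove(HBaseAdmin admin, String table,
                                      byte[] startKey) throws IOException {
    if (!admin.tableExists(table)) {
      return false;  // table deleted after the region list was cached
    }
    try {
      return !isSuccessfulScan(table, startKey);
    } catch (TableNotFoundException tnfe) {
      return false;  // raced with a delete between the check and the scan
    }
  }
}
{code}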



 region_mover.rb can hang if table region it belongs to is deleted.
 --

 Key: HBASE-5356
 URL: https://issues.apache.org/jira/browse/HBASE-5356
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.3, 0.94.0, 0.92.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Minor

 I was testing the region_mover.rb script on a loaded hbase and noticed that 
 it can hang (thus hanging graceful shutdown) if a region that it is 
 attempting to move gets deleted (by a table delete operation).
 Here's the start of the relevant stack dump
 {code}
 12/02/08 13:27:13 WARN client.HConnectionManager$HConnectionImplementation: 
 Encountered problems when prefetch META table:
 org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
 table: TestLoadAndVerify_1328735001040, row=TestLoadAnd\
 Verify_1328735001040,yC^P\xD7\x945\xD4,99
 at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:136)
 at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:95)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:64\
 9)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:703\
 )
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:594)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.relocateRegion(HConnectionManager.java:565)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:416)
 at 
 org.apache.hadoop.hbase.client.ServerCallable.instantiateServer(ServerCallable.java:57)
 at 
 org.apache.hadoop.hbase.client.ScannerCallable.instantiateServer(ScannerCallable.java:63)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.\
 java:1018)
 at 
 org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1104)
 at 
 org.apache.hadoop.hbase.client.HTable$ClientScanner.initialize(HTable.java:1027)
 at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:535)
 at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:525)
 at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:380)
 at 
 org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:58)
 at 
 org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:137)
 at 
 usr.lib.hbase.bin.region_mover.method__7$RUBY$isSuccessfulScan(/usr/lib/hbase/bin/region_mover.rb:133)
 at 
 usr$lib$hbase$bin$region_mover#method__7$RUBY$isSuccessfulScan.call(usr$lib$hbase$bin$region_mover#method__7$RUBY$isSucces\
 sfulScan:65535)
 at 
 usr$lib$hbase$bin$region_mover#method__7$RUBY$isSuccessfulScan.call(usr$lib$hbase$bin$region_mover#method__7$RUBY$isSucces\
 sfulScan:65535)
 at 
 org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:171)
 at 
 usr.lib.hbase.bin.region_mover.block_4$RUBY$__for__(/usr/lib/hbase/bin/region_mover.rb:326)
 at 
 usr$lib$hbase$bin$region_mover#block_4$RUBY$__for__.call(usr$lib$hbase$bin$region_mover#block_4$RUBY$__for__:65535)
 at org.jruby.runtime.CompiledBlock.yield(CompiledBlock.java:133)
 at 

[jira] [Created] (HBASE-5360) [uberhbck] Add options for how to handle offline split parents.

2012-02-08 Thread Jonathan Hsieh (Created) (JIRA)
[uberhbck] Add options for how to handle offline split parents. 


 Key: HBASE-5360
 URL: https://issues.apache.org/jira/browse/HBASE-5360
 Project: HBase
  Issue Type: Improvement
  Components: hbck
Affects Versions: 0.94.0, 0.90.7, 0.92.1
Reporter: Jonathan Hsieh


In a recent case, we attempted to repair a cluster that suffered from 
HBASE-4238 and had about 6-7 generations of leftover split data.  The hbck 
repair options in a development version of HBASE-5128 treat HDFS as ground 
truth but didn't check the SPLIT and OFFLINE flags only found in meta.  The net 
effect was that it essentially attempted to merge many regions back into the 
range of their eldest-generation parent.

More safeguards to prevent mega-merges are being added in HBASE-5128.

This issue would automate the handling that avoids such mega-merges, covering 
cases such as lingering grandparents.  The strategy here would be to add more 
checks against .META., and perform part of the catalog janitor's 
responsibilities for lingering grandparents.  This would potentially include 
options to sideline regions, delete grandparent regions, set a minimum size for 
sidelining, and mechanisms for cleaning .META..

Note: there already exists a mechanism to reload these regions -- the bulk 
load mechanism in LoadIncrementalHFiles can be used to re-add grandparents 
(automatically splitting them if necessary) to HBase.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HBASE-5327) Print a message when an invalid hbase.rootdir is passed

2012-02-08 Thread Jimmy Xiang (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang reassigned HBASE-5327:
--

Assignee: Jimmy Xiang

 Print a message when an invalid hbase.rootdir is passed
 ---

 Key: HBASE-5327
 URL: https://issues.apache.org/jira/browse/HBASE-5327
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.5
Reporter: Jean-Daniel Cryans
Assignee: Jimmy Xiang
 Fix For: 0.94.0, 0.90.7, 0.92.1

 Attachments: hbase-5327.txt


 As seen on the mailing list: 
 http://comments.gmane.org/gmane.comp.java.hadoop.hbase.user/24124
 If hbase.rootdir doesn't specify a folder on hdfs we crash while opening a 
 path to .oldlogs:
 {noformat}
 2012-02-02 23:07:26,292 FATAL org.apache.hadoop.hbase.master.HMaster: 
 Unhandled exception. Starting shutdown.
 java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
 path in absolute URI: hdfs://sv4r11s38:9100.oldlogs
 at org.apache.hadoop.fs.Path.initialize(Path.java:148)
 at org.apache.hadoop.fs.Path.<init>(Path.java:71)
 at org.apache.hadoop.fs.Path.<init>(Path.java:50)
 at 
 org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:112)
 at 
 org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:448)
 at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:326)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
 hdfs://sv4r11s38:9100.oldlogs
 at java.net.URI.checkPath(URI.java:1787)
 at java.net.URI.<init>(URI.java:735)
 at org.apache.hadoop.fs.Path.initialize(Path.java:145)
 ... 6 more
 {noformat}
 It could also crash anywhere else, this just happens to be the first place we 
 use hbase.rootdir. We need to verify that it's an actual folder.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5327) Print a message when an invalid hbase.rootdir is passed

2012-02-08 Thread Jimmy Xiang (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-5327:
---

Attachment: hbase-5327.txt

 Print a message when an invalid hbase.rootdir is passed
 ---

 Key: HBASE-5327
 URL: https://issues.apache.org/jira/browse/HBASE-5327
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.5
Reporter: Jean-Daniel Cryans
Assignee: Jimmy Xiang
 Fix For: 0.94.0, 0.90.7, 0.92.1

 Attachments: hbase-5327.txt


 As seen on the mailing list: 
 http://comments.gmane.org/gmane.comp.java.hadoop.hbase.user/24124
 If hbase.rootdir doesn't specify a folder on hdfs we crash while opening a 
 path to .oldlogs:
 {noformat}
 2012-02-02 23:07:26,292 FATAL org.apache.hadoop.hbase.master.HMaster: 
 Unhandled exception. Starting shutdown.
 java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
 path in absolute URI: hdfs://sv4r11s38:9100.oldlogs
 at org.apache.hadoop.fs.Path.initialize(Path.java:148)
 at org.apache.hadoop.fs.Path.<init>(Path.java:71)
 at org.apache.hadoop.fs.Path.<init>(Path.java:50)
 at 
 org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:112)
 at 
 org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:448)
 at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:326)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
 hdfs://sv4r11s38:9100.oldlogs
 at java.net.URI.checkPath(URI.java:1787)
 at java.net.URI.<init>(URI.java:735)
 at org.apache.hadoop.fs.Path.initialize(Path.java:145)
 ... 6 more
 {noformat}
 It could also crash anywhere else, this just happens to be the first place we 
 use hbase.rootdir. We need to verify that it's an actual folder.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5327) Print a message when an invalid hbase.rootdir is passed

2012-02-08 Thread Jimmy Xiang (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-5327:
---

Status: Patch Available  (was: Open)

I tested the patch: it detects the error mentioned in the description and 
aborts the master with an error message saying it is not a valid hdfs file 
path.
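
For illustration, the kind of early validation involved might look like this 
(a sketch, not the attached patch; the class name, method name, and message 
text are assumed):

{code:java}
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class RootDirCheck {
  // Fail fast with a clear message instead of crashing later on a derived
  // path like "hdfs://sv4r11s38:9100.oldlogs".
  public static Path validateRootDir(Configuration conf) throws IOException {
    String rootdir = conf.get("hbase.rootdir");
    try {
      URI uri = new URI(rootdir);
      if (uri.getPath() == null || uri.getPath().length() == 0) {
        // e.g. "hdfs://sv4r11s38:9100" has no directory component at all
        throw new IOException("hbase.rootdir '" + rootdir
            + "' does not name a directory; expected something like"
            + " hdfs://namenode:9100/hbase");
      }
      return new Path(uri);
    } catch (URISyntaxException e) {
      throw new IOException("Invalid hbase.rootdir '" + rootdir + "'", e);
    }
  }
}
{code}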

 Print a message when an invalid hbase.rootdir is passed
 ---

 Key: HBASE-5327
 URL: https://issues.apache.org/jira/browse/HBASE-5327
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.5
Reporter: Jean-Daniel Cryans
Assignee: Jimmy Xiang
 Fix For: 0.94.0, 0.90.7, 0.92.1

 Attachments: hbase-5327.txt


 As seen on the mailing list: 
 http://comments.gmane.org/gmane.comp.java.hadoop.hbase.user/24124
 If hbase.rootdir doesn't specify a folder on hdfs we crash while opening a 
 path to .oldlogs:
 {noformat}
 2012-02-02 23:07:26,292 FATAL org.apache.hadoop.hbase.master.HMaster: 
 Unhandled exception. Starting shutdown.
 java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
 path in absolute URI: hdfs://sv4r11s38:9100.oldlogs
 at org.apache.hadoop.fs.Path.initialize(Path.java:148)
 at org.apache.hadoop.fs.Path.<init>(Path.java:71)
 at org.apache.hadoop.fs.Path.<init>(Path.java:50)
 at 
 org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:112)
 at 
 org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:448)
 at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:326)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
 hdfs://sv4r11s38:9100.oldlogs
 at java.net.URI.checkPath(URI.java:1787)
 at java.net.URI.<init>(URI.java:735)
 at org.apache.hadoop.fs.Path.initialize(Path.java:145)
 ... 6 more
 {noformat}
 It could also crash anywhere else, this just happens to be the first place we 
 use hbase.rootdir. We need to verify that it's an actual folder.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5358) HBaseObjectWritable should be able to serialize generic arrays not defined previously

2012-02-08 Thread jirapos...@reviews.apache.org (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204181#comment-13204181
 ] 

jirapos...@reviews.apache.org commented on HBASE-5358:
--


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/3811/
---

Review request for hbase.


Summary
---

HBaseObjectWritable can encode Writable[]'s, but cannot encode A[] where A 
extends Writable. This becomes an issue, for example, when adding a coprocessor 
method which takes A[] (see HBASE-5352).


This addresses bug HBASE-5358.
https://issues.apache.org/jira/browse/HBASE-5358


Diffs
-

  src/test/java/org/apache/hadoop/hbase/io/TestHbaseObjectWritable.java 78513ce 
  src/main/java/org/apache/hadoop/hbase/io/HbaseObjectWritable.java 260f982 

Diff: https://reviews.apache.org/r/3811/diff


Testing
---


Thanks,

enis



 HBaseObjectWritable should be able to serialize generic arrays not defined 
 previously
 -

 Key: HBASE-5358
 URL: https://issues.apache.org/jira/browse/HBASE-5358
 Project: HBase
  Issue Type: Improvement
  Components: coprocessors, io
Reporter: Enis Soztutar
Assignee: Enis Soztutar

 HBaseObjectWritable can encode Writable[]'s, but cannot encode A[] where 
 A extends Writable. This becomes an issue, for example, when adding a 
 coprocessor method which takes A[] (see HBASE-5352). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5327) Print a message when an invalid hbase.rootdir is passed

2012-02-08 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204185#comment-13204185
 ] 

Hadoop QA commented on HBASE-5327:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12513890/hbase-5327.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 javadoc.  The javadoc tool appears to have generated -136 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 156 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/926//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/926//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/926//console

This message is automatically generated.

 Print a message when an invalid hbase.rootdir is passed
 ---

 Key: HBASE-5327
 URL: https://issues.apache.org/jira/browse/HBASE-5327
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.5
Reporter: Jean-Daniel Cryans
Assignee: Jimmy Xiang
 Fix For: 0.94.0, 0.90.7, 0.92.1

 Attachments: hbase-5327.txt


 As seen on the mailing list: 
 http://comments.gmane.org/gmane.comp.java.hadoop.hbase.user/24124
 If hbase.rootdir doesn't specify a folder on hdfs we crash while opening a 
 path to .oldlogs:
 {noformat}
 2012-02-02 23:07:26,292 FATAL org.apache.hadoop.hbase.master.HMaster: 
 Unhandled exception. Starting shutdown.
 java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
 path in absolute URI: hdfs://sv4r11s38:9100.oldlogs
 at org.apache.hadoop.fs.Path.initialize(Path.java:148)
 at org.apache.hadoop.fs.Path.<init>(Path.java:71)
 at org.apache.hadoop.fs.Path.<init>(Path.java:50)
 at 
 org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:112)
 at 
 org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:448)
 at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:326)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
 hdfs://sv4r11s38:9100.oldlogs
 at java.net.URI.checkPath(URI.java:1787)
 at java.net.URI.<init>(URI.java:735)
 at org.apache.hadoop.fs.Path.initialize(Path.java:145)
 ... 6 more
 {noformat}
 It could also crash anywhere else, this just happens to be the first place we 
 use hbase.rootdir. We need to verify that it's an actual folder.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



