[jira] [Updated] (HBASE-5507) ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not use ByteBuffer correctly

2012-03-05 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HBASE-5507:
---

Attachment: HBASE-5507.D2073.3.patch

sc updated the revision HBASE-5507 [jira] 
ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not 
use ByteBuffer correctly.
Reviewers: tedyu, dhruba, JIRA

  Fix to pass TestThriftServer

REVISION DETAIL
  https://reviews.facebook.net/D2073

AFFECTED FILES
  src/main/java/org/apache/hadoop/hbase/regionserver/HRegionThriftServer.java
  src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
  src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java
  src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServerCmdLine.java


 ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not 
 use ByteBuffer correctly
 -

 Key: HBASE-5507
 URL: https://issues.apache.org/jira/browse/HBASE-5507
 Project: HBase
  Issue Type: Bug
  Components: thrift
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HBASE-5507.D2073.1.patch, HBASE-5507.D2073.2.patch, 
 HBASE-5507.D2073.3.patch


 We observed that, with the framed transport option, the Thrift call 
 ThriftServerRunner.HbaseHandler.getRegionInfo() receives a corrupted parameter 
 (some garbage string attached to the beginning). This may be a Thrift bug and 
 requires further investigation.
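
A garbage prefix like the one described is the classic symptom of reading a 
ByteBuffer through its backing array without honoring position(). The sketch 
below is self-contained and illustrative only (class and method names are 
invented; this is not the patch's code):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ByteBufferCopy {
    // Safe: copy exactly the readable bytes between position() and limit().
    static byte[] toBytes(ByteBuffer bb) {
        byte[] out = new byte[bb.remaining()];
        bb.duplicate().get(out);  // duplicate() leaves the caller's position untouched
        return out;
    }

    // Buggy pattern: array() exposes the whole backing array, including any
    // bytes before position() -- producing a garbage prefix like the one
    // reported for getRegionInfo() under the framed transport.
    static byte[] toBytesBuggy(ByteBuffer bb) {
        return bb.array();
    }

    public static void main(String[] args) {
        byte[] frame = "HDR:tableName".getBytes(StandardCharsets.UTF_8);
        ByteBuffer param = ByteBuffer.wrap(frame);
        param.position(4);  // the parameter starts mid-array, after framing bytes
        System.out.println(new String(toBytes(param), StandardCharsets.UTF_8));      // tableName
        System.out.println(new String(toBytesBuggy(param), StandardCharsets.UTF_8)); // HDR:tableName
    }
}
```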

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5270) Handle potential data loss due to concurrent processing of processFaileOver and ServerShutdownHandler

2012-03-05 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5270:
--

Status: Patch Available  (was: Open)

 Handle potential data loss due to concurrent processing of processFaileOver 
 and ServerShutdownHandler
 -

 Key: HBASE-5270
 URL: https://issues.apache.org/jira/browse/HBASE-5270
 Project: HBase
  Issue Type: Sub-task
  Components: master
Reporter: Zhihong Yu
Assignee: chunhui shen
 Fix For: 0.92.2

 Attachments: 5270-90-testcase.patch, 5270-90-testcasev2.patch, 
 5270-90.patch, 5270-90v2.patch, 5270-90v3.patch, 5270-testcase.patch, 
 5270-testcasev2.patch, hbase-5270.patch, hbase-5270v10.patch, 
 hbase-5270v2.patch, hbase-5270v4.patch, hbase-5270v5.patch, 
 hbase-5270v6.patch, hbase-5270v7.patch, hbase-5270v8.patch, 
 hbase-5270v9.patch, sampletest.txt


 This JIRA continues the effort from HBASE-5179. Starting with Stack's 
 comments about patches for 0.92 and TRUNK:
 Reviewing 0.92v17
 isDeadServerInProgress is a new public method in ServerManager but it does 
 not seem to be used anywhere.
 Does isDeadRootServerInProgress need to be public? Ditto for meta version.
 This method param names are not right 'definitiveRootServer'; what is meant 
 by definitive? Do they need this qualifier?
 Is there anything in place to stop us expiring a server twice if its carrying 
 root and meta?
 What is difference between asking assignment manager isCarryingRoot and this 
 variable that is passed in? Should be doc'd at least. Ditto for meta.
 I think I've asked for this a few times - onlineServers needs to be 
 explained... either in javadoc or in comment. This is the param passed into 
 joinCluster. How does it arise? I think I know but am unsure. God love the 
 poor noob that comes awandering this code trying to make sense of it all.
 It looks like we get the list by trawling zk for regionserver znodes that 
 have not checked in. Don't we do this operation earlier in master setup? Are 
 we doing it again here?
 Though distributed split log is configured, we will do in master single 
 process splitting under some conditions with this patch. Its not explained in 
 code why we would do this. Why do we think master log splitting 'high 
 priority' when it could very well be slower. Should we only go this route if 
 distributed splitting is not going on. Do we know if concurrent distributed 
 log splitting and master splitting works?
 Why would we have dead servers in progress here in master startup? Because a 
 servershutdownhandler fired?
 This patch is different to the patch for 0.90. Should go into trunk first 
 with tests, then 0.92. Should it be in this issue? This issue is really hard 
 to follow now. Maybe this issue is for 0.90.x and new issue for more work on 
 this trunk patch?
 This patch needs to have the v18 differences applied.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5507) ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not use ByteBuffer correctly

2012-03-05 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1332#comment-1332
 ] 

Hadoop QA commented on HBASE-5507:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12517053/HBASE-5507.D2073.3.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated -129 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 154 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.mapreduce.TestImportTsv
  org.apache.hadoop.hbase.mapred.TestTableMapReduce
  org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1098//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1098//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1098//console

This message is automatically generated.

 ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not 
 use ByteBuffer correctly
 -

 Key: HBASE-5507
 URL: https://issues.apache.org/jira/browse/HBASE-5507
 Project: HBase
  Issue Type: Bug
  Components: thrift
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HBASE-5507.D2073.1.patch, HBASE-5507.D2073.2.patch, 
 HBASE-5507.D2073.3.patch


 We observed that, with the framed transport option, the Thrift call 
 ThriftServerRunner.HbaseHandler.getRegionInfo() receives a corrupted parameter 
 (some garbage string attached to the beginning). This may be a Thrift bug and 
 requires further investigation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5514) Compile against hadoop 0.24-SNAPSHOT

2012-03-05 Thread Mingjie Lai (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingjie Lai updated HBASE-5514:
---

Attachment: HBASE-5514-4.patch

Per Ted's request: removed the redundant null check. 

 Can we extract the new code into a helper class or method in 
 org.apache.hadoop.hbase.regionserver.wal package ?

It only occurs for 2 test cases, and the code duplication is quite small. Do 
you really think we should have one method to cover them? 

 Compile against hadoop 0.24-SNAPSHOT
 

 Key: HBASE-5514
 URL: https://issues.apache.org/jira/browse/HBASE-5514
 Project: HBase
  Issue Type: Bug
  Components: build, test
Affects Versions: 0.92.0, 0.94.0
Reporter: Mingjie Lai
Assignee: Mingjie Lai
Priority: Minor
 Fix For: 0.94.0

 Attachments: HBASE-5514-2.patch, HBASE-5514-3.patch, 
 HBASE-5514-4.patch, HBASE-5514.patch


 Need to compile hbase against the latest hadoop trunk which just had NN HA 
 merged in. 
 1) add a hadoop 0.24 profile
 2) HBASE-5480
 3) HADOOP-8124 removed the deprecated Syncable.sync(). This causes compile 
 errors for hbase against hadoop trunk (0.24). TestHLogSplit and TestHLog still 
 call the deprecated sync(); it needs to be replaced with hflush() so the 
 compilation can pass. 
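
The mechanical shape of that replacement can be sketched with stand-in types; 
the real interfaces are org.apache.hadoop.fs.Syncable and HLog's writer, which 
are not reproduced here (names below are illustrative only):

```java
// Minimal stand-in for Hadoop's Syncable after HADOOP-8124: hflush() survives,
// the deprecated sync() does not.
interface Syncable {
    void hflush();
}

public class SyncMigration {
    static class MockLogWriter implements Syncable {
        int flushes = 0;
        @Override public void hflush() { flushes++; }
    }

    // Before (no longer compiles against hadoop trunk/0.24): writer.sync();
    // After:
    static void flushLog(Syncable writer) {
        writer.hflush();
    }

    public static void main(String[] args) {
        MockLogWriter w = new MockLogWriter();
        flushLog(w);
        System.out.println(w.flushes);  // 1
    }
}
```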

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5270) Handle potential data loss due to concurrent processing of processFaileOver and ServerShutdownHandler

2012-03-05 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1352#comment-1352
 ] 

Hadoop QA commented on HBASE-5270:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12517050/hbase-5270v10.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 9 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated -129 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 155 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1100//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1100//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1100//console

This message is automatically generated.

 Handle potential data loss due to concurrent processing of processFaileOver 
 and ServerShutdownHandler
 -

 Key: HBASE-5270
 URL: https://issues.apache.org/jira/browse/HBASE-5270
 Project: HBase
  Issue Type: Sub-task
  Components: master
Reporter: Zhihong Yu
Assignee: chunhui shen
 Fix For: 0.92.2

 Attachments: 5270-90-testcase.patch, 5270-90-testcasev2.patch, 
 5270-90.patch, 5270-90v2.patch, 5270-90v3.patch, 5270-testcase.patch, 
 5270-testcasev2.patch, hbase-5270.patch, hbase-5270v10.patch, 
 hbase-5270v2.patch, hbase-5270v4.patch, hbase-5270v5.patch, 
 hbase-5270v6.patch, hbase-5270v7.patch, hbase-5270v8.patch, 
 hbase-5270v9.patch, sampletest.txt


 This JIRA continues the effort from HBASE-5179. Starting with Stack's 
 comments about patches for 0.92 and TRUNK:
 Reviewing 0.92v17
 isDeadServerInProgress is a new public method in ServerManager but it does 
 not seem to be used anywhere.
 Does isDeadRootServerInProgress need to be public? Ditto for meta version.
 This method param names are not right 'definitiveRootServer'; what is meant 
 by definitive? Do they need this qualifier?
 Is there anything in place to stop us expiring a server twice if its carrying 
 root and meta?
 What is difference between asking assignment manager isCarryingRoot and this 
 variable that is passed in? Should be doc'd at least. Ditto for meta.
 I think I've asked for this a few times - onlineServers needs to be 
 explained... either in javadoc or in comment. This is the param passed into 
 joinCluster. How does it arise? I think I know but am unsure. God love the 
 poor noob that comes awandering this code trying to make sense of it all.
 It looks like we get the list by trawling zk for regionserver znodes that 
 have not checked in. Don't we do this operation earlier in master setup? Are 
 we doing it again here?
 Though distributed split log is configured, we will do in master single 
 process splitting under some conditions with this patch. Its not explained in 
 code why we would do this. Why do we think master log splitting 'high 
 priority' when it could very well be slower. Should we only go this route if 
 distributed splitting is not going on. Do we know if concurrent distributed 
 log splitting and master splitting works?
 Why would we have dead servers in progress here in master startup? Because a 
 servershutdownhandler fired?
 This patch is different to the patch for 0.90. Should go into trunk first 
 with tests, then 0.92. Should it be in this issue? This issue is really hard 
 to follow now. Maybe this issue is for 0.90.x and new issue for more work on 
 this trunk patch?
 This patch needs to have the v18 differences applied.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5514) Compile against hadoop 0.24-SNAPSHOT

2012-03-05 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1353#comment-1353
 ] 

Hadoop QA commented on HBASE-5514:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12517055/HBASE-5514-4.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 8 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated -129 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 154 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.master.TestSplitLogManager
  org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat
  org.apache.hadoop.hbase.mapred.TestTableMapReduce
  org.apache.hadoop.hbase.mapreduce.TestImportTsv

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1099//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1099//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1099//console

This message is automatically generated.

 Compile against hadoop 0.24-SNAPSHOT
 

 Key: HBASE-5514
 URL: https://issues.apache.org/jira/browse/HBASE-5514
 Project: HBase
  Issue Type: Bug
  Components: build, test
Affects Versions: 0.92.0, 0.94.0
Reporter: Mingjie Lai
Assignee: Mingjie Lai
Priority: Minor
 Fix For: 0.94.0

 Attachments: HBASE-5514-2.patch, HBASE-5514-3.patch, 
 HBASE-5514-4.patch, HBASE-5514.patch


 Need to compile hbase against the latest hadoop trunk which just had NN HA 
 merged in. 
 1) add a hadoop 0.24 profile
 2) HBASE-5480
 3) HADOOP-8124 removed the deprecated Syncable.sync(). This causes compile 
 errors for hbase against hadoop trunk (0.24). TestHLogSplit and TestHLog still 
 call the deprecated sync(); it needs to be replaced with hflush() so the 
 compilation can pass. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5520) Support seek() reseek() at RegionScanner

2012-03-05 Thread Anoop Sam John (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-5520:
--

Description: 
seek() reseek() is not supported currently at the RegionScanner level. We can 
support the same.
This is created following the discussion under HBASE-2038

  was:
seek() reseek() is not supported currently at the RegionScanner level. We can 
support the same.
This is created following the discussion under HBASE_2038


 Support seek() reseek() at RegionScanner
 

 Key: HBASE-5520
 URL: https://issues.apache.org/jira/browse/HBASE-5520
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.92.0
Reporter: Anoop Sam John

 seek() reseek() is not supported currently at the RegionScanner level. We can 
 support the same.
 This is created following the discussion under HBASE-2038

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-2038) Coprocessors: Region level indexing

2012-03-05 Thread Anoop Sam John (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13222340#comment-13222340
 ] 

Anoop Sam John commented on HBASE-2038:
---

Or maybe we can give the signature of the seek() and reseek() at the 
RegionScanner as seek(byte[] rowKey) and reseek(byte[] rowKey)?
So that the seek will always go to the begin KV of the row in every CF [if the 
CF contains that key].

 Coprocessors: Region level indexing
 ---

 Key: HBASE-2038
 URL: https://issues.apache.org/jira/browse/HBASE-2038
 Project: HBase
  Issue Type: New Feature
  Components: coprocessors
Reporter: Andrew Purtell
Priority: Minor

 HBASE-2037 is a good candidate to be done as a coprocessor. It also serves as a 
 good goalpost for coprocessor environment design -- there should be enough of 
 it that region level indexing can be reimplemented as a coprocessor without any 
 loss of functionality. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5520) Support seek() reseek() at RegionScanner

2012-03-05 Thread Anoop Sam John (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13222341#comment-13222341
 ] 

Anoop Sam John commented on HBASE-5520:
---

We can support seek() and reseek() only at the row boundary level.
We can take any of the below approaches:
1. The APIs make use of only the rowkey and timestamp from the KeyValue passed.
2. Check at the RegionScannerImpl level that the passed KV does not carry a CF 
or qualifier; if it does, throw an exception. The KV may carry only the rowkey 
and timestamp. [That is ok; the timestamp can be there...]
3. Don't bother; just let the seek happen. But that may be dangerous?
4. We can give the signature of the seek() and reseek() at the RegionScanner as 
seek(byte[] rowKey) and reseek(byte[] rowKey), so that the seek will always go 
to the begin KV of the row in every CF [if the CF contains that key].
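
Approach 4 can be sketched with a plain lexicographic row search. The flat 
array below stands in for the scanner's real state, and all names are invented 
for illustration; this is not the HBase scanner API:

```java
public class RowSeek {
    // Unsigned lexicographic comparison, the order HBase uses for row keys.
    static int compareRows(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // seek(byte[] rowKey): position at the first row >= rowKey, i.e. the
    // "begin KV" of that row. A reseek(byte[]) would scan forward from the
    // current index instead of starting at 0.
    static int seek(byte[][] sortedRows, byte[] rowKey) {
        int i = 0;
        while (i < sortedRows.length && compareRows(sortedRows[i], rowKey) < 0) {
            i++;
        }
        return i;
    }

    public static void main(String[] args) {
        byte[][] rows = { "r1".getBytes(), "r3".getBytes(), "r5".getBytes() };
        System.out.println(seek(rows, "r3".getBytes()));  // 1: exact match
        System.out.println(seek(rows, "r4".getBytes()));  // 2: next row
    }
}
```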

 Support seek() reseek() at RegionScanner
 

 Key: HBASE-5520
 URL: https://issues.apache.org/jira/browse/HBASE-5520
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.92.0
Reporter: Anoop Sam John

 seek() reseek() is not supported currently at the RegionScanner level. We can 
 support the same.
 This is created following the discussion under HBASE-2038

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5507) ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not use ByteBuffer correctly

2012-03-05 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13222378#comment-13222378
 ] 

Phabricator commented on HBASE-5507:


tedyu has accepted the revision HBASE-5507 [jira] 
ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not 
use ByteBuffer correctly.

REVISION DETAIL
  https://reviews.facebook.net/D2073

BRANCH
  5507


 ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not 
 use ByteBuffer correctly
 -

 Key: HBASE-5507
 URL: https://issues.apache.org/jira/browse/HBASE-5507
 Project: HBase
  Issue Type: Bug
  Components: thrift
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HBASE-5507.D2073.1.patch, HBASE-5507.D2073.2.patch, 
 HBASE-5507.D2073.3.patch


 We observed that, with the framed transport option, the Thrift call 
 ThriftServerRunner.HbaseHandler.getRegionInfo() receives a corrupted parameter 
 (some garbage string attached to the beginning). This may be a Thrift bug and 
 requires further investigation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HBASE-5338) Add SKIP support to importtsv

2012-03-05 Thread Brock Noland (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland reassigned HBASE-5338:
---

Assignee: Brock Noland

 Add SKIP support to importtsv 
 --

 Key: HBASE-5338
 URL: https://issues.apache.org/jira/browse/HBASE-5338
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Lars George
Assignee: Brock Noland
Priority: Trivial

 It'd be nice to have support for SKIP mappings so that you can omit columns 
 from the TSV during the import. For example
 {code}
 -Dimporttsv.columns=SKIP,HBASE_ROW_KEY,cf1:col1,cf1:col2,SKIP,SKIP,cf2:col1...
 {code}
 Or maybe HBASE_SKIP_COLUMN to be less ambiguous. 
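
 A hedged sketch of how such a mapping could be parsed; the class and method 
 names are invented for illustration, and the real importtsv parser differs:

{code}
import java.util.ArrayList;
import java.util.List;

public class SkipColumnParser {
    // Pair each TSV field with its column spec, dropping SKIP columns and the
    // row-key marker; returns {spec, value} pairs destined for Puts.
    static List<String[]> mapFields(String columnsSpec, String[] fields) {
        String[] specs = columnsSpec.split(",");
        List<String[]> cells = new ArrayList<>();
        for (int i = 0; i < specs.length && i < fields.length; i++) {
            if (specs[i].equals("SKIP") || specs[i].equals("HBASE_ROW_KEY")) {
                continue;  // omitted from the import
            }
            cells.add(new String[] { specs[i], fields[i] });
        }
        return cells;
    }

    public static void main(String[] args) {
        List<String[]> cells = mapFields(
            "SKIP,HBASE_ROW_KEY,cf1:col1,SKIP,cf2:col1",
            new String[] { "junk", "row1", "v1", "junk", "v2" });
        for (String[] c : cells) {
            System.out.println(c[0] + "=" + c[1]);  // cf1:col1=v1 then cf2:col1=v2
        }
    }
}
{code}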

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5517) Region Server Coprocessor : Suggestion for change when next() call with nbRows > 1

2012-03-05 Thread Anoop Sam John (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13222409#comment-13222409
 ] 

Anoop Sam John commented on HBASE-5517:
---

In HRegionServer.next(final long scannerId, int nbRows):
{code}
for (int i = 0; i < nbRows
    && currentScanResultSize < maxScannerResultSize; i++) {
  requestCount.incrementAndGet();
{code}
Here, if next() is called with nbRows=10, we are treating it as 10 requests that 
came to the RS; we treat it as 10 different operations on the RS. In that case, 
shouldn't we contact the CP 10 times rather than 1 time? Correct me if I am 
wrong... :)
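
The suggestion can be sketched as follows; the observer interface and the 
row representation are simplified stand-ins, not the actual HBase coprocessor 
API:

```java
import java.util.ArrayList;
import java.util.List;

public class PerRowHooks {
    interface RegionObserver {
        void preNext();
        void postNext();
    }

    // Invoke the coprocessor hooks once per fetched row, matching how each
    // row already counts as one request against the region server.
    static List<String> next(List<String> rows, int nbRows, RegionObserver cp) {
        List<String> results = new ArrayList<>();
        for (int i = 0; i < nbRows && i < rows.size(); i++) {
            cp.preNext();               // e.g. an index-driven reseek() for IHBase
            results.add(rows.get(i));
            cp.postNext();
        }
        return results;
    }
}
```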

 Region Server Coprocessor : Suggestion for change when next() call with 
 nbRows > 1
 

 Key: HBASE-5517
 URL: https://issues.apache.org/jira/browse/HBASE-5517
 Project: HBase
  Issue Type: Improvement
  Components: coprocessors
Affects Versions: 0.92.0
Reporter: Anoop Sam John

 Originated from the discussion under HBASE-2038 [Coprocessor based IHBase]
 Currently preNext() and postNext() will be called once for a next() call into 
 HRegionServer.
 But if next() is being called with nbRows > 1, the coprocessor should provide 
 a chance to do some operation before and after every next() call into the 
 region as part of the call next(int scannerId, int nbRows).
 In the case of using a coprocessor with IHBase, before making any call of 
 next() into a Region, we need to do a reseek() to a row based on the index 
 information.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5516) GZip leading to memory leak in 0.90. Fix similar to HBASE-5387 needed for 0.90.

2012-03-05 Thread ramkrishna.s.vasudevan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13222411#comment-13222411
 ] 

ramkrishna.s.vasudevan commented on HBASE-5516:
---

Test cases are running. Will upload the patch after that.

 GZip leading to memory leak in 0.90.  Fix similar to HBASE-5387 needed for 
 0.90.
 

 Key: HBASE-5516
 URL: https://issues.apache.org/jira/browse/HBASE-5516
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.5
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.90.7


 Usage of GZip is leading to resident memory leak in 0.90.
 We need to have something similar to HBASE-5387 in 0.90. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5520) Support seek() reseek() at RegionScanner

2012-03-05 Thread ramkrishna.s.vasudevan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13222414#comment-13222414
 ] 

ramkrishna.s.vasudevan commented on HBASE-5520:
---

We can upload a patch once we agree on one of the approaches.

Please provide your suggestions.


 Support seek() reseek() at RegionScanner
 

 Key: HBASE-5520
 URL: https://issues.apache.org/jira/browse/HBASE-5520
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.92.0
Reporter: Anoop Sam John

 seek() reseek() is not supported currently at the RegionScanner level. We can 
 support the same.
 This is created following the discussion under HBASE-2038

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5399) Cut the link between the client and the zookeeper ensemble

2012-03-05 Thread nkeywal (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13222436#comment-13222436
 ] 

nkeywal commented on HBASE-5399:


org.apache.hadoop.hbase.TestZooKeeper is surprising, because:
- if we add a 7s sleep at the start of testMasterSessionExpired(), it becomes 
much harder to reproduce.
- in RecoverableZooKeeper, there is no handling of SESSIONEXPIRED: if it 
happens, there is no retry.

So I tend to think it's an existing issue, even if I still need to understand 
how it's supposed to work when there is a session timeout. I tried to add it, 
but it does not work.

 Cut the link between the client and the zookeeper ensemble
 --

 Key: HBASE-5399
 URL: https://issues.apache.org/jira/browse/HBASE-5399
 Project: HBase
  Issue Type: Improvement
  Components: client
Affects Versions: 0.94.0
 Environment: all
Reporter: nkeywal
Assignee: nkeywal
Priority: Minor
 Attachments: 5399.v27.patch, 5399.v38.patch, 5399.v39.patch, 
 5399_inprogress.patch, 5399_inprogress.v14.patch, 5399_inprogress.v16.patch, 
 5399_inprogress.v18.patch, 5399_inprogress.v20.patch, 
 5399_inprogress.v21.patch, 5399_inprogress.v23.patch, 
 5399_inprogress.v3.patch, 5399_inprogress.v32.patch, 5399_inprogress.v9.patch


 The link is often considered as an issue, for various reasons. One of them 
 being that there is a limit on the number of connection that ZK can manage. 
 Stack was suggesting as well to remove the link to master from HConnection.
 There are choices to be made considering the existing API (that we don't want 
 to break).
 The first patches I will submit on hadoop-qa should not be committed: they 
 are here to show the progress on the direction taken.
 ZooKeeper is used for:
 - public getter, to let the client do whatever he wants, and close ZooKeeper 
 when closing the connection => we have to deprecate this but keep it.
 - read the master address to create a master => now done with a temporary 
 zookeeper connection
 - read the root location => now done with a temporary zookeeper connection, but 
 questionable. Used in the public function locateRegion. To be reworked.
 - read the cluster id => now done once with a temporary zookeeper connection.
 - check if the base node is available => now done once with a zookeeper 
 connection given as a parameter
 - isTableDisabled/isTableAvailable => public functions, now done with a 
 temporary zookeeper connection.
   - Called internally from HBaseAdmin and HTable
 - getCurrentNrHRS(): public function to get the number of region servers and 
 create a pool of threads => now done with a temporary zookeeper connection
 -
 Master is used for:
 - getMaster public getter, as for ZooKeeper => we have to deprecate this but 
 keep it.
 - isMasterRunning(): public function, used internally by HMerge & HBaseAdmin
 - getHTableDescriptor*: public functions offering access to the master => 
 we could make them use a temporary master connection as well.
 Main points are:
 - hbase class for ZooKeeper; ZooKeeperWatcher is really designed for a 
 strongly coupled architecture ;-). This can be changed, but requires a lot of 
 modifications in these classes (likely adding a class in the middle of the 
 hierarchy, something like that). Anyway, a non-connected client will always be 
 really slower, because it's a tcp connection, and establishing a tcp 
 connection is slow.
 - having a link between ZK and all the clients seems to make sense for some 
 use cases. However, it won't scale if a TCP connection is required for every 
 client
 - if we move the table descriptor part away from the client, we need to find 
 a new place for it.
 - we will have the same issue with HBaseAdmin (for both ZK & Master); maybe we 
 can put a timeout on the connection. That would make the whole system less 
 deterministic however.
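
 The "temporary connection" pattern described above can be sketched 
 generically. The reader interface below is a stand-in for illustration, not 
 the real ZooKeeperWatcher API:

{code}
public class TemporaryConnection {
    // Stand-in for a short-lived zookeeper client: open, read one value, close.
    interface Reader extends AutoCloseable {
        String read(String znode);
        @Override void close();  // narrowed: no checked exception
    }

    // Read a single value (e.g. the master address or cluster id) without
    // keeping a long-lived connection per client.
    static String readOnce(java.util.function.Supplier<Reader> connect, String znode) {
        try (Reader r = connect.get()) {
            return r.read(znode);  // the connection is closed on exit, success or not
        }
    }

    public static void main(String[] args) {
        final boolean[] closed = { false };
        String v = readOnce(() -> new Reader() {
            public String read(String z) { return "master:60000"; }
            public void close() { closed[0] = true; }
        }, "/hbase/master");
        System.out.println(v + " closed=" + closed[0]);  // master:60000 closed=true
    }
}
{code}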

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-2817) Allow separate HBASE_REGIONSERVER_HEAPSIZE and HBASE_MASTER_HEAPSIZE

2012-03-05 Thread Adrian Muraru (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222452#comment-13222452
 ] 

Adrian Muraru commented on HBASE-2817:
--

This is related to HBASE-1687 - I see it fixed, but the per-service HEAPSIZE is 
still not sorted out

 Allow separate HBASE_REGIONSERVER_HEAPSIZE and HBASE_MASTER_HEAPSIZE
 

 Key: HBASE-2817
 URL: https://issues.apache.org/jira/browse/HBASE-2817
 Project: HBase
  Issue Type: Improvement
  Components: scripts
Affects Versions: 0.90.0
Reporter: Todd Lipcon
Priority: Minor

 Right now we have a single HBASE_HEAPSIZE configuration. This isn't that 
 great, since the HMaster doesn't really need much ram compared to the region 
 servers. We should allow different java options and heapsize for the 
 different daemon types.
 Probably worth breaking out THRIFT, REST, AVRO, etc, as well.





[jira] [Updated] (HBASE-4608) HLog Compression

2012-03-05 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-4608:
-

Fix Version/s: 0.94.0

Marking for 0.94

 HLog Compression
 

 Key: HBASE-4608
 URL: https://issues.apache.org/jira/browse/HBASE-4608
 Project: HBase
  Issue Type: New Feature
Reporter: Li Pi
Assignee: Li Pi
 Fix For: 0.94.0

 Attachments: 4608v1.txt, 4608v13.txt, 4608v13.txt, 4608v14.txt, 
 4608v15.txt, 4608v16.txt, 4608v5.txt, 4608v6.txt, 4608v7.txt, 4608v8fixed.txt


 The current bottleneck to HBase write speed is replicating the WAL appends 
 across different datanodes. We can speed this process up by compressing the 
 HLog. The current plan involves using a dictionary to compress the table name, 
 region id, CF name, and possibly other bits of repeated data. The HLog format 
 may also be changed in other ways to produce a smaller HLog.
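The dictionary idea can be sketched in a few lines. This is a toy illustration over strings with hypothetical names (the actual HBASE-4608 patch works on byte arrays and the real HLog format): a repeated entry is written in full the first time and as a short index afterwards, and a reader that sees the same stream rebuilds the same dictionary.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative write-side dictionary: the first time an entry is seen it is
// emitted in full and assigned the next index; afterwards only the index is
// emitted. The reader rebuilds the same table in the same order.
public class WalDictionary {
    private final Map<String, Integer> toIndex = new HashMap<>();
    private final List<String> toEntry = new ArrayList<>();

    /** Returns the full literal on first sight, a short index token afterwards. */
    public String encode(String entry) {
        Integer idx = toIndex.get(entry);
        if (idx != null) {
            return "#" + idx;            // e.g. "#0" instead of the table name
        }
        toIndex.put(entry, toEntry.size());
        toEntry.add(entry);
        return entry;                    // full literal, defines the next index
    }

    public String decode(String token) {
        if (token.startsWith("#")) {
            return toEntry.get(Integer.parseInt(token.substring(1)));
        }
        // A literal both decodes to itself and extends the dictionary.
        toIndex.put(token, toEntry.size());
        toEntry.add(token);
        return token;
    }
}
```

Separate writer and reader instances stay in sync as long as they see the same token stream, which is why this works for a replicated append log.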





[jira] [Updated] (HBASE-5507) ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not use ByteBuffer correctly

2012-03-05 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5507:
--

Fix Version/s: 0.94.0

 ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not 
 use ByteBuffer correctly
 -

 Key: HBASE-5507
 URL: https://issues.apache.org/jira/browse/HBASE-5507
 Project: HBase
  Issue Type: Bug
  Components: thrift
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.94.0

 Attachments: HBASE-5507.D2073.1.patch, HBASE-5507.D2073.2.patch, 
 HBASE-5507.D2073.3.patch


 We observed that, with the framed transport option, the Thrift call 
 ThriftServerRunner.HbaseHandler.getRegionInfo() receives a corrupted parameter 
 (a garbage string attached to the beginning). This may be a Thrift bug and 
 requires further investigation.
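The underlying pitfall can be reproduced with plain java.nio (an illustrative sketch, not the actual HBase fix): ByteBuffer.array() returns the whole backing array and ignores position/arrayOffset, so a buffer that is a view into a larger frame, as the framed transport produces, yields leading garbage.

```java
import java.nio.ByteBuffer;

public class ByteBufferDemo {
    // WRONG: array() returns the entire backing array, ignoring the buffer's
    // position and arrayOffset, so a view into a larger frame leaks garbage.
    static byte[] wrongGet(ByteBuffer bb) {
        return bb.array();
    }

    // RIGHT: copy exactly the remaining() bytes between position and limit.
    static byte[] correctGet(ByteBuffer bb) {
        byte[] out = new byte[bb.remaining()];
        bb.duplicate().get(out); // duplicate() keeps the caller's position intact
        return out;
    }

    public static void main(String[] args) {
        byte[] frame = "GARBAGEtableName".getBytes();
        // A parameter buffer that starts mid-frame, as a framed transport produces.
        ByteBuffer param = ByteBuffer.wrap(frame, 7, 9);
        System.out.println(new String(wrongGet(param)));   // GARBAGEtableName
        System.out.println(new String(correctGet(param))); // tableName
    }
}
```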





[jira] [Commented] (HBASE-5507) ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not use ByteBuffer correctly

2012-03-05 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222460#comment-13222460
 ] 

Zhihong Yu commented on HBASE-5507:
---

@Scott:
I assume the fix should also go to the 0.94 branch.
{code}
2 out of 3 hunks FAILED -- saving rejects to file 
src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java.rej
patching file src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java
Hunk #1 succeeded at 420 (offset -9 lines).
patching file 
src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServerCmdLine.java
Reversed (or previously applied) patch detected!  Assume -R? [n] 
Apply anyway? [n] 
Skipping patch.
2 out of 2 hunks ignored -- saving rejects to file 
src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServerCmdLine.java.rej
{code}
Can you prepare a patch that applies cleanly to 0.94?

Thanks

 ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not 
 use ByteBuffer correctly
 -

 Key: HBASE-5507
 URL: https://issues.apache.org/jira/browse/HBASE-5507
 Project: HBase
  Issue Type: Bug
  Components: thrift
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.94.0

 Attachments: HBASE-5507.D2073.1.patch, HBASE-5507.D2073.2.patch, 
 HBASE-5507.D2073.3.patch


 We observed that, with the framed transport option, the Thrift call 
 ThriftServerRunner.HbaseHandler.getRegionInfo() receives a corrupted parameter 
 (a garbage string attached to the beginning). This may be a Thrift bug and 
 requires further investigation.





[jira] [Commented] (HBASE-5515) Add a processRow API that supports atomic multiple reads and writes on a row

2012-03-05 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222464#comment-13222464
 ] 

Phabricator commented on HBASE-5515:


sc has commented on the revision HBASE-5515 [jira] Add a processRow API that 
supports atomic multiple reads and writes on a row.

  Ted: Thanks for the review! I will update this soon.

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java:4364 I think 
you're right. If any exception is thrown, we actually don't want to call 
completeMemstoreInsert().
  src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java:4368 good catch
  src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java:4318 good 
idea. I will make that change.

REVISION DETAIL
  https://reviews.facebook.net/D2067


 Add a processRow API that supports atomic multiple reads and writes on a row
 

 Key: HBASE-5515
 URL: https://issues.apache.org/jira/browse/HBASE-5515
 Project: HBase
  Issue Type: New Feature
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HBASE-5515.D2067.1.patch, HBASE-5515.D2067.2.patch, 
 HBASE-5515.D2067.3.patch, HBASE-5515.D2067.4.patch, HBASE-5515.D2067.5.patch, 
 HBASE-5515.D2067.6.patch, HBASE-5515.D2067.7.patch, HBASE-5515.D2067.8.patch


 We have modified HRegion.java internally to do some atomic row processing. It 
 would be nice to have a pluggable API for this.





[jira] [Updated] (HBASE-5510) Pass region info in LoadBalancer.randomAssignment(List&lt;ServerName&gt; servers)

2012-03-05 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5510:
--

Summary: Pass region info in LoadBalancer.randomAssignment(List&lt;ServerName&gt; 
servers)  (was: Change in LB.randomAssignment(List&lt;ServerName&gt; servers) API)

 Pass region info in LoadBalancer.randomAssignment(List&lt;ServerName&gt; servers)
 ---

 Key: HBASE-5510
 URL: https://issues.apache.org/jira/browse/HBASE-5510
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.92.0
Reporter: Anoop Sam John
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.96.0

 Attachments: HBase-5010_3.patch, HBase-5510.patch, HBase-5510_2.patch


  In LB there is the randomAssignment(List&lt;ServerName&gt; servers) API, which will 
 be used by the AM to assign a region from a down RS. [This will also be used 
 in other cases, like a call to the assign() API from the client.]
  I feel it would be better to also pass the HRegionInfo into this method. When 
 the LB is making a choice for a region assignment while one RS is down, it 
 would be nice for the LB to know which region it is doing this server 
 selection for.
 +Scenario+
  While one RS is down, we want the regions to get moved to other RSs, but with 
 a certain set of regions staying together. We have a custom load balancer, but 
 with the current LB interface this is not possible. Alternatively, I can allow 
 a random assignment of the regions at RS-down time and later balance the 
 regions as I need with a cluster balance. But this might make regions get 
 assigned first to one RS and then move again to another, and for some period 
 my business use case cannot be satisfied.
  I have also seen an issue in JIRA about making sure that the Root and META 
 regions always sit on specific RSs. With the current LB API this won't be 
 possible in future.
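The proposed change is essentially one extra parameter on the balancer interface. A schematic sketch with simplified stand-in types (the real ServerName/HRegionInfo classes carry much more state, and the method names here mirror the discussion rather than the final patch), showing how region info enables a "regions stay together" policy:

```java
import java.util.List;

// Schematic types; the real HBase classes carry much more state.
record ServerName(String host) {}
record HRegionInfo(String table, String startKey) {}

interface LoadBalancer {
    // Existing shape: the balancer cannot see which region it is placing.
    ServerName randomAssignment(List<ServerName> servers);

    // Proposed shape: region info lets a custom balancer group regions.
    ServerName randomAssignment(HRegionInfo region, List<ServerName> servers);
}

// Example custom balancer: all regions of the same table land on the same
// server (deterministic by table-name hash), which is the "set of regions
// stay together" scenario from the description.
class GroupingBalancer implements LoadBalancer {
    public ServerName randomAssignment(List<ServerName> servers) {
        return servers.get(0); // without region info there is nothing better to do
    }

    public ServerName randomAssignment(HRegionInfo region, List<ServerName> servers) {
        int i = Math.floorMod(region.table().hashCode(), servers.size());
        return servers.get(i);
    }
}
```

With the region passed in, two regions of the same table always resolve to the same server; the no-argument overload cannot express that.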





[jira] [Assigned] (HBASE-5510) Pass region info in LoadBalancer.randomAssignment(List&lt;ServerName&gt; servers)

2012-03-05 Thread Zhihong Yu (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu reassigned HBASE-5510:
-

Assignee: Anoop Sam John  (was: ramkrishna.s.vasudevan)

Integrated to TRUNK.

Thanks for the patch Anoop.

 Pass region info in LoadBalancer.randomAssignment(List&lt;ServerName&gt; servers)
 ---

 Key: HBASE-5510
 URL: https://issues.apache.org/jira/browse/HBASE-5510
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.92.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.96.0

 Attachments: HBase-5010_3.patch, HBase-5510.patch, HBase-5510_2.patch


  In LB there is the randomAssignment(List&lt;ServerName&gt; servers) API, which will 
 be used by the AM to assign a region from a down RS. [This will also be used 
 in other cases, like a call to the assign() API from the client.]
  I feel it would be better to also pass the HRegionInfo into this method. When 
 the LB is making a choice for a region assignment while one RS is down, it 
 would be nice for the LB to know which region it is doing this server 
 selection for.
 +Scenario+
  While one RS is down, we want the regions to get moved to other RSs, but with 
 a certain set of regions staying together. We have a custom load balancer, but 
 with the current LB interface this is not possible. Alternatively, I can allow 
 a random assignment of the regions at RS-down time and later balance the 
 regions as I need with a cluster balance. But this might make regions get 
 assigned first to one RS and then move again to another, and for some period 
 my business use case cannot be satisfied.
  I have also seen an issue in JIRA about making sure that the Root and META 
 regions always sit on specific RSs. With the current LB API this won't be 
 possible in future.





[jira] [Created] (HBASE-5521) Move compression/decompression to an encoder specific encoding context

2012-03-05 Thread He Yongqiang (Created) (JIRA)
Move compression/decompression to an encoder specific encoding context
--

 Key: HBASE-5521
 URL: https://issues.apache.org/jira/browse/HBASE-5521
 Project: HBase
  Issue Type: Improvement
Reporter: He Yongqiang


As part of working on HBASE-5313, we want to add a new columnar 
encoder/decoder. It makes sense to move compression to be part of the 
encoder/decoder:
1) a scanner for a columnar-encoded block can do lazy decompression of a 
specific part of a key-value object
2) we avoid an extra byte-array copy from the encoder to the HFile block 
writer.

If there is no encoder specified for a writer, the HFileBlock.Writer will use 
a default compression context to do something very similar to today's code.
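A minimal sketch of what a default, context-owned compression step might look like (hypothetical class names; java.util.zip's Deflate stands in for HBase's pluggable Compression codecs): the context encapsulates compress/decompress so an encoder implementation can own the whole encode-then-compress pipeline instead of handing an uncompressed buffer back to the block writer.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

// Hypothetical encoding context: owns the compression streams, so an encoder
// can stream encoded bytes straight into compression rather than returning an
// uncompressed buffer to the block writer (the "extra copy" in the description).
class EncodingContext {
    /** Compresses one encoded block body. */
    byte[] compress(byte[] encoded) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DeflaterOutputStream out = new DeflaterOutputStream(bos)) {
            out.write(encoded);
        }
        return bos.toByteArray();
    }

    /** Inverse of compress(), used on the read path. */
    byte[] decompress(byte[] compressed) throws IOException {
        try (InflaterInputStream in =
                 new InflaterInputStream(new ByteArrayInputStream(compressed))) {
            return in.readAllBytes();
        }
    }
}
```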






[jira] [Commented] (HBASE-5313) Restructure hfiles layout for better compression

2012-03-05 Thread He Yongqiang (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222512#comment-13222512
 ] 

He Yongqiang commented on HBASE-5313:
-

As part of working on HBASE-5313, we first tried to write a 
HFileWriter/HFileReader to do it. After finishing some of the work, it seems 
this requires a lot of code refactoring in order to reuse the existing code as 
much as possible.

Then we found that adding a new columnar encoder/decoder would be easier to do, 
and opened https://issues.apache.org/jira/browse/HBASE-5521 for the 
encoder/decoder-specific compression work.

 Restructure hfiles layout for better compression
 

 Key: HBASE-5313
 URL: https://issues.apache.org/jira/browse/HBASE-5313
 Project: HBase
  Issue Type: Improvement
  Components: io
Reporter: dhruba borthakur
Assignee: dhruba borthakur

 An HFile block contains a stream of key-values. Can we organize these KVs 
 on the disk in a better way so that we get much greater compression ratios?
 One option (thanks, Prakash) is to store all the keys at the beginning of the 
 block (let's call this the key section) and then store all their 
 corresponding values towards the end of the block. This would allow us to 
 not even decompress the values when we are scanning and skipping over rows in 
 the block.
 Any other ideas? 
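The key-section idea can be sketched with a toy block format (illustrative only: newline-joined strings and Deflate stand in for the real KV serialization and codecs). Keys and values are compressed as two independent sections, so a key-only scan inflates just the key section and never touches the compressed values.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.List;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

// Toy block: all keys in one independently-compressed section, all values in
// another, so scanning/skipping rows needs only the key section.
public class KvBlock {
    /** Layout: [keySectionLen][deflated keys][deflated values]. */
    static byte[] write(List<String> keys, List<String> values) throws IOException {
        byte[] k = deflate(String.join("\n", keys).getBytes());
        byte[] v = deflate(String.join("\n", values).getBytes());
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        new DataOutputStream(out).writeInt(k.length);
        out.write(k);
        out.write(v);
        return out.toByteArray();
    }

    /** Decompresses only the key section; the value bytes stay compressed. */
    static List<String> scanKeys(byte[] block) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(block));
        byte[] k = new byte[in.readInt()];
        in.readFully(k);
        return List.of(new String(inflate(k)).split("\n"));
    }

    static byte[] deflate(byte[] b) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DeflaterOutputStream d = new DeflaterOutputStream(bos)) { d.write(b); }
        return bos.toByteArray();
    }

    static byte[] inflate(byte[] b) throws IOException {
        return new InflaterInputStream(new ByteArrayInputStream(b)).readAllBytes();
    }
}
```

Compressing the sections separately is what buys the lazy behavior: with a single compressed stream, reading the keys would force decompressing the values too.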





[jira] [Commented] (HBASE-5313) Restructure hfiles layout for better compression

2012-03-05 Thread Matt Corgan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222550#comment-13222550
 ] 

Matt Corgan commented on HBASE-5313:


Just noticed this jira.  I've been working on 
https://issues.apache.org/jira/browse/HBASE-4676.  In this trie format all the 
values are concatenated at the end of the block.  I haven't done anything with 
compressing them because they are generally small in my use cases, but seems 
like it would eventually be a good option.  I would think that the compression 
savings would be similar to the on-disk compression savings, but the benefit is 
that you have access to scan the keys while the data part of the block is still 
compressed.

 Restructure hfiles layout for better compression
 

 Key: HBASE-5313
 URL: https://issues.apache.org/jira/browse/HBASE-5313
 Project: HBase
  Issue Type: Improvement
  Components: io
Reporter: dhruba borthakur
Assignee: dhruba borthakur

 An HFile block contains a stream of key-values. Can we organize these KVs 
 on the disk in a better way so that we get much greater compression ratios?
 One option (thanks, Prakash) is to store all the keys at the beginning of the 
 block (let's call this the key section) and then store all their 
 corresponding values towards the end of the block. This would allow us to 
 not even decompress the values when we are scanning and skipping over rows in 
 the block.
 Any other ideas? 





[jira] [Updated] (HBASE-5521) Move compression/decompression to an encoder specific encoding context

2012-03-05 Thread He Yongqiang (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Yongqiang updated HBASE-5521:


Attachment: HBASE-5313-refactory.1.patch

 Move compression/decompression to an encoder specific encoding context
 --

 Key: HBASE-5521
 URL: https://issues.apache.org/jira/browse/HBASE-5521
 Project: HBase
  Issue Type: Improvement
Reporter: He Yongqiang
 Attachments: HBASE-5313-refactory.1.patch


 As part of working on HBASE-5313, we want to add a new columnar 
 encoder/decoder. It makes sense to move compression to be part of 
 encoder/decoder:
 1) a scanner for a columnar encoded block can do lazy decompression to a 
 specific part of a key value object
 2) avoid an extra bytes copy from encoder to hblock-writer. 
 If there is no encoder specified for a writer, the HBlock.Writer will use a 
 default compression-context to do something very similar to today's code.





[jira] [Updated] (HBASE-5521) Move compression/decompression to an encoder specific encoding context

2012-03-05 Thread He Yongqiang (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Yongqiang updated HBASE-5521:


Attachment: (was: HBASE-5313-refactory.1.patch)

 Move compression/decompression to an encoder specific encoding context
 --

 Key: HBASE-5521
 URL: https://issues.apache.org/jira/browse/HBASE-5521
 Project: HBase
  Issue Type: Improvement
Reporter: He Yongqiang

 As part of working on HBASE-5313, we want to add a new columnar 
 encoder/decoder. It makes sense to move compression to be part of 
 encoder/decoder:
 1) a scanner for a columnar encoded block can do lazy decompression to a 
 specific part of a key value object
 2) avoid an extra bytes copy from encoder to hblock-writer. 
 If there is no encoder specified for a writer, the HBlock.Writer will use a 
 default compression-context to do something very similar to today's code.





[jira] [Updated] (HBASE-5521) Move compression/decompression to an encoder specific encoding context

2012-03-05 Thread He Yongqiang (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Yongqiang updated HBASE-5521:


Attachment: HBASE-5521.1.patch

 Move compression/decompression to an encoder specific encoding context
 --

 Key: HBASE-5521
 URL: https://issues.apache.org/jira/browse/HBASE-5521
 Project: HBase
  Issue Type: Improvement
Reporter: He Yongqiang
 Attachments: HBASE-5521.1.patch


 As part of working on HBASE-5313, we want to add a new columnar 
 encoder/decoder. It makes sense to move compression to be part of 
 encoder/decoder:
 1) a scanner for a columnar encoded block can do lazy decompression to a 
 specific part of a key value object
 2) avoid an extra bytes copy from encoder to hblock-writer. 
 If there is no encoder specified for a writer, the HBlock.Writer will use a 
 default compression-context to do something very similar to today's code.





[jira] [Commented] (HBASE-5521) Move compression/decompression to an encoder specific encoding context

2012-03-05 Thread He Yongqiang (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222576#comment-13222576
 ] 

He Yongqiang commented on HBASE-5521:
-

moved the review to https://reviews.facebook.net/D2097

 Move compression/decompression to an encoder specific encoding context
 --

 Key: HBASE-5521
 URL: https://issues.apache.org/jira/browse/HBASE-5521
 Project: HBase
  Issue Type: Improvement
Reporter: He Yongqiang
Assignee: He Yongqiang
 Attachments: HBASE-5521.1.patch, HBASE-5521.D2097.1.patch


 As part of working on HBASE-5313, we want to add a new columnar 
 encoder/decoder. It makes sense to move compression to be part of 
 encoder/decoder:
 1) a scanner for a columnar encoded block can do lazy decompression to a 
 specific part of a key value object
 2) avoid an extra bytes copy from encoder to hblock-writer. 
 If there is no encoder specified for a writer, the HBlock.Writer will use a 
 default compression-context to do something very similar to today's code.





[jira] [Updated] (HBASE-5521) Move compression/decompression to an encoder specific encoding context

2012-03-05 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HBASE-5521:
---

Attachment: HBASE-5521.D2097.1.patch

heyongqiang requested code review of HBASE-5521 [jira] Move 
compression/decompression to an encoder specific encoding context.
Reviewers: JIRA

  https://issues.apache.org/jira/browse/HBASE-5521



  As part of working on HBASE-5313, we want to add a new columnar 
encoder/decoder. It makes sense to move compression to be part of the 
encoder/decoder:
  1) a scanner for a columnar-encoded block can do lazy decompression of a 
specific part of a key-value object
  2) we avoid an extra byte-array copy from the encoder to the HFile block 
writer.

  If there is no encoder specified for a writer, the HFileBlock.Writer will use 
a default compression context to do something very similar to today's code.

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D2097

AFFECTED FILES
  src/test/java/org/apache/hadoop/hbase/regionserver/DataBlockEncodingTool.java
  src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java
  src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
  
src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultEncodingContext.java
  src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java
  
src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDecodingContext.java
  src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java
  
src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
  src/main/java/org/apache/hadoop/hbase/io/encoding/CopyKeyDataBlockEncoder.java
  
src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultDecodingContext.java
  src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoding.java
  
src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockEncodingContext.java
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoder.java
  src/main/java/org/apache/hadoop/hbase/io/hfile/Compression.java
  src/main/java/org/apache/hadoop/hbase/io/hfile/NoOpDataBlockEncoder.java

MANAGE HERALD DIFFERENTIAL RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/4539/

Tip: use the X-Herald-Rules header to filter Herald messages in your client.


 Move compression/decompression to an encoder specific encoding context
 --

 Key: HBASE-5521
 URL: https://issues.apache.org/jira/browse/HBASE-5521
 Project: HBase
  Issue Type: Improvement
Reporter: He Yongqiang
Assignee: He Yongqiang
 Attachments: HBASE-5521.1.patch, HBASE-5521.D2097.1.patch


 As part of working on HBASE-5313, we want to add a new columnar 
 encoder/decoder. It makes sense to move compression to be part of 
 encoder/decoder:
 1) a scanner for a columnar encoded block can do lazy decompression to a 
 specific part of a key value object
 2) avoid an extra bytes copy from encoder to hblock-writer. 
 If there is no encoder specified for a writer, the HBlock.Writer will use a 
 default compression-context to do something very similar to today's code.





[jira] [Commented] (HBASE-5515) Add a processRow API that supports atomic multiple reads and writes on a row

2012-03-05 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222593#comment-13222593
 ] 

Phabricator commented on HBASE-5515:


lhofhansl has commented on the revision HBASE-5515 [jira] Add a processRow API 
that supports atomic multiple reads and writes on a row.

  Here's another thought. This is actually slightly different from HBASE-5229.

  HBASE-5229 provides an API for coprocessors to use, while this issue 
provides the code from the outside (the RowProcessor).

  I think it would be nice if there were an API on the RegionServer to execute 
something with lock/mvcc.
  Not sure about the actual API, but the coprocessor endpoint could *be* the 
RowProcessor, which would have the advantage that they could be loaded 
dynamically and per table if needed.

  Maybe just public lockAndStartMvcc(row) and unlockAndCommitMvcc methods on 
the RegionServer.
  Or executeAsTransaction(RowProcessor), or something.
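The executeAsTransaction(RowProcessor) shape suggested above can be sketched as follows. This is a hypothetical illustration: the names mirror the suggestion rather than any committed API, and a plain ReentrantLock per row stands in for HBase's actual row locks and MVCC machinery.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical shapes for the discussion above: the RowProcessor carries the
// user's atomic read-modify-write logic; the "region" only supplies locking.
interface RowProcessor {
    void process(Map<String, Long> row);   // sees and mutates a locked row
}

class ToyRegion {
    private final Map<String, Map<String, Long>> rows = new HashMap<>();
    private final Map<String, ReentrantLock> locks = new HashMap<>();

    /** Runs the processor under the row lock (stand-in for row lock + MVCC). */
    public void executeAsTransaction(String rowKey, RowProcessor p) {
        ReentrantLock lock;
        synchronized (locks) {
            lock = locks.computeIfAbsent(rowKey, k -> new ReentrantLock());
        }
        lock.lock();
        try {
            p.process(rows.computeIfAbsent(rowKey, k -> new HashMap<>()));
        } finally {
            lock.unlock();
        }
    }

    public Long get(String rowKey, String col) {
        Map<String, Long> r = rows.get(rowKey);
        return r == null ? null : r.get(col);
    }
}
```

Because the lock/unlock pairing lives in executeAsTransaction rather than in user code, a processor cannot forget to release the row, which is the appeal of this API shape over exposing bare lockAndStartMvcc/unlockAndCommitMvcc methods.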

REVISION DETAIL
  https://reviews.facebook.net/D2067


 Add a processRow API that supports atomic multiple reads and writes on a row
 

 Key: HBASE-5515
 URL: https://issues.apache.org/jira/browse/HBASE-5515
 Project: HBase
  Issue Type: New Feature
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HBASE-5515.D2067.1.patch, HBASE-5515.D2067.2.patch, 
 HBASE-5515.D2067.3.patch, HBASE-5515.D2067.4.patch, HBASE-5515.D2067.5.patch, 
 HBASE-5515.D2067.6.patch, HBASE-5515.D2067.7.patch, HBASE-5515.D2067.8.patch


 We have modified HRegion.java internally to do some atomic row processing. It 
 would be nice to have a pluggable API for this.





[jira] [Commented] (HBASE-5521) Move compression/decompression to an encoder specific encoding context

2012-03-05 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222608#comment-13222608
 ] 

Phabricator commented on HBASE-5521:


tedyu has commented on the revision HBASE-5521 [jira] Move 
compression/decompression to an encoder specific encoding context.

  Still going over the new Context interfaces.

INLINE COMMENTS
  
src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockEncodingContext.java:51
 I don't find where this method is used.
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:690 There is 
nothing to be done between preparation and post processing ?
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:693 
nonDataBlockEncodingCtx should be used here, I assume.

REVISION DETAIL
  https://reviews.facebook.net/D2097


 Move compression/decompression to an encoder specific encoding context
 --

 Key: HBASE-5521
 URL: https://issues.apache.org/jira/browse/HBASE-5521
 Project: HBase
  Issue Type: Improvement
Reporter: He Yongqiang
Assignee: He Yongqiang
 Attachments: HBASE-5521.1.patch, HBASE-5521.D2097.1.patch


 As part of working on HBASE-5313, we want to add a new columnar 
 encoder/decoder. It makes sense to move compression to be part of 
 encoder/decoder:
 1) a scanner for a columnar encoded block can do lazy decompression to a 
 specific part of a key value object
 2) avoid an extra bytes copy from encoder to hblock-writer. 
 If there is no encoder specified for a writer, the HBlock.Writer will use a 
 default compression-context to do something very similar to today's code.





[jira] [Commented] (HBASE-5521) Move compression/decompression to an encoder specific encoding context

2012-03-05 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222616#comment-13222616
 ] 

Phabricator commented on HBASE-5521:


tedyu has commented on the revision HBASE-5521 [jira] Move 
compression/decompression to an encoder specific encoding context.

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:811 Should we 
perform similar handling for nonDataBlockEncodingCtx ?
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:810 
dataBlockEncodingCtx should be set to null so that release() is re-entrant.

REVISION DETAIL
  https://reviews.facebook.net/D2097


 Move compression/decompression to an encoder specific encoding context
 --

 Key: HBASE-5521
 URL: https://issues.apache.org/jira/browse/HBASE-5521
 Project: HBase
  Issue Type: Improvement
Reporter: He Yongqiang
Assignee: He Yongqiang
 Attachments: HBASE-5521.1.patch, HBASE-5521.D2097.1.patch


 As part of working on HBASE-5313, we want to add a new columnar 
 encoder/decoder. It makes sense to move compression to be part of 
 encoder/decoder:
 1) a scanner for a columnar encoded block can do lazy decompression to a 
 specific part of a key value object
 2) avoid an extra bytes copy from encoder to hblock-writer. 
 If there is no encoder specified for a writer, the HBlock.Writer will use a 
 default compression-context to do something very similar to today's code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5521) Move compression/decompression to an encoder specific encoding context

2012-03-05 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13222621#comment-13222621
 ] 

Phabricator commented on HBASE-5521:


mbautin has requested changes to the revision HBASE-5521 [jira] Move 
compression/decompression to an encoder specific encoding context.

  Yongqiang: thanks for taking on this refactoring project. I have added some 
comments inline.

  What is columnar encoded block format, by the way? Also, it is not totally 
clear to me why we would need to unite compression and encoding steps in order 
to support columnar encoded block format.

  This requires a test plan and new unit tests. Please add test cases for 
encoding context implementations.


INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java:58 
Why do we need two separate compressKeyValues functions? What is the difference 
in use cases? Overloaded methods make the interface confusing.

  Add postEncoding javadoc.
  
src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java:179-183 
Javadoc please.
  src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java:53 
Why do we need to take encoding as an explicit parameter? Can we figure it out 
from dataBlockEncoder?
  
src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java:323-327
 Is this actually what you wanted to check? This code as written will accept 
any subclass of HFileBlockDefaultEncodingContext. If you want to only accept an 
instance of the HFileBlockDefaultEncodingContext class specifically, you need

encodingCxt.getClass() == HFileBlockDefaultEncodingContext.class

  However, why do we need to enforce such a constraint? Would not an arbitrary 
implementation of HFileBlockEncodingContext be acceptable here?
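mbautin's point about `getClass()` equality versus `instanceof` can be shown in a few lines of standalone Java (the class names below are stand-ins for illustration, not the actual HFile classes):

```java
// Illustrates the reviewer's point: instanceof accepts any subclass,
// while getClass() equality accepts only the exact class.
public class ExactClassCheck {
    static class DefaultContext {}                  // stand-in for HFileBlockDefaultEncodingContext
    static class CustomContext extends DefaultContext {}

    static boolean isInstance(Object o)   { return o instanceof DefaultContext; }
    static boolean isExactClass(Object o) { return o != null && o.getClass() == DefaultContext.class; }

    public static void main(String[] args) {
        Object sub = new CustomContext();
        System.out.println(isInstance(sub));    // true: subclasses pass instanceof
        System.out.println(isExactClass(sub));  // false: getClass() demands the exact class
    }
}
```

This is why the `instanceof`-style check quoted above would admit any subclass of HFileBlockDefaultEncodingContext, not just the class itself.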

  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1396 Code 
style: add spaces after //. Capitalize the first word in a sentence. In short: 
be consistent with the existing style of the surrounding code.
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1327-1328 Why 
did you remove the comment about the peek-into-next-block-header optimization?

  "here we already read" -> "here we have already read"

  
src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java:194-195
 Why do we need to instantiate the encoding context for every encoding 
operation?
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1253 Why do we 
need to have a separate default encoding context instance here?

  Can you make the default context a singleton (or a per-compression-type 
singleton) and use the relevant unique instance instead of this field?
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1357 Code 
style: space after if.

REVISION DETAIL
  https://reviews.facebook.net/D2097

BRANCH
  svn


 Move compression/decompression to an encoder specific encoding context
 --

 Key: HBASE-5521
 URL: https://issues.apache.org/jira/browse/HBASE-5521
 Project: HBase
  Issue Type: Improvement
Reporter: He Yongqiang
Assignee: He Yongqiang
 Attachments: HBASE-5521.1.patch, HBASE-5521.D2097.1.patch


 As part of working on HBASE-5313, we want to add a new columnar 
 encoder/decoder. It makes sense to move compression to be part of 
 encoder/decoder:
 1) a scanner for a columnar encoded block can do lazy decompression to a 
 specific part of a key value object
 2) avoid an extra byte-array copy from the encoder to the HBlock.Writer. 
 If there is no encoder specified for a writer, the HBlock.Writer will use a 
 default compression-context to do something very similar to today's code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5371) Introduce AccessControllerProtocol.checkPermissions(Permission[] permissons) API

2012-03-05 Thread stack (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13222634#comment-13222634
 ] 

stack commented on HBASE-5371:
--

Please integrate into 0.92 Ted.  Thanks.

 Introduce AccessControllerProtocol.checkPermissions(Permission[] permissons) 
 API
 

 Key: HBASE-5371
 URL: https://issues.apache.org/jira/browse/HBASE-5371
 Project: HBase
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.92.1, 0.94.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.94.0

 Attachments: HBASE-5371-addendum_v1.patch, HBASE-5371_v2.patch, 
 HBASE-5371_v3-noprefix.patch, HBASE-5371_v3.patch


 We need to introduce something like 
 AccessControllerProtocol.checkPermissions(Permission[] permissions) API, so 
 that clients can check access rights before carrying out the operations. We 
 need this kind of operation for HCATALOG-245, which introduces authorization 
 providers for hbase over hcat. We cannot use getUserPermissions() since it 
 requires ADMIN permissions on the global/table level.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5358) HBaseObjectWritable should be able to serialize/deserialize generic arrays

2012-03-05 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5358:
--

Attachment: 5358-92.txt

Patch for 0.92 branch.

As Enis suggested, private class DummyType is introduced to align ordinals for 
classes across major releases.

 HBaseObjectWritable should be able to serialize/deserialize generic arrays
 --

 Key: HBASE-5358
 URL: https://issues.apache.org/jira/browse/HBASE-5358
 Project: HBase
  Issue Type: Improvement
  Components: coprocessors, io
Affects Versions: 0.94.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.94.0

 Attachments: 5358-92.txt, HBASE-5358_v3.patch


 HBaseObjectWritable can encode Writable[]'s, but cannot encode A[] where 
 A extends Writable. This becomes an issue, for example, when adding a 
 coprocessor method which takes A[] (see HBASE-5352). 
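The runtime-type problem behind this issue can be sketched with plain Java serialization standing in for the Writable encoding (the helper names below are illustrative, not the patch's API): write the component class name and length, then rebuild the array with `Array.newInstance` so the decoded array has runtime type `A[]` rather than `Object[]`.

```java
import java.io.*;
import java.lang.reflect.Array;

// Sketch of the idea behind HBASE-5358: to round-trip A[] when the component
// type is only known at runtime, record the component class and length, then
// reconstruct the array reflectively on the read side.
public class GenericArrayCodec {
    static void writeArray(ObjectOutputStream out, Object[] arr) throws IOException {
        out.writeUTF(arr.getClass().getComponentType().getName()); // runtime component type
        out.writeInt(arr.length);
        for (Object o : arr) out.writeObject(o);
    }

    static Object[] readArray(ObjectInputStream in) throws IOException, ClassNotFoundException {
        Class<?> component = Class.forName(in.readUTF());
        int len = in.readInt();
        Object[] arr = (Object[]) Array.newInstance(component, len); // runtime type is component[]
        for (int i = 0; i < len; i++) arr[i] = in.readObject();
        return arr;
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            writeArray(out, new String[] {"a", "b"});
        }
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray()))) {
            Object[] back = readArray(in);
            System.out.println(back.getClass().getComponentType()); // class java.lang.String
        }
    }
}
```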

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5522) hbase 0.92 test artifacts are missing from Maven central

2012-03-05 Thread Roman Shaposhnik (Created) (JIRA)
hbase 0.92 test artifacts are missing from Maven central


 Key: HBASE-5522
 URL: https://issues.apache.org/jira/browse/HBASE-5522
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.92.0
Reporter: Roman Shaposhnik


Could someone with enough karma, please, publish the test artifacts for 0.92.0?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-4890) fix possible NPE in HConnectionManager

2012-03-05 Thread stack (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-4890:
-

Attachment: splits.txt

J-D can reproduce it using this attached file and this command in the shell:

{code}
create 't1', 'f1', {SPLITS_FILE => 'splits.txt'}
{code}

 fix possible NPE in HConnectionManager
 --

 Key: HBASE-4890
 URL: https://issues.apache.org/jira/browse/HBASE-4890
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0
Reporter: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.92.1

 Attachments: splits.txt


 I was running YCSB against a 0.92 branch and encountered this error message:
 {code}
 11/11/29 08:47:16 WARN client.HConnectionManager$HConnectionImplementation: 
 Failed all from 
 region=usertable,user3917479014967760871,1322555655231.f78d161e5724495a9723bcd972f97f41.,
  hostname=c0316.hal.cloudera.com, port=57020
 java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
 java.lang.NullPointerException
 at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
 at java.util.concurrent.FutureTask.get(FutureTask.java:83)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1501)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1353)
 at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:898)
 at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:775)
 at org.apache.hadoop.hbase.client.HTable.put(HTable.java:750)
 at com.yahoo.ycsb.db.HBaseClient.update(Unknown Source)
 at com.yahoo.ycsb.DBWrapper.update(Unknown Source)
 at com.yahoo.ycsb.workloads.CoreWorkload.doTransactionUpdate(Unknown 
 Source)
 at com.yahoo.ycsb.workloads.CoreWorkload.doTransaction(Unknown Source)
 at com.yahoo.ycsb.ClientThread.run(Unknown Source)
 Caused by: java.lang.RuntimeException: java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithoutRetries(HConnectionManager.java:1315)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1327)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1325)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
 Caused by: java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:158)
 at $Proxy4.multi(Unknown Source)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1330)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1328)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithoutRetries(HConnectionManager.java:1309)
 ... 7 more
 {code}
 It looks like the NPE is caused by server being null in the MultiResponse 
 call() method.
 {code}
  public MultiResponse call() throws IOException {
  return getRegionServerWithoutRetries(
  new ServerCallable<MultiResponse>(connection, tableName, null) {
public MultiResponse call() throws IOException {
  return server.multi(multi);
}
@Override
public void connect(boolean reload) throws IOException {
  server =
connection.getHRegionConnection(loc.getHostname(), 
 loc.getPort());
}
  }
  );
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5523) Fix Delete Timerange logic for KEEP_DELETED_CELLS

2012-03-05 Thread Lars Hofhansl (Created) (JIRA)
Fix Delete Timerange logic for KEEP_DELETED_CELLS
-

 Key: HBASE-5523
 URL: https://issues.apache.org/jira/browse/HBASE-5523
 Project: HBase
  Issue Type: Sub-task
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.94.0, 0.96.0


A Delete at time T marks a Put at time T as deleted.
In the parent issue I invented special logic that inserts a virtual millisecond 
into the tr if the encountered KV is a delete marker.
This was so that there would be a way to specify a timerange that allows seeing 
the put but not the delete:
{code}
if (kv.isDelete()) {
  if (!keepDeletedCells) {
// first ignore delete markers if the scanner can do so, and the
// range does not include the marker
boolean includeDeleteMarker = seePastDeleteMarkers ?
// +1, to allow a range between a delete and put of same TS
tr.withinTimeRange(timestamp+1) :
tr.withinOrAfterTimeRange(timestamp);
{code}

Discussed this today with a coworker and he convinced me that this is very 
confusing and also not needed.
When we have a Delete and a Put at the same time T, there *is* no timerange that 
can include the Put but not the Delete.

So I will change the code to this (and fix the tests):
{code}
if (kv.isDelete()) {
  if (!keepDeletedCells) {
// first ignore delete markers if the scanner can do so, and the
// range does not include the marker
boolean includeDeleteMarker = seePastDeleteMarkers ?
tr.withinTimeRange(timestamp) :
tr.withinOrAfterTimeRange(timestamp);
{code}

It's easier to understand, and does not lead to strange scenarios when the TS 
is used as a controlled counter.

Needs to be done before 0.94 goes out.
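The argument above can be checked with a toy model of the timerange test, assuming TimeRange's half-open [min, max) semantics (this is a sketch, not HBase's actual TimeRange class):

```java
// Models the old "+1 virtual millisecond" check against the new one.
// Old logic checked delete markers at timestamp+1, so the range [T, T+1)
// could see a Put at T while hiding the Delete at T. After the change both
// are checked at T, so no range can separate them.
public class TimeRangeDemo {
    // stand-in for TimeRange.withinTimeRange(ts) with range [min, max)
    static boolean within(long min, long max, long ts) {
        return ts >= min && ts < max;
    }

    public static void main(String[] args) {
        long t = 100;                                     // Put and Delete both at timestamp T
        boolean seesPut       = within(t, t + 1, t);      // true: put visible in [T, T+1)
        boolean oldSeesDelete = within(t, t + 1, t + 1);  // false: old logic hid the delete
        boolean newSeesDelete = within(t, t + 1, t);      // true: new logic sees it too
        System.out.println(seesPut && !oldSeesDelete);    // old behavior: put without delete
        System.out.println(seesPut == newSeesDelete);     // new behavior: never separated
    }
}
```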

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5358) HBaseObjectWritable should be able to serialize/deserialize generic arrays

2012-03-05 Thread stack (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13222701#comment-13222701
 ] 

stack commented on HBASE-5358:
--

This will do.  +1

 HBaseObjectWritable should be able to serialize/deserialize generic arrays
 --

 Key: HBASE-5358
 URL: https://issues.apache.org/jira/browse/HBASE-5358
 Project: HBase
  Issue Type: Improvement
  Components: coprocessors, io
Affects Versions: 0.94.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.94.0

 Attachments: 5358-92.txt, HBASE-5358_v3.patch


 HBaseObjectWritable can encode Writable[]'s, but cannot encode A[] where 
 A extends Writable. This becomes an issue, for example, when adding a 
 coprocessor method which takes A[] (see HBASE-5352). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5358) HBaseObjectWritable should be able to serialize/deserialize generic arrays

2012-03-05 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13222713#comment-13222713
 ] 

Zhihong Yu commented on HBASE-5358:
---

Integrated to 0.92.

Thanks for the suggestion, Enis.

Thanks for the review, Stack.

 HBaseObjectWritable should be able to serialize/deserialize generic arrays
 --

 Key: HBASE-5358
 URL: https://issues.apache.org/jira/browse/HBASE-5358
 Project: HBase
  Issue Type: Improvement
  Components: coprocessors, io
Affects Versions: 0.94.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.94.0

 Attachments: 5358-92.txt, HBASE-5358_v3.patch


 HBaseObjectWritable can encode Writable[]'s, but cannot encode A[] where 
 A extends Writable. This becomes an issue, for example, when adding a 
 coprocessor method which takes A[] (see HBASE-5352). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5515) Add a processRow API that supports atomic multiple reads and writes on a row

2012-03-05 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HBASE-5515:
---

Attachment: HBASE-5515.D2067.9.patch

sc updated the revision HBASE-5515 [jira] Add a processRow API that supports 
atomic multiple reads and writes on a row.
Reviewers: tedyu, dhruba, JIRA

  Addressed Ted's review comments.
  Also 1. Remove unnecessary change in HRegionServer and HRegionInterface
   2. Make RowProcessor's return type generic

REVISION DETAIL
  https://reviews.facebook.net/D2067

AFFECTED FILES
  src/main/java/org/apache/hadoop/hbase/coprocessor/ProcessRowEndpoint.java
  src/main/java/org/apache/hadoop/hbase/coprocessor/ProcessRowProtocol.java
  src/main/java/org/apache/hadoop/hbase/coprocessor/RowProcessor.java
  src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
  src/test/java/org/apache/hadoop/hbase/coprocessor/TestProcessRowEndpoint.java


 Add a processRow API that supports atomic multiple reads and writes on a row
 

 Key: HBASE-5515
 URL: https://issues.apache.org/jira/browse/HBASE-5515
 Project: HBase
  Issue Type: New Feature
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HBASE-5515.D2067.1.patch, HBASE-5515.D2067.2.patch, 
 HBASE-5515.D2067.3.patch, HBASE-5515.D2067.4.patch, HBASE-5515.D2067.5.patch, 
 HBASE-5515.D2067.6.patch, HBASE-5515.D2067.7.patch, HBASE-5515.D2067.8.patch, 
 HBASE-5515.D2067.9.patch


 We have modified HRegion.java internally to do some atomic row processing. It 
 will be nice to have a pluggable API for this.
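A minimal sketch of what such a pluggable API could look like, with a map standing in for the row and a lock standing in for HBase's row lock (the interface and names here are illustrative; the real RowProcessor signature is in the attached patches):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of a pluggable atomic row-processing API: the processor's
// read-modify-write runs entirely under the row lock.
public class AtomicRowDemo {
    interface RowProcessor<T> {
        T process(Map<String, Long> row);   // invoked with the row lock held
    }

    private final Map<String, Long> row = new HashMap<>();
    private final ReentrantLock rowLock = new ReentrantLock();

    <T> T processRow(RowProcessor<T> proc) {
        rowLock.lock();
        try {
            return proc.process(row);       // reads and writes cannot interleave
        } finally {
            rowLock.unlock();
        }
    }

    public static void main(String[] args) {
        AtomicRowDemo region = new AtomicRowDemo();
        region.processRow(r -> r.put("counter", 41L));  // seed the row
        // read-modify-write in one atomic step
        long result = region.processRow(r -> {
            long v = r.get("counter");
            r.put("counter", v + 1);
            return v + 1;
        });
        System.out.println(result); // 42
    }
}
```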

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5371) Introduce AccessControllerProtocol.checkPermissions(Permission[] permissons) API

2012-03-05 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13222717#comment-13222717
 ] 

Zhihong Yu commented on HBASE-5371:
---

Integrated to 0.92 after HBASE-5358 went in.

 Introduce AccessControllerProtocol.checkPermissions(Permission[] permissons) 
 API
 

 Key: HBASE-5371
 URL: https://issues.apache.org/jira/browse/HBASE-5371
 Project: HBase
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.92.1, 0.94.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.94.0

 Attachments: HBASE-5371-addendum_v1.patch, HBASE-5371_v2.patch, 
 HBASE-5371_v3-noprefix.patch, HBASE-5371_v3.patch


 We need to introduce something like 
 AccessControllerProtocol.checkPermissions(Permission[] permissions) API, so 
 that clients can check access rights before carrying out the operations. We 
 need this kind of operation for HCATALOG-245, which introduces authorization 
 providers for hbase over hcat. We cannot use getUserPermissions() since it 
 requires ADMIN permissions on the global/table level.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5524) Add a couple of more filters to our rat exclusion set

2012-03-05 Thread stack (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-5524:
-

Attachment: rat.txt

Exclude **/test/** and **/*.orig

 Add a couple of more filters to our rat exclusion set
 -

 Key: HBASE-5524
 URL: https://issues.apache.org/jira/browse/HBASE-5524
 Project: HBase
  Issue Type: Bug
Reporter: stack
 Fix For: 0.92.1, 0.94.0

 Attachments: rat.txt


 Build up on jenkins is failing because I just enabled the rat/license check 
 as part of our build.  We're failing because CP is writing test data into 
 top-level at ./test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5524) Add a couple of more filters to our rat exclusion set

2012-03-05 Thread stack (Created) (JIRA)
Add a couple of more filters to our rat exclusion set
-

 Key: HBASE-5524
 URL: https://issues.apache.org/jira/browse/HBASE-5524
 Project: HBase
  Issue Type: Bug
Reporter: stack
 Fix For: 0.92.1, 0.94.0
 Attachments: rat.txt

Build up on jenkins is failing because I just enabled the rat/license check as 
part of our build.  We're failing because CP is writing test data into 
top-level at ./test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5515) Add a processRow API that supports atomic multiple reads and writes on a row

2012-03-05 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HBASE-5515:
---

Attachment: HBASE-5515.D2067.10.patch

sc updated the revision HBASE-5515 [jira] Add a processRow API that supports 
atomic multiple reads and writes on a row.
Reviewers: tedyu, dhruba, JIRA

  Fixed a minor bug

REVISION DETAIL
  https://reviews.facebook.net/D2067

AFFECTED FILES
  src/main/java/org/apache/hadoop/hbase/coprocessor/ProcessRowEndpoint.java
  src/main/java/org/apache/hadoop/hbase/coprocessor/ProcessRowProtocol.java
  src/main/java/org/apache/hadoop/hbase/coprocessor/RowProcessor.java
  src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
  src/test/java/org/apache/hadoop/hbase/coprocessor/TestProcessRowEndpoint.java


 Add a processRow API that supports atomic multiple reads and writes on a row
 

 Key: HBASE-5515
 URL: https://issues.apache.org/jira/browse/HBASE-5515
 Project: HBase
  Issue Type: New Feature
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HBASE-5515.D2067.1.patch, HBASE-5515.D2067.10.patch, 
 HBASE-5515.D2067.2.patch, HBASE-5515.D2067.3.patch, HBASE-5515.D2067.4.patch, 
 HBASE-5515.D2067.5.patch, HBASE-5515.D2067.6.patch, HBASE-5515.D2067.7.patch, 
 HBASE-5515.D2067.8.patch, HBASE-5515.D2067.9.patch


 We have modified HRegion.java internally to do some atomic row processing. It 
 will be nice to have a pluggable API for this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5515) Add a processRow API that supports atomic multiple reads and writes on a row

2012-03-05 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13222741#comment-13222741
 ] 

Hadoop QA commented on HBASE-5515:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12517146/HBASE-5515.D2067.9.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated -128 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 154 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.io.TestHeapSize

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1101//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1101//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1101//console

This message is automatically generated.

 Add a processRow API that supports atomic multiple reads and writes on a row
 

 Key: HBASE-5515
 URL: https://issues.apache.org/jira/browse/HBASE-5515
 Project: HBase
  Issue Type: New Feature
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HBASE-5515.D2067.1.patch, HBASE-5515.D2067.10.patch, 
 HBASE-5515.D2067.2.patch, HBASE-5515.D2067.3.patch, HBASE-5515.D2067.4.patch, 
 HBASE-5515.D2067.5.patch, HBASE-5515.D2067.6.patch, HBASE-5515.D2067.7.patch, 
 HBASE-5515.D2067.8.patch, HBASE-5515.D2067.9.patch


 We have modified HRegion.java internally to do some atomic row processing. It 
 will be nice to have a pluggable API for this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HBASE-5524) Add a couple of more filters to our rat exclusion set

2012-03-05 Thread stack (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-5524.
--

Resolution: Fixed
  Assignee: stack

Committed to 0.92, 0.94, and trunk

 Add a couple of more filters to our rat exclusion set
 -

 Key: HBASE-5524
 URL: https://issues.apache.org/jira/browse/HBASE-5524
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.92.1, 0.94.0

 Attachments: rat.txt


 Build up on jenkins is failing because I just enabled the rat/license check 
 as part of our build.  We're failing because CP is writing test data into 
 top-level at ./test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5515) Add a processRow API that supports atomic multiple reads and writes on a row

2012-03-05 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13222749#comment-13222749
 ] 

Phabricator commented on HBASE-5515:


tedyu has commented on the revision HBASE-5515 [jira] Add a processRow API 
that supports atomic multiple reads and writes on a row.

  Good progress.

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/coprocessor/RowProcessor.java:43 Please 
explain type parameter T in javadoc.
  src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java:4379 
mutations may have many elements.
  We should control the length of this log.
  src/main/java/org/apache/hadoop/hbase/coprocessor/RowProcessor.java:72 This 
method is used for testing.
  I suggest making it package private.

  Also, walEdits is empty upon entrance to this method.
  Do we need to expose it? It is not used by the friends-of-friends test.

REVISION DETAIL
  https://reviews.facebook.net/D2067


 Add a processRow API that supports atomic multiple reads and writes on a row
 

 Key: HBASE-5515
 URL: https://issues.apache.org/jira/browse/HBASE-5515
 Project: HBase
  Issue Type: New Feature
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HBASE-5515.D2067.1.patch, HBASE-5515.D2067.10.patch, 
 HBASE-5515.D2067.2.patch, HBASE-5515.D2067.3.patch, HBASE-5515.D2067.4.patch, 
 HBASE-5515.D2067.5.patch, HBASE-5515.D2067.6.patch, HBASE-5515.D2067.7.patch, 
 HBASE-5515.D2067.8.patch, HBASE-5515.D2067.9.patch


 We have modified HRegion.java internally to do some atomic row processing. It 
 will be nice to have a pluggable API for this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4608) HLog Compression

2012-03-05 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13222763#comment-13222763
 ] 

Zhihong Yu commented on HBASE-4608:
---

I got permission from Pi to complete this feature since he is busy with course 
work.

I created new review request:
https://reviews.apache.org/r/4185/

 HLog Compression
 

 Key: HBASE-4608
 URL: https://issues.apache.org/jira/browse/HBASE-4608
 Project: HBase
  Issue Type: New Feature
Reporter: Li Pi
Assignee: Li Pi
 Fix For: 0.94.0

 Attachments: 4608v1.txt, 4608v13.txt, 4608v13.txt, 4608v14.txt, 
 4608v15.txt, 4608v16.txt, 4608v5.txt, 4608v6.txt, 4608v7.txt, 4608v8fixed.txt


 The current bottleneck to HBase write speed is replicating the WAL appends 
 across different datanodes. We can speed up this process by compressing the 
 HLog. Current plan involves using a dictionary to compress table name, region 
 id, cf name, and possibly other bits of repeated data. Also, HLog format may 
 be changed in other ways to produce a smaller HLog.
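The dictionary idea can be sketched in a few lines: the first occurrence of a repeated field (table name, region id, cf name) is written literally and entered into a dictionary, and later occurrences are replaced by a short index. The token format below is illustrative only, not the actual HLog wire format.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of dictionary compression for repeated WAL fields: literals on
// first sight, short back-references ("#<index>") afterwards. A decoder
// rebuilds the same dictionary by registering each literal it reads.
public class WalDictionary {
    private final Map<String, Integer> toIndex = new HashMap<>();
    private final List<String> entries = new ArrayList<>();

    /** Returns the literal on first sight, or a short index token afterwards. */
    String encode(String field) {
        Integer idx = toIndex.get(field);
        if (idx != null) return "#" + idx;    // back-reference to earlier occurrence
        toIndex.put(field, entries.size());
        entries.add(field);
        return field;                         // literal, now in the dictionary
    }

    /** Mirrors encode(): literals are registered so later indexes resolve. */
    String decode(String token) {
        if (token.startsWith("#")) return entries.get(Integer.parseInt(token.substring(1)));
        toIndex.put(token, entries.size());
        entries.add(token);
        return token;
    }

    public static void main(String[] args) {
        WalDictionary dict = new WalDictionary();
        StringBuilder sb = new StringBuilder();
        for (String f : new String[] {"usertable", "cf1", "usertable", "cf1"})
            sb.append(dict.encode(f)).append(' ');
        System.out.println(sb.toString().trim()); // usertable cf1 #0 #1
    }
}
```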

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5515) Add a processRow API that supports atomic multiple reads and writes on a row

2012-03-05 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13222772#comment-13222772
 ] 

Phabricator commented on HBASE-5515:


sc has commented on the revision HBASE-5515 [jira] Add a processRow API that 
supports atomic multiple reads and writes on a row.

  @lhofhansl: That's very interesting. HBASE-5529 is more general in the sense 
that it processes multiple rows, while this one processes only one row but 
allows a more general operation (read-modify-write). Maybe we can have a 
MultiRowProcessEndpoint in the future :)

  It would be nice to have some API that allows simple control over lock/mvcc. 
That would be very powerful.

REVISION DETAIL
  https://reviews.facebook.net/D2067


 Add a processRow API that supports atomic multiple reads and writes on a row
 

 Key: HBASE-5515
 URL: https://issues.apache.org/jira/browse/HBASE-5515
 Project: HBase
  Issue Type: New Feature
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HBASE-5515.D2067.1.patch, HBASE-5515.D2067.10.patch, 
 HBASE-5515.D2067.2.patch, HBASE-5515.D2067.3.patch, HBASE-5515.D2067.4.patch, 
 HBASE-5515.D2067.5.patch, HBASE-5515.D2067.6.patch, HBASE-5515.D2067.7.patch, 
 HBASE-5515.D2067.8.patch, HBASE-5515.D2067.9.patch


 We have modified HRegion.java internally to do some atomic row processing. It 
 would be nice to have a pluggable API for this.





[jira] [Commented] (HBASE-5515) Add a processRow API that supports atomic multiple reads and writes on a row

2012-03-05 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222778#comment-13222778
 ] 

Phabricator commented on HBASE-5515:


sc has commented on the revision HBASE-5515 [jira] Add a processRow API that 
supports atomic multiple reads and writes on a row.

  Ted: Wow, that was super quick! Thanks!

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/coprocessor/RowProcessor.java:72 This 
is actually for the application to override to create its own processor. I 
should document this more clearly. I will make the change.

  I am passing the walEdit here because internally we also submit some 
metadata to walEdit (kind of like sending SQL comments to the binlog). 
Basically, passing walEdit here allows us to do more things.
  src/main/java/org/apache/hadoop/hbase/coprocessor/RowProcessor.java:43 Good 
point.
  src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java:4379 Yes. I 
will make the change.
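The walEdit discussion above can be sketched as a simplified callback: the region hands the processor both the row's cells and the pending WAL edit, so the processor can piggyback metadata records onto the same log entry. All type and method names below are stand-ins for illustration, not the real HBase 0.94 coprocessor API.

```java
import java.util.List;

/**
 * Simplified sketch of the RowProcessor idea: one callback receives the row
 * state and the pending WAL edit. Interface and class names are assumptions.
 */
interface RowProcessorSketch {
    void process(List<String> rowCells, List<String> walEdit);
}

class AuditingProcessor implements RowProcessorSketch {
    @Override
    public void process(List<String> rowCells, List<String> walEdit) {
        // Read-modify-write on the row (done under the region's row lock)...
        rowCells.add("counter=" + rowCells.size());
        // ...and append a metadata record to the same WAL entry, much like
        // embedding a comment in a binlog.
        walEdit.add("meta:processed-by=AuditingProcessor");
    }
}
```

The key design point is that the mutation and its metadata travel in one atomic WAL append, so replay sees them together.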

REVISION DETAIL
  https://reviews.facebook.net/D2067






[jira] [Commented] (HBASE-5515) Add a processRow API that supports atomic multiple reads and writes on a row

2012-03-05 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222777#comment-13222777
 ] 

Hadoop QA commented on HBASE-5515:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12517150/HBASE-5515.D2067.10.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated -128 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 154 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.io.TestHeapSize

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1102//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1102//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1102//console

This message is automatically generated.





[jira] [Updated] (HBASE-5474) [89-fb] Share the multiput thread pool for all the HTable instance

2012-03-05 Thread Liyin Tang (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liyin Tang updated HBASE-5474:
--

Summary: [89-fb] Share the multiput thread pool for all the HTable instance 
 (was: Share the multiput thread pool for all the HTable instance)

 [89-fb] Share the multiput thread pool for all the HTable instance
 --

 Key: HBASE-5474
 URL: https://issues.apache.org/jira/browse/HBASE-5474
 Project: HBase
  Issue Type: Improvement
Reporter: Liyin Tang
Assignee: Liyin Tang

 Currently, each HTable instance has its own thread pool for the multiput 
 operation. Each pool is a cached thread pool bounded by the number of region 
 servers, so the maximum number of threads is (# region servers * # HTable 
 instances). If all HTable instances shared one pool, the maximum number of 
 threads would stay the same, but the pool would be used more efficiently.
 Also, single put requests are processed on the current thread instead of in 
 the thread pool.
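The sharing described above can be sketched with plain java.util.concurrent: one bounded pool, sized by the region server count, serves every table handle instead of each handle creating its own. Class and method names here are illustrative assumptions; the real change lives inside HTable/HConnection.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Sketch of a shared multiput pool: one fixed pool for all table handles,
 * rather than a cached pool per HTable instance. Names are assumptions.
 */
class SharedMultiputPool {
    private final ExecutorService pool;
    private final AtomicInteger completed = new AtomicInteger();

    SharedMultiputPool(int regionServers) {
        // One shared, bounded pool instead of (#tables x #regionservers) threads.
        this.pool = Executors.newFixedThreadPool(regionServers);
    }

    /** Run one multiput batch on the shared pool; returns the batch count so far. */
    int runMultiput(Runnable multiput) {
        try {
            return pool.submit(() -> {
                multiput.run();
                return completed.incrementAndGet();
            }).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    void shutdown() {
        pool.shutdown();
    }
}
```

Every client created from the same connection would hold a reference to one such pool, so thread count is bounded by region servers regardless of how many table instances exist.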





[jira] [Commented] (HBASE-5524) Add a couple of more filters to our rat exclusion set

2012-03-05 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222793#comment-13222793
 ] 

Hudson commented on HBASE-5524:
---

Integrated in HBase-0.94 #13 (See 
[https://builds.apache.org/job/HBase-0.94/13/])
HBASE-5524 Add a couple of more filters to our rat exclusion set (Revision 
1297278)

 Result = SUCCESS
stack : 
Files : 
* /hbase/branches/0.94/pom.xml


 Add a couple of more filters to our rat exclusion set
 -

 Key: HBASE-5524
 URL: https://issues.apache.org/jira/browse/HBASE-5524
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.92.1, 0.94.0

 Attachments: rat.txt


 Build up on jenkins is failing because I just enabled the rat/license check 
 as part of our build.  We're failing because CP is writing test data into 
 top-level at ./test.





[jira] [Updated] (HBASE-4608) HLog Compression

2012-03-05 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-4608:
--

Attachment: 4608v17.txt

Patch v17 from https://reviews.apache.org/r/4185/

 HLog Compression
 

 Key: HBASE-4608
 URL: https://issues.apache.org/jira/browse/HBASE-4608
 Project: HBase
  Issue Type: New Feature
Reporter: Li Pi
Assignee: Li Pi
 Fix For: 0.94.0

 Attachments: 4608v1.txt, 4608v13.txt, 4608v13.txt, 4608v14.txt, 
 4608v15.txt, 4608v16.txt, 4608v17.txt, 4608v5.txt, 4608v6.txt, 4608v7.txt, 
 4608v8fixed.txt


 The current bottleneck to HBase write speed is replicating the WAL appends 
 across different datanodes. We can speed this up by compressing the HLog. The 
 current plan involves using a dictionary to compress the table name, region 
 id, cf name, and possibly other bits of repeated data. The HLog format may 
 also be changed in other ways to produce a smaller HLog.





[jira] [Commented] (HBASE-5474) [89-fb] Share the multiput thread pool for all the HTable instance

2012-03-05 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222803#comment-13222803
 ] 

Phabricator commented on HBASE-5474:


mbautin has committed the revision [jira][HBASE-5474] Share the multiput 
thread pool for all the HTable instance.

REVISION DETAIL
  https://reviews.facebook.net/D2001

COMMIT
  https://reviews.facebook.net/rHBASEEIGHTNINEFBBRANCH1297295






[jira] [Commented] (HBASE-5474) [89-fb] Share the multiput thread pool for all the HTable instance

2012-03-05 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222805#comment-13222805
 ] 

Phabricator commented on HBASE-5474:


mbautin has committed the revision [jira][HBASE-5474] Share the multiput 
thread pool for all the HTable instance.

REVISION DETAIL
  https://reviews.facebook.net/D2001

COMMIT
  https://reviews.facebook.net/rHBASEEIGHTNINEFBBRANCH1297295






[jira] [Updated] (HBASE-5515) Add a processRow API that supports atomic multiple reads and writes on a row

2012-03-05 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HBASE-5515:
---

Attachment: HBASE-5515.D2067.11.patch

sc updated the revision HBASE-5515 [jira] Add a processRow API that supports 
atomic multiple reads and writes on a row.
Reviewers: tedyu, dhruba, JIRA

  Addressed Ted's last review comments

REVISION DETAIL
  https://reviews.facebook.net/D2067

AFFECTED FILES
  src/main/java/org/apache/hadoop/hbase/coprocessor/ProcessRowEndpoint.java
  src/main/java/org/apache/hadoop/hbase/coprocessor/ProcessRowProtocol.java
  src/main/java/org/apache/hadoop/hbase/coprocessor/RowProcessor.java
  src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
  src/test/java/org/apache/hadoop/hbase/coprocessor/TestProcessRowEndpoint.java


 Add a processRow API that supports atomic multiple reads and writes on a row
 

 Key: HBASE-5515
 URL: https://issues.apache.org/jira/browse/HBASE-5515
 Project: HBase
  Issue Type: New Feature
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HBASE-5515.D2067.1.patch, HBASE-5515.D2067.10.patch, 
 HBASE-5515.D2067.11.patch, HBASE-5515.D2067.2.patch, 
 HBASE-5515.D2067.3.patch, HBASE-5515.D2067.4.patch, HBASE-5515.D2067.5.patch, 
 HBASE-5515.D2067.6.patch, HBASE-5515.D2067.7.patch, HBASE-5515.D2067.8.patch, 
 HBASE-5515.D2067.9.patch


 We have modified HRegion.java internally to do some atomic row processing. It 
 would be nice to have a pluggable API for this.





[jira] [Commented] (HBASE-5474) [89-fb] Share the multiput thread pool for all the HTable instance

2012-03-05 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222806#comment-13222806
 ] 

Phabricator commented on HBASE-5474:


mbautin has committed the revision [jira][HBASE-5474] Share the multiput 
thread pool for all the HTable instance.

REVISION DETAIL
  https://reviews.facebook.net/D2001

COMMIT
  https://reviews.facebook.net/rHBASEEIGHTNINEFBBRANCH1297295






[jira] [Commented] (HBASE-5474) [89-fb] Share the multiput thread pool for all the HTable instance

2012-03-05 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222807#comment-13222807
 ] 

Phabricator commented on HBASE-5474:


mbautin has committed the revision [jira][HBASE-5474] Share the multiput 
thread pool for all the HTable instance.

REVISION DETAIL
  https://reviews.facebook.net/D2001

COMMIT
  https://reviews.facebook.net/rHBASEEIGHTNINEFBBRANCH1297295






[jira] [Commented] (HBASE-5515) Add a processRow API that supports atomic multiple reads and writes on a row

2012-03-05 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222815#comment-13222815
 ] 

Phabricator commented on HBASE-5515:


sc has commented on the revision HBASE-5515 [jira] Add a processRow API that 
supports atomic multiple reads and writes on a row.

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/coprocessor/RowProcessor.java:72 By 
internally, I mean for our internal use case.

REVISION DETAIL
  https://reviews.facebook.net/D2067






[jira] [Commented] (HBASE-5515) Add a processRow API that supports atomic multiple reads and writes on a row

2012-03-05 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222827#comment-13222827
 ] 

Hadoop QA commented on HBASE-5515:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12517161/HBASE-5515.D2067.11.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated -128 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 155 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.io.TestHeapSize

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1104//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1104//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1104//console

This message is automatically generated.





[jira] [Commented] (HBASE-4608) HLog Compression

2012-03-05 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222830#comment-13222830
 ] 

Hadoop QA commented on HBASE-4608:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12517160/4608v17.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 9 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated -127 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 155 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.regionserver.wal.TestWALReplayCompressed
  org.apache.hadoop.hbase.mapreduce.TestImportTsv
  org.apache.hadoop.hbase.mapred.TestTableMapReduce
  org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1103//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1103//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1103//console

This message is automatically generated.





[jira] [Updated] (HBASE-5515) Add a processRow API that supports atomic multiple reads and writes on a row

2012-03-05 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HBASE-5515:
---

Attachment: HBASE-5515.D2067.12.patch

sc updated the revision HBASE-5515 [jira] Add a processRow API that supports 
atomic multiple reads and writes on a row.
Reviewers: tedyu, dhruba, JIRA

  Inject some meta keyvalue in the unit test to demonstrate the use of walEdit

REVISION DETAIL
  https://reviews.facebook.net/D2067

AFFECTED FILES
  src/main/java/org/apache/hadoop/hbase/coprocessor/ProcessRowEndpoint.java
  src/main/java/org/apache/hadoop/hbase/coprocessor/ProcessRowProtocol.java
  src/main/java/org/apache/hadoop/hbase/coprocessor/RowProcessor.java
  src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
  src/test/java/org/apache/hadoop/hbase/coprocessor/TestProcessRowEndpoint.java


 Add a processRow API that supports atomic multiple reads and writes on a row
 

 Key: HBASE-5515
 URL: https://issues.apache.org/jira/browse/HBASE-5515
 Project: HBase
  Issue Type: New Feature
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HBASE-5515.D2067.1.patch, HBASE-5515.D2067.10.patch, 
 HBASE-5515.D2067.11.patch, HBASE-5515.D2067.12.patch, 
 HBASE-5515.D2067.2.patch, HBASE-5515.D2067.3.patch, HBASE-5515.D2067.4.patch, 
 HBASE-5515.D2067.5.patch, HBASE-5515.D2067.6.patch, HBASE-5515.D2067.7.patch, 
 HBASE-5515.D2067.8.patch, HBASE-5515.D2067.9.patch


 We have modified HRegion.java internally to do some atomic row processing. It 
 would be nice to have a pluggable API for this.





[jira] [Created] (HBASE-5525) Truncate and preserve region boundaries option

2012-03-05 Thread Jean-Daniel Cryans (Created) (JIRA)
Truncate and preserve region boundaries option
--

 Key: HBASE-5525
 URL: https://issues.apache.org/jira/browse/HBASE-5525
 Project: HBase
  Issue Type: New Feature
Reporter: Jean-Daniel Cryans
 Fix For: 0.96.0


A tool that would be useful for testing (and maybe in prod too) would be a 
truncate option to keep the current region boundaries. Right now what you have 
to do is completely kill the table and recreate it with the correct regions.
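One way to sketch the boundary-preserving part: capture each region's start key before dropping the table, then recreate the table with those keys as splits. The helper below shows only the pure key-derivation step (the disable/delete/create admin calls are omitted); the class and method names are hypothetical, not an existing HBase tool.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch for a boundary-preserving truncate. Hypothetical helper, not a
 * real HBase API: derives recreate-time split keys from region start keys.
 */
class TruncatePreserve {
    /**
     * The split keys are every region start key except the first region's,
     * whose start key is the empty byte array (here modeled as "").
     */
    static List<String> splitKeys(List<String> regionStartKeys) {
        List<String> splits = new ArrayList<>();
        for (String start : regionStartKeys) {
            if (!start.isEmpty()) {
                splits.add(start);
            }
        }
        return splits;
    }
}
```

With the split keys in hand, a tool could disable and delete the table, then recreate it passing those keys so the new table starts with the same region boundaries.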





[jira] [Updated] (HBASE-5515) Add a processRow API that supports atomic multiple reads and writes on a row

2012-03-05 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HBASE-5515:
---

Attachment: HBASE-5515.D2067.13.patch

sc updated the revision HBASE-5515 [jira] Add a processRow API that supports 
atomic multiple reads and writes on a row.
Reviewers: tedyu, dhruba, JIRA

  Fixed TestHeapSize

REVISION DETAIL
  https://reviews.facebook.net/D2067

AFFECTED FILES
  src/main/java/org/apache/hadoop/hbase/coprocessor/ProcessRowEndpoint.java
  src/main/java/org/apache/hadoop/hbase/coprocessor/ProcessRowProtocol.java
  src/main/java/org/apache/hadoop/hbase/coprocessor/RowProcessor.java
  src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
  src/test/java/org/apache/hadoop/hbase/coprocessor/TestProcessRowEndpoint.java


 Add a processRow API that supports atomic multiple reads and writes on a row
 

 Key: HBASE-5515
 URL: https://issues.apache.org/jira/browse/HBASE-5515
 Project: HBase
  Issue Type: New Feature
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HBASE-5515.D2067.1.patch, HBASE-5515.D2067.10.patch, 
 HBASE-5515.D2067.11.patch, HBASE-5515.D2067.12.patch, 
 HBASE-5515.D2067.13.patch, HBASE-5515.D2067.2.patch, 
 HBASE-5515.D2067.3.patch, HBASE-5515.D2067.4.patch, HBASE-5515.D2067.5.patch, 
 HBASE-5515.D2067.6.patch, HBASE-5515.D2067.7.patch, HBASE-5515.D2067.8.patch, 
 HBASE-5515.D2067.9.patch


 We have modified HRegion.java internally to do some atomic row processing. It 
 would be nice to have a pluggable API for this.





[jira] [Updated] (HBASE-5523) Fix Delete Timerange logic for KEEP_DELETED_CELLS

2012-03-05 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5523:
-

Status: Patch Available  (was: Open)

 Fix Delete Timerange logic for KEEP_DELETED_CELLS
 -

 Key: HBASE-5523
 URL: https://issues.apache.org/jira/browse/HBASE-5523
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.94.0, 0.96.0

 Attachments: 5523.txt


 A Delete at time T marks a Put at time T as deleted.
 In the parent issue I invented special logic that inserts a virtual 
 millisecond into the tr if the encountered KV is a delete marker.
 This was so that there is a way to specify a timerange that allows seeing 
 the put but not the delete:
 {code}
 if (kv.isDelete()) {
   if (!keepDeletedCells) {
     // first ignore delete markers if the scanner can do so, and the
     // range does not include the marker
     boolean includeDeleteMarker = seePastDeleteMarkers ?
         // +1, to allow a range between a delete and put of same TS
         tr.withinTimeRange(timestamp + 1) :
         tr.withinOrAfterTimeRange(timestamp);
 {code}
 Discussed this today with a coworker and he convinced me that this is very 
 confusing and also not needed.
 When we have a Delete and a Put at the same time T, there *is* no timerange 
 that can include the Put but not the Delete.
 So I will change the code to this (and fix the tests):
 {code}
 if (kv.isDelete()) {
   if (!keepDeletedCells) {
     // first ignore delete markers if the scanner can do so, and the
     // range does not include the marker
     boolean includeDeleteMarker = seePastDeleteMarkers ?
         tr.withinTimeRange(timestamp) :
         tr.withinOrAfterTimeRange(timestamp);
 {code}
 It's easier to understand, and does not lead to strange scenarios when the TS 
 is used as a controlled counter.
 Needs to be done before 0.94 goes out.
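The effect of dropping the "+1" can be shown with a minimal stand-in for TimeRange (assumed here, as in HBase, to be half-open [min, max)): a range ending just past T sees the Put at T, and the old check at timestamp+1 excluded a Delete at the same T, while the new check at timestamp includes it.

```java
/**
 * Minimal stand-in for HBase's TimeRange, assuming half-open [min, max)
 * semantics, to illustrate the timestamp vs. timestamp+1 check above.
 */
class TimeRangeSketch {
    private final long min;
    private final long max;

    TimeRangeSketch(long min, long max) {
        this.min = min;
        this.max = max;
    }

    /** True if min <= ts < max, mirroring withinTimeRange. */
    boolean withinTimeRange(long ts) {
        return min <= ts && ts < max;
    }
}
```

For a Put and a Delete both at T = 50 and a scan range [0, 51): the Put at 50 is in range; the old delete-marker check consulted withinTimeRange(51) and excluded the marker, whereas the new check consults withinTimeRange(50) and includes it, matching the observation that no range can separate the two.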

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5523) Fix Delete Timerange logic for KEEP_DELETED_CELLS

2012-03-05 Thread Lars Hofhansl (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5523:
-

Attachment: 5523.txt

Here's a patch.


 Fix Delete Timerange logic for KEEP_DELETED_CELLS
 -

 Key: HBASE-5523
 URL: https://issues.apache.org/jira/browse/HBASE-5523
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.94.0, 0.96.0

 Attachments: 5523.txt


 A Delete at time T marks a Put at time T as deleted.
 In parent I invented special logic that inserts a virtual millisecond into 
 the tr if the encountered KV is a delete marker.
 This was so that there is a way to specify a timerange that would allow one 
 to see the Put but not the Delete:
 {code}
 if (kv.isDelete()) {
   if (!keepDeletedCells) {
     // first ignore delete markers if the scanner can do so, and the
     // range does not include the marker
     boolean includeDeleteMarker = seePastDeleteMarkers ?
         // +1, to allow a range between a delete and put of same TS
         tr.withinTimeRange(timestamp + 1) :
         tr.withinOrAfterTimeRange(timestamp);
 {code}
 Discussed this today with a coworker and he convinced me that this is very 
 confusing and also not needed.
 When we have a Delete and Put at the same time T, there *is* no timerange 
 that can include the Put but not the Delete.
 So I will change the code to this (and fix the tests):
 {code}
 if (kv.isDelete()) {
   if (!keepDeletedCells) {
     // first ignore delete markers if the scanner can do so, and the
     // range does not include the marker
     boolean includeDeleteMarker = seePastDeleteMarkers ?
         tr.withinTimeRange(timestamp) :
         tr.withinOrAfterTimeRange(timestamp);
 {code}
 It's easier to understand, and does not lead to strange scenarios when the TS 
 is used as a controlled counter.
 Needs to be done before 0.94 goes out.





[jira] [Commented] (HBASE-5515) Add a processRow API that supports atomic multiple reads and writes on a row

2012-03-05 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222843#comment-13222843
 ] 

Phabricator commented on HBASE-5515:


lhofhansl has commented on the revision HBASE-5515 [jira] Add a processRow API 
that supports atomic multiple reads and writes on a row.

  @sc:
  It will be nice to have some API that allows simple control over 
lock/mvcc. That will be very powerful.

  But you do not want to do it for this issue? ;-)

  My point is that a RowProcessor implementation class is what coprocessor 
endpoints were meant for (server side custom code) - with the caveat that 
coprocessors currently do not have enough access to regionserver details.


REVISION DETAIL
  https://reviews.facebook.net/D2067


 Add a processRow API that supports atomic multiple reads and writes on a row
 

 Key: HBASE-5515
 URL: https://issues.apache.org/jira/browse/HBASE-5515
 Project: HBase
  Issue Type: New Feature
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HBASE-5515.D2067.1.patch, HBASE-5515.D2067.10.patch, 
 HBASE-5515.D2067.11.patch, HBASE-5515.D2067.12.patch, 
 HBASE-5515.D2067.13.patch, HBASE-5515.D2067.2.patch, 
 HBASE-5515.D2067.3.patch, HBASE-5515.D2067.4.patch, HBASE-5515.D2067.5.patch, 
 HBASE-5515.D2067.6.patch, HBASE-5515.D2067.7.patch, HBASE-5515.D2067.8.patch, 
 HBASE-5515.D2067.9.patch


 We have modified HRegion.java internally to do some atomic row processing. It 
 would be nice to have a pluggable API for this.
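As a rough illustration of what such a pluggable hook could look like, here is a minimal sketch. The interface name, signatures, and the lock model are assumptions for illustration only, not the patch's actual API:

```java
import java.util.ArrayList;
import java.util.List;

public class RowProcessorDemo {
    // Hypothetical pluggable contract; names are illustrative, not HBase's API.
    interface RowProcessor<T> {
        byte[] getRow();                   // row to lock and process atomically
        T process(List<byte[]> rowCells);  // read-modify-write under the row lock
    }

    // Example processor: append one cell and return the new cell count.
    static class AppendProcessor implements RowProcessor<Integer> {
        public byte[] getRow() { return "r1".getBytes(); }
        public Integer process(List<byte[]> cells) {
            cells.add("v1".getBytes());    // the "write" half
            return cells.size();           // the "read" half, seen atomically
        }
    }

    // Stand-in for an HRegion.processRow()-style entry point:
    // take the row lock, then run the processor as one atomic unit.
    static <T> T processRow(List<byte[]> rowCells, RowProcessor<T> p) {
        synchronized (rowCells) {          // models the per-row lock
            return p.process(rowCells);
        }
    }

    public static void main(String[] args) {
        Integer n = processRow(new ArrayList<byte[]>(), new AppendProcessor());
        System.out.println(n); // prints 1
    }
}
```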





[jira] [Updated] (HBASE-5515) Add a processRow API that supports atomic multiple reads and writes on a row

2012-03-05 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HBASE-5515:
---

Attachment: HBASE-5515.D2067.14.patch

sc updated the revision HBASE-5515 [jira] Add a processRow API that supports 
atomic multiple reads and writes on a row.
Reviewers: tedyu, dhruba, JIRA

  Move completeMemstoreInsert() back to the finally block.
  Ted: I think this should still be in the finally block. Otherwise this may 
block all the reads.

REVISION DETAIL
  https://reviews.facebook.net/D2067

AFFECTED FILES
  src/main/java/org/apache/hadoop/hbase/coprocessor/ProcessRowEndpoint.java
  src/main/java/org/apache/hadoop/hbase/coprocessor/ProcessRowProtocol.java
  src/main/java/org/apache/hadoop/hbase/coprocessor/RowProcessor.java
  src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
  src/test/java/org/apache/hadoop/hbase/coprocessor/TestProcessRowEndpoint.java


 Add a processRow API that supports atomic multiple reads and writes on a row
 

 Key: HBASE-5515
 URL: https://issues.apache.org/jira/browse/HBASE-5515
 Project: HBase
  Issue Type: New Feature
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HBASE-5515.D2067.1.patch, HBASE-5515.D2067.10.patch, 
 HBASE-5515.D2067.11.patch, HBASE-5515.D2067.12.patch, 
 HBASE-5515.D2067.13.patch, HBASE-5515.D2067.14.patch, 
 HBASE-5515.D2067.2.patch, HBASE-5515.D2067.3.patch, HBASE-5515.D2067.4.patch, 
 HBASE-5515.D2067.5.patch, HBASE-5515.D2067.6.patch, HBASE-5515.D2067.7.patch, 
 HBASE-5515.D2067.8.patch, HBASE-5515.D2067.9.patch


 We have modified HRegion.java internally to do some atomic row processing. It 
 would be nice to have a pluggable API for this.





[jira] [Commented] (HBASE-5524) Add a couple of more filters to our rat exclusion set

2012-03-05 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222855#comment-13222855
 ] 

Hudson commented on HBASE-5524:
---

Integrated in HBase-0.92 #315 (See 
[https://builds.apache.org/job/HBase-0.92/315/])
HBASE-5524 Add a couple of more filters to our rat exclusion set (Revision 
1297279)

 Result = SUCCESS
stack : 
Files : 
* /hbase/branches/0.92/pom.xml


 Add a couple of more filters to our rat exclusion set
 -

 Key: HBASE-5524
 URL: https://issues.apache.org/jira/browse/HBASE-5524
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.92.1, 0.94.0

 Attachments: rat.txt


 The build up on jenkins is failing because I just enabled the rat/license 
 check as part of our build.  We're failing because CP is writing test data 
 into the top level at ./test.





[jira] [Updated] (HBASE-4608) HLog Compression

2012-03-05 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-4608:
--

Attachment: 4608v18.txt

Fix bug in checking sizeBytes in uncompressIntoArray()

 HLog Compression
 

 Key: HBASE-4608
 URL: https://issues.apache.org/jira/browse/HBASE-4608
 Project: HBase
  Issue Type: New Feature
Reporter: Li Pi
Assignee: Li Pi
 Fix For: 0.94.0

 Attachments: 4608v1.txt, 4608v13.txt, 4608v13.txt, 4608v14.txt, 
 4608v15.txt, 4608v16.txt, 4608v17.txt, 4608v18.txt, 4608v5.txt, 4608v6.txt, 
 4608v7.txt, 4608v8fixed.txt


 The current bottleneck to HBase write speed is replicating the WAL appends 
 across different datanodes. We can speed up this process by compressing the 
 HLog. Current plan involves using a dictionary to compress table name, region 
 id, cf name, and possibly other bits of repeated data. Also, HLog format may 
 be changed in other ways to produce a smaller HLog.
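The dictionary idea can be sketched in miniature: repeated strings (table name, region id, CF name) are emitted literally the first time and as short indices afterwards. This is a toy illustration of the technique only, not the actual HLog compression format:

```java
import java.util.*;

public class DictCompressDemo {
    // Emit a literal on first sight, a short index ("#n") on repeats.
    static List<String> encode(String[] entries) {
        Map<String, Integer> dict = new LinkedHashMap<>();
        List<String> out = new ArrayList<>();
        for (String e : entries) {
            Integer idx = dict.get(e);
            if (idx == null) {        // first occurrence: record and emit literal
                dict.put(e, dict.size());
                out.add(e);
            } else {                  // repeat: emit the dictionary index
                out.add("#" + idx);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // table/CF names repeat heavily across WAL entries
        String[] walFields = {"usertable", "cf1", "usertable", "cf1", "usertable"};
        System.out.println(encode(walFields)); // [usertable, cf1, #0, #1, #0]
    }
}
```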





[jira] [Commented] (HBASE-5515) Add a processRow API that supports atomic multiple reads and writes on a row

2012-03-05 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222854#comment-13222854
 ] 

Hadoop QA commented on HBASE-5515:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12517169/HBASE-5515.D2067.13.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated -128 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 155 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.io.TestHeapSize

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1106//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1106//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1106//console

This message is automatically generated.

 Add a processRow API that supports atomic multiple reads and writes on a row
 

 Key: HBASE-5515
 URL: https://issues.apache.org/jira/browse/HBASE-5515
 Project: HBase
  Issue Type: New Feature
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HBASE-5515.D2067.1.patch, HBASE-5515.D2067.10.patch, 
 HBASE-5515.D2067.11.patch, HBASE-5515.D2067.12.patch, 
 HBASE-5515.D2067.13.patch, HBASE-5515.D2067.14.patch, 
 HBASE-5515.D2067.2.patch, HBASE-5515.D2067.3.patch, HBASE-5515.D2067.4.patch, 
 HBASE-5515.D2067.5.patch, HBASE-5515.D2067.6.patch, HBASE-5515.D2067.7.patch, 
 HBASE-5515.D2067.8.patch, HBASE-5515.D2067.9.patch


 We have modified HRegion.java internally to do some atomic row processing. It 
 would be nice to have a pluggable API for this.





[jira] [Commented] (HBASE-5371) Introduce AccessControllerProtocol.checkPermissions(Permission[] permissons) API

2012-03-05 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222856#comment-13222856
 ] 

Hudson commented on HBASE-5371:
---

Integrated in HBase-0.92 #315 (See 
[https://builds.apache.org/job/HBase-0.92/315/])
HBASE-5371  Introduce 
AccessControllerProtocol.checkPermissions(Permission[] permissons)
   API (Enis) (Revision 1297268)

 Result = SUCCESS
tedyu : 
Files : 
* /hbase/branches/0.92/CHANGES.txt
* 
/hbase/branches/0.92/security/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
* 
/hbase/branches/0.92/security/src/main/java/org/apache/hadoop/hbase/security/access/AccessControllerProtocol.java
* 
/hbase/branches/0.92/security/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java


 Introduce AccessControllerProtocol.checkPermissions(Permission[] permissons) 
 API
 

 Key: HBASE-5371
 URL: https://issues.apache.org/jira/browse/HBASE-5371
 Project: HBase
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.92.1, 0.94.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.94.0

 Attachments: HBASE-5371-addendum_v1.patch, HBASE-5371_v2.patch, 
 HBASE-5371_v3-noprefix.patch, HBASE-5371_v3.patch


 We need to introduce something like 
 AccessControllerProtocol.checkPermissions(Permission[] permissions) API, so 
 that clients can check access rights before carrying out the operations. We 
 need this kind of operation for HCATALOG-245, which introduces authorization 
 providers for hbase over hcat. We cannot use getUserPermissions() since it 
 requires ADMIN permissions on the global/table level.
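A toy sketch of the requested pre-flight check, where the call succeeds only if the user holds every requested permission. The names and the ACL model here are illustrative assumptions, not the AccessControllerProtocol API:

```java
import java.util.*;

public class CheckPermissionsDemo {
    enum Action { READ, WRITE, ADMIN }

    // toy ACL: user -> actions granted (a stand-in for real grants)
    static final Map<String, Set<Action>> ACL = new HashMap<>();
    static { ACL.put("alice", EnumSet.of(Action.READ, Action.WRITE)); }

    // checkPermissions-style call: true only if every requested action is held
    static boolean checkPermissions(String user, Action... requested) {
        Set<Action> held = ACL.getOrDefault(user, EnumSet.noneOf(Action.class));
        return held.containsAll(Arrays.asList(requested));
    }

    public static void main(String[] args) {
        System.out.println(checkPermissions("alice", Action.READ));               // true
        System.out.println(checkPermissions("alice", Action.READ, Action.ADMIN)); // false
    }
}
```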





[jira] [Commented] (HBASE-5358) HBaseObjectWritable should be able to serialize/deserialize generic arrays

2012-03-05 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222857#comment-13222857
 ] 

Hudson commented on HBASE-5358:
---

Integrated in HBase-0.92 #315 (See 
[https://builds.apache.org/job/HBase-0.92/315/])
HBASE-5358  HBaseObjectWritable should be able to serialize/deserialize 
generic
   arrays (Enis) (Revision 1297267)

 Result = SUCCESS
tedyu : 
Files : 
* /hbase/branches/0.92/CHANGES.txt
* 
/hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/io/HbaseObjectWritable.java
* 
/hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/io/TestHbaseObjectWritable.java


 HBaseObjectWritable should be able to serialize/deserialize generic arrays
 --

 Key: HBASE-5358
 URL: https://issues.apache.org/jira/browse/HBASE-5358
 Project: HBase
  Issue Type: Improvement
  Components: coprocessors, io
Affects Versions: 0.94.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.94.0

 Attachments: 5358-92.txt, HBASE-5358_v3.patch


 HBaseObjectWritable can encode Writable[]'s but, but cannot encode A[] where 
 A extends Writable. This becomes an issue for example when adding a 
 coprocessor method which takes A[] (see HBASE-5352). 
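The underlying difficulty is that deserializing into A[] needs the concrete component type at runtime, which java.lang.reflect.Array can supply. A minimal illustration of that mechanism (not the HbaseObjectWritable code itself):

```java
import java.lang.reflect.Array;

public class GenericArrayDemo {
    // Build a T[] with the right runtime component type instead of Object[],
    // which is what a generic-array deserializer needs to do.
    @SuppressWarnings("unchecked")
    static <T> T[] newArray(Class<T> componentType, int length) {
        return (T[]) Array.newInstance(componentType, length);
    }

    public static void main(String[] args) {
        String[] arr = newArray(String.class, 3);
        // prints: String[] len=3
        System.out.println(arr.getClass().getSimpleName() + " len=" + arr.length);
    }
}
```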





[jira] [Commented] (HBASE-5515) Add a processRow API that supports atomic multiple reads and writes on a row

2012-03-05 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222858#comment-13222858
 ] 

Phabricator commented on HBASE-5515:


sc has commented on the revision HBASE-5515 [jira] Add a processRow API that 
supports atomic multiple reads and writes on a row.

  @lhofhansl: I would prefer that to be the future work :) Thanks for the 
insights that you provide for the patch! That really helped a lot.

REVISION DETAIL
  https://reviews.facebook.net/D2067


 Add a processRow API that supports atomic multiple reads and writes on a row
 

 Key: HBASE-5515
 URL: https://issues.apache.org/jira/browse/HBASE-5515
 Project: HBase
  Issue Type: New Feature
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HBASE-5515.D2067.1.patch, HBASE-5515.D2067.10.patch, 
 HBASE-5515.D2067.11.patch, HBASE-5515.D2067.12.patch, 
 HBASE-5515.D2067.13.patch, HBASE-5515.D2067.14.patch, 
 HBASE-5515.D2067.2.patch, HBASE-5515.D2067.3.patch, HBASE-5515.D2067.4.patch, 
 HBASE-5515.D2067.5.patch, HBASE-5515.D2067.6.patch, HBASE-5515.D2067.7.patch, 
 HBASE-5515.D2067.8.patch, HBASE-5515.D2067.9.patch


 We have modified HRegion.java internally to do some atomic row processing. It 
 would be nice to have a pluggable API for this.





[jira] [Created] (HBASE-5526) Optional file permission settings

2012-03-05 Thread Jesse Yates (Created) (JIRA)
Optional file permission settings
-

 Key: HBASE-5526
 URL: https://issues.apache.org/jira/browse/HBASE-5526
 Project: HBase
  Issue Type: New Feature
  Components: regionserver
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 0.94.0


Currently, all the files created by the HBase user are just written using 
the default file permissions granted by hdfs. However, it is oftentimes 
advantageous to allow only a subset of the world to view the actual data 
written by hbase when scanning the raw hdfs files. 

This ticket covers setting permissions for files written to hdfs that are 
storing actual user data, as opposed to _all_ files written to hdfs as many of 
them contain non-identifiable metadata.
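As a local-filesystem analogue of the idea (HBase itself would go through Hadoop's FileSystem and permission APIs, so treat this purely as an illustration of writing data files with restrictive permissions):

```java
import java.nio.file.*;
import java.nio.file.attribute.*;
import java.util.Set;

public class PermissionDemo {
    public static void main(String[] args) throws Exception {
        Path p = Files.createTempFile("hfile-demo", ".dat");
        // rw------- : only the owning user (e.g. the hbase user) can read
        // the stored data; everyone else is locked out.
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rw-------");
        Files.setPosixFilePermissions(p, perms);
        // rw------- on POSIX filesystems
        System.out.println(
            PosixFilePermissions.toString(Files.getPosixFilePermissions(p)));
        Files.delete(p);
    }
}
```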





[jira] [Commented] (HBASE-5507) ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not use ByteBuffer correctly

2012-03-05 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222865#comment-13222865
 ] 

Hudson commented on HBASE-5507:
---

Integrated in HBase-TRUNK #2672 (See 
[https://builds.apache.org/job/HBase-TRUNK/2672/])
HBASE-5507 ThriftServerRunner.HbaseHandler.getRegionInfo() and 
getTableRegions() do not use ByteBuffer correctly (Scott Chen) (Revision 
1297300)

 Result = SUCCESS
tedyu : 
Files : 
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionThriftServer.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServerCmdLine.java


 ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not 
 use ByteBuffer correctly
 -

 Key: HBASE-5507
 URL: https://issues.apache.org/jira/browse/HBASE-5507
 Project: HBase
  Issue Type: Bug
  Components: thrift
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.94.0

 Attachments: HBASE-5507.D2073.1.patch, HBASE-5507.D2073.2.patch, 
 HBASE-5507.D2073.3.patch


 We observed that, with the framed transport option, the thrift call 
 ThriftServerRunner.HbaseHandler.getRegionInfo() receives a corrupted parameter 
 (some garbage string attached to the beginning). This may be a thrift bug and 
 requires further investigation.





[jira] [Commented] (HBASE-5506) Add unit test for ThriftServerRunner.HbaseHandler.getRegionInfo()

2012-03-05 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222866#comment-13222866
 ] 

Hudson commented on HBASE-5506:
---

Integrated in HBase-TRUNK #2672 (See 
[https://builds.apache.org/job/HBase-TRUNK/2672/])
HBASE-5506 Add unit test for 
ThriftServerRunner.HbaseHandler.getRegionInfo() (Scott Chen) (Revision 1296365)

 Result = SUCCESS
tedyu : 
Files : 
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServerCmdLine.java


 Add unit test for ThriftServerRunner.HbaseHandler.getRegionInfo()
 -

 Key: HBASE-5506
 URL: https://issues.apache.org/jira/browse/HBASE-5506
 Project: HBase
  Issue Type: Test
Reporter: Scott Chen
Assignee: Scott Chen
Priority: Minor
 Fix For: 0.96.0

 Attachments: HBASE-5506.D2031.1.patch, HBASE-5506.D2031.2.patch, 
 HBASE-5506.D2031.3.patch


 We observed that, with the framed transport option, the thrift call 
 ThriftServerRunner.HbaseHandler.getRegionInfo() receives a corrupted parameter 
 (some garbage string attached to the beginning). This may be a thrift bug and 
 requires further investigation.
 Add a unit test to reproduce the problem.





[jira] [Commented] (HBASE-5486) Warn message in HTable: Stringify the byte[]

2012-03-05 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222868#comment-13222868
 ] 

Hudson commented on HBASE-5486:
---

Integrated in HBase-TRUNK #2672 (See 
[https://builds.apache.org/job/HBase-TRUNK/2672/])
HBASE-5486 Warn message in HTable: Stringify the byte[] (Revision 1296478)

 Result = SUCCESS
stack : 
Files : 
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/HConstants.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/HTable.java


 Warn message in HTable: Stringify the byte[]
 

 Key: HBASE-5486
 URL: https://issues.apache.org/jira/browse/HBASE-5486
 Project: HBase
  Issue Type: Bug
  Components: client
Affects Versions: 0.92.0
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
Priority: Trivial
  Labels: noob
 Fix For: 0.96.0

 Attachments: 5486-v2.patch, 5486.patch


 The warn message in the method getStartEndKeys() in HTable can be improved by 
 stringifying the byte array for Regions.Qualifier
 Currently, a sample message looks like:
 12/01/17 16:36:34 WARN client.HTable: Null [B@552c8fa8 cell in 
 keyvalues={test5,\xC9\xA2\x00\x00\x00\x00\x00\x00/00_0,1326642537734.dbc62b2765529a9ad2ddcf8eb58cb2dc./info:server/1326750341579/Put/vlen=28,
  
 test5,\xC9\xA2\x00\x00\x00\x00\x00\x00/00_0,1326642537734.dbc62b2765529a9ad2ddcf8eb58cb2dc./info:serverstartcode/1326750341579/Put/vlen=8}
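The unreadable `[B@552c8fa8` comes from byte[].toString(), which prints only the array's identity hash. A minimal stand-in for a toStringBinary-style renderer shows the intended fix; this is a sketch, not HBase's actual Bytes.toStringBinary:

```java
public class StringifyDemo {
    // Render printable ASCII as-is and escape everything else,
    // in the spirit of Bytes.toStringBinary (simplified stand-in).
    static String toStringBinary(byte[] b) {
        StringBuilder sb = new StringBuilder();
        for (byte x : b) {
            int v = x & 0xff;
            if (v >= 32 && v < 127) sb.append((char) v);       // printable ASCII
            else sb.append(String.format("\\x%02X", v));        // escape the rest
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] qualifier = {'i', 'n', 'f', 'o', 0x00, (byte) 0xC9};
        System.out.println(qualifier.toString().startsWith("[B@")); // true: opaque
        System.out.println(toStringBinary(qualifier));              // info\x00\xC9
    }
}
```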





[jira] [Commented] (HBASE-5524) Add a couple of more filters to our rat exclusion set

2012-03-05 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222867#comment-13222867
 ] 

Hudson commented on HBASE-5524:
---

Integrated in HBase-TRUNK #2672 (See 
[https://builds.apache.org/job/HBase-TRUNK/2672/])
HBASE-5524 Add a couple of more filters to our rat exclusion set (Revision 
1297277)

 Result = SUCCESS
stack : 
Files : 
* /hbase/trunk/pom.xml


 Add a couple of more filters to our rat exclusion set
 -

 Key: HBASE-5524
 URL: https://issues.apache.org/jira/browse/HBASE-5524
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.92.1, 0.94.0

 Attachments: rat.txt


 The build up on jenkins is failing because I just enabled the rat/license 
 check as part of our build.  We're failing because CP is writing test data 
 into the top level at ./test.





[jira] [Commented] (HBASE-5286) bin/hbase's logic of adding Hadoop jar files to the classpath is fragile when presented with split packaged Hadoop 0.23 installation

2012-03-05 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222869#comment-13222869
 ] 

Hudson commented on HBASE-5286:
---

Integrated in HBase-TRUNK #2672 (See 
[https://builds.apache.org/job/HBase-TRUNK/2672/])
HBASE-5286 bin/hbase's logic of adding Hadoop jar files to the classpath is 
fragile when presented with split packaged Hadoop 0.23 installation (Revision 
1296663)
HBASE-5286 bin/hbase's logic of adding Hadoop jar files to the classpath is 
fragile when presented with split packaged Hadoop 0.23 installation (Revision 
1296661)

 Result = SUCCESS
stack : 
Files : 
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/util/GetJavaProperty.java

stack : 
Files : 
* /hbase/trunk/bin/hbase


 bin/hbase's logic of adding Hadoop jar files to the classpath is fragile when 
 presented with split packaged Hadoop 0.23 installation
 

 Key: HBASE-5286
 URL: https://issues.apache.org/jira/browse/HBASE-5286
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.92.0
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
 Fix For: 0.92.1, 0.94.0

 Attachments: HBASE-5286.patch.txt


 Here's the bit from bin/hbase that might need TLC now that Hadoop can be 
 spotted in the wild in split-package configuration:
 {noformat}
 #If avail, add Hadoop to the CLASSPATH and to the JAVA_LIBRARY_PATH
 if [ ! -z $HADOOP_HOME ]; then
   HADOOPCPPATH=
   if [ -z $HADOOP_CONF_DIR ]; then
     HADOOPCPPATH=$(append_path ${HADOOPCPPATH} ${HADOOP_HOME}/conf)
   else
     HADOOPCPPATH=$(append_path ${HADOOPCPPATH} ${HADOOP_CONF_DIR})
   fi
   if [ `echo ${HADOOP_HOME}/hadoop-core*.jar` != ${HADOOP_HOME}/hadoop-core*.jar ] ; then
     HADOOPCPPATH=$(append_path ${HADOOPCPPATH} `ls ${HADOOP_HOME}/hadoop-core*.jar | head -1`)
   else
     HADOOPCPPATH=$(append_path ${HADOOPCPPATH} `ls ${HADOOP_HOME}/hadoop-common*.jar | head -1`)
     HADOOPCPPATH=$(append_path ${HADOOPCPPATH} `ls ${HADOOP_HOME}/hadoop-hdfs*.jar | head -1`)
     HADOOPCPPATH=$(append_path ${HADOOPCPPATH} `ls ${HADOOP_HOME}/hadoop-mapred*.jar | head -1`)
   fi
 {noformat}
 There are a couple of issues with the above code:
 0. HADOOP_HOME is now deprecated in Hadoop 0.23
 1. the list of jar files added to the class-path should be revised
 2. we need to figure out a more robust way to get the jar files that are 
 needed to the classpath (things like hadoop-mapred*.jar tend to match 
 src/test jars as well)
 Better yet, it would be useful to look into whether we can transition HBase's 
 bin/hbase onto using bin/hadoop as a launcher script instead of direct Java 
 invocations (Pig, Hive, Sqoop and Mahout already do that).
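Point 2 above, tightening the match so test artifacts are excluded, could look roughly like the name filter below. The pattern is an illustrative assumption, not the patch's actual logic:

```java
import java.util.regex.Pattern;

public class JarMatchDemo {
    // Hypothetical filter: versioned release jars only, which rejects
    // names containing non-version suffixes like "-tests" or "-sources".
    static final Pattern RELEASE_JAR =
        Pattern.compile("hadoop-(common|hdfs|mapred)[-0-9.]*\\.jar");

    public static void main(String[] args) {
        String[] candidates = {
            "hadoop-common-0.23.0.jar",       // wanted
            "hadoop-mapred-0.23.0-tests.jar", // test artifact, must be rejected
            "hadoop-hdfs-0.23.0.jar",         // wanted
        };
        for (String c : candidates) {
            System.out.println(c + " -> " + RELEASE_JAR.matcher(c).matches());
        }
    }
}
```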





[jira] [Commented] (HBASE-5511) More doc on maven release process

2012-03-05 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222875#comment-13222875
 ] 

Hudson commented on HBASE-5511:
---

Integrated in HBase-TRUNK #2672 (See 
[https://builds.apache.org/job/HBase-TRUNK/2672/])
HBASE-5511 More doc on maven release process (Revision 1296316)

 Result = SUCCESS
stack : 
Files : 
* /hbase/trunk/pom.xml
* /hbase/trunk/src/docbkx/developer.xml


 More doc on maven release process
 -

 Key: HBASE-5511
 URL: https://issues.apache.org/jira/browse/HBASE-5511
 Project: HBase
  Issue Type: Task
Reporter: stack
Assignee: stack
 Fix For: 0.92.1, 0.94.0

 Attachments: doc.txt








[jira] [Commented] (HBASE-5503) [book] adding Troubleshooting case study

2012-03-05 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222874#comment-13222874
 ] 

Hudson commented on HBASE-5503:
---

Integrated in HBase-TRUNK #2672 (See 
[https://builds.apache.org/job/HBase-TRUNK/2672/])
hbase-5503. performance.xml, troubleshooting.xml - adding Troubleshooting 
case study (Revision 1295915)

 Result = SUCCESS

 [book] adding Troubleshooting case study
 

 Key: HBASE-5503
 URL: https://issues.apache.org/jira/browse/HBASE-5503
 Project: HBase
  Issue Type: Improvement
Reporter: Doug Meil
Assignee: Doug Meil
Priority: Minor
 Attachments: docbkx_hbase_5503.patch


 performance.xml
 * adding Reading entry for checking Input Splits for slow nodes
 * adding Network entry for checking networking interfaces, and link to new 
 Troubleshooting Case Study
 troubleshooting.xml
 * adding Network entry for checking networking interfaces, and link to new 
 Troubleshooting Case Study
 * adding Case Study as top-level section in Troubleshooting chapter.  This 
 exhaustive diagnosis of an exotic issue should provide a blueprint on how to 
 diagnose performance issues.
 Thanks to Dan Washburn from Explorys for providing the Case Study!





[jira] [Commented] (HBASE-5489) Add HTable accessor to get regions for a key range

2012-03-05 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222871#comment-13222871
 ] 

Hudson commented on HBASE-5489:
---

Integrated in HBase-TRUNK #2672 (See 
[https://builds.apache.org/job/HBase-TRUNK/2672/])
HBASE-5489 Addendum (Revision 1296011)
HBASE-5489 Add HTable accessor to get regions for a key range (Revision 1295729)

 Result = SUCCESS
larsh : 
Files : 
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/HTable.java

stack : 
Files : 
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
/hbase/trunk/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


 Add HTable accessor to get regions for a key range
 --

 Key: HBASE-5489
 URL: https://issues.apache.org/jira/browse/HBASE-5489
 Project: HBase
  Issue Type: Improvement
  Components: client
Reporter: David S. Wang
Assignee: David S. Wang
Priority: Minor
 Fix For: 0.92.1, 0.94.0

 Attachments: HBASE-5489-2.patch, HBASE-5489-3-0.92.1.patch, 
 HBASE-5489-3.patch, HBASE-5489-4.patch


 It would be nice to have an accessor to find all regions that overlap with a 
 particular range of keys. Right now, the only way to accomplish that is to 
 call HTable.getStartEndKeys(), then follow that with calls to 
 getRegionLocation() for the range of keys you are interested in.  This 
 algorithm has 2 drawbacks:
 * It returns more keys than is necessary most of the time.  This is 
 especially evident if there are a lot of regions comprising the table and the 
 range of keys is small.
 * It always does a scan of .META. via MetaScannerVisitor for at least 
 HTable.getStartEndKeys(), and perhaps for HRegionLocations that are not 
 already cached by the client.
 An accessor that limited its scans to a specified range could avoid scanning 
 .META. at all if the HRegionLocations being fetched were already cached by 
 the client, thereby potentially making this operation faster in common cases.
 Here's a proposal for the accessor:
   /**
* Get the corresponding regions for an arbitrary range of keys.
* <p>
* @param startKey Starting row in range, inclusive
* @param endKey Ending row in range, inclusive
* @return A list of HRegionLocations corresponding to the regions that
* contain the specified range
* @throws IOException if a remote or network exception occurs
*/
   public List<HRegionLocation> getRegionsInRange(final byte [] startKey,
 final byte [] endKey) throws IOException
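 The lookup the proposal describes (find all regions whose key range overlaps
 [startRow, endRow]) can be sketched as follows. This is an illustrative,
 self-contained model, not HBase code: the class and method names are made up,
 and a region is represented by its start key in a sorted map, standing in for
 HTable's cache of HRegionLocations.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.TreeMap;

// Sketch: regions sorted by start key; return names of regions that overlap
// the requested range. HTable could consult its cached HRegionLocations the
// same way instead of scanning all of .META.
public class RegionsInRangeSketch {
    // Maps each region's start key to a region name; byte[] keys are
    // compared lexicographically.
    private final TreeMap<byte[], String> regionsByStartKey =
        new TreeMap<>(Arrays::compare);

    public void addRegion(byte[] startKey, String name) {
        regionsByStartKey.put(startKey, name);
    }

    public List<String> getRegionsInRange(byte[] startRow, byte[] endRow) {
        List<String> result = new ArrayList<>();
        // The region containing startRow is the one with the greatest
        // start key <= startRow (the first region has the empty start key).
        byte[] first = regionsByStartKey.floorKey(startRow);
        if (first == null) {
            first = regionsByStartKey.firstKey();
        }
        for (var e : regionsByStartKey.tailMap(first, true).entrySet()) {
            if (Arrays.compare(e.getKey(), endRow) > 0) {
                break; // this region starts after the requested range ends
            }
            result.add(e.getValue());
        }
        return result;
    }
}
```

 Only regions intersecting the range are visited, which is the saving the
 proposal aims for when the table has many regions and the key range is small.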

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5371) Introduce AccessControllerProtocol.checkPermissions(Permission[] permissons) API

2012-03-05 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222870#comment-13222870
 ] 

Hudson commented on HBASE-5371:
---

Integrated in HBase-TRUNK #2672 (See 
[https://builds.apache.org/job/HBase-TRUNK/2672/])
HBASE-5371 Introduce AccessControllerProtocol.checkPermissions(Permission[] 
permissons) API, addendum (Enis) (Revision 1296709)

 Result = SUCCESS
tedyu : 
Files : 
* 
/hbase/trunk/security/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
* 
/hbase/trunk/security/src/main/java/org/apache/hadoop/hbase/security/access/AccessControllerProtocol.java


 Introduce AccessControllerProtocol.checkPermissions(Permission[] permissons) 
 API
 

 Key: HBASE-5371
 URL: https://issues.apache.org/jira/browse/HBASE-5371
 Project: HBase
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.92.1, 0.94.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.94.0

 Attachments: HBASE-5371-addendum_v1.patch, HBASE-5371_v2.patch, 
 HBASE-5371_v3-noprefix.patch, HBASE-5371_v3.patch


 We need to introduce something like 
 AccessControllerProtocol.checkPermissions(Permission[] permissions) API, so 
 that clients can check access rights before carrying out the operations. We 
 need this kind of operation for HCATALOG-245, which introduces authorization 
 providers for hbase over hcat. We cannot use getUserPermissions() since it 
 requires ADMIN permissions on the global/table level.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5508) Add an option to allow test output to show on the terminal

2012-03-05 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222872#comment-13222872
 ] 

Hudson commented on HBASE-5508:
---

Integrated in HBase-TRUNK #2672 (See 
[https://builds.apache.org/job/HBase-TRUNK/2672/])
HBASE-5508 Add an option to allow test output to show on the terminal 
(Scott Chen) (Revision 1296370)

 Result = SUCCESS
tedyu : 
Files : 
* /hbase/trunk/pom.xml


 Add an option to allow test output to show on the terminal
 --

 Key: HBASE-5508
 URL: https://issues.apache.org/jira/browse/HBASE-5508
 Project: HBase
  Issue Type: Improvement
  Components: test
Reporter: Scott Chen
Assignee: Scott Chen
Priority: Minor
 Fix For: 0.96.0

 Attachments: HBASE-5508.D2037.1.patch


 Sometimes it is useful to directly see the test results on the terminal.
 We can add a property to achieve that.
 mvn test -Dtest.output.tofile=false

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5010) Filter HFiles based on TTL

2012-03-05 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222876#comment-13222876
 ] 

Hudson commented on HBASE-5010:
---

Integrated in HBase-TRUNK #2672 (See 
[https://builds.apache.org/job/HBase-TRUNK/2672/])
HBASE-5010 Pass region info in 
LoadBalancer.randomAssignment(List<ServerName> servers) (Anoop Sam John) 
(Revision 1297155)

 Result = SUCCESS
tedyu : 
Files : 
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/DefaultLoadBalancer.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/LoadBalancer.java


 Filter HFiles based on TTL
 --

 Key: HBASE-5010
 URL: https://issues.apache.org/jira/browse/HBASE-5010
 Project: HBase
  Issue Type: Bug
Reporter: Mikhail Bautin
Assignee: Mikhail Bautin
 Fix For: 0.94.0

 Attachments: 5010.patch, D1017.1.patch, D1017.2.patch, D909.1.patch, 
 D909.2.patch, D909.3.patch, D909.4.patch, D909.5.patch, D909.6.patch


 In ScanWildcardColumnTracker we have
 {code:java}
  
   this.oldestStamp = EnvironmentEdgeManager.currentTimeMillis() - ttl;
   ...
   private boolean isExpired(long timestamp) {
 return timestamp < oldestStamp;
   }
 {code}
 but this time range filtering does not participate in HFile selection. In one 
 real case this caused next() calls to time out because all KVs in a table got 
 expired, but next() had to iterate over the whole table to find that out. We 
 should be able to filter out those HFiles right away. I think a reasonable 
 approach is to add a default timerange filter to every scan for a CF with a 
 finite TTL and utilize existing filtering in 
 StoreFile.Reader.passesTimerangeFilter.
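 The proposed file-level filtering can be sketched like this. This is an
 illustrative model, not HBase code: the stub class and field names are made
 up, and the point is only that a file whose newest timestamp is older than
 (now - ttl) can be skipped wholesale instead of iterating its KVs.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of TTL-based store file selection: each file tracks the newest
// timestamp it contains, so a scan can drop files that are entirely expired.
public class TtlFileFilterSketch {
    static class StoreFileStub {
        final String name;
        final long maxTimestamp; // newest KV timestamp in this file

        StoreFileStub(String name, long maxTimestamp) {
            this.name = name;
            this.maxTimestamp = maxTimestamp;
        }
    }

    // Mirrors ScanWildcardColumnTracker's per-KV check (timestamp < now - ttl
    // means expired), applied per file rather than per KeyValue.
    static List<StoreFileStub> selectFiles(List<StoreFileStub> files, long now, long ttl) {
        long oldestStamp = now - ttl;
        return files.stream()
            .filter(f -> f.maxTimestamp >= oldestStamp) // keep files with any live data
            .collect(Collectors.toList());
    }
}
```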

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5430) Fix licenses in 0.92.1 -- RAT plugin won't pass

2012-03-05 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222873#comment-13222873
 ] 

Hudson commented on HBASE-5430:
---

Integrated in HBase-TRUNK #2672 (See 
[https://builds.apache.org/job/HBase-TRUNK/2672/])
HBASE-5430 Fix licenses in 0.92.1 -- RAT plugin won't pass (Revision 
1296358)

 Result = SUCCESS
stack : 
Files : 
* /hbase/trunk/pom.xml


 Fix licenses in 0.92.1 -- RAT plugin won't pass
 ---

 Key: HBASE-5430
 URL: https://issues.apache.org/jira/browse/HBASE-5430
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
Priority: Blocker
 Fix For: 0.92.1

 Attachments: 5430.txt


 Use the -Drelease profile to see we are missing 30 or so licenses.  Fix.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5526) Optional file permission settings

2012-03-05 Thread Jesse Yates (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HBASE-5526:
---

Attachment: java_HBASE-5526.patch

First iteration, no unit tests, but should cover the identifiable data in hdfs.

 Optional file permission settings
 -

 Key: HBASE-5526
 URL: https://issues.apache.org/jira/browse/HBASE-5526
 Project: HBase
  Issue Type: New Feature
  Components: regionserver
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 0.94.0

 Attachments: java_HBASE-5526.patch


 Currently, all the files created by the HBase user are written with the 
 default file permissions granted by hdfs. However, it is often advantageous 
 to allow only a subset of users to view the actual data written by hbase when 
 scanning the raw hdfs files. 
 This ticket covers setting permissions for files written to hdfs that store 
 actual user data, as opposed to _all_ files written to hdfs, since many of 
 them contain only non-identifiable metadata.
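 The idea can be sketched with local-filesystem APIs; the actual patch would
 use Hadoop's FileSystem/FsPermission against HDFS, and the class and method
 names below are made up for illustration. Data files are written
 owner-read/write only, so other users scanning the raw files cannot read
 user data.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Sketch: write a data file, then restrict it to the owning (HBase) user.
public class RestrictedDataFileSketch {
    public static Path writeDataFile(Path path, byte[] data) throws IOException {
        Files.write(path, data);
        // rw------- : only the file owner can read the raw data.
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rw-------");
        Files.setPosixFilePermissions(path, perms);
        return path;
    }
}
```

 Making this optional, as the ticket title says, would just mean reading the
 permission string from configuration instead of hard-coding it.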

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5523) Fix Delete Timerange logic for KEEP_DELETED_CELLS

2012-03-05 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222878#comment-13222878
 ] 

Hadoop QA commented on HBASE-5523:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12517170/5523.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated -129 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 154 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1105//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1105//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1105//console

This message is automatically generated.

 Fix Delete Timerange logic for KEEP_DELETED_CELLS
 -

 Key: HBASE-5523
 URL: https://issues.apache.org/jira/browse/HBASE-5523
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.94.0, 0.96.0

 Attachments: 5523.txt


 A Delete at time T marks a Put at time T as deleted.
 In the parent issue I invented special logic that inserts a virtual 
 millisecond into the tr if the encountered KV is a delete marker.
 This was so that there would be a way to specify a timerange that allows 
 seeing the put but not the delete:
 {code}
 if (kv.isDelete()) {
   if (!keepDeletedCells) {
 // first ignore delete markers if the scanner can do so, and the
 // range does not include the marker
 boolean includeDeleteMarker = seePastDeleteMarkers ?
 // +1, to allow a range between a delete and put of same TS
 tr.withinTimeRange(timestamp+1) :
 tr.withinOrAfterTimeRange(timestamp);
 {code}
 Discussed this today with a coworker and he convinced me that this is very 
 confusing and also not needed.
 When we have a Delete and a Put at the same time T, there *is* no timerange 
 that can include the Put but not the Delete.
 So I will change the code to this (and fix the tests):
 {code}
 if (kv.isDelete()) {
   if (!keepDeletedCells) {
 // first ignore delete markers if the scanner can do so, and the
 // range does not include the marker
 boolean includeDeleteMarker = seePastDeleteMarkers ?
 tr.withinTimeRange(timestamp) :
 tr.withinOrAfterTimeRange(timestamp);
 {code}
 It's easier to understand, and does not lead to strange scenarios when the TS 
 is used as a controlled counter.
 Needs to be done before 0.94 goes out.
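 The argument above can be checked against a minimal model of TimeRange's
 [minStamp, maxStamp) semantics. This is an illustrative sketch, not HBase's
 TimeRange class: since a Put and a Delete at the same timestamp T are tested
 with the same predicate, any range either includes both or excludes both, so
 the "+1 virtual millisecond" trick could never separate them.

```java
// Minimal model of a half-open timerange: minStamp inclusive, maxStamp exclusive.
public class TimeRangeSketch {
    final long minStamp; // inclusive
    final long maxStamp; // exclusive

    TimeRangeSketch(long minStamp, long maxStamp) {
        this.minStamp = minStamp;
        this.maxStamp = maxStamp;
    }

    boolean withinTimeRange(long timestamp) {
        return minStamp <= timestamp && timestamp < maxStamp;
    }
}
```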

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5519) Incorrect warning in splitlogmanager

2012-03-05 Thread Prakash Khemani (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prakash Khemani updated HBASE-5519:
---

Attachment: 0001-HBASE-5519-Incorrect-warning-in-splitlogmanager.patch

replace a log.warn() w/ a comment

 Incorrect warning in splitlogmanager
 

 Key: HBASE-5519
 URL: https://issues.apache.org/jira/browse/HBASE-5519
 Project: HBase
  Issue Type: Improvement
Reporter: Prakash Khemani
 Attachments: 
 0001-HBASE-5519-Incorrect-warning-in-splitlogmanager.patch


 Because of recently added behavior, where the splitlogmanager timeout thread 
 gets data from the zk node just to check that the zk node is there, we might 
 have multiple watches firing without the task znode expiring.
 Remove the poor warning message. (Internally, there was an assert that failed 
 in Mikhail's tests.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5507) ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not use ByteBuffer correctly

2012-03-05 Thread Scott Chen (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Chen updated HBASE-5507:
--

Attachment: HBASE-5507-0.94.txt

Attached the patch for 94.

 ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not 
 use ByteBuffer correctly
 -

 Key: HBASE-5507
 URL: https://issues.apache.org/jira/browse/HBASE-5507
 Project: HBase
  Issue Type: Bug
  Components: thrift
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.94.0

 Attachments: HBASE-5507-0.94.txt, HBASE-5507.D2073.1.patch, 
 HBASE-5507.D2073.2.patch, HBASE-5507.D2073.3.patch


 We observed that, with the framed transport option, the Thrift call 
 ThriftServerRunner.HbaseHandler.getRegionInfo() receives a corrupted parameter 
 (a garbage string attached to the beginning). This may be a Thrift bug and 
 requires further investigation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5074) support checksums in HBase block cache

2012-03-05 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222898#comment-13222898
 ] 

Phabricator commented on HBASE-5074:


mbautin has accepted the revision [jira] [HBASE-5074] Support checksums in 
HBase block cache.

  @dhruba: looks good! A few minor comments inline.

  Also, I still think there is some code duplication between TestHFileBlock and 
TestHFileBlockCompatibility that we could get rid of, but we can do that in a 
separate patch.

  Could you please attach the final patch to the JIRA and run it on Hadoop QA?

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java:48 
s/do do/do/
  src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java:1242 
s/do do/do/
  
src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java:161-174
 It would be great to factor out the common part of this hard-coded gzip blob 
so that it is not repeated in TestHFileBlock and here.

  This is an example of what I meant in my comment regarding code duplication.

  Alternatively, we can remove code duplication in a follow-up patch.

REVISION DETAIL
  https://reviews.facebook.net/D1521

BRANCH
  svn


 support checksums in HBase block cache
 --

 Key: HBASE-5074
 URL: https://issues.apache.org/jira/browse/HBASE-5074
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: dhruba borthakur
Assignee: dhruba borthakur
 Fix For: 0.94.0

 Attachments: D1521.1.patch, D1521.1.patch, D1521.10.patch, 
 D1521.10.patch, D1521.10.patch, D1521.10.patch, D1521.10.patch, 
 D1521.11.patch, D1521.11.patch, D1521.12.patch, D1521.12.patch, 
 D1521.2.patch, D1521.2.patch, D1521.3.patch, D1521.3.patch, D1521.4.patch, 
 D1521.4.patch, D1521.5.patch, D1521.5.patch, D1521.6.patch, D1521.6.patch, 
 D1521.7.patch, D1521.7.patch, D1521.8.patch, D1521.8.patch, D1521.9.patch, 
 D1521.9.patch


 The current implementation of HDFS stores the data in one block file and the 
 metadata(checksum) in another block file. This means that every read into the 
 HBase block cache actually consumes two disk iops, one to the datafile and 
 one to the checksum file. This is a major problem for scaling HBase, because 
 HBase is usually bottlenecked on the number of random disk iops that the 
 storage-hardware offers.
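 The fix direction is to store the checksum inline with the data, so one read
 fetches both. A self-contained sketch of that idea (illustrative only; the
 real HFileBlock layout and checksum scheme are more involved, and the names
 below are made up):

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

// Sketch of an inline checksum: the CRC32 of a block is stored in the block
// itself, so a single disk read yields data plus checksum (one iop) instead
// of a second read against a separate metadata/.crc file.
public class InlineChecksumSketch {
    // Layout: [4-byte data length][data][8-byte CRC32 of data]
    public static byte[] writeBlock(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data);
        ByteBuffer buf = ByteBuffer.allocate(4 + data.length + 8);
        buf.putInt(data.length).put(data).putLong(crc.getValue());
        return buf.array();
    }

    // Verifies the inline checksum and returns the data, or throws on corruption.
    public static byte[] readBlock(byte[] block) {
        ByteBuffer buf = ByteBuffer.wrap(block);
        byte[] data = new byte[buf.getInt()];
        buf.get(data);
        long stored = buf.getLong();
        CRC32 crc = new CRC32();
        crc.update(data);
        if (crc.getValue() != stored) {
            throw new IllegalStateException("checksum mismatch: block is corrupt");
        }
        return data;
    }
}
```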

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira





[jira] [Commented] (HBASE-5515) Add a processRow API that supports atomic multiple reads and writes on a row

2012-03-05 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222901#comment-13222901
 ] 

Hadoop QA commented on HBASE-5515:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12517174/HBASE-5515.D2067.14.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated -128 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 155 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.io.TestHeapSize

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1107//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1107//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1107//console

This message is automatically generated.

 Add a processRow API that supports atomic multiple reads and writes on a row
 

 Key: HBASE-5515
 URL: https://issues.apache.org/jira/browse/HBASE-5515
 Project: HBase
  Issue Type: New Feature
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HBASE-5515.D2067.1.patch, HBASE-5515.D2067.10.patch, 
 HBASE-5515.D2067.11.patch, HBASE-5515.D2067.12.patch, 
 HBASE-5515.D2067.13.patch, HBASE-5515.D2067.14.patch, 
 HBASE-5515.D2067.2.patch, HBASE-5515.D2067.3.patch, HBASE-5515.D2067.4.patch, 
 HBASE-5515.D2067.5.patch, HBASE-5515.D2067.6.patch, HBASE-5515.D2067.7.patch, 
 HBASE-5515.D2067.8.patch, HBASE-5515.D2067.9.patch


 We have modified HRegion.java internally to do some atomic row processing. It 
 will be nice to have a pluggable API for this.
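 The shape of such an API can be sketched as below. This is an illustrative,
 self-contained model, not the HBase patch: the class names are made up, a row
 is a map of columns, and a per-row lock stands in for HRegion's row locking.
 The pluggable part is the processor the caller supplies, which reads and
 rewrites a row's columns while the lock is held.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch: atomic multi-read/multi-write on one row via a pluggable processor.
public class ProcessRowSketch {
    private final Map<String, Map<String, Long>> rows = new HashMap<>();
    private final Map<String, Object> rowLocks = new HashMap<>();

    private Object lockFor(String row) {
        synchronized (rowLocks) {
            return rowLocks.computeIfAbsent(row, r -> new Object());
        }
    }

    // The processor maps the row's current columns to its new columns; reads
    // and writes happen under the same row lock, so they appear as one unit.
    public void processRow(String row, Function<Map<String, Long>, Map<String, Long>> processor) {
        synchronized (lockFor(row)) {
            Map<String, Long> current = rows.getOrDefault(row, new HashMap<>());
            rows.put(row, processor.apply(new HashMap<>(current)));
        }
    }

    public Long get(String row, String column) {
        synchronized (lockFor(row)) {
            return rows.getOrDefault(row, Map.of()).get(column);
        }
    }
}
```

 A caller can then, e.g., increment two columns of a row as one atomic step,
 which is awkward to express with plain Get/Put pairs.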

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5507) ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not use ByteBuffer correctly

2012-03-05 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222904#comment-13222904
 ] 

Hadoop QA commented on HBASE-5507:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12517181/HBASE-5507-0.94.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1109//console

This message is automatically generated.

 ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not 
 use ByteBuffer correctly
 -

 Key: HBASE-5507
 URL: https://issues.apache.org/jira/browse/HBASE-5507
 Project: HBase
  Issue Type: Bug
  Components: thrift
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.94.0

 Attachments: HBASE-5507-0.94.txt, HBASE-5507.D2073.1.patch, 
 HBASE-5507.D2073.2.patch, HBASE-5507.D2073.3.patch


 We observed that, with the framed transport option, the Thrift call 
 ThriftServerRunner.HbaseHandler.getRegionInfo() receives a corrupted parameter 
 (a garbage string attached to the beginning). This may be a Thrift bug and 
 requires further investigation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5507) ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not use ByteBuffer correctly

2012-03-05 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13222924#comment-13222924
 ] 

Zhihong Yu commented on HBASE-5507:
---

Integrated to 0.94.

Thanks for the patch, Scott.

 ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not 
 use ByteBuffer correctly
 -

 Key: HBASE-5507
 URL: https://issues.apache.org/jira/browse/HBASE-5507
 Project: HBase
  Issue Type: Bug
  Components: thrift
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.94.0

 Attachments: HBASE-5507-0.94.txt, HBASE-5507.D2073.1.patch, 
 HBASE-5507.D2073.2.patch, HBASE-5507.D2073.3.patch


 We observed that, with the framed transport option, the Thrift call 
 ThriftServerRunner.HbaseHandler.getRegionInfo() receives a corrupted parameter 
 (a garbage string attached to the beginning). This may be a Thrift bug and 
 requires further investigation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5507) ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not use ByteBuffer correctly

2012-03-05 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5507:
--

Comment: was deleted

(was: -1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12517181/HBASE-5507-0.94.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1109//console

This message is automatically generated.)

 ThriftServerRunner.HbaseHandler.getRegionInfo() and getTableRegions() do not 
 use ByteBuffer correctly
 -

 Key: HBASE-5507
 URL: https://issues.apache.org/jira/browse/HBASE-5507
 Project: HBase
  Issue Type: Bug
  Components: thrift
Reporter: Scott Chen
Assignee: Scott Chen
 Fix For: 0.94.0

 Attachments: HBASE-5507-0.94.txt, HBASE-5507.D2073.1.patch, 
 HBASE-5507.D2073.2.patch, HBASE-5507.D2073.3.patch


 We observed that, with the framed transport option, the Thrift call 
 ThriftServerRunner.HbaseHandler.getRegionInfo() receives a corrupted parameter 
 (a garbage string attached to the beginning). This may be a Thrift bug and 
 requires further investigation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



