[jira] [Commented] (HBASE-10391) Deprecate KeyValue#getBuffer

2014-01-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877324#comment-13877324
 ] 

ramkrishna.s.vasudevan commented on HBASE-10391:


+1 on patch.

 Deprecate KeyValue#getBuffer
 ----------------------------

 Key: HBASE-10391
 URL: https://issues.apache.org/jira/browse/HBASE-10391
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
 Fix For: 0.98.0, 0.99.0

 Attachments: 10391.txt


 Make the deprecation a subtask of the parent.  Let the parent stand as an 
 umbrella issue.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10374) add jemalloc into the choice of memstore allocation

2014-01-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877326#comment-13877326
 ] 

stack commented on HBASE-10374:
---

[~jianbginglover] You suggest jemalloc for memstore block allocations.  What 
are you thinking we'll see by way of benefit?  I'd think that isolating the 
MemStore and then trying various options in a simple testing harness would be 
the way to go (MemStore can be stood up outside of an HStore IIRC).  Try the 
netty implementation first since that code is done.  See if you can get any 
speedup in your testing rig.  If there's improvement, then let's talk.  We'll 
have to see about what [~ndimiduk] reminds us of: that the netty 
implementation is ByteBuf-based (as opposed to ByteBuffer).

On pulling in netty4, it might not be too bad since they changed the package 
from org.jboss to io.netty.
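The isolated-harness idea above can be sketched in plain Java, with no HBase or netty dependency. The slab scheme below is only MSLAB-like in spirit; the chunk size, cell size, and loop shape are illustrative assumptions, not HBase code:

```java
public class AllocHarness {
    static final int CHUNK = 2 * 1024 * 1024; // 2MB slab, similar in spirit to MSLAB's default

    /** One fresh heap allocation per cell; returns elapsed nanos. */
    public static long naive(int cells, int cellSize) {
        long t0 = System.nanoTime();
        long sink = 0;
        for (int i = 0; i < cells; i++) {
            sink += new byte[cellSize].length;   // keep the allocation observable
        }
        if (sink != (long) cells * cellSize) throw new AssertionError();
        return System.nanoTime() - t0;
    }

    /** Carve cells out of big reusable chunks to cut per-cell allocator work. */
    public static long slab(int cells, int cellSize) {
        long t0 = System.nanoTime();
        byte[] chunk = new byte[CHUNK];
        int off = 0;
        long sink = 0;
        for (int i = 0; i < cells; i++) {
            if (off + cellSize > chunk.length) { chunk = new byte[CHUNK]; off = 0; }
            off += cellSize;  // a real MSLAB would copy the cell bytes into [off - cellSize, off)
            sink += cellSize;
        }
        if (sink != (long) cells * cellSize) throw new AssertionError();
        return System.nanoTime() - t0;
    }

    public static void main(String[] args) {
        System.out.println("naive ns: " + naive(1_000_000, 100));
        System.out.println("slab  ns: " + slab(1_000_000, 100));
    }
}
```

Swapping a netty PooledByteBufAllocator or a jemalloc-backed scheme into such a rig would then be an apples-to-apples comparison.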



 add jemalloc into the choice of memstore allocation 
 ----------------------------------------------------

 Key: HBASE-10374
 URL: https://issues.apache.org/jira/browse/HBASE-10374
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 0.96.1.1
Reporter: Bing Jiang
Priority: Minor

 https://blog.twitter.com/2013/netty-4-at-twitter-reduced-gc-overhead 
 introduced that Netty used jemalloc to gain benefits from GC.
 It can be a good choice for memstore block allocation.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer

2014-01-21 Thread Eric Charles (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877328#comment-13877328
 ] 

Eric Charles commented on HBASE-10336:
--

[~stack] 250 was the default if hadoop.http.max.threads was not present, but 
yes, I also found that value much too high. +1 for 10, which is a very 
reasonable number.
Yes, a few servlet classes still need to go home to the http package (in main 
as well as in test).
I didn't look at HADOOP_SSL_ENABLED_KEY nor HADOOP_JETTY_LOGS_SERVE_ALIASES...
Jenkins is strict and doesn't like the unneeded comma in 
/package-info.java:[25,43] @InterfaceAudience.LimitedPrivate({"HBase",})

Where are we now, Mr. Stack? Have you updated the patch and will you upload a 
new one, or would you like me to make it?
We can talk and fix HADOOP_SSL_ENABLED_KEY, HADOOP_JETTY_LOGS_SERVE_ALIASES, 
and the upgrade to whatever jetty version after the commit.

 Remove deprecated usage of Hadoop HttpServer in InfoServer
 --

 Key: HBASE-10336
 URL: https://issues.apache.org/jira/browse/HBASE-10336
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.99.0
Reporter: Eric Charles
Assignee: Eric Charles
 Attachments: HBASE-10336-1.patch, HBASE-10336-2.patch, 
 HBASE-10336-3.patch, HBASE-10336-4.patch, HBASE-10336-5.patch


 Recent changes in Hadoop HttpServer give an NPE when running on hadoop 
 3.0.0-SNAPSHOT. The way we use HttpServer is deprecated and will probably 
 not be fixed (see HDFS-5760). We'd better move to the new proposed builder 
 pattern, which means we can no longer use inheritance to build our nice 
 InfoServer.
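A hedged sketch of the structural point in the description: once HttpServer instances only come out of a builder, InfoServer has to wrap the built server rather than extend it. The HttpServer/Builder classes below are simplified stand-ins, not the real Hadoop API:

```java
public class InfoServerSketch {
    // Simplified stand-ins for Hadoop's HttpServer and its Builder, NOT the real API.
    static class HttpServer {
        final String name; final int port;
        private HttpServer(String name, int port) { this.name = name; this.port = port; }
        static class Builder {
            private String name = "info"; private int port = 0;
            Builder setName(String n) { this.name = n; return this; }
            Builder setPort(int p) { this.port = p; return this; }
            HttpServer build() { return new HttpServer(name, port); }
        }
    }

    // The constructor is private and instances come from the builder, so
    // InfoServer holds the built instance and delegates: composition, not inheritance.
    static class InfoServer {
        private final HttpServer http;
        InfoServer(int port) {
            this.http = new HttpServer.Builder().setName("hbase-info").setPort(port).build();
        }
        int getPort() { return http.port; }
    }

    public static int startAndGetPort(int port) { return new InfoServer(port).getPort(); }

    public static void main(String[] args) {
        System.out.println(startAndGetPort(16010)); // prints 16010
    }
}
```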



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10391) Deprecate KeyValue#getBuffer

2014-01-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-10391:
--

  Resolution: Fixed
Assignee: stack
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to 0.98 and 0.99. Hope that's ok, [~apurtell].

 Deprecate KeyValue#getBuffer
 ----------------------------

 Key: HBASE-10391
 URL: https://issues.apache.org/jira/browse/HBASE-10391
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.99.0

 Attachments: 10391.txt


 Make the deprecation a subtask of the parent.  Let the parent stand as an 
 umbrella issue.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10391) Deprecate KeyValue#getBuffer

2014-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877342#comment-13877342
 ] 

Hadoop QA commented on HBASE-10391:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12624089/10391.txt
  against trunk revision .
  ATTACHMENT ID: 12624089

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8480//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8480//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8480//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8480//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8480//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8480//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8480//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8480//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8480//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8480//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8480//console

This message is automatically generated.

 Deprecate KeyValue#getBuffer
 ----------------------------

 Key: HBASE-10391
 URL: https://issues.apache.org/jira/browse/HBASE-10391
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.99.0

 Attachments: 10391.txt


 Make the deprecation a subtask of the parent.  Let the parent stand as an 
 umbrella issue.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-21 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10322:
---

Attachment: HBASE-10322_V4.patch

 Strip tags from KV while sending back to client on reads
 ---------------------------------------------------------

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_V3.patch, HBASE-10322_V4.patch, HBASE-10322_codec.patch


 Right now we have some inconsistency wrt sending back tags on read. We do 
 this in scan when using the Java client (codec-based cell block encoding), 
 but during a Get operation, or when a pure PB-based Scan comes in, we are 
 not sending back the tags.  So we have to do one of the below fixes:
 1. Send back tags in the missing cases also. But sending back visibility 
 expressions / cell ACLs is not correct.
 2. Don't send back tags in any case. This will be a problem when a tool like 
 ExportTool uses the scan to export the table data: we will miss exporting 
 the cell visibility/ACL.
 3. Send back tags based on some condition. It has to be on a per-scan basis. 
 The simplest way is to pass some kind of attribute in Scan which says 
 whether to send back tags or not. But trusting whatever the scan specifies 
 might not be correct IMO. Then comes the way of checking the user who is 
 doing the scan: send back tags only when an HBase super user is doing the 
 scan. So when a case like the Export Tool's comes up, the execution should 
 happen as a super user.
 So IMO we should go with #3.
 Patch coming soon.
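Option #3 can be sketched in plain Java. The Cell type, the tag strings, and the superuser flag below are simplified stand-ins for the HBase classes and ACL check, not the actual API:

```java
import java.util.Collections;
import java.util.List;

public class TagStripSketch {
    // Simplified stand-in for an HBase cell carrying tags.
    public record Cell(byte[] value, List<String> tags) {}

    public static Cell forClient(Cell stored, boolean requesterIsSuperUser) {
        if (requesterIsSuperUser) {
            return stored; // e.g. an ExportTool scan run as superuser keeps ACL/visibility tags
        }
        // Normal reads: same value, tags stripped before serialization.
        return new Cell(stored.value(), Collections.emptyList());
    }

    public static void main(String[] args) {
        Cell c = new Cell(new byte[] {1}, List.of("acl:rw", "vis:secret"));
        System.out.println(forClient(c, false).tags());        // []
        System.out.println(forClient(c, true).tags().size());  // 2
    }
}
```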



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10391) Deprecate KeyValue#getBuffer

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877384#comment-13877384
 ] 

Hudson commented on HBASE-10391:


SUCCESS: Integrated in HBase-TRUNK #4842 (See 
[https://builds.apache.org/job/HBase-TRUNK/4842/])
HBASE-10391 Deprecate KeyValue#getBuffer (stack: rev 1559935)
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java


 Deprecate KeyValue#getBuffer
 ----------------------------

 Key: HBASE-10391
 URL: https://issues.apache.org/jira/browse/HBASE-10391
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.99.0

 Attachments: 10391.txt


 Make the deprecation a subtask of the parent.  Let the parent stand as an 
 umbrella issue.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10143) Clean up dead local stores in FSUtils

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877388#comment-13877388
 ] 

Hudson commented on HBASE-10143:


SUCCESS: Integrated in Hadoop-Yarn-trunk #459 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/459/])
HBASE-10143 replace WritableFactories's hashmap with ConcurrentHashMap (Liang 
Xie via Stack) (stack: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1559923)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableFactories.java


 Clean up dead local stores in FSUtils
 -

 Key: HBASE-10143
 URL: https://issues.apache.org/jira/browse/HBASE-10143
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.0, 0.96.0, 0.99.0
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Minor
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: HBASE-10143-0.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10277) refactor AsyncProcess

2014-01-21 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877390#comment-13877390
 ] 

Nicolas Liochon commented on HBASE-10277:
-

bq. Does it mean that an AsyncProcess can now be shared between Tables?
bq. Yes
This seems great. Would it be possible then to have a single AsyncProcess per 
HConnection, shared between the different HTable objects?
Side question: would it make sense to use the multiget path for a single get, 
instead of having two different paths?

bq.  When we have a scenario to use some callback, we can add it, under YAGNI 
principle 
The scenario is already there: it's how to manage the errors with the write 
buffer. I didn't want to make the interface public (as once it's public you 
should not change it), but at the end of the day, the callback is the most 
obvious solution to the problem. Having it here sets a base for the discussion. 
If your patch allows having common resource management per HTable, I'm happy 
to lose the callbacks as a side effect of the patch, but having both would be 
better IMHO.

bq. IIRC most of these paths are deprecated.
What's deprecated is mainly that the batch interfaces were in HConnection 
instead of HTable. The Object[] is ugly, but is still the 'recommended' way.


 refactor AsyncProcess
 -

 Key: HBASE-10277
 URL: https://issues.apache.org/jira/browse/HBASE-10277
 Project: HBase
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-10277.patch


 AsyncProcess currently has two patterns of usage, one from HTable flush w/o 
 callback and with reuse, and one from HCM/HTable batch call, with callback 
 and w/o reuse. In the former case (but not the latter), it also does some 
 throttling of actions on initial submit call, limiting the number of 
 outstanding actions per server.
 The latter case is relatively straightforward. The former appears to be error 
 prone due to reuse - if, as javadoc claims should be safe, multiple submit 
 calls are performed without waiting for the async part of the previous call 
 to finish, fields like hasError become ambiguous and can be used for the 
 wrong call; callback for success/failure is called based on original index 
 of an action in submitted list, but with only one callback supplied to AP in 
 ctor it's not clear to which submit call the index belongs, if several are 
 outstanding.
 I was going to add support for HBASE-10070 to AP, and found that it might be 
 difficult to do cleanly.
 It would be nice to normalize AP usage patterns; in particular, separate the 
 global part (load tracking) from per-submit-call part.
 Per-submit part can more conveniently track stuff like initialActions, 
 mapping of indexes and retry information, that is currently passed around the 
 method calls.
 -I am not sure yet, but maybe sending of the original index to server in 
 ClientProtos.MultiAction can also be avoided.- Cannot be avoided because 
 the API to server doesn't have one-to-one correspondence between requests and 
 responses in an individual call to multi (retries/rearrangement have nothing 
 to do with it)
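The proposed separation of the global part (load tracking) from the per-submit part (error flags, index mapping) might look roughly like this in plain Java. All names are illustrative, not the actual AsyncProcess fields:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncProcessSketch {
    // Global part: shared across submits, e.g. outstanding actions per server.
    public final Map<String, AtomicInteger> tasksPerServer = new ConcurrentHashMap<>();

    // Per-submit part: owned by exactly one submit() call, so a second
    // outstanding submit cannot make hasError or the index mapping ambiguous.
    public static class SubmitContext {
        public final int[] originalIndexes;  // internal action order -> caller's list order
        public final AtomicBoolean hasError = new AtomicBoolean(false);
        SubmitContext(int n) {
            originalIndexes = new int[n];
            for (int i = 0; i < n; i++) originalIndexes[i] = i;
        }
    }

    public SubmitContext submit(String server, int actions) {
        tasksPerServer.computeIfAbsent(server, s -> new AtomicInteger()).addAndGet(actions);
        return new SubmitContext(actions);  // errors/indexes tracked here, not on the shared AP
    }

    public static void main(String[] args) {
        AsyncProcessSketch ap = new AsyncProcessSketch();
        SubmitContext a = ap.submit("rs1", 3);
        SubmitContext b = ap.submit("rs1", 2);
        a.hasError.set(true);                                    // does not leak into submit b
        System.out.println(b.hasError.get());                    // false
        System.out.println(ap.tasksPerServer.get("rs1").get());  // 5
    }
}
```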



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10391) Deprecate KeyValue#getBuffer

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877398#comment-13877398
 ] 

Hudson commented on HBASE-10391:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #91 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/91/])
HBASE-10391 Deprecate KeyValue#getBuffer (stack: rev 1559934)
* 
/hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java


 Deprecate KeyValue#getBuffer
 ----------------------------

 Key: HBASE-10391
 URL: https://issues.apache.org/jira/browse/HBASE-10391
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.99.0

 Attachments: 10391.txt


 Make the deprecation a subtask of the parent.  Let the parent stand as an 
 umbrella issue.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877417#comment-13877417
 ] 

Hadoop QA commented on HBASE-10322:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12624103/HBASE-10322_V4.patch
  against trunk revision .
  ATTACHMENT ID: 12624103

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 23 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 3 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8481//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8481//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8481//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8481//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8481//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8481//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8481//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8481//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8481//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8481//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8481//console

This message is automatically generated.

 Strip tags from KV while sending back to client on reads
 ---------------------------------------------------------

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_V3.patch, HBASE-10322_V4.patch, HBASE-10322_codec.patch


 Right now we have some inconsistency wrt sending back tags on read. We do 
 this in scan when using the Java client (codec-based cell block encoding), 
 but during a Get operation, or when a pure PB-based Scan comes in, we are 
 not sending back the tags.  So we have to do one of the below fixes:
 1. Send back tags in the missing cases also. But sending back visibility 
 expressions / cell ACLs is not correct.
 2. Don't send back tags in any case. This will be a problem when a tool like 
 ExportTool uses the scan to export the table data: we will miss exporting 
 the cell visibility/ACL.
 3. Send back tags based on some condition. It has to be on a per-scan basis. 
 The simplest way is to pass some kind of attribute in Scan which says 
 whether to send back tags or not. But trusting whatever the scan specifies 
 might not be correct IMO. Then comes the way of checking the user who is 
 doing the scan: send back tags only when an HBase super user is doing the 
 scan. So when a case like the Export Tool's comes up, the execution should 
 happen as a super user.
 So IMO we should go with #3.
 Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10384) Failed to increment serveral columns in one Increment

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877431#comment-13877431
 ] 

Hudson commented on HBASE-10384:


FAILURE: Integrated in hbase-0.96-hadoop2 #182 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/182/])
HBASE-10384 Failed to increment serveral columns in one Increment (jxiang: rev 
1559857)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


 Failed to increment serveral columns in one Increment
 -

 Key: HBASE-10384
 URL: https://issues.apache.org/jira/browse/HBASE-10384
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Blocker
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: hbase-10384.patch


 We have a problem incrementing several columns of a row in one increment 
 request.
 This one works; we get all columns incremented as expected:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
 However, this one only increments counter_A; the other columns are reset to 
 1 instead of being incremented:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
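One plausible way such a failure arises (an assumption about the mechanism, not the committed fix): stored cells always come back in sorted column order, so if the server merges them positionally against the requested qualifiers, only a sorted request lines up; misaligned qualifiers look like brand-new counters and get written as 1. A plain-Java model of that merge:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

public class IncrementOrderSketch {
    // Stored counters are keyed (and iterated) in sorted qualifier order.
    public static Map<String, Long> increment(SortedMap<String, Long> store, List<String> quals) {
        Iterator<String> stored = store.keySet().iterator();
        String cur = stored.hasNext() ? stored.next() : null;
        Map<String, Long> result = new LinkedHashMap<>();
        for (String q : quals) {
            if (cur != null && cur.equals(q)) {      // positional match, as in the hypothesized buggy merge
                result.put(q, store.get(q) + 1);
                cur = stored.hasNext() ? stored.next() : null;
            } else {
                result.put(q, 1L);                   // treated as a brand-new counter
            }
        }
        return result;
    }

    public static void main(String[] args) {
        SortedMap<String, Long> store = new TreeMap<>(Map.of(
                "counter_A", 5L, "counter_B", 5L, "counter_C", 5L, "counter_D", 5L));
        // Sorted request order: every counter goes 5 -> 6.
        System.out.println(increment(store,
                List.of("counter_A", "counter_B", "counter_C", "counter_D")));
        // Order B, C, A, D: only counter_A reaches 6; the rest come back as 1,
        // matching the behavior described in the report.
        System.out.println(increment(store,
                List.of("counter_B", "counter_C", "counter_A", "counter_D")));
    }
}
```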



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10213) Add read log size per second metrics for replication source

2014-01-21 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877435#comment-13877435
 ] 

Feng Honghua commented on HBASE-10213:
--

bq.However, it is not clear enough to know how many bytes replicating to peer 
cluster from these metrics. In production environment, it may be important to 
know the size of replicating data per second
The intention of this jira is good :-), but examining the patch:
{code}metrics.incrLogReadInByes(this.repLogReader.getPosition() - 
positionBeforeRead);{code}
the metric above only reflects the log read/parse rate, not the desired rate 
of data actually replicated to the peer cluster: the read/parsed log files 
may contain many kvs from column families with replication scope=0, and those 
are filtered out and removed from the entries list before the real 
replication to the peer cluster occurs.
Why not use currentSize, the size of all entries that will really be 
replicated to the peer cluster?
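The distinction being made here, bytes read/parsed from the log versus bytes actually shipped after scope filtering, can be modeled in a few lines of plain Java. The metric and field names below are illustrative, not the ones in the patch:

```java
import java.util.List;

public class ReplicationMetricsSketch {
    public long logReadInBytes;   // everything parsed from the WAL
    public long shippedInBytes;   // only entries that survive scope filtering

    // Simplified stand-in for a WAL edit; scope=0 families have replication disabled.
    public record Edit(int sizeInBytes, boolean replicationScopeEnabled) {}

    public void onBatch(List<Edit> parsed) {
        for (Edit e : parsed) {
            logReadInBytes += e.sizeInBytes();
            if (e.replicationScopeEnabled()) {
                shippedInBytes += e.sizeInBytes();  // the "currentSize"-style number
            }
        }
    }

    public static void main(String[] args) {
        ReplicationMetricsSketch m = new ReplicationMetricsSketch();
        m.onBatch(List.of(new Edit(100, true), new Edit(300, false), new Edit(50, true)));
        // 450 bytes were read, but only 150 will actually cross to the peer.
        System.out.println(m.logReadInBytes + " read, " + m.shippedInBytes + " shipped");
    }
}
```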

 Add read log size per second metrics for replication source
 ---

 Key: HBASE-10213
 URL: https://issues.apache.org/jira/browse/HBASE-10213
 Project: HBase
  Issue Type: Improvement
  Components: metrics, Replication
Affects Versions: 0.94.14
Reporter: cuijianwei
Assignee: cuijianwei
Priority: Minor
 Fix For: 0.98.0, 0.99.0

 Attachments: 10213-trunk-addendum-1.patch, HBASE-10213-0.94-v1.patch, 
 HBASE-10213-0.94-v2.patch, HBASE-10213-trunk-v1.patch


 The current metrics of replication source contain logEditsReadRate, 
 shippedBatchesRate, etc., which indicate to some extent how fast data is 
 replicated to the peer cluster. However, from these metrics it is not clear 
 how many bytes are replicated to the peer cluster. In a production 
 environment, it may be important to know the size of the data replicated per 
 second, because other services may be affected if the network becomes busy.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-9501) No throttling for replication

2014-01-21 Thread Feng Honghua (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Honghua updated HBASE-9501:


Attachment: HBASE-9501-trunk_v0.patch

trunk patch attached

 No throttling for replication
 -

 Key: HBASE-9501
 URL: https://issues.apache.org/jira/browse/HBASE-9501
 Project: HBase
  Issue Type: Improvement
  Components: Replication
Reporter: Feng Honghua
Assignee: Feng Honghua
 Attachments: HBASE-9501-trunk_v0.patch


 When we disable a peer for a period of time and then enable it, the 
 ReplicationSource in the master cluster will push the hlog entries 
 accumulated during the disabled interval to the re-enabled peer cluster at 
 full speed.
 If the bandwidth of the two clusters is shared by different applications, a 
 full-speed push for replication can use all the bandwidth and severely 
 affect the other applications.
 There are two configs, replication.source.size.capacity and 
 replication.source.nb.capacity, to tweak the batch size each push delivers, 
 but if we decrease them, the number of pushes increases, and all these 
 pushes proceed continuously without pause, so they are of no obvious help 
 for bandwidth throttling.
 From a bandwidth-sharing and push-speed perspective, it's more reasonable to 
 provide a bandwidth upper limit for each peer push channel; within that 
 limit, a peer can choose a big batch size for each push for bandwidth 
 efficiency.
 Any opinion?
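The per-peer bandwidth cap described above could be a simple token bucket: each push asks for as many tokens as the batch has bytes and is told how long to wait, so big batches stay efficient while the average rate is bounded. This plain-Java sketch is illustrative only; the bytes-per-second config and its units are assumptions:

```java
public class ReplicationThrottler {
    private final long bytesPerSecond;  // e.g. from a hypothetical per-peer bandwidth config
    private double tokens;
    private long lastRefillNanos;

    public ReplicationThrottler(long bytesPerSecond) {
        this.bytesPerSecond = bytesPerSecond;
        this.tokens = bytesPerSecond;   // start with one second of budget
        this.lastRefillNanos = System.nanoTime();
    }

    /** Returns how long (ms) the caller should sleep before shipping a batch of this size. */
    public synchronized long acquireDelayMillis(long batchBytes) {
        long now = System.nanoTime();
        // Refill proportionally to elapsed time, capped at one second of budget.
        tokens = Math.min(bytesPerSecond,
                tokens + (now - lastRefillNanos) / 1e9 * bytesPerSecond);
        lastRefillNanos = now;
        if (tokens >= batchBytes) {
            tokens -= batchBytes;
            return 0;                   // within budget, push immediately
        }
        double deficit = batchBytes - tokens;
        tokens = 0;
        return (long) Math.ceil(deficit * 1000.0 / bytesPerSecond);
    }

    public static void main(String[] args) {
        ReplicationThrottler t = new ReplicationThrottler(1_000_000); // ~1MB/s to this peer
        System.out.println(t.acquireDelayMillis(500_000));   // 0: first half-MB fits the budget
        System.out.println(t.acquireDelayMillis(1_000_000)); // positive: must wait for refill
    }
}
```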



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10213) Add read log size per second metrics for replication source

2014-01-21 Thread cuijianwei (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877443#comment-13877443
 ] 

cuijianwei commented on HBASE-10213:


[~fenghh], thanks for your comment. We need to consider the filtered kvs when 
computing how much data we replicate to the peer cluster. The 
'logReadRateInByte' only computes how much we read from the HLog in the 
source cluster. Maybe we just need another metric such as 
'logReplicateRateInByte'?

 Add read log size per second metrics for replication source
 ---

 Key: HBASE-10213
 URL: https://issues.apache.org/jira/browse/HBASE-10213
 Project: HBase
  Issue Type: Improvement
  Components: metrics, Replication
Affects Versions: 0.94.14
Reporter: cuijianwei
Assignee: cuijianwei
Priority: Minor
 Fix For: 0.98.0, 0.99.0

 Attachments: 10213-trunk-addendum-1.patch, HBASE-10213-0.94-v1.patch, 
 HBASE-10213-0.94-v2.patch, HBASE-10213-trunk-v1.patch


 The current metrics of replication source contain logEditsReadRate, 
 shippedBatchesRate, etc., which indicate to some extent how fast data is 
 replicated to the peer cluster. However, from these metrics it is not clear 
 how many bytes are replicated to the peer cluster. In a production 
 environment, it may be important to know the size of the data replicated per 
 second, because other services may be affected if the network becomes busy.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10143) Clean up dead local stores in FSUtils

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877456#comment-13877456
 ] 

Hudson commented on HBASE-10143:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1676 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1676/])
HBASE-10143 replace WritableFactories's hashmap with ConcurrentHashMap (Liang 
Xie via Stack) (stack: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1559923)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableFactories.java


 Clean up dead local stores in FSUtils
 -

 Key: HBASE-10143
 URL: https://issues.apache.org/jira/browse/HBASE-10143
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.0, 0.96.0, 0.99.0
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Minor
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: HBASE-10143-0.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10143) Clean up dead local stores in FSUtils

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13877467#comment-13877467
 ] 

Hudson commented on HBASE-10143:


SUCCESS: Integrated in Hadoop-Hdfs-trunk #1651 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1651/])
HBASE-10143 replace WritableFactories's hashmap with ConcurrentHashMap (Liang 
Xie via Stack) (stack: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1559923)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableFactories.java


 Clean up dead local stores in FSUtils
 -

 Key: HBASE-10143
 URL: https://issues.apache.org/jira/browse/HBASE-10143
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.0, 0.96.0, 0.99.0
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Minor
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: HBASE-10143-0.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10087) Store should be locked during a memstore snapshot

2014-01-21 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10087:


  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

 Store should be locked during a memstore snapshot
 -

 Key: HBASE-10087
 URL: https://issues.apache.org/jira/browse/HBASE-10087
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.0, 0.96.1, 0.94.14
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.1, 0.99.0

 Attachments: 10079.v1.patch, 10087.v2.patch


 regression from HBASE-9963, found while looking at HBASE-10079.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10087) Store should be locked during a memstore snapshot

2014-01-21 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877494#comment-13877494
 ] 

Nicolas Liochon commented on HBASE-10087:
-

Committed to trunk & 0.98, thanks for the review and the feedback, all.

 Store should be locked during a memstore snapshot
 -

 Key: HBASE-10087
 URL: https://issues.apache.org/jira/browse/HBASE-10087
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.0, 0.96.1, 0.94.14
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.1, 0.99.0

 Attachments: 10079.v1.patch, 10087.v2.patch


 regression from HBASE-9963, found while looking at HBASE-10079.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10375) hbase-default.xml hbase.status.multicast.address.port does not match code

2014-01-21 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10375:


Attachment: 10375.v2.96-98.patch
10375.v2.trunk.patch

 hbase-default.xml hbase.status.multicast.address.port does not match code
 -

 Key: HBASE-10375
 URL: https://issues.apache.org/jira/browse/HBASE-10375
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jonathan Hsieh
Assignee: Nicolas Liochon
 Attachments: 10375.v1.98-96.patch, 10375.v1.patch, 
 10375.v2.96-98.patch, 10375.v2.trunk.patch


 In hbase-default.xml
 {code}
 +  <property>
 +    <name>hbase.status.multicast.address.port</name>
 +    <value>6100</value>
 +    <description>
 +      Multicast port to use for the status publication by multicast.
 +    </description>
 +  </property>
 {code}
 In HConstants it was 60100.
 {code}
   public static final String STATUS_MULTICAST_PORT = "hbase.status.multicast.port";
   public static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;
 {code}
 (it was 60100 in the code for 0.96 and 0.98.)
 I lean towards going with the code as opposed to the config file.
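The mismatch is silent because the code default wins whenever the XML uses a different key name than the one the code reads. A minimal stand-in illustrates this (plain java.util.Properties instead of Hadoop's Configuration; the class below is illustrative, only the key names and defaults come from the issue):

```java
import java.util.Properties;

// Illustrative stand-in (not Hadoop's Configuration): shows why a mis-named
// key in hbase-default.xml silently loses to the default baked into the code.
public class MulticastPortLookup {
    // Key and default as in trunk's HConstants (0.96/0.98 branches used 60100).
    static final String STATUS_MULTICAST_PORT = "hbase.status.multicast.port";
    static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;

    static int resolvePort(Properties conf) {
        String v = conf.getProperty(STATUS_MULTICAST_PORT);
        return v == null ? DEFAULT_STATUS_MULTICAST_PORT : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        // The XML file sets a *different* key, so the code never reads it:
        conf.setProperty("hbase.status.multicast.address.port", "6100");
        System.out.println(resolvePort(conf)); // prints the code default, 16100
    }
}
```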



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10375) hbase-default.xml hbase.status.multicast.address.port does not match code

2014-01-21 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877498#comment-13877498
 ] 

Nicolas Liochon commented on HBASE-10375:
-

I was about to commit, but:
Should we go for hbase.status.multicast.port' or 
hbase.status.multicast.address.port ?

The ip address is named hbase.status.multicast.address.ip

 hbase-default.xml hbase.status.multicast.address.port does not match code
 -

 Key: HBASE-10375
 URL: https://issues.apache.org/jira/browse/HBASE-10375
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jonathan Hsieh
Assignee: Nicolas Liochon
 Attachments: 10375.v1.98-96.patch, 10375.v1.patch, 
 10375.v2.96-98.patch, 10375.v2.trunk.patch


 In hbase-default.xml
 {code}
 +  <property>
 +    <name>hbase.status.multicast.address.port</name>
 +    <value>6100</value>
 +    <description>
 +      Multicast port to use for the status publication by multicast.
 +    </description>
 +  </property>
 {code}
 In HConstants it was 60100.
 {code}
   public static final String STATUS_MULTICAST_PORT = "hbase.status.multicast.port";
   public static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;
 {code}
 (it was 60100 in the code for 0.96 and 0.98.)
 I lean towards going with the code as opposed to the config file.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-3909) Add dynamic config

2014-01-21 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877544#comment-13877544
 ] 

binlijin commented on HBASE-3909:
-

And I can backport it from 0.89-fb and make a patch for trunk.

 Add dynamic config
 --

 Key: HBASE-3909
 URL: https://issues.apache.org/jira/browse/HBASE-3909
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Subbu M Iyer
 Attachments: 3909-102812.patch, 3909-102912.patch, 3909-v1.patch, 
 3909.v1, 3909_090712-2.patch, HBase Cluster Config Details.xlsx, 
 patch-v2.patch, testMasterNoCluster.stack


 I'm sure this issue exists already, at least as part of the discussion around 
 making online schema edits possible, but no harm in this having its own issue.  
 Ted started a conversation on this topic up on dev, and Todd suggested we 
 look at how Hadoop did it over in HADOOP-7001.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-3909) Add dynamic config

2014-01-21 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877542#comment-13877542
 ] 

binlijin commented on HBASE-3909:
-

What about the method which 0.89-fb currently uses?
(1) First, we change every node's hbase-site.xml.
(2) Second, we ask each node to reload the conf so the change takes effect.
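The two steps above can be sketched as a per-node reloadable holder. This is a hypothetical sketch, not an existing HBase API: the observer interface and class names are illustrative, and plain Properties stands in for the real Configuration.

```java
import java.util.List;
import java.util.Properties;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical sketch of step (2): after the updated hbase-site.xml has been
// pushed to a node, re-read it and notify registered components so the change
// takes effect without a restart. Not an actual HBase class.
public class ReloadableConf {
    interface ConfObserver { void onChange(Properties newConf); }

    private volatile Properties current = new Properties();
    private final List<ConfObserver> observers = new CopyOnWriteArrayList<>();

    void register(ConfObserver o) { observers.add(o); }

    void reload(Properties fresh) {
        current = fresh;                     // swap in the new snapshot
        for (ConfObserver o : observers) {
            o.onChange(fresh);               // let each component re-apply settings
        }
    }

    Properties get() { return current; }
}
```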


 Add dynamic config
 --

 Key: HBASE-3909
 URL: https://issues.apache.org/jira/browse/HBASE-3909
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Subbu M Iyer
 Attachments: 3909-102812.patch, 3909-102912.patch, 3909-v1.patch, 
 3909.v1, 3909_090712-2.patch, HBase Cluster Config Details.xlsx, 
 patch-v2.patch, testMasterNoCluster.stack


 I'm sure this issue exists already, at least as part of the discussion around 
 making online schema edits possible, but no harm in this having its own issue.  
 Ted started a conversation on this topic up on dev, and Todd suggested we 
 look at how Hadoop did it over in HADOOP-7001.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-3909) Add dynamic config

2014-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877546#comment-13877546
 ] 

Hadoop QA commented on HBASE-3909:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12551209/3909-102912.patch
  against trunk revision .
  ATTACHMENT ID: 12551209

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8482//console

This message is automatically generated.

 Add dynamic config
 --

 Key: HBASE-3909
 URL: https://issues.apache.org/jira/browse/HBASE-3909
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Subbu M Iyer
 Attachments: 3909-102812.patch, 3909-102912.patch, 3909-v1.patch, 
 3909.v1, 3909_090712-2.patch, HBase Cluster Config Details.xlsx, 
 patch-v2.patch, testMasterNoCluster.stack


 I'm sure this issue exists already, at least as part of the discussion around 
 making online schema edits possible, but no harm in this having its own issue.  
 Ted started a conversation on this topic up on dev, and Todd suggested we 
 look at how Hadoop did it over in HADOOP-7001.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10375) hbase-default.xml hbase.status.multicast.address.port does not match code

2014-01-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877567#comment-13877567
 ] 

stack commented on HBASE-10375:
---

Sounds like hbase.status.multicast.address.port would be following 
convention, so +1.

 hbase-default.xml hbase.status.multicast.address.port does not match code
 -

 Key: HBASE-10375
 URL: https://issues.apache.org/jira/browse/HBASE-10375
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jonathan Hsieh
Assignee: Nicolas Liochon
 Attachments: 10375.v1.98-96.patch, 10375.v1.patch, 
 10375.v2.96-98.patch, 10375.v2.trunk.patch


 In hbase-default.xml
 {code}
 +  <property>
 +    <name>hbase.status.multicast.address.port</name>
 +    <value>6100</value>
 +    <description>
 +      Multicast port to use for the status publication by multicast.
 +    </description>
 +  </property>
 {code}
 In HConstants it was 60100.
 {code}
   public static final String STATUS_MULTICAST_PORT = "hbase.status.multicast.port";
   public static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;
 {code}
 (it was 60100 in the code for 0.96 and 0.98.)
 I lean towards going with the code as opposed to the config file.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer

2014-01-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877570#comment-13877570
 ] 

stack commented on HBASE-10336:
---

I have not made the changes.  Would you mind?  Doing the hadoop keys -- if at 
all -- post commit makes sense.  Put up new patch and I'll commit.

 Remove deprecated usage of Hadoop HttpServer in InfoServer
 --

 Key: HBASE-10336
 URL: https://issues.apache.org/jira/browse/HBASE-10336
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.99.0
Reporter: Eric Charles
Assignee: Eric Charles
 Attachments: HBASE-10336-1.patch, HBASE-10336-2.patch, 
 HBASE-10336-3.patch, HBASE-10336-4.patch, HBASE-10336-5.patch


 Recent changes in Hadoop HttpServer give an NPE when running on hadoop 
 3.0.0-SNAPSHOT. The way we use HttpServer is deprecated and will probably 
 not be fixed (see HDFS-5760). We'd better move to the newly proposed builder 
 pattern, which means we can no longer use inheritance to build our nice 
 InfoServer.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10087) Store should be locked during a memstore snapshot

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877596#comment-13877596
 ] 

Hudson commented on HBASE-10087:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #92 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/92/])
HBASE-10087 Store should be locked during a memstore snapshot (nkeywal: rev 
1560028)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java


 Store should be locked during a memstore snapshot
 -

 Key: HBASE-10087
 URL: https://issues.apache.org/jira/browse/HBASE-10087
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.0, 0.96.1, 0.94.14
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.1, 0.99.0

 Attachments: 10079.v1.patch, 10087.v2.patch


 regression from HBASE-9963, found while looking at HBASE-10079.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-21 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10322:
---

Attachment: HBASE-10322_V5.patch

V5 fixes the javadoc warnings.

 Strip tags from KV while sending back to client on reads
 

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_V3.patch, HBASE-10322_V4.patch, HBASE-10322_V5.patch, 
 HBASE-10322_codec.patch


 Right now we have some inconsistency wrt sending back tags on read. We do 
 this in scans when using the Java client (codec-based cell block encoding), but 
 during a Get operation, or when a pure PB-based Scan comes in, we are not 
 sending back the tags. So we have to do one of the fixes below:
 1. Send back tags in the missing cases also. But sending back the visibility 
 expression/cell ACL is not correct.
 2. Don't send back tags in any case. This will be a problem when a tool like 
 ExportTool uses the scan to export the table data: we will miss exporting the 
 cell visibility/ACL.
 3. Send back tags based on some condition. It has to be on a per-scan basis. 
 The simplest way is to pass some kind of attribute in the Scan which says 
 whether to send back tags or not. But trusting whatever the scan specifies 
 might not be correct IMO. The alternative is to check the user who is doing 
 the scan: send back tags only when an HBase superuser is doing the scan. So 
 for a case like the Export Tool's, the execution should happen as a 
 superuser.
 So IMO we should go with #3.
 Patch coming soon.
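Option #3's server-side check can be sketched in a few lines. This is a hedged illustration of the idea only; the class and method names are hypothetical, not the patch's actual code:

```java
import java.util.Set;

// Sketch of option #3: decide server-side, per request, whether to ship tags,
// trusting the authenticated identity rather than a client-set Scan attribute
// (which a client could forge). Names here are illustrative.
public class TagShippingPolicy {
    private final Set<String> superUsers;

    TagShippingPolicy(Set<String> superUsers) {
        this.superUsers = superUsers;
    }

    boolean shouldSendTags(String authenticatedUser) {
        // Only a superuser (e.g. the one running the Export Tool) gets tags back.
        return superUsers.contains(authenticatedUser);
    }
}
```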



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer

2014-01-21 Thread Eric Charles (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877623#comment-13877623
 ] 

Eric Charles commented on HBASE-10336:
--

Ok [~stack]. I'm a bit short at time today, so it will be for tomorrow - will 
look also at the hadoop keys.

 Remove deprecated usage of Hadoop HttpServer in InfoServer
 --

 Key: HBASE-10336
 URL: https://issues.apache.org/jira/browse/HBASE-10336
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.99.0
Reporter: Eric Charles
Assignee: Eric Charles
 Attachments: HBASE-10336-1.patch, HBASE-10336-2.patch, 
 HBASE-10336-3.patch, HBASE-10336-4.patch, HBASE-10336-5.patch


 Recent changes in Hadoop HttpServer give an NPE when running on hadoop 
 3.0.0-SNAPSHOT. The way we use HttpServer is deprecated and will probably 
 not be fixed (see HDFS-5760). We'd better move to the newly proposed builder 
 pattern, which means we can no longer use inheritance to build our nice 
 InfoServer.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10383) Secure Bulk Load for 'completebulkload' fails for version 0.94.15

2014-01-21 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877628#comment-13877628
 ] 

Francis Liu commented on HBASE-10383:
-

Kerberos authentication is not needed to test this API. There are actually two 
unit test classes:

-TestSecureLoadIncrementalHFiles
-TestSecureLoadIncrementalHFilesSplitRecovery

It seems something has caused these tests to run in non-secure mode thus not 
using the SecureBulkLoadClient and causing these tests to falsely pass.

 Secure Bulk Load for 'completebulkload' fails for version 0.94.15
 -

 Key: HBASE-10383
 URL: https://issues.apache.org/jira/browse/HBASE-10383
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.94.15
Reporter: Kashif J S
Assignee: Kashif J S
Priority: Critical
 Fix For: 0.94.17

 Attachments: 10383.txt, HBASE-10383-v2.patch


 Secure Bulk Load with kerberos enabled fails for Complete Bulk 
 Load (LoadIncrementalHFiles) with the following exception: ERROR 
 org.apache.hadoop.hbase.regionserver.HRegionServer: 
 org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching 
 handler for protocol 
 org.apache.hadoop.hbase.security.access.SecureBulkLoadProtocol in region 
 t1,,1389699438035.28bb0284d971d0676cf562efea80199b.
  at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java)
  at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java)
  at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
  at java.lang.reflect.Method.invoke(Method.java)
  at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java)
  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java) 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10087) Store should be locked during a memstore snapshot

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877625#comment-13877625
 ] 

Hudson commented on HBASE-10087:


FAILURE: Integrated in HBase-TRUNK #4843 (See 
[https://builds.apache.org/job/HBase-TRUNK/4843/])
HBASE-10087 Store should be locked during a memstore snapshot (nkeywal: rev 
1560018)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java


 Store should be locked during a memstore snapshot
 -

 Key: HBASE-10087
 URL: https://issues.apache.org/jira/browse/HBASE-10087
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.0, 0.96.1, 0.94.14
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.1, 0.99.0

 Attachments: 10079.v1.patch, 10087.v2.patch


 regression from HBASE-9963, found while looking at HBASE-10079.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10391) Deprecate KeyValue#getBuffer

2014-01-21 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877633#comment-13877633
 ] 

Andrew Purtell commented on HBASE-10391:


Sure, belated +1

 Deprecate KeyValue#getBuffer
 

 Key: HBASE-10391
 URL: https://issues.apache.org/jira/browse/HBASE-10391
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.99.0

 Attachments: 10391.txt


 Make the deprecation a subtask of the parent.  Let the parent stand as an 
 umbrella issue.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-7320) Replace calls to KeyValue.getBuffer with appropropriate calls to getRowArray, getFamilyArray(), getQualifierArray, and getValueArray

2014-01-21 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877637#comment-13877637
 ] 

Andrew Purtell commented on HBASE-7320:
---

+1 for the deprecation. It already went in on the subtask anyhow.

bq. This is going to be a fun project. I started to look at all the times we 
get the family array. It's a bunch just to check if the KV family is legit 
client-side in the individual types 

We get both family and qualifier arrays in the access controller because we 
have to look in descending order at perms for global, namespace, table, cf, cf 
+ qualifier, cell.

 Replace calls to KeyValue.getBuffer with appropropriate calls to getRowArray, 
 getFamilyArray(), getQualifierArray, and getValueArray
 

 Key: HBASE-7320
 URL: https://issues.apache.org/jira/browse/HBASE-7320
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: stack
 Fix For: 0.98.0


 In many places this is a simple task of just replacing the method name.
 There are, however, quite a few places where we assume that:
 # the entire KV is backed by a single byte array
 # the KV's key portion is backed by a single byte array
 Some of those can easily be fixed, others will need their own jiras.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10383) Secure Bulk Load for 'completebulkload' fails for version 0.94.15

2014-01-21 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877630#comment-13877630
 ] 

Francis Liu commented on HBASE-10383:
-

Looks like this line:
{code}
UserProvider.setUserProviderForTesting(util.getConfiguration(),
  HadoopSecurityEnabledUserProviderForTesting.class);
{code}

 Secure Bulk Load for 'completebulkload' fails for version 0.94.15
 -

 Key: HBASE-10383
 URL: https://issues.apache.org/jira/browse/HBASE-10383
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.94.15
Reporter: Kashif J S
Assignee: Kashif J S
Priority: Critical
 Fix For: 0.94.17

 Attachments: 10383.txt, HBASE-10383-v2.patch


 Secure Bulk Load with kerberos enabled fails for Complete Bulk 
 Load (LoadIncrementalHFiles) with the following exception: ERROR 
 org.apache.hadoop.hbase.regionserver.HRegionServer: 
 org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching 
 handler for protocol 
 org.apache.hadoop.hbase.security.access.SecureBulkLoadProtocol in region 
 t1,,1389699438035.28bb0284d971d0676cf562efea80199b.
  at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java)
  at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java)
  at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
  at java.lang.reflect.Method.invoke(Method.java)
  at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java)
  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java) 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10383) Secure Bulk Load for 'completebulkload' fails for version 0.94.15

2014-01-21 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877647#comment-13877647
 ] 

Francis Liu commented on HBASE-10383:
-

Looks like the constructor to force LoadIncrementalHFiles to use secure 
mode has been removed. I manually changed useSecure to true, and it looks like 
it's failing on renaming files. 

 Secure Bulk Load for 'completebulkload' fails for version 0.94.15
 -

 Key: HBASE-10383
 URL: https://issues.apache.org/jira/browse/HBASE-10383
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.94.15
Reporter: Kashif J S
Assignee: Kashif J S
Priority: Critical
 Fix For: 0.94.17

 Attachments: 10383.txt, HBASE-10383-v2.patch


 Secure Bulk Load with kerberos enabled fails for Complete Bulk 
 Load (LoadIncrementalHFiles) with the following exception: ERROR 
 org.apache.hadoop.hbase.regionserver.HRegionServer: 
 org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching 
 handler for protocol 
 org.apache.hadoop.hbase.security.access.SecureBulkLoadProtocol in region 
 t1,,1389699438035.28bb0284d971d0676cf562efea80199b.
  at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java)
  at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java)
  at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
  at java.lang.reflect.Method.invoke(Method.java)
  at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java)
  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java) 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-21 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877651#comment-13877651
 ] 

Andrew Purtell commented on HBASE-10322:


bq. I'd be +1 on committing this in meantime.

+1 for commit of v5 as is. Thanks for seeing this through Anoop and Ram. 

Please add a release note about HConstants.REPLICATION_CODEC_CONF_KEY.

Fix the comment above this change on commit:
{noformat}
diff --git 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClient.java 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClient.java
index a612b18..305a76a 100644
--- hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClient.java
+++ hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClient.java
@@ -1300,7 +1300,7 @@ public class RpcClient {
   Codec getCodec() {
 // For NO CODEC, hbase.client.rpc.codec must be the empty string AND
 // hbase.client.default.rpc.codec -- because default is to do cell block 
encoding.
-String className = conf.get("hbase.client.rpc.codec", 
getDefaultCodec(this.conf));
+String className = conf.get(HConstants.RPC_CODEC_CONF_KEY, 
getDefaultCodec(this.conf));
 if (className == null || className.length() == 0) return null;
 try {
   return (Codec)Class.forName(className).newInstance();
{noformat}
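The lookup pattern the diff touches can be shown standalone. This is a sketch of the general "class name from config, empty string means no codec, instantiate reflectively" approach; Codec and KeyValueCodec below are stand-ins, not HBase's real classes:

```java
import java.util.Properties;

// Standalone sketch of the RpcClient#getCodec pattern: read a class name from
// config (falling back to a default), treat the empty string as "no codec",
// and instantiate the class reflectively. All types here are stand-ins.
public class CodecLookup {
    static final String RPC_CODEC_CONF_KEY = "hbase.client.rpc.codec";

    public interface Codec {}
    public static class KeyValueCodec implements Codec {}

    static Codec getCodec(Properties conf, String defaultCodec) throws Exception {
        String className = conf.getProperty(RPC_CODEC_CONF_KEY, defaultCodec);
        if (className == null || className.isEmpty()) {
            return null; // no cell-block encoding on this connection
        }
        return (Codec) Class.forName(className).getDeclaredConstructor().newInstance();
    }
}
```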

bq. Andrew can make the call on whether to wait on test completion before 
RC'ing or not.

I'd really like a test if we get one in. Would have to be a LargeTest given it 
spins up two clusters. If it proves difficult then we can skip it. Or, if it 
flakes during RC testing, we can revert it for 0.98.1. Therefore it makes sense 
to me to do it as a follow up issue. 

Please also update the replication section of the manual to inform the user 
what HConstants.REPLICATION_CODEC_CONF_KEY does. We also need an update of the 
section talking about tags that setting HConstants.REPLICATION_CODEC_CONF_KEY 
to the tags-aware codec is required to replicate tags from one cluster to 
another. 

[~stack]: The security coprocessors use operation attributes to ship metadata 
to the server. The downside is you have to take care because all cells bundled 
in the op will get the same metadata and the server has to rewrite the incoming 
cells, but the upside is we don't care about any limitations we might have with 
tags on the client. We can make tags first class for 1.0. We will have to 
look at things like negotiating codecs on the connection at that time.

 Strip tags from KV while sending back to client on reads
 

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_V3.patch, HBASE-10322_V4.patch, HBASE-10322_V5.patch, 
 HBASE-10322_codec.patch


 Right now we have some inconsistency wrt sending back tags on read. We do 
 this in scans when using the Java client (codec-based cell block encoding), but 
 during a Get operation, or when a pure PB-based Scan comes in, we are not 
 sending back the tags. So we have to do one of the fixes below:
 1. Send back tags in the missing cases also. But sending back the visibility 
 expression/cell ACL is not correct.
 2. Don't send back tags in any case. This will be a problem when a tool like 
 ExportTool uses the scan to export the table data: we will miss exporting the 
 cell visibility/ACL.
 3. Send back tags based on some condition. It has to be on a per-scan basis. 
 The simplest way is to pass some kind of attribute in the Scan which says 
 whether to send back tags or not. But trusting whatever the scan specifies 
 might not be correct IMO. The alternative is to check the user who is doing 
 the scan: send back tags only when an HBase superuser is doing the scan. So 
 for a case like the Export Tool's, the execution should happen as a 
 superuser.
 So IMO we should go with #3.
 Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-21 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877659#comment-13877659
 ] 

Andrew Purtell commented on HBASE-10322:


And if making tags first class we will have to look at thorny issues like 
deciding what user is allowed to attach what tags or what user is allowed to 
see what tags. There's a lot involved with it and I think we will end up 
punting on it all, but we can save this discussion for the next JIRA

 Strip tags from KV while sending back to client on reads
 

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_V3.patch, HBASE-10322_V4.patch, HBASE-10322_V5.patch, 
 HBASE-10322_codec.patch


 Right now we have some inconsistency wrt sending back tags on read. We do 
 this in scans when using the Java client (codec-based cell block encoding), but 
 during a Get operation, or when a pure PB-based Scan comes in, we are not 
 sending back the tags. So we have to do one of the fixes below:
 1. Send back tags in the missing cases also. But sending back the visibility 
 expression/cell ACL is not correct.
 2. Don't send back tags in any case. This will be a problem when a tool like 
 ExportTool uses the scan to export the table data: we will miss exporting the 
 cell visibility/ACL.
 3. Send back tags based on some condition. It has to be on a per-scan basis. 
 The simplest way is to pass some kind of attribute in the Scan which says 
 whether to send back tags or not. But trusting whatever the scan specifies 
 might not be correct IMO. The alternative is to check the user who is doing 
 the scan: send back tags only when an HBase superuser is doing the scan. So 
 for a case like the Export Tool's, the execution should happen as a 
 superuser.
 So IMO we should go with #3.
 Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877662#comment-13877662
 ] 

Anoop Sam John commented on HBASE-10322:


bq.Would have to be a LargeTest given it spins up two clusters. If it proves 
difficult then we can skip it.
Pls see TestReplicationWithTags in the patch

 Strip tags from KV while sending back to client on reads
 

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_V3.patch, HBASE-10322_V4.patch, HBASE-10322_V5.patch, 
 HBASE-10322_codec.patch


 Right now we have some inconsistency wrt sending back tags on read. We do 
 this in scans when using the Java client (codec-based cell block encoding), but 
 during a Get operation, or when a pure PB-based Scan comes in, we are not 
 sending back the tags. So we have to do one of the fixes below:
 1. Send back tags in the missing cases also. But sending back the visibility 
 expression/cell ACL is not correct.
 2. Don't send back tags in any case. This will be a problem when a tool like 
 ExportTool uses the scan to export the table data: we will miss exporting the 
 cell visibility/ACL.
 3. Send back tags based on some condition. It has to be on a per-scan basis. 
 The simplest way is to pass some kind of attribute in the Scan which says 
 whether to send back tags or not. But trusting whatever the scan specifies 
 might not be correct IMO. The alternative is to check the user who is doing 
 the scan: send back tags only when an HBase superuser is doing the scan. So 
 for a case like the Export Tool's, the execution should happen as a 
 superuser.
 So IMO we should go with #3.
 Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10375) hbase-default.xml hbase.status.multicast.address.port does not match code

2014-01-21 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877664#comment-13877664
 ] 

Andrew Purtell commented on HBASE-10375:


+1 for hbase.status.multicast.address.port

 hbase-default.xml hbase.status.multicast.address.port does not match code
 -

 Key: HBASE-10375
 URL: https://issues.apache.org/jira/browse/HBASE-10375
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jonathan Hsieh
Assignee: Nicolas Liochon
 Attachments: 10375.v1.98-96.patch, 10375.v1.patch, 
 10375.v2.96-98.patch, 10375.v2.trunk.patch


 In hbase-default.xml
 {code}
 +  <property>
 +    <name>hbase.status.multicast.address.port</name>
 +    <value>6100</value>
 +    <description>
 +      Multicast port to use for the status publication by multicast.
 +    </description>
 +  </property>
 {code}
 In HConstants it is 16100:
 {code}
   public static final String STATUS_MULTICAST_PORT = 
       "hbase.status.multicast.port";
   public static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;
 {code}
 (it was 60100 in the code for 0.96 and 0.98.)
 I lean towards going with the code as opposed to the config file.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-21 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877665#comment-13877665
 ] 

Andrew Purtell commented on HBASE-10322:


bq. Pls see TestReplicationWithTags in the patch

Yes, can we split this out to a subtask and commit the rest now?

 Strip tags from KV while sending back to client on reads
 

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_V3.patch, HBASE-10322_V4.patch, HBASE-10322_V5.patch, 
 HBASE-10322_codec.patch





--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-21 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877665#comment-13877665
 ] 

Andrew Purtell edited comment on HBASE-10322 at 1/21/14 5:57 PM:
-

bq. Pls see TestReplicationWithTags in the patch

Yes, can we split this out to a subtask and commit the rest now? 
Edit: Also please see comments above about manual updates. Could be done at the 
same time.


was (Author: apurtell):
bq. Pls see TestReplicationWithTags in the patch

Yes, can we split this out to a subtask and commit the rest now?

 Strip tags from KV while sending back to client on reads
 

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_V3.patch, HBASE-10322_V4.patch, HBASE-10322_V5.patch, 
 HBASE-10322_codec.patch





--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10249) Intermittent TestReplicationSyncUpTool failure

2014-01-21 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877701#comment-13877701
 ] 

Jean-Daniel Cryans commented on HBASE-10249:


bq. He's living the life.

Are you not? :)

bq. Should change the title to something more descriptive since this in an 
actual bug in the replication code.

Well the tool is racy. It can still fail, but it's much much less likely. Agree 
the title needs to be changed.

bq. The condition here is always false, so removing has no effect, right?

The check just happens sooner now.

bq. Looks good to me, I'll do some more test and commit if all looks good.

Since I'm back from $exotic_location, you mind if I commit? Your testing came 
back ok?

 Intermittent TestReplicationSyncUpTool failure
 --

 Key: HBASE-10249
 URL: https://issues.apache.org/jira/browse/HBASE-10249
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Jean-Daniel Cryans
 Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17

 Attachments: HBASE-10249-0.94-v0.patch, HBASE-10249-0.94-v1.patch, 
 HBASE-10249-trunk-v0.patch, HBASE-10249-trunk-v1.patch


 New issue to keep track of this.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877721#comment-13877721
 ] 

Hadoop QA commented on HBASE-10322:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12624161/HBASE-10322_V5.patch
  against trunk revision .
  ATTACHMENT ID: 12624161

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 23 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8483//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8483//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8483//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8483//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8483//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8483//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8483//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8483//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8483//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8483//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8483//console

This message is automatically generated.

 Strip tags from KV while sending back to client on reads
 

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_V3.patch, HBASE-10322_V4.patch, HBASE-10322_V5.patch, 
 HBASE-10322_codec.patch





--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-7320) Replace calls to KeyValue.getBuffer with appropropriate calls to getRowArray, getFamilyArray(), getQualifierArray, and getValueArray

2014-01-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877745#comment-13877745
 ] 

Lars Hofhansl commented on HBASE-7320:
--

bq.  I started to look at all the times we get the family array.

That's legit, I think. Getting the row, family, qualifier and value via their 
own xyzArray methods is OK.
We can assume that the row, family, qualifier, and value are each laid out in 
RAM in at least a byte[] (or ByteBuffer). What we cannot assume is that there is 
any layout relationship between them.

What is not OK is, at minimum, these KeyValue methods:
* getBuffer
* getOffset
* getLength
* getKeyOffset
* getKeyLength
* getKey/getKeyString

As we should not assume that row/family/qualifier are laid out together, nor 
that the entire KV is laid out together.



 Replace calls to KeyValue.getBuffer with appropropriate calls to getRowArray, 
 getFamilyArray(), getQualifierArray, and getValueArray
 

 Key: HBASE-7320
 URL: https://issues.apache.org/jira/browse/HBASE-7320
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: stack
 Fix For: 0.98.0


 In many places this is simple task of just replacing the method name.
 There, however, quite a few places where we assume that:
 # the entire KV is backed by a single byte array
 # the KVs key portion is backed by a single byte array
 Some of those can easily be fixed, others will need their own jiras.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10383) Secure Bulk Load for 'completebulkload' fails for version 0.94.15

2014-01-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877749#comment-13877749
 ] 

Lars Hofhansl commented on HBASE-10383:
---

Yep. I found that too. HadoopSecurityEnabledUserProviderForTesting returns 
false from isHBaseSecurityEnabled. Need to fix that.

 Secure Bulk Load for 'completebulkload' fails for version 0.94.15
 -

 Key: HBASE-10383
 URL: https://issues.apache.org/jira/browse/HBASE-10383
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.94.15
Reporter: Kashif J S
Assignee: Kashif J S
Priority: Critical
 Fix For: 0.94.17

 Attachments: 10383.txt, HBASE-10383-v2.patch


 Secure bulk load with Kerberos enabled fails for complete bulk load 
 (LoadIncrementalHFiles) with the following exception: ERROR 
 org.apache.hadoop.hbase.regionserver.HRegionServer: 
 org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching 
 handler for protocol 
 org.apache.hadoop.hbase.security.access.SecureBulkLoadProtocol in region 
 t1,,1389699438035.28bb0284d971d0676cf562efea80199b.
  at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java)
  at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java)
  at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
  at java.lang.reflect.Method.invoke(Method.java)
  at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java)
  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java) 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10249) Intermittent TestReplicationSyncUpTool failure

2014-01-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877763#comment-13877763
 ] 

Lars Hofhansl commented on HBASE-10249:
---

Didn't get to test this. But it looks good. +1 on commit.

 Intermittent TestReplicationSyncUpTool failure
 --

 Key: HBASE-10249
 URL: https://issues.apache.org/jira/browse/HBASE-10249
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Jean-Daniel Cryans
 Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17

 Attachments: HBASE-10249-0.94-v0.patch, HBASE-10249-0.94-v1.patch, 
 HBASE-10249-trunk-v0.patch, HBASE-10249-trunk-v1.patch


 New issue to keep track of this.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-7404) Bucket Cache:A solution about CMS,Heap Fragment and Big Cache on HBASE

2014-01-21 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877833#comment-13877833
 ] 

Vladimir Rodionov commented on HBASE-7404:
--

Although I am not a big fan of this implementation (BucketCache), I still think 
that nobody has actually tried it in a real application - only in synthetic 
benchmarks. Keeping INDEX and BLOOM blocks on heap and DATA blocks off heap is a 
very reasonable approach, taking into account that DATA blocks take ~95% of the 
space but only 33% of the accesses (get INDEX, get BLOOM, get DATA - correct?). 
Therefore 2/3 of ALL block cache requests would be served from the fast on-heap 
cache. Deserialization of a serialized block is limited only by memory 
bandwidth, and even at a modest 1GB per sec per CPU core we can get ~15K blocks 
per sec per CPU core. Definitely not a bottleneck if one takes into account 
HBase network stack limitations as well. 
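
The ~15K figure is back-of-the-envelope arithmetic; a sketch of it, assuming the 
default 64 KB HFile block size and the 1 GB/s-per-core bandwidth quoted above 
(both assumptions, not measurements):

```java
public class BlockDeserThroughput {
    public static void main(String[] args) {
        long bandwidthBytesPerSec = 1L << 30; // assumed: 1 GB/s per CPU core
        long blockSizeBytes = 64L * 1024;     // assumed: default 64 KB HFile block
        // Blocks deserialized per second per core if memory bandwidth is the limit.
        System.out.println(bandwidthBytesPerSec / blockSizeBytes); // prints 16384
    }
}
```

~16K blocks/sec/core, consistent with the ~15K quoted above.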



 

 Bucket Cache:A solution about CMS,Heap Fragment and Big Cache on HBASE
 --

 Key: HBASE-7404
 URL: https://issues.apache.org/jira/browse/HBASE-7404
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.94.3
Reporter: chunhui shen
Assignee: chunhui shen
 Fix For: 0.95.0

 Attachments: 7404-0.94-fixed-lines.txt, 7404-trunk-v10.patch, 
 7404-trunk-v11.patch, 7404-trunk-v12.patch, 7404-trunk-v13.patch, 
 7404-trunk-v13.txt, 7404-trunk-v14.patch, BucketCache.pdf, 
 HBASE-7404-backport-0.94.patch, Introduction of Bucket Cache.pdf, 
 hbase-7404-94v2.patch, hbase-7404-trunkv2.patch, hbase-7404-trunkv9.patch


 First, thanks to @neil from Fusion-IO for sharing the source code.
 Usage:
 1. Use bucket cache as the main memory cache, configured as follows:
 - hbase.bucketcache.ioengine heap
 - hbase.bucketcache.size 0.4 (size for bucket cache; 0.4 is a percentage of 
 max heap size)
 2. Use bucket cache as a secondary cache, configured as follows:
 - hbase.bucketcache.ioengine file:/disk1/hbase/cache.data (the file path 
 where the block data is stored)
 - hbase.bucketcache.size 1024 (size for bucket cache; the unit is MB, so 1024 
 means 1GB)
 - hbase.bucketcache.combinedcache.enabled false (default value being true)
 See more configurations in org.apache.hadoop.hbase.io.hfile.CacheConfig and 
 org.apache.hadoop.hbase.io.hfile.bucket.BucketCache
 What's Bucket Cache? 
 It can greatly decrease CMS and heap fragmentation caused by GC
 It supports a large cache space for high read performance by using a high 
 speed disk like Fusion-io
 1. An implementation of block cache like LruBlockCache
 2. Manages blocks' storage positions itself through the Bucket Allocator
 3. The cached blocks can be stored in memory or on the file system
 4. Bucket Cache can be used as the main block cache (see CombinedBlockCache), 
 combined with LruBlockCache, to decrease CMS and fragmentation caused by GC
 5. BucketCache can also be used as a secondary cache (e.g. using Fusion-io to 
 store blocks) to enlarge the cache space
 How about SlabCache?
 We studied and tested SlabCache first, but the results were bad, because:
 1. SlabCache uses SingleSizeCache, whose memory utilization is low because of 
 the many different block sizes, especially when using DataBlockEncoding
 2. SlabCache is used in DoubleBlockCache: a block is cached both in SlabCache 
 and LruBlockCache, and is put into LruBlockCache again on a SlabCache hit, so 
 CMS and heap fragmentation don't get any better
 3. Direct (off-heap) performance is not as good as on-heap, and it may cause 
 OOM, so we recommend using the heap engine 
 See more in the attachment and in the patch



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10249) TestReplicationSyncUpTool fails because failover takes too long

2014-01-21 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-10249:
---

Summary: TestReplicationSyncUpTool fails because failover takes too long  
(was: Intermittent TestReplicationSyncUpTool failure)

 TestReplicationSyncUpTool fails because failover takes too long
 ---

 Key: HBASE-10249
 URL: https://issues.apache.org/jira/browse/HBASE-10249
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Jean-Daniel Cryans
 Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17

 Attachments: HBASE-10249-0.94-v0.patch, HBASE-10249-0.94-v1.patch, 
 HBASE-10249-trunk-v0.patch, HBASE-10249-trunk-v1.patch


 New issue to keep track of this.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10249) TestReplicationSyncUpTool fails because failover takes too long

2014-01-21 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-10249:
---

  Resolution: Fixed
Release Note: This change also fixes a potential data loss issue when using 
ZK multi actions because region servers could try to failover themselves (the 
replication sync up tool acts as a RS too)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed everywhere, thanks for the reviews guys and sorry I was off drinking 
tequila for a few days.

 TestReplicationSyncUpTool fails because failover takes too long
 ---

 Key: HBASE-10249
 URL: https://issues.apache.org/jira/browse/HBASE-10249
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Jean-Daniel Cryans
 Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17

 Attachments: HBASE-10249-0.94-v0.patch, HBASE-10249-0.94-v1.patch, 
 HBASE-10249-trunk-v0.patch, HBASE-10249-trunk-v1.patch


 New issue to keep track of this.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-4135) Provide unified method of accessing web UI in case of master failover

2014-01-21 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877892#comment-13877892
 ] 

Esteban Gutierrez commented on HBASE-4135:
--

[~yuzhih...@gmail.com] would having the servlet running on the backup master 
and returning an {{HTTP 302}} with the new location be an option?
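
A minimal sketch of that option using only the JDK's built-in 
com.sun.net.httpserver (a stand-in, not HBase's actual InfoServer; the 
active-master URL is a made-up placeholder that a real backup master would 
resolve from ZooKeeper): the backup answers every UI request with an HTTP 302 
pointing at the active master's location.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class RedirectSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical active-master location; in a real deployment the
        // backup master would look this up from ZooKeeper.
        String activeMaster = "http://active-master.example:60010/master.jsp";

        // "Backup master" UI: redirect every request to the active master.
        HttpServer backup = HttpServer.create(new InetSocketAddress(0), 0);
        backup.createContext("/master.jsp", exchange -> {
            exchange.getResponseHeaders().set("Location", activeMaster);
            exchange.sendResponseHeaders(302, -1); // 302 Found, no body
            exchange.close();
        });
        backup.start();

        // Client side: inspect the redirect instead of following it.
        HttpURLConnection conn = (HttpURLConnection) new URL(
                "http://localhost:" + backup.getAddress().getPort() + "/master.jsp")
                .openConnection();
        conn.setInstanceFollowRedirects(false);
        System.out.println(conn.getResponseCode() + " " + conn.getHeaderField("Location"));
        backup.stop(0);
    }
}
```

Browsers follow the 302 automatically, so users could bookmark the backup's 
address and still land on the active master.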


 Provide unified method of accessing web UI in case of master failover
 -

 Key: HBASE-4135
 URL: https://issues.apache.org/jira/browse/HBASE-4135
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu

 Previously we used servername:60010/master.jsp to access the web UI.
 In case of master failover, the above wouldn't work.
 It is desirable to provide a unified method of accessing the web UI in case of 
 master failover, e.g. we could create a simple servlet hosted by ZooKeeper that 
 redirects requests to the active master.
  



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10249) TestReplicationSyncUpTool fails because failover takes too long

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877928#comment-13877928
 ] 

Hudson commented on HBASE-10249:


FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #2 (See 
[https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/2/])
HBASE-10249 TestReplicationSyncUpTool fails because failover takes too long 
(jdcryans: rev 1560198)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/replication/ReplicationZookeeper.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java


 TestReplicationSyncUpTool fails because failover takes too long
 ---

 Key: HBASE-10249
 URL: https://issues.apache.org/jira/browse/HBASE-10249
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Jean-Daniel Cryans
 Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17

 Attachments: HBASE-10249-0.94-v0.patch, HBASE-10249-0.94-v1.patch, 
 HBASE-10249-trunk-v0.patch, HBASE-10249-trunk-v1.patch


 New issue to keep track of this.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10249) TestReplicationSyncUpTool fails because failover takes too long

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877930#comment-13877930
 ] 

Hudson commented on HBASE-10249:


FAILURE: Integrated in HBase-0.94-security #391 (See 
[https://builds.apache.org/job/HBase-0.94-security/391/])
HBASE-10249 TestReplicationSyncUpTool fails because failover takes too long 
(jdcryans: rev 1560198)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/replication/ReplicationZookeeper.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java


 TestReplicationSyncUpTool fails because failover takes too long
 ---

 Key: HBASE-10249
 URL: https://issues.apache.org/jira/browse/HBASE-10249
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Jean-Daniel Cryans
 Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17

 Attachments: HBASE-10249-0.94-v0.patch, HBASE-10249-0.94-v1.patch, 
 HBASE-10249-trunk-v0.patch, HBASE-10249-trunk-v1.patch


 New issue to keep track of this.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10389) Add namespace help info in table related shell commands

2014-01-21 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877937#comment-13877937
 ] 

Jonathan Hsieh commented on HBASE-10389:


sounds great.  is a patch coming?

 Add namespace help info in table related shell commands
 ---

 Key: HBASE-10389
 URL: https://issues.apache.org/jira/browse/HBASE-10389
 Project: HBase
  Issue Type: Improvement
  Components: shell
Affects Versions: 0.96.0, 0.96.1
Reporter: Jerry He
 Fix For: 0.98.0, 0.96.2


 Currently, in the help info of table related shell commands, we don't mention 
 or show the namespace as part of the table name.  
 For example, to create a table:
 {code}
 hbase(main):001:0> help 'create'
 Creates a table. Pass a table name, and a set of column family
 specifications (at least one), and, optionally, table configuration.
 Column specification can be a simple string (name), or a dictionary
 (dictionaries are described below in main help output), necessarily
 including NAME attribute.
 Examples:
   hbase> create 't1', {NAME => 'f1', VERSIONS => 5}
   hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
   hbase> # The above in shorthand would be the following:
   hbase> create 't1', 'f1', 'f2', 'f3'
   hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, 
 BLOCKCACHE => true}
   hbase> create 't1', {NAME => 'f1', CONFIGURATION => 
 {'hbase.hstore.blockingStoreFiles' => '10'}}
 Table configuration options can be put at the end.
 Examples:
   hbase> create 't1', 'f1', SPLITS => ['10', '20', '30', '40']
   hbase> create 't1', 'f1', SPLITS_FILE => 'splits.txt', OWNER => 'johndoe'
   hbase> create 't1', {NAME => 'f1', VERSIONS => 5}, METADATA => { 'mykey' => 
 'myvalue' }
   hbase> # Optionally pre-split the table into NUMREGIONS, using
   hbase> # SPLITALGO (HexStringSplit, UniformSplit or classname)
   hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}
   hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit', 
 CONFIGURATION => {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}}
 You can also keep around a reference to the created table:
   hbase> t1 = create 't1', 'f1'
 Which gives you a reference to the table named 't1', on which you can then
 call methods.
 {code}
 We should document the usage of namespace in these commands.
 For example:
 #namespace=foo and table qualifier=bar
 create 'foo:bar', 'fam'
 #namespace=default and table qualifier=bar
 create 'bar', 'fam'



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10365) HBaseFsck should clean up connection properly when repair is completed

2014-01-21 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877947#comment-13877947
 ] 

Jonathan Hsieh commented on HBASE-10365:


lgtm. +1.

 HBaseFsck should clean up connection properly when repair is completed
 --

 Key: HBASE-10365
 URL: https://issues.apache.org/jira/browse/HBASE-10365
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10365-v1.txt


 At the end of exec() method, connections to the cluster are not properly 
 released.
 Connections should be released upon completion of repair.
 This was mentioned by Jean-Marc in the thread '[VOTE] The 1st hbase 0.94.16 
 release candidate is available for download'



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit

2014-01-21 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-10392:


 Summary: Correct references to 
hbase.regionserver.global.memstore.upperLimit
 Key: HBASE-10392
 URL: https://issues.apache.org/jira/browse/HBASE-10392
 Project: HBase
  Issue Type: Bug
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.99.0


As part of the awesome new HBASE-5349, a couple of references to 
{{hbase.regionserver.global.memstore.upperLimit}} were missed. Clean those up to 
use the new {{hbase.regionserver.global.memstore.size}} instead.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-4027) Enable direct byte buffers LruBlockCache

2014-01-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-4027:
-

Release Note: Set hbase.offheapcachesize in hbase-site.xml and 
-XX:MaxDirectMemorySize in hbase-env.sh to enable this feature. The file 
already has a line you can uncomment and you need to set the size of the direct 
memory (your total memory - size allocated to memstores - size allocated to the 
normal block cache - and some head room for the other functionalities).  (was: 
Setting -XX:MaxDirectMemorySize in hbase-env.sh enables this feature. The file 
already has a line you can uncomment and you need to set the size of the direct 
memory (your total memory - size allocated to memstores - size allocated to the 
normal block cache - some head room for the other functionalities).)

 Enable direct byte buffers LruBlockCache
 

 Key: HBASE-4027
 URL: https://issues.apache.org/jira/browse/HBASE-4027
 Project: HBase
  Issue Type: Improvement
Reporter: Jason Rutherglen
Assignee: Li Pi
Priority: Minor
 Fix For: 0.92.0

 Attachments: 4027-v5.diff, 4027v7.diff, HBase-4027 (1).pdf, 
 HBase-4027.pdf, HBase4027v8.diff, HBase4027v9.diff, hbase-4027-v10.5.diff, 
 hbase-4027-v10.diff, hbase-4027v10.6.diff, hbase-4027v13.1.diff, 
 hbase-4027v15.3.diff, hbase-4027v6.diff, hbase4027v11.5.diff, 
 hbase4027v11.6.diff, hbase4027v11.7.diff, hbase4027v11.diff, 
 hbase4027v12.1.diff, hbase4027v12.diff, hbase4027v15.2.diff, 
 slabcachepatch.diff, slabcachepatchv2.diff, slabcachepatchv3.1.diff, 
 slabcachepatchv3.2.diff, slabcachepatchv3.diff, slabcachepatchv4.5.diff, 
 slabcachepatchv4.diff


 Java offers the creation of direct byte buffers, which are allocated outside 
 of the heap.
 They need to be manually free'd, which can be accomplished using an 
 undocumented {{clean}} method.
 The feature will be optional.  After implementing, we can benchmark for 
 differences in speed and garbage collection behavior.
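The allocate-then-manually-free pattern the description refers to can be sketched as follows. Note the {{clean}} call goes through JDK-internal APIs (invoked reflectively here); this is an assumption that holds on many HotSpot JDKs, not a supported interface:

```java
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // Allocate 1 MiB outside the Java heap; it is reclaimed only by GC
        // or by a manual clean like the one below.
        ByteBuffer buf = ByteBuffer.allocateDirect(1024 * 1024);
        System.out.println(buf.isDirect());  // true

        // Best-effort manual free via the internal cleaner (HotSpot-specific,
        // may be blocked on newer JVMs -- hence the broad catch).
        try {
            Method cleanerMethod = buf.getClass().getMethod("cleaner");
            cleanerMethod.setAccessible(true);
            Object cleaner = cleanerMethod.invoke(buf);
            Method clean = cleaner.getClass().getMethod("clean");
            clean.setAccessible(true);
            clean.invoke(cleaner);
        } catch (Exception e) {
            // Cleaner not accessible on this JVM; fall back to GC reclamation.
        }
    }
}
```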



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit

2014-01-21 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-10392:
-

Status: Patch Available  (was: Open)

 Correct references to hbase.regionserver.global.memstore.upperLimit
 ---

 Key: HBASE-10392
 URL: https://issues.apache.org/jira/browse/HBASE-10392
 Project: HBase
  Issue Type: Bug
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.99.0

 Attachments: HBASE-10392.0.patch


 As part of the awesome new HBASE-5349, a couple of references to 
 {{hbase.regionserver.global.memstore.upperLimit}} were missed. Clean those up 
 to use the new {{hbase.regionserver.global.memstore.size}} instead.
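A rename like this usually keeps the deprecated key working as a fallback. A minimal sketch of that pattern, using a plain map in place of Hadoop's Configuration (the helper name and default value are illustrative only):

```java
import java.util.HashMap;
import java.util.Map;

public class MemstoreSizeConfig {
    static final String NEW_KEY = "hbase.regionserver.global.memstore.size";
    static final String OLD_KEY = "hbase.regionserver.global.memstore.upperLimit";

    // Prefer the new key; fall back to the deprecated one so that old
    // hbase-site.xml files keep working unchanged.
    static float globalMemstoreSize(Map<String, String> conf, float defaultValue) {
        String v = conf.get(NEW_KEY);
        if (v == null) {
            v = conf.get(OLD_KEY);
        }
        return v == null ? defaultValue : Float.parseFloat(v);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(OLD_KEY, "0.35");
        // Old key still honored; prints 0.35 rather than the default.
        System.out.println(globalMemstoreSize(conf, 0.4f));
    }
}
```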



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit

2014-01-21 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-10392:
-

Attachment: HBASE-10392.0.patch

Does this look right, [~anoop.hbase]?

 Correct references to hbase.regionserver.global.memstore.upperLimit
 ---

 Key: HBASE-10392
 URL: https://issues.apache.org/jira/browse/HBASE-10392
 Project: HBase
  Issue Type: Bug
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.99.0

 Attachments: HBASE-10392.0.patch


 As part of the awesome new HBASE-5349, a couple of references to 
 {{hbase.regionserver.global.memstore.upperLimit}} were missed. Clean those up 
 to use the new {{hbase.regionserver.global.memstore.size}} instead.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer

2014-01-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877968#comment-13877968
 ] 

Ted Yu commented on HBASE-10336:


Is the following file needed ?
{code}
+++ hbase-server/src/test/resources/webapps/test/.gitignore
{code}
Is the following annotation needed ?
{code}
+@InterfaceAudience.LimitedPrivate({"HBase"})
{code}

 Remove deprecated usage of Hadoop HttpServer in InfoServer
 --

 Key: HBASE-10336
 URL: https://issues.apache.org/jira/browse/HBASE-10336
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.99.0
Reporter: Eric Charles
Assignee: Eric Charles
 Attachments: HBASE-10336-1.patch, HBASE-10336-2.patch, 
 HBASE-10336-3.patch, HBASE-10336-4.patch, HBASE-10336-5.patch


 Recent changes in Hadoop HttpServer cause an NPE when running on hadoop 
 3.0.0-SNAPSHOT. The way we use HttpServer is deprecated and will probably 
 not be fixed (see HDFS-5760). We'd better move to the new proposed builder 
 pattern, which means we can no longer use inheritance to build our nice 
 InfoServer.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit

2014-01-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877970#comment-13877970
 ] 

stack commented on HBASE-10392:
---

lgtm

 Correct references to hbase.regionserver.global.memstore.upperLimit
 ---

 Key: HBASE-10392
 URL: https://issues.apache.org/jira/browse/HBASE-10392
 Project: HBase
  Issue Type: Bug
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.99.0

 Attachments: HBASE-10392.0.patch


 As part of the awesome new HBASE-5349, a couple of references to 
 {{hbase.regionserver.global.memstore.upperLimit}} were missed. Clean those up 
 to use the new {{hbase.regionserver.global.memstore.size}} instead.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10249) TestReplicationSyncUpTool fails because failover takes too long

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878012#comment-13878012
 ] 

Hudson commented on HBASE-10249:


SUCCESS: Integrated in HBase-0.94-JDK7 #31 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/31/])
HBASE-10249 TestReplicationSyncUpTool fails because failover takes too long 
(jdcryans: rev 1560198)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/replication/ReplicationZookeeper.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java


 TestReplicationSyncUpTool fails because failover takes too long
 ---

 Key: HBASE-10249
 URL: https://issues.apache.org/jira/browse/HBASE-10249
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Jean-Daniel Cryans
 Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17

 Attachments: HBASE-10249-0.94-v0.patch, HBASE-10249-0.94-v1.patch, 
 HBASE-10249-trunk-v0.patch, HBASE-10249-trunk-v1.patch


 New issue to keep track of this.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-9426) Make custom distributed barrier procedure pluggable

2014-01-21 Thread Richard Ding (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Ding updated HBASE-9426:


Attachment: HBASE-9426-7.patch

Thanks [~jmhsieh] for the review. Attaching a patch rebased on trunk and 
updated based on review comments.

I also ran apache-rat:check locally, and it seems the previous release audit 
warnings are not caused by my patch.

 Make custom distributed barrier procedure pluggable 
 

 Key: HBASE-9426
 URL: https://issues.apache.org/jira/browse/HBASE-9426
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.95.2, 0.94.11
Reporter: Richard Ding
Assignee: Richard Ding
 Attachments: HBASE-9426-4.patch, HBASE-9426-4.patch, 
 HBASE-9426-6.patch, HBASE-9426-7.patch, HBASE-9426.patch.1, 
 HBASE-9426.patch.2, HBASE-9426.patch.3


 Currently, if one wants to implement a custom distributed barrier procedure 
 (e.g., distributed log roll or distributed table flush), the HBase core code 
 needs to be modified in order for the procedure to work.
 Looking into the snapshot code (especially on the region server side), most of 
 the code to enable the procedure is generic life-cycle management (i.e., 
 init, start, stop). We can make this part pluggable.
 Here is the proposal. Following the coprocessor example, we define two 
 properties:
 {code}
 hbase.procedure.regionserver.classes
 hbase.procedure.master.classes
 {code}
 The values for both are comma delimited list of classes. On region server 
 side, the classes implements the following interface:
 {code}
 public interface RegionServerProcedureManager {
   public void initialize(RegionServerServices rss) throws KeeperException;
   public void start();
   public void stop(boolean force) throws IOException;
   public String getProcedureName();
 }
 {code}
 While on Master side, the classes implement the interface:
 {code}
 public interface MasterProcedureManager {
   public void initialize(MasterServices master) throws KeeperException, 
 IOException, UnsupportedOperationException;
   public void stop(String why);
   public String getProcedureName();
   public void execProcedure(ProcedureDescription desc) throws IOException;
 }
 {code}
 Where the ProcedureDescription is defined as
 {code}
 message ProcedureDescription {
   required string name = 1;
   required string instance = 2;
   optional int64 creationTime = 3 [default = 0];
   message Property {
     required string tag = 1;
     optional string value = 2;
   }
   repeated Property props = 4;
 }
 {code}
 A generic API can be defined on HMaster to trigger a procedure:
 {code}
 public boolean execProcedure(ProcedureDescription desc) throws IOException;
 {code}
 _SnapshotManager_ and _RegionServerSnapshotManager_ are special examples of 
 _MasterProcedureManager_ and _RegionServerProcedureManager_. They will be 
 automatically included (users don't need to specify them in the conf file).
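The plugin mechanism proposed above can be sketched in miniature: a manager interface mirroring the one in the description (with HBase/ZooKeeper types elided), a hypothetical plugin, and a loader that mimics parsing the comma-delimited class list from {{hbase.procedure.regionserver.classes}}. All class and method bodies here are illustrative, not the actual HBase implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class ProcedureManagerDemo {
    // Interface shape from the proposal; initialize(RegionServerServices)
    // is reduced to a no-arg method for this self-contained sketch.
    interface RegionServerProcedureManager {
        void initialize();
        void start();
        void stop(boolean force);
        String getProcedureName();
    }

    // A hypothetical plugin, e.g. for a distributed log roll procedure.
    static class LogRollProcedureManager implements RegionServerProcedureManager {
        boolean started;
        public void initialize() { }
        public void start() { started = true; }
        public void stop(boolean force) { started = false; }
        public String getProcedureName() { return "log-roll"; }
    }

    // Mirrors loading hbase.procedure.regionserver.classes: instantiate each
    // class named in a comma-delimited list via reflection.
    static List<RegionServerProcedureManager> load(String classList) throws Exception {
        List<RegionServerProcedureManager> managers = new ArrayList<>();
        for (String name : classList.split(",")) {
            managers.add((RegionServerProcedureManager)
                Class.forName(name.trim()).getDeclaredConstructor().newInstance());
        }
        return managers;
    }

    public static void main(String[] args) throws Exception {
        List<RegionServerProcedureManager> managers =
            load(ProcedureManagerDemo.class.getName() + "$LogRollProcedureManager");
        for (RegionServerProcedureManager m : managers) {
            m.initialize();
            m.start();
        }
        System.out.println(managers.get(0).getProcedureName());
    }
}
```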



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10249) TestReplicationSyncUpTool fails because failover takes too long

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878021#comment-13878021
 ] 

Hudson commented on HBASE-10249:


SUCCESS: Integrated in HBase-TRUNK #4844 (See 
[https://builds.apache.org/job/HBase-TRUNK/4844/])
HBASE-10249 TestReplicationSyncUpTool fails because failover takes too long 
(jdcryans: rev 1560201)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java


 TestReplicationSyncUpTool fails because failover takes too long
 ---

 Key: HBASE-10249
 URL: https://issues.apache.org/jira/browse/HBASE-10249
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Jean-Daniel Cryans
 Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17

 Attachments: HBASE-10249-0.94-v0.patch, HBASE-10249-0.94-v1.patch, 
 HBASE-10249-trunk-v0.patch, HBASE-10249-trunk-v1.patch


 New issue to keep track of this.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10393) [fb-0.89] Expose a hook to compact files in hbase on the CLI

2014-01-21 Thread Adela Maznikar (JIRA)
Adela Maznikar created HBASE-10393:
--

 Summary: [fb-0.89] Expose a hook to compact files in hbase on the 
CLI
 Key: HBASE-10393
 URL: https://issues.apache.org/jira/browse/HBASE-10393
 Project: HBase
  Issue Type: New Feature
  Components: Compaction
Affects Versions: 0.89-fb
Reporter: Adela Maznikar
 Fix For: 0.89-fb


Sometimes we need a way to perform compactions outside of the regionserver, 
for example before turning on the cluster. The task is to expose a way to compact 
files from the CLI without requiring a running RS.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10249) TestReplicationSyncUpTool fails because failover takes too long

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878041#comment-13878041
 ] 

Hudson commented on HBASE-10249:


SUCCESS: Integrated in HBase-0.94 #1264 (See 
[https://builds.apache.org/job/HBase-0.94/1264/])
HBASE-10249 TestReplicationSyncUpTool fails because failover takes too long 
(jdcryans: rev 1560198)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/replication/ReplicationZookeeper.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java


 TestReplicationSyncUpTool fails because failover takes too long
 ---

 Key: HBASE-10249
 URL: https://issues.apache.org/jira/browse/HBASE-10249
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Jean-Daniel Cryans
 Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17

 Attachments: HBASE-10249-0.94-v0.patch, HBASE-10249-0.94-v1.patch, 
 HBASE-10249-trunk-v0.patch, HBASE-10249-trunk-v1.patch


 New issue to keep track of this.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10249) TestReplicationSyncUpTool fails because failover takes too long

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878046#comment-13878046
 ] 

Hudson commented on HBASE-10249:


FAILURE: Integrated in hbase-0.96 #266 (See 
[https://builds.apache.org/job/hbase-0.96/266/])
HBASE-10249 TestReplicationSyncUpTool fails because failover takes too long 
(jdcryans: rev 1560199)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java


 TestReplicationSyncUpTool fails because failover takes too long
 ---

 Key: HBASE-10249
 URL: https://issues.apache.org/jira/browse/HBASE-10249
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Jean-Daniel Cryans
 Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17

 Attachments: HBASE-10249-0.94-v0.patch, HBASE-10249-0.94-v1.patch, 
 HBASE-10249-trunk-v0.patch, HBASE-10249-trunk-v1.patch


 New issue to keep track of this.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10393) [fb-0.89] Expose a hook to compact files in hbase on the CLI

2014-01-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878060#comment-13878060
 ] 

stack commented on HBASE-10393:
---

FYI [~adela], we have a CompactionTool in trunk.  It might serve as 
inspiration.

 [fb-0.89] Expose a hook to compact files in hbase on the CLI
 

 Key: HBASE-10393
 URL: https://issues.apache.org/jira/browse/HBASE-10393
 Project: HBase
  Issue Type: New Feature
  Components: Compaction
Affects Versions: 0.89-fb
Reporter: Adela Maznikar
 Fix For: 0.89-fb


 Sometimes we need a way to perform compactions outside of the regionserver, 
 for example before turning on the cluster. The task is to expose a way to compact 
 files from the CLI without requiring a running RS.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10393) [fb-0.89] Expose a hook to compact files in hbase on the CLI

2014-01-21 Thread Adela Maznikar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878061#comment-13878061
 ] 

Adela Maznikar commented on HBASE-10393:


oh great, thanks stack! I wasn't aware of this

 [fb-0.89] Expose a hook to compact files in hbase on the CLI
 

 Key: HBASE-10393
 URL: https://issues.apache.org/jira/browse/HBASE-10393
 Project: HBase
  Issue Type: New Feature
  Components: Compaction
Affects Versions: 0.89-fb
Reporter: Adela Maznikar
 Fix For: 0.89-fb


 Sometimes we need a way to perform compactions outside of the regionserver, 
 for example before turning on the cluster. The task is to expose a way to compact 
 files from the CLI without requiring a running RS.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10393) [fb-0.89] Expose a hook to compact files in hbase on the CLI

2014-01-21 Thread Adela Maznikar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adela Maznikar updated HBASE-10393:
---

Assignee: Adela Maznikar

 [fb-0.89] Expose a hook to compact files in hbase on the CLI
 

 Key: HBASE-10393
 URL: https://issues.apache.org/jira/browse/HBASE-10393
 Project: HBase
  Issue Type: New Feature
  Components: Compaction
Affects Versions: 0.89-fb
Reporter: Adela Maznikar
Assignee: Adela Maznikar
 Fix For: 0.89-fb


 Sometimes we need a way to perform compactions outside of the regionserver, 
 for example before turning on the cluster. The task is to expose a way to compact 
 files from the CLI without requiring a running RS.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10393) [fb-0.89] Expose a hook to compact files in hbase on the CLI

2014-01-21 Thread Adela Maznikar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adela Maznikar updated HBASE-10393:
---

Assignee: (was: Adela Maznikar)

 [fb-0.89] Expose a hook to compact files in hbase on the CLI
 

 Key: HBASE-10393
 URL: https://issues.apache.org/jira/browse/HBASE-10393
 Project: HBase
  Issue Type: New Feature
  Components: Compaction
Affects Versions: 0.89-fb
Reporter: Adela Maznikar
 Fix For: 0.89-fb


 Sometimes we need a way to perform compactions outside of the regionserver, 
 for example before turning on the cluster. The task is to expose a way to compact 
 files from the CLI without requiring a running RS.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10393) [fb-0.89] Expose a hook to compact files in hbase on the CLI

2014-01-21 Thread Adela Maznikar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adela Maznikar updated HBASE-10393:
---

Description: Sometimes we need way to perform compactions outside of the 
regionserver, example before turning on the cluster. The task is to expose a 
way to compact files from the CLI without requiring a running RS.  (was: 
Sometimes we need way to perform compactions outside of the regionserver, 
example before turning on the cluster. The task is to expose a way to compact 
files from the CLI without requring a running RS.)

 [fb-0.89] Expose a hook to compact files in hbase on the CLI
 

 Key: HBASE-10393
 URL: https://issues.apache.org/jira/browse/HBASE-10393
 Project: HBase
  Issue Type: New Feature
  Components: Compaction
Affects Versions: 0.89-fb
Reporter: Adela Maznikar
 Fix For: 0.89-fb


 Sometimes we need a way to perform compactions outside of the regionserver, 
 for example before turning on the cluster. The task is to expose a way to compact 
 files from the CLI without requiring a running RS.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10249) TestReplicationSyncUpTool fails because failover takes too long

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878075#comment-13878075
 ] 

Hudson commented on HBASE-10249:


FAILURE: Integrated in hbase-0.96-hadoop2 #183 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/183/])
HBASE-10249 TestReplicationSyncUpTool fails because failover takes too long 
(jdcryans: rev 1560199)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java


 TestReplicationSyncUpTool fails because failover takes too long
 ---

 Key: HBASE-10249
 URL: https://issues.apache.org/jira/browse/HBASE-10249
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Jean-Daniel Cryans
 Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17

 Attachments: HBASE-10249-0.94-v0.patch, HBASE-10249-0.94-v1.patch, 
 HBASE-10249-trunk-v0.patch, HBASE-10249-trunk-v1.patch


 New issue to keep track of this.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10277) refactor AsyncProcess

2014-01-21 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878076#comment-13878076
 ] 

Sergey Shelukhin commented on HBASE-10277:
--

bq. This seems great. Would it be possible then to have a single AsyncProcess 
per HConnection, shared between the different htables objects? This would make
Not for legacy mode, because then the cross-put behavior will also be 
cross-HTable.
For individual requests, yeah, that can be done.
Also the sentence appears to be unfinished.

bq. Side question: would it make sense to use the multiget path for a single 
get, instead of having two different paths?
Yeah, that is possible, but it is in the scope of a different JIRA.

bq. The scenario is already there: it's how to manage the errors with the write 
buffer. I didn't want to make the interface public (as once it's public you 
should not change it), but at the end of the day, the callback is the most 
obvious solution to the problem. Having it here sets a base for the discussion. 
If your patch allows to have a common resource management per HTable, I'm happy 
to lose the callbacks as a side effect of the patch, but having both would be 
better imho.
Can you elaborate on the error management? Right now the patch preserves the 
cross-put-errors mode for HTable, without the callback.


bq. What's deprecated is mainly that the batch interfaces were in HConnection 
instead of HTable. The Object[] is ugly, but is still the 'recommended' way
Yeah, these paths without the custom pool are where we reuse the same 
AsyncProcess.

 refactor AsyncProcess
 -

 Key: HBASE-10277
 URL: https://issues.apache.org/jira/browse/HBASE-10277
 Project: HBase
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-10277.patch


 AsyncProcess currently has two patterns of usage, one from HTable flush w/o 
 callback and with reuse, and one from HCM/HTable batch call, with callback 
 and w/o reuse. In the former case (but not the latter), it also does some 
 throttling of actions on initial submit call, limiting the number of 
 outstanding actions per server.
 The latter case is relatively straightforward. The former appears to be error 
 prone due to reuse - if, as javadoc claims should be safe, multiple submit 
 calls are performed without waiting for the async part of the previous call 
 to finish, fields like hasError become ambiguous and can be used for the 
 wrong call; callback for success/failure is called based on original index 
 of an action in submitted list, but with only one callback supplied to AP in 
 ctor it's not clear to which submit call the index belongs, if several are 
 outstanding.
 I was going to add support for HBASE-10070 to AP, and found that it might be 
 difficult to do cleanly.
 It would be nice to normalize AP usage patterns; in particular, separate the 
 global part (load tracking) from per-submit-call part.
 Per-submit part can more conveniently track stuff like initialActions, 
 mapping of indexes and retry information, that is currently passed around the 
 method calls.
 -I am not sure yet, but maybe sending of the original index to server in 
 ClientProtos.MultiAction can also be avoided.- Cannot be avoided because 
 the API to server doesn't have one-to-one correspondence between requests and 
 responses in an individual call to multi (retries/rearrangement have nothing 
 to do with it)
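The "separate the global part from the per-submit-call part" idea can be sketched generically: instead of a shared {{hasError}} flag that becomes ambiguous across overlapping submits, each submit call gets its own handle tracking its own outcome, while shared resources (here, just a thread pool standing in for load tracking) stay global. The names below are illustrative, not the actual AsyncProcess API:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SubmitDemo {
    // Per-submit state: every call owns its result/error tracking, so
    // overlapping submits can no longer observe each other's status.
    static class SubmitHandle {
        final CompletableFuture<Integer> done = new CompletableFuture<>();
    }

    // Global part, shared by all submits (stands in for load tracking).
    static final ExecutorService pool = Executors.newFixedThreadPool(2);

    static SubmitHandle submit(List<Integer> actions) {
        SubmitHandle handle = new SubmitHandle();
        pool.execute(() -> {
            int ok = 0;
            for (Integer a : actions) {
                ok++;  // pretend every action succeeds
            }
            handle.done.complete(ok);
        });
        return handle;
    }

    public static void main(String[] args) {
        SubmitHandle h1 = submit(List.of(1, 2, 3));
        SubmitHandle h2 = submit(List.of(4, 5));
        // Each submit is tracked independently of the other.
        System.out.println(h1.done.join() + "," + h2.done.join());
        pool.shutdown();
    }
}
```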



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10338) Region server fails to start with AccessController coprocessor if installed into RegionServerCoprocessorHost

2014-01-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878077#comment-13878077
 ] 

stack commented on HBASE-10338:
---

This patch destabilized the 0.96 builds.  Any chance of taking a look, Vandana?  
See here: https://builds.apache.org/job/hbase-0.96/  Since this patch went in, 
org.apache.hadoop.hbase.regionserver.TestRSKilledWhenInitializing.testRSTermnationAfterRegisteringToMasterBeforeCreatingEphemeralNod
 has started failing.

Thanks.

 Region server fails to start with AccessController coprocessor if installed 
 into RegionServerCoprocessorHost
 

 Key: HBASE-10338
 URL: https://issues.apache.org/jira/browse/HBASE-10338
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors, regionserver
Affects Versions: 0.98.0
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
Priority: Minor
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: 10338.1-0.96.patch, 10338.1-0.98.patch, 10338.1.patch, 
 10338.1.patch, HBASE-10338.0.patch


 A runtime exception is thrown when the AccessController CP is used with the 
 region server. This happens because the region server coprocessor host is 
 created before ZooKeeper is initialized in the region server.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10338) Region server fails to start with AccessController coprocessor if installed into RegionServerCoprocessorHost

2014-01-21 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878081#comment-13878081
 ] 

Andrew Purtell commented on HBASE-10338:


I didn't see that test failure when running the 0.96 suite before commit, so 
it's likely a timing issue / race.

 Region server fails to start with AccessController coprocessor if installed 
 into RegionServerCoprocessorHost
 

 Key: HBASE-10338
 URL: https://issues.apache.org/jira/browse/HBASE-10338
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors, regionserver
Affects Versions: 0.98.0
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
Priority: Minor
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: 10338.1-0.96.patch, 10338.1-0.98.patch, 10338.1.patch, 
 10338.1.patch, HBASE-10338.0.patch


 A runtime exception is thrown when the AccessController CP is used with the 
 region server. This happens because the region server coprocessor host is 
 created before ZooKeeper is initialized in the region server.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10338) Region server fails to start with AccessController coprocessor if installed into RegionServerCoprocessorHost

2014-01-21 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878086#comment-13878086
 ] 

Andrew Purtell commented on HBASE-10338:


No, I stand corrected.

Here is the core change to 0.96:
{noformat}
Index: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
===
--- 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
  (revision 1558241)
+++ 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
  (working copy)
@@ -574,7 +574,6 @@
 abort("Uncaught exception in service thread " + t.getName(), e);
   }
 };
-this.rsHost = new RegionServerCoprocessorHost(this, this.conf);
 
 this.distributedLogReplay = 
this.conf.getBoolean(HConstants.DISTRIBUTED_LOG_REPLAY_KEY,
   HConstants.DEFAULT_DISTRIBUTED_LOG_REPLAY_CONFIG);
@@ -791,6 +790,11 @@
 }
   }
 
+  // Initialize the RegionServerCoprocessorHost now that our ephemeral
+  // node was created by reportForDuty, in case any coprocessors want
+  // to use ZooKeeper
+  this.rsHost = new RegionServerCoprocessorHost(this, this.conf);
+
 if (!this.stopped && isHealthy()) {
 // start the snapshot handler, since the server is ready to run
 this.snapshotManager.start();
{noformat}

Looks like moving the initialization has left something uninitialized during 
the test: when the abort happens, there is what looks like an unexpected 
NPE:
{noformat}
2014-01-21 23:16:22,279 FATAL [RS:0;vesta:49655] 
regionserver.HRegionServer(1733): ABORTING region server 
vesta.apache.org,49655,1390346181063: Unhandled: null
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.stop(HRegionServer.java:1665)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1761)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.abortRegionServer(MiniHBaseCluster.java:173)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$100(MiniHBaseCluster.java:107)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$2.run(MiniHBaseCluster.java:166)
{noformat}
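The failure pattern above (a field that is now assigned later being dereferenced on an early abort) can be reproduced in miniature. The null guard in {{stop()}} sketched below is one plausible remedy under that assumption, not necessarily the fix the project adopted; all class names here are illustrative:

```java
public class InitOrderDemo {
    static class CoprocessorHost {
        void preStop() { System.out.println("preStop hook"); }
    }

    static class Server {
        CoprocessorHost rsHost;  // now created late, after registration

        void stop() {
            // Guard: abort() can run before rsHost is assigned, which
            // previously dereferenced null here and threw an NPE.
            if (rsHost != null) {
                rsHost.preStop();
            }
        }

        void abort() { stop(); }
    }

    public static void main(String[] args) {
        Server s = new Server();
        s.abort();                       // early abort, rsHost still null: no NPE
        s.rsHost = new CoprocessorHost();
        s.stop();                        // normal path runs the hook
    }
}
```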

 Region server fails to start with AccessController coprocessor if installed 
 into RegionServerCoprocessorHost
 

 Key: HBASE-10338
 URL: https://issues.apache.org/jira/browse/HBASE-10338
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors, regionserver
Affects Versions: 0.98.0
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
Priority: Minor
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: 10338.1-0.96.patch, 10338.1-0.98.patch, 10338.1.patch, 
 10338.1.patch, HBASE-10338.0.patch


 A runtime exception is thrown when the AccessController CP is used with the 
 region server. This happens because the region server coprocessor host is 
 created before ZooKeeper is initialized in the region server.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10338) Region server fails to start with AccessController coprocessor if installed into RegionServerCoprocessorHost

2014-01-21 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878087#comment-13878087
 ] 

Andrew Purtell commented on HBASE-10338:


Ping [~avandana]

 Region server fails to start with AccessController coprocessor if installed 
 into RegionServerCoprocessorHost
 

 Key: HBASE-10338
 URL: https://issues.apache.org/jira/browse/HBASE-10338
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors, regionserver
Affects Versions: 0.98.0
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
Priority: Minor
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: 10338.1-0.96.patch, 10338.1-0.98.patch, 10338.1.patch, 
 10338.1.patch, HBASE-10338.0.patch


 A runtime exception is thrown when the AccessController CP is used with the 
 region server. This happens because the region server coprocessor host is 
 created before ZooKeeper is initialized in the region server.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10087) Store should be locked during a memstore snapshot

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878098#comment-13878098
 ] 

Hudson commented on HBASE-10087:


FAILURE: Integrated in HBase-0.98 #100 (See 
[https://builds.apache.org/job/HBase-0.98/100/])
HBASE-10087 Store should be locked during a memstore snapshot (nkeywal: rev 
1560028)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java


 Store should be locked during a memstore snapshot
 -

 Key: HBASE-10087
 URL: https://issues.apache.org/jira/browse/HBASE-10087
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.0, 0.96.1, 0.94.14
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.1, 0.99.0

 Attachments: 10079.v1.patch, 10087.v2.patch


 regression from HBASE-9963, found while looking at HBASE-10079.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10249) TestReplicationSyncUpTool fails because failover takes too long

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878097#comment-13878097
 ] 

Hudson commented on HBASE-10249:


FAILURE: Integrated in HBase-0.98 #100 (See 
[https://builds.apache.org/job/HBase-0.98/100/])
HBASE-10249 TestReplicationSyncUpTool fails because failover takes too long 
(jdcryans: rev 1560200)
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java


 TestReplicationSyncUpTool fails because failover takes too long
 ---

 Key: HBASE-10249
 URL: https://issues.apache.org/jira/browse/HBASE-10249
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Jean-Daniel Cryans
 Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17

 Attachments: HBASE-10249-0.94-v0.patch, HBASE-10249-0.94-v1.patch, 
 HBASE-10249-trunk-v0.patch, HBASE-10249-trunk-v1.patch


 New issue to keep track of this.





[jira] [Commented] (HBASE-10277) refactor AsyncProcess

2014-01-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878100#comment-13878100
 ] 

stack commented on HBASE-10277:
---

Pardon the dumb question: why do we need to support 'legacy' behavior? Strip it?

 refactor AsyncProcess
 -

 Key: HBASE-10277
 URL: https://issues.apache.org/jira/browse/HBASE-10277
 Project: HBase
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-10277.patch


 AsyncProcess currently has two usage patterns: one from HTable flush, without a 
 callback and with reuse, and one from the HCM/HTable batch call, with a callback 
 and without reuse. In the former case (but not the latter), it also throttles 
 actions on the initial submit call, limiting the number of outstanding actions 
 per server.
 The latter case is relatively straightforward. The former appears to be error 
 prone due to reuse - if, as the javadoc claims should be safe, multiple submit 
 calls are performed without waiting for the async part of the previous call 
 to finish, fields like hasError become ambiguous and can be applied to the 
 wrong call; the success/failure callback is invoked based on the original index 
 of an action in the submitted list, but with only one callback supplied to AP in 
 the ctor it's not clear to which submit call the index belongs if several are 
 outstanding.
 I was going to add support for HBASE-10070 to AP, and found that it might be 
 difficult to do cleanly.
 It would be nice to normalize the AP usage patterns; in particular, to separate 
 the global part (load tracking) from the per-submit-call part.
 The per-submit part can more conveniently track things like initialActions and 
 the mapping of indexes and retry information that are currently passed around in 
 method calls.
 -I am not sure yet, but maybe sending the original index to the server in 
 ClientProtos.MultiAction can also be avoided.- It cannot be avoided, because 
 the API to the server doesn't have a one-to-one correspondence between requests 
 and responses in an individual call to multi (retries/rearrangement have nothing 
 to do with it)





[jira] [Commented] (HBASE-10391) Deprecate KeyValue#getBuffer

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878096#comment-13878096
 ] 

Hudson commented on HBASE-10391:


FAILURE: Integrated in HBase-0.98 #100 (See 
[https://builds.apache.org/job/HBase-0.98/100/])
HBASE-10391 Deprecate KeyValue#getBuffer (stack: rev 1559934)
* 
/hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java


 Deprecate KeyValue#getBuffer
 

 Key: HBASE-10391
 URL: https://issues.apache.org/jira/browse/HBASE-10391
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.99.0

 Attachments: 10391.txt


 Make the deprecation a subtask of the parent.  Let the parent stand as an 
 umbrella issue.





[jira] [Commented] (HBASE-10277) refactor AsyncProcess

2014-01-21 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878106#comment-13878106
 ] 

Sergey Shelukhin commented on HBASE-10277:
--

0.94 compat... HTable::put is currently async but does not have any means to 
return errors. flushCommits can flush multiple puts, and errors are eventually 
thrown through some put call or through flushCommits. We can either break the 
HTable::put interface (doesn't seem viable), make puts sync and add a separate 
async put (possible but perhaps surprising), or remove the old pattern from AP 
but keep track of all the puts inside HTable itself and aggregate all errors 
only when flushCommits is called, for example (with some client performance 
loss because multiple requests would be tracked at a higher level than in AP). 
Overall, I can see merit in the scenario where you do a bunch of puts and then 
flush... it could be replaced by the user issuing multi-puts explicitly, but now 
that the API is what it is, I think we cannot simply remove it. Maybe the third 
approach above is viable if we add some javadoc notes.
What do you think?
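The third option could look roughly like the sketch below; all names are illustrative stand-ins, not the real HBase client classes, and the "send" is simulated rather than handed to AP:

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of the option where HTable tracks puts itself and
 *  aggregates errors only at flushCommits(). Class and method names are
 *  illustrative, not the actual HBase client API. */
class BufferedPutTable {
    // Pending mutations, tracked above the AsyncProcess layer.
    private final List<String> pending = new ArrayList<>();
    // Errors collected from the (simulated) async send.
    private final List<Exception> errors = new ArrayList<>();

    /** put() stays async and never throws: it only buffers. */
    void put(String row) {
        pending.add(row);
    }

    /** Simulated send; a real client would hand the batch to AP here. */
    private void sendBatch() {
        for (String row : pending) {
            if (row.startsWith("bad")) {
                errors.add(new Exception("failed row: " + row));
            }
        }
        pending.clear();
    }

    /** All buffered puts are sent, and every error surfaces only here. */
    void flushCommits() throws Exception {
        sendBatch();
        if (!errors.isEmpty()) {
            Exception agg = new Exception(errors.size() + " put(s) failed");
            errors.clear();
            throw agg;
        }
    }
}
```

The performance cost mentioned above corresponds to `pending` being tracked a level above AP instead of inside it.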

 refactor AsyncProcess
 -

 Key: HBASE-10277
 URL: https://issues.apache.org/jira/browse/HBASE-10277
 Project: HBase
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-10277.patch







[jira] [Commented] (HBASE-10277) refactor AsyncProcess

2014-01-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878111#comment-13878111
 ] 

stack commented on HBASE-10277:
---

So, this redo has to be backportable to 0.94?  Or do you mean that the 0.94 APIs 
must work as they did in 0.94 even though you are in a 0.96+ context?

Which is option #3?  I do not see a #3 above.  Do you mean 'remove old pattern 
from AP'?  If so, that sounds good to me.  AP is done 'right' (but you have to 
add hackery to handle the ugly stuff for a while).  The old API is deprecated 
and IMO it is fine if a deprecated API loses perf -- it is an incentive to move 
to the new way.



 refactor AsyncProcess
 -

 Key: HBASE-10277
 URL: https://issues.apache.org/jira/browse/HBASE-10277
 Project: HBase
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-10277.patch







[jira] [Commented] (HBASE-10391) Deprecate KeyValue#getBuffer

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878124#comment-13878124
 ] 

Hudson commented on HBASE-10391:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #61 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/61/])
HBASE-10391 Deprecate KeyValue#getBuffer (stack: rev 1559935)
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java


 Deprecate KeyValue#getBuffer
 

 Key: HBASE-10391
 URL: https://issues.apache.org/jira/browse/HBASE-10391
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.99.0

 Attachments: 10391.txt


 Make the deprecation a subtask of the parent.  Let the parent stand as an 
 umbrella issue.





[jira] [Commented] (HBASE-10087) Store should be locked during a memstore snapshot

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878127#comment-13878127
 ] 

Hudson commented on HBASE-10087:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #61 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/61/])
HBASE-10087 Store should be locked during a memstore snapshot (nkeywal: rev 
1560018)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java


 Store should be locked during a memstore snapshot
 -

 Key: HBASE-10087
 URL: https://issues.apache.org/jira/browse/HBASE-10087
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.0, 0.96.1, 0.94.14
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.1, 0.99.0

 Attachments: 10079.v1.patch, 10087.v2.patch


 regression from HBASE-9963, found while looking at HBASE-10079.





[jira] [Commented] (HBASE-10249) TestReplicationSyncUpTool fails because failover takes too long

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878125#comment-13878125
 ] 

Hudson commented on HBASE-10249:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #61 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/61/])
HBASE-10249 TestReplicationSyncUpTool fails because failover takes too long 
(jdcryans: rev 1560201)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java


 TestReplicationSyncUpTool fails because failover takes too long
 ---

 Key: HBASE-10249
 URL: https://issues.apache.org/jira/browse/HBASE-10249
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Jean-Daniel Cryans
 Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17

 Attachments: HBASE-10249-0.94-v0.patch, HBASE-10249-0.94-v1.patch, 
 HBASE-10249-trunk-v0.patch, HBASE-10249-trunk-v1.patch


 New issue to keep track of this.





[jira] [Commented] (HBASE-10384) Failed to increment several columns in one Increment

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878126#comment-13878126
 ] 

Hudson commented on HBASE-10384:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #61 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/61/])
HBASE-10384 Failed to increment several columns in one Increment (jxiang: rev 
1559855)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


 Failed to increment several columns in one Increment
 -

 Key: HBASE-10384
 URL: https://issues.apache.org/jira/browse/HBASE-10384
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Blocker
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: hbase-10384.patch


 We have a problem incrementing several columns of a row in one Increment 
 request.
 This one works; all columns are incremented as expected:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
 However, this one only increments counter_A; the other columns are reset to 1 
 instead of being incremented:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
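The only difference between the two examples is the order in which qualifiers are added (A, B, C, D vs. B, C, A, D), so the failure is apparently order-sensitive. As a hypothetical client-side workaround sketch (not the committed fix, which changed HRegion), one could sort qualifiers before building the Increment:

```java
import java.util.Arrays;

/** Hypothetical workaround sketch: add qualifiers to the Increment in
 *  lexicographic order, matching the order the first (working) example
 *  happens to use. Not the actual fix committed to HRegion. */
class IncrementOrdering {
    static String[] sortQualifiers(String... qualifiers) {
        String[] sorted = qualifiers.clone();
        Arrays.sort(sorted);  // byte-wise order for these ASCII names
        return sorted;
    }
}
```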





[jira] [Commented] (HBASE-10388) Add export control notice in README

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878123#comment-13878123
 ] 

Hudson commented on HBASE-10388:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #61 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/61/])
HBASE-10388. Add export control notice in README (apurtell: rev 1559858)
* /hbase/trunk/README.txt
* /hbase/trunk/src/main/site/xdoc/export_control.xml
* /hbase/trunk/src/main/site/xdoc/index.xml


 Add export control notice in README
 ---

 Key: HBASE-10388
 URL: https://issues.apache.org/jira/browse/HBASE-10388
 Project: HBase
  Issue Type: Task
Affects Versions: 0.98.0, 0.99.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Blocker
 Fix For: 0.99.0

 Attachments: 10388.patch


 A discussion on general@incubator for Twill mentioned that the (out-of-date?) 
 document at http://www.apache.org/dev/crypto.html suggests an export notice 
 in the project README. I know Apache Accumulo recently added a transparent 
 encryption feature, and I found an export notice in the README on their trunk. 
 Adding one to ours out of an abundance of caution.





[jira] [Commented] (HBASE-10249) TestReplicationSyncUpTool fails because failover takes too long

2014-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878129#comment-13878129
 ] 

Hudson commented on HBASE-10249:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #94 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/94/])
HBASE-10249 TestReplicationSyncUpTool fails because failover takes too long 
(jdcryans: rev 1560200)
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueues.java
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java


 TestReplicationSyncUpTool fails because failover takes too long
 ---

 Key: HBASE-10249
 URL: https://issues.apache.org/jira/browse/HBASE-10249
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Jean-Daniel Cryans
 Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17

 Attachments: HBASE-10249-0.94-v0.patch, HBASE-10249-0.94-v1.patch, 
 HBASE-10249-trunk-v0.patch, HBASE-10249-trunk-v1.patch


 New issue to keep track of this.





[jira] [Commented] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit

2014-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878133#comment-13878133
 ] 

Hadoop QA commented on HBASE-10392:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12624213/HBASE-10392.0.patch
  against trunk revision .
  ATTACHMENT ID: 12624213

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8484//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8484//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8484//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8484//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8484//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8484//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8484//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8484//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8484//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8484//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8484//console

This message is automatically generated.

 Correct references to hbase.regionserver.global.memstore.upperLimit
 ---

 Key: HBASE-10392
 URL: https://issues.apache.org/jira/browse/HBASE-10392
 Project: HBase
  Issue Type: Bug
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.99.0

 Attachments: HBASE-10392.0.patch


 As part of the awesome new HBASE-5349, a couple of references to 
 {{hbase.regionserver.global.memstore.upperLimit}} were missed. Clean those up 
 to use the new {{hbase.regionserver.global.memstore.size}} instead.
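For illustration, the usual pattern when renaming a config key is to prefer the new name and fall back to the deprecated one; below is a sketch using a plain map as a stand-in for the Configuration object (the fallback behavior and the 0.4 default are assumptions for the sketch, not part of this patch):

```java
import java.util.Map;

/** Sketch of a rename-with-fallback read: prefer the new property name,
 *  fall back to the deprecated spelling. Stand-in for Configuration. */
class MemstoreSizeConfig {
    static float globalMemstoreSize(Map<String, String> conf) {
        String v = conf.get("hbase.regionserver.global.memstore.size");
        if (v == null) {
            // Deprecated spelling kept readable for compatibility.
            v = conf.getOrDefault("hbase.regionserver.global.memstore.upperLimit", "0.4");
        }
        return Float.parseFloat(v);
    }
}
```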





[jira] [Commented] (HBASE-9426) Make custom distributed barrier procedure pluggable

2014-01-21 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878135#comment-13878135
 ] 

Jonathan Hsieh commented on HBASE-9426:
---

Thanks for your patience.  I've committed to trunk.

 Make custom distributed barrier procedure pluggable 
 

 Key: HBASE-9426
 URL: https://issues.apache.org/jira/browse/HBASE-9426
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.95.2, 0.94.11
Reporter: Richard Ding
Assignee: Richard Ding
 Attachments: HBASE-9426-4.patch, HBASE-9426-4.patch, 
 HBASE-9426-6.patch, HBASE-9426-7.patch, HBASE-9426.patch.1, 
 HBASE-9426.patch.2, HBASE-9426.patch.3


 Currently if one wants to implement a custom distributed barrier procedure 
 (e.g., distributed log roll or distributed table flush), the HBase core code 
 needs to be modified in order for the procedure to work.
 Looking into the snapshot code (especially on the region server side), most of 
 the code that enables the procedure is generic life-cycle management (i.e., 
 init, start, stop). We can make this part pluggable.
 Here is the proposal. Following the coprocessor example, we define two 
 properties:
 {code}
 hbase.procedure.regionserver.classes
 hbase.procedure.master.classes
 {code}
 The values for both are comma-delimited lists of classes. On the region server 
 side, the classes implement the following interface:
 {code}
 public interface RegionServerProcedureManager {
   public void initialize(RegionServerServices rss) throws KeeperException;
   public void start();
   public void stop(boolean force) throws IOException;
   public String getProcedureName();
 }
 {code}
 While on the Master side, the classes implement this interface:
 {code}
 public interface MasterProcedureManager {
   public void initialize(MasterServices master) throws KeeperException, 
 IOException, UnsupportedOperationException;
   public void stop(String why);
   public String getProcedureName();
   public void execProcedure(ProcedureDescription desc) throws IOException;
 }
 {code}
 Where ProcedureDescription is defined as
 {code}
 message ProcedureDescription {
   required string name = 1;
   required string instance = 2;
   optional int64 creationTime = 3 [default = 0];
   message Property {
     required string tag = 1;
     optional string value = 2;
   }
   repeated Property props = 4;
 }
 {code}
 A generic API can be defined on HMaster to trigger a procedure:
 {code}
 public boolean execProcedure(ProcedureDescription desc) throws IOException;
 {code}
 _SnapshotManager_ and _RegionServerSnapshotManager_ are special examples of 
 _MasterProcedureManager_ and _RegionServerProcedureManager_. They will be 
 automatically included (users don't need to specify them in the conf file).
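For illustration, the comma-delimited class-name properties could be loaded reflectively, mirroring the coprocessor host pattern; a toy sketch with stand-in names (not actual HBase classes):

```java
import java.util.ArrayList;
import java.util.List;

/** Toy sketch of the pluggable-manager idea: class names come from a
 *  comma-delimited property value and are instantiated reflectively.
 *  All names here are illustrative stand-ins. */
interface RegionServerProcedureManager {
    void start();
    String getProcedureName();
}

class LogRollProcedureManager implements RegionServerProcedureManager {
    public void start() { /* register with the barrier coordinator */ }
    public String getProcedureName() { return "log-roll"; }
}

class ProcedureManagerHost {
    /** Load managers listed in a comma-delimited config value. */
    static List<RegionServerProcedureManager> load(String classes) {
        List<RegionServerProcedureManager> managers = new ArrayList<>();
        try {
            for (String name : classes.split(",")) {
                Class<?> c = Class.forName(name.trim());
                managers.add((RegionServerProcedureManager)
                        c.getDeclaredConstructor().newInstance());
            }
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("bad procedure manager class", e);
        }
        return managers;
    }
}
```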





[jira] [Updated] (HBASE-9426) Make custom distributed barrier procedure pluggable

2014-01-21 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-9426:
--

   Resolution: Fixed
Fix Version/s: 0.99.0
 Release Note: This patch adds two new API calls to the protobuf rpc 
interface.
 Hadoop Flags: Incompatible change,Reviewed
   Status: Resolved  (was: Patch Available)

 Make custom distributed barrier procedure pluggable 
 

 Key: HBASE-9426
 URL: https://issues.apache.org/jira/browse/HBASE-9426
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.95.2, 0.94.11
Reporter: Richard Ding
Assignee: Richard Ding
 Fix For: 0.99.0

 Attachments: HBASE-9426-4.patch, HBASE-9426-4.patch, 
 HBASE-9426-6.patch, HBASE-9426-7.patch, HBASE-9426.patch.1, 
 HBASE-9426.patch.2, HBASE-9426.patch.3







[jira] [Commented] (HBASE-10277) refactor AsyncProcess

2014-01-21 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878137#comment-13878137
 ] 

Sergey Shelukhin commented on HBASE-10277:
--

Yeah, I mean the API behavior compat, not backporting.
The options are (inserting numbers into the above):
bq. (1) break the HTable::put interface (doesn't seem viable), (2) make 
puts sync and add a separate async put (possible but perhaps surprising), or (3) 
remove the old pattern from AP but keep track of all the puts inside HTable 
itself and aggregate all errors only when flushCommits is called, for example 
(with some client performance loss because multiple requests would be tracked 
at a higher level than in AP).
HTable::put is not deprecated, though; it just has very idiosyncratic behavior 
compared to the usual sync, async, or batching APIs.



 refactor AsyncProcess
 -

 Key: HBASE-10277
 URL: https://issues.apache.org/jira/browse/HBASE-10277
 Project: HBase
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-10277.patch







[jira] [Commented] (HBASE-7404) Bucket Cache:A solution about CMS,Heap Fragment and Big Cache on HBASE

2014-01-21 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878174#comment-13878174
 ] 

Liang Xie commented on HBASE-7404:
--

We (Xiaomi) have ported it into our 0.94 branch and have run it in several 
latency-sensitive clusters for several months already :)

 Bucket Cache:A solution about CMS,Heap Fragment and Big Cache on HBASE
 --

 Key: HBASE-7404
 URL: https://issues.apache.org/jira/browse/HBASE-7404
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.94.3
Reporter: chunhui shen
Assignee: chunhui shen
 Fix For: 0.95.0

 Attachments: 7404-0.94-fixed-lines.txt, 7404-trunk-v10.patch, 
 7404-trunk-v11.patch, 7404-trunk-v12.patch, 7404-trunk-v13.patch, 
 7404-trunk-v13.txt, 7404-trunk-v14.patch, BucketCache.pdf, 
 HBASE-7404-backport-0.94.patch, Introduction of Bucket Cache.pdf, 
 hbase-7404-94v2.patch, hbase-7404-trunkv2.patch, hbase-7404-trunkv9.patch


 First, thanks to @neil from Fusion-IO for sharing the source code.
 Usage:
 1. Use bucket cache as the main memory cache, configured as follows:
 –hbase.bucketcache.ioengine heap
 –hbase.bucketcache.size 0.4 (size for bucket cache; 0.4 is a percentage of 
 max heap size)
 2. Use bucket cache as a secondary cache, configured as follows:
 –hbase.bucketcache.ioengine file:/disk1/hbase/cache.data (the file path 
 where the block data is stored)
 –hbase.bucketcache.size 1024 (size for bucket cache; unit is MB, so 1024 
 means 1GB)
 –hbase.bucketcache.combinedcache.enabled false (default value is true)
 See more configurations from org.apache.hadoop.hbase.io.hfile.CacheConfig and 
 org.apache.hadoop.hbase.io.hfile.bucket.BucketCache
 What's Bucket Cache? 
 It can greatly decrease CMS pauses and heap fragmentation caused by GC
 It supports a large cache space for high read performance by using high-speed 
 disks like Fusion-io
 1. An implementation of block cache, like LruBlockCache
 2. Self-manages blocks' storage positions through the Bucket Allocator
 3. The cached blocks can be stored in memory or on the file system
 4. Bucket Cache can be used as the main block cache (see CombinedBlockCache), 
 combined with LruBlockCache to decrease CMS and fragmentation caused by GC
 5. BucketCache can also be used as a secondary cache (e.g. using Fusion-io to 
 store blocks) to enlarge the cache space
 How about SlabCache?
 We studied and tested SlabCache first, but the results were bad, because:
 1. SlabCache uses SingleSizeCache, whose memory utilization is low given the 
 variety of block sizes, especially when using DataBlockEncoding
 2. SlabCache is used in DoubleBlockCache, where a block is cached in both 
 SlabCache and LruBlockCache, and is put into LruBlockCache again on a 
 SlabCache hit, so CMS and heap fragmentation don't get any better
 3. Direct (off-heap) performance is not as good as on-heap, and may cause 
 OOM, so we recommend using the heap engine 
 See more in the attachment and in the patch
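The two configuration modes described above would look roughly like this as hbase-site.xml entries. The property names are taken from the description; the values are only illustrative:

```xml
<!-- Mode 1: bucket cache as the main (on-heap) cache -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>heap</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <value>0.4</value> <!-- fraction of max heap -->
</property>

<!-- Mode 2: bucket cache as a secondary, file-backed cache -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>file:/disk1/hbase/cache.data</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <value>1024</value> <!-- MB -->
</property>
<property>
  <name>hbase.bucketcache.combinedcache.enabled</name>
  <value>false</value>
</property>
```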





[jira] [Commented] (HBASE-10338) Region server fails to start with AccessController coprocessor if installed into RegionServerCoprocessorHost

2014-01-21 Thread Vandana Ayyalasomayajula (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878180#comment-13878180
 ] 

Vandana Ayyalasomayajula commented on HBASE-10338:
--

[~apurtell] Taking a look at the NPE. Sorry for the delay, I was traveling.

 Region server fails to start with AccessController coprocessor if installed 
 into RegionServerCoprocessorHost
 

 Key: HBASE-10338
 URL: https://issues.apache.org/jira/browse/HBASE-10338
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors, regionserver
Affects Versions: 0.98.0
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
Priority: Minor
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: 10338.1-0.96.patch, 10338.1-0.98.patch, 10338.1.patch, 
 10338.1.patch, HBASE-10338.0.patch


 A runtime exception is thrown when the AccessController CP is used with the 
 region server. This happens because the region server coprocessor host is 
 created before ZooKeeper is initialized in the region server.





[jira] [Commented] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit

2014-01-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878198#comment-13878198
 ] 

Anoop Sam John commented on HBASE-10392:


Thanks for finding this, Nick!  Sorry I missed it.
In the other places where the new config is added, we also give BC (backward 
compatibility) for the old config.  So if only the old config is present, this 
check should still work correctly, right?  Can we handle it that way?
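The backward-compatibility idea discussed above can be sketched roughly as follows. This is illustrative only: a plain Map stands in for HBase's Configuration object, and the 0.4 default is an assumption, not the project's actual value.

```java
import java.util.HashMap;
import java.util.Map;

public class MemstoreConfigFallback {
    static final String NEW_KEY = "hbase.regionserver.global.memstore.size";
    static final String OLD_KEY = "hbase.regionserver.global.memstore.upperLimit";

    // Prefer the new key; fall back to the deprecated one, then to a default.
    static float globalMemstoreSize(Map<String, String> conf) {
        String v = conf.get(NEW_KEY);
        if (v == null) {
            v = conf.get(OLD_KEY); // BC: honor the old config if it alone is present
        }
        return v != null ? Float.parseFloat(v) : 0.4f; // assumed default
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(OLD_KEY, "0.35"); // only the deprecated key is set
        System.out.println(globalMemstoreSize(conf)); // prints 0.35
    }
}
```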

 Correct references to hbase.regionserver.global.memstore.upperLimit
 ---

 Key: HBASE-10392
 URL: https://issues.apache.org/jira/browse/HBASE-10392
 Project: HBase
  Issue Type: Bug
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.99.0

 Attachments: HBASE-10392.0.patch


 As part of the awesome new HBASE-5349, a couple of references to 
 {{hbase.regionserver.global.memstore.upperLimit}} were missed. Clean those up 
 to use the new {{hbase.regionserver.global.memstore.size}} instead.





[jira] [Updated] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-21 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10322:
---

Attachment: HBASE-10322_V6.patch

V5 patch - TestReplicationWithTags

The test and book update will be given in a follow on task soon.

I will commit V6 in some time unless I hear objections.   Thanks all for the help.

 Strip tags from KV while sending back to client on reads
 

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_V3.patch, HBASE-10322_V4.patch, HBASE-10322_V5.patch, 
 HBASE-10322_V6.patch, HBASE-10322_codec.patch


 Right now we have some inconsistency wrt sending back tags on read. We do 
 this in scans when using the Java client (codec-based cell block encoding), 
 but during a Get operation, or when a pure PB-based Scan comes in, we are not 
 sending back the tags.  So we have to do one of the following fixes:
 1. Send back tags in the missing cases too. But sending back the visibility 
 expression / cell ACL is not correct.
 2. Don't send back tags in any case. This will be a problem when a tool like 
 ExportTool uses the scan to export the table data: we would miss exporting 
 the cell visibility/ACL.
 3. Send back tags based on some condition, on a per-scan basis. The simplest 
 way is to pass some kind of attribute in the Scan which says whether to send 
 back tags or not. But trusting whatever the scan specifies might not be 
 correct, IMO. The alternative is checking the user who is doing the scan: 
 send back tags only when an HBase super user is doing the scan. So for a case 
 like the Export Tool's, the execution should happen as a super user.
 So IMO we should go with #3.
 Patch coming soon.
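Option #3 above can be sketched as follows. All names here are hypothetical stand-ins; the real patch works on HBase Cells and its User API, and the super-user set would come from configuration:

```java
import java.util.List;
import java.util.Set;

public class TagStripSketch {
    // Stand-in for a cell carrying security tags (visibility expression / ACL).
    record Cell(String value, List<String> tags) {}

    // Assumed super-user set; in HBase this would come from configuration.
    static final Set<String> SUPER_USERS = Set.of("hbase");

    // Strip tags unless the requesting user is a super user (e.g. an ExportTool run).
    static Cell prepareForClient(Cell cell, String user) {
        return SUPER_USERS.contains(user)
                ? cell
                : new Cell(cell.value(), List.of());
    }

    public static void main(String[] args) {
        Cell c = new Cell("v1", List.of("visibility:secret", "acl:alice=R"));
        System.out.println(prepareForClient(c, "alice").tags()); // []
        System.out.println(prepareForClient(c, "hbase").tags().size()); // 2
    }
}
```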





[jira] [Updated] (HBASE-3909) Add dynamic config

2014-01-21 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-3909:


Attachment: HBASE-3909-backport-from-fb-for-trunk.patch

 Add dynamic config
 --

 Key: HBASE-3909
 URL: https://issues.apache.org/jira/browse/HBASE-3909
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Subbu M Iyer
 Attachments: 3909-102812.patch, 3909-102912.patch, 3909-v1.patch, 
 3909.v1, 3909_090712-2.patch, HBASE-3909-backport-from-fb-for-trunk.patch, 
 HBase Cluster Config Details.xlsx, patch-v2.patch, testMasterNoCluster.stack


 I'm sure this issue exists already, at least as part of the discussion around 
 making online schema edits possible, but there's no harm in this having its 
 own issue.  Ted started a conversation on this topic up on dev, and Todd 
 suggested we look at how Hadoop did it over in HADOOP-7001





[jira] [Updated] (HBASE-10277) refactor AsyncProcess

2014-01-21 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-10277:
-

Attachment: HBASE-10277.01.patch

Updated patch. Updated javadoc and addressed some of the comments below; 
removed the generic arg from the top-level AsyncProcess so AsyncProcess can be 
shared more. Will address the rest in the next iteration.

 refactor AsyncProcess
 -

 Key: HBASE-10277
 URL: https://issues.apache.org/jira/browse/HBASE-10277
 Project: HBase
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-10277.01.patch, HBASE-10277.patch


 AsyncProcess currently has two patterns of usage, one from HTable flush w/o 
 callback and with reuse, and one from HCM/HTable batch call, with callback 
 and w/o reuse. In the former case (but not the latter), it also does some 
 throttling of actions on initial submit call, limiting the number of 
 outstanding actions per server.
 The latter case is relatively straightforward. The former appears to be error 
 prone due to reuse - if, as javadoc claims should be safe, multiple submit 
 calls are performed without waiting for the async part of the previous call 
 to finish, fields like hasError become ambiguous and can be used for the 
 wrong call; the success/failure callback is invoked based on the original 
 index of an action in the submitted list, but with only one callback supplied 
 to AP in the ctor, it's not clear which submit call the index belongs to if 
 several are outstanding.
 I was going to add support for HBASE-10070 to AP, and found that it might be 
 difficult to do cleanly.
 It would be nice to normalize AP usage patterns; in particular, separate the 
 global part (load tracking) from per-submit-call part.
 The per-submit part can more conveniently track stuff like initialActions, 
 index mappings, and retry information, which are currently passed around the 
 method calls.
 -I am not sure yet, but maybe sending of the original index to server in 
 ClientProtos.MultiAction can also be avoided.- Cannot be avoided because 
 the API to server doesn't have one-to-one correspondence between requests and 
 responses in an individual call to multi (retries/rearrangement have nothing 
 to do with it)
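The separation proposed above (global load tracking vs. per-submit-call state) can be sketched roughly like this. All names are hypothetical; the real AsyncProcess is far more involved:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ApSplitSketch {
    // Global part: shared load tracking, e.g. outstanding tasks per server.
    static class GlobalTracker {
        final Map<String, AtomicInteger> tasksPerServer = new ConcurrentHashMap<>();
        void incr(String server) {
            tasksPerServer.computeIfAbsent(server, s -> new AtomicInteger()).incrementAndGet();
        }
    }

    // Per-submit part: owns its own actions, error flag, and callback state, so
    // concurrent submit calls no longer share ambiguous fields like hasError.
    static class SubmitContext {
        final List<String> initialActions;
        volatile boolean hasError = false;
        SubmitContext(List<String> actions) { this.initialActions = actions; }
        void onFailure(int originalIndex) { hasError = true; } // index is local to this submit
    }

    public static void main(String[] args) {
        GlobalTracker global = new GlobalTracker();
        SubmitContext a = new SubmitContext(List.of("put1", "put2"));
        SubmitContext b = new SubmitContext(List.of("put3"));
        global.incr("rs1"); global.incr("rs1"); // load tracking stays shared
        a.onFailure(1);                 // failure affects only submit 'a'
        System.out.println(a.hasError); // true
        System.out.println(b.hasError); // false
    }
}
```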





[jira] [Commented] (HBASE-3909) Add dynamic config

2014-01-21 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878207#comment-13878207
 ] 

binlijin commented on HBASE-3909:
-

[~saint@gmail.com] [~te...@apache.org] how about the patch 
HBASE-3909-backport-from-fb-for-trunk.patch?
There are many ways to implement dynamic config, so I just added a framework; 
if we accept this approach, I will add more.

 Add dynamic config
 --

 Key: HBASE-3909
 URL: https://issues.apache.org/jira/browse/HBASE-3909
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Subbu M Iyer
 Attachments: 3909-102812.patch, 3909-102912.patch, 3909-v1.patch, 
 3909.v1, 3909_090712-2.patch, HBASE-3909-backport-from-fb-for-trunk.patch, 
 HBase Cluster Config Details.xlsx, patch-v2.patch, testMasterNoCluster.stack


 I'm sure this issue exists already, at least as part of the discussion around 
 making online schema edits possible, but there's no harm in this having its 
 own issue.  Ted started a conversation on this topic up on dev, and Todd 
 suggested we look at how Hadoop did it over in HADOOP-7001




