[jira] [Commented] (HBASE-15437) Response size calculated in RPCServer for warning tooLarge responses does NOT count CellScanner payload

2017-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859173#comment-15859173
 ] 

Hudson commented on HBASE-15437:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #99 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/99/])
HBASE-15437 Response size calculated in RPCServer for warning tooLarge (garyh: 
rev 5a044ffc6a546f03c74ef98aa78262b5a372b065)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java


> Response size calculated in RPCServer for warning tooLarge responses does NOT 
> count CellScanner payload
> ---
>
> Key: HBASE-15437
> URL: https://issues.apache.org/jira/browse/HBASE-15437
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: deepankar
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-15437.patch, HBASE-15437-v1.patch, 
> HBASE-15437-v2.patch, HBASE-15437-v3.patch, HBASE-15437-v4.patch, 
> HBASE-15437-v5.patch, HBASE-15437-v6.patch
>
>
> After HBASE-13158, where we respond to RPCs with cells in the payload, the 
> protobuf response only carries the count of cells to read from the payload. 
> There is a set of features where we log a warning in RPCServer whenever the 
> response is tooLarge, but this size does not consider the sizes of the cells 
> in the payload CellScanner. Code from RPCServer:
> {code}
>   long responseSize = result.getSerializedSize();
>   // log any RPC responses that are slower than the configured warn
>   // response time or larger than configured warning size
>   boolean tooSlow = (processingTime > warnResponseTime && 
> warnResponseTime > -1);
>   boolean tooLarge = (responseSize > warnResponseSize && warnResponseSize 
> > -1);
>   if (tooSlow || tooLarge) {
> // when tagging, we let TooLarge trump TooSmall to keep output simple
> // note that large responses will often also be slow.
> logResponse(new Object[]{param},
> md.getName(), md.getName() + "(" + param.getClass().getName() + 
> ")",
> (tooLarge ? "TooLarge" : "TooSlow"),
> status.getClient(), startTime, processingTime, qTime,
> responseSize);
>   }
> {code}
> Should this feature no longer be supported, or should we add a method to 
> CellScanner (or a new interface) that returns the serialized size (though this 
> might not account for the compression codecs that might be used during the 
> response)? Any other idea how this could be fixed?
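
A minimal illustrative sketch (not the committed fix) of one way the cell payload 
could be folded into the size used for the tooLarge check, assuming the cells 
backing the CellScanner are also available as a List<Cell> as they are in 
RSRpcServices; the names here are illustrative only:

{code}
// Hypothetical adjustment: add the estimated serialized size of each payload
// cell to the protobuf response size before the tooLarge comparison.
long responseSize = result.getSerializedSize();
if (cells != null) { // List<Cell> that backs the CellScanner payload (assumption)
  for (Cell cell : cells) {
    responseSize += CellUtil.estimatedSerializedSizeOf(cell);
  }
}
boolean tooLarge = (responseSize > warnResponseSize && warnResponseSize > -1);
{code}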



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17599) Use mayHaveMoreCellsInRow instead of isPartial

2017-02-08 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17599:
--
  Resolution: Fixed
Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)
  Status: Resolved  (was: Patch Available)

Pushed to master and branch-1.

Thanks all for reviewing.

> Use mayHaveMoreCellsInRow instead of isPartial
> --
>
> Key: HBASE-17599
> URL: https://issues.apache.org/jira/browse/HBASE-17599
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17599-branch-1.patch, HBASE-17599.patch, 
> HBASE-17599-v1.patch, HBASE-17599-v2.patch, HBASE-17599-v3.patch
>
>
> For now, if we set scan.allowPartial(true), the partial result returned will 
> have the partial flag set to true. But for scan.setBatch(xx), the partial 
> result returned may not be marked as partial.
> This is an incompatible change, indeed. But I do not think it will introduce 
> any issues, as we just provide more information to the client. The old partial 
> flag for a batched scan is always false, so I do not think anyone can make use 
> of it.
> This is very important for the limited scan to support partial results from 
> the server. If we get a Result whose partial flag is false, then we know we got 
> the whole row. Otherwise we need to fetch one more row to see if the row key 
> has changed, which makes the logic more complicated.
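
As a small client-side illustration (not part of the patch) of how a caller could 
use the new flag to tell whether a Result ends its row, assuming partial results 
are allowed on the scan; processCompleteRow and bufferPartial are hypothetical 
handlers:

{code}
try (ResultScanner scanner = table.getScanner(scan)) {
  for (Result r = scanner.next(); r != null; r = scanner.next()) {
    if (!r.mayHaveMoreCellsInRow()) {
      // No more cells of this row will follow: the row is complete.
      processCompleteRow(r);   // hypothetical handler
    } else {
      // More cells of the same row may arrive in later Results.
      bufferPartial(r);        // hypothetical handler
    }
  }
}
{code}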



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17583) Add inclusive/exclusive support for startRow and endRow of scan for sync client

2017-02-08 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17583:
--
Attachment: HBASE-17583-branch-1.patch

Patch for branch-1.

> Add inclusive/exclusive support for startRow and endRow of scan for sync 
> client
> ---
>
> Key: HBASE-17583
> URL: https://issues.apache.org/jira/browse/HBASE-17583
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17583-branch-1.patch, HBASE-17583.patch, 
> HBASE-17583-v1.patch, HBASE-17583-v2.patch
>
>
> Implement the same feature as HBASE-17320 for the sync client.
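
A sketch of the intended sync-client usage once this lands, mirroring the 
async-client API from HBASE-17320; the exact method names are an assumption here:

{code}
Scan scan = new Scan()
    .withStartRow(Bytes.toBytes("row-010"), false)  // exclusive start row
    .withStopRow(Bytes.toBytes("row-020"), true);   // inclusive stop row
try (ResultScanner scanner = table.getScanner(scan)) {
  for (Result r : scanner) {
    // rows strictly after "row-010", up to and including "row-020"
  }
}
{code}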



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17069) RegionServer writes invalid META entries for split daughters in some circumstances

2017-02-08 Thread Abhishek Singh Chouhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859144#comment-15859144
 ] 

Abhishek Singh Chouhan commented on HBASE-17069:


Did a bit of debugging around this. I think the issue is as follows:
- We try to split a region A into B and C: SplitTransactionImpl.execute(..) -> 
SplitTransactionImpl.createDaughters(..) -> MetaTableAccessor.splitRegion(..). 
In MetaTableAccessor.splitRegion() we create a multi-mutate to add the entries for 
B and C in meta; the puts here have the hregioninfo.
- We encounter an exception in MetaTableAccessor.multiMutate(..) since we just 
killed the RS hosting meta (this was the cached location):
MultiRowMutationProtos.MultiRowMutationService.BlockingInterface service =
  MultiRowMutationProtos.MultiRowMutationService.newBlockingStub(channel);
try {
  service.mutateRows(null, mmrBuilder.build());
} catch (ServiceException ex) {
  ProtobufUtil.toIOException(ex);
}
- We swallow the exception here, move on, and try to online the regions using 
SplitTransactionImpl.openDaughters(..).
- We successfully open the region on a target RS. At the time of opening, in 
HRegionServer.postOpenDeployTasks(..) we do 
MetaTableAccessor.updateRegionLocation(..). This put has other entries like 
servername, startcode, etc. but doesn't have the hregioninfo. This request now goes 
through since meta is up again on another RS. We end up with partial info 
in hbase:meta and the region online on an RS. The clients, however, can't 
access the data belonging to this region and we get "java.io.IOException: 
HRegionInfo was null".
The immediate possible solutions seem to be either throwing the exception rather 
than swallowing it (which, since we are past the PONR, would mean the split cannot 
be rolled back and the RS eventually aborts), or including the hregioninfo in the 
put that we do during the postOpen task, so that we have complete information in 
meta even after the earlier multi-mutate fails. Throwing the exception and aborting 
the RS seems radical, since a split coinciding with meta moving around can happen 
quite often, especially during rolling restarts on a cluster under load. However, 
I'm not entirely sure about the other fix either, since we would be in a state 
where we assume the regioninfo is in meta while it won't be there until the 
daughter opens successfully on an RS (not sure of the failure scenarios between the 
two events).
I'm doing some testing with adding the hregioninfo during the postOpen task; a 
rough sketch is below. I can put up a patch if this sounds ok. [~stack] [~apurtell] 
[~lhofhansl] Thoughts?
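
A rough, untested sketch of that second option (re-writing the hregioninfo column 
as part of the post-open meta update); the catalog-family constants are the 
standard ones, everything else here is an assumption:

{code}
// Hypothetical addition to the post-open meta update: include the region info
// column so hbase:meta stays readable even if the split's earlier multi-mutate
// was lost.
Put put = new Put(regionInfo.getRegionName());
put.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER,
    regionInfo.toByteArray());                      // the missing hregioninfo
put.addColumn(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER,
    Bytes.toBytes(serverName.getHostAndPort()));    // as updateRegionLocation does today
put.addColumn(HConstants.CATALOG_FAMILY, HConstants.STARTCODE_QUALIFIER,
    Bytes.toBytes(serverName.getStartcode()));
// ... then apply the Put to hbase:meta as part of postOpenDeployTasks
{code}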

> RegionServer writes invalid META entries for split daughters in some 
> circumstances
> --
>
> Key: HBASE-17069
> URL: https://issues.apache.org/jira/browse/HBASE-17069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.4
>Reporter: Andrew Purtell
>Priority: Critical
> Attachments: daughter_1_d55ef81c2f8299abbddfce0445067830.log, 
> daughter_2_08629d59564726da2497f70451aafcdb.log, logs.tar.gz, 
> parent-393d2bfd8b1c52ce08540306659624f2.log
>
>
> I have been seeing frequent ITBLL failures testing various versions of 1.2.x. 
> Over the lifetime of 1.2.x the following issues have been fixed:
> - HBASE-15315 (Remove always set super user call as high priority)
> - HBASE-16093 (Fix splits failed before creating daughter regions leave meta 
> inconsistent)
> And this one is pending:
> - HBASE-17044 (Fix merge failed before creating merged region leaves meta 
> inconsistent)
> I can apply all of the above to branch-1.2 and still see this failure: 
> *The life of stillborn region d55ef81c2f8299abbddfce0445067830*
> *Master sees SPLITTING_NEW*
> {noformat}
> 2016-11-08 04:23:21,186 INFO  [AM.ZK.Worker-pool2-t82] master.RegionStates: 
> Transition null to {d55ef81c2f8299abbddfce0445067830 state=SPLITTING_NEW, 
> ts=1478579001186, server=node-3.cluster,16020,1478578389506}
> {noformat}
> *The RegionServer creates it*
> {noformat}
> 2016-11-08 04:23:26,035 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for GomnU: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, freeSize=12823716208, maxSize=12838712320, 
> heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, 
> multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false
> 2016-11-08 04:23:26,038 INFO  
> [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created 
> cacheConfig for big: blockCache=LruBlockCache{blockCount=34, 
> currentSize=14996112, 

[jira] [Commented] (HBASE-15437) Response size calculated in RPCServer for warning tooLarge responses does NOT count CellScanner payload

2017-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859140#comment-15859140
 ] 

Hudson commented on HBASE-15437:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK8 #110 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/110/])
HBASE-15437 Response size calculated in RPCServer for warning tooLarge (garyh: 
rev 5a044ffc6a546f03c74ef98aa78262b5a372b065)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java


> Response size calculated in RPCServer for warning tooLarge responses does NOT 
> count CellScanner payload
> ---
>
> Key: HBASE-15437
> URL: https://issues.apache.org/jira/browse/HBASE-15437
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: deepankar
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-15437.patch, HBASE-15437-v1.patch, 
> HBASE-15437-v2.patch, HBASE-15437-v3.patch, HBASE-15437-v4.patch, 
> HBASE-15437-v5.patch, HBASE-15437-v6.patch
>
>
> After HBASE-13158, where we respond to RPCs with cells in the payload, the 
> protobuf response only carries the count of cells to read from the payload. 
> There is a set of features where we log a warning in RPCServer whenever the 
> response is tooLarge, but this size does not consider the sizes of the cells 
> in the payload CellScanner. Code from RPCServer:
> {code}
>   long responseSize = result.getSerializedSize();
>   // log any RPC responses that are slower than the configured warn
>   // response time or larger than configured warning size
>   boolean tooSlow = (processingTime > warnResponseTime && 
> warnResponseTime > -1);
>   boolean tooLarge = (responseSize > warnResponseSize && warnResponseSize 
> > -1);
>   if (tooSlow || tooLarge) {
> // when tagging, we let TooLarge trump TooSmall to keep output simple
> // note that large responses will often also be slow.
> logResponse(new Object[]{param},
> md.getName(), md.getName() + "(" + param.getClass().getName() + 
> ")",
> (tooLarge ? "TooLarge" : "TooSlow"),
> status.getClient(), startTime, processingTime, qTime,
> responseSize);
>   }
> {code}
> Should this feature no longer be supported, or should we add a method to 
> CellScanner (or a new interface) that returns the serialized size (though this 
> might not account for the compression codecs that might be used during the 
> response)? Any other idea how this could be fixed?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17572) HMaster: Caught throwable while processing event C_M_MERGE_REGION

2017-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859139#comment-15859139
 ] 

Hudson commented on HBASE-17572:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK8 #110 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/110/])
Revert "HBASE-17572 HMaster: Caught throwable while processing event (apurtell: 
rev 6effb0dcee4838c611da01d8e180aa571219bf0f)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java


> HMaster: Caught throwable while processing event C_M_MERGE_REGION
> -
>
> Key: HBASE-17572
> URL: https://issues.apache.org/jira/browse/HBASE-17572
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 1.4.0, 1.3.1
>
> Attachments: HBASE-17572-branch-1.3.patch
>
>
> Running ITBLL 1B rows against branch-1.3 compiled against Hadoop 2.7.3 with 
> the noKill monkey policy, I see both masters go down with
> master.HMaster: Caught throwable while processing event C_M_MERGE_REGION
> java.lang.reflect.UndeclaredThrowableException
> In ServerManager#sendRegionsMerge we call ProtobufUtil#mergeRegions, which 
> does a doAs, and the code within that block invokes 
> RSRpcServices#mergeRegions, but is not resilient against 
> RegionOpeningException ("region is opening")
> An UndeclaredThrowableException is "thrown by a method invocation on a proxy 
> instance if its invocation handler's invoke method throws a checked exception 
> (a Throwable that is not assignable to RuntimeException or Error) that is not 
> assignable to any of the exception types declared in the throws clause of the 
> method that was invoked on the proxy instance and dispatched to the 
> invocation handler." 
> (http://docs.oracle.com/javase/7/docs/api/java/lang/reflect/UndeclaredThrowableException.html)
>  
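
To make the quoted mechanism concrete, a minimal, self-contained illustration 
(plain JDK, not HBase code) of a checked exception thrown through a dynamic proxy 
surfacing as UndeclaredThrowableException:

{code}
import java.io.IOException;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.lang.reflect.UndeclaredThrowableException;

public class UndeclaredThrowableDemo {
  // The proxied interface declares no checked exceptions.
  interface Service {
    void mergeRegions();
  }

  public static void main(String[] args) {
    InvocationHandler handler = (proxy, method, params) -> {
      // Simulate the remote call failing with a checked exception.
      throw new IOException("region is opening");
    };
    Service service = (Service) Proxy.newProxyInstance(
        Service.class.getClassLoader(), new Class<?>[] { Service.class }, handler);
    try {
      service.mergeRegions();
    } catch (UndeclaredThrowableException e) {
      // The original checked exception is preserved as the cause.
      System.out.println("cause: " + e.getCause());
    }
  }
}
{code}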
> {noformat}
> 2017-01-31 07:21:17,495 FATAL [MASTER_TABLE_OPERATIONS-node-1:16000-0] 
> master.HMaster: Caught throwable while processing event C_M_MERGE_REGION
> java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1737)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.mergeRegions(ProtobufUtil.java:1990)
> at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionsMerge(ServerManager.java:925)
> at 
> org.apache.hadoop.hbase.master.handler.DispatchMergingRegionHandler.process(DispatchMergingRegionHandler.java:153)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: com.google.protobuf.ServiceException: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.RegionOpeningException):
>  org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region 
> IntegrationTestBigLinkedList,|\xFFnk\x1C\x85<[\x1Ef\xFDE\xF9\xAA\xAC\x08,1485846598043.f56ad22121e872777468020c4452a7c7.
>  is opening on node-2.cluster,16020,1485822382322
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2964)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1139)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.mergeRegions(RSRpcServices.java:1497)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22749)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2355)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:244)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:340)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.mergeRegions(AdminProtos.java:23695)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil$1.run(ProtobufUtil.java:1993)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil$1.run(ProtobufUtil.java:1990)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> 

[jira] [Commented] (HBASE-17599) Use mayHaveMoreCellsInRow instead of isPartial

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859134#comment-15859134
 ] 

Hadoop QA commented on HBASE-17599:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 28s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
9s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 
0s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
49s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 0s 
{color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 12s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 20s 
{color} | {color:green} hbase-protocol in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 49s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 91m 23s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
49s {color} | {color:green} The patch does not generate ASF License 

[jira] [Commented] (HBASE-17608) Add suspend support for RawScanResultConsumer

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859132#comment-15859132
 ] 

Hadoop QA commented on HBASE-17608:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
2s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 46s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 52s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 17s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 84m 37s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 31s 
{color} | {color:green} hbase-endpoint in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
52s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 136m 4s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12851787/HBASE-17608-v1.patch |
| JIRA Issue | HBASE-17608 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 49fbeb9429a0 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / b238901 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5641/testReport/ |
| 

[jira] [Updated] (HBASE-17605) Refactor procedure framework code

2017-02-08 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-17605:
-
Attachment: HBASE-17605.master.006.patch

> Refactor procedure framework code
> -
>
> Key: HBASE-17605
> URL: https://issues.apache.org/jira/browse/HBASE-17605
> Project: HBase
>  Issue Type: Improvement
>  Components: proc-v2
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-17605.master.001.patch, 
> HBASE-17605.master.002.patch, HBASE-17605.master.003.patch, 
> HBASE-17605.master.004.patch, HBASE-17605.master.005.patch, 
> HBASE-17605.master.006.patch, without-patch.png, with-patch.png
>
>
> - Moved locks out of MasterProcedureScheduler#Queue. One Queue object is 
> used for each namespace/table, and there are no more than ~100 of them, so we 
> avoid the complexity arising from all functionality being in one place. 
> MasterProcedureLocking#Lock is the new locking class.
> - Removed NamespaceQueue because it wasn't being used as a Queue 
> (add, peek, poll, etc. functions threw UnsupportedOperationException). It was 
> only used for locks on namespaces. Now that locks have been moved out of the 
> Queue class, it's not needed anymore.
> - Removed RegionEvent, which was there only for locking on regions. 
> Tables/namespaces used locking from the Queue class and regions couldn't (there 
> is no separate proc queue at region level), hence the redundancy. Now that 
> locking is separate, we can use the same for regions too.
> - Removed QueueInterface class. No declarations, except one 
> implementation, which makes the point of having an interface moot.
> - Removed QueueImpl, which was the only concrete implementation of the 
> abstract Queue class. Moved functions to the Queue class itself to avoid an 
> unnecessary level in the inheritance hierarchy.
> - Removed ProcedureEventQueue class, which was just a wrapper around 
> ArrayDeque.
> - Encapsulated table-priority-related stuff in a single class.
> - Removed some unused functions.
> *Perf using MasterProcedureSchedulerPerformanceEvaluation*
> 10 threads, 10M ops, 5 tables
> Without patch:
> 10 regions/table : #yield 584980, addBack time 4.1s, poll time 10s
> 1M regions/table: #yield 16, addBack time 5.9s, poll time 12.9s
> With patch:
> 10 regions/table : #yield 86413, addBack time 4.1s, poll time 8.2s
> 1M regions/table: #yield 9, addBack time 6s, poll time 13s
> *Memory footprint and CPU* (don't compare GC as that depends on life of 
> objects which will be much longer in real-world scenarios)
> Without patch
> !without-patch.png|width=800!
> With patch
> !with-patch.png|width=800!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17280) Add mechanism to control hbase cleaner behavior

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859125#comment-15859125
 ] 

Hadoop QA commented on HBASE-17280:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 1s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 1s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
33s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 
35s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
50s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 44s 
{color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
13m 48s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
47s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s 
{color} | {color:red} hbase-client-jdk1.8.0_121 with JDK v1.8.0_121 generated 1 
new + 13 unchanged - 0 fixed = 14 total (was 13) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s 
{color} | {color:red} hbase-client-jdk1.7.0_80 with JDK v1.7.0_80 generated 1 
new + 13 unchanged - 0 fixed = 14 total (was 13) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s 
{color} | {color:green} hbase-protocol in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 49s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| 

[jira] [Commented] (HBASE-15437) Response size calculated in RPCServer for warning tooLarge responses does NOT count CellScanner payload

2017-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859122#comment-15859122
 ] 

Hudson commented on HBASE-15437:


FAILURE: Integrated in Jenkins build HBase-1.4 #620 (See 
[https://builds.apache.org/job/HBase-1.4/620/])
HBASE-15437 Response size calculated in RPCServer for warning tooLarge (garyh: 
rev f61b840a3162c6126129e486eff621e3cf4e1539)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java


> Response size calculated in RPCServer for warning tooLarge responses does NOT 
> count CellScanner payload
> ---
>
> Key: HBASE-15437
> URL: https://issues.apache.org/jira/browse/HBASE-15437
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: deepankar
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-15437.patch, HBASE-15437-v1.patch, 
> HBASE-15437-v2.patch, HBASE-15437-v3.patch, HBASE-15437-v4.patch, 
> HBASE-15437-v5.patch, HBASE-15437-v6.patch
>
>
> After HBASE-13158, where we respond to RPCs with cells in the payload, the 
> protobuf response only carries the count of cells to read from the payload. 
> There is a set of features where we log a warning in RPCServer whenever the 
> response is tooLarge, but this size does not consider the sizes of the cells 
> in the payload CellScanner. Code from RPCServer:
> {code}
>   long responseSize = result.getSerializedSize();
>   // log any RPC responses that are slower than the configured warn
>   // response time or larger than configured warning size
>   boolean tooSlow = (processingTime > warnResponseTime && 
> warnResponseTime > -1);
>   boolean tooLarge = (responseSize > warnResponseSize && warnResponseSize 
> > -1);
>   if (tooSlow || tooLarge) {
> // when tagging, we let TooLarge trump TooSmall to keep output simple
> // note that large responses will often also be slow.
> logResponse(new Object[]{param},
> md.getName(), md.getName() + "(" + param.getClass().getName() + 
> ")",
> (tooLarge ? "TooLarge" : "TooSlow"),
> status.getClient(), startTime, processingTime, qTime,
> responseSize);
>   }
> {code}
> Should this feature no longer be supported, or should we add a method to 
> CellScanner (or a new interface) that returns the serialized size (though this 
> might not account for the compression codecs that might be used during the 
> response)? Any other idea how this could be fixed?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17608) Add suspend support for RawScanResultConsumer

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859118#comment-15859118
 ] 

Hadoop QA commented on HBASE-17608:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
6s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
26s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 23s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 52s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 20s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 88m 13s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 51s 
{color} | {color:green} hbase-endpoint in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
55s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 146m 28s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12851785/HBASE-17608-v1.patch |
| JIRA Issue | HBASE-17608 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 608842ea1962 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / b238901 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5638/testReport/ |
| 

[jira] [Commented] (HBASE-14123) HBase Backup/Restore Phase 2

2017-02-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859115#comment-15859115
 ] 

stack commented on HBASE-14123:
---

Is the latest up on RB, and if so, what is the link? (I found one searching this 
JIRA but it doesn't seem right.) Thanks.

> HBase Backup/Restore Phase 2
> 
>
> Key: HBASE-14123
> URL: https://issues.apache.org/jira/browse/HBASE-14123
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Blocker
> Fix For: HBASE-7912
>
> Attachments: 14123-master.v14.txt, 14123-master.v15.txt, 
> 14123-master.v16.txt, 14123-master.v17.txt, 14123-master.v18.txt, 
> 14123-master.v19.txt, 14123-master.v20.txt, 14123-master.v21.txt, 
> 14123-master.v24.txt, 14123-master.v25.txt, 14123-master.v27.txt, 
> 14123-master.v28.txt, 14123-master.v29.full.txt, 14123-master.v2.txt, 
> 14123-master.v30.txt, 14123-master.v31.txt, 14123-master.v32.txt, 
> 14123-master.v33.txt, 14123-master.v34.txt, 14123-master.v35.txt, 
> 14123-master.v36.txt, 14123-master.v37.txt, 14123-master.v38.txt, 
> 14123.master.v39.patch, 14123-master.v3.txt, 14123.master.v40.patch, 
> 14123.master.v41.patch, 14123.master.v42.patch, 14123.master.v44.patch, 
> 14123.master.v45.patch, 14123.master.v46.patch, 14123.master.v48.patch, 
> 14123.master.v49.patch, 14123.master.v50.patch, 14123.master.v51.patch, 
> 14123.master.v52.patch, 14123.master.v54.patch, 14123.master.v56.patch, 
> 14123.master.v57.patch, 14123-master.v5.txt, 14123-master.v6.txt, 
> 14123-master.v7.txt, 14123-master.v8.txt, 14123-master.v9.txt, 14123-v14.txt, 
> HBASE-14123-for-7912-v1.patch, HBASE-14123-for-7912-v6.patch, 
> HBASE-14123-v10.patch, HBASE-14123-v11.patch, HBASE-14123-v12.patch, 
> HBASE-14123-v13.patch, HBASE-14123-v15.patch, HBASE-14123-v16.patch, 
> HBASE-14123-v1.patch, HBASE-14123-v2.patch, HBASE-14123-v3.patch, 
> HBASE-14123-v4.patch, HBASE-14123-v5.patch, HBASE-14123-v6.patch, 
> HBASE-14123-v7.patch, HBASE-14123-v9.patch
>
>
> Phase 2 umbrella JIRA. See HBASE-7912 for design document and description. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17599) Use mayHaveMoreCellsInRow instead of isPartial

2017-02-08 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17599:
--
Release Note: The word 'isPartial' is ambiguous, so we introduce a new 
method 'mayHaveMoreCellsInRow' to replace it. The old meaning of 'isPartial' is not 
the same as 'mayHaveMoreCellsInRow': for a batched scan, if the number of returned 
cells equals the batch size, isPartial will be false. After this change the meaning 
of 'isPartial' is the same as 'mayHaveMoreCellsInRow'. This is an incompatible 
change, but it is not likely to break a lot of things, as for a batched scan the 
old 'isPartial' carried only redundant information, i.e. whether the number of 
returned cells reached the batch limit; you already know the number of returned 
cells and the value of batch.

> Use mayHaveMoreCellsInRow instead of isPartial
> --
>
> Key: HBASE-17599
> URL: https://issues.apache.org/jira/browse/HBASE-17599
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17599-branch-1.patch, HBASE-17599.patch, 
> HBASE-17599-v1.patch, HBASE-17599-v2.patch, HBASE-17599-v3.patch
>
>
> For now, if we set scan.allowPartial(true), the partial result returned will 
> have the partial flag set to true. But for scan.setBatch(xx), the partial 
> result returned may not be marked as partial.
> This is an incompatible change, indeed. But I do not think it will introduce 
> any issues, as we just provide more information to the client. The old partial 
> flag for a batched scan is always false, so I do not think anyone can make use 
> of it.
> This is very important for the limited scan to support partial results from 
> the server. If we get a Result whose partial flag is false, then we know we got 
> the whole row. Otherwise we need to fetch one more row to see if the row key 
> has changed, which makes the logic more complicated.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17599) Use mayHaveMoreCellsInRow instead of isPartial

2017-02-08 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17599:
--
Description: 
For now, if we set scan.allowPartial(true), the partial result returned will 
have the partial flag set to true. But for scan.setBatch(xx), the partial 
result returned may not be marked as partial.

This is an incompatible change, indeed. But I do not think it will introduce 
any issues, as we just provide more information to the client. The old partial flag 
for a batched scan is always false, so I do not think anyone can make use of it.

This is very important for the limited scan to support partial results from the 
server. If we get a Result whose partial flag is false, then we know we got the 
whole row. Otherwise we need to fetch one more row to see if the row key has 
changed, which makes the logic more complicated.

  was:
For now, if we set scan.allowPartial(true), the partial result returned will 
have the partial flag set to true. But for scan.setBatch(xx), the partial 
result returned will not be marked as partial.

This is an incompatible change, indeed. But I do not think it will introduce 
any issues, as we just provide more information to the client. The old partial flag 
for a batched scan is always false, so I do not think anyone can make use of it.

This is very important for the limited scan to support partial results from the 
server. If we get a Result whose partial flag is false, then we know we got the 
whole row. Otherwise we need to fetch one more row to see if the row key has 
changed, which makes the logic more complicated.


> Use mayHaveMoreCellsInRow instead of isPartial
> --
>
> Key: HBASE-17599
> URL: https://issues.apache.org/jira/browse/HBASE-17599
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17599-branch-1.patch, HBASE-17599.patch, 
> HBASE-17599-v1.patch, HBASE-17599-v2.patch, HBASE-17599-v3.patch
>
>
> For now, if we set scan.allowPartial(true), the partial result returned will 
> have the partial flag set to true. But for scan.setBatch(xx), the partial 
> result returned may not be marked as partial.
> This is an incompatible change, indeed. But I do not think it will introduce 
> any issues, as we just provide more information to the client. The old partial 
> flag for a batched scan is always false, so I do not think anyone can make use 
> of it.
> This is very important for the limited scan to support partial results from 
> the server. If we get a Result whose partial flag is false, then we know we got 
> the whole row. Otherwise we need to fetch one more row to see if the row key 
> has changed, which makes the logic more complicated.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17605) Refactor procedure framework code

2017-02-08 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-17605:
-
Attachment: HBASE-17605.master.005.patch

> Refactor procedure framework code
> -
>
> Key: HBASE-17605
> URL: https://issues.apache.org/jira/browse/HBASE-17605
> Project: HBase
>  Issue Type: Improvement
>  Components: proc-v2
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-17605.master.001.patch, 
> HBASE-17605.master.002.patch, HBASE-17605.master.003.patch, 
> HBASE-17605.master.004.patch, HBASE-17605.master.005.patch, 
> without-patch.png, with-patch.png
>
>
> - Moved locks out of MasterProcedureScheduler#Queue. One Queue object is 
> used for each namespace/table, and there are no more than ~100 of them, so we 
> avoid the complexity arising from all functionality being in one place. 
> MasterProcedureLocking#Lock is the new locking class.
> - Removed NamespaceQueue because it wasn't being used as a Queue 
> (add, peek, poll, etc. functions threw UnsupportedOperationException). It was 
> only used for locks on namespaces. Now that locks have been moved out of the 
> Queue class, it's not needed anymore.
> - Removed RegionEvent, which was there only for locking on regions. 
> Tables/namespaces used locking from the Queue class and regions couldn't (there 
> is no separate proc queue at region level), hence the redundancy. Now that 
> locking is separate, we can use the same for regions too.
> - Removed QueueInterface class. No declarations, except one 
> implementation, which makes the point of having an interface moot.
> - Removed QueueImpl, which was the only concrete implementation of the 
> abstract Queue class. Moved functions to the Queue class itself to avoid an 
> unnecessary level in the inheritance hierarchy.
> - Removed ProcedureEventQueue class, which was just a wrapper around 
> ArrayDeque.
> - Encapsulated table-priority-related stuff in a single class.
> - Removed some unused functions.
> *Perf using MasterProcedureSchedulerPerformanceEvaluation*
> 10 threads, 10M ops, 5 tables
> Without patch:
> 10 regions/table : #yield 584980, addBack time 4.1s, poll time 10s
> 1M regions/table: #yield 16, addBack time 5.9s, poll time 12.9s
> With patch:
> 10 regions/table : #yield 86413, addBack time 4.1s, poll time 8.2s
> 1M regions/table: #yield 9, addBack time 6s, poll time 13s
> *Memory footprint and CPU* (don't compare GC as that depends on life of 
> objects which will be much longer in real-world scenarios)
> Without patch
> !without-patch.png|width=800!
> With patch
> !with-patch.png|width=800!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16488) Starting namespace and quota services in master startup asynchronizely

2017-02-08 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-16488:
---
Attachment: HBASE-16488.v3-branch-1.patch

> Starting namespace and quota services in master startup asynchronizely
> --
>
> Key: HBASE-16488
> URL: https://issues.apache.org/jira/browse/HBASE-16488
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 2.0.0, 1.3.0, 1.0.3, 1.4.0, 1.1.5, 1.2.2
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
> Attachments: HBASE-16488.v1-branch-1.patch, 
> HBASE-16488.v1-master.patch, HBASE-16488.v2-branch-1.patch, 
> HBASE-16488.v2-branch-1.patch, HBASE-16488.v3-branch-1.patch
>
>
> From time to time, in internal IT tests and from customers, we often see 
> master initialization fail because the namespace table region takes a long time 
> to assign (e.g. sometimes split log takes a long time or hangs; sometimes an RS 
> is temporarily not available; sometimes due to some unknown assignment 
> issue).  In the past, there were proposals to improve this situation, e.g. 
> HBASE-13556 / HBASE-14190 (Assign system tables ahead of user region 
> assignment), HBASE-13557 (Special WAL handling for system tables), or 
> HBASE-14623 (Implement dedicated WAL for system tables).
> This JIRA proposes another way to solve this master initialization failure 
> issue: the namespace service is only used by a handful of operations (e.g. create 
> table / namespace DDL / get namespace API / some RS group DDL).  Only the quota 
> manager depends on it, and quota management is off by default.  Therefore, the 
> namespace service is not really needed for the master to be functional, so we 
> could start the namespace service asynchronously without blocking master startup.
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17608) Add suspend support for RawScanResultConsumer

2017-02-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859067#comment-15859067
 ] 

stack commented on HBASE-17608:
---

Thanks. That helps.

> Add suspend support for RawScanResultConsumer
> -
>
> Key: HBASE-17608
> URL: https://issues.apache.org/jira/browse/HBASE-17608
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, scan
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17608.patch, HBASE-17608-v1.patch
>
>
> Now for the AsyncResultScanner, we can only close the scanner when we reach 
> the cache size limit and open a new scanner later. This breaks region-level 
> consistency.
> For example, you put 10 rows into the region and open a scanner to scan it. 
> The scanner returns 5 rows the first time and the cache is full, so it closes 
> the background scanner. Before you reopen the scanner to fetch more data, the 
> remaining 5 rows are deleted and a compaction makes them gone forever. Then, 
> after you reopen the scanner, you cannot see the remaining 5 rows.
> So here we should keep the scanner open at the RS side to prevent the data 
> below the mvcc read point of this scanner from being deleted.
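To make the suspend idea concrete, here is a minimal illustrative sketch of a 
consumer that pauses fetching when its local cache is full instead of closing 
the server-side scanner. The ScanController/ScanResumer shapes are assumptions 
made for this sketch, not the exact interfaces in the attached patch, and 
results are plain strings to keep it self-contained.

{noformat}
// Assumed stand-ins for what the patch adds; real names/signatures may differ.
interface ScanResumer { void resume(); }
interface ScanController { ScanResumer suspend(); }

class BufferingConsumer {
  private final int maxCached;
  private final java.util.Queue<String> cache = new java.util.ArrayDeque<>();
  private ScanResumer resumer;

  BufferingConsumer(int maxCached) { this.maxCached = maxCached; }

  // Called with each batch of scan results.
  synchronized void onNext(String[] results, ScanController controller) {
    java.util.Collections.addAll(cache, results);
    if (cache.size() >= maxCached && resumer == null) {
      // Suspend instead of close: the RS-side scanner stays open, so its mvcc
      // read point keeps protecting not-yet-delivered rows from compaction.
      resumer = controller.suspend();
    }
  }

  // Called by the application as it drains the cache.
  synchronized String take() {
    String next = cache.poll();
    if (cache.isEmpty() && resumer != null) {
      resumer.resume();   // ask the scan to continue fetching
      resumer = null;
    }
    return next;
  }
}
{noformat}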



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17599) Use mayHaveMoreCellsInRow instead of isPartial

2017-02-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859065#comment-15859065
 ] 

stack commented on HBASE-17599:
---

+1

Mark it incompatible and add your nice explanation above, on why this is OK, as 
the release note. Nice one [~Apache9]


> Use mayHaveMoreCellsInRow instead of isPartial
> --
>
> Key: HBASE-17599
> URL: https://issues.apache.org/jira/browse/HBASE-17599
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17599-branch-1.patch, HBASE-17599.patch, 
> HBASE-17599-v1.patch, HBASE-17599-v2.patch, HBASE-17599-v3.patch
>
>
> For now, if we set scan.allowPartial(true), the partial result returned will 
> have the partial flag set to true. But for scan.setBatch(xx), the partial 
> result returned will not be marked as partial.
> This is an incompatible change, indeed. But I do not think it will introduce 
> any issues, as we just provide more information to the client. The old 
> partial flag for a batched scan is always false, so I do not think anyone can 
> make use of it.
> This is very important for the limited scan to support partial results from 
> the server. If we get a Result whose partial flag is false, then we know we 
> have the whole row. Otherwise we need to fetch one more row to see whether 
> the row key has changed, which makes the logic more complicated.
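This is what the renamed flag buys on the client side. A sketch of the assembly 
loop, assuming a ResultScanner opened with allowPartialResults and the 
Result#mayHaveMoreCellsInRow() method this issue introduces; handleWholeRow is 
a placeholder for whatever the application does with a complete row.

{noformat}
List<Cell> rowCells = new ArrayList<>();
for (Result partial; (partial = scanner.next()) != null; ) {
  rowCells.addAll(partial.listCells());
  if (!partial.mayHaveMoreCellsInRow()) {
    // The server guarantees no more cells for this row, so the row can be
    // emitted now; no need to fetch the next row and compare row keys.
    handleWholeRow(Result.create(rowCells));
    rowCells.clear();
  }
}
{noformat}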



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17608) Add suspend support for RawScanResultConsumer

2017-02-08 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859064#comment-15859064
 ] 

Duo Zhang commented on HBASE-17608:
---

Done sir.

The problem is that even if we carry the mvcc read point over when reopening 
the scanner, we could still miss some data, because we do not leave a record in 
scannerReadPoints at the RS side, so the data may be reclaimed by compaction. 
The proper way is to leave the scanner open at the RS side, i.e., suspend the 
scanner rather than close it.

Thanks.

> Add suspend support for RawScanResultConsumer
> -
>
> Key: HBASE-17608
> URL: https://issues.apache.org/jira/browse/HBASE-17608
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, scan
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17608.patch, HBASE-17608-v1.patch
>
>
> Now for the AsyncResultScanner, we can only close the scanner when we reach 
> the cache size limit and open a new scanner later. This breaks region-level 
> consistency.
> For example, you put 10 rows into the region and open a scanner to scan it. 
> The scanner returns 5 rows the first time and the cache is full, so it closes 
> the background scanner. Before you reopen the scanner to fetch more data, the 
> remaining 5 rows are deleted and a compaction makes them gone forever. Then, 
> after you reopen the scanner, you cannot see the remaining 5 rows.
> So here we should keep the scanner open at the RS side to prevent the data 
> below the mvcc read point of this scanner from being deleted.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17608) Add suspend support for RawScanResultConsumer

2017-02-08 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17608:
--
Description: 
Now for the AsyncResultScanner, we can only close the scanner when we reach the 
cache size limit and open a new scanner later. This breaks region-level 
consistency.

For example, you put 10 rows into the region and open a scanner to scan it. The 
scanner returns 5 rows the first time and the cache is full, so it closes the 
background scanner. Before you reopen the scanner to fetch more data, the 
remaining 5 rows are deleted and a compaction makes them gone forever. Then, 
after you reopen the scanner, you cannot see the remaining 5 rows.

So here we should keep the scanner open at the RS side to prevent the data 
below the mvcc read point of this scanner from being deleted.

  was:Now for the AsyncResultScanner, we can only close the scanner when we 
reach the cache size limit and open a new scanner later. This breaks 
region-level consistency. We should just stop fetching data and leave the 
scanner open at the RS.


> Add suspend support for RawScanResultConsumer
> -
>
> Key: HBASE-17608
> URL: https://issues.apache.org/jira/browse/HBASE-17608
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, scan
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17608.patch, HBASE-17608-v1.patch
>
>
> Now for the AsyncResultScanner, we can only close the scanner when we reach 
> the cache size limit and open a new scanner later. This breaks region-level 
> consistency.
> For example, you put 10 rows into the region and open a scanner to scan it. 
> The scanner returns 5 rows the first time and the cache is full, so it closes 
> the background scanner. Before you reopen the scanner to fetch more data, the 
> remaining 5 rows are deleted and a compaction makes them gone forever. Then, 
> after you reopen the scanner, you cannot see the remaining 5 rows.
> So here we should keep the scanner open at the RS side to prevent the data 
> below the mvcc read point of this scanner from being deleted.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17608) Add suspend support for RawScanResultConsumer

2017-02-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859059#comment-15859059
 ] 

stack commented on HBASE-17608:
---

Can you beef up the description some please [~Apache9]. I'm not sure I'm clear 
on this but: "...ow for the AsyncResultScanner, we can only close the scanner 
if we reach the cache size limit and open a new scanner later. This will breaks 
the region level consistency. ..." Thanks.

> Add suspend support for RawScanResultConsumer
> -
>
> Key: HBASE-17608
> URL: https://issues.apache.org/jira/browse/HBASE-17608
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, scan
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17608.patch, HBASE-17608-v1.patch
>
>
> Now for the AsyncResultScanner, we can only close the scanner when we reach 
> the cache size limit and open a new scanner later. This breaks region-level 
> consistency. We should just stop fetching data and leave the scanner open at 
> the RS.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17603) Rest api for scan should return 404 when table not exists

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859051#comment-15859051
 ] 

Hadoop QA commented on HBASE-17603:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 6s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
21s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 51s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 13s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 0s 
{color} | {color:green} hbase-rest in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 16s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.6 Server=1.12.6 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12851788/17603.v3.txt |
| JIRA Issue | HBASE-17603 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 3c2c9da7e662 3.13.0-100-generic #147-Ubuntu SMP Tue Oct 18 
16:48:51 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / b238901 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5642/testReport/ |
| modules | C: hbase-client 

[jira] [Commented] (HBASE-17603) Rest api for scan should return 404 when table not exists

2017-02-08 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859037#comment-15859037
 ] 

Duo Zhang commented on HBASE-17603:
---

My concern is: even without HBASE-17508, what if we delete the table after we 
get the scanner? Right now the HTTP code returned is 204, which is not 
expected, as 204 means the table exists but just has no data. We need to handle 
this.

> Rest api for scan should return 404 when table not exists
> -
>
> Key: HBASE-17603
> URL: https://issues.apache.org/jira/browse/HBASE-17603
> Project: HBase
>  Issue Type: Bug
>  Components: REST, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Ted Yu
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 17603.v1.txt, 17603.v3.txt
>
>
> This was the first Jenkins build where 
> TestScannerResource#testTableDoesNotExist started failing.
> https://builds.apache.org/job/HBase-1.4/612/jdk=JDK_1_8,label=Hadoop/testReport/junit/org.apache.hadoop.hbase.rest/TestScannerResource/testTableDoesNotExist/
> The test failure can be reproduced locally.
> The test failure seemed to start after HBASE-17508 went in.
> The problem was introduced by HBASE-17508: after HBASE-17508 we no longer 
> contact the RS in getScanner, so the REST get-scanner call will not return 
> 404 either. We should still get a 404 when fetching data from the scanner, 
> but now it returns 204.
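One illustrative way to get the 404 back at fetch time is to translate a 
table-missing failure from the scanner's next() into the HTTP status, instead 
of relying on the open-scanner call. A rough JAX-RS-style sketch, not the 
attached patch; toCellSetModel and the surrounding resource class are 
placeholders.

{noformat}
@GET
public Response next() {
  try {
    Result result = scanner.next();
    if (result == null) {
      // Scanner exhausted on an existing table: keep returning 204.
      return Response.status(Response.Status.NO_CONTENT).build();
    }
    return Response.ok(toCellSetModel(result)).build();
  } catch (TableNotFoundException e) {
    // The table was dropped (or never existed): surface 404 rather than 204.
    return Response.status(Response.Status.NOT_FOUND).build();
  } catch (IOException e) {
    return Response.status(Response.Status.SERVICE_UNAVAILABLE).build();
  }
}
{noformat}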



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17613) avoid copy of family when initializing the FSWALEntry

2017-02-08 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17613:
--
Status: Open  (was: Patch Available)

retry

> avoid copy of family when initializing the FSWALEntry
> -
>
> Key: HBASE-17613
> URL: https://issues.apache.org/jira/browse/HBASE-17613
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17613.v0.patch, HBASE-17613.v1.patch, 
> HBASE-17613.v2.patch, HBASE-17613.v2.patch
>
>
> We should compare the families before cloning it.
> {noformat}
> Set<byte[]> familySet = Sets.newTreeSet(Bytes.BYTES_COMPARATOR);
> for (Cell cell : cells) {
>   if (!CellUtil.matchingFamily(cell, WALEdit.METAFAMILY)) {
>     // TODO: Avoid this clone?
>     familySet.add(CellUtil.cloneFamily(cell));
>   }
> }
> {noformat}
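One way to read "compare the families before cloning": remember the family of 
the previously added cell and only clone when the family actually changes, 
since a WALEdit's cells are typically grouped by family. A rough sketch of that 
idea, not the attached patch:

{noformat}
Set<byte[]> familySet = Sets.newTreeSet(Bytes.BYTES_COMPARATOR);
byte[] lastFamily = null;
for (Cell cell : cells) {
  if (CellUtil.matchingFamily(cell, WALEdit.METAFAMILY)) {
    continue;
  }
  // Clone only when the family differs from the one seen last; consecutive
  // cells usually share a family, so most clones are skipped.
  if (lastFamily == null || !CellUtil.matchingFamily(cell, lastFamily)) {
    lastFamily = CellUtil.cloneFamily(cell);
    familySet.add(lastFamily);
  }
}
{noformat}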



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17613) avoid copy of family when initializing the FSWALEntry

2017-02-08 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17613:
--
Status: Patch Available  (was: Open)

> avoid copy of family when initializing the FSWALEntry
> -
>
> Key: HBASE-17613
> URL: https://issues.apache.org/jira/browse/HBASE-17613
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17613.v0.patch, HBASE-17613.v1.patch, 
> HBASE-17613.v2.patch, HBASE-17613.v2.patch
>
>
> We should compare the families before cloning it.
> {noformat}
> Set<byte[]> familySet = Sets.newTreeSet(Bytes.BYTES_COMPARATOR);
> for (Cell cell : cells) {
>   if (!CellUtil.matchingFamily(cell, WALEdit.METAFAMILY)) {
>     // TODO: Avoid this clone?
>     familySet.add(CellUtil.cloneFamily(cell));
>   }
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17613) avoid copy of family when initializing the FSWALEntry

2017-02-08 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17613:
--
Attachment: HBASE-17613.v2.patch

> avoid copy of family when initializing the FSWALEntry
> -
>
> Key: HBASE-17613
> URL: https://issues.apache.org/jira/browse/HBASE-17613
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17613.v0.patch, HBASE-17613.v1.patch, 
> HBASE-17613.v2.patch, HBASE-17613.v2.patch
>
>
> We should compare the families before cloning it.
> {noformat}
> Set<byte[]> familySet = Sets.newTreeSet(Bytes.BYTES_COMPARATOR);
> for (Cell cell : cells) {
>   if (!CellUtil.matchingFamily(cell, WALEdit.METAFAMILY)) {
>     // TODO: Avoid this clone?
>     familySet.add(CellUtil.cloneFamily(cell));
>   }
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17572) HMaster: Caught throwable while processing event C_M_MERGE_REGION

2017-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859005#comment-15859005
 ] 

Hudson commented on HBASE-17572:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #98 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/98/])
Revert "HBASE-17572 HMaster: Caught throwable while processing event (apurtell: 
rev 6effb0dcee4838c611da01d8e180aa571219bf0f)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java


> HMaster: Caught throwable while processing event C_M_MERGE_REGION
> -
>
> Key: HBASE-17572
> URL: https://issues.apache.org/jira/browse/HBASE-17572
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 1.4.0, 1.3.1
>
> Attachments: HBASE-17572-branch-1.3.patch
>
>
> Running ITBLL 1B rows against branch-1.3 compiled against Hadoop 2.7.3 with 
> the noKill monkey policy, I see both masters go down with
> master.HMaster: Caught throwable while processing event C_M_MERGE_REGION
> java.lang.reflect.UndeclaredThrowableException
> In ServerManager#sendRegionsMerge we call ProtobufUtil#mergeRegions, which 
> does a doAs, and the code within that block invokes 
> RSRpcServices#mergeRegions, but is not resilient against 
> RegionOpeningException ("region is opening")
> An UndeclaredThrowableException is "thrown by a method invocation on a proxy 
> instance if its invocation handler's invoke method throws a checked exception 
> (a Throwable that is not assignable to RuntimeException or Error) that is not 
> assignable to any of the exception types declared in the throws clause of the 
> method that was invoked on the proxy instance and dispatched to the 
> invocation handler." 
> (http://docs.oracle.com/javase/7/docs/api/java/lang/reflect/UndeclaredThrowableException.html)
>  
> {noformat}
> 2017-01-31 07:21:17,495 FATAL [MASTER_TABLE_OPERATIONS-node-1:16000-0] 
> master.HMaster: Caught throwable while processing event C_M_MERGE_REGION
> java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1737)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.mergeRegions(ProtobufUtil.java:1990)
> at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionsMerge(ServerManager.java:925)
> at 
> org.apache.hadoop.hbase.master.handler.DispatchMergingRegionHandler.process(DispatchMergingRegionHandler.java:153)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: com.google.protobuf.ServiceException: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.RegionOpeningException):
>  org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region 
> IntegrationTestBigLinkedList,|\xFFnk\x1C\x85<[\x1Ef\xFDE\xF9\xAA\xAC\x08,1485846598043.f56ad22121e872777468020c4452a7c7.
>  is opening on node-2.cluster,16020,1485822382322
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2964)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1139)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.mergeRegions(RSRpcServices.java:1497)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22749)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2355)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:244)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:340)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.mergeRegions(AdminProtos.java:23695)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil$1.run(ProtobufUtil.java:1993)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil$1.run(ProtobufUtil.java:1990)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> 

[jira] [Updated] (HBASE-17603) Rest api for scan should return 404 when table not exists

2017-02-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17603:
---
Attachment: 17603.v3.txt

Patch v3 doesn't use MetaTableAccessor.

There is a trivial change in TestScannerResource to trigger the tests in the 
hbase-rest module.

> Rest api for scan should return 404 when table not exists
> -
>
> Key: HBASE-17603
> URL: https://issues.apache.org/jira/browse/HBASE-17603
> Project: HBase
>  Issue Type: Bug
>  Components: REST, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Ted Yu
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 17603.v1.txt, 17603.v3.txt
>
>
> This was the first Jenkins build where 
> TestScannerResource#testTableDoesNotExist started failing.
> https://builds.apache.org/job/HBase-1.4/612/jdk=JDK_1_8,label=Hadoop/testReport/junit/org.apache.hadoop.hbase.rest/TestScannerResource/testTableDoesNotExist/
> The test failure can be reproduced locally.
> The test failure seemed to start after HBASE-17508 went in.
> The problem was introduced by HBASE-17508: after HBASE-17508 we no longer 
> contact the RS in getScanner, so the REST get-scanner call will not return 
> 404 either. We should still get a 404 when fetching data from the scanner, 
> but now it returns 204.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17608) Add suspend support for RawScanResultConsumer

2017-02-08 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17608:
--
Attachment: HBASE-17608-v1.patch

Missed the two new files.

> Add suspend support for RawScanResultConsumer
> -
>
> Key: HBASE-17608
> URL: https://issues.apache.org/jira/browse/HBASE-17608
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, scan
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17608.patch, HBASE-17608-v1.patch
>
>
> Now for the AsyncResultScanner, we can only close the scanner when we reach 
> the cache size limit and open a new scanner later. This breaks region-level 
> consistency. We should just stop fetching data and leave the scanner open at 
> the RS.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17608) Add suspend support for RawScanResultConsumer

2017-02-08 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17608:
--
Attachment: (was: HBASE-17608-v1.patch)

> Add suspend support for RawScanResultConsumer
> -
>
> Key: HBASE-17608
> URL: https://issues.apache.org/jira/browse/HBASE-17608
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, scan
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17608.patch
>
>
> Now for the AsyncResultScanner, we can only close the scanner when we reach 
> the cache size limit and open a new scanner later. This breaks region-level 
> consistency. We should just stop fetching data and leave the scanner open at 
> the RS.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16859) Use Bytebuffer pool for non java clients specifically for scans/gets

2017-02-08 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858984#comment-15858984
 ] 

ramkrishna.s.vasudevan commented on HBASE-16859:


I will rebase this patch and go for commit later today.

> Use Bytebuffer pool for non java clients specifically for scans/gets
> 
>
> Key: HBASE-16859
> URL: https://issues.apache.org/jira/browse/HBASE-16859
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16859_V1.patch, HBASE-16859_V2.patch, 
> HBASE-16859_V2.patch, HBASE-16859_V4.patch, HBASE-16859_V5.patch, 
> HBASE-16859_V6.patch
>
>
> In case of non-Java clients we still write the results and header into an 
> on-demand byte[]. This can be changed to use the BBPool (onheap or offheap 
> buffer?).
> But the basic problem is to identify whether the response is for scans/gets. 
> - One easy way to do it is to use the MethodDescriptor per Call and use the 
> name of the MethodDescriptor to identify that it is a scan/get. But this will 
> pollute RpcServer by checking for scan/get type responses.
> - Another way is to always set the result to the cellScanner, but we know 
> that isClientCellBlockSupported is going to be false for non-PB clients. So 
> ignore the cellScanner and go ahead with the results in PB. But this is not 
> clean.
> - The third one is that we already have an RpcCallContext being passed to the 
> RS. In case of scans/gets/multiGets we already set an RpcCallback for the 
> shipped call. So here, on response, we can check whether the callback is not 
> null and check isClientCellBlockSupported. In this case we can get the BB 
> from the pool and write the result and header to that BB. Maybe this looks 
> cleaner?
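The third option boils down to borrowing a reusable buffer instead of 
allocating a fresh byte[] per response. Below is a self-contained sketch of 
that borrow/return pattern; BufferPool is a hypothetical stand-in, not HBase's 
actual ByteBufferPool or RpcServer code, and buffer sizing/overflow handling is 
elided.

{noformat}
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical pool: hands out reusable direct buffers so the server does not
// allocate a new byte[] for every large scan/get response.
final class BufferPool {
  private final ConcurrentLinkedQueue<ByteBuffer> free = new ConcurrentLinkedQueue<>();
  private final int bufferSize;

  BufferPool(int bufferSize, int initialCount) {
    this.bufferSize = bufferSize;
    for (int i = 0; i < initialCount; i++) {
      free.add(ByteBuffer.allocateDirect(bufferSize));
    }
  }

  ByteBuffer borrow() {
    ByteBuffer bb = free.poll();
    // Fall back to a fresh allocation when the pool is drained.
    return bb != null ? bb : ByteBuffer.allocateDirect(bufferSize);
  }

  void giveBack(ByteBuffer bb) {
    bb.clear();            // reset position/limit before the next borrower
    free.offer(bb);
  }
}
{noformat}

On the response path the server would borrow() a buffer, serialize the header 
plus result into it, ship it, and giveBack() it once the response is flushed; 
whether that buffer should live on-heap or off-heap is exactly the open 
question in the description.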



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17605) Refactor procedure framework code

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858982#comment-15858982
 ] 

Hadoop QA commented on HBASE-17605:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
51s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 44s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 44s 
{color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 41s 
{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 59s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
31s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 124m 58s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  Should 
org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler$SchemaLocking 
be a _static_ inner class?  At MasterProcedureScheduler.java:inner class?  At 
MasterProcedureScheduler.java:[lines 900-938] |
| Failed junit tests | 
hadoop.hbase.io.asyncfs.TestSaslFanOutOneBlockAsyncDFSOutput |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12851769/HBASE-17605.master.004.patch
 |
| JIRA Issue | HBASE-17605 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux bb5487fd26d3 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HBASE-17608) Add suspend support for RawScanResultConsumer

2017-02-08 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17608:
--
Attachment: HBASE-17608-v1.patch

Fix findbugs and javadoc issues. Add missing annotations to newly added 
interfaces.

> Add suspend support for RawScanResultConsumer
> -
>
> Key: HBASE-17608
> URL: https://issues.apache.org/jira/browse/HBASE-17608
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, scan
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17608.patch, HBASE-17608-v1.patch
>
>
> Now for the AsyncResultScanner, we can only close the scanner when we reach 
> the cache size limit and open a new scanner later. This breaks region-level 
> consistency. We should just stop fetching data and leave the scanner open at 
> the RS.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17603) Rest api for scan should return 404 when table not exists

2017-02-08 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858973#comment-15858973
 ] 

Duo Zhang commented on HBASE-17603:
---

Yes, it is possible. But if you bypass the meta cache, the meta table will be 
hammered... every scan request will lead to a request to the meta table...

> Rest api for scan should return 404 when table not exists
> -
>
> Key: HBASE-17603
> URL: https://issues.apache.org/jira/browse/HBASE-17603
> Project: HBase
>  Issue Type: Bug
>  Components: REST, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Ted Yu
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 17603.v1.txt
>
>
> This was the first Jenkins build where 
> TestScannerResource#testTableDoesNotExist started failing.
> https://builds.apache.org/job/HBase-1.4/612/jdk=JDK_1_8,label=Hadoop/testReport/junit/org.apache.hadoop.hbase.rest/TestScannerResource/testTableDoesNotExist/
> The test failure can be reproduced locally.
> The test failure seemed to start after HBASE-17508 went in.
> The problem was introduced by HBASE-17508: after HBASE-17508 we no longer 
> contact the RS in getScanner, so the REST get-scanner call will not return 
> 404 either. We should still get a 404 when fetching data from the scanner, 
> but now it returns 204.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17603) Rest api for scan should return 404 when table not exists

2017-02-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858969#comment-15858969
 ] 

Ted Yu commented on HBASE-17603:


I was looking at ConnectionImplementation#locateRegion(), which has a useCache 
parameter.
If we pass true for useCache, isn't there a chance that the underlying table is 
dropped concurrently?

> Rest api for scan should return 404 when table not exists
> -
>
> Key: HBASE-17603
> URL: https://issues.apache.org/jira/browse/HBASE-17603
> Project: HBase
>  Issue Type: Bug
>  Components: REST, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Ted Yu
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 17603.v1.txt
>
>
> This was the first Jenkins build where 
> TestScannerResource#testTableDoesNotExist started failing.
> https://builds.apache.org/job/HBase-1.4/612/jdk=JDK_1_8,label=Hadoop/testReport/junit/org.apache.hadoop.hbase.rest/TestScannerResource/testTableDoesNotExist/
> The test failure can be reproduced locally.
> The test failure seemed to start after HBASE-17508 went in.
> The problem was introduced by HBASE-17508: after HBASE-17508 we no longer 
> contact the RS in getScanner, so the REST get-scanner call will not return 
> 404 either. We should still get a 404 when fetching data from the scanner, 
> but now it returns 204.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17105) Annotate RegionServerObserver

2017-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858957#comment-15858957
 ] 

Hudson commented on HBASE-17105:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2470 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2470/])
HBASE-17105 Annotate RegionServerObserver (enis: rev 
b23890157c10bf2db347bf5396806295e4532825)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionServerObserver.java


> Annotate RegionServerObserver
> -
>
> Key: HBASE-17105
> URL: https://issues.apache.org/jira/browse/HBASE-17105
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.4.0
>
> Attachments: hbase-17105_v1.patch
>
>
> Seems that we have forgotten to annotate RegionServerObserver with 
> InterfaceAudience. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17603) Rest api for scan should return 404 when table not exists

2017-02-08 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858948#comment-15858948
 ] 

Duo Zhang commented on HBASE-17603:
---

And IMO the patch is not acceptable. You should not call MetaTableAccessor 
directly, as it bypasses the meta cache.

> Rest api for scan should return 404 when table not exists
> -
>
> Key: HBASE-17603
> URL: https://issues.apache.org/jira/browse/HBASE-17603
> Project: HBase
>  Issue Type: Bug
>  Components: REST, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Ted Yu
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 17603.v1.txt
>
>
> This was the first Jenkins build where 
> TestScannerResource#testTableDoesNotExist started failing.
> https://builds.apache.org/job/HBase-1.4/612/jdk=JDK_1_8,label=Hadoop/testReport/junit/org.apache.hadoop.hbase.rest/TestScannerResource/testTableDoesNotExist/
> The test failure can be reproduced locally.
> The test failure seemed to start after HBASE-17508 went in.
> The problem was introduced by HBASE-17508: after HBASE-17508 we no longer 
> contact the RS in getScanner, so the REST get-scanner call will not return 
> 404 either. We should still get a 404 when fetching data from the scanner, 
> but now it returns 204.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17603) Rest api for scan should return 404 when table not exists

2017-02-08 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858943#comment-15858943
 ] 

Duo Zhang commented on HBASE-17603:
---

I mean the HTTP code for the REST client.

> Rest api for scan should return 404 when table not exists
> -
>
> Key: HBASE-17603
> URL: https://issues.apache.org/jira/browse/HBASE-17603
> Project: HBase
>  Issue Type: Bug
>  Components: REST, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Ted Yu
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 17603.v1.txt
>
>
> This was the first Jenkins build where 
> TestScannerResource#testTableDoesNotExist started failing.
> https://builds.apache.org/job/HBase-1.4/612/jdk=JDK_1_8,label=Hadoop/testReport/junit/org.apache.hadoop.hbase.rest/TestScannerResource/testTableDoesNotExist/
> The test failure can be reproduced locally.
> The test failure seemed to start after HBASE-17508 went in.
> The problem was introduced by HBASE-17508: after HBASE-17508 we no longer 
> contact the RS in getScanner, so the REST get-scanner call will not return 
> 404 either. We should still get a 404 when fetching data from the scanner, 
> but now it returns 204.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17603) Rest api for scan should return 404 when table not exists

2017-02-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858940#comment-15858940
 ] 

Ted Yu commented on HBASE-17603:


The next() call would encounter an exception in the above scenario.
In current releases, the same behavior is expected.

> Rest api for scan should return 404 when table not exists
> -
>
> Key: HBASE-17603
> URL: https://issues.apache.org/jira/browse/HBASE-17603
> Project: HBase
>  Issue Type: Bug
>  Components: REST, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Ted Yu
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 17603.v1.txt
>
>
> This was the first Jenkins build where 
> TestScannerResource#testTableDoesNotExist started failing.
> https://builds.apache.org/job/HBase-1.4/612/jdk=JDK_1_8,label=Hadoop/testReport/junit/org.apache.hadoop.hbase.rest/TestScannerResource/testTableDoesNotExist/
> The test failure can be reproduced locally.
> The test failure seemed to start after HBASE-17508 went in.
> The problem was introduced by HBASE-17508: after HBASE-17508 we no longer 
> contact the RS in getScanner, so the REST get-scanner call will not return 
> 404 either. We should still get a 404 when fetching data from the scanner, 
> but now it returns 204.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17599) Use mayHaveMoreCellsInRow instead of isPartial

2017-02-08 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17599:
--
Attachment: HBASE-17599-branch-1.patch

Patch for branch-1.

> Use mayHaveMoreCellsInRow instead of isPartial
> --
>
> Key: HBASE-17599
> URL: https://issues.apache.org/jira/browse/HBASE-17599
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17599-branch-1.patch, HBASE-17599.patch, 
> HBASE-17599-v1.patch, HBASE-17599-v2.patch, HBASE-17599-v3.patch
>
>
> For now, if we set scan.allowPartial(true), the partial result returned will 
> have the partial flag set to true. But for scan.setBatch(xx), the partial 
> result returned will not be marked as partial.
> This is an incompatible change, indeed. But I do not think it will introduce 
> any issues, as we just provide more information to the client. The old 
> partial flag for a batched scan is always false, so I do not think anyone can 
> make use of it.
> This is very important for the limited scan to support partial results from 
> the server. If we get a Result whose partial flag is false, then we know we 
> have the whole row. Otherwise we need to fetch one more row to see whether 
> the row key has changed, which makes the logic more complicated.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17280) Add mechanism to control hbase cleaner behavior

2017-02-08 Thread Ajay Jadhav (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Jadhav updated HBASE-17280:

Affects Version/s: (was: 1.2.0)
   1.0.0
   Status: Patch Available  (was: Reopened)

> Add mechanism to control hbase cleaner behavior
> ---
>
> Key: HBASE-17280
> URL: https://issues.apache.org/jira/browse/HBASE-17280
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, hbase, shell
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Ajay Jadhav
>Assignee: Ajay Jadhav
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17280.branch-1.001.patch, 
> HBASE-17280.master.003.patch, HBASE-17280.master.004.patch, 
> HBASE-17280.master.005.patch
>
>
> Cleaner is used to get rid of archived HFiles and old WALs in HBase.
> In the case of heavy workload, cleaner can affect query performance by 
> creating a lot of connections to perform costly reads/ writes against 
> underlying filesystem.
> This patch allows the user to control HBase cleaner behavior by providing 
> shell commands to enable/ disable and manually run it.
> Our main intention with this patch was to avoid running the expensive cleaner 
> chore during peak times. During our experimentation, we saw a lot of HFiles 
> and WAL log related files getting created inside archive dir (didn't see 
> ZKlock related files). Since we were replacing hdfs with S3, these delete 
> calls will take forever to complete.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17280) Add mechanism to control hbase cleaner behavior

2017-02-08 Thread Ajay Jadhav (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858932#comment-15858932
 ] 

Ajay Jadhav commented on HBASE-17280:
-

added branch-1 patch

> Add mechanism to control hbase cleaner behavior
> ---
>
> Key: HBASE-17280
> URL: https://issues.apache.org/jira/browse/HBASE-17280
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, hbase, shell
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Ajay Jadhav
>Assignee: Ajay Jadhav
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17280.branch-1.001.patch, 
> HBASE-17280.master.003.patch, HBASE-17280.master.004.patch, 
> HBASE-17280.master.005.patch
>
>
> Cleaner is used to get rid of archived HFiles and old WALs in HBase.
> In the case of heavy workload, cleaner can affect query performance by 
> creating a lot of connections to perform costly reads/ writes against 
> underlying filesystem.
> This patch allows the user to control HBase cleaner behavior by providing 
> shell commands to enable/ disable and manually run it.
> Our main intention with this patch was to avoid running the expensive cleaner 
> chore during peak times. During our experimentation, we saw a lot of HFiles 
> and WAL log related files getting created inside archive dir (didn't see 
> ZKlock related files). Since we were replacing hdfs with S3, these delete 
> calls will take forever to complete.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17280) Add mechanism to control hbase cleaner behavior

2017-02-08 Thread Ajay Jadhav (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Jadhav updated HBASE-17280:

Attachment: (was: HBASE-17280.branch-1.2.patch)

> Add mechanism to control hbase cleaner behavior
> ---
>
> Key: HBASE-17280
> URL: https://issues.apache.org/jira/browse/HBASE-17280
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, hbase, shell
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Ajay Jadhav
>Assignee: Ajay Jadhav
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17280.master.003.patch, 
> HBASE-17280.master.004.patch, HBASE-17280.master.005.patch
>
>
> Cleaner is used to get rid of archived HFiles and old WALs in HBase.
> In the case of heavy workload, cleaner can affect query performance by 
> creating a lot of connections to perform costly reads/ writes against 
> underlying filesystem.
> This patch allows the user to control HBase cleaner behavior by providing 
> shell commands to enable/ disable and manually run it.
> Our main intention with this patch was to avoid running the expensive cleaner 
> chore during peak times. During our experimentation, we saw a lot of HFiles 
> and WAL log related files getting created inside archive dir (didn't see 
> ZKlock related files). Since we were replacing hdfs with S3, these delete 
> calls will take forever to complete.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17280) Add mechanism to control hbase cleaner behavior

2017-02-08 Thread Ajay Jadhav (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Jadhav updated HBASE-17280:

Attachment: (was: HBASE-17280.v1-branch-1.2.patch)

> Add mechanism to control hbase cleaner behavior
> ---
>
> Key: HBASE-17280
> URL: https://issues.apache.org/jira/browse/HBASE-17280
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, hbase, shell
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Ajay Jadhav
>Assignee: Ajay Jadhav
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17280.branch-1.2.patch, 
> HBASE-17280.master.003.patch, HBASE-17280.master.004.patch, 
> HBASE-17280.master.005.patch
>
>
> Cleaner is used to get rid of archived HFiles and old WALs in HBase.
> In the case of heavy workload, cleaner can affect query performance by 
> creating a lot of connections to perform costly reads/ writes against 
> underlying filesystem.
> This patch allows the user to control HBase cleaner behavior by providing 
> shell commands to enable/ disable and manually run it.
> Our main intention with this patch was to avoid running the expensive cleaner 
> chore during peak times. During our experimentation, we saw a lot of HFiles 
> and WAL log related files getting created inside archive dir (didn't see 
> ZKlock related files). Since we were replacing hdfs with S3, these delete 
> calls will take forever to complete.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17280) Add mechanism to control hbase cleaner behavior

2017-02-08 Thread Ajay Jadhav (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Jadhav updated HBASE-17280:

Attachment: (was: HBASE-17280.v2-branch-1.2.patch)

> Add mechanism to control hbase cleaner behavior
> ---
>
> Key: HBASE-17280
> URL: https://issues.apache.org/jira/browse/HBASE-17280
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, hbase, shell
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Ajay Jadhav
>Assignee: Ajay Jadhav
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17280.branch-1.2.patch, 
> HBASE-17280.master.003.patch, HBASE-17280.master.004.patch, 
> HBASE-17280.master.005.patch
>
>
> Cleaner is used to get rid of archived HFiles and old WALs in HBase.
> In the case of heavy workload, cleaner can affect query performance by 
> creating a lot of connections to perform costly reads/ writes against 
> underlying filesystem.
> This patch allows the user to control HBase cleaner behavior by providing 
> shell commands to enable/ disable and manually run it.
> Our main intention with this patch was to avoid running the expensive cleaner 
> chore during peak times. During our experimentation, we saw a lot of HFiles 
> and WAL log related files getting created inside archive dir (didn't see 
> ZKlock related files). Since we were replacing hdfs with S3, these delete 
> calls will take forever to complete.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17280) Add mechanism to control hbase cleaner behavior

2017-02-08 Thread Ajay Jadhav (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Jadhav updated HBASE-17280:

Attachment: (was: HBASE-17280.branch-2.0.patch)

> Add mechanism to control hbase cleaner behavior
> ---
>
> Key: HBASE-17280
> URL: https://issues.apache.org/jira/browse/HBASE-17280
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, hbase, shell
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Ajay Jadhav
>Assignee: Ajay Jadhav
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17280.master.003.patch, 
> HBASE-17280.master.004.patch, HBASE-17280.master.005.patch
>
>
> Cleaner is used to get rid of archived HFiles and old WALs in HBase.
> In the case of heavy workload, cleaner can affect query performance by 
> creating a lot of connections to perform costly reads/ writes against 
> underlying filesystem.
> This patch allows the user to control HBase cleaner behavior by providing 
> shell commands to enable/ disable and manually run it.
> Our main intention with this patch was to avoid running the expensive cleaner 
> chore during peak times. During our experimentation, we saw a lot of HFiles 
> and WAL log related files getting created inside archive dir (didn't see 
> ZKlock related files). Since we were replacing hdfs with S3, these delete 
> calls will take forever to complete.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Reopened] (HBASE-17280) Add mechanism to control hbase cleaner behavior

2017-02-08 Thread Ajay Jadhav (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Jadhav reopened HBASE-17280:
-

> Add mechanism to control hbase cleaner behavior
> ---
>
> Key: HBASE-17280
> URL: https://issues.apache.org/jira/browse/HBASE-17280
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, hbase, shell
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Ajay Jadhav
>Assignee: Ajay Jadhav
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17280.branch-1.2.patch, 
> HBASE-17280.branch-2.0.patch, HBASE-17280.master.003.patch, 
> HBASE-17280.master.004.patch, HBASE-17280.master.005.patch, 
> HBASE-17280.v1-branch-1.2.patch, HBASE-17280.v2-branch-1.2.patch, 
> HBASE-17280.v2-branch-2.patch
>
>
> Cleaner is used to get rid of archived HFiles and old WALs in HBase.
> In the case of heavy workload, cleaner can affect query performance by 
> creating a lot of connections to perform costly reads/ writes against 
> underlying filesystem.
> This patch allows the user to control HBase cleaner behavior by providing 
> shell commands to enable/ disable and manually run it.
> Our main intention with this patch was to avoid running the expensive cleaner 
> chore during peak times. During our experimentation, we saw a lot of HFiles 
> and WAL log related files getting created inside archive dir (didn't see 
> ZKlock related files). Since we were replacing hdfs with S3, these delete 
> calls will take forever to complete.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17599) Use mayHaveMoreCellsInRow instead of isPartial

2017-02-08 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17599:
--
Attachment: HBASE-17599-v3.patch

Fix comment.

> Use mayHaveMoreCellsInRow instead of isPartial
> --
>
> Key: HBASE-17599
> URL: https://issues.apache.org/jira/browse/HBASE-17599
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17599.patch, HBASE-17599-v1.patch, 
> HBASE-17599-v2.patch, HBASE-17599-v3.patch
>
>
> For now, if we set scan.allowPartial(true), the partial result returned will 
> have the partial flag set to true. But for scan.setBatch(xx), the partial 
> result returned will not be marked as partial.
> This is an incompatible change, indeed. But I do not think it will introduce 
> any issues, as we just provide more information to the client. The old 
> partial flag for a batched scan is always false, so I do not think anyone can 
> make use of it.
> This is very important for the limited scan to support partial results from 
> the server. If we get a Result whose partial flag is false, then we know we 
> have the whole row. Otherwise we need to fetch one more row to see whether 
> the row key has changed, which makes the logic more complicated.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17603) Rest api for scan should return 404 when table not exists

2017-02-08 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858909#comment-15858909
 ] 

Duo Zhang commented on HBASE-17603:
---

What if the table is deleted after the existence check but still before we 
actually send a next request?

> Rest api for scan should return 404 when table not exists
> -
>
> Key: HBASE-17603
> URL: https://issues.apache.org/jira/browse/HBASE-17603
> Project: HBase
>  Issue Type: Bug
>  Components: REST, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Ted Yu
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 17603.v1.txt
>
>
> This was the first Jenkins build where 
> TestScannerResource#testTableDoesNotExist started failing.
> https://builds.apache.org/job/HBase-1.4/612/jdk=JDK_1_8,label=Hadoop/testReport/junit/org.apache.hadoop.hbase.rest/TestScannerResource/testTableDoesNotExist/
> The test failure can be reproduced locally.
> The test failure seemed to start after HBASE-17508 went in.
> The problem was introduced by HBASE-17508. After HBASE-17508 we no longer 
> contact the RS in getScanner, so for REST, creating a scanner will not return 
> 404 either. But we should get a 404 when fetching data from the scanner, and 
> right now it returns 204.
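
For context, a hedged sketch of what this looks like from a REST client's point of view: create a scanner on a table that does not exist, then fetch from it, and check the HTTP status. Host, port, table name, and the exact request body are placeholders; the point is only that the fetch should surface 404 rather than 204.
{code}
// Sketch only: probe the REST scanner endpoints of a running REST server.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RestScanStatusCheck {
  public static void main(String[] args) throws Exception {
    URL create = new URL("http://rest-host:8080/no_such_table/scanner"); // placeholders
    HttpURLConnection put = (HttpURLConnection) create.openConnection();
    put.setRequestMethod("PUT");
    put.setDoOutput(true);
    put.setRequestProperty("Content-Type", "text/xml");
    try (OutputStream out = put.getOutputStream()) {
      out.write("<Scanner batch=\"1\"/>".getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("create scanner status: " + put.getResponseCode());

    String location = put.getHeaderField("Location"); // scanner URL, if creation succeeded
    if (location != null) {
      HttpURLConnection get = (HttpURLConnection) new URL(location).openConnection();
      // Expected per this issue: 404 for a missing table, not 204 (no content).
      System.out.println("fetch status: " + get.getResponseCode());
    }
  }
}
{code}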



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17599) Use mayHaveMoreCellsInRow instead of isPartial

2017-02-08 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858908#comment-15858908
 ] 

Duo Zhang commented on HBASE-17599:
---

{quote}
We should just remove it in the patch for master branch?
{quote}

Oops, forgot to remove it. As [~stack] said, we'd better keep it for a little 
longer time, maybe remove it in 3.0?

Let me remove it and prepare a new patch.

> Use mayHaveMoreCellsInRow instead of isPartial
> --
>
> Key: HBASE-17599
> URL: https://issues.apache.org/jira/browse/HBASE-17599
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17599.patch, HBASE-17599-v1.patch, 
> HBASE-17599-v2.patch
>
>
> For now, if we set scan.allowPartial(true), the partial Result returned will 
> have the partial flag set to true. But for scan.setBatch(xx), the partial 
> Result returned will not be marked as partial.
> This is an incompatible change, indeed. But I do not think it will introduce 
> any issues, as we just provide more information to the client. The old partial 
> flag for a batched scan is always false, so I do not think anyone can make use 
> of it.
> This is very important for the limited scan to support partial results from 
> the server. If we get a Result whose partial flag is false, then we know we 
> have the whole row. Otherwise we need to fetch one more row to see if the row 
> key has changed, which makes the logic more complicated.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17604) Backport HBASE-15437 (fix request and response size metrics) to branch-1

2017-02-08 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling updated HBASE-17604:
--
   Resolution: Fixed
Fix Version/s: 1.3.1
   1.4.0
   Status: Resolved  (was: Patch Available)

Committed to branch-1.3 and branch-1.  Thanks for taking a look [~stack]!

> Backport HBASE-15437 (fix request and response size metrics) to branch-1
> 
>
> Key: HBASE-17604
> URL: https://issues.apache.org/jira/browse/HBASE-17604
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC, metrics
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Fix For: 1.4.0, 1.3.1
>
> Attachments: HBASE-17604.branch-1.001.patch
>
>
> HBASE-15437 fixed request and response size metrics in master.  We should 
> apply the same to branch-1 and related release branches.
> Prior to HBASE-15437, request and response size metrics were only calculated 
> based on the protobuf message serialized size.  This isn't correct when the 
> cell scanner payload is in use.
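
As a rough illustration of what counting the cell scanner payload means, here is a hedged Java sketch that adds an estimated per-cell size on top of the protobuf message size. It is not the committed RpcServer change; CellUtil.estimatedSerializedSizeOf is assumed to be an adequate estimator, and in real code the size would be accumulated where cells are appended, since walking the CellScanner consumes it.
{code}
// Hedged sketch: the protobuf size alone undercounts when cells travel out of band.
import java.io.IOException;
import com.google.protobuf.Message;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellScanner;
import org.apache.hadoop.hbase.CellUtil;

public final class ResponseSizeEstimator {
  private ResponseSizeEstimator() {}

  static long estimateSize(Message result, CellScanner cells) throws IOException {
    long size = result.getSerializedSize();            // protobuf portion only
    if (cells != null) {
      while (cells.advance()) {                        // note: consumes the scanner
        Cell cell = cells.current();
        size += CellUtil.estimatedSerializedSizeOf(cell);
      }
    }
    return size;                                       // compare against the warning threshold
  }
}
{code}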



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17615) Use nonce and procedure v2 for add/remove replication peer

2017-02-08 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-17615:
--

 Summary: Use nonce and procedure v2 for add/remove replication peer
 Key: HBASE-17615
 URL: https://issues.apache.org/jira/browse/HBASE-17615
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 2.0.0
Reporter: Guanghao Zhang






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17611) Thrift 2 per-call latency metrics are capped at ~ 2 seconds

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858885#comment-15858885
 ] 

Hadoop QA commented on HBASE-17611:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 47s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 43s {color} 
| {color:red} hbase-thrift in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
5s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 30s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.thrift.TestThriftServerCmdLine |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12851767/HBASE-17611.001.patch 
|
| JIRA Issue | HBASE-17611 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 13bb1e264adb 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / b238901 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5636/artifact/patchprocess/patch-unit-hbase-thrift.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/5636/artifact/patchprocess/patch-unit-hbase-thrift.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5636/testReport/ |
| modules | C: hbase-thrift U: hbase-thrift |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5636/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Thrift 2 per-call latency metrics are capped at ~ 2 seconds
> ---
>
> Key: 

[jira] [Commented] (HBASE-17040) HBase Spark does not work in Kerberos and yarn-master mode

2017-02-08 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858870#comment-15858870
 ] 

Yi Liang commented on HBASE-17040:
--

Hi Binzi,
  I have met some errors with Kerberos when I use 
org.apache.hadoop.hbase.mapreduce.SyncTable; this command is a MapReduce job 
which compares a 'source' table in cluster A with a 'target' table in cluster B, 
and then puts the diff data from the 'source' table into the 'target' table, see 
HBASE-13639. It works fine if both clusters are unkerberized, and does not work 
when both of them are kerberized (we get the same errors as yours). If your Spark 
app deals with two clusters that are both kerberized, maybe we have a similar 
issue. 



> HBase Spark does not work in Kerberos and yarn-master mode
> --
>
> Key: HBASE-17040
> URL: https://issues.apache.org/jira/browse/HBASE-17040
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.0.0
> Environment: HBase
> Kerberos
> Yarn
> Cloudera
>Reporter: Binzi Cao
>
> We are loading HBase records into an RDD with the hbase-spark library in 
> Cloudera. 
> The hbase-spark code works if we submit the job in client mode, but does 
> not work in cluster mode. We got the exceptions below:
> ```
> 16/11/07 05:43:28 WARN security.UserGroupInformation: 
> PriviledgedActionException as:spark (auth:SIMPLE) 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> 16/11/07 05:43:28 WARN ipc.RpcClientImpl: Exception encountered while 
> connecting to the server : javax.security.sasl.SaslException: GSS initiate 
> failed [Caused by GSSException: No valid credentials provided (Mechanism 
> level: Failed to find any Kerberos tgt)]
> 16/11/07 05:43:28 ERROR ipc.RpcClientImpl: SASL authentication failed. The 
> most likely cause is missing or invalid credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:181)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1242)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:34118)
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1627)
>   at 
> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:92)
>   at 
> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:89)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
>   at 
> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:95)
>   at 
> org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:73)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AuthenticationProtos$AuthenticationService$BlockingStub.getAuthenticationToken(AuthenticationProtos.java:4512)
>   at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:86)
>   at 
> org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:111)
>   at 
> 

[jira] [Commented] (HBASE-17105) Annotate RegionServerObserver

2017-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858868#comment-15858868
 ] 

Hudson commented on HBASE-17105:


FAILURE: Integrated in Jenkins build HBase-1.4 #619 (See 
[https://builds.apache.org/job/HBase-1.4/619/])
HBASE-17105 Annotate RegionServerObserver (enis: rev 
753169a3af9e1d7901cb3b4e59fb967876b2d67b)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionServerObserver.java


> Annotate RegionServerObserver
> -
>
> Key: HBASE-17105
> URL: https://issues.apache.org/jira/browse/HBASE-17105
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.4.0
>
> Attachments: hbase-17105_v1.patch
>
>
> Seems that we have forgotten to annotate RegionServerObserver with 
> InterfaceAudience. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17437) Support specifying a WAL directory outside of the root directory

2017-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858867#comment-15858867
 ] 

Hudson commented on HBASE-17437:


FAILURE: Integrated in Jenkins build HBase-1.4 #619 (See 
[https://builds.apache.org/job/HBase-1.4/619/])
HBASE-17437 Support specifying a WAL directory outside of the root (enis: rev 
8f6388503b626e2da9a048ae3f05f4164395bd8d)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/wal/IOTestProvider.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/DefaultWALProvider.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/wal/WALPerformanceEvaluation.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollAbort.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestWALProcedureStoreOnHDFS.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestFSHLog.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSyncUp.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALFactory.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestWALObserver.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestSeekBeforeWithReverseScan.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALActionsListener.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestWALPlayer.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestDefaultWALProvider.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterFromRegionSide.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/fs/TestBlockReorder.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestWALRecordReader.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java
* (edit) hbase-common/src/main/resources/hbase-default.xml
* (edit) 
hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALPerformanceEvaluation.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java
* (edit) 
hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALLoaderPerformanceEvaluation.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitLogWorker.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFileSystemWithWALDir.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/DisabledWALProvider.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestCompactedHFilesDischarger.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALRootDir.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* (edit) 
hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/ProcedureTestingUtility.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionServerBulkLoad.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALSplit.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/WALLink.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java


> Support specifying a WAL directory outside of the root directory
> 
>
> Key: HBASE-17437
> URL: https://issues.apache.org/jira/browse/HBASE-17437
> Project: HBase
>  Issue Type: Improvement
>  Components: Filesystem Integration, wal
>Affects Versions: 1.2.4
>Reporter: Yishan Yang
>Assignee: Zach York
>  Labels: patch
>

[jira] [Commented] (HBASE-17572) HMaster: Caught throwable while processing event C_M_MERGE_REGION

2017-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858866#comment-15858866
 ] 

Hudson commented on HBASE-17572:


FAILURE: Integrated in Jenkins build HBase-1.4 #619 (See 
[https://builds.apache.org/job/HBase-1.4/619/])
Revert "HBASE-17572 HMaster: Caught throwable while processing event (apurtell: 
rev fd062011f00eb9131fba78f6b6dcc1992002553c)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java


> HMaster: Caught throwable while processing event C_M_MERGE_REGION
> -
>
> Key: HBASE-17572
> URL: https://issues.apache.org/jira/browse/HBASE-17572
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 1.4.0, 1.3.1
>
> Attachments: HBASE-17572-branch-1.3.patch
>
>
> Running ITBLL 1B rows against branch-1.3 compiled against Hadoop 2.7.3 with 
> the noKill monkey policy, I see both masters go down with
> master.HMaster: Caught throwable while processing event C_M_MERGE_REGION
> java.lang.reflect.UndeclaredThrowableException
> In ServerManager#sendRegionsMerge we call ProtobufUtil#mergeRegions, which 
> does a doAs, and the code within that block invokes 
> RSRpcServices#mergeRegions, but is not resilient against 
> RegionOpeningException ("region is opening")
> An UndeclaredThrowableException is "thrown by a method invocation on a proxy 
> instance if its invocation handler's invoke method throws a checked exception 
> (a Throwable that is not assignable to RuntimeException or Error) that is not 
> assignable to any of the exception types declared in the throws clause of the 
> method that was invoked on the proxy instance and dispatched to the 
> invocation handler." 
> (http://docs.oracle.com/javase/7/docs/api/java/lang/reflect/UndeclaredThrowableException.html)
>  
> {noformat}
> 2017-01-31 07:21:17,495 FATAL [MASTER_TABLE_OPERATIONS-node-1:16000-0] 
> master.HMaster: Caught throwable while processing event C_M_MERGE_REGION
> java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1737)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.mergeRegions(ProtobufUtil.java:1990)
> at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionsMerge(ServerManager.java:925)
> at 
> org.apache.hadoop.hbase.master.handler.DispatchMergingRegionHandler.process(DispatchMergingRegionHandler.java:153)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: com.google.protobuf.ServiceException: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.RegionOpeningException):
>  org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region 
> IntegrationTestBigLinkedList,|\xFFnk\x1C\x85<[\x1Ef\xFDE\xF9\xAA\xAC\x08,1485846598043.f56ad22121e872777468020c4452a7c7.
>  is opening on node-2.cluster,16020,1485822382322
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2964)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1139)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.mergeRegions(RSRpcServices.java:1497)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22749)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2355)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:244)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:340)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.mergeRegions(AdminProtos.java:23695)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil$1.run(ProtobufUtil.java:1993)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil$1.run(ProtobufUtil.java:1990)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> 
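
To make the wrapping behavior described above concrete, here is a minimal, self-contained Java demonstration (plain JDK code, not HBase code): a dynamic proxy whose handler throws a checked exception that the invoked interface method does not declare, so the caller sees UndeclaredThrowableException with the real cause attached.
{code}
import java.io.IOException;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.lang.reflect.UndeclaredThrowableException;

public class UndeclaredThrowableDemo {
  interface Service {
    void merge(); // declares no checked exceptions
  }

  public static void main(String[] args) {
    InvocationHandler handler = (proxy, method, unused) -> {
      // Stands in for a remote call failing with a checked exception,
      // e.g. an IOException such as RegionOpeningException.
      throw new IOException("region is opening");
    };
    Service s = (Service) Proxy.newProxyInstance(
        Service.class.getClassLoader(), new Class<?>[] { Service.class }, handler);
    try {
      s.merge();
    } catch (UndeclaredThrowableException e) {
      System.out.println("wrapped cause: " + e.getCause()); // the IOException
    }
  }
}
{code}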

[jira] [Updated] (HBASE-17605) Refactor procedure framework code

2017-02-08 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-17605:
-
Attachment: HBASE-17605.master.004.patch

> Refactor procedure framework code
> -
>
> Key: HBASE-17605
> URL: https://issues.apache.org/jira/browse/HBASE-17605
> Project: HBase
>  Issue Type: Improvement
>  Components: proc-v2
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-17605.master.001.patch, 
> HBASE-17605.master.002.patch, HBASE-17605.master.003.patch, 
> HBASE-17605.master.004.patch, without-patch.png, with-patch.png
>
>
> - Moved locks out of MasterProcedureScheduler#Queue. One Queue object is 
> used for each namespace/table, and there are no more than about 100 of them, 
> so we avoid the complexity arising from all functionality being in one place. 
> MasterProcedureLocking#Lock is the new locking class.
> - Removed NamespaceQueue because it wasn't being used as a Queue 
> (add, peek, poll, etc. threw UnsupportedOperationException). It was only 
> used for locks on namespaces. Now that locks have been moved out of the 
> Queue class, it's not needed anymore.
> - Removed RegionEvent, which existed only for locking on regions. 
> Tables/namespaces used locking from the Queue class and regions couldn't 
> (there is no separate proc queue at the region level), hence the redundancy. 
> Now that locking is separate, we can use the same mechanism for regions too.
> - Removed the QueueInterface class. No declarations, except one 
> implementation, which makes the point of having an interface moot.
> - Removed QueueImpl, which was the only concrete implementation of the 
> abstract Queue class. Moved its functions into the Queue class itself to 
> avoid an unnecessary level in the inheritance hierarchy.
> - Removed the ProcedureEventQueue class, which was just a wrapper around 
> ArrayDeque.
> - Encapsulated table-priority related logic in a single class.
> - Removed some unused functions.
> *Perf using MasterProcedureSchedulerPerformanceEvaluation*
> 10 threads, 10M ops, 5 tables
> Without patch:
> 10 regions/table : #yield 584980, addBack time 4.1s, poll time 10s
> 1M regions/table: #yield 16, addBack time 5.9s, poll time 12.9s
> With patch:
> 10 regions/table : #yield 86413, addBack time 4.1s, poll time 8.2s
> 1M regions/table: #yield 9, addBack time 6s, poll time 13s
> *Memory footprint and CPU* (don't compare GC as that depends on life of 
> objects which will be much longer in real-world scenarios)
> Without patch
> !without-patch.png|width=800!
> With patch
> !with-patch.png|width=800!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17611) Thrift 2 per-call latency metrics are capped at ~ 2 seconds

2017-02-08 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling updated HBASE-17611:
--
Attachment: HBASE-17611.001.patch

Attaching a simple fix and test for coverage.

> Thrift 2 per-call latency metrics are capped at ~ 2 seconds
> ---
>
> Key: HBASE-17611
> URL: https://issues.apache.org/jira/browse/HBASE-17611
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, Thrift
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Fix For: 1.3.1
>
> Attachments: HBASE-17611.001.patch
>
>
> Thrift 2 latency metrics are measured in nanoseconds.  However, the duration 
> used for per-method latencies is cast to an int, meaning the values are 
> capped at 2.147 seconds.  Let's use a long instead.
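
A tiny self-contained Java demonstration of the truncation (the shape of the bug as described above, not the actual Thrift handler code): a nanosecond duration cast to int wraps past Integer.MAX_VALUE, which is roughly 2.147 seconds' worth of nanoseconds.
{code}
// Demo: why an int-typed nanosecond duration tops out around 2.147 seconds.
import java.util.concurrent.TimeUnit;

public class LatencyCastDemo {
  public static void main(String[] args) {
    long fiveSecondsNanos = TimeUnit.SECONDS.toNanos(5); // 5_000_000_000
    int capped = (int) fiveSecondsNanos;                 // overflows to 705_032_704
    System.out.println("as long: " + fiveSecondsNanos + " ns");
    System.out.println("as int : " + capped + " ns");
    System.out.println("Integer.MAX_VALUE = "
        + Integer.MAX_VALUE / 1_000_000_000.0 + " s of nanoseconds"); // ~2.147 s
  }
}
{code}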



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17611) Thrift 2 per-call latency metrics are capped at ~ 2 seconds

2017-02-08 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling updated HBASE-17611:
--
Status: Patch Available  (was: Open)

> Thrift 2 per-call latency metrics are capped at ~ 2 seconds
> ---
>
> Key: HBASE-17611
> URL: https://issues.apache.org/jira/browse/HBASE-17611
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, Thrift
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Fix For: 1.3.1
>
> Attachments: HBASE-17611.001.patch
>
>
> Thrift 2 latency metrics are measured in nanoseconds.  However, the duration 
> used for per-method latencies is cast to an int, meaning the values are 
> capped at 2.147 seconds.  Let's use a long instead.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17603) Rest api for scan should return 404 when table not exists

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858765#comment-15858765
 ] 

Hadoop QA commented on HBASE-17603:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 5s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
59s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 5s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 13s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
8s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 58s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12851755/17603.v1.txt |
| JIRA Issue | HBASE-17603 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 0b7cca7933e9 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / b238901 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5635/testReport/ |
| modules | C: hbase-client U: hbase-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5635/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Rest api for scan should return 404 when table not exists
> -
>
> Key: HBASE-17603
> URL: https://issues.apache.org/jira/browse/HBASE-17603

[jira] [Updated] (HBASE-14416) HBase Backup/Restore Phase 3: Create plugin hooks for Backup tools

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14416:
--
Affects Version/s: (was: 2.0.0)

> HBase Backup/Restore Phase 3: Create plugin hooks for Backup tools
> --
>
> Key: HBASE-14416
> URL: https://issues.apache.org/jira/browse/HBASE-14416
> Project: HBase
>  Issue Type: New Feature
>Reporter: Geoffrey Jacoby
>  Labels: backup
> Fix For: HBASE-7912
>
>
> Different organizations may already have their own backup tools for HBase, or 
> wish to develop them in the future for their own particular use cases. It 
> would be useful if HBase supported hooks to integrate those tools so that 
> they could be configured and run in a standard way.
> In particular, the administrative interface to start a backup, restore, or 
> merge should be decoupled from any particular implementation as much as 
> possible, so that any implementation with similar capabilities can be 
> substituted via configuration without needing to fork or modify the code. 
> Ideally, it will also be possible to decouple the various components so that 
> implementations can be mixed and matched. (For example, one could use the 
> standard backup's functionality to track what needs to be backed up, but use 
> a custom plugin to change the logic or storage format of the backup operation 
> itself.)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16198) Enhance backup history command

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-16198:
--
Affects Version/s: (was: 2.0.0)

> Enhance backup history command
> --
>
> Key: HBASE-16198
> URL: https://issues.apache.org/jira/browse/HBASE-16198
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: HBASE-7912
>
> Attachments: HBASE-16198-v1.patch, HBASE-16198-v2.patch, 
> HBASE-16198-v3.patch, HBASE-16198-v4.patch, HBASE-16198-v5.patch
>
>
> We need per-table history and the ability to run the command on a fully 
> disabled cluster (using info from the backup file system only).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14138) HBase Backup/Restore Phase 3: Security

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14138:
--
Affects Version/s: (was: 2.0.0)

> HBase Backup/Restore Phase 3: Security
> --
>
> Key: HBASE-14138
> URL: https://issues.apache.org/jira/browse/HBASE-14138
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Ted Yu
>  Labels: backup
> Fix For: HBASE-7912
>
>
> Security is not supported. Only an authorized user (GLOBAL ADMIN) must be 
> allowed to perform backup/restore. See HBASE-7367 for a good discussion of the 
> snapshot security model. 
> * Backup between two secured (Kerberos) clusters (Cross-realm authentication 
> is required to use distcp/export snapshot between two secured cluster?)
> * Backup between secured (Kerberos) and secured non-Kerberos cluster (AWS)
> * Backup between secured and unsecured cluster
> * Restore between two Kerberos clusters
> * Restore from non-Kerberos (AWS) to Kerberos
> * Restore from unsecured to secured (Kerberos)
> * Users must be able to run backup/restore for a table if they have admin 
> privileges for that table (?)
> Some relevant JIRAs
> https://issues.apache.org/jira/browse/HADOOP-8828
> https://issues.apache.org/jira/browse/HDFS-6776
> Links:
> http://henning.kropponline.de/2015/10/04/distcp-between-kerberized-and-none-kerberized-cluster/
> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_Sys_Admin_Guides/content/distcp_and_security_settings.html
> https://discuss.zendesk.com/hc/en-us/articles/203176207-Setting-up-a-kerberos-cross-realm-trust-for-distcp
> http://www.cloudera.com/documentation/enterprise/5-5-x/topics/cdh_admin_distcp_secure_insecure.html
> https://www.cloudera.com/documentation/enterprise/5-4-x/topics/cdh_admin_distcp_data_cluster_migrate.html
> https://www.cloudera.com/documentation/enterprise/5-7-x/topics/cdh_admin_distcp_data_cluster_migrate.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14141) HBase Backup/Restore Phase 3: Filter WALs on backup to include only edits from backup tables

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14141:
--
Affects Version/s: (was: 2.0.0)

> HBase Backup/Restore Phase 3: Filter WALs on backup to include only edits 
> from backup tables
> 
>
> Key: HBASE-14141
> URL: https://issues.apache.org/jira/browse/HBASE-14141
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: HBASE-7912
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15998) Cancel restore operation support

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-15998:
--
Affects Version/s: (was: 2.0.0)

> Cancel restore operation support
> 
>
> Key: HBASE-15998
> URL: https://issues.apache.org/jira/browse/HBASE-15998
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Ted Yu
>  Labels: backup
> Fix For: HBASE-7912
>
>
> This issue is to add support for the user to cancel an ongoing restore operation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15997) Cancel backup operation support

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-15997:
--
Affects Version/s: (was: 2.0.0)

> Cancel backup operation support
> ---
>
> Key: HBASE-15997
> URL: https://issues.apache.org/jira/browse/HBASE-15997
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>  Labels: backup
> Fix For: HBASE-7912
>
>
> Cancel operation support for lengthy backups.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14440) Restore to snapshot

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14440:
--
Affects Version/s: (was: 2.0.0)

> Restore to snapshot
> ---
>
> Key: HBASE-14440
> URL: https://issues.apache.org/jira/browse/HBASE-14440
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>  Labels: backup
> Fix For: HBASE-7912
>
>
> Good-to-have feature: restore a backup to a snapshot. This will allow 
> massaging the data with a custom M/R job over the snapshot. Restore a time 
> range only, or a particular key range (or ranges).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14415) Full backup based on Snapshot v2

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14415:
--
Affects Version/s: (was: 2.0.0)

> Full backup based on Snapshot v2
> 
>
> Key: HBASE-14415
> URL: https://issues.apache.org/jira/browse/HBASE-14415
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: HBASE-7912
>
>
> Full backup is based on a table snapshot. The current implementation of table 
> snapshots is not robust at scale, and the snapshot verification stage may be 
> very slow for tables with a large number of regions. If a region gets split 
> during the snapshot, verification and the snapshot will fail.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14135) HBase Backup/Restore Phase 3: Merge backup images

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14135:
--
Affects Version/s: (was: 2.0.0)

> HBase Backup/Restore Phase 3: Merge backup images
> -
>
> Key: HBASE-14135
> URL: https://issues.apache.org/jira/browse/HBASE-14135
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: HBASE-7912
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14417) Incremental backup and bulk loading

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14417:
--
Affects Version/s: (was: 2.0.0)

> Incremental backup and bulk loading
> ---
>
> Key: HBASE-14417
> URL: https://issues.apache.org/jira/browse/HBASE-14417
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Ted Yu
>Priority: Critical
>  Labels: backup
> Fix For: HBASE-7912
>
> Attachments: 14417-tbl-ext.v10.txt, 14417-tbl-ext.v11.txt, 
> 14417-tbl-ext.v14.txt, 14417-tbl-ext.v18.txt, 14417-tbl-ext.v9.txt, 
> 14417.v11.txt, 14417.v13.txt, 14417.v1.txt, 14417.v21.txt, 14417.v23.txt, 
> 14417.v24.txt, 14417.v25.txt, 14417.v2.txt, 14417.v6.txt
>
>
> Currently, incremental backup is based on WAL files. Bulk data loading 
> bypasses WALs for obvious reasons, breaking incremental backups. The only way 
> to continue backups after bulk loading is to create a new full backup of the 
> table. This may not be feasible for customers who do bulk loading regularly 
> (say, every day).
> Here is the review board:
> https://reviews.apache.org/r/54258/
> Google doc for design:
> https://docs.google.com/document/d/1ACCLsecHDvzVSasORgqqRNrloGx4mNYIbvAU7lq5lJE



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16064) delete backup command shows HDFS permission error when deleting the intended backup

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-16064:
--
Affects Version/s: (was: 2.0.0)

> delete backup command shows HDFS permission error when deleting the intended 
> backup
> ---
>
> Key: HBASE-16064
> URL: https://issues.apache.org/jira/browse/HBASE-16064
> Project: HBase
>  Issue Type: Bug
>Reporter: Romil Choksi
>Assignee: Ted Yu
>  Labels: backup
> Fix For: HBASE-7912
>
> Attachments: 16064.v1.txt, 16064.v2.txt, 16064.v3.txt
>
>
> The HBase delete backup command shows an error after successfully deleting the 
> intended backup:
> {code}
> hbase@cluster-name:~$ hbase backup delete backup_1465950334243
> 2016-06-15 00:36:18,883 INFO  [main] util.BackupClientUtil: No data has been 
> found in 
> hdfs://cluster-name:8020/user/hbase/backup_1465950334243/default/table_ttx7w0jgw8.
> 2016-06-15 00:36:18,894 ERROR [main] util.BackupClientUtil: Cleaning up 
> backup data of backup_1465950334243 at hdfs://cluster-name:8020/user/hbase 
> failed due to Permission denied: user=hbase, access=WRITE, 
> inode="/user/hbase":hdfs:hdfs:drwxr-xr-x
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:216)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:92)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3822)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1071)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:619)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
> .
> {code}
> Backup has been successfully deleted but the backup root dir under 
> /user/hbase dir still persists
> {code}
> hbase@cluster-name:~$ hdfs dfs -ls /user/hbase
> Found 6 items
> drwx--   - hbase hbase  0 2016-06-15 00:26 /user/hbase/.staging
> drwxr-xr-x   - hbase hbase  0 2016-06-15 00:36 
> /user/hbase/backup_1465950334243
> drwxr-xr-x   - hbase hbase  0 2016-06-15 00:26 
> /user/hbase/hbase-staging
> {code}
> /user/hbase/backup_1465950334243 is now empty though



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16197) Enhance backup delete command

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-16197:
--
Affects Version/s: (was: 2.0.0)

> Enhance backup delete command
> -
>
> Key: HBASE-16197
> URL: https://issues.apache.org/jira/browse/HBASE-16197
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: HBASE-7912
>
> Attachments: 16197-suggest.v2.patch, HBASE-16197-v1.patch, 
> HBASE-16197-v2.patch, HBASE-16197-v4.patch, HBASE-16197-v5.patch, 
> HBASE-16197-v6.patch, HBASE-16197-v7.patch, HBASE-16197-v8.patch, 
> HBASE-16197-v9.patch
>
>
> Currently, the backup delete command deletes only the physical 
> files/directories in the backup destination. It does not update the backup 
> system table (incremental backup table set) and it does not delete related 
> backup images (previous ones), thus creating a hole in the dependency chain. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16039) Incremental backup action failed with NPE when table in full backup is deleted in between

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-16039:
--
Affects Version/s: (was: 2.0.0)

> Incremental backup action failed with NPE when table in full backup is 
> deleted in between
> -
>
> Key: HBASE-16039
> URL: https://issues.apache.org/jira/browse/HBASE-16039
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Reporter: Romil Choksi
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: HBASE-7912
>
> Attachments: HBASE-16039-v1.patch
>
>
> Incremental backup action failed with NPE.
> Creating a full backup went fine but creating an incremental backup failed
> {code}
> hbase@cluster_name:~$ hbase backup create incremental 
> hdfs://cluster-name:8020/user/hbase "table_02uvzkggro"
> 2016-06-15 06:38:28,605 INFO  [main] util.BackupClientUtil: Using existing 
> backup root dir: hdfs://cluster-name:8020/user/hbase
> 2016-06-15 06:38:30,483 ERROR [main] util.AbstractHBaseTool: Error running 
> command-line tool
> org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> at 
> org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.cleanupTargetDir(FullTableBackupProcedure.java:198)
> at 
> org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.failBackup(FullTableBackupProcedure.java:276)
> at 
> org.apache.hadoop.hbase.backup.master.IncrementalTableBackupProcedure.executeFromState(IncrementalTableBackupProcedure.java:186)
> at 
> org.apache.hadoop.hbase.backup.master.IncrementalTableBackupProcedure.executeFromState(IncrementalTableBackupProcedure.java:54)
> at 
> org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:107)
> at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:443)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:934)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:736)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:689)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$200(ProcedureExecutor.java:73)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$1.run(ProcedureExecutor.java:416)
> {code}
> from Master log
> {code}
> 2016-06-15 06:38:29,875 ERROR [ProcedureExecutorThread-3] 
> master.FullTableBackupProcedure: Unexpected exception in incremental-backup: 
> incremental copy 
> backup_1465972709112org.apache.hadoop.hbase.TableInfoMissingException: No 
> table descriptor file under 
> hdfs://cluster-name:8020/apps/hbase/data/data/default/table_pjtxpp3r74
> org.apache.hadoop.hbase.backup.impl.BackupException: 
> org.apache.hadoop.hbase.TableInfoMissingException: No table descriptor file 
> under hdfs://cluster-name:8020/apps/hbase/data/data/default/table_pjtxpp3r74
> at 
> org.apache.hadoop.hbase.backup.util.BackupServerUtil.copyTableRegionInfo(BackupServerUtil.java:196)
> at 
> org.apache.hadoop.hbase.backup.master.IncrementalTableBackupProcedure.executeFromState(IncrementalTableBackupProcedure.java:178)
> at 
> org.apache.hadoop.hbase.backup.master.IncrementalTableBackupProcedure.executeFromState(IncrementalTableBackupProcedure.java:54)
> at 
> org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:107)
> at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:443)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:934)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:736)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:689)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$200(ProcedureExecutor.java:73)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$1.run(ProcedureExecutor.java:416)
> Caused by: org.apache.hadoop.hbase.TableInfoMissingException: No table 
> descriptor file under 
> hdfs://cluster-name:8020/apps/hbase/data/data/default/table_pjtxpp3r74
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:509)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:496)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:476)
> at 
> org.apache.hadoop.hbase.backup.util.BackupServerUtil.copyTableRegionInfo(BackupServerUtil.java:172)
> ... 9 more
> 2016-06-15 06:38:29,875 INFO  

[jira] [Updated] (HBASE-15986) BackupServerUtil.getWALFilesOlderThan() should handle oldWALs properly

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-15986:
--
Affects Version/s: (was: 2.0.0)

> BackupServerUtil.getWALFilesOlderThan() should handle oldWALs properly
> --
>
> Key: HBASE-15986
> URL: https://issues.apache.org/jira/browse/HBASE-15986
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: HBASE-7912
>
> Attachments: HBASE-15986-v1.patch, HBASE-15986-v2.patch, 
> HBASE-15986-v3.patch
>
>
> I was running TestBackupDescribe where I saw the following being passed to 
> BackupServerUtil.getWALFilesOlderThan():
> hdfs://localhost:59278/user/tyu/test-data/a42ac21a-2097-49d8-9c0e-86991e104e4e/oldWALs/hregion-05273083.default.146587177
> Since the filename doesn't represent a server, the following call throws an 
> exception:
> {code}
> ServerName.parsePort(String) line: 150
> BackupClientUtil.parseHostFromOldLog(Path) line: 136
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15928) hbase backup delete command does not remove backup root dir from hdfs

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-15928:
--
Affects Version/s: (was: 2.0.0)

> hbase backup delete command does not remove backup root dir from hdfs
> -
>
> Key: HBASE-15928
> URL: https://issues.apache.org/jira/browse/HBASE-15928
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: backup
> Fix For: HBASE-7912
>
> Attachments: 15928.v1.txt
>
>
> [~romil.choksi] reported the following bug.
> hbase backup delete command successfully deletes backup
> {code}
> hbase@hbase-backup-test-5:~> hbase backup delete backup_1464217940560
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Delete backup failed: no information found for backupID=delete
> 2016-05-26 01:44:40,077 INFO  [main] impl.BackupUtil: No data has been found 
> in 
> hdfs://hbase-backup-test-5.openstacklocal:8020/user/hbase/backup_1464217940560/default/t1.
> 2016-05-26 01:44:40,081 INFO  [main] impl.BackupUtil: No data has been found 
> in 
> hdfs://hbase-backup-test-5.openstacklocal:8020/user/hbase/backup_1464217940560/default/t2.
> 2016-05-26 01:44:40,085 INFO  [main] impl.BackupUtil: No data has been found 
> in 
> hdfs://hbase-backup-test-5.openstacklocal:8020/user/hbase/backup_1464217940560/default/t3.
> Delete backup for backupID=backup_1464217940560 completed.
> {code}
> Listing the backup directory of the backup that was just deleted
> {code}
> hbase@hbase-backup-test-5:~> hdfs dfs -ls /user/hbase
> Found 37 items
> drwx--   - hbase hbase  0 2016-05-25 23:13 /user/hbase/.staging
> drwxr-xr-x   - hbase hbase  0 2016-05-24 19:42 
> /user/hbase/backup_1464047611132
> 
> drwxr-xr-x   - hbase hbase  0 2016-05-25 23:08 
> /user/hbase/backup_1464217727296
> drwxr-xr-x   - hbase hbase  0 2016-05-26 01:44 
> /user/hbase/backup_1464217940560
> {code}
> Backup root dir still exists
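> A minimal sketch of the expected cleanup, assuming the backup root path is already known; 
> this is illustrative, not the committed fix:
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> // Drop the (now empty) backup image directory after its per-table data has been removed.
> static void deleteBackupRoot(Configuration conf, String backupRootDir) throws java.io.IOException {
>   Path root = new Path(backupRootDir);          // e.g. /user/hbase/backup_1464217940560
>   FileSystem fs = root.getFileSystem(conf);
>   if (fs.exists(root)) {
>     fs.delete(root, true);                      // recursive delete
>   }
> }
> {code}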



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15330) HBase Backup/Restore Phase 3: support delete/truncate table

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-15330:
--
Fix Version/s: HBASE-7912

> HBase Backup/Restore Phase 3: support delete/truncate table
> ---
>
> Key: HBASE-15330
> URL: https://issues.apache.org/jira/browse/HBASE-15330
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
> Fix For: HBASE-7912
>
>
> Currently, we do not track delete/recreate or truncate table events



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15899) HBase incremental restore should handle namespaces properly

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-15899:
--
Affects Version/s: (was: 2.0.0)

> HBase incremental restore should handle namespaces properly
> ---
>
> Key: HBASE-15899
> URL: https://issues.apache.org/jira/browse/HBASE-15899
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: HBASE-7912
>
> Attachments: HBASE-15899-v1.patch
>
>
> HBase incremental restores seem to have problems with namespaces. As far as I 
> can tell, the incremental restore attempts to create invalid HDFS paths, which 
> leads to failure.
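> A minimal sketch of namespace-aware path construction, assuming the restore staging 
> layout mirrors the normal data layout; illustrative only, not the attached patch:
> {code}
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.util.FSUtils;
>
> // "ns1:table1" must map to <root>/data/ns1/table1, not <root>/data/default/ns1:table1.
> static Path restoreDirFor(Path restoreRootDir, String fullTableName) {
>   TableName tn = TableName.valueOf(fullTableName);   // parses the "namespace:qualifier" form
>   return FSUtils.getTableDir(restoreRootDir, tn);    // yields .../data/<namespace>/<qualifier>
> }
> {code}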



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15682) HBase Backup Phase 3: Possible data loss during incremental WAL files copy

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-15682:
--
Affects Version/s: (was: 2.0.0)

> HBase Backup Phase 3: Possible data loss during incremental WAL files copy
> --
>
> Key: HBASE-15682
> URL: https://issues.apache.org/jira/browse/HBASE-15682
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: HBASE-7912
>
> Attachments: HBASE-15682-v2.patch, HBASE-15682-v3.patch, 
> HBASE-15682-v4.patch
>
>
> We collect the list of files in the WALs and oldWALs directories and launch a DistCp 
> job. Some files can be moved from WALs to oldWALs by the RS while the job is 
> running, which can result in potential data loss.
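> A minimal sketch of one mitigation, re-resolving a WAL that disappeared from WALs/ into 
> oldWALs/ before giving up; the overall copy strategy here is an assumption, not the patch:
> {code}
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hbase.HConstants;
>
> // If the RS archived the file while the DistCp job was running, look for it under oldWALs.
> static Path resolveWal(FileSystem fs, Path rootDir, Path walPath) throws java.io.IOException {
>   if (fs.exists(walPath)) {
>     return walPath;
>   }
>   Path archived = new Path(new Path(rootDir, HConstants.HREGION_OLDLOGDIR_NAME), walPath.getName());
>   if (fs.exists(archived)) {
>     return archived;            // same file, just moved by the region server
>   }
>   throw new java.io.FileNotFoundException("WAL not found in WALs or oldWALs: " + walPath);
> }
> {code}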



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15862) Backup - Delete- Restore does not restore deleted data

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-15862:
--
Affects Version/s: (was: 2.0.0)

> Backup - Delete- Restore does not restore deleted data
> --
>
> Key: HBASE-15862
> URL: https://issues.apache.org/jira/browse/HBASE-15862
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: HBASE-7912
>
> Attachments: HBASE-15862-v1.patch, HBASE-15862-v2.patch, 
> HBASE-15862-v3.patch
>
>
> This was discovered during testing. If we delete a row after a full backup and 
> immediately perform a restore, the deleted row remains deleted. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16573) Backup restore into disabled table support

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-16573:
--
Fix Version/s: HBASE-7912

> Backup restore into disabled table support
> --
>
> Key: HBASE-16573
> URL: https://issues.apache.org/jira/browse/HBASE-16573
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: HBASE-7912
>
>
> If we restore into an existing table, the table must be disabled to block any 
> incoming requests.
> {code}
> java.lang.IllegalStateException: Cannot restore hbase table
> at 
> org.apache.hadoop.hbase.backup.util.RestoreServerUtil.restoreTableAndCreate(RestoreServerUtil.java:492)
> at 
> org.apache.hadoop.hbase.backup.util.RestoreServerUtil.fullRestoreTable(RestoreServerUtil.java:264)
> at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restoreImages(RestoreClientImpl.java:321)
> at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:219)
> at 
> org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:107)
> at 
> org.apache.hadoop.hbase.client.HBaseBackupAdmin.restore(HBaseBackupAdmin.java:411)
> at 
> org.apache.hadoop.hbase.backup.TestIncrementalBackup.TestIncBackupRestore(TestIncrementalBackup.java:171)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Caused by: org.apache.hadoop.hbase.TableNotFoundException: Table 
> ns1:table1_restore is not currently available.
> at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:347)
> at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:1097)
> at 
> org.apache.hadoop.hbase.backup.util.RestoreServerUtil.restoreTableAndCreate(RestoreServerUtil.java:487)
> {code}
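> A minimal sketch of an up-front state check, so the restore can fail with a clear message 
> instead of surfacing TableNotFoundException from deep inside LoadIncrementalHFiles; actually 
> bulk-loading into a disabled table is the support this issue asks for, and is not shown here:
> {code}
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Admin;
>
> // Hypothetical pre-check before the restore attempts the bulk load.
> static void checkRestoreTarget(Admin admin, TableName table) throws java.io.IOException {
>   if (!admin.tableExists(table)) {
>     return;                                    // restore will create the table itself
>   }
>   if (admin.isTableDisabled(table)) {
>     // Restore wants the table disabled to block client traffic, but LoadIncrementalHFiles
>     // currently rejects disabled tables, which is the gap tracked by this issue.
>     throw new java.io.IOException("Restore into disabled table " + table + " is not yet supported");
>   }
> }
> {code}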



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17133) Backup documentation update

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-17133:
--
Fix Version/s: HBASE-7912

> Backup documentation update
> ---
>
> Key: HBASE-17133
> URL: https://issues.apache.org/jira/browse/HBASE-17133
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
> Fix For: HBASE-7912
>
>
> We need to update the backup doc to sync it with the current implementation and 
> to add a section on current limitations:
> {quote}
> - if you write to the table with Durability.SKIP_WAL, your data will not be
> in the incremental backup (see the sketch below)
>  - if you bulk-load files, that data will not be in the incremental backup
> (HBASE-14417)
>  - the incremental backup will contain not only the data of the table you
> specified but also data from regions of other tables hosted on the same set
> of RSs (HBASE-14141) ...maybe a note about security around this topic
>  - the incremental backup will not contain just the "latest row" between
> backup A and B; it will also contain all the updates that occurred in
> between. But the restore does not allow you to restore up to a certain
> point in time; the restore will always be up to the "latest backup point".
>  - you should limit the number of "incrementals" to N (or maybe a SIZE limit), to
> avoid replay time becoming the bottleneck (HBASE-14135)
> {quote} 
> Update the command line tool section.
> Clarify the restore backup section.
> Add a section on the backup delete algorithm.
> Add a section on how the backup image dependency chain works.
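> A minimal illustration of the first limitation above; the table, family, and qualifier 
> names are made up:
> {code}
> import org.apache.hadoop.hbase.client.Durability;
> import org.apache.hadoop.hbase.client.Put;
> import org.apache.hadoop.hbase.client.Table;
> import org.apache.hadoop.hbase.util.Bytes;
>
> // This write bypasses the WAL, so a WAL-based incremental backup will never see it.
> static void skipWalPut(Table table) throws java.io.IOException {
>   Put put = new Put(Bytes.toBytes("row1"));
>   put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("value"));
>   put.setDurability(Durability.SKIP_WAL);
>   table.put(put);
> }
> {code}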



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17150) Verify restore logic (remote/local cluster)

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-17150:
--
Fix Version/s: HBASE-7912

> Verify restore logic (remote/local cluster)
> ---
>
> Key: HBASE-17150
> URL: https://issues.apache.org/jira/browse/HBASE-17150
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: HBASE-7912
>
>
> This part of the application is legacy code from the first version. 
> If the backup destination is the local cluster, then during restore we copy HFiles 
> into a local temp dir first. For a remote cluster we do not do this. It seems it 
> should be the other way around.
> {quote}
> What does this mean?
> 253 2016-11-17 14:13:39,782 DEBUG [main] util.RestoreServerUtil: File 
> hdfs://ve0524.halxg.cloudera.com:8020/user/stack/backup/backup_1479419995738/default/x_1/archive/data/default/x_1
>  on local cluster, back it up before restore
> Is this a full copy of the backup to elsewhere?
> 296 2016-11-17 14:13:47,907 DEBUG [main] util.RestoreServerUtil: Copied to 
> temporary path on local cluster: /user/stack/hbase-staging/restore
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15993) Regex support in table names

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-15993:
--
Fix Version/s: (was: 2.0.0)
   HBASE-7912

> Regex support in table names
> 
>
> Key: HBASE-15993
> URL: https://issues.apache.org/jira/browse/HBASE-15993
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>  Labels: backup
> Fix For: HBASE-7912
>
>
> Add support for regular expressions in table names for backup/restore/set 
> operations.
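> A minimal sketch of how a table-name regex could be expanded into a concrete table list 
> on the client side; the surrounding backup command wiring is an assumption:
> {code}
> import java.util.Arrays;
> import java.util.List;
> import java.util.regex.Pattern;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Admin;
>
> // Expand e.g. "ns1:usertable.*" into the matching tables before building the backup request.
> static List<TableName> expandTableRegex(Admin admin, String regex) throws java.io.IOException {
>   Pattern p = Pattern.compile(regex);
>   return Arrays.asList(admin.listTableNames(p));
> }
> {code}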



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15993) Regex support in table names

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-15993:
--
Affects Version/s: (was: 2.0.0)

> Regex support in table names
> 
>
> Key: HBASE-15993
> URL: https://issues.apache.org/jira/browse/HBASE-15993
> Project: HBase
>  Issue Type: Bug
>Reporter: Vladimir Rodionov
>  Labels: backup
> Fix For: HBASE-7912
>
>
> Add support for regular expressions in table names for backup/restore/set 
> operations.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14123) HBase Backup/Restore Phase 2

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14123:
--
Fix Version/s: (was: 2.0.0)
   HBASE-7912

> HBase Backup/Restore Phase 2
> 
>
> Key: HBASE-14123
> URL: https://issues.apache.org/jira/browse/HBASE-14123
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Blocker
> Fix For: HBASE-7912
>
> Attachments: 14123-master.v14.txt, 14123-master.v15.txt, 
> 14123-master.v16.txt, 14123-master.v17.txt, 14123-master.v18.txt, 
> 14123-master.v19.txt, 14123-master.v20.txt, 14123-master.v21.txt, 
> 14123-master.v24.txt, 14123-master.v25.txt, 14123-master.v27.txt, 
> 14123-master.v28.txt, 14123-master.v29.full.txt, 14123-master.v2.txt, 
> 14123-master.v30.txt, 14123-master.v31.txt, 14123-master.v32.txt, 
> 14123-master.v33.txt, 14123-master.v34.txt, 14123-master.v35.txt, 
> 14123-master.v36.txt, 14123-master.v37.txt, 14123-master.v38.txt, 
> 14123.master.v39.patch, 14123-master.v3.txt, 14123.master.v40.patch, 
> 14123.master.v41.patch, 14123.master.v42.patch, 14123.master.v44.patch, 
> 14123.master.v45.patch, 14123.master.v46.patch, 14123.master.v48.patch, 
> 14123.master.v49.patch, 14123.master.v50.patch, 14123.master.v51.patch, 
> 14123.master.v52.patch, 14123.master.v54.patch, 14123.master.v56.patch, 
> 14123.master.v57.patch, 14123-master.v5.txt, 14123-master.v6.txt, 
> 14123-master.v7.txt, 14123-master.v8.txt, 14123-master.v9.txt, 14123-v14.txt, 
> HBASE-14123-for-7912-v1.patch, HBASE-14123-for-7912-v6.patch, 
> HBASE-14123-v10.patch, HBASE-14123-v11.patch, HBASE-14123-v12.patch, 
> HBASE-14123-v13.patch, HBASE-14123-v15.patch, HBASE-14123-v16.patch, 
> HBASE-14123-v1.patch, HBASE-14123-v2.patch, HBASE-14123-v3.patch, 
> HBASE-14123-v4.patch, HBASE-14123-v5.patch, HBASE-14123-v6.patch, 
> HBASE-14123-v7.patch, HBASE-14123-v9.patch
>
>
> Phase 2 umbrella JIRA. See HBASE-7912 for design document and description. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14030) HBase Backup/Restore Phase 1

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14030:
--
Fix Version/s: (was: 2.0.0)
   HBASE-7912

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: HBASE-7912
>
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v10.patch, 
> HBASE-14030-v11.patch, HBASE-14030-v12.patch, HBASE-14030-v13.patch, 
> HBASE-14030-v14.patch, HBASE-14030-v15.patch, HBASE-14030-v17.patch, 
> HBASE-14030-v18.patch, HBASE-14030-v1.patch, HBASE-14030-v20.patch, 
> HBASE-14030-v21.patch, HBASE-14030-v22.patch, HBASE-14030-v23.patch, 
> HBASE-14030-v24.patch, HBASE-14030-v25.patch, HBASE-14030-v26.patch, 
> HBASE-14030-v27.patch, HBASE-14030-v28.patch, HBASE-14030-v2.patch, 
> HBASE-14030-v30.patch, HBASE-14030-v35.patch, hbase-14030_v36.patch, 
> HBASE-14030-v37.patch, HBASE-14030.v38.patch, HBASE-14030.v39.patch, 
> HBASE-14030-v3.patch, HBASE-14030.v40.patch, HBASE-14030.v41.patch, 
> HBASE-14030.v42.patch, HBASE-14030.v43.patch, HBASE-14030-v4.patch, 
> HBASE-14030-v5.patch, HBASE-14030-v6.patch, HBASE-14030-v7.patch, 
> HBASE-14030-v8.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15704) Refactoring: Move HFileArchiver from backup to tool package, remove backup.examples

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-15704:
--
Fix Version/s: (was: 2.0.0)

> Refactoring: Move HFileArchiver from backup to tool package, remove 
> backup.examples
> ---
>
> Key: HBASE-15704
> URL: https://issues.apache.org/jira/browse/HBASE-15704
> Project: HBase
>  Issue Type: Task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-15704-v2.patch, HBASE-15704-v3.patch
>
>
> This class is in the backup package (as well as the backup/examples classes) but is 
> not backup-related. Remove the examples classes from the codebase.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16037) Make automatic mode default one

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-16037:
--
Fix Version/s: (was: 2.0.0)
   HBASE-7912

> Make automatic mode default one
> ---
>
> Key: HBASE-16037
> URL: https://issues.apache.org/jira/browse/HBASE-16037
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: HBASE-7912
>
> Attachments: HBASE-16037-v1.patch, HBASE-16037-v2.patch
>
>
> By default, the restore operation runs with automatic off and overwrite off. This 
> is not what the user expects (point-in-time data restore).  
> When automatic is off, only the last backup image will be restored - the image 
> the user requested. With automatic on, the whole image dependency chain will be 
> restored, starting with the most recent full backup image, followed by the 
> incremental images (see the sketch below). This should be the default.
> When overwrite is off, we face issues similar to HBASE-15862 (Backup - delete 
> data - restore from backup results in missing data). Overwrite is off by 
> default.
> We need to make at least both of these modes the defaults. The question is, do we 
> need these exotic modes at all? What are the use cases for them?
> Please cast your vote, comments are welcome.
> cc: [~tedyu], [~jerryhe], [~enis], [~devaraj].
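> A rough sketch of what "automatic" restore means, assuming the image chain is already 
> known and ordered oldest-to-newest; the restoreImage() helper is hypothetical:
> {code}
> import java.util.List;
>
> // Restore the full image first, then replay every incremental image up to the requested one.
> static void restoreChain(List<String> imageChain) throws java.io.IOException {
>   // imageChain = [ full, inc1, inc2, ..., requested ]
>   for (String backupId : imageChain) {
>     restoreImage(backupId);   // hypothetical helper: restores a single backup image
>   }
> }
>
> static void restoreImage(String backupId) throws java.io.IOException {
>   // placeholder for the per-image restore step (bulk load of that image's HFiles)
> }
> {code}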



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14030) HBase Backup/Restore Phase 1

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14030:
--
Affects Version/s: (was: 2.0.0)

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v10.patch, 
> HBASE-14030-v11.patch, HBASE-14030-v12.patch, HBASE-14030-v13.patch, 
> HBASE-14030-v14.patch, HBASE-14030-v15.patch, HBASE-14030-v17.patch, 
> HBASE-14030-v18.patch, HBASE-14030-v1.patch, HBASE-14030-v20.patch, 
> HBASE-14030-v21.patch, HBASE-14030-v22.patch, HBASE-14030-v23.patch, 
> HBASE-14030-v24.patch, HBASE-14030-v25.patch, HBASE-14030-v26.patch, 
> HBASE-14030-v27.patch, HBASE-14030-v28.patch, HBASE-14030-v2.patch, 
> HBASE-14030-v30.patch, HBASE-14030-v35.patch, hbase-14030_v36.patch, 
> HBASE-14030-v37.patch, HBASE-14030.v38.patch, HBASE-14030.v39.patch, 
> HBASE-14030-v3.patch, HBASE-14030.v40.patch, HBASE-14030.v41.patch, 
> HBASE-14030.v42.patch, HBASE-14030.v43.patch, HBASE-14030-v4.patch, 
> HBASE-14030-v5.patch, HBASE-14030-v6.patch, HBASE-14030-v7.patch, 
> HBASE-14030-v8.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16978) Disable backup by default

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-16978:
--
Fix Version/s: (was: 2.0.0)
   HBASE-7912

> Disable backup by default
> -
>
> Key: HBASE-16978
> URL: https://issues.apache.org/jira/browse/HBASE-16978
> Project: HBase
>  Issue Type: Task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: HBASE-7912
>
> Attachments: HBASE-16978.addendum, HBASE-16978.addendum.2, 
> HBASE-16978.addendum.3, HBASE-16978-v1.patch, HBASE-16978-v2.patch
>
>
> Currently, we create the backup system table on Master start-up (if it does not 
> exist). In deployments where backup is not used this does not make sense. We 
> should create the system table only if backup is enabled, and backup should be 
> disabled by default.
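> A minimal sketch of the intended gating, assuming a boolean configuration switch; the 
> exact property name ("hbase.backup.enable") and the createSystemTable() helper are 
> assumptions, not the attached patch:
> {code}
> import org.apache.hadoop.conf.Configuration;
>
> // Only create the backup system table when the operator has explicitly enabled backup.
> static void maybeCreateBackupTable(Configuration conf) throws java.io.IOException {
>   boolean backupEnabled = conf.getBoolean("hbase.backup.enable", false);  // disabled by default
>   if (backupEnabled) {
>     createSystemTable();   // hypothetical helper that creates the backup system table
>   }
> }
>
> static void createSystemTable() throws java.io.IOException {
>   // placeholder for the actual table-creation logic in the master
> }
> {code}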



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17199) Back-port HBASE-17151 to HBASE-7912 branch

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-17199:
--
Fix Version/s: HBASE-7912

> Back-port HBASE-17151 to HBASE-7912 branch
> --
>
> Key: HBASE-17199
> URL: https://issues.apache.org/jira/browse/HBASE-17199
> Project: HBase
>  Issue Type: Task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: HBASE-7912
>
> Attachments: HBASE-17199.HBASE-7912.v2.patch, HBASE-17199-v1.patch
>
>
> HBASE-17151 introduces a new API to read HFiles without instantiating the block cache. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17270) Change Service to Task in BackupRestoreServerFactory

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-17270:
--
Affects Version/s: (was: HBASE-7912)

> Change Service to Task in BackupRestoreServerFactory
> 
>
> Key: HBASE-17270
> URL: https://issues.apache.org/jira/browse/HBASE-17270
> Project: HBase
>  Issue Type: Task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Trivial
> Fix For: HBASE-7912
>
> Attachments: HBASE-17270.HBASE-7912.v1.patch
>
>
> Small method name refactoring



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17270) Change Service to Task in BackupRestoreServerFactory

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-17270:
--
Fix Version/s: (was: 2.0.0)
   HBASE-7912

> Change Service to Task in BackupRestoreServerFactory
> 
>
> Key: HBASE-17270
> URL: https://issues.apache.org/jira/browse/HBASE-17270
> Project: HBase
>  Issue Type: Task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Trivial
> Fix For: HBASE-7912
>
> Attachments: HBASE-17270.HBASE-7912.v1.patch
>
>
> Small method name refactoring



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16255) Backup/Restore IT

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-16255:
--
Fix Version/s: HBASE-7912

> Backup/Restore IT
> -
>
> Key: HBASE-16255
> URL: https://issues.apache.org/jira/browse/HBASE-16255
> Project: HBase
>  Issue Type: Task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: HBASE-7912
>
> Attachments: 16255.addendum, 16255.addendum2, 16255-addendum.3.txt, 
> 16255.addendum4, 16255.addendum5, 16255.addendum6, backup-it-7912-8-30.out, 
> backup-it-8-30.out, backup-it-success.out, HBASE-16255-v1.patch, 
> HBASE-16255-v2.patch, HBASE-16255-v3.patch, HBASE-16255-v4.patch, 
> HBASE-16255-v5.patch, HBASE-16255-v6.patch
>
>
> Integration test for backup restore.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16727) Backup refactoring: remove MR dependencies from HMaster

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-16727:
--
Fix Version/s: HBASE-7912

> Backup refactoring: remove MR dependencies from HMaster
> ---
>
> Key: HBASE-16727
> URL: https://issues.apache.org/jira/browse/HBASE-16727
> Project: HBase
>  Issue Type: Task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: HBASE-7912
>
> Attachments: HBASE-16727-v1.patch, HBASE-16727-v2.patch
>
>
> * No MR jobs in HMaster
> * No proc2 implementation
> * client-driven backup-restore
> * basic security: only super user is allowed to run backup/restore



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15988) User needs to initiate full backup for new table(s) being added for incremental backup

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-15988:
--
Affects Version/s: (was: 2.0.0)

> User needs to initiate full backup for new table(s) being added for 
> incremental backup
> --
>
> Key: HBASE-15988
> URL: https://issues.apache.org/jira/browse/HBASE-15988
> Project: HBase
>  Issue Type: Task
>Reporter: Vladimir Rodionov
>Assignee: Ted Yu
>  Labels: backup
> Fix For: HBASE-7912
>
> Attachments: 15988.v1.txt, 15988.v2.txt, 15988.v3.txt, 15988.v4.txt
>
>
> When a new table is added to a backup table set, the first backup involving the 
> new table should be a full backup.
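> A minimal sketch of the check, assuming we can look up which tables the last full 
> backup covered; the coveredByFullBackup() helper is hypothetical:
> {code}
> import java.util.List;
> import org.apache.hadoop.hbase.TableName;
>
> // Reject (or upgrade to full) an incremental backup request that includes a table
> // which has never been part of a full backup.
> static void validateIncrementalRequest(List<TableName> requested) throws java.io.IOException {
>   for (TableName table : requested) {
>     if (!coveredByFullBackup(table)) {
>       throw new java.io.IOException("Table " + table
>           + " has no full backup yet; run a full backup before an incremental one");
>     }
>   }
> }
>
> static boolean coveredByFullBackup(TableName table) {
>   return false;   // placeholder: query the backup system table for the last full backup's table set
> }
> {code}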



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16181) Allow snapshot of hbase:backup table

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-16181:
--
Fix Version/s: HBASE-7912

> Allow snapshot of hbase:backup table
> 
>
> Key: HBASE-16181
> URL: https://issues.apache.org/jira/browse/HBASE-16181
> Project: HBase
>  Issue Type: Task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: HBASE-7912
>
> Attachments: HBASE-16181-v1.patch, HBASE-16181-v2.patch
>
>
> Snapshots of HBase system tables are not supported; we need to either move 
> hbase:backup into a different namespace or fix snapshots.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15988) User needs to initiate full backup for new table(s) being added for incremental backup

2017-02-08 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-15988:
--
Fix Version/s: (was: 2.0.0)
   HBASE-7912

> User needs to initiate full backup for new table(s) being added for 
> incremental backup
> --
>
> Key: HBASE-15988
> URL: https://issues.apache.org/jira/browse/HBASE-15988
> Project: HBase
>  Issue Type: Task
>Reporter: Vladimir Rodionov
>Assignee: Ted Yu
>  Labels: backup
> Fix For: HBASE-7912
>
> Attachments: 15988.v1.txt, 15988.v2.txt, 15988.v3.txt, 15988.v4.txt
>
>
> When a new table is added to a backup table set, the first backup involving the 
> new table should be a full backup.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

