[jira] [Commented] (HBASE-16283) Batch Append/Increment will always fail if set ReturnResults to false

2016-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393255#comment-15393255
 ] 

Hadoop QA commented on HBASE-16283:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 58s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 100m 13s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 149m 2s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.wal.TestDurability |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820084/HBASE-16283.patch |
| JIRA Issue | HBASE-16283 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / abfd584 |
| Default Java | 1.7.0_80 |
| Multi-JDK versions |  /usr/local/jenkins/java/jdk1.8.0:1.8.0 
/home/jenkins/jenkins-slave/tools/hudson.model.JDK/JDK_1.7_latest_:1.7.0_80 |
| findbugs | v3.0.0 |

[jira] [Commented] (HBASE-15536) Make AsyncFSWAL as our default WAL

2016-07-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393252#comment-15393252
 ] 

stack commented on HBASE-15536:
---

Ugh. That'd put a stake in the offheaping effort if we had to copy onheap just 
to write the WAL!

> Make AsyncFSWAL as our default WAL
> --
>
> Key: HBASE-15536
> URL: https://issues.apache.org/jira/browse/HBASE-15536
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15536-v1.patch, HBASE-15536-v2.patch, 
> HBASE-15536-v3.patch, HBASE-15536-v4.patch, HBASE-15536-v5.patch, 
> HBASE-15536.patch
>
>
> As it should be predicated on passing basic cluster ITBLL



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16260) Audit dependencies for Category-X

2016-07-25 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393244#comment-15393244
 ] 

Nick Dimiduk commented on HBASE-16260:
--

bq. I've been unable to prioritize this issue enough given its impact on the 
project.

Thanks for making an effort [~busbey]!

bq. Would revert of HBASE-15122 help?

Looks like we'll also need to pop off HBASE-15270, as it makes further use of 
the introduced esapi dependency. For an immediate solution, yes, they revert 
cleanly and doing so removes the dependencies esapi and beanshell from the 
output of dependency:tree. This doesn't help with the larger issue though.

I suggest we move forward with the revert, downgrade this issue from blocker, 
and free up the RMs. I looked briefly at the rat module source code; it appears 
to be designed only to enforce the presence of approved headers in distributed 
files. I can find nothing about checking metadata on dependencies. Are we 
reduced to consuming the DEPENDENCIES report mentioned earlier? Maybe 
[~busbey] knows more voodoo than I...

> Audit dependencies for Category-X
> -
>
> Key: HBASE-16260
> URL: https://issues.apache.org/jira/browse/HBASE-16260
> Project: HBase
>  Issue Type: Task
>  Components: community, dependencies
>Affects Versions: 2.0.0, 1.2.0, 1.3.0, 1.2.1, 1.1.4, 1.0.4, 1.1.5, 1.2.2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.0.0, 1.1.6, 1.2.3
>
>
> Make sure we do not have Category-X dependencies.
> Right now we at least have an LGPL dependency on xom:xom (thanks to 
> PHOENIX-3103 for the catch).





[jira] [Updated] (HBASE-16284) Unauthorized client can shutdown the cluster

2016-07-25 Thread DeokWoo Han (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DeokWoo Han updated HBASE-16284:

Attachment: (was: HBASE-16284.patch)

> Unauthorized client can shutdown the cluster
> 
>
> Key: HBASE-16284
> URL: https://issues.apache.org/jira/browse/HBASE-16284
> Project: HBase
>  Issue Type: Bug
>Reporter: DeokWoo Han
> Attachments: HBASE-16284.patch
>
>
> An unauthorized client can shutdown the cluster as {{AccessDeniedException}} 
> is ignored during {{Admin.stopMaster}} and {{Admin.shutdown}}.





[jira] [Updated] (HBASE-16284) Unauthorized client can shutdown the cluster

2016-07-25 Thread DeokWoo Han (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DeokWoo Han updated HBASE-16284:

Attachment: HBASE-16284.patch

> Unauthorized client can shutdown the cluster
> 
>
> Key: HBASE-16284
> URL: https://issues.apache.org/jira/browse/HBASE-16284
> Project: HBase
>  Issue Type: Bug
>Reporter: DeokWoo Han
> Attachments: HBASE-16284.patch
>
>
> An unauthorized client can shutdown the cluster as {{AccessDeniedException}} 
> is ignored during {{Admin.stopMaster}} and {{Admin.shutdown}}.






[jira] [Updated] (HBASE-16284) Unauthorized client can shutdown the cluster

2016-07-25 Thread DeokWoo Han (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DeokWoo Han updated HBASE-16284:

Status: Patch Available  (was: Open)

> Unauthorized client can shutdown the cluster
> 
>
> Key: HBASE-16284
> URL: https://issues.apache.org/jira/browse/HBASE-16284
> Project: HBase
>  Issue Type: Bug
>Reporter: DeokWoo Han
> Attachments: HBASE-16284.patch
>
>
> An unauthorized client can shutdown the cluster as {{AccessDeniedException}} 
> is ignored during {{Admin.stopMaster}} and {{Admin.shutdown}}.





[jira] [Commented] (HBASE-15536) Make AsyncFSWAL as our default WAL

2016-07-25 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393219#comment-15393219
 ] 

ramkrishna.s.vasudevan commented on HBASE-15536:


For the write path offheap work, we were able to make AsyncFSWAL work without 
needing to copy the offheap cells onheap before writing to the WAL. This is 
because org.apache.hadoop.hbase.io.ByteBufferSupportOutputStream supports 
writing directly from offheap cells to the ByteBuffer-backed output stream. 
One more reason to make AsyncWAL the default in 2.0.
Without this, the offheap cells have to be brought onheap and then flushed to 
the WAL OutputStream, which generates a lot of garbage.
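The copy-avoidance being described can be sketched as follows. This is an 
illustrative stand-in, not the actual ByteBufferSupportOutputStream: a helper 
that writes an onheap buffer's backing array with no copy, and only falls back 
to an onheap copy (the garbage being avoided) for an offheap buffer it cannot 
hand off.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;

// Illustrative stand-in for the idea behind ByteBufferSupportOutputStream:
// a writer that can consume a ByteBuffer directly. An onheap buffer's
// backing array is written with no copy; an offheap (direct) buffer hitting
// a stream with no ByteBuffer support must be copied onheap first -- exactly
// the garbage the comment wants to avoid on the WAL write path.
public class ByteBufferAwareStream {

  static void write(OutputStream out, ByteBuffer buf) {
    try {
      if (buf.hasArray()) {
        // Onheap: hand the backing array to the stream, no copy.
        out.write(buf.array(), buf.arrayOffset() + buf.position(), buf.remaining());
      } else {
        // Offheap fallback: forced onheap copy.
        byte[] copy = new byte[buf.remaining()];
        buf.duplicate().get(copy);
        out.write(copy);
      }
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  public static void main(String[] args) {
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    write(sink, ByteBuffer.wrap("cell".getBytes()));
    ByteBuffer direct = ByteBuffer.allocateDirect(4);
    direct.put("data".getBytes());
    direct.flip();
    write(sink, direct);
    System.out.println(sink.toString());
  }
}
```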

> Make AsyncFSWAL as our default WAL
> --
>
> Key: HBASE-15536
> URL: https://issues.apache.org/jira/browse/HBASE-15536
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15536-v1.patch, HBASE-15536-v2.patch, 
> HBASE-15536-v3.patch, HBASE-15536-v4.patch, HBASE-15536-v5.patch, 
> HBASE-15536.patch
>
>
> As it should be predicated on passing basic cluster ITBLL





[jira] [Created] (HBASE-16284) Unauthorized client can shutdown the cluster

2016-07-25 Thread DeokWoo Han (JIRA)
DeokWoo Han created HBASE-16284:
---

 Summary: Unauthorized client can shutdown the cluster
 Key: HBASE-16284
 URL: https://issues.apache.org/jira/browse/HBASE-16284
 Project: HBase
  Issue Type: Bug
Reporter: DeokWoo Han


An unauthorized client can shutdown the cluster as {{AccessDeniedException}} is 
ignored during {{Admin.stopMaster}} and {{Admin.shutdown}}.
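The bug pattern the report describes can be sketched as follows; the class and 
method names here are illustrative, not the actual HBase code.

```java
// Illustrative sketch (not the actual HBase code) of the reported bug:
// a shutdown path that swallows every exception, including the
// authorization failure that should have aborted the operation.
public class SwallowedAccessDenied {

  static class AccessDeniedException extends RuntimeException {}

  static boolean masterStopped = false;

  static void checkPermission(boolean authorized) {
    if (!authorized) {
      throw new AccessDeniedException();
    }
  }

  // Buggy shape: the catch block treats AccessDeniedException like a benign
  // teardown error, so the stop proceeds for any caller.
  static void stopMasterBuggy(boolean authorized) {
    try {
      checkPermission(authorized);
    } catch (Exception ignored) {
      // swallowed -- an unauthorized caller falls through
    }
    masterStopped = true;
  }

  // Fixed shape: let the authorization failure propagate to the caller.
  static void stopMasterFixed(boolean authorized) {
    checkPermission(authorized); // AccessDeniedException escapes here
    masterStopped = true;
  }
}
```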





[jira] [Commented] (HBASE-12770) Don't transfer all the queued hlogs of a dead server to the same alive server

2016-07-25 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393228#comment-15393228
 ] 

Duo Zhang commented on HBASE-12770:
---

Fine. I have no concerns then.

[~apurtell] Ping.

> Don't transfer all the queued hlogs of a dead server to the same alive server
> -
>
> Key: HBASE-12770
> URL: https://issues.apache.org/jira/browse/HBASE-12770
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Jianwei Cui
>Assignee: Phil Yang
>Priority: Minor
> Attachments: HBASE-12770-trunk.patch, HBASE-12770-v1.patch
>
>
> When a region server goes down (or the cluster restarts), all of its hlog 
> queues are transferred to the same alive region server. In a shared cluster, 
> we might create several peers replicating data to different peer clusters, 
> and lots of hlogs can be queued for these peers for several reasons: some 
> peers might be disabled, errors from a peer cluster might stall replication, 
> or the replication sources may fail to read some hlog because of an hdfs 
> problem. If the server is then down or restarted, another alive server takes 
> over all the replication jobs of the dead server, which can put a lot of 
> pressure on the alive server's resources (network/disk read) and is also not 
> fast enough to replicate the queued hlogs. And if that alive server goes 
> down, all of its replication jobs, including those taken over from other 
> dead servers, are once again transferred wholesale to yet another alive 
> server; this can leave one server with a huge number of queued hlogs (in our 
> shared cluster, we have seen one server with thousands of hlogs queued for 
> replication). As an alternative, is it reasonable for an alive server to 
> take over only one peer's hlogs from the dead server at a time? Other alive 
> region servers would then have the opportunity to take over the hlogs of the 
> remaining peers, which should also help the queued hlogs be processed 
> faster. Any discussion is welcome.
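The proposal at the end of the description can be sketched as follows, with 
illustrative names: each failover claim takes only one peer's queues, leaving 
the rest for other region servers.

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Sketch of the proposal: instead of one survivor claiming every replication
// queue of a dead server, each claim takes the queues of a single peer and
// leaves the rest for other region servers. Names are illustrative.
public class QueueFailover {

  // Given the dead server's hlog queues keyed by peer id, claim one peer's
  // queues and remove them from the map; other peers remain claimable.
  static List<String> claimOnePeer(Map<String, List<String>> deadServerQueues) {
    Iterator<Map.Entry<String, List<String>>> it =
        deadServerQueues.entrySet().iterator();
    if (!it.hasNext()) {
      return Collections.emptyList();
    }
    List<String> hlogs = it.next().getValue();
    it.remove();
    return hlogs;
  }
}
```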





[jira] [Updated] (HBASE-9465) Push entries to peer clusters serially

2016-07-25 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-9465:
-
Attachment: HBASE-9465-v2.patch

Retry.

> Push entries to peer clusters serially
> --
>
> Key: HBASE-9465
> URL: https://issues.apache.org/jira/browse/HBASE-9465
> Project: HBase
>  Issue Type: New Feature
>  Components: regionserver, Replication
>Reporter: Honghua Feng
>Assignee: Phil Yang
> Attachments: HBASE-9465-v1.patch, HBASE-9465-v2.patch, 
> HBASE-9465-v2.patch, HBASE-9465.pdf
>
>
> When a region move or RS failure occurs in the master cluster, the hlog 
> entries that were not pushed before the event are pushed by the original RS 
> (for a region move) or by another RS that takes over the remaining hlog of 
> the dead RS (for an RS failure), while the new entries for the same 
> region(s) are pushed by the RS that now serves them. These sources push the 
> hlog entries of the same region concurrently, without coordination.
> This can lead to data inconsistency between the master and peer clusters:
> 1. a put and then a delete are written to the master cluster
> 2. due to a region move / RS failure, they are pushed to the peer cluster 
> by different replication-source threads
> 3. if the delete reaches the peer cluster before the put, and a flush and 
> major compaction occur in the peer cluster before the put arrives, the 
> delete is collected and the put remains in the peer cluster
> In this scenario the put remains in the peer cluster, while in the master 
> cluster it is masked by the delete, hence the data inconsistency between 
> master and peer clusters.
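A toy model of the three-step scenario in the description, purely illustrative 
and not HBase code:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the scenario: a put and a delete for the same row, replayed
// out of order on the peer, with a major compaction in between.
public class OutOfOrderReplay {

  // Peer-side state: live cells plus delete markers, as in an LSM store.
  static Map<String, String> cells = new HashMap<>();
  static Map<String, Boolean> deleteMarkers = new HashMap<>();

  static void applyPut(String row, String value) {
    cells.put(row, value);
  }

  static void applyDelete(String row) {
    deleteMarkers.put(row, Boolean.TRUE);
  }

  // Major compaction collects masked cells and then drops the markers.
  static void majorCompact() {
    for (String row : deleteMarkers.keySet()) {
      cells.remove(row);
    }
    deleteMarkers.clear();
  }

  static boolean visible(String row) {
    return cells.containsKey(row) && !deleteMarkers.containsKey(row);
  }
}
```

On the master the delete masks the put, so the row is gone; on the peer the 
delete arrived first, was compacted away, and the late put survives.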





[jira] [Commented] (HBASE-14921) Memory optimizations

2016-07-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393204#comment-15393204
 ] 

stack commented on HBASE-14921:
---

[~anoop.hbase] What do you think of [~ebortnik]'s proposal?

I just want to reiterate that it is a priority that the 80% case, the case 
where we do not have many Cell overlaps/duplicates, must not suffer when we 
add this feature. A temporary performance regression is fine; we can live with 
that in the master branch, but it cannot go unaddressed. Just saying.

> Memory optimizations
> 
>
> Key: HBASE-14921
> URL: https://issues.apache.org/jira/browse/HBASE-14921
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Anastasia Braginsky
> Attachments: CellBlocksSegmentInMemStore.pdf, 
> CellBlocksSegmentinthecontextofMemStore(1).pdf, HBASE-14921-V01.patch, 
> HBASE-14921-V02.patch, HBASE-14921-V03.patch, HBASE-14921-V04-CA-V02.patch, 
> HBASE-14921-V04-CA.patch, HBASE-14921-V05-CAO.patch, 
> HBASE-14921-V06-CAO.patch, InitialCellArrayMapEvaluation.pdf, 
> IntroductiontoNewFlatandCompactMemStore.pdf
>
>
> Memory optimizations including compressed format representation and offheap 
> allocations





[jira] [Commented] (HBASE-14921) Memory optimizations

2016-07-25 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393193#comment-15393193
 ] 

Anoop Sam John commented on HBASE-14921:


Ya [~anastas], those are the 2 points I raised. I wanted to make sure they 
were highlighted early rather than raised after the fact, and I don't mean 
they should all be addressed in a single jira. This is already a big patch and 
there is no need to pile on more work. That is the common practice we follow: 
when a review comment needs more work, the developer can suggest doing it 
later as part of another jira, and reviewers mostly agree. I am fine with 
that; sorry I did not say so explicitly, it is such a habit for us that I 
missed it.
It is your call. I am OK with getting the patch in in this form too; we can 
always make things better afterwards.
The cost of scan may be higher once we have CellChunkMap. In the flattened 
form we get rid of Cell objects, but a scan then has to recreate those objects 
for us. With CellArrayMap the overhead mostly comes from SQM and the 
StoreScanner heap; with CellChunkMap it is larger. That is why I raised it 
early. But yes, we can also do that later. It's ok.

> Memory optimizations
> 
>
> Key: HBASE-14921
> URL: https://issues.apache.org/jira/browse/HBASE-14921
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Anastasia Braginsky
> Attachments: CellBlocksSegmentInMemStore.pdf, 
> CellBlocksSegmentinthecontextofMemStore(1).pdf, HBASE-14921-V01.patch, 
> HBASE-14921-V02.patch, HBASE-14921-V03.patch, HBASE-14921-V04-CA-V02.patch, 
> HBASE-14921-V04-CA.patch, HBASE-14921-V05-CAO.patch, 
> HBASE-14921-V06-CAO.patch, InitialCellArrayMapEvaluation.pdf, 
> IntroductiontoNewFlatandCompactMemStore.pdf
>
>
> Memory optimizations including compressed format representation and offheap 
> allocations





[jira] [Commented] (HBASE-16234) Expect and handle nulls when assigning replicas

2016-07-25 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393169#comment-15393169
 ] 

Heng Chen commented on HBASE-16234:
---

Patch looks good to me. 

> Expect and handle nulls when assigning replicas
> ---
>
> Key: HBASE-16234
> URL: https://issues.apache.org/jira/browse/HBASE-16234
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 2.0.0
>Reporter: Harsh J
>Assignee: Yi Liang
> Attachments: HBASE-16234-V1.patch
>
>
> Observed this on a cluster:
> {code}
> FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting 
> shutdown. 
> java.lang.NullPointerException 
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.replicaRegionsNotRecordedInMeta(AssignmentManager.java:2799)
>  
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assignAllUserRegions(AssignmentManager.java:2778)
>  
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.processDeadServersAndRegionsInTransition(AssignmentManager.java:638)
>  
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:485)
>  
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:723)
>  
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:169) 
> at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1481) 
> at java.lang.Thread.run(Thread.java:745) 
> {code}
> It looks like {{FSTableDescriptors#get(…)}} can be expected to return null in 
> some cases, but {{AssignmentManager.replicaRegionsNotRecordedInMeta(…)}} does 
> not currently have any handling for such a possibility.
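The defensive handling the report implies might look like the following 
sketch; the lookup and method names are illustrative stand-ins, not the actual 
patch.

```java
// Hedged sketch of the defensive fix the report implies: treat a null table
// descriptor as "no replica information" instead of dereferencing it.
public class NullSafeReplicas {

  // Stand-in for a descriptor lookup that, as the report notes for
  // FSTableDescriptors#get(...), can return null in some cases.
  static Integer lookupRegionReplication(String table) {
    return null;
  }

  // Fall back to a single replica rather than letting a
  // NullPointerException take down master initialization.
  static int regionReplication(String table) {
    Integer fromDescriptor = lookupRegionReplication(table);
    return fromDescriptor == null ? 1 : fromDescriptor;
  }
}
```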





[jira] [Updated] (HBASE-16280) Use hash based map in SequenceIdAccounting

2016-07-25 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-16280:
--
Attachment: HBASE-16280-branch-1.patch

Patch for branch-1.

> Use hash based map in SequenceIdAccounting
> --
>
> Key: HBASE-16280
> URL: https://issues.apache.org/jira/browse/HBASE-16280
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HBASE-16280-branch-1.patch, HBASE-16280-v1.patch, 
> HBASE-16280.patch
>
>
> Its update method is on the write path.
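The one-line description points at a trade-off: the accounting map is touched 
on every write, so an O(1) hash map is preferable to an O(log n) sorted map 
when that hot path does not need key ordering. A hedged sketch, with 
illustrative names rather than the actual SequenceIdAccounting fields:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ConcurrentSkipListMap;

// Sketch of the trade-off: the accounting map is consulted on every write,
// so an O(1) hash map beats an O(log n) sorted map when the hot path does
// not need key ordering. Names are illustrative.
public class SequenceIdMapChoice {

  // Hot path: record the lowest unflushed sequence id for a region.
  static long update(ConcurrentMap<String, Long> lowestUnflushed,
      String encodedRegionName, long seqId) {
    lowestUnflushed.putIfAbsent(encodedRegionName, seqId);
    return lowestUnflushed.get(encodedRegionName);
  }

  public static void main(String[] args) {
    // Same API either way; only the per-operation cost differs.
    System.out.println(update(new ConcurrentSkipListMap<>(), "r1", 42L));
    System.out.println(update(new ConcurrentHashMap<>(), "r1", 42L));
  }
}
```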





[jira] [Commented] (HBASE-16281) TestMasterReplication is flaky

2016-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393166#comment-15393166
 ] 

Hudson commented on HBASE-16281:


SUCCESS: Integrated in HBase-1.2-IT #563 (See 
[https://builds.apache.org/job/HBase-1.2-IT/563/])
HBASE-16281 TestMasterReplication is flaky (zhangduo: rev 
a82ca4a8229d2b9ebf1bf124747ff47a3db51860)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMasterReplication.java


> TestMasterReplication is flaky
> --
>
> Key: HBASE-16281
> URL: https://issues.apache.org/jira/browse/HBASE-16281
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 0.98.20, 1.1.6
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21, 1.2.3, 1.1.7
>
> Attachments: HBASE-16281-v1.patch, HBASE-16281-v1.patch, 
> HBASE-16281-v1.patch
>
>
> In TestMasterReplication we put some mutations and wait until we can read 
> the data from the slave cluster. However, the waiting time is too short: the 
> replication service in the slave cluster may not be initialized and ready to 
> handle replication RPC requests within a few seconds.
> We should wait longer.
> {quote}
> 2016-07-25 11:47:03,156 WARN  [Time-limited 
> test-EventThread.replicationSource,1.replicationSource.10.235.114.28%2C56313%2C1469418386448,1]
>  regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate 
> because of a local or network error: 
> java.io.IOException: java.io.IOException: Replication services are not 
> initialized yet
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2263)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:118)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:189)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:169)
> Caused by: com.google.protobuf.ServiceException: Replication services are not 
> initialized yet
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:1935)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22751)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2212)
>   ... 3 more
> {quote}
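The remedy described ("wait for more time") usually takes the shape of a 
deadline-bounded poll rather than a short fixed sleep. A generic sketch, not 
the actual test code; the condition stands in for "the slave cluster returned 
the row".

```java
import java.util.function.BooleanSupplier;

// Generic sketch: poll for the replicated data under a generous deadline
// instead of sleeping a short fixed time.
public class WaitForReplication {

  // Returns true once the condition holds, false if the deadline passes.
  static boolean waitFor(BooleanSupplier condition, long timeoutMs, long intervalMs) {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!condition.getAsBoolean()) {
      if (System.currentTimeMillis() >= deadline) {
        return false;
      }
      try {
        Thread.sleep(intervalMs);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return false;
      }
    }
    return true;
  }
}
```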





[jira] [Commented] (HBASE-14743) Add metrics around HeapMemoryManager

2016-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393165#comment-15393165
 ] 

Hudson commented on HBASE-14743:


FAILURE: Integrated in HBase-Trunk_matrix #1296 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1296/])
Revert HBASE-14743 because of wrong attribution. Since I added commit (appy: 
rev eff38ccf8cf9c61f1bda1005bd19b58c960e3fd2)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HeapMemoryManager.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsHeapMemoryManagerSourceImpl.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsHeapMemoryManagerSource.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMetricsHeapMemoryManager.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceFactoryImpl.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceFactory.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsHeapMemoryManager.java
HBASE-14743 Add metrics around HeapMemoryManager. (Reid Chan) (appy: rev 
abfd584fe646951a9d0b43602052bbe2b82c3364)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsHeapMemoryManager.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HeapMemoryManager.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMetricsHeapMemoryManager.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceFactory.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceFactoryImpl.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsHeapMemoryManagerSource.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsHeapMemoryManagerSourceImpl.java


> Add metrics around HeapMemoryManager
> 
>
> Key: HBASE-14743
> URL: https://issues.apache.org/jira/browse/HBASE-14743
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Reid Chan
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14743.009.patch, HBASE-14743.009.rw3.patch, 
> HBASE-14743.009.v2.patch, HBASE-14743.010.patch, HBASE-14743.010.v2.patch, 
> HBASE-14743.011.patch, Metrics snapshot 2016-6-30.png, Screen Shot 2016-06-16 
> at 5.39.13 PM.png, test2_1.png, test2_2.png, test2_3.png, test2_4.png
>
>
> it would be good to know how many invocations there have been.
> How many decided to expand memstore.
> How many decided to expand block cache.
> How many decided to do nothing.
> etc.
> When that's done use those metrics to clean up the tests.





[jira] [Commented] (HBASE-12770) Don't transfer all the queued hlogs of a dead server to the same alive server

2016-07-25 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393154#comment-15393154
 ] 

Phil Yang commented on HBASE-12770:
---

{quote}
Suggest adding some jitter of the sleep time between the attempt of claiming a 
queue.
{quote}
We have a random sleep time at the start of all claiming, so each RS has a 
different start time here. Is that enough?

{quote}
 is it safe to sleep in ReplicationSourceManager
{quote}
The transferring logic is in NodeFailoverWorker, which runs in its own thread 
when a RS goes offline, submitted to a thread pool whose default worker count 
is 1 (configured by replication.executor.workers), so I think it is safe to 
sleep here. 
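The jitter being discussed can be sketched as a base delay plus a random 
fraction, so region servers racing to claim the same queue spread their 
attempts out. Names and the jitter shape are illustrative, not the actual 
HBase code.

```java
import java.util.concurrent.ThreadLocalRandom;

// Illustrative jitter helper: a base delay plus a random fraction, spreading
// out region servers that race to claim the same replication queue.
public class ClaimJitter {

  // Delay in [baseMs, baseMs * (1 + jitterFraction)).
  static long jitteredDelayMs(long baseMs, double jitterFraction) {
    double jitter = ThreadLocalRandom.current().nextDouble() * jitterFraction;
    return baseMs + (long) (baseMs * jitter);
  }
}
```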

> Don't transfer all the queued hlogs of a dead server to the same alive server
> -
>
> Key: HBASE-12770
> URL: https://issues.apache.org/jira/browse/HBASE-12770
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Jianwei Cui
>Assignee: Phil Yang
>Priority: Minor
> Attachments: HBASE-12770-trunk.patch, HBASE-12770-v1.patch
>
>
> When a region server goes down (or the cluster restarts), all of its hlog 
> queues are transferred to the same alive region server. In a shared cluster, 
> we might create several peers replicating data to different peer clusters, 
> and lots of hlogs can be queued for these peers for several reasons: some 
> peers might be disabled, errors from a peer cluster might stall replication, 
> or the replication sources may fail to read some hlog because of an hdfs 
> problem. If the server is then down or restarted, another alive server takes 
> over all the replication jobs of the dead server, which can put a lot of 
> pressure on the alive server's resources (network/disk read) and is also not 
> fast enough to replicate the queued hlogs. And if that alive server goes 
> down, all of its replication jobs, including those taken over from other 
> dead servers, are once again transferred wholesale to yet another alive 
> server; this can leave one server with a huge number of queued hlogs (in our 
> shared cluster, we have seen one server with thousands of hlogs queued for 
> replication). As an alternative, is it reasonable for an alive server to 
> take over only one peer's hlogs from the dead server at a time? Other alive 
> region servers would then have the opportunity to take over the hlogs of the 
> remaining peers, which should also help the queued hlogs be processed 
> faster. Any discussion is welcome.





[jira] [Commented] (HBASE-16281) TestMasterReplication is flaky

2016-07-25 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393146#comment-15393146
 ] 

Phil Yang commented on HBASE-16281:
---

Thanks [~Apache9] and [~carp84] for reviewing :)

> TestMasterReplication is flaky
> --
>
> Key: HBASE-16281
> URL: https://issues.apache.org/jira/browse/HBASE-16281
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 0.98.20, 1.1.6
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21, 1.2.3, 1.1.7
>
> Attachments: HBASE-16281-v1.patch, HBASE-16281-v1.patch, 
> HBASE-16281-v1.patch
>
>
> In TestMasterReplication we put some mutations and wait until we can read 
> the data from the slave cluster. However, the waiting time is too short: the 
> replication service in the slave cluster may not be initialized and ready to 
> handle replication RPC requests within a few seconds.
> We should wait longer.
> {quote}
> 2016-07-25 11:47:03,156 WARN  [Time-limited 
> test-EventThread.replicationSource,1.replicationSource.10.235.114.28%2C56313%2C1469418386448,1]
>  regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate 
> because of a local or network error: 
> java.io.IOException: java.io.IOException: Replication services are not 
> initialized yet
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2263)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:118)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:189)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:169)
> Caused by: com.google.protobuf.ServiceException: Replication services are not 
> initialized yet
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:1935)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22751)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2212)
>   ... 3 more
> {quote}
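The proposed fix, waiting longer, usually takes the form of a bounded poll loop rather than a single long sleep. A minimal JDK-only sketch of such a helper (the name `waitUntil` and the timings are illustrative, not actual HBase test utilities):

```java
import java.util.function.BooleanSupplier;

public class ReplicationWait {
    /**
     * Polls the condition every intervalMs until it returns true or
     * timeoutMs elapses. Returns true if the condition was met in time.
     */
    public static boolean waitUntil(BooleanSupplier condition,
                                    long timeoutMs, long intervalMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;  // e.g. the row became visible on the slave
            }
            Thread.sleep(intervalMs);
        }
        return condition.getAsBoolean();  // one last check at the deadline
    }
}
```

A generous timeout costs nothing when the cluster is fast (the loop exits on the first successful check) but tolerates a slave cluster whose replication services take long to initialize.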





[jira] [Updated] (HBASE-16234) Expect and handle nulls when assigning replicas

2016-07-25 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-16234:
-
 Assignee: Yi Liang
Affects Version/s: (was: 1.2.0)
   2.0.0
   Status: Patch Available  (was: Open)

> Expect and handle nulls when assigning replicas
> ---
>
> Key: HBASE-16234
> URL: https://issues.apache.org/jira/browse/HBASE-16234
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 2.0.0
>Reporter: Harsh J
>Assignee: Yi Liang
> Attachments: HBASE-16234-V1.patch
>
>
> Observed this on a cluster:
> {code}
> FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting 
> shutdown. 
> java.lang.NullPointerException 
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.replicaRegionsNotRecordedInMeta(AssignmentManager.java:2799)
>  
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assignAllUserRegions(AssignmentManager.java:2778)
>  
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.processDeadServersAndRegionsInTransition(AssignmentManager.java:638)
>  
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:485)
>  
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:723)
>  
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:169) 
> at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1481) 
> at java.lang.Thread.run(Thread.java:745) 
> {code}
> It looks like {{FSTableDescriptors#get(…)}} can be expected to return null in 
> some cases, but {{AssignmentManager.replicaRegionsNotRecordedInMeta(…)}} does 
> not currently have any handling for such a possibility.





[jira] [Commented] (HBASE-16234) Expect and handle nulls when assigning replicas

2016-07-25 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393143#comment-15393143
 ] 

Yi Liang commented on HBASE-16234:
--

replicaRegionsNotRecordedInMeta is used in 3 classes: 

1. AssignmentManager (used when the master starts and assigns all user regions)
2. EnableTableProcedure (used when the master tries to enable tables)
3. EnableTableHandler (used when the master tries to recover tables that are 
not fully moved to the ENABLED state)


My idea is that if a table descriptor is null, we just skip that table and log 
a warning about which table is corrupted, so that the user can use hbck to 
restore it; we can then enable or recover the table after it is restored.

The patch is for the master (2.0.0) branch.
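The skip-and-warn idea can be sketched with plain collections; `usableTables` and the descriptor map here are hypothetical stand-ins for the AssignmentManager logic, not the real API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class DescriptorFilter {
    /**
     * Sketch of the skip-and-warn idea: partition table names by whether
     * their descriptor lookup returned null. Tables with a null descriptor
     * are collected for reporting (so the user can repair them with hbck)
     * instead of crashing the master with a NullPointerException.
     */
    public static List<String> usableTables(Map<String, Object> descriptors,
                                            List<String> corrupted) {
        List<String> usable = new ArrayList<>();
        for (Map.Entry<String, Object> e : descriptors.entrySet()) {
            if (e.getValue() == null) {
                corrupted.add(e.getKey());  // would be logged as a warning
            } else {
                usable.add(e.getKey());
            }
        }
        return usable;
    }
}
```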


> Expect and handle nulls when assigning replicas
> ---
>
> Key: HBASE-16234
> URL: https://issues.apache.org/jira/browse/HBASE-16234
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 2.0.0
>Reporter: Harsh J
> Attachments: HBASE-16234-V1.patch
>
>
> Observed this on a cluster:
> {code}
> FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting 
> shutdown. 
> java.lang.NullPointerException 
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.replicaRegionsNotRecordedInMeta(AssignmentManager.java:2799)
>  
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assignAllUserRegions(AssignmentManager.java:2778)
>  
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.processDeadServersAndRegionsInTransition(AssignmentManager.java:638)
>  
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:485)
>  
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:723)
>  
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:169) 
> at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1481) 
> at java.lang.Thread.run(Thread.java:745) 
> {code}
> It looks like {{FSTableDescriptors#get(…)}} can be expected to return null in 
> some cases, but {{AssignmentManager.replicaRegionsNotRecordedInMeta(…)}} does 
> not currently have any handling for such a possibility.





[jira] [Updated] (HBASE-16234) Expect and handle nulls when assigning replicas

2016-07-25 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-16234:
-
Attachment: HBASE-16234-V1.patch

> Expect and handle nulls when assigning replicas
> ---
>
> Key: HBASE-16234
> URL: https://issues.apache.org/jira/browse/HBASE-16234
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 1.2.0
>Reporter: Harsh J
> Attachments: HBASE-16234-V1.patch
>
>
> Observed this on a cluster:
> {code}
> FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting 
> shutdown. 
> java.lang.NullPointerException 
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.replicaRegionsNotRecordedInMeta(AssignmentManager.java:2799)
>  
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assignAllUserRegions(AssignmentManager.java:2778)
>  
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.processDeadServersAndRegionsInTransition(AssignmentManager.java:638)
>  
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:485)
>  
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:723)
>  
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:169) 
> at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1481) 
> at java.lang.Thread.run(Thread.java:745) 
> {code}
> It looks like {{FSTableDescriptors#get(…)}} can be expected to return null in 
> some cases, but {{AssignmentManager.replicaRegionsNotRecordedInMeta(…)}} does 
> not currently have any handling for such a possibility.





[jira] [Updated] (HBASE-16281) TestMasterReplication is flaky

2016-07-25 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-16281:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to all active branches.

Thanks [~yangzhe1991] for the patch. Thanks all for reviewing.

> TestMasterReplication is flaky
> --
>
> Key: HBASE-16281
> URL: https://issues.apache.org/jira/browse/HBASE-16281
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 0.98.20, 1.1.6
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21, 1.2.3, 1.1.7
>
> Attachments: HBASE-16281-v1.patch, HBASE-16281-v1.patch, 
> HBASE-16281-v1.patch
>
>
> In TestMasterReplication we apply some mutations and wait until we can read 
> the data from the slave cluster. However, the waiting time is too short: the 
> replication service in the slave cluster may not be initialized and ready to 
> handle replication RPC requests within a few seconds. 
> We should wait longer.
> {quote}
> 2016-07-25 11:47:03,156 WARN  [Time-limited 
> test-EventThread.replicationSource,1.replicationSource.10.235.114.28%2C56313%2C1469418386448,1]
>  regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate 
> because of a local or network error: 
> java.io.IOException: java.io.IOException: Replication services are not 
> initialized yet
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2263)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:118)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:189)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:169)
> Caused by: com.google.protobuf.ServiceException: Replication services are not 
> initialized yet
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:1935)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22751)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2212)
>   ... 3 more
> {quote}





[jira] [Updated] (HBASE-16281) TestMasterReplication is flaky

2016-07-25 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-16281:
--
Affects Version/s: (was: 1.1.5)
   1.1.6
   1.4.0
   1.3.0
   2.0.0
Fix Version/s: 1.1.7
   1.2.3
   0.98.21
   1.4.0
   1.3.0
   2.0.0

> TestMasterReplication is flaky
> --
>
> Key: HBASE-16281
> URL: https://issues.apache.org/jira/browse/HBASE-16281
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 0.98.20, 1.1.6
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21, 1.2.3, 1.1.7
>
> Attachments: HBASE-16281-v1.patch, HBASE-16281-v1.patch, 
> HBASE-16281-v1.patch
>
>
> In TestMasterReplication we apply some mutations and wait until we can read 
> the data from the slave cluster. However, the waiting time is too short: the 
> replication service in the slave cluster may not be initialized and ready to 
> handle replication RPC requests within a few seconds. 
> We should wait longer.
> {quote}
> 2016-07-25 11:47:03,156 WARN  [Time-limited 
> test-EventThread.replicationSource,1.replicationSource.10.235.114.28%2C56313%2C1469418386448,1]
>  regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate 
> because of a local or network error: 
> java.io.IOException: java.io.IOException: Replication services are not 
> initialized yet
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2263)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:118)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:189)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:169)
> Caused by: com.google.protobuf.ServiceException: Replication services are not 
> initialized yet
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:1935)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22751)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2212)
>   ... 3 more
> {quote}





[jira] [Updated] (HBASE-16283) Batch Append/Increment will always fail if set ReturnResults to false

2016-07-25 Thread Allan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-16283:
---
Description: 
If set Append/Increment's ReturnResult attribute to false, and batch the 
appends/increments to server. The batch operation will always return false.
The reason is that, since return result is set to false, append/increment will 
return null instead of Result object. But in ResponseConverter#getResults, 
there is some check code 
{code}
if (requestRegionActionCount != responseRegionActionResultCount) {
  throw new IllegalStateException("Request mutation count=" + 
requestRegionActionCount +
  " does not match response mutation result count=" + 
responseRegionActionResultCount);
}
{code}
That means if the result count is not meet with request mutation count, it will 
fail the request.
The solution is simple, instead of returning a null result, returning a empty 
result if ReturnResult set to false.

  was:
If set Append/Increment's ReturnResult attribute to false, and batch the 
appends/increments to server. The batch operation will always return false.
The reason is that, since return result is set to false, append/increment will 
return null instead of Result object. But in ResponseConverter#getResults, 
there is some check code 
{code}
if (requestRegionActionCount != responseRegionActionResultCount) {
  throw new IllegalStateException("Request mutation count=" + 
requestRegionActionCount +
  " does not match response mutation result count=" + 
responseRegionActionResultCount);
}
{code}
That means if the result count is not meat with request mutation count, it will 
fail the request.
The solution is simple, instead of returning a null result, returning a empty 
result if ReturnResult set to false.


> Batch Append/Increment will always fail if set ReturnResults to false
> -
>
> Key: HBASE-16283
> URL: https://issues.apache.org/jira/browse/HBASE-16283
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 2.0.0, 1.1.5, 1.2.2
>Reporter: Allan Yang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: FailedCase.java, HBASE-16283.patch
>
>
> If you set an Append/Increment's ReturnResults attribute to false and batch 
> the appends/increments to the server, the batch operation will always fail.
> The reason is that, since returning results is disabled, the 
> append/increment returns null instead of a Result object. But 
> ResponseConverter#getResults contains this check:
> {code}
> if (requestRegionActionCount != responseRegionActionResultCount) {
>   throw new IllegalStateException("Request mutation count=" + 
> requestRegionActionCount +
>   " does not match response mutation result count=" + 
> responseRegionActionResultCount);
> }
> {code}
> That means that if the result count does not match the request mutation 
> count, the request fails.
> The solution is simple: instead of returning a null result, return an empty 
> result when ReturnResults is set to false.
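The failure mode and the fix can be illustrated with a small JDK-only sketch; `checkCounts`, `normalize`, and `EMPTY_RESULT` are hypothetical stand-ins for the logic in ResponseConverter#getResults, not the real HBase types:

```java
import java.util.ArrayList;
import java.util.List;

public class ResultCountCheck {
    /** Stand-in for an empty (but non-null) Result, as the proposed fix returns. */
    public static final Object EMPTY_RESULT = new Object();

    /**
     * Mirrors the shape of the check in ResponseConverter#getResults: every
     * mutation in the request must have a corresponding result in the response.
     */
    public static List<Object> checkCounts(int requestCount, List<Object> results) {
        if (requestCount != results.size()) {
            throw new IllegalStateException("Request mutation count=" + requestCount
                + " does not match response mutation result count=" + results.size());
        }
        return results;
    }

    /**
     * The proposed fix in one line: substitute an empty Result for null so the
     * per-mutation result count stays aligned with the request count.
     */
    public static Object normalize(Object result) {
        return result == null ? EMPTY_RESULT : result;
    }

    public static void main(String[] args) {
        List<Object> results = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            results.add(normalize(null));  // ReturnResults=false mutations
        }
        checkCounts(3, results);  // passes: 3 results for 3 mutations
        System.out.println("batch with empty results passes the count check");
    }
}
```

With `normalize`, a batch whose mutations all disable result return still produces one (empty) result per mutation, so the strict count check no longer trips.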





[jira] [Comment Edited] (HBASE-16283) Batch Append/Increment will always fail if set ReturnResults to false

2016-07-25 Thread Allan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393099#comment-15393099
 ] 

Allan Yang edited comment on HBASE-16283 at 7/26/16 2:55 AM:
-

Attached a unit test to reproduce this issue. I think this issue exists in 
branch 0.94 too. Instead of failing as branch 1.0+ does, in branch 0.94 the 
batch operation will get stuck, because in 0.94 a batch operation retries when 
the result is null.


was (Author: allan163):
Attach a Unit Test to reproduce this issue

> Batch Append/Increment will always fail if set ReturnResults to false
> -
>
> Key: HBASE-16283
> URL: https://issues.apache.org/jira/browse/HBASE-16283
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 2.0.0, 1.1.5, 1.2.2
>Reporter: Allan Yang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: FailedCase.java, HBASE-16283.patch
>
>
> If you set an Append/Increment's ReturnResults attribute to false and batch 
> the appends/increments to the server, the batch operation will always fail.
> The reason is that, since returning results is disabled, the 
> append/increment returns null instead of a Result object. But 
> ResponseConverter#getResults contains this check:
> {code}
> if (requestRegionActionCount != responseRegionActionResultCount) {
>   throw new IllegalStateException("Request mutation count=" + 
> requestRegionActionCount +
>   " does not match response mutation result count=" + 
> responseRegionActionResultCount);
> }
> {code}
> That means that if the result count does not match the request mutation 
> count, the request fails.
> The solution is simple: instead of returning a null result, return an empty 
> result when ReturnResults is set to false.





[jira] [Updated] (HBASE-16283) Batch Append/Increment will always fail if set ReturnResults to false

2016-07-25 Thread Allan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-16283:
---
Description: 
If set Append/Increment's ReturnResult attribute to false, and batch the 
appends/increments to server. The batch operation will always return false.
The reason is that, since return result is set to false, append/increment will 
return null instead of Result object. But in ResponseConverter#getResults, 
there is some check code 
{code}
if (requestRegionActionCount != responseRegionActionResultCount) {
  throw new IllegalStateException("Request mutation count=" + 
requestRegionActionCount +
  " does not match response mutation result count=" + 
responseRegionActionResultCount);
}
{code}
That means if the result count is not meat with request mutation count, it will 
fail the request.
The solution is simple, instead of returning a null result, returning a empty 
result if ReturnResult set to false.

  was:
If set Append/Increment's ReturnResult attribute to false, and batch the 
appends/increments to server. The batch operation will always return false.
The reason is that, since return result is set to false, append/increment will 
return null instead of Result object. But in ResponseConverter#getResults, 
there is some check code 
{code}
if (requestRegionActionCount != responseRegionActionResultCount) {
  throw new IllegalStateException("Request mutation count=" + 
requestRegionActionCount +
  " does not match response mutation result count=" + 
responseRegionActionResultCount);
}
{code}
That means if the result count is not meat with request mutation count, it will 
fail the request.
The solution is simple, instead of returning a null result, return a empty 
result if ReturnResult set to null.


> Batch Append/Increment will always fail if set ReturnResults to false
> -
>
> Key: HBASE-16283
> URL: https://issues.apache.org/jira/browse/HBASE-16283
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 2.0.0, 1.1.5, 1.2.2
>Reporter: Allan Yang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: FailedCase.java, HBASE-16283.patch
>
>
> If you set an Append/Increment's ReturnResults attribute to false and batch 
> the appends/increments to the server, the batch operation will always fail.
> The reason is that, since returning results is disabled, the 
> append/increment returns null instead of a Result object. But 
> ResponseConverter#getResults contains this check:
> {code}
> if (requestRegionActionCount != responseRegionActionResultCount) {
>   throw new IllegalStateException("Request mutation count=" + 
> requestRegionActionCount +
>   " does not match response mutation result count=" + 
> responseRegionActionResultCount);
> }
> {code}
> That means that if the result count does not match the request mutation 
> count, the request fails.
> The solution is simple: instead of returning a null result, return an empty 
> result when ReturnResults is set to false.





[jira] [Updated] (HBASE-16283) Batch Append/Increment will always fail if set ReturnResults to false

2016-07-25 Thread Allan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-16283:
---
Attachment: HBASE-16283.patch

Added a patch. One question: how can I assign this issue to myself?

> Batch Append/Increment will always fail if set ReturnResults to false
> -
>
> Key: HBASE-16283
> URL: https://issues.apache.org/jira/browse/HBASE-16283
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 2.0.0, 1.1.5, 1.2.2
>Reporter: Allan Yang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: FailedCase.java, HBASE-16283.patch
>
>
> If you set an Append/Increment's ReturnResults attribute to false and batch 
> the appends/increments to the server, the batch operation will always fail.
> The reason is that, since returning results is disabled, the 
> append/increment returns null instead of a Result object. But 
> ResponseConverter#getResults contains this check:
> {code}
> if (requestRegionActionCount != responseRegionActionResultCount) {
>   throw new IllegalStateException("Request mutation count=" + 
> requestRegionActionCount +
>   " does not match response mutation result count=" + 
> responseRegionActionResultCount);
> }
> {code}
> That means that if the result count does not match the request mutation 
> count, the request fails.
> The solution is simple: instead of returning a null result, return an empty 
> result when ReturnResults is set to false.





[jira] [Updated] (HBASE-16283) Batch Append/Increment will always fail if set ReturnResults to false

2016-07-25 Thread Allan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-16283:
---
Status: Patch Available  (was: Open)

> Batch Append/Increment will always fail if set ReturnResults to false
> -
>
> Key: HBASE-16283
> URL: https://issues.apache.org/jira/browse/HBASE-16283
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 1.2.2, 1.1.5, 2.0.0
>Reporter: Allan Yang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: FailedCase.java
>
>
> If you set an Append/Increment's ReturnResults attribute to false and batch 
> the appends/increments to the server, the batch operation will always fail.
> The reason is that, since returning results is disabled, the 
> append/increment returns null instead of a Result object. But 
> ResponseConverter#getResults contains this check:
> {code}
> if (requestRegionActionCount != responseRegionActionResultCount) {
>   throw new IllegalStateException("Request mutation count=" + 
> requestRegionActionCount +
>   " does not match response mutation result count=" + 
> responseRegionActionResultCount);
> }
> {code}
> That means that if the result count does not match the request mutation 
> count, the request fails.
> The solution is simple: instead of returning a null result, return an empty 
> result when ReturnResults is set to false.





[jira] [Commented] (HBASE-16281) TestMasterReplication is flaky

2016-07-25 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393100#comment-15393100
 ] 

Yu Li commented on HBASE-16281:
---

+1, changes in patch are simple and should have no impact on other cases.

> TestMasterReplication is flaky
> --
>
> Key: HBASE-16281
> URL: https://issues.apache.org/jira/browse/HBASE-16281
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.5, 1.2.2, 0.98.20
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-16281-v1.patch, HBASE-16281-v1.patch, 
> HBASE-16281-v1.patch
>
>
> In TestMasterReplication we apply some mutations and wait until we can read 
> the data from the slave cluster. However, the waiting time is too short: the 
> replication service in the slave cluster may not be initialized and ready to 
> handle replication RPC requests within a few seconds. 
> We should wait longer.
> {quote}
> 2016-07-25 11:47:03,156 WARN  [Time-limited 
> test-EventThread.replicationSource,1.replicationSource.10.235.114.28%2C56313%2C1469418386448,1]
>  regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate 
> because of a local or network error: 
> java.io.IOException: java.io.IOException: Replication services are not 
> initialized yet
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2263)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:118)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:189)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:169)
> Caused by: com.google.protobuf.ServiceException: Replication services are not 
> initialized yet
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:1935)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22751)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2212)
>   ... 3 more
> {quote}





[jira] [Updated] (HBASE-16283) Batch Append/Increment will always fail if set ReturnResults to false

2016-07-25 Thread Allan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-16283:
---
Attachment: FailedCase.java

Attached a unit test to reproduce this issue.

> Batch Append/Increment will always fail if set ReturnResults to false
> -
>
> Key: HBASE-16283
> URL: https://issues.apache.org/jira/browse/HBASE-16283
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 2.0.0, 1.1.5, 1.2.2
>Reporter: Allan Yang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: FailedCase.java
>
>
> If you set an Append/Increment's ReturnResults attribute to false and batch 
> the appends/increments to the server, the batch operation will always fail.
> The reason is that, since returning results is disabled, the 
> append/increment returns null instead of a Result object. But 
> ResponseConverter#getResults contains this check:
> {code}
> if (requestRegionActionCount != responseRegionActionResultCount) {
>   throw new IllegalStateException("Request mutation count=" + 
> requestRegionActionCount +
>   " does not match response mutation result count=" + 
> responseRegionActionResultCount);
> }
> {code}
> That means that if the result count does not match the request mutation 
> count, the request fails.
> The solution is simple: instead of returning a null result, return an empty 
> result when ReturnResults is set to false.





[jira] [Created] (HBASE-16283) Batch Append/Increment will always fail if set ReturnResults to false

2016-07-25 Thread Allan Yang (JIRA)
Allan Yang created HBASE-16283:
--

 Summary: Batch Append/Increment will always fail if set 
ReturnResults to false
 Key: HBASE-16283
 URL: https://issues.apache.org/jira/browse/HBASE-16283
 Project: HBase
  Issue Type: Bug
  Components: API
Affects Versions: 1.2.2, 1.1.5, 2.0.0
Reporter: Allan Yang
Priority: Minor
 Fix For: 2.0.0


If you set an Append/Increment's ReturnResults attribute to false and batch 
the appends/increments to the server, the batch operation will always fail.
The reason is that, since returning results is disabled, the append/increment 
returns null instead of a Result object. But ResponseConverter#getResults 
contains this check:
{code}
if (requestRegionActionCount != responseRegionActionResultCount) {
  throw new IllegalStateException("Request mutation count=" + 
requestRegionActionCount +
  " does not match response mutation result count=" + 
responseRegionActionResultCount);
}
{code}
That means that if the result count does not match the request mutation count, 
the request fails.
The solution is simple: instead of returning a null result, return an empty 
result when ReturnResults is set to false.





[jira] [Commented] (HBASE-16225) Refactor ScanQueryMatcher

2016-07-25 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393072#comment-15393072
 ] 

Duo Zhang commented on HBASE-16225:
---

[~apurtell] [~stack] Any comments? Thanks.

And there are some topics to be discussed:

1. We should not pass delete markers to filters. A simple per-cell filter 
cannot handle delete markers well. A per-row filter may be logically 
sufficient, but it always requires reading a whole row, which is not a good 
idea for compaction since it may cause an OOM. So I think that if you want to 
handle delete markers with a coprocessor during compaction, it is better to 
write a new scanner implementation. I also suggest that we disable filters for 
raw scans.

2. Define deterministic behavior for what happens if you specify a filter 
(maybe a new type called ServerFilter, as [~ghelmling] said above) and an 
explicit column tracker during compaction. This helps us avoid breaking 
coprocessors when modifying SQM (or rather, it gives us the ability to modify 
SQM at all...).

3. Is it correct to check the filter before checking versions? I think this 
could also make a deleted cell appear again.

And it is much easier to implement new features on the refactored SQM than on 
the old monolithic SQM. Thanks.
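The interface-plus-implementations proposal can be sketched in a few lines; the names `QueryMatcher` and `MatchCode` here are illustrative stand-ins, not the actual refactored classes in the patch:

```java
public class MatcherSketch {
    /** Outcome of matching one cell, mirroring the idea of SQM match codes. */
    public enum MatchCode { INCLUDE, SKIP }

    /** The proposed abstraction: one small interface, many implementations. */
    public interface QueryMatcher {
        MatchCode match(String cell);
    }

    /** Raw scans return everything, so their matcher can be a dummy. */
    public static final QueryMatcher RAW_SCAN = cell -> MatchCode.INCLUDE;

    /**
     * A compaction-flavoured matcher with its own rule (here: drop cells
     * tagged as expired), entirely independent of user-scan logic.
     */
    public static final QueryMatcher COMPACTION = cell ->
        cell.startsWith("expired:") ? MatchCode.SKIP : MatchCode.INCLUDE;
}
```

The point of the separation is exactly what the description argues: adding a compaction rule touches only the compaction implementation, and a raw scan can carry a trivial matcher instead of threading through the combined logic.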

> Refactor ScanQueryMatcher
> -
>
> Key: HBASE-16225
> URL: https://issues.apache.org/jira/browse/HBASE-16225
> Project: HBase
>  Issue Type: Improvement
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HBASE-16225-v1.patch, HBASE-16225-v2.patch, 
> HBASE-16225.patch
>
>
> As said in HBASE-16223, the code of {{ScanQueryMatcher}} is too complicated. 
> I suggest that we abstract an interface and implement several subclasses 
> that separate the different logic into different implementations. For 
> example, the requirements of compaction and user scans are different, yet 
> right now we must also consider the user-scan logic even when we only want 
> to add logic for compaction. And a raw scan does not need a query matcher at 
> all; we can implement a dummy query matcher for it.
> Suggestions are welcome. Thanks.





[jira] [Commented] (HBASE-16266) Do not throw ScannerTimeoutException when catch UnknownScannerException

2016-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392947#comment-15392947
 ] 

Hudson commented on HBASE-16266:


FAILURE: Integrated in HBase-1.1-JDK7 #1753 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1753/])
HBASE-16266 Do not throw ScannerTimeoutException when catch (zhangduo: rev 
4e8ec680feb4981ff9991372136dc646eb0e4af2)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannerTimeout.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestPartialResultsFromClientSide.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java


> Do not throw ScannerTimeoutException when catch UnknownScannerException
> ---
>
> Key: HBASE-16266
> URL: https://issues.apache.org/jira/browse/HBASE-16266
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Scanners
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 1.1.6
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.3, 1.1.7
>
> Attachments: HBASE-16266-v1.patch, HBASE-16266-v2.patch, 
> HBASE-16266-v3.patch
>
>
> Now the scanner has heartbeats to prevent timeouts, so the time blocked in 
> ResultScanner.next() may be much longer than the scanner timeout. There is no 
> longer any need to throw ScannerTimeoutException when the server throws 
> UnknownScannerException; we can just reset the scanner, as we already do for 
> NotServingRegionException.
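As a hedged illustration of the proposed behavior (invented names, not ClientScanner's actual code), the reset-and-retry loop might look like:

```java
import java.util.function.Supplier;

// When next() hits UnknownScannerException, reopen the scanner and retry
// instead of converting it into ScannerTimeoutException, mirroring how
// NotServingRegionException is already handled.
public class ScannerRetrySketch {
    public static class UnknownScannerException extends RuntimeException {}

    // next: one attempt to fetch the next row; resetScanner: reopen the
    // scanner starting after the last row already returned to the caller.
    public static String nextWithReset(Supplier<String> next, Runnable resetScanner,
                                       int maxRetries) {
        for (int attempt = 0; ; attempt++) {
            try {
                return next.get();
            } catch (UnknownScannerException e) {
                if (attempt >= maxRetries) {
                    throw e; // give up only after the usual retry budget
                }
                resetScanner.run(); // server forgot us: just open a new scanner
            }
        }
    }
}
```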





[jira] [Commented] (HBASE-16266) Do not throw ScannerTimeoutException when catch UnknownScannerException

2016-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392876#comment-15392876
 ] 

Hudson commented on HBASE-16266:


FAILURE: Integrated in HBase-1.1-JDK8 #1839 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1839/])
HBASE-16266 Do not throw ScannerTimeoutException when catch (zhangduo: rev 
4e8ec680feb4981ff9991372136dc646eb0e4af2)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannerTimeout.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestPartialResultsFromClientSide.java


> Do not throw ScannerTimeoutException when catch UnknownScannerException
> ---
>
> Key: HBASE-16266
> URL: https://issues.apache.org/jira/browse/HBASE-16266
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Scanners
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 1.1.6
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.3, 1.1.7
>
> Attachments: HBASE-16266-v1.patch, HBASE-16266-v2.patch, 
> HBASE-16266-v3.patch
>
>
> Now the scanner has heartbeats to prevent timeouts, so the time blocked in 
> ResultScanner.next() may be much longer than the scanner timeout. There is no 
> longer any need to throw ScannerTimeoutException when the server throws 
> UnknownScannerException; we can just reset the scanner, as we already do for 
> NotServingRegionException.





[jira] [Commented] (HBASE-14743) Add metrics around HeapMemoryManager

2016-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392871#comment-15392871
 ] 

Hudson commented on HBASE-14743:


FAILURE: Integrated in HBase-Trunk_matrix #1295 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1295/])
HBASE-14743 Add metrics around HeapMemoryManager. (Reid Chan) (appy: rev 
064271da16efd3e5d9d4787d778fa711b7f9f6ab)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMetricsHeapMemoryManager.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsHeapMemoryManagerSource.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HeapMemoryManager.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsHeapMemoryManagerSourceImpl.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsHeapMemoryManager.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceFactory.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceFactoryImpl.java


> Add metrics around HeapMemoryManager
> 
>
> Key: HBASE-14743
> URL: https://issues.apache.org/jira/browse/HBASE-14743
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Reid Chan
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14743.009.patch, HBASE-14743.009.rw3.patch, 
> HBASE-14743.009.v2.patch, HBASE-14743.010.patch, HBASE-14743.010.v2.patch, 
> HBASE-14743.011.patch, Metrics snapshot 2016-6-30.png, Screen Shot 2016-06-16 
> at 5.39.13 PM.png, test2_1.png, test2_2.png, test2_3.png, test2_4.png
>
>
> It would be good to know how many invocations there have been.
> How many decided to expand the memstore.
> How many decided to expand the block cache.
> How many decided to do nothing.
> Etc.
> When that's done, use those metrics to clean up the tests.
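A hypothetical sketch of the counters being asked for (names invented here, not the ones the patch actually adds): count every tuner invocation and which way each decision went, so tests can assert on counts instead of timing.

```java
import java.util.concurrent.atomic.AtomicLong;

// Per-decision counters for the heap memory tuner.
public class HeapTunerMetricsSketch {
    public enum Decision { EXPAND_MEMSTORE, EXPAND_BLOCK_CACHE, DO_NOTHING }

    private final AtomicLong tunerRuns = new AtomicLong();
    private final AtomicLong memstoreExpansions = new AtomicLong();
    private final AtomicLong blockCacheExpansions = new AtomicLong();
    private final AtomicLong noOpDecisions = new AtomicLong();

    // Called once per tuner run with whatever the tuner decided.
    public void record(Decision decision) {
        tunerRuns.incrementAndGet();
        switch (decision) {
            case EXPAND_MEMSTORE:    memstoreExpansions.incrementAndGet(); break;
            case EXPAND_BLOCK_CACHE: blockCacheExpansions.incrementAndGet(); break;
            default:                 noOpDecisions.incrementAndGet();
        }
    }

    public long tunerRuns() { return tunerRuns.get(); }
    public long memstoreExpansions() { return memstoreExpansions.get(); }
    public long blockCacheExpansions() { return blockCacheExpansions.get(); }
    public long noOpDecisions() { return noOpDecisions.get(); }
}
```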





[jira] [Commented] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore

2016-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392872#comment-15392872
 ] 

Hudson commented on HBASE-16205:


FAILURE: Integrated in HBase-Trunk_matrix #1295 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1295/])
HBASE-16205 When Cells are not copied to MSLAB, deep clone it while 
(anoopsamjohn: rev 2df0ef549abe0ee8e58d4290193b5d69b3e8d6c6)
* hbase-common/src/main/java/org/apache/hadoop/hbase/ShareableMemory.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodecWithTags.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodec.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/AbstractMemStore.java


> When Cells are not copied to MSLAB, deep clone it while adding to Memstore
> --
>
> Key: HBASE-16205
> URL: https://issues.apache.org/jira/browse/HBASE-16205
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-16205.patch, HBASE-16205_V2.patch, 
> HBASE-16205_V3.patch, HBASE-16205_V3.patch
>
>
> This is important after the HBASE-15180 optimization: since that change, the 
> cells flowing through the write path are backed by the same byte[] into which 
> the RPC read the request. By default MSLAB is on, so there is a copy 
> operation while adding Cells to the memstore. This copy may be absent if:
> 1. MSLAB is turned OFF;
> 2. the Cell size is more than a configurable max size, which defaults to 256 KB;
> 3. the operation is an Append/Increment.
> In such cases, we should just clone the Cell into a new byte[] and then add 
> it to the memstore. Otherwise we keep referring to the bigger byte[] chunk 
> for a longer time.
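A hedged sketch of the idea (hypothetical method, not AbstractMemStore's real API): when no MSLAB copy happened, deep clone just the cell's slice so the memstore does not pin the whole RPC request buffer.

```java
import java.util.Arrays;

public class CellCloneSketch {
    // Returns the bytes the memstore should hold on to for one cell.
    public static byte[] cellBytesForMemstore(byte[] rpcBuffer, int offset,
                                              int length, boolean copiedToMslab) {
        if (copiedToMslab) {
            // MSLAB already made its own copy; keep referring to that (elided).
            return rpcBuffer;
        }
        // No copy was made (MSLAB off, oversized cell, or Append/Increment):
        // clone the slice so the big rpcBuffer can be garbage collected.
        return Arrays.copyOfRange(rpcBuffer, offset, offset + length);
    }
}
```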





[jira] [Commented] (HBASE-16266) Do not throw ScannerTimeoutException when catch UnknownScannerException

2016-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392767#comment-15392767
 ] 

Hudson commented on HBASE-16266:


SUCCESS: Integrated in HBase-1.2-IT #562 (See 
[https://builds.apache.org/job/HBase-1.2-IT/562/])
HBASE-16266 Do not throw ScannerTimeoutException when catch (zhangduo: rev 
53f7dfccf9980b061063ef01c24c5bbc1369bf6e)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannerTimeout.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestPartialResultsFromClientSide.java


> Do not throw ScannerTimeoutException when catch UnknownScannerException
> ---
>
> Key: HBASE-16266
> URL: https://issues.apache.org/jira/browse/HBASE-16266
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Scanners
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 1.1.6
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.3, 1.1.7
>
> Attachments: HBASE-16266-v1.patch, HBASE-16266-v2.patch, 
> HBASE-16266-v3.patch
>
>
> Now the scanner has heartbeats to prevent timeouts, so the time blocked in 
> ResultScanner.next() may be much longer than the scanner timeout. There is no 
> longer any need to throw ScannerTimeoutException when the server throws 
> UnknownScannerException; we can just reset the scanner, as we already do for 
> NotServingRegionException.





[jira] [Commented] (HBASE-16266) Do not throw ScannerTimeoutException when catch UnknownScannerException

2016-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392762#comment-15392762
 ] 

Hudson commented on HBASE-16266:


SUCCESS: Integrated in HBase-1.3-IT #764 (See 
[https://builds.apache.org/job/HBase-1.3-IT/764/])
HBASE-16266 Do not throw ScannerTimeoutException when catch (zhangduo: rev 
36e99c04892bde5d8273146c7c445fea04a525cd)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestPartialResultsFromClientSide.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannerTimeout.java


> Do not throw ScannerTimeoutException when catch UnknownScannerException
> ---
>
> Key: HBASE-16266
> URL: https://issues.apache.org/jira/browse/HBASE-16266
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Scanners
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 1.1.6
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.3, 1.1.7
>
> Attachments: HBASE-16266-v1.patch, HBASE-16266-v2.patch, 
> HBASE-16266-v3.patch
>
>
> Now the scanner has heartbeats to prevent timeouts, so the time blocked in 
> ResultScanner.next() may be much longer than the scanner timeout. There is no 
> longer any need to throw ScannerTimeoutException when the server throws 
> UnknownScannerException; we can just reset the scanner, as we already do for 
> NotServingRegionException.





[jira] [Commented] (HBASE-16142) Trigger JFR session when under duress -- e.g. backed-up request queue count -- and dump the recording to log dir

2016-07-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392713#comment-15392713
 ] 

stack commented on HBASE-16142:
---

No harm adding links to the documentation and site for JMC and JFR to the class 
comment on JavaFlightRecorder.

In this exception, you could say how to enable the features, or point at a page 
that shows how to enable them:
{code}
throw new IllegalStateException("Cannot initialize Java Flight Recorder: "
    + "Commercial Features are not enabled");
{code}

FYI, there is a define HBASE_LOG_DIR in our hbase-env.sh that you might want to 
use instead in getLogDirectory.

Add where you are recording to in this message...
{code}
LOG.debug("starting Java Flight Recorder...");
{code}
... or nvm, I see you do it when stop is run.

Doesn't Options do the usage output for you if you ask it to?

Thanks Konstantin

> Trigger JFR session when under duress -- e.g. backed-up request queue count 
> -- and dump the recording to log dir
> 
>
> Key: HBASE-16142
> URL: https://issues.apache.org/jira/browse/HBASE-16142
> Project: HBase
>  Issue Type: Task
>  Components: Operability
>Reporter: stack
>Assignee: Konstantin Ryakhovskiy
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-16142.master.001.patch, 
> HBASE-16142.master.002.patch, HBASE-16142.master.003.patch
>
>
> Chatting today w/ a mighty hbase operator on how to figure out what is 
> happening during a transitory latency spike or any other transitory 
> 'weirdness' in a server, the idea came up that a Java flight recording taken 
> during a spike would give a pretty good picture of what is going on during 
> the time of duress (more ideal would be a trace of the explicit slow queries, 
> showing call stacks with timings, dumped to a sink for later review; i.e. 
> trigger an htrace when a query is slow...).
> Taking a look, programmatically triggering a JFR recording seems doable, if 
> awkward (MBean invocations). There is even a means of specifying 'triggers' 
> based off any published mbean emission -- e.g. a query queue count threshold 
> -- which looks nice. See 
> https://community.oracle.com/thread/3676275?start=0=0 and 
> https://docs.oracle.com/javacomponents/jmc-5-4/jfr-runtime-guide/run.htm#JFRUH184
> This feature could start out as a blog post describing how to do it for one 
> server. Next could be a plugin on Canary that looks at mbean values and, if 
> over a configured threshold, triggers a recording remotely. Finally, we could 
> integrate a couple of triggers that fire when issues arise, via the trigger 
> mechanism.
> Marking as a beginner feature.
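A hedged sketch of the "awkward MBean invocation" route the description mentions: ask the JVM's DiagnosticCommand MBean to start a JFR recording, the programmatic equivalent of `jcmd <pid> JFR.start`. Whether this works depends on the JVM (Oracle JDK 8 needs commercial features unlocked; JFR is built into OpenJDK 11+), so this sketch reports failures instead of throwing.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JfrTriggerSketch {
    // Attempts to start a named, time-bounded JFR recording; returns the
    // diagnostic-command output on success, or an explanation on failure.
    public static String tryStartRecording(String name, String duration) {
        try {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            ObjectName dcmd = new ObjectName("com.sun.management:type=DiagnosticCommand");
            // jfrStart takes the same arguments as `jcmd <pid> JFR.start`.
            Object out = mbs.invoke(dcmd, "jfrStart",
                    new Object[] { new String[] { "name=" + name, "duration=" + duration } },
                    new String[] { "[Ljava.lang.String;" });
            return String.valueOf(out);
        } catch (Exception e) {
            return "JFR unavailable on this JVM: " + e;
        }
    }
}
```

A duress trigger would call `tryStartRecording` when, say, the request queue count crosses a threshold, then dump the recording to the log directory when it clears.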





[jira] [Commented] (HBASE-16234) Expect and handle nulls when assigning replicas

2016-07-25 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392708#comment-15392708
 ] 

Yi Liang commented on HBASE-16234:
--

Is it fine if I pick up this jira? I have done some research in the code, and a 
patch will be provided soon.

> Expect and handle nulls when assigning replicas
> ---
>
> Key: HBASE-16234
> URL: https://issues.apache.org/jira/browse/HBASE-16234
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 1.2.0
>Reporter: Harsh J
>
> Observed this on a cluster:
> {code}
> FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting 
> shutdown. 
> java.lang.NullPointerException 
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.replicaRegionsNotRecordedInMeta(AssignmentManager.java:2799)
>  
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assignAllUserRegions(AssignmentManager.java:2778)
>  
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.processDeadServersAndRegionsInTransition(AssignmentManager.java:638)
>  
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:485)
>  
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:723)
>  
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:169) 
> at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1481) 
> at java.lang.Thread.run(Thread.java:745) 
> {code}
> It looks like {{FSTableDescriptors#get(…)}} can be expected to return null in 
> some cases, but {{AssignmentManager.replicaRegionsNotRecordedInMeta(…)}} does 
> not currently have any handling for such a possibility.
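A hypothetical sketch of the requested hardening (invented names; the real code walks HRegionInfo lists): when the descriptor lookup can return null, skip that table instead of letting a NullPointerException abort master initialization.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class ReplicaNullGuardSketch {
    // descriptorReplication stands in for FSTableDescriptors#get(...) followed
    // by a region-replication read; null models a missing descriptor.
    public static List<String> tablesNeedingReplicaCheck(List<String> tables,
            Function<String, Integer> descriptorReplication) {
        List<String> result = new ArrayList<>();
        for (String table : tables) {
            Integer replication = descriptorReplication.apply(table);
            if (replication == null) {
                continue; // missing descriptor: log-and-skip, don't NPE
            }
            if (replication > 1) {
                result.add(table); // only replicated tables need replica handling
            }
        }
        return result;
    }
}
```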





[jira] [Commented] (HBASE-16142) Trigger JFR session when under duress -- e.g. backed-up request queue count -- and dump the recording to log dir

2016-07-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392705#comment-15392705
 ] 

stack commented on HBASE-16142:
---

Do they? Have you tried a compile of hbase with this patch in place over 
openjdk?

What about my other questions [~ryakhovskiy]...  This tool triggers the 
recording. Is it needed? Are you thinking that operators might trigger it on 
clusters using your script rather than a java mission control window?

> Trigger JFR session when under duress -- e.g. backed-up request queue count 
> -- and dump the recording to log dir
> 
>
> Key: HBASE-16142
> URL: https://issues.apache.org/jira/browse/HBASE-16142
> Project: HBase
>  Issue Type: Task
>  Components: Operability
>Reporter: stack
>Assignee: Konstantin Ryakhovskiy
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-16142.master.001.patch, 
> HBASE-16142.master.002.patch, HBASE-16142.master.003.patch
>
>
> Chatting today w/ a mighty hbase operator on how to figure out what is 
> happening during a transitory latency spike or any other transitory 
> 'weirdness' in a server, the idea came up that a Java flight recording taken 
> during a spike would give a pretty good picture of what is going on during 
> the time of duress (more ideal would be a trace of the explicit slow queries, 
> showing call stacks with timings, dumped to a sink for later review; i.e. 
> trigger an htrace when a query is slow...).
> Taking a look, programmatically triggering a JFR recording seems doable, if 
> awkward (MBean invocations). There is even a means of specifying 'triggers' 
> based off any published mbean emission -- e.g. a query queue count threshold 
> -- which looks nice. See 
> https://community.oracle.com/thread/3676275?start=0=0 and 
> https://docs.oracle.com/javacomponents/jmc-5-4/jfr-runtime-guide/run.htm#JFRUH184
> This feature could start out as a blog post describing how to do it for one 
> server. Next could be a plugin on Canary that looks at mbean values and, if 
> over a configured threshold, triggers a recording remotely. Finally, we could 
> integrate a couple of triggers that fire when issues arise, via the trigger 
> mechanism.
> Marking as a beginner feature.





[jira] [Commented] (HBASE-16279) Separate and rewrite intro and quickstart

2016-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392679#comment-15392679
 ] 

Hadoop QA commented on HBASE-16279:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
30s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 14s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 42s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 23s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 57s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
4s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 
2s {color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
38m 27s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 0s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 129m 25s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s 
{color} | {color:red} Patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 210m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures |
|   | hadoop.hbase.TestPartialResultsFromClientSide |
| Timed out junit tests | org.apache.hadoop.hbase.TestHBaseTestingUtility |
|   | org.apache.hadoop.hbase.TestZooKeeper |
|   | org.apache.hadoop.hbase.security.access.TestCellACLWithMultipleVersions |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819985/HBASE-16279-v1.patch |
| JIRA Issue | HBASE-16279 |
| Optional Tests |  asflicense  javac  javadoc  unit  xml  compile  shellcheck  
shelldocs  |
| uname | Linux asf910.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 2df0ef5 |
| Default Java | 1.7.0_80 |
| Multi-JDK versions |  

[jira] [Commented] (HBASE-14921) Memory optimizations

2016-07-25 Thread Edward Bortnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392671#comment-15392671
 ] 

Edward Bortnikov commented on HBASE-14921:
--

Let me reiterate that we are respectful of everyone's contribution and are 
trying to do the right thing, as much by consensus as possible.

Here's a suggestion. For the sake of the current patch, let's decouple the 
in-memory flush configuration from the compaction configuration; the latter is 
a special case of the former. With compaction protected by an explicit flag, we 
no longer need the speculative scan to predict its worthiness, and the code 
becomes simple. In the future, we can discuss smart policies to help us 
eliminate this flag.

[~anastas] and [~anoop.hbase], can we agree on this as a base for further 
discussion?

> Memory optimizations
> 
>
> Key: HBASE-14921
> URL: https://issues.apache.org/jira/browse/HBASE-14921
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Anastasia Braginsky
> Attachments: CellBlocksSegmentInMemStore.pdf, 
> CellBlocksSegmentinthecontextofMemStore(1).pdf, HBASE-14921-V01.patch, 
> HBASE-14921-V02.patch, HBASE-14921-V03.patch, HBASE-14921-V04-CA-V02.patch, 
> HBASE-14921-V04-CA.patch, HBASE-14921-V05-CAO.patch, 
> HBASE-14921-V06-CAO.patch, InitialCellArrayMapEvaluation.pdf, 
> IntroductiontoNewFlatandCompactMemStore.pdf
>
>
> Memory optimizations including compressed format representation and offheap 
> allocations





[jira] [Commented] (HBASE-16261) MultiHFileOutputFormat Enhancement

2016-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392668#comment-15392668
 ] 

Hadoop QA commented on HBASE-16261:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
29s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
21s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
32m 38s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 115m 7s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 58m 
36s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 233m 59s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestHRegion |
|   | hadoop.hbase.replication.TestMasterReplication |
|   | hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures |
| Timed out junit tests | org.apache.hadoop.hbase.util.TestHBaseFsckOneRS |
|   | org.apache.hadoop.hbase.TestZooKeeper |
|   | org.apache.hadoop.hbase.filter.TestFilterWithScanLimits |
|   | org.apache.hadoop.hbase.filter.TestFuzzyRowFilterEndToEnd |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819980/HBASE-16261-V3.patch |
| JIRA Issue | HBASE-16261 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |

[jira] [Commented] (HBASE-14921) Memory optimizations

2016-07-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392669#comment-15392669
 ] 

stack commented on HBASE-14921:
---

Ok. Left some notes on the review, but any chance of a high-level overview of 
what the latest patch iteration delivers? Does it jibe w/ the attached design? 
If so, that's grand. I am asking because I presume it has morphed since my old 
reviews. Would it help [~anastas] if I ran another version of [~ram_krish]'s 
loading test? A YCSB, say?

> Memory optimizations
> 
>
> Key: HBASE-14921
> URL: https://issues.apache.org/jira/browse/HBASE-14921
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Anastasia Braginsky
> Attachments: CellBlocksSegmentInMemStore.pdf, 
> CellBlocksSegmentinthecontextofMemStore(1).pdf, HBASE-14921-V01.patch, 
> HBASE-14921-V02.patch, HBASE-14921-V03.patch, HBASE-14921-V04-CA-V02.patch, 
> HBASE-14921-V04-CA.patch, HBASE-14921-V05-CAO.patch, 
> HBASE-14921-V06-CAO.patch, InitialCellArrayMapEvaluation.pdf, 
> IntroductiontoNewFlatandCompactMemStore.pdf
>
>
> Memory optimizations including compressed format representation and offheap 
> allocations





[jira] [Commented] (HBASE-14743) Add metrics around HeapMemoryManager

2016-07-25 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392642#comment-15392642
 ] 

Appy commented on HBASE-14743:
--

Committed to master.
Thanks [~reidchan] for the great fix and congrats on your first contribution to 
Apache HBase.
Also, I apologize for the delay.

> Add metrics around HeapMemoryManager
> 
>
> Key: HBASE-14743
> URL: https://issues.apache.org/jira/browse/HBASE-14743
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Reid Chan
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14743.009.patch, HBASE-14743.009.rw3.patch, 
> HBASE-14743.009.v2.patch, HBASE-14743.010.patch, HBASE-14743.010.v2.patch, 
> HBASE-14743.011.patch, Metrics snapshot 2016-6-30.png, Screen Shot 2016-06-16 
> at 5.39.13 PM.png, test2_1.png, test2_2.png, test2_3.png, test2_4.png
>
>
> It would be good to know how many invocations there have been.
> How many decided to expand the memstore.
> How many decided to expand the block cache.
> How many decided to do nothing.
> Etc.
> When that's done, use those metrics to clean up the tests.





[jira] [Updated] (HBASE-14743) Add metrics around HeapMemoryManager

2016-07-25 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-14743:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add metrics around HeapMemoryManager
> 
>
> Key: HBASE-14743
> URL: https://issues.apache.org/jira/browse/HBASE-14743
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Reid Chan
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14743.009.patch, HBASE-14743.009.rw3.patch, 
> HBASE-14743.009.v2.patch, HBASE-14743.010.patch, HBASE-14743.010.v2.patch, 
> HBASE-14743.011.patch, Metrics snapshot 2016-6-30.png, Screen Shot 2016-06-16 
> at 5.39.13 PM.png, test2_1.png, test2_2.png, test2_3.png, test2_4.png
>
>
> It would be good to know how many invocations there have been.
> How many decided to expand the memstore.
> How many decided to expand the block cache.
> How many decided to do nothing.
> Etc.
> When that's done, use those metrics to clean up the tests.





[jira] [Updated] (HBASE-14743) Add metrics around HeapMemoryManager

2016-07-25 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-14743:
-
Fix Version/s: 2.0.0






[jira] [Commented] (HBASE-16266) Do not throw ScannerTimeoutException when catch UnknownScannerException

2016-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392589#comment-15392589
 ] 

Hudson commented on HBASE-16266:


SUCCESS: Integrated in HBase-1.2 #680 (See 
[https://builds.apache.org/job/HBase-1.2/680/])
HBASE-16266 Do not throw ScannerTimeoutException when catch (zhangduo: rev 
53f7dfccf9980b061063ef01c24c5bbc1369bf6e)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannerTimeout.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestPartialResultsFromClientSide.java


> Do not throw ScannerTimeoutException when catch UnknownScannerException
> ---
>
> Key: HBASE-16266
> URL: https://issues.apache.org/jira/browse/HBASE-16266
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Scanners
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 1.1.6
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.3, 1.1.7
>
> Attachments: HBASE-16266-v1.patch, HBASE-16266-v2.patch, 
> HBASE-16266-v3.patch
>
>
> Now the scanner has a heartbeat to prevent timeouts. The time blocked on 
> ResultScanner.next() may be much longer than the scanner timeout, so there is 
> no longer any need to throw ScannerTimeoutException when the server throws 
> UnknownScannerException; we can just reset the scanner, as we do for 
> NotServingRegionException.
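The "reset instead of throw" behavior can be sketched in isolation. The classes below are simplified stand-ins, not the real ClientScanner code: the "server" forgets a scanner whose lease expired, and the client transparently reopens from the last row it returned rather than surfacing a timeout to the caller:

```java
import java.util.List;

// Simplified stand-in for the lost-lease condition (not HBase's class).
class UnknownScannerException extends RuntimeException {}

// Sketch of a client scanner that resets itself on a lost lease.
class ResettableScanner {
    private final List<String> rows;       // pretend region contents
    private int cursor = 0;                // "server-side" position
    private int lastReturned = -1;         // last row index handed out
    private boolean leaseExpired = false;  // simulate an expired lease

    ResettableScanner(List<String> rows) { this.rows = rows; }

    void expireLease() { leaseExpired = true; }

    // One attempt against the "server"; throws if the lease was lost.
    private String rawNext() {
        if (leaseExpired) throw new UnknownScannerException();
        return cursor < rows.size() ? rows.get(cursor++) : null;
    }

    // Public next(): on UnknownScannerException, silently reopen just
    // past the last row already returned and retry, instead of
    // propagating a ScannerTimeoutException to the caller.
    String next() {
        try {
            String r = rawNext();
            if (r != null) lastReturned = cursor - 1;
            return r;
        } catch (UnknownScannerException e) {
            leaseExpired = false;        // "reopen" the scanner
            cursor = lastReturned + 1;   // resume after the last row seen
            String r = rawNext();
            if (r != null) lastReturned = cursor - 1;
            return r;
        }
    }
}
```

The caller never sees the lease loss; it simply gets the next row, which is the behavior the issue proposes.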





[jira] [Commented] (HBASE-14921) Memory optimizations

2016-07-25 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392580#comment-15392580
 ] 

Anastasia Braginsky commented on HBASE-14921:
-

All code review comments from the review board have been addressed, and all 
replies are on the review board. As I said above, the main concerns are:

1. Correctness exceptions -- this is under investigation and is going to be 
fixed.
2. The cost of the compaction estimation -- we are going to run the PE tool 
ourselves.
3. Small flushes to disk due to the lack of compaction -- no doubt this can be 
arranged, but probably not under this JIRA.

This is my summary.

> Memory optimizations
> 
>
> Key: HBASE-14921
> URL: https://issues.apache.org/jira/browse/HBASE-14921
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Anastasia Braginsky
> Attachments: CellBlocksSegmentInMemStore.pdf, 
> CellBlocksSegmentinthecontextofMemStore(1).pdf, HBASE-14921-V01.patch, 
> HBASE-14921-V02.patch, HBASE-14921-V03.patch, HBASE-14921-V04-CA-V02.patch, 
> HBASE-14921-V04-CA.patch, HBASE-14921-V05-CAO.patch, 
> HBASE-14921-V06-CAO.patch, InitialCellArrayMapEvaluation.pdf, 
> IntroductiontoNewFlatandCompactMemStore.pdf
>
>
> Memory optimizations including compressed format representation and offheap 
> allocations





[jira] [Commented] (HBASE-14921) Memory optimizations

2016-07-25 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392571#comment-15392571
 ] 

Anastasia Braginsky commented on HBASE-14921:
-

First, I agree that the sizing issue is ugly and needs to be improved. That is 
partially done in this patch, and I plan to improve it further.

However, I think it is unnecessary, and not urgent, to open another JIRA for 
this fix. This is not only a rebase issue, because we are taking the code in 
two different directions. We could live with the code as is (or at least see 
the final outcome of 14921), and later agree on how to arrange the sizes (if 
what we have is not good enough)...

Your two concerns are very clear:
1. Flattening without compaction causes many small segments in the pipeline, 
and they are not flushed all together.
2. The cost of the compaction prediction.

Please correct me if I am wrong.

We understand those concerns. There is no argument that your first concern will 
be fixed. For your second concern, we are going to benchmark it ourselves with 
PE.






[jira] [Commented] (HBASE-16266) Do not throw ScannerTimeoutException when catch UnknownScannerException

2016-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392529#comment-15392529
 ] 

Hudson commented on HBASE-16266:


FAILURE: Integrated in HBase-Trunk_matrix #1294 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1294/])
HBASE-16266 Do not throw ScannerTimeoutException when catch (zhangduo: rev 
6dbce2a8cb3cf5a0a07b6c3f9825a99af03a6962)
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestPartialResultsFromClientSide.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannerTimeout.java







[jira] [Commented] (HBASE-14921) Memory optimizations

2016-07-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392525#comment-15392525
 ] 

stack commented on HBASE-14921:
---

Oh, and with [~ebortnik]: what is the shortest path to committing this patch? 
Reviewing the RB comments, it seems there are still outstanding issues. Can 
these be addressed or, if not fundamental, removed and handled in a separate 
issue?






[jira] [Commented] (HBASE-14921) Memory optimizations

2016-07-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392497#comment-15392497
 ] 

stack commented on HBASE-14921:
---

Just did a reread of this whole issue.

First, what is going on in here is wonderful. True, the issue has gone on too 
long and is starting to run away from us, but it is a shining example of the 
best of collaboration: informed, data-based comparisons; accommodating, smart, 
respectful back-and-forth; detailed reviews; actual testing (and fixes) of 
posted patches; etc. You can't beat it.

Second, all involved agree on the merit of these developments and their 
promise, and are trying to help land the patch. There is consensus that we 
should commit and then address outstanding issues afterward, but as I read it 
there seems to be a reluctance to take on the patch while it demonstrably slows 
down the default case (i.e. when there are no duplicates), and there is concern 
that we may not be able to recover the lost perf with the current approach. We 
could of course turn this feature 'off' by default, but most of us don't want 
to do that for the reasons stated above (another is that [~ram_krish] and 
[~anoop.hbase] want to base some of their offheaping of the write path on the 
work done here). Can I help here? I can run some perf compares like 
[~ram_krish]'s.

[~anoop.hbase] Mind repeating what your two concerns were, just so the 
discussion is contained (this issue is long now)?








[jira] [Commented] (HBASE-16266) Do not throw ScannerTimeoutException when catch UnknownScannerException

2016-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392432#comment-15392432
 ] 

Hudson commented on HBASE-16266:


FAILURE: Integrated in HBase-1.3 #794 (See 
[https://builds.apache.org/job/HBase-1.3/794/])
HBASE-16266 Do not throw ScannerTimeoutException when catch (zhangduo: rev 
36e99c04892bde5d8273146c7c445fea04a525cd)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestPartialResultsFromClientSide.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannerTimeout.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java







[jira] [Commented] (HBASE-16266) Do not throw ScannerTimeoutException when catch UnknownScannerException

2016-07-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392427#comment-15392427
 ] 

Hudson commented on HBASE-16266:


FAILURE: Integrated in HBase-1.4 #306 (See 
[https://builds.apache.org/job/HBase-1.4/306/])
HBASE-16266 Do not throw ScannerTimeoutException when catch (zhangduo: rev 
f35b2c45d4a87bf03d70d541fa105b82ddd3ac97)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestPartialResultsFromClientSide.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannerTimeout.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java







[jira] [Commented] (HBASE-14921) Memory optimizations

2016-07-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392425#comment-15392425
 ] 

stack commented on HBASE-14921:
---

[~anastas] I like this reasoning. You've done this a few times in this issue. 
Please don't let these comments get lost in the general back-and-forth. Can you 
hoist your thoughts into the release notes/documentation for this feature?






[jira] [Commented] (HBASE-16280) Use hash based map in SequenceIdAccounting

2016-07-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392353#comment-15392353
 ] 

stack commented on HBASE-16280:
---

Skimmed. I was thinking the new Type should live somewhere other than the util 
package, but on reflection it belongs beside Bytes, which is in util.

+1

> Use hash based map in SequenceIdAccounting
> --
>
> Key: HBASE-16280
> URL: https://issues.apache.org/jira/browse/HBASE-16280
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HBASE-16280-v1.patch, HBASE-16280.patch
>
>
> Its update method is on the write path.
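Moving the write-path lookup from a comparator-based skip-list map to a hash-based map needs a key type that gives byte[] content-based equals/hashCode, since plain arrays use identity (this is presumably the "new Type beside Bytes" mentioned above). A minimal sketch of such a wrapper, hypothetical and not the type the patch actually adds:

```java
import java.util.Arrays;

// byte[] uses identity equals/hashCode, so it cannot key a hash map
// directly. This wrapper supplies content-based equality and caches
// the hash, which is what lets SequenceIdAccounting-style lookups
// move from an O(log n) skip list to an O(1) hash map.
final class ByteArrayKey {
    private final byte[] bytes;
    private final int hash;

    ByteArrayKey(byte[] bytes) {
        this.bytes = bytes;
        this.hash = Arrays.hashCode(bytes);  // computed once, reused
    }

    @Override
    public int hashCode() { return hash; }

    @Override
    public boolean equals(Object o) {
        return o instanceof ByteArrayKey
            && Arrays.equals(bytes, ((ByteArrayKey) o).bytes);
    }
}
```

With this key type, two distinct arrays holding the same encoded region name resolve to the same map entry, which a raw byte[] key would not do.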





[jira] [Updated] (HBASE-16279) Separate and rewrite intro and quickstart

2016-07-25 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-16279:

Attachment: HBASE-16279-v1.patch

This patch addresses [~daniel_vimont]'s feedback and adds back the important 
installation notes / prereqs as an appendix. It also renames the new directory 
from intro/ to installation/.

> Separate and rewrite intro and quickstart
> -
>
> Key: HBASE-16279
> URL: https://issues.apache.org/jira/browse/HBASE-16279
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 2.0.0
>
> Attachments: HBASE-16279-v1.patch, HBASE-16279.patch
>
>






[jira] [Updated] (HBASE-16279) Separate and rewrite intro and quickstart

2016-07-25 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-16279:

Status: Open  (was: Patch Available)







[jira] [Updated] (HBASE-16279) Separate and rewrite intro and quickstart

2016-07-25 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-16279:

Status: Patch Available  (was: Open)







[jira] [Updated] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore

2016-07-25 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16205:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

The test failures are not related to the patch. Pushed to master. Thanks for 
the reviews, [~ram_krish] and [~carp84].

> When Cells are not copied to MSLAB, deep clone it while adding to Memstore
> --
>
> Key: HBASE-16205
> URL: https://issues.apache.org/jira/browse/HBASE-16205
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-16205.patch, HBASE-16205_V2.patch, 
> HBASE-16205_V3.patch, HBASE-16205_V3.patch
>
>
> This is important after the HBASE-15180 optimization. After that change, the 
> cells flowing through the write path will be backed by the same byte[] into 
> which the RPC read the request. By default MSLAB is on, so there is a copy 
> operation while adding Cells to the memstore. This copy might not happen if:
> 1. MSLAB is turned OFF;
> 2. the Cell size is more than a configurable max size, which defaults to 256 
> KB;
> 3. the operation is an Append/Increment.
> In such cases, we should just clone the Cell into a new byte[] and then add 
> it to the memstore. Otherwise we keep referring to the bigger byte[] chunk 
> for a longer time.
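The cloning rule above, copy the cell out of the large RPC buffer whenever the MSLAB copy is skipped, can be sketched with a simplified cell type. This is a stand-in, not the real Cell/MSLAB code; names and the decision helper are made up for illustration:

```java
import java.util.Arrays;

// Stand-in for a Cell backed by a slice of a shared RPC buffer.
// deepClone() copies just the cell's slice into a fresh, right-sized
// byte[], so the memstore no longer pins the big shared buffer.
final class SliceCell {
    final byte[] backing;
    final int offset;
    final int length;

    SliceCell(byte[] backing, int offset, int length) {
        this.backing = backing;
        this.offset = offset;
        this.length = length;
    }

    SliceCell deepClone() {
        byte[] copy = Arrays.copyOfRange(backing, offset, offset + length);
        return new SliceCell(copy, 0, length);
    }
}

class MemstoreAddSketch {
    // 256 KB default threshold mentioned in the issue description.
    static final int MAX_MSLAB_COPY = 256 * 1024;

    // Clone exactly in the three cases the description lists: MSLAB
    // off, oversized cell, or an Append/Increment operation.
    static SliceCell prepareForMemstore(SliceCell cell, boolean mslabOn,
                                        boolean isAppendOrIncrement) {
        boolean copiedByMslab =
            mslabOn && cell.length <= MAX_MSLAB_COPY && !isAppendOrIncrement;
        return copiedByMslab ? cell : cell.deepClone();
    }
}
```

When MSLAB would have copied the cell anyway, no extra clone is made; otherwise the clone detaches the cell from the RPC buffer so that buffer can be garbage-collected promptly.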





[jira] [Updated] (HBASE-16261) MultiHFileOutputFormat Enhancement

2016-07-25 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-16261:
-
Attachment: HBASE-16261-V3.patch

>  MultiHFileOutputFormat Enhancement 
> 
>
> Key: HBASE-16261
> URL: https://issues.apache.org/jira/browse/HBASE-16261
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbase
>Affects Versions: 2.0.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16261-V1.patch, HBASE-16261-V2.patch, 
> HBASE-16261-V3.patch
>
>
> MultiHFileOutputFormat should follow HFileOutputFormat2:
> (1) HFileOutputFormat2 can read one table's region split keys and then output 
> multiple hfiles for one family, with each hfile mapping to one region. We can 
> add a partitioner to MultiHFileOutputFormat to make it support this feature.
> (2) HFileOutputFormat2 supports a customized compression algorithm per column 
> family and BloomFilter, and also supports customized DataBlockEncoding for 
> the output hfiles. We can make MultiHFileOutputFormat support these features 
> as well.
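Point (1), routing each row to the hfile for its region, boils down to a binary search over the table's sorted region start keys. A minimal self-contained sketch of that mapping (hypothetical names; not the actual MultiHFileOutputFormat or TotalOrderPartitioner code):

```java
// Maps a row key to a region index given the table's sorted split
// keys, mimicking what an HFileOutputFormat2-style total-order
// partitioner does so each reducer writes one hfile per region.
final class RegionIndexer {
    private final byte[][] splitKeys;  // sorted start keys of regions 1..n

    RegionIndexer(byte[][] splitKeys) { this.splitKeys = splitKeys; }

    // Returns 0 for rows before the first split key, i for rows in
    // [splitKeys[i-1], splitKeys[i]), and splitKeys.length for rows
    // at or after the last split key.
    int regionFor(byte[] row) {
        int lo = 0, hi = splitKeys.length - 1, result = 0;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (compare(splitKeys[mid], row) <= 0) {
                result = mid + 1;  // row sorts at or after this split key
                lo = mid + 1;
            } else {
                hi = mid - 1;
            }
        }
        return result;
    }

    // Unsigned lexicographic byte[] comparison, as HBase row keys use.
    private static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }
}
```

A partitioner wrapping this index would send all rows of one region to the same reducer, yielding one hfile per region as in HFileOutputFormat2.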





[jira] [Commented] (HBASE-14921) Memory optimizations

2016-07-25 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392298#comment-15392298
 ] 

Anoop Sam John commented on HBASE-14921:


HBASE-16229 is just trying to get the size accounting into better shape. The 
accounting happens within each class, like Segment/CompactingMemstore, rather 
than someone else setting a size via a setter while some places add an overhead 
and other places subtract it, which was really confusing.
Please see that change. I said above that I can help with the rebase that might 
be needed because of it. Sorry for the rebase effort caused by other issue 
fixes.
We all want to make sure that this feature is well accepted. We feel it is 
relevant not just in scenarios with many duplicates/deletes but in the normal 
case as well; otherwise we would not have given it this much of our effort.

I had raised two points of concern about the general approach. I am not saying 
those have to be handled as part of this JIRA; we can get this in and then work 
on them too. But I wanted to highlight them. I raised them at an early stage as 
well, but there were counter-arguments then, and those counter-arguments cannot 
stand in all cases.






[jira] [Updated] (HBASE-15558) Add random failure co processor to hbase-it

2016-07-25 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-15558:
---
Assignee: (was: Joseph)

> Add random failure co processor to hbase-it
> ---
>
> Key: HBASE-15558
> URL: https://issues.apache.org/jira/browse/HBASE-15558
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>
> HBase integration tests don't seem to be able to stress HBase all that much. 
> They don't add enough load for there to be failures under load. So let's add 
> a coprocessor that can randomly fail RPC requests.
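The core of the proposed coprocessor, failing a configurable fraction of requests, can be sketched independently of the coprocessor API. The class name and probability knob below are made up; in a real RegionObserver this check would live in the pre* hooks:

```java
import java.io.IOException;
import java.util.Random;

// Fault-injection core: before handling an RPC, throw with
// probability p. Seeding the RNG makes a chaos run reproducible.
class RandomRpcFailer {
    private final double failProbability;
    private final Random random;

    RandomRpcFailer(double failProbability, long seed) {
        this.failProbability = failProbability;
        this.random = new Random(seed);
    }

    // Call at the top of each intercepted request.
    void maybeFail() throws IOException {
        if (random.nextDouble() < failProbability) {
            throw new IOException("injected random failure");
        }
    }
}
```

With the probability exposed as a configuration property, an integration test could dial failures up until client retry logic, not raw load, becomes the thing under test.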





[jira] [Commented] (HBASE-14921) Memory optimizations

2016-07-25 Thread Edward Bortnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392213#comment-15392213
 ] 

Edward Bortnikov commented on HBASE-14921:
--

Suggest we bring some order to this discussion; there are really multiple 
issues on the table.

TL;DR: Let's get this patch in shape and check it in without over-optimizing; 
it's already quite big. 

1. Bugs in the current PR. Thanks for reporting. Those must be fixed, period. 
We are working on reproducing and fixing them.
2. Decoupling in-memory flush (with flattening) from compaction, either 
algorithmically or via configuration. IMHO, this is a matter of optimization; 
either approach has its pros and cons. For example, if flattening and 
compaction were always coupled, the too-many-open-files problem would not have 
emerged. In general, we're in favor of a smart system with as few parameter 
knobs as possible, capable of figuring out the compaction benefits at a low 
cost. But again, this is a matter of policy. We suggest deferring it beyond 
the current commit. 
3. Concurrent development. Currently, there are at least two JIRAs 
(HBASE-16003 and HBASE-16229) that try to concurrently handle the same issues 
as this JIRA, which creates a lot of friction in the code. The prior consensus 
was that HBASE-14921 would be the umbrella for all memory optimizations, 
including the ultimate flattening (CellChunkMap). Failing to stick to this 
discipline slows us down a lot. [~anoop.hbase] and [~ram_krish], if you feel 
you've reached a more advanced stage of flat memory implementation for the 
ultimate off-heaping, and prefer to lead that charge, that is perfectly fine 
with us. But let us merge the 14921 patch first (it's already heavily 
invested), and start optimizing on top of it. 

Cheers,
Ed






[jira] [Commented] (HBASE-16266) Do not throw ScannerTimeoutException when catch UnknownScannerException

2016-07-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392113#comment-15392113
 ] 

stack commented on HBASE-16266:
---

Thanks [~Apache9] (and thanks for the considered fix [~yangzhe1991])






[jira] [Commented] (HBASE-16260) Audit dependencies for Category-X

2016-07-25 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392106#comment-15392106
 ] 

Sean Busbey commented on HBASE-16260:
-

Arg. I've been unable to prioritize this issue enough given its impact on the 
project. I think I'll get it done by Wednesday mid-day Central Time. If anyone 
thinks they can do it prior to that, please take the issue.

> Audit dependencies for Category-X
> -
>
> Key: HBASE-16260
> URL: https://issues.apache.org/jira/browse/HBASE-16260
> Project: HBase
>  Issue Type: Task
>  Components: community, dependencies
>Affects Versions: 2.0.0, 1.2.0, 1.3.0, 1.2.1, 1.1.4, 1.0.4, 1.1.5, 1.2.2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.0.0, 1.1.6, 1.2.3
>
>
> Make sure we do not have Category-X dependencies.
> Right now we at least have an LGPL-licensed dependency, xom:xom (thanks to 
> PHOENIX-3103 for the catch).





[jira] [Updated] (HBASE-16266) Do not throw ScannerTimeoutException when catch UnknownScannerException

2016-07-25 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-16266:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to branch-1.1+. Thanks [~yangzhe1991] for the patch. Thanks all for 
reviewing.






[jira] [Commented] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore

2016-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392051#comment-15392051
 ] 

Hadoop QA commented on HBASE-16205:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 12m 34s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
39s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
48m 6s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 45s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 115m 32s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
56s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 219m 16s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures |
| Timed out junit tests | 
org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles |
|   | org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles |
|   | 

[jira] [Updated] (HBASE-16266) Do not throw ScannerTimeoutException when catch UnknownScannerException

2016-07-25 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-16266:
--
Affects Version/s: (was: 1.1.5)
   1.1.6
   1.4.0
   1.3.0
   2.0.0
Fix Version/s: 1.1.7
   1.2.3
   1.4.0
   1.3.0
   2.0.0

> Do not throw ScannerTimeoutException when catch UnknownScannerException
> ---
>
> Key: HBASE-16266
> URL: https://issues.apache.org/jira/browse/HBASE-16266
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Scanners
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 1.1.6
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.3, 1.1.7
>
> Attachments: HBASE-16266-v1.patch, HBASE-16266-v2.patch, 
> HBASE-16266-v3.patch
>
>
> Now the scanner has a heartbeat to prevent timeouts, so the time blocked on 
> ResultScanner.next() may be much longer than the scanner timeout. There is no 
> longer any need to throw ScannerTimeoutException when the server throws 
> UnknownScannerException; we can just reset the scanner, as we do for 
> NotServingRegionException.





[jira] [Commented] (HBASE-16281) TestMasterReplication is flaky

2016-07-25 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391984#comment-15391984
 ] 

Duo Zhang commented on HBASE-16281:
---

Is something wrong with the Jenkins build? I do not think this patch could 
cause so many test failures...

> TestMasterReplication is flaky
> --
>
> Key: HBASE-16281
> URL: https://issues.apache.org/jira/browse/HBASE-16281
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.5, 1.2.2, 0.98.20
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-16281-v1.patch, HBASE-16281-v1.patch, 
> HBASE-16281-v1.patch
>
>
> In TestMasterReplication we put some mutations and wait until we can read the 
> data from the slave cluster. However, the waiting time is too short: the 
> replication service in the slave cluster may not be initialized and ready to 
> handle replication RPC requests within several seconds. 
> We should wait longer.
> {quote}
> 2016-07-25 11:47:03,156 WARN  [Time-limited 
> test-EventThread.replicationSource,1.replicationSource.10.235.114.28%2C56313%2C1469418386448,1]
>  regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate 
> because of a local or network error: 
> java.io.IOException: java.io.IOException: Replication services are not 
> initialized yet
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2263)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:118)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:189)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:169)
> Caused by: com.google.protobuf.ServiceException: Replication services are not 
> initialized yet
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:1935)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22751)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2212)
>   ... 3 more
> {quote}
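
The "wait longer" fix described above can be sketched as a bounded polling loop, assuming we poll the slave cluster with a generous deadline instead of a short fixed wait. This is not the actual HBASE-16281 patch; the Supplier probe stands in for an HBase Get against the slave table, and all names are illustrative.

```java
import java.util.function.Supplier;

public class WaitForReplication {
    /** Polls until the probe succeeds or the deadline passes. */
    static boolean waitUntil(Supplier<Boolean> probe, long timeoutMs, long intervalMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (probe.get()) {
                return true; // replicated data is now visible on the slave
            }
            Thread.sleep(intervalMs); // replication may still be initializing
        }
        return probe.get(); // one final check at the deadline
    }
}
```

With a deadline of, say, several minutes, a slow-to-initialize replication service makes the test slower but not flaky.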





[jira] [Commented] (HBASE-16282) java.io.IOException: Took too long to split the files and create the references, aborting split

2016-07-25 Thread wangyongqiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391939#comment-15391939
 ] 

wangyongqiang commented on HBASE-16282:
---


{quote}
2016-07-25 08:25:11,257 INFO [regionserver60020-splits-1466239518933] 
regionserver.SplitRequest: Running rollback/cleanup of failed split of 
ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.;
 Took too long to split the files and create the references, aborting split 
{quote}

1. Check whether there are many HFiles in region 
crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.

2. This problem was fixed in 0.98.14 by HBASE-13959.


> java.io.IOException: Took too long to split the files and create the 
> references, aborting split
> ---
>
> Key: HBASE-16282
> URL: https://issues.apache.org/jira/browse/HBASE-16282
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 0.98.8
>Reporter: dcswinner
>
> Recently I found some exceptions in my HBase cluster when some regions were 
> splitting. In the regionserver logs, the exceptions look like the following:
> 2016-07-25 08:24:30,502 INFO [regionserver60020-splits-1466239518933] 
> regionserver.SplitTransaction: Starting split of region 
> ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.
>  
> 2016-07-25 08:24:30,938 INFO [regionserver60020-splits-1466239518933] 
> regionserver.HRegion: Started memstore flush for 
> ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.,
>  current region memstore size 28.0 K 
> 2016-07-25 08:24:36,137 INFO [regionserver60020-splits-1466239518933] 
> regionserver.DefaultStoreFlusher: Flushed, sequenceid=15546530, memsize=28.0 
> K, hasBloomFilter=true, into tmp file 
> hdfs://suninghadoop2/hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.tmp/6ee8bb3e4c0a4af591f94a163b272f5f
>  
> 2016-07-25 08:24:36,590 INFO [regionserver60020-splits-1466239518933] 
> regionserver.HStore: Added 
> hdfs://suninghadoop2/hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/noverison/6ee8bb3e4c0a4af591f94a163b272f5f,
>  entries=24, sequenceid=15546530, filesize=25.9 K 
> 2016-07-25 08:24:36,591 INFO [regionserver60020-splits-1466239518933] 
> regionserver.HRegion: Finished memstore flush of ~28.0 K/28624, 
> currentsize=0/0 for region 
> ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.
>  in 5652ms, sequenceid=15546530, compaction requested=true 
> 2016-07-25 08:24:36,647 INFO 
> [StoreCloserThread-ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.-1]
>  regionserver.HStore: Closed noverison 
> 2016-07-25 08:24:36,647 INFO [regionserver60020-splits-1466239518933] 
> regionserver.HRegion: Closed 
> ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.
>  
> 2016-07-25 08:24:43,264 INFO [StoreFileSplitter-0] hdfs.DFSClient: Could not 
> complete 
> /hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.splits/e142341c56805aed68d3f99bae3e14f3/noverison/14126e7af90e4d4cbcbdc45d98e130d0.b318fc37c2aac4705007200cc454e7fa
>  retrying... 
> 2016-07-25 08:24:47,842 INFO [StoreFileSplitter-0] hdfs.DFSClient: Could not 
> complete 
> /hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.splits/e142341c56805aed68d3f99bae3e14f3/noverison/14126e7af90e4d4cbcbdc45d98e130d0.b318fc37c2aac4705007200cc454e7fa
>  retrying... 
> 2016-07-25 08:24:47,842 INFO [StoreFileSplitter-0] hdfs.DFSClient: Could not 
> complete 
> /hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.splits/e142341c56805aed68d3f99bae3e14f3/noverison/14126e7af90e4d4cbcbdc45d98e130d0.b318fc37c2aac4705007200cc454e7fa
>  retrying... 
> 2016-07-25 08:24:55,334 INFO [StoreFileSplitter-0] hdfs.DFSClient: Could not 
> complete 
> /hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.splits/e142341c56805aed68d3f99bae3e14f3/noverison/14126e7af90e4d4cbcbdc45d98e130d0.b318fc37c2aac4705007200cc454e7fa
>  retrying... 
> 2016-07-25 08:25:11,257 INFO [regionserver60020-splits-1466239518933] 
> regionserver.SplitRequest: Running rollback/cleanup of failed split of 
> ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.;
>  Took too long to split the files and create the references, aborting split 
> 

[jira] [Commented] (HBASE-16280) Use hash based map in SequenceIdAccounting

2016-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391901#comment-15391901
 ] 

Hadoop QA commented on HBASE-16280:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
35s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 47s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 49s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 125m 35s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
32s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 182m 10s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.TestNamespace |
|   | hadoop.hbase.coprocessor.TestRegionServerCoprocessorEndpoint |
|   | hadoop.hbase.replication.TestReplicationKillMasterRS |
|   | hadoop.hbase.client.TestTableSnapshotScanner |
|   | hadoop.hbase.client.TestMobCloneSnapshotFromClient |
|   | 

[jira] [Commented] (HBASE-16281) TestMasterReplication is flaky

2016-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391883#comment-15391883
 ] 

Hadoop QA commented on HBASE-16281:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
32s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 50s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 121m 47s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 170m 39s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.TestServerSideScanMetricsFromClientSide |
|   | hadoop.hbase.snapshot.TestMobSecureExportSnapshot |
|   | hadoop.hbase.snapshot.TestMobRestoreFlushSnapshotFromClient |
|   | hadoop.hbase.coprocessor.TestWALObserver |
|   | hadoop.hbase.TestClusterBootOrder |
|   | hadoop.hbase.mapreduce.TestTableInputFormatScan1 |
|   | hadoop.hbase.coprocessor.TestCoprocessorTableEndpoint |
|   | hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles |
|   | hadoop.hbase.mapreduce.TestSyncTable |
|   | hadoop.hbase.security.visibility.TestVisibilityLabelsWithSLGStack |
|   | hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient |
|   | hadoop.hbase.snapshot.TestMobExportSnapshot |
|   | hadoop.hbase.mapreduce.TestRowCounter |
|   | hadoop.hbase.mapreduce.TestImportTSVWithOperationAttributes |
|   | hadoop.hbase.TestGlobalMemStoreSize |
| 

[jira] [Commented] (HBASE-16282) java.io.IOException: Took too long to split the files and create the references, aborting split

2016-07-25 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391797#comment-15391797
 ] 

Heng Chen commented on HBASE-16282:
---

{quote}
2016-07-25 08:24:43,264 INFO [StoreFileSplitter-0] hdfs.DFSClient: Could not 
complete 
/hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.splits/e142341c56805aed68d3f99bae3e14f3/noverison/14126e7af90e4d4cbcbdc45d98e130d0.b318fc37c2aac4705007200cc454e7fa
 retrying... 
{quote}

It seems the related file on HDFS could not be closed. Was your HDFS in a 
normal state at that time?

> java.io.IOException: Took too long to split the files and create the 
> references, aborting split
> ---
>
> Key: HBASE-16282
> URL: https://issues.apache.org/jira/browse/HBASE-16282
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 0.98.8
>Reporter: dcswinner
>
> Recently I found some exceptions in my HBase cluster when some regions were 
> splitting. In the regionserver logs, the exceptions look like the following:
> 2016-07-25 08:24:30,502 INFO [regionserver60020-splits-1466239518933] 
> regionserver.SplitTransaction: Starting split of region 
> ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.
>  
> 2016-07-25 08:24:30,938 INFO [regionserver60020-splits-1466239518933] 
> regionserver.HRegion: Started memstore flush for 
> ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.,
>  current region memstore size 28.0 K 
> 2016-07-25 08:24:36,137 INFO [regionserver60020-splits-1466239518933] 
> regionserver.DefaultStoreFlusher: Flushed, sequenceid=15546530, memsize=28.0 
> K, hasBloomFilter=true, into tmp file 
> hdfs://suninghadoop2/hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.tmp/6ee8bb3e4c0a4af591f94a163b272f5f
>  
> 2016-07-25 08:24:36,590 INFO [regionserver60020-splits-1466239518933] 
> regionserver.HStore: Added 
> hdfs://suninghadoop2/hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/noverison/6ee8bb3e4c0a4af591f94a163b272f5f,
>  entries=24, sequenceid=15546530, filesize=25.9 K 
> 2016-07-25 08:24:36,591 INFO [regionserver60020-splits-1466239518933] 
> regionserver.HRegion: Finished memstore flush of ~28.0 K/28624, 
> currentsize=0/0 for region 
> ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.
>  in 5652ms, sequenceid=15546530, compaction requested=true 
> 2016-07-25 08:24:36,647 INFO 
> [StoreCloserThread-ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.-1]
>  regionserver.HStore: Closed noverison 
> 2016-07-25 08:24:36,647 INFO [regionserver60020-splits-1466239518933] 
> regionserver.HRegion: Closed 
> ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.
>  
> 2016-07-25 08:24:43,264 INFO [StoreFileSplitter-0] hdfs.DFSClient: Could not 
> complete 
> /hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.splits/e142341c56805aed68d3f99bae3e14f3/noverison/14126e7af90e4d4cbcbdc45d98e130d0.b318fc37c2aac4705007200cc454e7fa
>  retrying... 
> 2016-07-25 08:24:47,842 INFO [StoreFileSplitter-0] hdfs.DFSClient: Could not 
> complete 
> /hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.splits/e142341c56805aed68d3f99bae3e14f3/noverison/14126e7af90e4d4cbcbdc45d98e130d0.b318fc37c2aac4705007200cc454e7fa
>  retrying... 
> 2016-07-25 08:24:47,842 INFO [StoreFileSplitter-0] hdfs.DFSClient: Could not 
> complete 
> /hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.splits/e142341c56805aed68d3f99bae3e14f3/noverison/14126e7af90e4d4cbcbdc45d98e130d0.b318fc37c2aac4705007200cc454e7fa
>  retrying... 
> 2016-07-25 08:24:55,334 INFO [StoreFileSplitter-0] hdfs.DFSClient: Could not 
> complete 
> /hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.splits/e142341c56805aed68d3f99bae3e14f3/noverison/14126e7af90e4d4cbcbdc45d98e130d0.b318fc37c2aac4705007200cc454e7fa
>  retrying... 
> 2016-07-25 08:25:11,257 INFO [regionserver60020-splits-1466239518933] 
> regionserver.SplitRequest: Running rollback/cleanup of failed split of 
> ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.;
>  Took too long to split the files and create the references, aborting split 
> java.io.IOException: Took too long to split the files and create the 
> references, aborting split 
> at 
> 

[jira] [Commented] (HBASE-16014) Get and Put constructor argument lists are divergent

2016-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391734#comment-15391734
 ] 

Hadoop QA commented on HBASE-16014:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
12s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
41s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
58s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
29s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
42m 27s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 16s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
32s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 14s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819895/HBASE-16014_v1.patch |
| JIRA Issue | HBASE-16014 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf910.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / bdd7782 |
| Default Java | 1.7.0_80 |
| Multi-JDK versions |  /usr/local/jenkins/java/jdk1.8.0:1.8.0 
/home/jenkins/jenkins-slave/tools/hudson.model.JDK/JDK_1.7_latest_:1.7.0_80 |
| findbugs | v3.0.0 |
|  Test Results | 

[jira] [Updated] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore

2016-07-25 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16205:
---
Attachment: (was: HBASE-16205_V3.patch)

> When Cells are not copied to MSLAB, deep clone it while adding to Memstore
> --
>
> Key: HBASE-16205
> URL: https://issues.apache.org/jira/browse/HBASE-16205
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-16205.patch, HBASE-16205_V2.patch, 
> HBASE-16205_V3.patch, HBASE-16205_V3.patch
>
>
> This is important after the HBASE-15180 optimization. After that change, the 
> cells flowing in the write path will be backed by the same byte[] into which 
> the RPC read the request. By default MSLAB is on, so we have a copy operation 
> while adding Cells to the memstore.  This copy might not happen if
> 1. MSLAB is turned OFF,
> 2. the Cell size is more than a configurable max size (defaults to 256 KB), or
> 3. the operation is an Append/Increment.
> In such cases, we should just clone the Cell into a new byte[] and then add it 
> to the memstore.  Otherwise we keep referring to the bigger byte[] chunk for a 
> longer time.
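The cloning step the description proposes can be sketched with plain arrays; a minimal sketch, assuming a large shared RPC buffer. `deepClone` is an illustrative name, not HBase API:

```java
// Illustrative sketch only: deepClone is a made-up name, not HBase API.
public class CellCloneSketch {
    // Copy just the cell's [offset, offset + length) slice out of the big
    // shared RPC buffer, so the buffer itself can be released sooner.
    public static byte[] deepClone(byte[] rpcBuffer, int offset, int length) {
        byte[] copy = new byte[length];
        System.arraycopy(rpcBuffer, offset, copy, 0, length);
        return copy;
    }
}
```

The point of the copy is that the small right-sized array, not the multi-KB request buffer, is what the memstore ends up referencing.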



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore

2016-07-25 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16205:
---
Attachment: HBASE-16205_V3.patch

The test failure seems related to an environment issue. I can see many OOME 
logs. Let's get another QA run. 

> When Cells are not copied to MSLAB, deep clone it while adding to Memstore
> --
>
> Key: HBASE-16205
> URL: https://issues.apache.org/jira/browse/HBASE-16205
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-16205.patch, HBASE-16205_V2.patch, 
> HBASE-16205_V3.patch, HBASE-16205_V3.patch
>
>
> This is important after the HBASE-15180 optimization. After that change, the 
> cells flowing in the write path will be backed by the same byte[] into which 
> the RPC read the request. By default MSLAB is on, so we have a copy operation 
> while adding Cells to the memstore.  This copy might not happen if
> 1. MSLAB is turned OFF,
> 2. the Cell size is more than a configurable max size (defaults to 256 KB), or
> 3. the operation is an Append/Increment.
> In such cases, we should just clone the Cell into a new byte[] and then add it 
> to the memstore.  Otherwise we keep referring to the bigger byte[] chunk for a 
> longer time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore

2016-07-25 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16205:
---
Attachment: (was: HBASE-16205_V3.patch)

> When Cells are not copied to MSLAB, deep clone it while adding to Memstore
> --
>
> Key: HBASE-16205
> URL: https://issues.apache.org/jira/browse/HBASE-16205
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-16205.patch, HBASE-16205_V2.patch, 
> HBASE-16205_V3.patch, HBASE-16205_V3.patch
>
>
> This is important after the HBASE-15180 optimization. After that change, the 
> cells flowing in the write path will be backed by the same byte[] into which 
> the RPC read the request. By default MSLAB is on, so we have a copy operation 
> while adding Cells to the memstore.  This copy might not happen if
> 1. MSLAB is turned OFF,
> 2. the Cell size is more than a configurable max size (defaults to 256 KB), or
> 3. the operation is an Append/Increment.
> In such cases, we should just clone the Cell into a new byte[] and then add it 
> to the memstore.  Otherwise we keep referring to the bigger byte[] chunk for a 
> longer time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16281) TestMasterReplication is flaky

2016-07-25 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-16281:
--
Attachment: HBASE-16281-v1.patch

Retry.

> TestMasterReplication is flaky
> --
>
> Key: HBASE-16281
> URL: https://issues.apache.org/jira/browse/HBASE-16281
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.5, 1.2.2, 0.98.20
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-16281-v1.patch, HBASE-16281-v1.patch, 
> HBASE-16281-v1.patch
>
>
> In TestMasterReplication we put some mutations and wait until we can read the 
> data from the slave cluster. However, the waiting time is too short: the 
> replication service in the slave cluster may not be initialized and ready to 
> handle replication RPC requests within a few seconds. 
> We should wait for more time.
> {quote}
> 2016-07-25 11:47:03,156 WARN  [Time-limited 
> test-EventThread.replicationSource,1.replicationSource.10.235.114.28%2C56313%2C1469418386448,1]
>  regionserver.HBaseInterClusterReplicationEndpoint(310): Can't replicate 
> because of a local or network error: 
> java.io.IOException: java.io.IOException: Replication services are not 
> initialized yet
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2263)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:118)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:189)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:169)
> Caused by: com.google.protobuf.ServiceException: Replication services are not 
> initialized yet
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.replicateWALEntry(RSRpcServices.java:1935)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22751)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2212)
>   ... 3 more
> {quote}
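The fix direction described above, waiting longer for the slave cluster's replication service instead of a short fixed wait, can be sketched as a generic poll-with-deadline helper. `WaitUtil`/`waitFor` are illustrative names, not the actual test code:

```java
import java.util.function.BooleanSupplier;

// Illustrative sketch only: waitFor is a made-up helper, not the test's code.
public class WaitUtil {
    // Poll cond until it holds or timeoutMs elapses, instead of a short
    // fixed sleep that races with slow replication-service startup.
    public static boolean waitFor(BooleanSupplier cond, long timeoutMs, long intervalMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!cond.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false; // timed out before the condition held
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }
}
```

With a helper like this, the test's wait budget becomes a single tunable timeout rather than a hard-coded number of seconds.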



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16280) Use hash based map in SequenceIdAccounting

2016-07-25 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-16280:
--
Attachment: HBASE-16280-v1.patch

Fix findbugs warnings.

> Use hash based map in SequenceIdAccounting
> --
>
> Key: HBASE-16280
> URL: https://issues.apache.org/jira/browse/HBASE-16280
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HBASE-16280-v1.patch, HBASE-16280.patch
>
>
> Its update method is on the write path.
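The rationale can be illustrated with plain JDK maps; this is a sketch of the hash-vs-sorted trade-off, assuming illustrative key/value types, not the SequenceIdAccounting change itself:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ConcurrentSkipListMap;

// Illustrative sketch of the trade-off, not the SequenceIdAccounting change.
public class MapChoiceSketch {
    // Hash-based: O(1) expected put/get with no key ordering -- cheaper on a
    // hot write path where the map is updated per mutation.
    public static ConcurrentMap<String, Long> hashBased() {
        return new ConcurrentHashMap<>();
    }

    // Skip-list-based: O(log n) operations, but iterates keys in sorted order.
    public static ConcurrentMap<String, Long> sortedMap() {
        return new ConcurrentSkipListMap<>();
    }
}
```

If the code never needs sorted iteration, the hash-based map is the cheaper choice on the write path.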



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore

2016-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391677#comment-15391677
 ] 

Hadoop QA commented on HBASE-16205:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
30s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
47s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 36s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 48s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 118m 4s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
33s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 174m 16s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.snapshot.TestMobSecureExportSnapshot |
|   | hadoop.hbase.snapshot.TestMobRestoreFlushSnapshotFromClient |
|   | hadoop.hbase.replication.TestReplicationKillMasterRS |
|   | hadoop.hbase.coprocessor.TestDoubleColumnInterpreter |
|   | 

[jira] [Commented] (HBASE-16281) TestMasterReplication is flaky

2016-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391679#comment-15391679
 ] 

Hadoop QA commented on HBASE-16281:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
30s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
34m 26s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 0s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 134m 31s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.replication.TestReplicationSyncUpToolWithBulkLoadedData |
|   | 
hadoop.hbase.replication.multiwal.TestReplicationSyncUpToolWithMultipleAsyncWAL 
|
|   | hadoop.hbase.replication.TestReplicationKillMasterRS |
|   | 
hadoop.hbase.replication.multiwal.TestReplicationKillMasterRSCompressedWithMultipleAsyncWAL
 |
|   | hadoop.hbase.replication.TestReplicationKillSlaveRS |
|   | hadoop.hbase.security.token.TestDelegationTokenWithEncryption |
|   | 
hadoop.hbase.replication.multiwal.TestReplicationEndpointWithMultipleAsyncWAL |
|   | hadoop.hbase.replication.TestReplicationStateHBaseImpl |
|   | hadoop.hbase.regionserver.TestRegionReplicas |
|   | 
hadoop.hbase.replication.multiwal.TestReplicationSyncUpToolWithMultipleWAL |
|   | hadoop.hbase.security.visibility.TestDefaultScanLabelGeneratorStack |
|   | 

[jira] [Commented] (HBASE-16280) Use hash based map in SequenceIdAccounting

2016-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391662#comment-15391662
 ] 

Hadoop QA commented on HBASE-16280:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 33s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
2s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 30s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 47s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
30s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
9s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
20s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
57m 18s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 45s 
{color} | {color:red} hbase-common generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 48s 
{color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 5s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 114m 39s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 
10s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 216m 53s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-common |
|  |  

[jira] [Commented] (HBASE-16142) Trigger JFR session when under duress -- e.g. backed-up request queue count -- and dump the recording to log dir

2016-07-25 Thread Konstantin Ryakhovskiy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391663#comment-15391663
 ] 

Konstantin Ryakhovskiy commented on HBASE-16142:


[~stack] what do you mean?
Should I add something, like a button to trigger JFR?
Or do you mean the config set has to be included in the trace, i.e. additional 
information dumped apart from the JFR file? Or do we need to include some 
additional metrics?

Can you please provide more details about your idea?



> Trigger JFR session when under duress -- e.g. backed-up request queue count 
> -- and dump the recording to log dir
> 
>
> Key: HBASE-16142
> URL: https://issues.apache.org/jira/browse/HBASE-16142
> Project: HBase
>  Issue Type: Task
>  Components: Operability
>Reporter: stack
>Assignee: Konstantin Ryakhovskiy
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-16142.master.001.patch, 
> HBASE-16142.master.002.patch, HBASE-16142.master.003.patch
>
>
> Chatting today w/ a mighty hbase operator on how to figure out what is 
> happening during a transitory latency spike or any other transitory 
> 'weirdness' in a server, the idea came up that a java flight recording during 
> a spike would include a pretty good picture of what is going on during the 
> time of duress (more ideal would be a trace of the explicit slow queries 
> showing the call stack with timings dumped to a sink for later review; i.e. 
> trigger an htrace when a query is slow...).
> Taking a look, programmatically triggering a JFR recording seems doable, if 
> awkward (MBean invocations). There is even a means of specifying 'triggers' 
> based off any published mbean emission -- e.g. a query queue count threshold 
> -- which looks nice. See 
> https://community.oracle.com/thread/3676275?start=0=0 and 
> https://docs.oracle.com/javacomponents/jmc-5-4/jfr-runtime-guide/run.htm#JFRUH184
> This feature could start out as a blog post describing how to do it for one 
> server. Next could come a plugin on Canary that looks at mbean values and, if 
> over a configured threshold, triggers a recording remotely. Finally, we could 
> integrate a couple of triggers that fire when an issue arises, via the 
> trigger mechanism.
> Marking as beginner feature.
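As a rough illustration of the "awkward MBean invocations" mentioned above: on HotSpot JVMs that ship JFR, the DiagnosticCommand MBean exposes jfrStart/jfrStop-style operations. This sketch only constructs the MBean name and does not start a recording; treat it as a hedged starting point, not a working trigger:

```java
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

// Illustrative sketch only: builds the name of the HotSpot DiagnosticCommand
// MBean (which exposes jfrStart/jfrStop-style operations on JVMs shipping
// JFR). It does not invoke anything or start a recording.
public class JfrTriggerSketch {
    public static ObjectName diagnosticCommandName() {
        try {
            return new ObjectName("com.sun.management:type=DiagnosticCommand");
        } catch (MalformedObjectNameException e) {
            // The literal above is well-formed, so this cannot happen.
            throw new IllegalStateException(e);
        }
    }
}
```

A real trigger would invoke the recording operation on this MBean via an MBeanServerConnection, subject to the JVM version and commercial-feature flags in play.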



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16014) Get and Put constructor argument lists are divergent

2016-07-25 Thread brandboat (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

brandboat updated HBASE-16014:
--
Attachment: HBASE-16014_v1.patch

use git diff, not git format-patch

> Get and Put constructor argument lists are divergent
> 
>
> Key: HBASE-16014
> URL: https://issues.apache.org/jira/browse/HBASE-16014
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Nick Dimiduk
>Assignee: brandboat
> Fix For: 2.0.0
>
> Attachments: HBASE-16014_v0.patch, HBASE-16014_v1.patch
>
>
> The API for constructing Get and Put objects for a specific rowkey is quite 
> different. 
> [Put|http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Put.html#constructor_summary]
>  supports many more variations for specifying the target rowkey and timestamp 
> compared to 
> [Get|http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html#constructor_summary].
>  Notably lacking are {{Get(byte[], int, int)}} and {{Get(ByteBuffer)}} 
> variations.
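The missing overloads named in the description could look like the following; `GetSketch` is a hypothetical stand-in for illustration only, not the HBase Get class:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Hypothetical stand-in for the overloads the issue says Get lacks; this is
// not the HBase Get class.
public class GetSketch {
    private final byte[] row;

    // Mirrors the missing Get(byte[], int, int): copy out the rowkey slice.
    public GetSketch(byte[] row, int offset, int length) {
        this.row = Arrays.copyOfRange(row, offset, offset + length);
    }

    // Mirrors the missing Get(ByteBuffer): drain the buffer's remaining bytes.
    public GetSketch(ByteBuffer row) {
        this.row = new byte[row.remaining()];
        row.get(this.row);
    }

    public byte[] getRow() {
        return row;
    }
}
```

Both constructors normalize to the same internal row byte[], which is what lets Put accept these forms today.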



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16014) Get and Put constructor argument lists are divergent

2016-07-25 Thread brandboat (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

brandboat updated HBASE-16014:
--
Attachment: (was: HBASE-16014_v1.patch)

> Get and Put constructor argument lists are divergent
> 
>
> Key: HBASE-16014
> URL: https://issues.apache.org/jira/browse/HBASE-16014
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Nick Dimiduk
>Assignee: brandboat
> Fix For: 2.0.0
>
> Attachments: HBASE-16014_v0.patch
>
>
> The API for constructing Get and Put objects for a specific rowkey is quite 
> different. 
> [Put|http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Put.html#constructor_summary]
>  supports many more variations for specifying the target rowkey and timestamp 
> compared to 
> [Get|http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html#constructor_summary].
>  Notably lacking are {{Get(byte[], int, int)}} and {{Get(ByteBuffer)}} 
> variations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16014) Get and Put constructor argument lists are divergent

2016-07-25 Thread brandboat (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

brandboat updated HBASE-16014:
--
Attachment: HBASE-16014_v1.patch

> Get and Put constructor argument lists are divergent
> 
>
> Key: HBASE-16014
> URL: https://issues.apache.org/jira/browse/HBASE-16014
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Nick Dimiduk
>Assignee: brandboat
> Fix For: 2.0.0
>
> Attachments: HBASE-16014_v0.patch, HBASE-16014_v1.patch
>
>
> The API for constructing Get and Put objects for a specific rowkey is quite 
> different. 
> [Put|http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Put.html#constructor_summary]
>  supports many more variations for specifying the target rowkey and timestamp 
> compared to 
> [Get|http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html#constructor_summary].
>  Notably lacking are {{Get(byte[], int, int)}} and {{Get(ByteBuffer)}} 
> variations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16014) Get and Put constructor argument lists are divergent

2016-07-25 Thread brandboat (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

brandboat updated HBASE-16014:
--
Attachment: (was: HBASE-16014_v1.patch)

> Get and Put constructor argument lists are divergent
> 
>
> Key: HBASE-16014
> URL: https://issues.apache.org/jira/browse/HBASE-16014
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Nick Dimiduk
>Assignee: brandboat
> Fix For: 2.0.0
>
> Attachments: HBASE-16014_v0.patch, HBASE-16014_v1.patch
>
>
> The API for constructing Get and Put objects for a specific rowkey is quite 
> different. 
> [Put|http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Put.html#constructor_summary]
>  supports many more variations for specifying the target rowkey and timestamp 
> compared to 
> [Get|http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html#constructor_summary].
>  Notably lacking are {{Get(byte[], int, int)}} and {{Get(ByteBuffer)}} 
> variations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16014) Get and Put constructor argument lists are divergent

2016-07-25 Thread brandboat (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

brandboat updated HBASE-16014:
--
Attachment: HBASE-16014_v1.patch

> Get and Put constructor argument lists are divergent
> 
>
> Key: HBASE-16014
> URL: https://issues.apache.org/jira/browse/HBASE-16014
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Nick Dimiduk
>Assignee: brandboat
> Fix For: 2.0.0
>
> Attachments: HBASE-16014_v0.patch, HBASE-16014_v1.patch
>
>
> The API for constructing Get and Put objects for a specific rowkey is quite 
> different. 
> [Put|http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Put.html#constructor_summary]
>  supports many more variations for specifying the target rowkey and timestamp 
> compared to 
> [Get|http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html#constructor_summary].
>  Notably lacking are {{Get(byte[], int, int)}} and {{Get(ByteBuffer)}} 
> variations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14330) Regular Expressions cause ipc.CallTimeoutException

2016-07-25 Thread Murtaza Kanchwala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391611#comment-15391611
 ] 

Murtaza Kanchwala commented on HBASE-14330:
---

I am facing this issue from the Phoenix side: I am writing a "LIKE" query, 
which as I understand it reduces to the same thing.

> Regular Expressions cause  ipc.CallTimeoutException
> ---
>
> Key: HBASE-14330
> URL: https://issues.apache.org/jira/browse/HBASE-14330
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Filters, IPC/RPC
>Affects Versions: 1.0.1
> Environment: CDH5.4.0
> hbase-client-1.0.0-cdh5.4.0
>Reporter: 茂军王
>  Labels: performance
>
> An "ipc.CallTimeoutException" appears when I use a scan with a RowFilter. 
> The RowFilter uses the regular expression ".*_10_version$".
> My code is below:
> {code}
> public static void main(String[] args) {
>   Scan scan = new Scan();
>   scan.setStartRow("2014-12-01".getBytes());
>   scan.setStopRow("2015-01-01".getBytes());
>   
>   String rowPattern = ".*_10_version$";
>   Filter myRowfilter = new RowFilter(CompareFilter.CompareOp.EQUAL, 
>   new RegexStringComparator(rowPattern));
>   List<Filter> myFilterList = new ArrayList<>();
>   myFilterList.add(myRowfilter);
>   FilterList filterList = new FilterList(myFilterList);
>   scan.setFilter(filterList);
>   
>   TableName tn = TableName.valueOf("oneday");
>   Table t = null;
>   ResultScanner rs = null;
>   Long i = 0L;
>   try {
>   t = HBaseUtil.getHTable(tn);
>   rs = t.getScanner(scan);
>   Iterator<Result> iter = rs.iterator();
>   
>   while(iter.hasNext()){
>   Result r = iter.next();
>   i++;
>   }
>   System.out.println(i);
>   } catch (IOException e) {
>   e.printStackTrace();
>   } finally {
>   if (rs != null) rs.close();
>   HBaseUtil.closeTable(t);
>   }
> }
> {code}
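For context, the pattern in the snippet is evaluated as a plain Java regex against every rowkey in the scan range; a minimal stdlib check of its semantics (the sample rowkeys are invented):

```java
public class RowPatternCheck {
    // Same pattern the RowFilter's RegexStringComparator evaluates per row;
    // the leading ".*" means every row in [startRow, stopRow) must be tested.
    static boolean isVersionRow(String rowKey) {
        return rowKey.matches(".*_10_version$");
    }

    public static void main(String[] args) {
        System.out.println(isVersionRow("2014-12-05_abc_10_version")); // true
        System.out.println(isVersionRow("2014-12-05_abc_11_version")); // false
    }
}
```

Because the filter runs server-side and can skip long stretches of non-matching rows, a scan over a month of data may not return anything to the client before the call timeout fires, which is consistent with the error reported below.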
> The error is below:
> {code}
> Exception in thread "main" java.lang.RuntimeException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=36, exceptions:
> Fri Aug 28 08:17:23 CST 2015, null, java.net.SocketTimeoutException: 
> callTimeout=6, callDuration=60308: row '2014-12-01' on table 'oneday' at 
> region=oneday,2014-10-18_5_osversion_6b3699557822c74d7237f2467938c62b_3.4.2,1437563105965.74b33ebe56e5d6332e823c3ebfa36b56.,
>  hostname=tk-mapp-hadoop185,60020,1440385238156, seqNum=18648
>   at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:97)
>   at com.jj.door.ScanFilterDoor.main(ScanFilterDoor.java:58)
> Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed 
> after attempts=36, exceptions:
> Fri Aug 28 08:17:23 CST 2015, null, java.net.SocketTimeoutException: 
> callTimeout=6, callDuration=60308: row '2014-12-01' on table 'oneday' at 
> region=oneday,2014-10-18_5_osversion_6b3699557822c74d7237f2467938c62b_3.4.2,1437563105965.74b33ebe56e5d6332e823c3ebfa36b56.,
>  hostname=tk-mapp-hadoop185,60020,1440385238156, seqNum=18648
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:270)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:203)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:57)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:294)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:374)
>   at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:94)
>   ... 1 more
> Caused by: java.net.SocketTimeoutException: callTimeout=6, 
> callDuration=60308: row '2014-12-01' on table 'oneday' at 
> region=oneday,2014-10-18_5_osversion_6b3699557822c74d7237f2467938c62b_3.4.2,1437563105965.74b33ebe56e5d6332e823c3ebfa36b56.,
>  hostname=tk-mapp-hadoop185,60020,1440385238156, seqNum=18648
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
>   at 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Call to 

[jira] [Updated] (HBASE-16282) java.io.IOException: Took too long to split the files and create the references, aborting split

2016-07-25 Thread dcswinner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dcswinner updated HBASE-16282:
--
Description: 
Recently I found some exceptions in my HBase cluster when some regions were 
splitting; in the regionserver node logs, the exception looks like this:
2016-07-25 08:24:30,502 INFO [regionserver60020-splits-1466239518933] 
regionserver.SplitTransaction: Starting split of region 
ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.
 
2016-07-25 08:24:30,938 INFO [regionserver60020-splits-1466239518933] 
regionserver.HRegion: Started memstore flush for 
ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.,
 current region memstore size 28.0 K 
2016-07-25 08:24:36,137 INFO [regionserver60020-splits-1466239518933] 
regionserver.DefaultStoreFlusher: Flushed, sequenceid=15546530, memsize=28.0 K, 
hasBloomFilter=true, into tmp file 
hdfs://suninghadoop2/hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.tmp/6ee8bb3e4c0a4af591f94a163b272f5f
 
2016-07-25 08:24:36,590 INFO [regionserver60020-splits-1466239518933] 
regionserver.HStore: Added 
hdfs://suninghadoop2/hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/noverison/6ee8bb3e4c0a4af591f94a163b272f5f,
 entries=24, sequenceid=15546530, filesize=25.9 K 
2016-07-25 08:24:36,591 INFO [regionserver60020-splits-1466239518933] 
regionserver.HRegion: Finished memstore flush of ~28.0 K/28624, currentsize=0/0 
for region 
ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.
 in 5652ms, sequenceid=15546530, compaction requested=true 
2016-07-25 08:24:36,647 INFO 
[StoreCloserThread-ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.-1]
 regionserver.HStore: Closed noverison 
2016-07-25 08:24:36,647 INFO [regionserver60020-splits-1466239518933] 
regionserver.HRegion: Closed 
ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.
 
2016-07-25 08:24:43,264 INFO [StoreFileSplitter-0] hdfs.DFSClient: Could not 
complete 
/hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.splits/e142341c56805aed68d3f99bae3e14f3/noverison/14126e7af90e4d4cbcbdc45d98e130d0.b318fc37c2aac4705007200cc454e7fa
 retrying... 
2016-07-25 08:24:47,842 INFO [StoreFileSplitter-0] hdfs.DFSClient: Could not 
complete 
/hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.splits/e142341c56805aed68d3f99bae3e14f3/noverison/14126e7af90e4d4cbcbdc45d98e130d0.b318fc37c2aac4705007200cc454e7fa
 retrying... 
2016-07-25 08:24:47,842 INFO [StoreFileSplitter-0] hdfs.DFSClient: Could not 
complete 
/hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.splits/e142341c56805aed68d3f99bae3e14f3/noverison/14126e7af90e4d4cbcbdc45d98e130d0.b318fc37c2aac4705007200cc454e7fa
 retrying... 
2016-07-25 08:24:55,334 INFO [StoreFileSplitter-0] hdfs.DFSClient: Could not 
complete 
/hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.splits/e142341c56805aed68d3f99bae3e14f3/noverison/14126e7af90e4d4cbcbdc45d98e130d0.b318fc37c2aac4705007200cc454e7fa
 retrying... 
2016-07-25 08:25:11,257 INFO [regionserver60020-splits-1466239518933] 
regionserver.SplitRequest: Running rollback/cleanup of failed split of 
ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.;
 Took too long to split the files and create the references, aborting split 
java.io.IOException: Took too long to split the files and create the 
references, aborting split 
at 
org.apache.hadoop.hbase.regionserver.SplitTransaction.splitStoreFiles(SplitTransaction.java:825)
 
at 
org.apache.hadoop.hbase.regionserver.SplitTransaction.stepsBeforePONR(SplitTransaction.java:429)
 
at 
org.apache.hadoop.hbase.regionserver.SplitTransaction.createDaughters(SplitTransaction.java:303)
 
at 
org.apache.hadoop.hbase.regionserver.SplitTransaction.execute(SplitTransaction.java:655)
 
at org.apache.hadoop.hbase.regionserver.SplitRequest.run(SplitRequest.java:84) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745)
=
and in the master node logs, the exception looks like this:
2016-07-25 08:24:30,504 INFO [AM.ZK.Worker-pool2-t501] master.RegionStates: 
Transition null to {e142341c56805aed68d3f99bae3e14f3 state=SPLITTING_NEW, 
ts=1469406270504, server=slave77-prd3.cnsuning.com,60020,1466236968700} 
2016-07-25 08:24:30,504 INFO 

[jira] [Updated] (HBASE-16282) java.io.IOException: Took too long to split the files and create the references, aborting split

2016-07-25 Thread dcswinner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dcswinner updated HBASE-16282:
--
Description: HBase region split took too long to split the files and create 
the references, aborting the split

> java.io.IOException: Took too long to split the files and create the 
> references, aborting split
> ---
>
> Key: HBASE-16282
> URL: https://issues.apache.org/jira/browse/HBASE-16282
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 0.98.8
>Reporter: dcswinner
>
> HBase region split took too long to split the files and create the 
> references, aborting the split
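The abort above is driven by a regionserver-side timeout on splitting store files and creating reference files. A hedged sketch of raising it in hbase-site.xml (the property name and its 30000 ms default are assumed from the 0.98-era SplitTransaction source; verify against your release):

```xml
<!-- hbase-site.xml: allow SplitTransaction.splitStoreFiles more time before
     it aborts the split. Property name and default assumed, not confirmed
     for every HBase version. -->
<property>
  <name>hbase.regionserver.fileSplitTimeout</name>
  <value>600000</value>
</property>
```

Raising the timeout only masks slow HDFS writes (visible in the "Could not complete ... retrying" DFSClient lines), so checking DataNode health is worth doing first.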





[jira] [Commented] (HBASE-16014) Get and Put constructor argument lists are divergent

2016-07-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391554#comment-15391554
 ] 

Hadoop QA commented on HBASE-16014:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
28s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 19m 54s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.6.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 22m 45s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.6.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 25m 35s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.6.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 28m 25s 
{color} | {color:red} Patch causes 11 errors with Hadoop v2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
7s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 38s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819868/HBASE-16014_v0.patch |
| JIRA Issue | HBASE-16014 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HBASE-16282) java.io.IOException: Took too long to split the files and create the references, aborting split

2016-07-25 Thread dcswinner (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391525#comment-15391525
 ] 

dcswinner commented on HBASE-16282:
---

Recently I found that my HBase cluster throws some exceptions when regions 
are splitting; the regionserver log looks like this:
2016-07-25 08:24:30,502 INFO  [regionserver60020-splits-1466239518933] 
regionserver.SplitTransaction: Starting split of region 
ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.
2016-07-25 08:24:30,938 INFO  [regionserver60020-splits-1466239518933] 
regionserver.HRegion: Started memstore flush for 
ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.,
 current region memstore size 28.0 K
2016-07-25 08:24:36,137 INFO  [regionserver60020-splits-1466239518933] 
regionserver.DefaultStoreFlusher: Flushed, sequenceid=15546530, memsize=28.0 K, 
hasBloomFilter=true, into tmp file 
hdfs://suninghadoop2/hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.tmp/6ee8bb3e4c0a4af591f94a163b272f5f
2016-07-25 08:24:36,590 INFO  [regionserver60020-splits-1466239518933] 
regionserver.HStore: Added 
hdfs://suninghadoop2/hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/noverison/6ee8bb3e4c0a4af591f94a163b272f5f,
 entries=24, sequenceid=15546530, filesize=25.9 K
2016-07-25 08:24:36,591 INFO  [regionserver60020-splits-1466239518933] 
regionserver.HRegion: Finished memstore flush of ~28.0 K/28624, currentsize=0/0 
for region 
ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.
 in 5652ms, sequenceid=15546530, compaction requested=true
2016-07-25 08:24:36,647 INFO  
[StoreCloserThread-ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.-1]
 regionserver.HStore: Closed noverison
2016-07-25 08:24:36,647 INFO  [regionserver60020-splits-1466239518933] 
regionserver.HRegion: Closed 
ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.
2016-07-25 08:24:43,264 INFO  [StoreFileSplitter-0] hdfs.DFSClient: Could not 
complete 
/hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.splits/e142341c56805aed68d3f99bae3e14f3/noverison/14126e7af90e4d4cbcbdc45d98e130d0.b318fc37c2aac4705007200cc454e7fa
 retrying...
2016-07-25 08:24:47,842 INFO  [StoreFileSplitter-0] hdfs.DFSClient: Could not 
complete 
/hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.splits/e142341c56805aed68d3f99bae3e14f3/noverison/14126e7af90e4d4cbcbdc45d98e130d0.b318fc37c2aac4705007200cc454e7fa
 retrying...
2016-07-25 08:24:47,842 INFO  [StoreFileSplitter-0] hdfs.DFSClient: Could not 
complete 
/hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.splits/e142341c56805aed68d3f99bae3e14f3/noverison/14126e7af90e4d4cbcbdc45d98e130d0.b318fc37c2aac4705007200cc454e7fa
 retrying...
2016-07-25 08:24:55,334 INFO  [StoreFileSplitter-0] hdfs.DFSClient: Could not 
complete 
/hbase/data/ns_spider/crawl_task_exception_detail/b318fc37c2aac4705007200cc454e7fa/.splits/e142341c56805aed68d3f99bae3e14f3/noverison/14126e7af90e4d4cbcbdc45d98e130d0.b318fc37c2aac4705007200cc454e7fa
 retrying...
2016-07-25 08:25:11,257 INFO  [regionserver60020-splits-1466239518933] 
regionserver.SplitRequest: Running rollback/cleanup of failed split of 
ns_spider:crawl_task_exception_detail,8\xFF\xE3\x0D\x00\x00\x00\x00,1463915449300.b318fc37c2aac4705007200cc454e7fa.;
 Took too long to split the files and create the references, aborting split
java.io.IOException: Took too long to split the files and create the 
references, aborting split
at 
org.apache.hadoop.hbase.regionserver.SplitTransaction.splitStoreFiles(SplitTransaction.java:825)
at 
org.apache.hadoop.hbase.regionserver.SplitTransaction.stepsBeforePONR(SplitTransaction.java:429)
at 
org.apache.hadoop.hbase.regionserver.SplitTransaction.createDaughters(SplitTransaction.java:303)
at 
org.apache.hadoop.hbase.regionserver.SplitTransaction.execute(SplitTransaction.java:655)
at 
org.apache.hadoop.hbase.regionserver.SplitRequest.run(SplitRequest.java:84)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

2016-07-25 08:25:12,436 INFO  [StoreOpener-b318fc37c2aac4705007200cc454e7fa-1] 
compactions.CompactionConfiguration: size [134217728, 9223372036854775807); 
files [3, 10); ratio 1.20; off-peak ratio 5.00; throttle point 
2684354560; major period 60480, major jitter 0.50
2016-07-25 08:25:16,461 INFO  

[jira] [Created] (HBASE-16282) java.io.IOException: Took too long to split the files and create the references, aborting split

2016-07-25 Thread dcswinner (JIRA)
dcswinner created HBASE-16282:
-

 Summary: java.io.IOException: Took too long to split the files and 
create the references, aborting split
 Key: HBASE-16282
 URL: https://issues.apache.org/jira/browse/HBASE-16282
 Project: HBase
  Issue Type: Bug
  Components: master, regionserver
Affects Versions: 0.98.8
Reporter: dcswinner








[jira] [Commented] (HBASE-16278) Use ConcurrentHashMap instead of ConcurrentSkipListMap if possible

2016-07-25 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391520#comment-15391520
 ] 

Heng Chen commented on HBASE-16278:
---

I don't think using {{ByteArrayWrapper}} as [~Apache9] suggested above is 
essentially different from the {{ConcurrentHashByteArrayMap}} which [~ikeda] 
uploaded. It seems {{ConcurrentHashByteArrayMap}} wraps the byte[] internally, 
while [~Apache9] does it explicitly outside the CHM.

> Use ConcurrentHashMap instead of ConcurrentSkipListMap if possible
> --
>
> Key: HBASE-16278
> URL: https://issues.apache.org/jira/browse/HBASE-16278
> Project: HBase
>  Issue Type: Improvement
>Reporter: Duo Zhang
> Attachments: ConcurrentHashByteArrayMap.java
>
>
> SSD and 10G networking make our system CPU bound again, so the speed of 
> memory-only code paths becomes more and more important.
> In HBase, if we want to use byte[] as a map key, we always use CSLM even 
> when we do not need the map to be ordered. I know this saves one object 
> allocation, since we cannot use byte[] directly as a CHM key. But we all 
> know that CHM is faster than CSLM, so I wonder whether it is worth sticking 
> with CSLM over CHM just to avoid one extra object allocation.
> Then I wrote a simple JMH micro-benchmark to test the performance of CHM 
> and CSLM. The code can be found here:
> https://github.com/Apache9/microbench
> It turns out that CHM is still much faster than CSLM even with the one 
> extra object allocation.
> So I think we should always use CHM if we do not need the keys to be sorted.
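The one-extra-allocation trade-off can be sketched with a thin wrapper key, since byte[] uses identity equality and cannot key a CHM directly (the wrapper below is an illustration of the idea, not the uploaded ConcurrentHashByteArrayMap):

```java
import java.util.Arrays;
import java.util.concurrent.ConcurrentHashMap;

public class ChmByteKeyDemo {
    // One wrapper allocation per operation buys CHM's hash-based access in
    // place of CSLM's ordered skip-list traversal.
    static final class ByteArrayWrapper {
        private final byte[] data;
        ByteArrayWrapper(byte[] data) { this.data = data; }
        @Override public boolean equals(Object o) {
            return o instanceof ByteArrayWrapper
                && Arrays.equals(data, ((ByteArrayWrapper) o).data);
        }
        @Override public int hashCode() { return Arrays.hashCode(data); }
    }

    static final ConcurrentHashMap<ByteArrayWrapper, Long> seq =
        new ConcurrentHashMap<>();

    static Long put(byte[] row, long v) {
        return seq.put(new ByteArrayWrapper(row), v);
    }
    static Long get(byte[] row) {
        return seq.get(new ByteArrayWrapper(row));
    }

    public static void main(String[] args) {
        put("row-1".getBytes(), 42L);
        System.out.println(get("row-1".getBytes())); // 42
    }
}
```

Unlike CSLM, this gives up ordered iteration over keys, which is exactly the condition stated above for preferring CHM.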





[jira] [Updated] (HBASE-16014) Get and Put constructor argument lists are divergent

2016-07-25 Thread brandboat (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

brandboat updated HBASE-16014:
--
Attachment: HBASE-16014_v0.patch

> Get and Put constructor argument lists are divergent
> 
>
> Key: HBASE-16014
> URL: https://issues.apache.org/jira/browse/HBASE-16014
> Project: HBase
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: brandboat
> Attachments: HBASE-16014_v0.patch
>
>
> The API for constructing Get and Put objects for a specific rowkey is quite 
> different. 
> [Put|http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Put.html#constructor_summary]
>  supports many more variations for specifying the target rowkey and timestamp 
> compared to 
> [Get|http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html#constructor_summary].
>  Notably lacking are {{Get(byte[], int, int)}} and {{Get(ByteBuffer)}} 
> variations.




