[jira] [Commented] (HBASE-17231) Region#getCellCompartor sp?

2016-12-01 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714370#comment-15714370
 ] 

Anoop Sam John commented on HBASE-17231:


Ah...  That is bad!
But please note that Region is not a private interface but is exposed to CPs ( 
@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC) ).
Do we need a deprecation path? Or maybe deprecate in branch-1 and remove in master?
cc [~saint@gmail.com], [~enis]

> Region#getCellCompartor sp?
> ---
>
> Key: HBASE-17231
> URL: https://issues.apache.org/jira/browse/HBASE-17231
> Project: HBase
>  Issue Type: Bug
>Reporter: John Leach
>Assignee: John Leach
>Priority: Trivial
> Attachments: HBASE-17231.patch
>
>
> Region#getCellCompartor -> Region#getCellComparator



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17191) Make use of UnsafeByteOperations#unsafeWrap(ByteBuffer buffer) in PBUtil#toCell(Cell cell)

2016-12-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714364#comment-15714364
 ] 

Hadoop QA commented on HBASE-17191:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m 56s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 6s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 102m 26s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
50s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 152m 18s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841410/HBASE-17191_1.patch |
| JIRA Issue | HBASE-17191 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 6c08acf5ab4e 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 00b3024 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4753/testReport/ |
| modules | C: hbase-client hbase-server U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4753/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HBASE-17233) See if we should replace System.arrayCopy with Arrays.copyOfRange

2016-12-01 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714362#comment-15714362
 ] 

Anoop Sam John commented on HBASE-17233:


Yep, noticed this.
Can you do a JMH test?

> See if we should replace System.arrayCopy with Arrays.copyOfRange
> -
>
> Key: HBASE-17233
> URL: https://issues.apache.org/jira/browse/HBASE-17233
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>
> Just saw this interesting comment in PB code. Since we deal with byte[] 
> extensively (when we are onheap) we do a lot of copies too.
> {code}
> * One of the noticeable costs of copying a byte[] into a new array using
> * {@code System.arraycopy} is nullification of a new buffer before the copy.
> * It has been shown the Hotspot VM is capable to intrinsify the
> * {@code Arrays.copyOfRange} operation to avoid this expensive nullification
> * and provide substantial performance gain. Unfortunately this does not hold
> * on Android runtimes and could make the copy slightly slower due to
> * additional code in the {@code Arrays.copyOfRange}.
> {code}
> So since we are on the Hotspot VM, we could see if the places where we use 
> System.arraycopy can be replaced with Arrays.copyOfRange.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
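[Editor's note: a minimal standalone sketch of the trade-off quoted above, not the HBASE-17233 patch itself. The class and method names are illustrative; both paths yield identical bytes, and the difference the PB comment describes (Hotspot intrinsifying away the redundant zero-fill) would need a JMH benchmark, as suggested in the comment, to measure.]

```java
import java.util.Arrays;

public class CopyDemo {
    // "Manual" path: allocate a fresh (zero-filled) array, then copy into it.
    public static byte[] manualCopy(byte[] src, int from, int to) {
        byte[] dst = new byte[to - from];              // JVM zero-fills dst here
        System.arraycopy(src, from, dst, 0, to - from);
        return dst;
    }

    // Same range via Arrays.copyOfRange, which Hotspot can intrinsify
    // so the fresh buffer is not redundantly nullified before the copy.
    public static byte[] rangeCopy(byte[] src, int from, int to) {
        return Arrays.copyOfRange(src, from, to);
    }

    public static void main(String[] args) {
        byte[] src = {1, 2, 3, 4, 5};
        // Both produce the same contents; only the allocation path differs.
        System.out.println(Arrays.equals(manualCopy(src, 1, 4), rangeCopy(src, 1, 4)));
    }
}
```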


[jira] [Commented] (HBASE-17235) Improvement in creation of CIS for onheap buffer cases

2016-12-01 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714359#comment-15714359
 ] 

Anoop Sam John commented on HBASE-17235:


This is not so minor IMO. :-)  So I changed the subject and priority. I noticed 
this but was wondering why PB did not expose it. Thought of checking this later 
but just forgot. Thanks for the nice catch.

> Improvement in creation of CIS for onheap buffer cases
> --
>
> Key: HBASE-17235
> URL: https://issues.apache.org/jira/browse/HBASE-17235
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-17235.patch
>
>
> {code}
>   if (buf.hasArray()) {
> cis = CodedInputStream.newInstance(buf.array(), offset, buf.limit());
>   } else {
> {code}
> Currently we do this for onheap buffers in case there is no reservoir or the 
> size is less than the minSizeforReservoir. I could see that even if the 
> reservoir is there, there are requests which go with the above way of creating 
> CIS. This could be made more efficient, avoiding underlying copies, by just doing this:
> {code}
> cis = UnsafeByteOperations.unsafeWrap(buf.array(), offset, 
> buf.limit()).newCodedInput();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17235) Minor improvement in creation of CIS for onheap buffer cases

2016-12-01 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-17235:
---
Priority: Major  (was: Minor)

> Minor improvement in creation of CIS for onheap buffer cases
> 
>
> Key: HBASE-17235
> URL: https://issues.apache.org/jira/browse/HBASE-17235
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-17235.patch
>
>
> {code}
>   if (buf.hasArray()) {
> cis = CodedInputStream.newInstance(buf.array(), offset, buf.limit());
>   } else {
> {code}
> Currently we do this for onheap buffers in case there is no reservoir or the 
> size is less than the minSizeforReservoir. I could see that even if the 
> reservoir is there, there are requests which go with the above way of creating 
> CIS. This could be made more efficient, avoiding underlying copies, by just doing this:
> {code}
> cis = UnsafeByteOperations.unsafeWrap(buf.array(), offset, 
> buf.limit()).newCodedInput();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17235) Improvement in creation of CIS for onheap buffer cases

2016-12-01 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-17235:
---
Summary: Improvement in creation of CIS for onheap buffer cases  (was: 
Minor improvement in creation of CIS for onheap buffer cases)

> Improvement in creation of CIS for onheap buffer cases
> --
>
> Key: HBASE-17235
> URL: https://issues.apache.org/jira/browse/HBASE-17235
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-17235.patch
>
>
> {code}
>   if (buf.hasArray()) {
> cis = CodedInputStream.newInstance(buf.array(), offset, buf.limit());
>   } else {
> {code}
> Currently we do this for onheap buffers in case there is no reservoir or the 
> size is less than the minSizeforReservoir. I could see that even if the 
> reservoir is there, there are requests which go with the above way of creating 
> CIS. This could be made more efficient, avoiding underlying copies, by just doing this:
> {code}
> cis = UnsafeByteOperations.unsafeWrap(buf.array(), offset, 
> buf.limit()).newCodedInput();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17235) Minor improvement in creation of CIS for onheap buffer cases

2016-12-01 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714354#comment-15714354
 ] 

Anoop Sam John commented on HBASE-17235:


That looks better, ya. I was wondering why the new boolean-based static 
creator is not exposed as public. Ya, the all-Unsafe way (unsafe if you are not 
sure whether your backing data structure is immutable) done via Unsafe*** is better.  
We can do the fix in ByteInput via a new jira, as that has to patch PB.
On the patch you have to call cis.enableAliasing(true); as well. Only then will it 
avoid copying.
Now the if and else blocks both need this enableAliasing call, so put it outside.
Can fix that on commit. +1

> Minor improvement in creation of CIS for onheap buffer cases
> 
>
> Key: HBASE-17235
> URL: https://issues.apache.org/jira/browse/HBASE-17235
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17235.patch
>
>
> {code}
>   if (buf.hasArray()) {
> cis = CodedInputStream.newInstance(buf.array(), offset, buf.limit());
>   } else {
> {code}
> Currently we do this for onheap buffers in case there is no reservoir or the 
> size is less than the minSizeforReservoir. I could see that even if the 
> reservoir is there, there are requests which go with the above way of creating 
> CIS. This could be made more efficient, avoiding underlying copies, by just doing this:
> {code}
> cis = UnsafeByteOperations.unsafeWrap(buf.array(), offset, 
> buf.limit()).newCodedInput();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17235) Minor improvement in creation of CIS for onheap buffer cases

2016-12-01 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-17235:
---
Fix Version/s: 2.0.0

> Minor improvement in creation of CIS for onheap buffer cases
> 
>
> Key: HBASE-17235
> URL: https://issues.apache.org/jira/browse/HBASE-17235
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-17235.patch
>
>
> {code}
>   if (buf.hasArray()) {
> cis = CodedInputStream.newInstance(buf.array(), offset, buf.limit());
>   } else {
> {code}
> Currently we do this for onheap buffers in case there is no reservoir or the 
> size is less than the minSizeforReservoir. I could see that even if the 
> reservoir is there, there are requests which go with the above way of creating 
> CIS. This could be made more efficient, avoiding underlying copies, by just doing this:
> {code}
> cis = UnsafeByteOperations.unsafeWrap(buf.array(), offset, 
> buf.limit()).newCodedInput();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17232) Replace HashSet with ArrayList to accumulate delayed scanners in KVHeap and StoreScanner.

2016-12-01 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-17232:
---
Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

> Replace HashSet with ArrayList to accumulate delayed scanners in KVHeap and 
> StoreScanner.
> -
>
> Key: HBASE-17232
> URL: https://issues.apache.org/jira/browse/HBASE-17232
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-17232.patch
>
>
> HashSet is slower than ArrayList and also generates more garbage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17232) Replace HashSet with ArrayList

2016-12-01 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714338#comment-15714338
 ] 

Anoop Sam John commented on HBASE-17232:


Looks ok.
We are sure we won't be adding duplicate scanners in any case. The Set might 
have been selected to make sure there is no duplicate addition, but I believe 
there is no chance of duplicates at all. +1 if QA is fine. Thanks.

> Replace HashSet with ArrayList
> --
>
> Key: HBASE-17232
> URL: https://issues.apache.org/jira/browse/HBASE-17232
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-17232.patch
>
>
> HashSet is slower than ArrayList and also generates more garbage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
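[Editor's note: an illustrative stdlib-only sketch of the trade-off discussed above, not the HBASE-17232 patch. When the caller guarantees no scanner is added twice, an ArrayList yields the same contents as a HashSet without per-element hashing or hash-table entry garbage; strings stand in for scanner objects here.]

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DelayedScanners {
    // ArrayList path: plain O(1) appends, no hashing, one backing array.
    public static List<String> accumulateList(String[] scanners) {
        List<String> delayed = new ArrayList<>();
        for (String s : scanners) {
            delayed.add(s);
        }
        return delayed;
    }

    // HashSet path: each add hashes the element and allocates a table entry;
    // the dedup it buys is wasted if duplicates cannot occur.
    public static Set<String> accumulateSet(String[] scanners) {
        Set<String> delayed = new HashSet<>();
        for (String s : scanners) {
            delayed.add(s);
        }
        return delayed;
    }

    public static void main(String[] args) {
        String[] scanners = {"s1", "s2", "s3"};
        // With unique inputs, both structures end up with the same elements.
        System.out.println(accumulateList(scanners).size());
        System.out.println(accumulateSet(scanners).size());
    }
}
```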


[jira] [Updated] (HBASE-17235) Minor improvement in creation of CIS for onheap buffer cases

2016-12-01 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-17235:
---
Attachment: HBASE-17235.patch

Simple patch.
I think we could do the same for the ByteBuffInput also. Instead of exposing 
newInstance(ByteInput, boolean) in CIS, we could just add a 
UnsafeByteOperations#wrap(ByteInput, offset, len), call that, and do a 
#newCodedInput() over it. So internally we return an immutable version of the 
ByteInput only. This way we can avoid exposing CIS#newInstance(ByteInput) and 
can keep it package private, as done in COS. What do others think, 
[~anoopsamjohn] and [~saint@gmail.com]?

> Minor improvement in creation of CIS for onheap buffer cases
> 
>
> Key: HBASE-17235
> URL: https://issues.apache.org/jira/browse/HBASE-17235
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Attachments: HBASE-17235.patch
>
>
> {code}
>   if (buf.hasArray()) {
> cis = CodedInputStream.newInstance(buf.array(), offset, buf.limit());
>   } else {
> {code}
> Currently we do this for onheap buffers in case there is no reservoir or the 
> size is less than the minSizeforReservoir. I could see that even if the 
> reservoir is there, there are requests which go with the above way of creating 
> CIS. This could be made more efficient, avoiding underlying copies, by just doing this:
> {code}
> cis = UnsafeByteOperations.unsafeWrap(buf.array(), offset, 
> buf.limit()).newCodedInput();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17232) Replace HashSet with ArrayList to accumulate delayed scanners in KVHeap and StoreScanner.

2016-12-01 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-17232:
---
Summary: Replace HashSet with ArrayList to accumulate delayed scanners in 
KVHeap and StoreScanner.  (was: Replace HashSet with ArrayList to accumulate 
delayed scanners in KVHeap and )

> Replace HashSet with ArrayList to accumulate delayed scanners in KVHeap and 
> StoreScanner.
> -
>
> Key: HBASE-17232
> URL: https://issues.apache.org/jira/browse/HBASE-17232
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-17232.patch
>
>
> HashSet is slower than ArrayList and also generates more garbage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17232) Replace HashSet with ArrayList to accumulate delayed scanners in KVHeap and

2016-12-01 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-17232:
---
Summary: Replace HashSet with ArrayList to accumulate delayed scanners in 
KVHeap and   (was: Replace HashSet with ArrayList)

> Replace HashSet with ArrayList to accumulate delayed scanners in KVHeap and 
> 
>
> Key: HBASE-17232
> URL: https://issues.apache.org/jira/browse/HBASE-17232
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-17232.patch
>
>
> HashSet is slower than ArrayList and also generates more garbage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16119) Procedure v2 - Reimplement merge

2016-12-01 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-16119:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Procedure v2 - Reimplement merge
> 
>
> Key: HBASE-16119
> URL: https://issues.apache.org/jira/browse/HBASE-16119
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Region Assignment
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Stephen Yuan Jiang
> Fix For: 2.0.0
>
> Attachments: HBASE-16119.v1-master.patch, HBASE-16119.v2-master.patch
>
>
> use the proc-v2 state machine for merge. also update the logic to have a 
> single meta-writer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17172) Optimize major mob compaction with _del files

2016-12-01 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714331#comment-15714331
 ] 

huaxiang sun commented on HBASE-17172:
--

One more question for you, Jingcheng :). When the threshold is so big that the 
size of every mob file is less than the threshold, and there are _del files, 
the minor mob compaction actually turns into a major mob compaction. What is 
the reason behind this design? Since the threshold is a user-configurable 
variable, a user may choose to configure a large value and turn the mob 
compaction into a major one; if there are _del files, compaction will take 
longer than expected. Thinking about compacting mob files with _del files only 
for the major_mob_compact case, so the user is aware of what is going to happen. 
Comments? Thanks.

> Optimize major mob compaction with _del files
> -
>
> Key: HBASE-17172
> URL: https://issues.apache.org/jira/browse/HBASE-17172
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>
> Today, when there is a _del file in mobdir, with major mob compaction every 
> mob file will be recompacted; this causes lots of IO and slows down major mob 
> compaction (it may take months to finish). This needs to be improved. A few 
> ideas are: 
> 1) Do not compact all _del files into one; instead, compact them based on 
> groups with startKey as the key. Then use the firstKey/startKey of each mob 
> file to see if the _del file needs to be included for that partition.
> 2) Based on the timerange of the _del file, compaction for files after that 
> timerange does not need to include the _del file, as these are newer files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17234) Allow alternate Readers/Writers; currently hardcoded

2016-12-01 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17234:
--
Attachment: HBASE-17234.master.001.patch

> Allow alternate Readers/Writers; currently hardcoded
> 
>
> Key: HBASE-17234
> URL: https://issues.apache.org/jira/browse/HBASE-17234
> Project: HBase
>  Issue Type: Task
>  Components: io
>Reporter: stack
> Attachments: HBASE-17234.master.001.patch
>
>
> Allow alternate HFile Reader and Writers. For Writers, we have WriterFactory 
> so you'd think it possible to supply a different Writer but in actuality, 
> WriterFactory is hardcoded.
> The read side does something else altogether, complicated by the fact that the 
> Reader presumes a trailer and that it has to take a Stream.
> Yeah, expecting someone would provide their own Reader/Writer is a little 
> unexpected but



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17235) Minor improvement in creation of CIS for onheap buffer cases

2016-12-01 Thread ramkrishna.s.vasudevan (JIRA)
ramkrishna.s.vasudevan created HBASE-17235:
--

 Summary: Minor improvement in creation of CIS for onheap buffer 
cases
 Key: HBASE-17235
 URL: https://issues.apache.org/jira/browse/HBASE-17235
 Project: HBase
  Issue Type: Improvement
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor


{code}
  if (buf.hasArray()) {
cis = CodedInputStream.newInstance(buf.array(), offset, buf.limit());
  } else {
{code}
Currently we do this for onheap buffers in case there is no reservoir or the 
size is less than the minSizeforReservoir. I could see that even if the 
reservoir is there, there are requests which go with the above way of creating 
CIS. This could be made more efficient, avoiding underlying copies, by just doing this:
{code}
cis = UnsafeByteOperations.unsafeWrap(buf.array(), offset, 
buf.limit()).newCodedInput();
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17234) Allow alternate Readers/Writers; currently hardcoded

2016-12-01 Thread stack (JIRA)
stack created HBASE-17234:
-

 Summary: Allow alternate Readers/Writers; currently hardcoded
 Key: HBASE-17234
 URL: https://issues.apache.org/jira/browse/HBASE-17234
 Project: HBase
  Issue Type: Task
  Components: io
Reporter: stack


Allow alternate HFile Reader and Writers. For Writers, we have WriterFactory so 
you'd think it possible to supply a different Writer but in actuality, 
WriterFactory is hardcoded.

The read side does something else altogether, complicated by the fact that the 
Reader presumes a trailer and that it has to take a Stream.

Yeah, expecting someone would provide their own Reader/Writer is a little 
unexpected but



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17233) See if we should replace System.arrayCopy with Arrays.copyOfRange

2016-12-01 Thread ramkrishna.s.vasudevan (JIRA)
ramkrishna.s.vasudevan created HBASE-17233:
--

 Summary: See if we should replace System.arrayCopy with 
Arrays.copyOfRange
 Key: HBASE-17233
 URL: https://issues.apache.org/jira/browse/HBASE-17233
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan


Just saw this interesting comment in PB code. Since we deal with byte[] 
extensively (when we are onheap) we do a lot of copies too.
{code}
* One of the noticeable costs of copying a byte[] into a new array using
* {@code System.arraycopy} is nullification of a new buffer before the copy.
* It has been shown the Hotspot VM is capable to intrinsify the
* {@code Arrays.copyOfRange} operation to avoid this expensive nullification
* and provide substantial performance gain. Unfortunately this does not hold
* on Android runtimes and could make the copy slightly slower due to
* additional code in the {@code Arrays.copyOfRange}.
{code}
So since we are on the Hotspot VM, we could see if the places where we use 
System.arraycopy can be replaced with Arrays.copyOfRange.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15437) Response size calculated in RPCServer for warning tooLarge responses does NOT count CellScanner payload

2016-12-01 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714322#comment-15714322
 ] 

Anoop Sam John commented on HBASE-15437:


Oh, that makes sense. I did not think that Call is within RpcServer. Ya, that 
will be bad.
Also, we have Call on both the client side and the server side. We need a 
better name for the interface you make.

> Response size calculated in RPCServer for warning tooLarge responses does NOT 
> count CellScanner payload
> ---
>
> Key: HBASE-15437
> URL: https://issues.apache.org/jira/browse/HBASE-15437
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: deepankar
>Assignee: deepankar
> Attachments: HBASE-15437-v1.patch, HBASE-15437-v2.patch, 
> HBASE-15437.patch
>
>
> After HBASE-13158, where we respond back to RPCs with cells in the payload, 
> the protobuf response will just have the count of the cells to read from the 
> payload. But there is a set of features where we log a warning in RPCServer 
> whenever the response is tooLarge, and this size now does not consider the 
> sizes of the cells in the PayloadCellScanner. Code from RPCServer:
> {code}
>   long responseSize = result.getSerializedSize();
>   // log any RPC responses that are slower than the configured warn
>   // response time or larger than configured warning size
>   boolean tooSlow = (processingTime > warnResponseTime && 
> warnResponseTime > -1);
>   boolean tooLarge = (responseSize > warnResponseSize && warnResponseSize 
> > -1);
>   if (tooSlow || tooLarge) {
> // when tagging, we let TooLarge trump TooSmall to keep output simple
> // note that large responses will often also be slow.
> logResponse(new Object[]{param},
> md.getName(), md.getName() + "(" + param.getClass().getName() + 
> ")",
> (tooLarge ? "TooLarge" : "TooSlow"),
> status.getClient(), startTime, processingTime, qTime,
> responseSize);
>   }
> {code}
> Should this feature not be supported any more, or should we add a method to 
> CellScanner or a new interface which returns the serialized size (though this 
> might not account for the compression codecs which might be used during the 
> response)? Any other idea how this could be fixed?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
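[Editor's note: the fix under discussion amounts to adding the cell-block payload size into responseSize before the warn checks. A stdlib-only sketch of that bookkeeping follows; the method name and thresholds are illustrative, not HBase's actual RpcServer code.]

```java
public class ResponseSizeCheck {
    // Combine the PB message size with the cell payload size before
    // comparing against the configured warning threshold (-1 disables it),
    // mirroring the tooLarge check quoted in the issue description.
    public static boolean tooLarge(long pbSize, long cellPayloadSize, long warnResponseSize) {
        long responseSize = pbSize + cellPayloadSize;  // count the payload too
        return warnResponseSize > -1 && responseSize > warnResponseSize;
    }

    public static void main(String[] args) {
        // PB size alone (100) is under the 1024 threshold, but with a
        // 2000-byte cell block the response should be flagged.
        System.out.println(tooLarge(100, 0, 1024));
        System.out.println(tooLarge(100, 2000, 1024));
    }
}
```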


[jira] [Commented] (HBASE-16859) Use Bytebuffer pool for non java clients specifically for scans/gets

2016-12-01 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714317#comment-15714317
 ] 

ramkrishna.s.vasudevan commented on HBASE-16859:


I did not find any significant perf difference E2E by doing the above two. I 
think we can avoid exposing newInstance(ByteInput, boolean) in an indirect 
way.

> Use Bytebuffer pool for non java clients specifically for scans/gets
> 
>
> Key: HBASE-16859
> URL: https://issues.apache.org/jira/browse/HBASE-16859
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-16859_V1.patch, HBASE-16859_V2.patch, 
> HBASE-16859_V2.patch, HBASE-16859_V4.patch
>
>
> In case of non-Java clients we still write the results and header into an 
> on-demand byte[]. This can be changed to use the BBPool (onheap or offheap 
> buffer?).
> But the basic problem is to identify if the response is for scans/gets. 
> - One easy way to do it is to use the MethodDescriptor per Call and use the 
> name of the MethodDescriptor to identify that it is a scan/get. But this will 
> pollute RpcServer by checking for scan/get type responses.
> - Another way is to always set the result to the cellScanner, but we know that 
> isClientCellBlockSupported is going to be false for non-PB clients. So ignore 
> the cellScanner and go ahead with the results in PB. But this is not clean.
> - The third one is that we already have an RpcCallContext being passed to the RS. 
> In case of scans/gets/multiGets we already set an RpcCallback for the shipped call. 
> So here, on response, we can check if the callback is not null and check 
> isClientCellBlockSupported. In this case we can get the BB from the pool and 
> write the result and header to that BB. Maybe this looks clean?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17232) Replace HashSet with ArrayList

2016-12-01 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-17232:
-
Attachment: HBASE-17232.patch

> Replace HashSet with ArrayList
> --
>
> Key: HBASE-17232
> URL: https://issues.apache.org/jira/browse/HBASE-17232
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-17232.patch
>
>
> HashSet is slower than ArrayList and also generates more garbage.





[jira] [Created] (HBASE-17232) Replace HashSet with ArrayList

2016-12-01 Thread binlijin (JIRA)
binlijin created HBASE-17232:


 Summary: Replace HashSet with ArrayList
 Key: HBASE-17232
 URL: https://issues.apache.org/jira/browse/HBASE-17232
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: binlijin
Assignee: binlijin
 Fix For: 2.0.0
 Attachments: HBASE-17232.patch

HashSet is slower than ArrayList and also generates more garbage.
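The claim above applies to collections that are only filled and iterated: an ArrayList avoids per-element hashing and the Node objects a HashSet allocates. A hypothetical before/after sketch (not the actual patched HBase code) — note the caveat that HashSet de-duplicates while ArrayList does not:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;

// Illustrative before/after for the HashSet -> ArrayList replacement.
public class CollectionChoice {
    // Before: a HashSet used purely for accumulation and iteration.
    public static int sumViaHashSet(int[] values) {
        HashSet<Integer> set = new HashSet<>();
        for (int v : values) set.add(v);      // hashes each element, allocates a Node
        int sum = 0;
        for (int v : set) sum += v;
        return sum;
    }

    // After: an ArrayList gives the same result with less garbage,
    // provided the caller does not rely on de-duplication.
    public static int sumViaArrayList(int[] values) {
        List<Integer> list = new ArrayList<>(values.length);
        for (int v : values) list.add(v);     // simple append, no hashing
        int sum = 0;
        for (int v : list) sum += v;
        return sum;
    }
}
```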





[jira] [Updated] (HBASE-17191) Make use of UnsafeByteOperations#unsafeWrap(ByteBuffer buffer) in PBUtil#toCell(Cell cell)

2016-12-01 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-17191:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the reviews.

> Make use of UnsafeByteOperations#unsafeWrap(ByteBuffer buffer) in 
> PBUtil#toCell(Cell cell)
> --
>
> Key: HBASE-17191
> URL: https://issues.apache.org/jira/browse/HBASE-17191
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-17191.patch, HBASE-17191_1.patch
>
>
> Since we now have support for BBs in 
> UnsafeByteOperations#unsafeWrap(ByteBuffer), for the non-java clients we 
> could avoid the copy to a temp array while creating the PB result. 
> Since we have support for writing to a BB in ByteOutput, having a result 
> backed by a ByteBuffer should be fine. 
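The zero-copy point can be illustrated with plain java.nio: wrapping shares the backing memory, while the old path copied into a temp array. Protobuf's UnsafeByteOperations#unsafeWrap applies the same idea when building a ByteString (with the caveat that the wrapped memory must not be mutated afterwards). The class below is a sketch with hypothetical names, not the actual PBUtil code:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Wrap vs. copy: the wrapped buffer shares the cell's bytes, the copied
// buffer pays an allocation and a memcpy per cell.
public class WrapVsCopy {
    public static ByteBuffer wrap(byte[] cellValue) {
        return ByteBuffer.wrap(cellValue);                                   // no copy
    }

    public static ByteBuffer copy(byte[] cellValue) {
        return ByteBuffer.wrap(Arrays.copyOf(cellValue, cellValue.length));  // extra garbage
    }
}
```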





[jira] [Commented] (HBASE-17172) Optimize major mob compaction with _del files

2016-12-01 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714293#comment-15714293
 ] 

Jingcheng Du commented on HBASE-17172:
--

bq. Can I create a new jira to address "Meanwhile, we can add more 
constraints, for example only perform compaction when there are more than 2 
mob files and _del files in a minor compaction"?
Sure, thanks!

> Optimize major mob compaction with _del files
> -
>
> Key: HBASE-17172
> URL: https://issues.apache.org/jira/browse/HBASE-17172
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>
> Today, when there is a _del file in mobdir, every mob file will be 
> recompacted in a major mob compaction; this causes lots of IO and slows down 
> the major mob compaction (it may take months to finish). This needs to be 
> improved. A few ideas are: 
> 1) Do not compact all _del files into one; instead, compact them based on 
> groups with startKey as the key. Then use the firstKey/startKey of each mob 
> file to see if the _del file needs to be included for this partition.
> 2) Based on the timerange of the _del file, compactions of files after that 
> timerange do not need to include the _del file, as these are newer files.
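Ideas 1) and 2) above amount to two cheap checks per mob file. The sketch below is illustrative only; the string key ranges and timestamp fields are assumptions, not the mob compactor's real data structures:

```java
// Hypothetical predicates deciding whether a _del file must be read when
// compacting one mob file (names and representations are illustrative).
public class DelFileFilter {
    // Idea 1): include the _del file only if its key range overlaps the
    // mob file's [firstKey, lastKey] range.
    public static boolean keyRangesOverlap(String mobFirstKey, String mobLastKey,
                                           String delStartKey, String delStopKey) {
        return mobFirstKey.compareTo(delStopKey) <= 0
            && delStartKey.compareTo(mobLastKey) <= 0;
    }

    // Idea 2): a mob file written entirely after the _del file's time range
    // cannot contain cells that the _del file deletes.
    public static boolean newerThanDelFile(long mobMinTimestamp, long delMaxTimestamp) {
        return mobMinTimestamp > delMaxTimestamp;
    }
}
```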





[jira] [Updated] (HBASE-16841) Data loss in MOB files after cloning a snapshot and deleting that snapshot

2016-12-01 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-16841:
-
Attachment: HBASE-16841-V6.patch

Thanks [~mbertozzi].
Uploaded a new patch V6 according to the comments. Is this one good to go?

> Data loss in MOB files after cloning a snapshot and deleting that snapshot
> --
>
> Key: HBASE-16841
> URL: https://issues.apache.org/jira/browse/HBASE-16841
> Project: HBase
>  Issue Type: Bug
>  Components: mob, snapshots
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HBASE-16841-V2.patch, HBASE-16841-V3.patch, 
> HBASE-16841-V4.patch, HBASE-16841-V5.patch, HBASE-16841-V6.patch, 
> HBASE-16841.patch
>
>
> Running the following steps will probably lose MOB data when working with 
> snapshots.
> 1. Create a mob-enabled table by running create 't1', {NAME => 'f1', IS_MOB 
> => true, MOB_THRESHOLD => 0}.
> 2. Put millions of rows of data.
> 3. Run {{snapshot 't1','t1_snapshot'}} to take a snapshot of this table t1.
> 4. Run {{clone_snapshot 't1_snapshot','t1_cloned'}} to clone this snapshot.
> 5. Run {{delete_snapshot 't1_snapshot'}} to delete this snapshot.
> 6. Run {{disable 't1'}} and {{delete 't1'}} to delete the table.
> 7. Now go to the archive directory of t1; the number of .link directories is 
> different from the number of hfiles, which means some data will be lost after 
> the hfile cleaner runs.
> This is because, when taking a snapshot on an enabled mob table, each region 
> flushes itself and takes a snapshot, and the mob snapshot is taken only if 
> the current region is the first region of the table. At that time, the flushing 
> of some regions might not be finished, and some mob files are not flushed to 
> disk yet. Eventually some mob files are not recorded in the snapshot manifest.
> To solve this, we need to take the mob snapshot last, after the snapshots 
> on all the online and offline regions are finished in 
> {{EnabledTableSnapshotHandler}}.





[jira] [Updated] (HBASE-17231) Region#getCellCompartor sp?

2016-12-01 Thread John Leach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Leach updated HBASE-17231:
---
Priority: Trivial  (was: Major)

> Region#getCellCompartor sp?
> ---
>
> Key: HBASE-17231
> URL: https://issues.apache.org/jira/browse/HBASE-17231
> Project: HBase
>  Issue Type: Bug
>Reporter: John Leach
>Assignee: John Leach
>Priority: Trivial
> Attachments: HBASE-17231.patch
>
>
> Region#getCellCompartor -> Region#getCellComparator





[jira] [Commented] (HBASE-17177) Compaction can break the region/row level atomic when scan even if we pass mvcc to client

2016-12-01 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714238#comment-15714238
 ] 

Duo Zhang commented on HBASE-17177:
---

Oh I made a mistake... Even a minor compaction can reclaim the deleted 
cells; the difference with a major compaction is that it can also reclaim the 
delete marker itself...

So in general, we need to record a mvcc below which we may delete some cells 
and you may not read all the cells. And when a region is newly opened, we need 
to freeze this value for a small amount of time (maybe the scanner TTL as 
[~yangzhe1991] proposed above), either by disabling compaction or by setting 
KeepDeletedCells to true during compaction.

Thanks.

> Compaction can break the region/row level atomic when scan even if we pass 
> mvcc to client
> -
>
> Key: HBASE-17177
> URL: https://issues.apache.org/jira/browse/HBASE-17177
> Project: HBase
>  Issue Type: Sub-task
>  Components: scan
>Reporter: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
>
> We know that a major compaction will actually delete the cells which are 
> deleted by a delete marker. In order to give a consistent view to a scan, we 
> need to use a map to track the read points of all scanners for a region, and 
> the smallest one will be used for a compaction. For all delete markers whose 
> mvcc is greater than this value, we will not use them to delete other cells.
> The problem for a scan restart after a region move is that the new RS does 
> not have the information about the scanners opened at the old RS before the 
> client sends scan requests to the new RS, which means the read points map is 
> incomplete and the smallest read point may be greater than the correct value. 
> So if a major compaction happens at that time, it may delete some cells which 
> should be kept.
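The read-point rule described above reduces to a one-line predicate applied to each delete marker during compaction. The names below are illustrative, not HBase's actual compaction code:

```java
// Sketch of the rule: during compaction a delete marker may only suppress
// cells if its mvcc is not greater than the smallest read point across all
// open scanners for the region (names are hypothetical).
public class DeleteMarkerRule {
    public static boolean markerMayDeleteCells(long markerMvcc, long smallestReadPoint) {
        // A marker newer than the oldest scanner's view must stay inert;
        // otherwise that scanner could lose cells it is entitled to see.
        return markerMvcc <= smallestReadPoint;
    }
}
```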





[jira] [Updated] (HBASE-17177) Compaction can break the region/row level atomic when scan even if we pass mvcc to client

2016-12-01 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17177:
--
Summary: Compaction can break the region/row level atomic when scan even if 
we pass mvcc to client  (was: Major compaction can break the region/row level 
atomic when scan even if we pass mvcc to client)

> Compaction can break the region/row level atomic when scan even if we pass 
> mvcc to client
> -
>
> Key: HBASE-17177
> URL: https://issues.apache.org/jira/browse/HBASE-17177
> Project: HBase
>  Issue Type: Sub-task
>  Components: scan
>Reporter: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
>
> We know that a major compaction will actually delete the cells which are 
> deleted by a delete marker. In order to give a consistent view to a scan, we 
> need to use a map to track the read points of all scanners for a region, and 
> the smallest one will be used for a compaction. For all delete markers whose 
> mvcc is greater than this value, we will not use them to delete other cells.
> The problem for a scan restart after a region move is that the new RS does 
> not have the information about the scanners opened at the old RS before the 
> client sends scan requests to the new RS, which means the read points map is 
> incomplete and the smallest read point may be greater than the correct value. 
> So if a major compaction happens at that time, it may delete some cells which 
> should be kept.





[jira] [Updated] (HBASE-17231) Region#getCellCompartor sp?

2016-12-01 Thread John Leach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Leach updated HBASE-17231:
---
Status: Patch Available  (was: Open)

> Region#getCellCompartor sp?
> ---
>
> Key: HBASE-17231
> URL: https://issues.apache.org/jira/browse/HBASE-17231
> Project: HBase
>  Issue Type: Bug
>Reporter: John Leach
>Assignee: John Leach
> Attachments: HBASE-17231.patch
>
>
> Region#getCellCompartor -> Region#getCellComparator





[jira] [Updated] (HBASE-17231) Region#getCellCompartor sp?

2016-12-01 Thread John Leach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Leach updated HBASE-17231:
---
Attachment: HBASE-17231.patch

> Region#getCellCompartor sp?
> ---
>
> Key: HBASE-17231
> URL: https://issues.apache.org/jira/browse/HBASE-17231
> Project: HBase
>  Issue Type: Bug
>Reporter: John Leach
>Assignee: John Leach
> Attachments: HBASE-17231.patch
>
>
> Region#getCellCompartor -> Region#getCellComparator





[jira] [Created] (HBASE-17231) Region#getCellCompartor sp?

2016-12-01 Thread John Leach (JIRA)
John Leach created HBASE-17231:
--

 Summary: Region#getCellCompartor sp?
 Key: HBASE-17231
 URL: https://issues.apache.org/jira/browse/HBASE-17231
 Project: HBase
  Issue Type: Bug
Reporter: John Leach


Region#getCellCompartor -> Region#getCellComparator





[jira] [Assigned] (HBASE-17231) Region#getCellCompartor sp?

2016-12-01 Thread John Leach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Leach reassigned HBASE-17231:
--

Assignee: John Leach

> Region#getCellCompartor sp?
> ---
>
> Key: HBASE-17231
> URL: https://issues.apache.org/jira/browse/HBASE-17231
> Project: HBase
>  Issue Type: Bug
>Reporter: John Leach
>Assignee: John Leach
>
> Region#getCellCompartor -> Region#getCellComparator





[jira] [Commented] (HBASE-17181) Let HBase thrift2 support TThreadedSelectorServer

2016-12-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714225#comment-15714225
 ] 

Hudson commented on HBASE-17181:


FAILURE: Integrated in Jenkins build HBase-1.4 #557 (See 
[https://builds.apache.org/job/HBase-1.4/557/])
HBASE-17181 Let HBase thrift2 support TThreadedSelectorServer (zhangduo: rev 
55645c351e7a7a8656b5d7b5c2fe99efacd29b85)
* (edit) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java


> Let HBase thrift2 support TThreadedSelectorServer
> -
>
> Key: HBASE-17181
> URL: https://issues.apache.org/jira/browse/HBASE-17181
> Project: HBase
>  Issue Type: New Feature
>  Components: Thrift
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Jian Yi
>Assignee: Jian Yi
>Priority: Minor
>  Labels: features
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17181-V1.patch, HBASE-17181-V2.patch, 
> HBASE-17181-V3.patch, HBASE-17181-V4.patch, HBASE-17181-V5.patch, 
> HBASE-17181-V6.patch, ThriftServer.java
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Add TThreadedSelectorServer for HBase Thrift2





[jira] [Commented] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests

2016-12-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714224#comment-15714224
 ] 

Hudson commented on HBASE-16209:


FAILURE: Integrated in Jenkins build HBase-1.4 #557 (See 
[https://builds.apache.org/job/HBase-1.4/557/])
Addendum HBASE-16209: Add an ExponentialBackOffPolicy so that we spread 
(zhangduo: rev cbdc9fcb8a705f4e5ee28a917a335c6f1ef5df42)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java


> Provide an ExponentialBackOffPolicy sleep between failed region open requests
> -
>
> Key: HBASE-16209
> URL: https://issues.apache.org/jira/browse/HBASE-16209
> Project: HBase
>  Issue Type: Bug
>Reporter: Joseph
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16209-addendum.patch, 
> HBASE-16209-addendum.v6.branch-1.patch, 
> HBASE-16209-branch-1-addendum-v2.patch, HBASE-16209-branch-1-addendum.patch, 
> HBASE-16209-branch-1-v3.patch, HBASE-16209-branch-1-v4.patch, 
> HBASE-16209-branch-1-v5.patch, HBASE-16209-branch-1.patch, 
> HBASE-16209-v2.patch, HBASE-16209.patch
>
>
> Related to HBASE-16138. As of now we have no pause between retries of failed 
> region open requests, and with a low maximumAttempt default, we can quickly 
> use up all our regionOpen retries if the server is in a bad state. I added 
> an ExponentialBackOffPolicy so that we spread out the timing of our open 
> region retries in AssignmentManager. Review board at 
> https://reviews.apache.org/r/50011/
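A minimal sketch of the exponential backoff being described, assuming an initial delay doubled per attempt with a cap; the class, method, and parameter values are illustrative, not the committed defaults:

```java
// Hypothetical backoff computation between failed region-open retries:
// delay = initial * 2^attempt, capped so a long outage does not overflow.
public class RegionOpenBackoff {
    public static long delayMillis(int attempt, long initialDelayMs, long maxDelayMs) {
        long delay = initialDelayMs << Math.min(attempt, 30);  // cap the shift to avoid overflow
        return Math.min(delay, maxDelayMs);
    }
}
```

With an initial delay of 100 ms and a 60 s cap, successive attempts wait 100, 200, 400, 800, ... ms until the cap is reached.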





[jira] [Commented] (HBASE-17081) Flush the entire CompactingMemStore content to disk

2016-12-01 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714210#comment-15714210
 ] 

ramkrishna.s.vasudevan commented on HBASE-17081:


bq. I would prefer to leave this JIRA as is (and to commit it asap) and if you 
can take it please fix the snapshot scanner count everywhere.
OK, I can do this in a follow-up JIRA. I will leave it to others to review before 
this gets committed.

> Flush the entire CompactingMemStore content to disk
> ---
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17081-V01.patch, HBASE-17081-V02.patch, 
> HBASE-17081-V03.patch, HBASE-17081-V04.patch, HBASE-17081-V05.patch, 
> Pipelinememstore_fortrunk_3.patch
>
>
> Part of CompactingMemStore's memory is held by an active segment, and another 
> part is divided between immutable segments in the compacting pipeline. Upon a 
> flush-to-disk request we want to flush all of it to disk, in contrast to 
> flushing only the tail of the compacting pipeline.





[jira] [Commented] (HBASE-17172) Optimize major mob compaction with _del files

2016-12-01 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714196#comment-15714196
 ] 

huaxiang sun commented on HBASE-17172:
--

Is it possible to compact the _del files by region, and save the start keys 
and stop keys in memory for each partition to decide whether we need to compact?

That is one of the ideas for optimizing compaction with _del files. 

Can I create a new jira to address "Meanwhile, we can add more constraints, 
for example only perform compaction when there are more than 2 mob files and 
_del files in a minor compaction"?

Thanks!

> Optimize major mob compaction with _del files
> -
>
> Key: HBASE-17172
> URL: https://issues.apache.org/jira/browse/HBASE-17172
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>
> Today, when there is a _del file in mobdir, every mob file will be 
> recompacted in a major mob compaction; this causes lots of IO and slows down 
> the major mob compaction (it may take months to finish). This needs to be 
> improved. A few ideas are: 
> 1) Do not compact all _del files into one; instead, compact them based on 
> groups with startKey as the key. Then use the firstKey/startKey of each mob 
> file to see if the _del file needs to be included for this partition.
> 2) Based on the timerange of the _del file, compactions of files after that 
> timerange do not need to include the _del file, as these are newer files.





[jira] [Updated] (HBASE-17112) Prevent setting timestamp of delta operations the same as previous value's

2016-12-01 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-17112:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to branch-1.2 and branch-1.1. Let's resolved this issue first and add a 
sub-task to branch-1.3.

Thanks all for review.

> Prevent setting timestamp of delta operations the same as previous value's
> --
>
> Key: HBASE-17112
> URL: https://issues.apache.org/jira/browse/HBASE-17112
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.7, 0.98.23, 1.2.4
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.4.0, 1.2.5, 1.1.8
>
> Attachments: HBASE-17112-branch-1-v1.patch, 
> HBASE-17112-branch-1-v1.patch, HBASE-17112-branch-1.1-v1.patch, 
> HBASE-17112-branch-1.1-v1.patch, HBASE-17112-v1.patch, HBASE-17112-v2.patch, 
> HBASE-17112-v2.patch
>
>
> In delta operations (Increment and Append), we read the current value first 
> and then write the new whole result into the WAL as a Put with the current 
> timestamp. If the previous ts is larger than the current ts, we will use the 
> previous ts.
> If we have two Puts with the same TS, we will ignore the Put with the lower 
> sequence id. This is not friendly to versioning. And for replication we drop 
> the sequence id while writing to the peer cluster, so on the slave we don't 
> know the order in which they were written. If the pushing is disordered, the 
> result will be wrong.
> We can set the new ts to previous+1 if the previous is not less than now.
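The proposed fix amounts to a small timestamp rule. A sketch with hypothetical names (not the patched HBase code):

```java
// The new Put's timestamp for an Increment/Append must move strictly past
// the previous cell's timestamp, even when the previous timestamp is ahead
// of the wall clock, so two Puts never share a TS.
public class DeltaTimestamp {
    public static long nextTimestamp(long previousTs, long now) {
        // Normal case: use the wall clock. If the stored cell's ts is not
        // less than now, advance to previous+1 instead.
        return (previousTs < now) ? now : previousTs + 1;
    }
}
```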





[jira] [Created] (HBASE-17230) port HBASE-17112 to 1.3.1

2016-12-01 Thread Phil Yang (JIRA)
Phil Yang created HBASE-17230:
-

 Summary: port HBASE-17112 to 1.3.1
 Key: HBASE-17230
 URL: https://issues.apache.org/jira/browse/HBASE-17230
 Project: HBase
  Issue Type: Sub-task
Reporter: Phil Yang
Assignee: Phil Yang








[jira] [Updated] (HBASE-17112) Prevent setting timestamp of delta operations the same as previous value's

2016-12-01 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-17112:
--
Fix Version/s: 1.1.8
   1.2.5

> Prevent setting timestamp of delta operations the same as previous value's
> --
>
> Key: HBASE-17112
> URL: https://issues.apache.org/jira/browse/HBASE-17112
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.7, 0.98.23, 1.2.4
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.4.0, 1.2.5, 1.1.8
>
> Attachments: HBASE-17112-branch-1-v1.patch, 
> HBASE-17112-branch-1-v1.patch, HBASE-17112-branch-1.1-v1.patch, 
> HBASE-17112-branch-1.1-v1.patch, HBASE-17112-v1.patch, HBASE-17112-v2.patch, 
> HBASE-17112-v2.patch
>
>
> In delta operations (Increment and Append), we read the current value first 
> and then write the new whole result into the WAL as a Put with the current 
> timestamp. If the previous ts is larger than the current ts, we will use the 
> previous ts.
> If we have two Puts with the same TS, we will ignore the Put with the lower 
> sequence id. This is not friendly to versioning. And for replication we drop 
> the sequence id while writing to the peer cluster, so on the slave we don't 
> know the order in which they were written. If the pushing is disordered, the 
> result will be wrong.
> We can set the new ts to previous+1 if the previous is not less than now.





[jira] [Commented] (HBASE-17172) Optimize major mob compaction with _del files

2016-12-01 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714169#comment-15714169
 ] 

Jingcheng Du commented on HBASE-17172:
--

You are right. It is like this now.
We are now trying to avoid the unnecessary compactions, right? Is it possible to 
compact the _del files by region, and save the start keys and stop keys in memory 
for each partition to decide whether we need to compact? Meanwhile, we can add more 
constraints, for example only perform compaction when there are more than 2 
mob files and _del files in a minor compaction?

> Optimize major mob compaction with _del files
> -
>
> Key: HBASE-17172
> URL: https://issues.apache.org/jira/browse/HBASE-17172
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>
> Today, when there is a _del file in mobdir, every mob file will be 
> recompacted in a major mob compaction; this causes lots of IO and slows down 
> the major mob compaction (it may take months to finish). This needs to be 
> improved. A few ideas are: 
> 1) Do not compact all _del files into one; instead, compact them based on 
> groups with startKey as the key. Then use the firstKey/startKey of each mob 
> file to see if the _del file needs to be included for this partition.
> 2) Based on the timerange of the _del file, compactions of files after that 
> timerange do not need to include the _del file, as these are newer files.





[jira] [Commented] (HBASE-17191) Make use of UnsafeByteOperations#unsafeWrap(ByteBuffer buffer) in PBUtil#toCell(Cell cell)

2016-12-01 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714166#comment-15714166
 ] 

Anoop Sam John commented on HBASE-17191:


+1

> Make use of UnsafeByteOperations#unsafeWrap(ByteBuffer buffer) in 
> PBUtil#toCell(Cell cell)
> --
>
> Key: HBASE-17191
> URL: https://issues.apache.org/jira/browse/HBASE-17191
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-17191.patch, HBASE-17191_1.patch
>
>
> Since we now have support for BBs in 
> UnsafeByteOperations#unsafeWrap(ByteBuffer), for the non-java clients we 
> could avoid the copy to a temp array while creating the PB result. 
> Since we have support for writing to a BB in ByteOutput, having a result 
> backed by a ByteBuffer should be fine. 





[jira] [Comment Edited] (HBASE-17177) Major compaction can break the region/row level atomic when scan even if we pass mvcc to client

2016-12-01 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714140#comment-15714140
 ] 

Phil Yang edited comment on HBASE-17177 at 12/2/16 5:45 AM:


I think at first we should know whether we can return a consistent view to a 
reopened scanner, no matter whether the region was moved or not. So we should record 
the minReadPoint of the last major compaction, and when we open a region we should 
also know it. We can add a field to HFile's header; if the file is generated by a 
major compaction, this field is the minReadPoint that the compaction used. After 
this we will know, when a scanner comes, whether we can return a consistent view.

Now we have a TTL (hbase.client.scanner.timeout.period) for scanners on the server. 
If there are no requests within TTL milliseconds, we can remove the scanner. So 
I think when we open a region, we can wait the same amount of time before doing a 
major compaction. Although the scanner may have expired at the former RS, it is 
safe, and the TTL is not a long time.


was (Author: yangzhe1991):
I think at first we should know if we can return a consistent view to a 
reopened scanner, no matter the region is moved or not. So we should record the 
minReadPoint of last major compaction and when we open a region we should also 
know it. We can add a field to HFile's header and if it is generated by a major 
compaction this filed is the minReadPoint that the compaction used. After this 
we will know when a scanner comes, we can return a consistent view or not.

Now we have a TTL(hbase.client.scanner.timeout.period) for scanner in server. 
If there is no requests within TTL milliseconds, we can remove the scanner. So 
I think when we open a region, we can wait same time before we want to do a 
major compaction. Although the scanner may has been expired at former RS, it is 
safe and TTL is not a long time.

> Major compaction can break the region/row level atomic when scan even if we 
> pass mvcc to client
> ---
>
> Key: HBASE-17177
> URL: https://issues.apache.org/jira/browse/HBASE-17177
> Project: HBase
>  Issue Type: Sub-task
>  Components: scan
>Reporter: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
>
> We know that a major compaction will actually delete the cells which are 
> deleted by a delete marker. In order to give a consistent view to a scan, we 
> need to use a map to track the read points of all scanners for a region, and 
> the smallest one will be used for a compaction. For all delete markers whose 
> mvcc is greater than this value, we will not use them to delete other cells.
> The problem for a scan restart after a region move is that the new RS does 
> not have the information about the scanners opened at the old RS before the 
> client sends scan requests to the new RS, which means the read points map is 
> incomplete and the smallest read point may be greater than the correct value. 
> So if a major compaction happens at that time, it may delete some cells which 
> should be kept.





[jira] [Comment Edited] (HBASE-17177) Major compaction can break the region/row level atomic when scan even if we pass mvcc to client

2016-12-01 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714140#comment-15714140
 ] 

Phil Yang edited comment on HBASE-17177 at 12/2/16 5:45 AM:


I think at first we should know if we can return a consistent view to a 
reopened scanner, no matter the region is moved or not. So we should record the 
minReadPoint of last major compaction and when we open a region we should also 
know it. We can add a field to HFile's header and if it is generated by a major 
compaction this filed is the minReadPoint that the compaction used. After this 
we will know when a scanner comes, we can return a consistent view or not.

Now we have a TTL(hbase.client.scanner.timeout.period) for scanner in server. 
If there is no requests within TTL milliseconds, we can remove the scanner. So 
I think when we open a region, we can wait same time before we want to do a 
major compaction. Although the scanner may has been expired at former RS, it is 
safe and TTL is not a long time.


was (Author: yangzhe1991):
I think at first we should know if we can return a consistent view to a 
reopened scanner, no matter the region is moved or not. So we should record the 
minReadPoint of last major compaction and when we open a region we should also 
know it. We can add a filed to HFile's header and if it is generated by a major 
compaction this filed is the minReadPoint that the compaction used. After this 
we will know when a scanner comes, we can return a consistent view or not.

Now we have a TTL(hbase.client.scanner.timeout.period) for scanner in server. 
If there is no requests within TTL milliseconds, we can remove the scanner. So 
I think when we open a region, we can wait same time before we want to do a 
major compaction. Although the scanner may has been expired at former RS, it is 
safe and TTL is not a long time.

> Major compaction can break the region/row level atomic when scan even if we 
> pass mvcc to client
> ---
>
> Key: HBASE-17177
> URL: https://issues.apache.org/jira/browse/HBASE-17177
> Project: HBase
>  Issue Type: Sub-task
>  Components: scan
>Reporter: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
>
> We know that a major compaction will actually delete the cells which are 
> deleted by a delete marker. In order to give a consistent view to a scan, we 
> need to use a map to track the read points of all scanners for a region, and 
> the smallest one will be used for a compaction. For all delete markers whose 
> mvcc is greater than this value, we will not use them to delete other cells.
> The problem for a scan restart after a region move is that the new RS does 
> not have the information about the scanners opened at the old RS before the 
> client sends scan requests to the new RS, which means the read points map is 
> incomplete and the smallest read point may be greater than the correct value. 
> So if a major compaction happens at that time, it may delete some cells which 
> should be kept.





[jira] [Commented] (HBASE-17228) precommit grep -c ERROR may grab non errors

2016-12-01 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714153#comment-15714153
 ] 

stack commented on HBASE-17228:
---

I messed around. The attached patch seems to work. Will commit in the morning. Can 
revert if it doesn't fix things.

> precommit grep -c ERROR may grab non errors
> ---
>
> Key: HBASE-17228
> URL: https://issues.apache.org/jira/browse/HBASE-17228
> Project: HBase
>  Issue Type: Bug
>  Components: scripts
>Reporter: Matteo Bertozzi
>Priority: Minor
> Attachments: HBASE-17228.master.001.patch
>
>
> It looks like we do a simple "grep -c ERROR" to count the errors that we 
> get from the build:
> https://github.com/apache/hbase/blob/master/dev-support/hbase-personality.sh#L305
> But this way we end up with a count of 1 just because we have an enum 
> called ERROR_CODE in HBase, and the enum shows up in a debug message:
> {noformat}
> $ grep ERROR patch-hbaseprotoc-hbase-server.txt 
> [DEBUG] adding entry 
> org/apache/hadoop/hbase/util/HBaseFsck$ErrorReporter$ERROR_CODE.class
> {noformat}
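The over-match is easy to reproduce in a couple of lines of shell (the log contents and file path here are illustrative, not the real precommit output):

```shell
# Simulate a build log: one DEBUG line that merely mentions the enum name,
# and one real error line.
printf '%s\n' \
  '[DEBUG] adding entry org/apache/hadoop/hbase/util/HBaseFsck$ErrorReporter$ERROR_CODE.class' \
  '[ERROR] something actually failed' > /tmp/patch-hbaseprotoc.txt

grep -c ERROR /tmp/patch-hbaseprotoc.txt        # 2 -- counts the DEBUG line too
grep -c '\[ERROR\]' /tmp/patch-hbaseprotoc.txt  # 1 -- anchors on the log-level tag
```

Anchoring the pattern on the Maven log-level tag (e.g. `\[ERROR\]`) is one way to avoid counting identifiers that merely contain the word ERROR.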



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17228) precommit grep -c ERROR may grab non errors

2016-12-01 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17228:
--
Attachment: HBASE-17228.master.001.patch

> precommit grep -c ERROR may grab non errors
> ---
>
> Key: HBASE-17228
> URL: https://issues.apache.org/jira/browse/HBASE-17228
> Project: HBase
>  Issue Type: Bug
>  Components: scripts
>Reporter: Matteo Bertozzi
>Priority: Minor
> Attachments: HBASE-17228.master.001.patch
>
>
> It looks like we do a simple "grep -c ERROR" to count the errors that we 
> get from the build:
> https://github.com/apache/hbase/blob/master/dev-support/hbase-personality.sh#L305
> But this way we end up with a count of 1 just because we have an enum 
> called ERROR_CODE in HBase, and the enum shows up in a debug message:
> {noformat}
> $ grep ERROR patch-hbaseprotoc-hbase-server.txt 
> [DEBUG] adding entry 
> org/apache/hadoop/hbase/util/HBaseFsck$ErrorReporter$ERROR_CODE.class
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17172) Optimize major mob compaction with _del files

2016-12-01 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714142#comment-15714142
 ] 

huaxiang sun commented on HBASE-17172:
--

Thanks Jingcheng. Regarding "If we skip the compacted files, the threshold 
is not that useful anymore.": today, if there is only one file in the partition 
and there are no _del files, the file is skipped. With a _del file, the current 
logic is to compact the already-compacted file with the _del file. Let's say there 
is one mob file regionA20161101, which was already compacted. On 12/1/2016, there 
is a _del file regionB20161201_del; mob compaction kicks in, 
regionA20161101 is less than the threshold, and it is picked for 
compaction. Since there is a _del file, regionA20161101 and 
regionB20161201_del are compacted into regionA20161101_1. After that, 
regionB20161201_del cannot be deleted since it is not an allFiles compaction. 
In the next mob compaction, regionA20161101_1 and regionB20161201_del 
will be picked up again and compacted into regionA20161101_2. So in this 
case, it will cause more unnecessary IO. Could you confirm whether this is 
the case?
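Idea (1) from the description, grouping _del files by start key and including one in a partition's compaction only when its key range overlaps the mob file's, comes down to an interval-overlap check on byte[] keys. A minimal sketch (class and method names are hypothetical, not the actual mob-compaction API):

```java
import java.util.Arrays;

public class DelFilePartitioning {
    /**
     * True if the key ranges [aFirst, aLast] and [bFirst, bLast] overlap.
     * Keys compare lexicographically on unsigned bytes, like Bytes.compareTo.
     */
    static boolean overlaps(byte[] aFirst, byte[] aLast, byte[] bFirst, byte[] bLast) {
        return Arrays.compareUnsigned(aFirst, bLast) <= 0
            && Arrays.compareUnsigned(bFirst, aLast) <= 0;
    }

    public static void main(String[] args) {
        // A _del file holding only regionB keys does not need to be compacted
        // with a mob file holding only regionA keys.
        byte[] mobFirst = "regionA".getBytes();
        byte[] mobLast  = "regionA~".getBytes();
        byte[] delFirst = "regionB".getBytes();
        byte[] delLast  = "regionB~".getBytes();
        System.out.println(overlaps(mobFirst, mobLast, delFirst, delLast)); // false
    }
}
```

With such a check, a compacted single-file partition would only be re-compacted when a _del file's range actually touches it, avoiding the repeated regionA20161101_1, _2, ... rewrites described above.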

> Optimize major mob compaction with _del files
> -
>
> Key: HBASE-17172
> URL: https://issues.apache.org/jira/browse/HBASE-17172
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>
> Today, when there is a _del file in mobdir, with major mob compaction, every 
> mob file will be recompacted, this causes lots of IO and slow down major mob 
> compaction (may take months to finish). This needs to be improved. A few 
> ideas are: 
> 1) Do not compact all _del files into one, instead, compact them based on 
> groups with startKey as the key. Then use firstKey/startKey to make each mob 
> file to see if the _del file needs to be included for this partition.
> 2). Based on the timerange of the _del file, compaction for files after that 
> timerange does not need to include the _del file as these are newer files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17177) Major compaction can break the region/row level atomic when scan even if we pass mvcc to client

2016-12-01 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714140#comment-15714140
 ] 

Phil Yang commented on HBASE-17177:
---

I think at first we should know whether we can return a consistent view to a 
reopened scanner, no matter whether the region has moved or not. So we should record 
the minReadPoint of the last major compaction, and when we open a region we should also 
know it. We can add a field to HFile's header; if the file is generated by a major 
compaction, this field is the minReadPoint that the compaction used. After this, 
when a scanner comes, we will know whether we can return a consistent view.

Now we have a TTL (hbase.client.scanner.timeout.period) for scanners on the server. 
If there are no requests within TTL milliseconds, we can remove the scanner. So 
I think when we open a region, we can wait the same amount of time before we do a 
major compaction. Although the scanner may have already expired at the former RS, this is 
safe, and the TTL is not a long time.
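The wait-for-TTL idea can be sketched as a simple guard (names are hypothetical; the real TTL would come from hbase.client.scanner.timeout.period):

```java
public class MajorCompactionGuard {
    private final long regionOpenTimeMs;
    private final long scannerTtlMs; // hbase.client.scanner.timeout.period

    public MajorCompactionGuard(long regionOpenTimeMs, long scannerTtlMs) {
        this.regionOpenTimeMs = regionOpenTimeMs;
        this.scannerTtlMs = scannerTtlMs;
    }

    /**
     * Allow major compaction only after the scanner TTL has elapsed since
     * region open: by then, any scanner opened on the previous RS has
     * expired, so no unknown read point can undercut the ones we track.
     */
    public boolean mayRunMajorCompaction(long nowMs) {
        return nowMs - regionOpenTimeMs >= scannerTtlMs;
    }
}
```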

> Major compaction can break the region/row level atomic when scan even if we 
> pass mvcc to client
> ---
>
> Key: HBASE-17177
> URL: https://issues.apache.org/jira/browse/HBASE-17177
> Project: HBase
>  Issue Type: Sub-task
>  Components: scan
>Reporter: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
>
> We know that major compaction will actually delete the cells which are 
> deleted by a delete marker. In order to give a consistent view for a scan, we 
> need to use a map to track the read points for all scanners for a region, and 
> the smallest one will be used for a compaction. For all delete markers whose 
> mvcc is greater than this value, we will not use it to delete other cells.
> And the problem for a scan restart after region move is that, the new RS does 
> not have the information of the scanners opened at the old RS before the 
> client sends scan requests to the new RS which means the read points map is 
> incomplete and the smallest read point may be greater than the correct value. 
> So if a major compaction happens at that time, it may delete some cells which 
> should be kept.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17177) Major compaction can break the region/row level atomic when scan even if we pass mvcc to client

2016-12-01 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714130#comment-15714130
 ] 

Duo Zhang commented on HBASE-17177:
---

{quote}
Not sure about NONE/ROW/REGION. Can we do REGION first, since mvcc is by 
region, and then if needed do ROW and NONE.
{quote}

NONE/ROW/REGION is the lower bound. If there is no error, we will always 
have REGION-level atomicity. The problem only happens when there is an 
error and we need to reopen a scanner. We will try our best to keep REGION-level 
atomicity but, as said above, we cannot always succeed. And if that bad 
case happens, we will use the 'atomicity' option to determine whether we can 
go on or must throw an exception to the user.

Thanks.
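The lower-bound semantics described above could look roughly like this (a sketch; the 'atomicity' option name and these types are hypothetical):

```java
public class ScanAtomicityCheck {
    /** Lower bound requested by the user; ordinal order is NONE < ROW < REGION. */
    public enum Atomicity { NONE, ROW, REGION }

    /**
     * After a scanner reopen we normally still have REGION-level atomicity;
     * only when that best effort fails do we compare what we can still
     * guarantee against the requested lower bound.
     */
    public static boolean canContinue(Atomicity requested, Atomicity achievable) {
        return achievable.ordinal() >= requested.ordinal();
    }

    public static void main(String[] args) {
        System.out.println(canContinue(Atomicity.ROW, Atomicity.REGION)); // true
        System.out.println(canContinue(Atomicity.REGION, Atomicity.ROW)); // false: throw
    }
}
```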

> Major compaction can break the region/row level atomic when scan even if we 
> pass mvcc to client
> ---
>
> Key: HBASE-17177
> URL: https://issues.apache.org/jira/browse/HBASE-17177
> Project: HBase
>  Issue Type: Sub-task
>  Components: scan
>Reporter: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
>
> We know that major compaction will actually delete the cells which are 
> deleted by a delete marker. In order to give a consistent view for a scan, we 
> need to use a map to track the read points for all scanners for a region, and 
> the smallest one will be used for a compaction. For all delete markers whose 
> mvcc is greater than this value, we will not use it to delete other cells.
> And the problem for a scan restart after region move is that, the new RS does 
> not have the information of the scanners opened at the old RS before the 
> client sends scan requests to the new RS which means the read points map is 
> incomplete and the smallest read point may be greater than the correct value. 
> So if a major compaction happens at that time, it may delete some cells which 
> should be kept.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17177) Major compaction can break the region/row level atomic when scan even if we pass mvcc to client

2016-12-01 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714094#comment-15714094
 ] 

stack commented on HBASE-17177:
---

A region opens after a move, and a major compaction could start. It would look 
for the smallest read point. There might be none, so it would think it could clean 
up all deletes.

After that, a restarted scan comes in with an mvcc that is older than the current 
read point.

The Region does not keep a record of the mvcc that the last or currently ongoing major 
compaction used. If it did, we could fail the scan if its mvcc was older than 
that of the major compaction.

Yeah, it seems smart to delay major compaction until a good while after a region 
opens so restarted scanners have a chance of getting back in. Can we find a 
latch other than a time-based one (wait a few minutes)?

Compactions get promoted from minor to major if it happens that the minor 
compaction includes all hfiles. We'd have to undo this or not allow the upgrade.

Not sure about NONE/ROW/REGION. Can we do REGION first, since mvcc is by 
region, and then if needed do ROW and NONE.

This is an awkward problem. 
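If the region did record the read point used by the last (or ongoing) major compaction, failing the restarted scan would be a one-line check. A sketch under that assumption (nothing like this exists today; names are hypothetical):

```java
public class RestartedScanCheck {
    /**
     * Reject a restarted scan whose read point predates the last major
     * compaction: cells it should still see may already be gone.
     */
    public static void checkRestartedScan(long scannerReadPoint,
                                          long lastMajorCompactionReadPoint) {
        if (scannerReadPoint < lastMajorCompactionReadPoint) {
            throw new IllegalStateException("Scanner read point " + scannerReadPoint
                + " predates major compaction read point " + lastMajorCompactionReadPoint
                + "; cannot return a consistent view");
        }
    }
}
```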

> Major compaction can break the region/row level atomic when scan even if we 
> pass mvcc to client
> ---
>
> Key: HBASE-17177
> URL: https://issues.apache.org/jira/browse/HBASE-17177
> Project: HBase
>  Issue Type: Sub-task
>  Components: scan
>Reporter: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
>
> We know that major compaction will actually delete the cells which are 
> deleted by a delete marker. In order to give a consistent view for a scan, we 
> need to use a map to track the read points for all scanners for a region, and 
> the smallest one will be used for a compaction. For all delete markers whose 
> mvcc is greater than this value, we will not use it to delete other cells.
> And the problem for a scan restart after region move is that, the new RS does 
> not have the information of the scanners opened at the old RS before the 
> client sends scan requests to the new RS which means the read points map is 
> incomplete and the smallest read point may be greater than the correct value. 
> So if a major compaction happens at that time, it may delete some cells which 
> should be kept.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17191) Make use of UnsafeByteOperations#unsafeWrap(ByteBuffer buffer) in PBUtil#toCell(Cell cell)

2016-12-01 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-17191:
---
Status: Patch Available  (was: Open)

> Make use of UnsafeByteOperations#unsafeWrap(ByteBuffer buffer) in 
> PBUtil#toCell(Cell cell)
> --
>
> Key: HBASE-17191
> URL: https://issues.apache.org/jira/browse/HBASE-17191
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-17191.patch, HBASE-17191_1.patch
>
>
> Since we now have support for ByteBuffers in 
> UnsafeByteOperations#unsafeWrap(ByteBuffer), for non-Java clients we can 
> avoid the copy to a temporary array while creating the PB result. 
> Since we have support for writing to a ByteBuffer in ByteOutput, having a 
> result backed by a ByteBuffer should be fine. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17191) Make use of UnsafeByteOperations#unsafeWrap(ByteBuffer buffer) in PBUtil#toCell(Cell cell)

2016-12-01 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-17191:
---
Attachment: HBASE-17191_1.patch

Updated patch based on comments. Yes, using CellComparator is much better so 
that we can check all the parts of the cell.
Thanks for the review. Will commit unless there are objections.

> Make use of UnsafeByteOperations#unsafeWrap(ByteBuffer buffer) in 
> PBUtil#toCell(Cell cell)
> --
>
> Key: HBASE-17191
> URL: https://issues.apache.org/jira/browse/HBASE-17191
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-17191.patch, HBASE-17191_1.patch
>
>
> Since we now have support for ByteBuffers in 
> UnsafeByteOperations#unsafeWrap(ByteBuffer), for non-Java clients we can 
> avoid the copy to a temporary array while creating the PB result. 
> Since we have support for writing to a ByteBuffer in ByteOutput, having a 
> result backed by a ByteBuffer should be fine. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17191) Make use of UnsafeByteOperations#unsafeWrap(ByteBuffer buffer) in PBUtil#toCell(Cell cell)

2016-12-01 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-17191:
---
Status: Open  (was: Patch Available)

> Make use of UnsafeByteOperations#unsafeWrap(ByteBuffer buffer) in 
> PBUtil#toCell(Cell cell)
> --
>
> Key: HBASE-17191
> URL: https://issues.apache.org/jira/browse/HBASE-17191
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-17191.patch
>
>
> Since we now have support for ByteBuffers in 
> UnsafeByteOperations#unsafeWrap(ByteBuffer), for non-Java clients we can 
> avoid the copy to a temporary array while creating the PB result. 
> Since we have support for writing to a ByteBuffer in ByteOutput, having a 
> result backed by a ByteBuffer should be fine. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16773) AccessController should access local region if possible

2016-12-01 Thread Pankaj Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714026#comment-15714026
 ] 

Pankaj Kumar commented on HBASE-16773:
--

Hi [~tedyu], can we have this improvement in other branches (1.3.X/1.2.X/1.1.X) 
as well?

> AccessController should access local region if possible
> ---
>
> Key: HBASE-16773
> URL: https://issues.apache.org/jira/browse/HBASE-16773
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16773.branch-1.txt, 16773.v2.txt, 16773.v3.txt, 
> 16773.v4.txt, 16773.v5.txt, 16773.v6.txt, 16773.v7.txt
>
>
> We observed the following in the stack trace of region server on a 1.1.2 
> cluster:
> {code}
> "PriorityRpcServer.handler=19,queue=1,port=60200" #225 daemon prio=5 
> os_prio=0 tid=0x7fb562296000 nid=0x81c0 runnable [0x7fb509a27000]
>java.lang.Thread.State: RUNNABLE
>   at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
>   at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
>   at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
>   at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
>   - locked <0x0003d4dfd770> (a sun.nio.ch.Util$2)
>   - locked <0x0003d4dfd760> (a java.util.Collections$UnmodifiableSet)
>   - locked <0x0003d4dfd648> (a sun.nio.ch.EPollSelectorImpl)
>   at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
>   at java.io.FilterInputStream.read(FilterInputStream.java:133)
>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
>   - locked <0x0003d7dae180> (a java.io.BufferedInputStream)
>   at java.io.DataInputStream.readInt(DataInputStream.java:387)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.readStatus(HBaseSaslRpcClient.java:151)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:189)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:611)
>   - locked <0x0003d5c7edc0> (a 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:156)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:737)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:734)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:734)
>   - locked <0x0003d5c7edc0> (a 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
>   at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1199)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:32627)
>   at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:854)
>   at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:845)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
>   at org.apache.hadoop.hbase.client.HTable.get(HTable.java:862)
>   at org.apache.hadoop.hbase.client.HTable.get(HTable.java:828)
>   at 
> org.apache.hadoop.hbase.security.access.AccessControlLists.getPermissions(AccessControlLists.java:461)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.updateACL(AccessController.java:260)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.postPut(AccessController.java:1661)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$32.call(RegionCoprocessorHost.java:940)
>   at 
> 

[jira] [Commented] (HBASE-16119) Procedure v2 - Reimplement merge

2016-12-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714021#comment-15714021
 ] 

Hadoop QA commented on HBASE-16119:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 43s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 40s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 17m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
50s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 49s 
{color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 17m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 40 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 31s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:red}-1{color} | {color:red} hbaseprotoc {color} | {color:red} 0m 21s 
{color} | {color:red} Patch generated 1 new protoc errors in hbase-server. 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 29s 
{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 2s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 92m 43s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 29s 
{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 
5s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 185m 41s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841394/HBASE-16119.v2-master.patch
 |
| JIRA Issue | HBASE-16119 |
| Optional Tests |  

[jira] [Commented] (HBASE-16616) Rpc handlers stuck on ThreadLocalMap.expungeStaleEntry

2016-12-01 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714003#comment-15714003
 ] 

stack commented on HBASE-16616:
---

Opened HBASE-17229 to backport purge of threadlocals. Thanks.

> Rpc handlers stuck on ThreadLocalMap.expungeStaleEntry
> --
>
> Key: HBASE-16616
> URL: https://issues.apache.org/jira/browse/HBASE-16616
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 1.2.2
>Reporter: Tomu Tsuruhara
>Assignee: Tomu Tsuruhara
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16616.branch-1.v2.txt, HBASE-16616.master.001.patch, 
> HBASE-16616.master.002.patch, ScreenShot 2016-09-09 14.17.53.png
>
>
> In our HBase 1.2.2 cluster, some regionservers showed a very bad 
> "QueueCallTime_99th_percentile", exceeding 10 seconds.
> Most RPC handler threads were stuck in the ThreadLocalMap.expungeStaleEntry 
> call at that time.
> {noformat}
> "PriorityRpcServer.handler=18,queue=0,port=16020" #322 daemon prio=5 
> os_prio=0 tid=0x7fd422062800 nid=0x19b89 runnable [0x7fcb8a821000]
>java.lang.Thread.State: RUNNABLE
> at 
> java.lang.ThreadLocal$ThreadLocalMap.expungeStaleEntry(ThreadLocal.java:617)
> at java.lang.ThreadLocal$ThreadLocalMap.remove(ThreadLocal.java:499)
> at 
> java.lang.ThreadLocal$ThreadLocalMap.access$200(ThreadLocal.java:298)
> at java.lang.ThreadLocal.remove(ThreadLocal.java:222)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryReleaseShared(ReentrantReadWriteLock.java:426)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.releaseShared(AbstractQueuedSynchronizer.java:1341)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.unlock(ReentrantReadWriteLock.java:881)
> at 
> com.yammer.metrics.stats.ExponentiallyDecayingSample.unlockForRegularUsage(ExponentiallyDecayingSample.java:196)
> at 
> com.yammer.metrics.stats.ExponentiallyDecayingSample.update(ExponentiallyDecayingSample.java:113)
> at 
> com.yammer.metrics.stats.ExponentiallyDecayingSample.update(ExponentiallyDecayingSample.java:81)
> at 
> org.apache.hadoop.metrics2.lib.MutableHistogram.add(MutableHistogram.java:81)
> at 
> org.apache.hadoop.metrics2.lib.MutableRangeHistogram.add(MutableRangeHistogram.java:59)
> at 
> org.apache.hadoop.hbase.ipc.MetricsHBaseServerSourceImpl.dequeuedCall(MetricsHBaseServerSourceImpl.java:194)
> at 
> org.apache.hadoop.hbase.ipc.MetricsHBaseServer.dequeuedCall(MetricsHBaseServer.java:76)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2192)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> We were using jdk 1.8.0_92 and here is a snippet from ThreadLocal.java.
> {code}
> 616:while (tab[h] != null)
> 617:h = nextIndex(h, len);
> {code}
> So I hypothesized that there were too many consecutive entries in the {{tab}} 
> array, and I actually found them in the heap dump.
> !ScreenShot 2016-09-09 14.17.53.png|width=50%!
> Most of these entries pointed at instances of 
> {{org.apache.hadoop.hbase.util.Counter$1}},
> which is equivalent to the {{indexHolderThreadLocal}} instance variable in 
> the {{Counter}} class.
> Because the {{RpcServer$Connection}} class creates a {{Counter}} instance 
> {{rpcCount}} for every connection,
> it is possible to have lots of {{Counter#indexHolderThreadLocal}} instances 
> in the RegionServer process
> when we repeatedly connect and close from the client. As a result, a 
> ThreadLocalMap can have lots of consecutive entries.
> Usually, since each entry is a {{WeakReference}}, these entries are 
> collected and removed by the garbage collector soon after the connection is 
> closed.
> But if a connection's lifetime is long enough to survive a young GC, the 
> entries won't be collected until the old-gen collector runs.
> Furthermore, under a G1GC deployment, they may not be collected even by the 
> old-gen (mixed) GC if they sit in a region which doesn't have much garbage.
> We were actually using G1GC when we encountered this problem.
> We should remove the entry from the ThreadLocalMap by calling 
> ThreadLocal#remove explicitly.
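The fix amounts to pairing the per-connection counter's lifecycle with an explicit ThreadLocal#remove, rather than relying on the WeakReference entries being expunged by long-lived handler threads. A minimal sketch (this class is illustrative, not HBase's actual Counter):

```java
public class ConnectionCounter {
    // One mutable cell per thread to avoid contention, loosely mirroring
    // the striped design of HBase's Counter.
    private final ThreadLocal<long[]> indexHolder =
        ThreadLocal.withInitial(() -> new long[1]);

    public void increment() { indexHolder.get()[0]++; }

    public long get() { return indexHolder.get()[0]; }

    /**
     * Call when the owning connection closes: eagerly removes this thread's
     * ThreadLocalMap entry instead of leaving a stale WeakReference for
     * expungeStaleEntry to chase later.
     */
    public void destroy() { indexHolder.remove(); }
}
```

After destroy(), a subsequent get() on the same thread reinitializes a fresh cell, so the explicit remove is safe even if the object is accidentally reused.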



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17229) Backport of purge ThreadLocals

2016-12-01 Thread stack (JIRA)
stack created HBASE-17229:
-

 Summary: Backport of purge ThreadLocals
 Key: HBASE-17229
 URL: https://issues.apache.org/jira/browse/HBASE-17229
 Project: HBase
  Issue Type: Bug
Reporter: stack
Priority: Critical
 Fix For: 1.3.1, 1.2.5


Backport HBASE-17072 and HBASE-16146. The former needs to be backported to 1.3 
([~mantonov]) and 1.2 ([~busbey]). The latter is already in 1.3 and needs to be 
backported to 1.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-17225) Backport "HBASE-16616 Rpc handlers stuck on ThreadLocalMap.expungeStaleEntry"

2016-12-01 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-17225.
---
Resolution: Invalid

Resolving as invalid. [~tomu.tsuruhara] notes in the parent issue that this patch 
was insufficient. Better to purge ThreadLocal usage altogether; backport 
HBASE-16146 instead.

> Backport "HBASE-16616 Rpc handlers stuck on ThreadLocalMap.expungeStaleEntry"
> -
>
> Key: HBASE-17225
> URL: https://issues.apache.org/jira/browse/HBASE-17225
> Project: HBase
>  Issue Type: Sub-task
>  Components: Operability, Performance
>Reporter: stack
>Priority: Critical
> Fix For: 1.3.0, 1.2.5
>
>
> Lets get the parent issue into older branches also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16616) Rpc handlers stuck on ThreadLocalMap.expungeStaleEntry

2016-12-01 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713995#comment-15713995
 ] 

stack commented on HBASE-16616:
---

Thank you for the clarification, [~tomu.tsuruhara]. Let me then resolve the issue 
I made as invalid and open another to backport HBASE-16146 to 1.2. Thanks.

> Rpc handlers stuck on ThreadLocalMap.expungeStaleEntry
> --
>
> Key: HBASE-16616
> URL: https://issues.apache.org/jira/browse/HBASE-16616
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 1.2.2
>Reporter: Tomu Tsuruhara
>Assignee: Tomu Tsuruhara
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16616.branch-1.v2.txt, HBASE-16616.master.001.patch, 
> HBASE-16616.master.002.patch, ScreenShot 2016-09-09 14.17.53.png
>
>
> In our HBase 1.2.2 cluster, some regionservers showed a very bad 
> "QueueCallTime_99th_percentile", exceeding 10 seconds.
> Most RPC handler threads were stuck in the ThreadLocalMap.expungeStaleEntry 
> call at that time.
> {noformat}
> "PriorityRpcServer.handler=18,queue=0,port=16020" #322 daemon prio=5 
> os_prio=0 tid=0x7fd422062800 nid=0x19b89 runnable [0x7fcb8a821000]
>java.lang.Thread.State: RUNNABLE
> at 
> java.lang.ThreadLocal$ThreadLocalMap.expungeStaleEntry(ThreadLocal.java:617)
> at java.lang.ThreadLocal$ThreadLocalMap.remove(ThreadLocal.java:499)
> at 
> java.lang.ThreadLocal$ThreadLocalMap.access$200(ThreadLocal.java:298)
> at java.lang.ThreadLocal.remove(ThreadLocal.java:222)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryReleaseShared(ReentrantReadWriteLock.java:426)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.releaseShared(AbstractQueuedSynchronizer.java:1341)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.unlock(ReentrantReadWriteLock.java:881)
> at 
> com.yammer.metrics.stats.ExponentiallyDecayingSample.unlockForRegularUsage(ExponentiallyDecayingSample.java:196)
> at 
> com.yammer.metrics.stats.ExponentiallyDecayingSample.update(ExponentiallyDecayingSample.java:113)
> at 
> com.yammer.metrics.stats.ExponentiallyDecayingSample.update(ExponentiallyDecayingSample.java:81)
> at 
> org.apache.hadoop.metrics2.lib.MutableHistogram.add(MutableHistogram.java:81)
> at 
> org.apache.hadoop.metrics2.lib.MutableRangeHistogram.add(MutableRangeHistogram.java:59)
> at 
> org.apache.hadoop.hbase.ipc.MetricsHBaseServerSourceImpl.dequeuedCall(MetricsHBaseServerSourceImpl.java:194)
> at 
> org.apache.hadoop.hbase.ipc.MetricsHBaseServer.dequeuedCall(MetricsHBaseServer.java:76)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2192)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> We were using jdk 1.8.0_92 and here is a snippet from ThreadLocal.java.
> {code}
> 616:while (tab[h] != null)
> 617:h = nextIndex(h, len);
> {code}
> So I hypothesized that there were too many consecutive entries in the {{tab}} 
> array, and I actually found them in the heap dump.
> !ScreenShot 2016-09-09 14.17.53.png|width=50%!
> Most of these entries pointed at instances of 
> {{org.apache.hadoop.hbase.util.Counter$1}},
> which is equivalent to the {{indexHolderThreadLocal}} instance variable in 
> the {{Counter}} class.
> Because the {{RpcServer$Connection}} class creates a {{Counter}} instance 
> {{rpcCount}} for every connection,
> it is possible to have lots of {{Counter#indexHolderThreadLocal}} instances 
> in the RegionServer process
> when clients repeatedly connect and close. As a result, a ThreadLocalMap can 
> have lots of consecutive entries.
> Usually, since each entry is a {{WeakReference}}, these entries are collected 
> and removed by the garbage collector soon after the connection is closed.
> But if a connection's lifetime is long enough to survive a young GC, the 
> entry won't be collected until the old-gen collector runs.
> Furthermore, under a G1GC deployment, it may never be collected even by the 
> old-gen GC (mixed GC) if the entries sit in a region that doesn't hold much 
> garbage.
> We were in fact running G1GC when we encountered this problem.
> We should remove the entry from the ThreadLocalMap by calling 
> ThreadLocal#remove explicitly.
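The per-connection pattern described above can be sketched as follows. This is illustrative only, not the actual HBase {{Counter}} code: a counter backed by a {{ThreadLocal}} leaves a stale entry in every handler thread's ThreadLocalMap after its owner (the connection) is closed, unless {{remove()}} is called explicitly.

```java
// Hypothetical sketch of the leak pattern: one ThreadLocal per connection,
// touched by many long-lived handler threads. Without an explicit remove(),
// each closed connection leaves a stale entry in each thread's ThreadLocalMap
// until the WeakReference happens to be collected.
public class ConnectionCounter {
    private final ThreadLocal<long[]> indexHolder =
        ThreadLocal.withInitial(() -> new long[1]);

    public void increment() {
        indexHolder.get()[0]++;
    }

    public long get() {
        return indexHolder.get()[0];
    }

    // The fix proposed in the report: remove this thread's entry explicitly
    // on close instead of waiting for the garbage collector.
    public void close() {
        indexHolder.remove();
    }

    public static void main(String[] args) {
        ConnectionCounter c = new ConnectionCounter();
        c.increment();
        c.increment();
        System.out.println(c.get()); // prints 2
        c.close(); // frees this thread's ThreadLocalMap entry immediately
    }
}
```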





[jira] [Commented] (HBASE-17177) Major compaction can break the region/row level atomic when scan even if we pass mvcc to client

2016-12-01 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713934#comment-15713934
 ] 

Duo Zhang commented on HBASE-17177:
---

I have been thinking about this for days. I think we should add an option for 
scan called 'atomicity' which has three values: {{None}}, {{Row}} and 
{{Region}}. The default value will be {{Row}}.

And this will change the way of error handling at the client side.

For {{None}}, in general we can recover from any exception by reopening a new 
region scanner, unless we time out.

For {{Row}}, if allowPartial is enabled and we failed in the middle of a row, 
then it is not always safe to reopen a new scanner. We need to do something at 
the server side. If the RS gets an open-new-scanner request that carries an 
mvcc read point, it needs to check whether the read point is larger than or 
equal to the current smallest read point, or whether we are in the 'no major 
compaction period' introduced above; if neither holds, we need to tell the 
client that atomicity cannot be guaranteed and it has to give up.

For {{Region}}, the above also applies even when allowPartial is disabled, as 
we need cross-row atomicity.

And I think the {{None}} here is the same thing as 'stateless' in HBASE-15576.

Thanks.
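The proposal above could be sketched as a simple enum. Everything here is hypothetical; neither the type nor the option exists in HBase, it merely names the three levels being discussed:

```java
// Hypothetical sketch of the proposed per-scan atomicity option.
public enum ScanAtomicity {
    NONE,   // 'stateless': always safe to recover by reopening a region scanner
    ROW,    // proposed default: row-level atomicity; partial rows need server-side checks
    REGION  // cross-row atomicity within a region; strictest error handling
}
```

A client would then pick the weakest level it can tolerate, since stronger levels force the server to reject scanner reopens whose mvcc read point can no longer be honored.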

> Major compaction can break the region/row level atomic when scan even if we 
> pass mvcc to client
> ---
>
> Key: HBASE-17177
> URL: https://issues.apache.org/jira/browse/HBASE-17177
> Project: HBase
>  Issue Type: Sub-task
>  Components: scan
>Reporter: Duo Zhang
> Fix For: 2.0.0, 1.4.0
>
>
> We know that major compaction will actually delete the cells which are 
> deleted by a delete marker. In order to give a consistent view for a scan, we 
> need to use a map to track the read points of all scanners for a region, and 
> the smallest one will be used for a compaction. Delete markers whose mvcc is 
> greater than this value will not be used to delete other cells.
> And the problem with a scan restart after a region move is that the new RS 
> does not have the information about the scanners opened at the old RS before 
> the client sends scan requests to the new RS, which means the read points map 
> is incomplete and the smallest read point may be greater than the correct 
> value. So if a major compaction happens at that time, it may delete some 
> cells which should be kept.
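The read-point bookkeeping described above can be sketched generically. These are not the actual HBase classes, just an illustration of tracking a read point per open scanner and letting compaction use the smallest one as its delete-marker visibility bound:

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch (not HBase code): compaction must keep any cell that a
// delete marker newer than the smallest open read point would remove.
public class ReadPointTracker {
    private final ConcurrentHashMap<Long, Long> scannerReadPoints =
        new ConcurrentHashMap<>();

    void scannerOpened(long scannerId, long mvccReadPoint) {
        scannerReadPoints.put(scannerId, mvccReadPoint);
    }

    void scannerClosed(long scannerId) {
        scannerReadPoints.remove(scannerId);
    }

    // Delete markers with mvcc greater than this value must not be applied by
    // a major compaction, since some open scanner may still need those cells.
    long smallestReadPoint(long currentReadPoint) {
        return scannerReadPoints.values().stream()
            .mapToLong(Long::longValue)
            .min()
            .orElse(currentReadPoint); // no open scanners: use the current point
    }
}
```

The bug scenario is then easy to state: if a moved scanner's entry is missing from the map, `smallestReadPoint` comes out too large and compaction applies delete markers it should have deferred.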





[jira] [Commented] (HBASE-17187) DoNotRetryExceptions from coprocessors should bubble up to the application

2016-12-01 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713925#comment-15713925
 ] 

Anoop Sam John commented on HBASE-17187:


Thanks, looks good then.
Below the new lines we have
{code}
if (e instanceof CorruptHFileException || e instanceof FileNotFoundException) {
  throw new DoNotRetryIOException(e);
}
{code}
We can avoid the CorruptHFileException check now. FNFE can come from the MOB 
area where a MOB file is missing, so we need to keep this check unless, at the 
throwing place, we catch FNFE and rethrow it as a type of DNRIOE. Anyway, that 
is not related to your patch. Maybe you can add a TODO? Also remove the 
CorruptHFileException check here. Can you fix on commit? +1
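The "catch FNFE at the throwing place and rethrow as a DNRIOE" suggestion could look roughly like this. The class and method names are hypothetical, and the exception class here is a local stand-in for org.apache.hadoop.hbase.DoNotRetryIOException so the sketch compiles without HBase on the classpath:

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

// Stand-in for org.apache.hadoop.hbase.DoNotRetryIOException.
class DoNotRetryIOException extends IOException {
    DoNotRetryIOException(String msg, Throwable cause) { super(msg, cause); }
}

// Hypothetical reader: wrap the missing-file exception into a non-retriable
// type at the throwing place, so the generic RPC layer no longer needs an
// instanceof FileNotFoundException special case.
public class MobFileReader {
    public byte[] readMobCell(Path path) throws IOException {
        try {
            return Files.readAllBytes(path);
        } catch (FileNotFoundException | NoSuchFileException e) {
            // A missing MOB file will not reappear on retry; fail fast.
            throw new DoNotRetryIOException("MOB file missing: " + path, e);
        }
    }
}
```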

> DoNotRetryExceptions from coprocessors should bubble up to the application
> --
>
> Key: HBASE-17187
> URL: https://issues.apache.org/jira/browse/HBASE-17187
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Attachments: hbase-17187_v1.patch
>
>
> In HBASE-16604, we fixed a case where scanner retries was causing the scan to 
> miss some data in case the scanner is left with a dirty state (like a 
> half-seeked KVHeap). 
> The patch introduced a minor compatibility issue, because now if a 
> coprocessor throws DNRIOE, we still retry the ClientScanner indefinitely. 
> The test {{ServerExceptionIT}} in Phoenix is failing because of this with 
> HBASE-16604. 





[jira] [Commented] (HBASE-16169) Make RegionSizeCalculator scalable

2016-12-01 Thread Thiruvel Thirumoolan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713849#comment-15713849
 ] 

Thiruvel Thirumoolan commented on HBASE-16169:
--

Sorry about the delay. Fine with 2.0 commit. Thanks [~stack]!

> Make RegionSizeCalculator scalable
> --
>
> Key: HBASE-16169
> URL: https://issues.apache.org/jira/browse/HBASE-16169
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce, scaling
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0
>
> Attachments: HBASE-16169.master.000.patch, 
> HBASE-16169.master.001.patch, HBASE-16169.master.002.patch, 
> HBASE-16169.master.003.patch, HBASE-16169.master.004.patch, 
> HBASE-16169.master.005.patch, HBASE-16169.master.006.patch, 
> HBASE-16169.master.007.patch, HBASE-16169.master.007.patch, 
> HBASE-16169.master.008.patch
>
>
> RegionSizeCalculator is needed for better split generation of MR jobs. This 
> requires RegionLoad, which can be obtained via ClusterStatus, i.e. by 
> accessing the Master. We don't want the Master to be in this path.
> The proposal is to add an API to the RegionServer that gets RegionLoad of all 
> regions hosted on it or those of a table if specified. RegionSizeCalculator 
> can use the latter.





[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests

2016-12-01 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-16209:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed the addendum to branch-1. Thanks [~ashu210890] for the contribution.

> Provide an ExponentialBackOffPolicy sleep between failed region open requests
> -
>
> Key: HBASE-16209
> URL: https://issues.apache.org/jira/browse/HBASE-16209
> Project: HBase
>  Issue Type: Bug
>Reporter: Joseph
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16209-addendum.patch, 
> HBASE-16209-addendum.v6.branch-1.patch, 
> HBASE-16209-branch-1-addendum-v2.patch, HBASE-16209-branch-1-addendum.patch, 
> HBASE-16209-branch-1-v3.patch, HBASE-16209-branch-1-v4.patch, 
> HBASE-16209-branch-1-v5.patch, HBASE-16209-branch-1.patch, 
> HBASE-16209-v2.patch, HBASE-16209.patch
>
>
> Related to HBASE-16138. As of now we have no pause between retries of failed 
> region open requests, and with a low maximumAttempt default we can quickly 
> use up all our regionOpen retries if the server is in a bad state. I added 
> an ExponentialBackOffPolicy so that we spread out the timing of our open 
> region retries in AssignmentManager. Review board at 
> https://reviews.apache.org/r/50011/
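The backoff idea is standard: double the pause after each failed attempt up to a cap. A minimal sketch, with made-up constants rather than HBase's actual defaults:

```java
// Illustrative exponential backoff between region-open retries; the constants
// and class name are hypothetical, not HBase's actual configuration.
public class RegionOpenBackoff {
    static final long INITIAL_PAUSE_MS = 1_000;
    static final long MAX_PAUSE_MS = 60_000;

    // Pause before attempt n (n >= 1): initial * 2^(n-1), capped at the max.
    // The shift amount is clamped to avoid overflowing a long.
    static long pauseMillis(int attempt) {
        long pause = INITIAL_PAUSE_MS << Math.min(attempt - 1, 30);
        return Math.min(pause, MAX_PAUSE_MS);
    }
}
```

With these constants the retry pauses run 1s, 2s, 4s, 8s, ... and flatten at 60s, so a region server in a bad state no longer burns through every retry in a tight loop.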





[jira] [Commented] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests

2016-12-01 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713798#comment-15713798
 ] 

Duo Zhang commented on HBASE-16209:
---

+1. Let me commit and resolve this issue.

> Provide an ExponentialBackOffPolicy sleep between failed region open requests
> -
>
> Key: HBASE-16209
> URL: https://issues.apache.org/jira/browse/HBASE-16209
> Project: HBase
>  Issue Type: Bug
>Reporter: Joseph
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16209-addendum.patch, 
> HBASE-16209-addendum.v6.branch-1.patch, 
> HBASE-16209-branch-1-addendum-v2.patch, HBASE-16209-branch-1-addendum.patch, 
> HBASE-16209-branch-1-v3.patch, HBASE-16209-branch-1-v4.patch, 
> HBASE-16209-branch-1-v5.patch, HBASE-16209-branch-1.patch, 
> HBASE-16209-v2.patch, HBASE-16209.patch
>
>
> Related to HBASE-16138. As of now we have no pause between retries of failed 
> region open requests, and with a low maximumAttempt default we can quickly 
> use up all our regionOpen retries if the server is in a bad state. I added 
> an ExponentialBackOffPolicy so that we spread out the timing of our open 
> region retries in AssignmentManager. Review board at 
> https://reviews.apache.org/r/50011/





[jira] [Commented] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests

2016-12-01 Thread Ashu Pachauri (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713778#comment-15713778
 ] 

Ashu Pachauri commented on HBASE-16209:
---

[~Apache9] Does the addendum to branch-1 look good?

> Provide an ExponentialBackOffPolicy sleep between failed region open requests
> -
>
> Key: HBASE-16209
> URL: https://issues.apache.org/jira/browse/HBASE-16209
> Project: HBase
>  Issue Type: Bug
>Reporter: Joseph
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16209-addendum.patch, 
> HBASE-16209-addendum.v6.branch-1.patch, 
> HBASE-16209-branch-1-addendum-v2.patch, HBASE-16209-branch-1-addendum.patch, 
> HBASE-16209-branch-1-v3.patch, HBASE-16209-branch-1-v4.patch, 
> HBASE-16209-branch-1-v5.patch, HBASE-16209-branch-1.patch, 
> HBASE-16209-v2.patch, HBASE-16209.patch
>
>
> Related to HBASE-16138. As of now we have no pause between retries of failed 
> region open requests, and with a low maximumAttempt default we can quickly 
> use up all our regionOpen retries if the server is in a bad state. I added 
> an ExponentialBackOffPolicy so that we spread out the timing of our open 
> region retries in AssignmentManager. Review board at 
> https://reviews.apache.org/r/50011/





[jira] [Updated] (HBASE-17181) Let HBase thrift2 support TThreadedSelectorServer

2016-12-01 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17181:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
Release Note: Add TThreadedSelectorServer support for HBase Thrift2  (was: 
Add TThreadedSelectorServer for HBase Thrift2)
  Status: Resolved  (was: Patch Available)

Pushed to master and branch-1.

Thanks [~eyj...@gmail.com] for the contribution. Thanks all for reviewing.

> Let HBase thrift2 support TThreadedSelectorServer
> -
>
> Key: HBASE-17181
> URL: https://issues.apache.org/jira/browse/HBASE-17181
> Project: HBase
>  Issue Type: New Feature
>  Components: Thrift
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Jian Yi
>Assignee: Jian Yi
>Priority: Minor
>  Labels: features
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17181-V1.patch, HBASE-17181-V2.patch, 
> HBASE-17181-V3.patch, HBASE-17181-V4.patch, HBASE-17181-V5.patch, 
> HBASE-17181-V6.patch, ThriftServer.java
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Add TThreadedSelectorServer for HBase Thrift2





[jira] [Updated] (HBASE-17111) Use Apache CLI in SnapshotInfo tool

2016-12-01 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-17111:
-
Fix Version/s: 2.0.0

> Use Apache CLI in SnapshotInfo tool
> ---
>
> Key: HBASE-17111
> URL: https://issues.apache.org/jira/browse/HBASE-17111
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-17111.master.001.patch
>
>
> AbstractHBaseTool uses Apache CLI to manage command line options, help, etc. 
> We should use it for all tools. This jira is about changing SnapshotInfo to 
> use the same.





[jira] [Updated] (HBASE-17111) Use Apache CLI in SnapshotInfo tool

2016-12-01 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-17111:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Use Apache CLI in SnapshotInfo tool
> ---
>
> Key: HBASE-17111
> URL: https://issues.apache.org/jira/browse/HBASE-17111
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-17111.master.001.patch
>
>
> AbstractHBaseTool uses Apache CLI to manage command line options, help, etc. 
> We should use it for all tools. This jira is about changing SnapshotInfo to 
> use the same.





[jira] [Commented] (HBASE-16119) Procedure v2 - Reimplement merge

2016-12-01 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713714#comment-15713714
 ] 

Stephen Yuan Jiang commented on HBASE-16119:


The V2 patch fixes the warnings.

For {{hbaseprotoc}}, I don't see any errors from my change. [~mbertozzi] 
thinks the script that counts them is broken: 
https://github.com/apache/hbase/blob/master/dev-support/hbase-personality.sh#L305

> Procedure v2 - Reimplement merge
> 
>
> Key: HBASE-16119
> URL: https://issues.apache.org/jira/browse/HBASE-16119
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Region Assignment
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Stephen Yuan Jiang
> Fix For: 2.0.0
>
> Attachments: HBASE-16119.v1-master.patch, HBASE-16119.v2-master.patch
>
>
> use the proc-v2 state machine for merge. also update the logic to have a 
> single meta-writer.





[jira] [Updated] (HBASE-16119) Procedure v2 - Reimplement merge

2016-12-01 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-16119:
---
Attachment: HBASE-16119.v2-master.patch

> Procedure v2 - Reimplement merge
> 
>
> Key: HBASE-16119
> URL: https://issues.apache.org/jira/browse/HBASE-16119
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Region Assignment
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Stephen Yuan Jiang
> Fix For: 2.0.0
>
> Attachments: HBASE-16119.v1-master.patch, HBASE-16119.v2-master.patch
>
>
> use the proc-v2 state machine for merge. also update the logic to have a 
> single meta-writer.





[jira] [Created] (HBASE-17228) precommit grep -c ERROR may grab non errors

2016-12-01 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-17228:
---

 Summary: precommit grep -c ERROR may grab non errors
 Key: HBASE-17228
 URL: https://issues.apache.org/jira/browse/HBASE-17228
 Project: HBase
  Issue Type: Bug
  Components: scripts
Reporter: Matteo Bertozzi
Priority: Minor


It looks like we do a simple "grep -c ERROR" to count the errors that we have 
from the build:
https://github.com/apache/hbase/blob/master/dev-support/hbase-personality.sh#L305

But this way we end up with a count of 1 just because we have an enum called 
ERROR_CODE in HBase, and the enum shows up in a debug message:
{noformat}
$ grep ERROR patch-hbaseprotoc-hbase-server.txt 
[DEBUG] adding entry 
org/apache/hadoop/hbase/util/HBaseFsck$ErrorReporter$ERROR_CODE.class
{noformat}
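A stricter pattern avoids this class of false positive, for example anchoring on Maven's bracketed log level instead of the bare word. This is just a sketch of the idea, not the fix actually adopted in hbase-personality.sh:

```shell
# Reproduce the false positive: a DEBUG line that merely mentions ERROR_CODE.
printf '%s\n%s\n' \
  '[DEBUG] adding entry HBaseFsck$ErrorReporter$ERROR_CODE.class' \
  '[ERROR] Failed to execute goal' > build.log

grep -c ERROR build.log          # counts 2: the enum line is a false positive
grep -c '^\[ERROR\]' build.log   # counts 1: only genuine [ERROR] log lines
```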





[jira] [Commented] (HBASE-16700) Allow for coprocessor whitelisting

2016-12-01 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713677#comment-15713677
 ] 

Enis Soztutar commented on HBASE-16700:
---

Thanks Clay for the updated patches. Looks pretty good to commit. Just some 
last items: 
 - We should remove this (assuming that you added that for debugging): 
{code}
+  static {
+
Logger.getLogger(CoprocessorWhitelistMasterObserver.class).setLevel(Level.TRACE);
+Logger.getLogger("org.apache.hbase.server").setLevel(Level.TRACE);
+  }
{code}
 - Can you please refactor var names like {{coproc_path}} to camelCase.
- Did you want to enable this test? 
{code}
+//  @Test
+  @Category(MediumTests.class)
+  public void testCreationClasspathCoprocessor() throws Exception {
{code}
 - great doc! 

> Allow for coprocessor whitelisting
> --
>
> Key: HBASE-16700
> URL: https://issues.apache.org/jira/browse/HBASE-16700
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors
>Reporter: Clay B.
>Priority: Minor
>  Labels: security
> Attachments: HBASE-16700.000.patch, HBASE-16700.001.patch, 
> HBASE-16700.002.patch, HBASE-16700.003.patch, HBASE-16700.004.patch, 
> HBASE-16700.005.patch, HBASE-16700.006.patch, HBASE-16700.007.patch
>
>
> Today one can turn off all non-system coprocessors with 
> {{hbase.coprocessor.user.enabled}} however, this disables very useful things 
> like Apache Phoenix's coprocessors. Some tenants of a multi-user HBase may 
> also need to run bespoke coprocessors. But as an operator I would not want 
> wanton coprocessor usage. Ideally, one could do one of two things:
> * Allow coprocessors defined in {{hbase-site.xml}} -- this can only be 
> administratively changed in most cases
> * Allow coprocessors from table descriptors but only if the coprocessor is 
> whitelisted





[jira] [Updated] (HBASE-17181) Let HBase thrift2 support TThreadedSelectorServer

2016-12-01 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17181:
--
Affects Version/s: (was: 1.2.4)
Fix Version/s: (was: 1.2.5)

Let's finish the issue first.

[~eyj...@gmail.com] Feel free to open a new backport issue to backport the 
changes to branch-1.2 or any other branches.

Thanks.

> Let HBase thrift2 support TThreadedSelectorServer
> -
>
> Key: HBASE-17181
> URL: https://issues.apache.org/jira/browse/HBASE-17181
> Project: HBase
>  Issue Type: New Feature
>  Components: Thrift
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Jian Yi
>Assignee: Jian Yi
>Priority: Minor
>  Labels: features
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17181-V1.patch, HBASE-17181-V2.patch, 
> HBASE-17181-V3.patch, HBASE-17181-V4.patch, HBASE-17181-V5.patch, 
> HBASE-17181-V6.patch, ThriftServer.java
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Add TThreadedSelectorServer for HBase Thrift2





[jira] [Updated] (HBASE-17227) Backport HBASE-17206 to branch-1.3

2016-12-01 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17227:
--
Issue Type: Task  (was: Bug)

> Backport HBASE-17206 to branch-1.3
> --
>
> Key: HBASE-17227
> URL: https://issues.apache.org/jira/browse/HBASE-17227
> Project: HBase
>  Issue Type: Task
>  Components: wal
>Affects Versions: 1.3.0
>Reporter: Duo Zhang
>Priority: Critical
> Fix For: 1.3.1
>
>






[jira] [Created] (HBASE-17227) Backport HBASE-17206 to branch-1.3

2016-12-01 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-17227:
-

 Summary: Backport HBASE-17206 to branch-1.3
 Key: HBASE-17227
 URL: https://issues.apache.org/jira/browse/HBASE-17227
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 1.3.0
Reporter: Duo Zhang
Priority: Critical
 Fix For: 1.3.1








[jira] [Commented] (HBASE-17187) DoNotRetryExceptions from coprocessors should bubble up to the application

2016-12-01 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713642#comment-15713642
 ] 

Enis Soztutar commented on HBASE-17187:
---

bq. So here we throw UnknownScannerException and RsRpcServices catch IOE and 
the new code is having a DNRIOE check and now we will throw back 
UnknownScannerException not ScannerResetException. I think that is ok. Or not?
That is fine. On the client side (ClientScanner), {{UnknownScannerException}} 
and {{ScannerResetException}} are treated the same way. We introduced 
ScannerResetException rather than reusing UnknownScannerException because the 
two are semantically different, even though the client handles them 
identically. UKSE is thrown when the client asks for a scanner but we cannot 
find it. SRE is thrown when the client was continuing a *known* scanner, but 
due to some exception we have reset it, telling the client that the scanner it 
was using has already been closed by us. In the above places, which one to use 
is debatable. We can keep throwing UKSE, I think.

bq. Just trying to understand the impact of the change fully. Tks.
No worries at all. 




> DoNotRetryExceptions from coprocessors should bubble up to the application
> --
>
> Key: HBASE-17187
> URL: https://issues.apache.org/jira/browse/HBASE-17187
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Attachments: hbase-17187_v1.patch
>
>
> In HBASE-16604, we fixed a case where scanner retries was causing the scan to 
> miss some data in case the scanner is left with a dirty state (like a 
> half-seeked KVHeap). 
> The patch introduced a minor compatibility issue, because now if a 
> coprocessor throws DNRIOE, we still retry the ClientScanner indefinitely. 
> The test {{ServerExceptionIT}} in Phoenix is failing because of this with 
> HBASE-16604. 





[jira] [Updated] (HBASE-17206) FSHLog may roll a new writer successfully with unflushed entries

2016-12-01 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17206:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to branch-1.1+ except branch-1.3. Will open a backport issue for 
branch-1.3.

Thanks all for reviewing.

> FSHLog may roll a new writer successfully with unflushed entries
> 
>
> Key: HBASE-17206
> URL: https://issues.apache.org/jira/browse/HBASE-17206
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.2.5, 1.1.8
>
> Attachments: HBASE-17206.patch
>
>
> Found it when debugging the flakey TestFailedAppendAndSync.
> The problem is in waitSafePoint.
> {code}
>   while (true) {
> if (this.safePointAttainedLatch.await(1, TimeUnit.MILLISECONDS)) {
>   break;
> }
> if (syncFuture.isThrowable()) {
>   throw new 
> FailedSyncBeforeLogCloseException(syncFuture.getThrowable());
> }
>   }
>   return syncFuture;
> {code}
> If we attain the safe point quickly enough then we will bypass the 
> syncFuture.isThrowable check and will not throw 
> FailedSyncBeforeLogCloseException.
> This may cause inconsistency between the memstore and the WAL.
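One possible shape of a fix (not necessarily the committed patch) is to re-check the sync future after the latch is attained, so a failed sync can no longer be bypassed by winning the race. The supporting classes below are minimal stand-ins; the real ones live in HBase's FSHLog:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Minimal stand-ins so the sketch compiles outside HBase.
class SyncFuture {
    volatile Throwable throwable;
    boolean isThrowable() { return throwable != null; }
    Throwable getThrowable() { return throwable; }
}

class FailedSyncBeforeLogCloseException extends Exception {
    FailedSyncBeforeLogCloseException(Throwable cause) { super(cause); }
}

public class SafePointSketch {
    final CountDownLatch safePointAttainedLatch = new CountDownLatch(1);

    SyncFuture waitSafePoint(SyncFuture syncFuture)
            throws FailedSyncBeforeLogCloseException, InterruptedException {
        while (!safePointAttainedLatch.await(1, TimeUnit.MILLISECONDS)) {
            if (syncFuture.isThrowable()) {
                throw new FailedSyncBeforeLogCloseException(syncFuture.getThrowable());
            }
        }
        // The extra check after the latch is attained closes the race window:
        // even if the safe point was reached immediately, a failed sync still
        // surfaces as an exception instead of being silently dropped.
        if (syncFuture.isThrowable()) {
            throw new FailedSyncBeforeLogCloseException(syncFuture.getThrowable());
        }
        return syncFuture;
    }
}
```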





[jira] [Updated] (HBASE-17206) FSHLog may roll a new writer successfully with unflushed entries

2016-12-01 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17206:
--
Affects Version/s: (was: 1.3.0)
Fix Version/s: (was: 1.3.0)

> FSHLog may roll a new writer successfully with unflushed entries
> 
>
> Key: HBASE-17206
> URL: https://issues.apache.org/jira/browse/HBASE-17206
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.2.5, 1.1.8
>
> Attachments: HBASE-17206.patch
>
>
> Found it when debugging the flakey TestFailedAppendAndSync.
> The problem is in waitSafePoint.
> {code}
>   while (true) {
> if (this.safePointAttainedLatch.await(1, TimeUnit.MILLISECONDS)) {
>   break;
> }
> if (syncFuture.isThrowable()) {
>   throw new 
> FailedSyncBeforeLogCloseException(syncFuture.getThrowable());
> }
>   }
>   return syncFuture;
> {code}
> If we attain the safe point quickly enough then we will bypass the 
> syncFuture.isThrowable check and will not throw 
> FailedSyncBeforeLogCloseException.
> This may cause inconsistency between the memstore and the WAL.





[jira] [Commented] (HBASE-17206) FSHLog may roll a new writer successfully with unflushed entries

2016-12-01 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713637#comment-15713637
 ] 

Duo Zhang commented on HBASE-17206:
---

Got it. Let me resolve this issue and open a backport issue for branch-1.3.

> FSHLog may roll a new writer successfully with unflushed entries
> 
>
> Key: HBASE-17206
> URL: https://issues.apache.org/jira/browse/HBASE-17206
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.2.5, 1.1.8
>
> Attachments: HBASE-17206.patch
>
>
> Found it when debugging the flakey TestFailedAppendAndSync.
> The problem is in waitSafePoint.
> {code}
>   while (true) {
> if (this.safePointAttainedLatch.await(1, TimeUnit.MILLISECONDS)) {
>   break;
> }
> if (syncFuture.isThrowable()) {
>   throw new 
> FailedSyncBeforeLogCloseException(syncFuture.getThrowable());
> }
>   }
>   return syncFuture;
> {code}
> If we attain the safe point quickly enough then we will bypass the 
> syncFuture.isThrowable check and will not throw 
> FailedSyncBeforeLogCloseException.
> This may cause inconsistency between the memstore and the WAL.





[jira] [Commented] (HBASE-7612) [JDK8] Replace use of high-scale-lib counters with intrinsic facilities

2016-12-01 Thread Hiroshi Ikeda (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713609#comment-15713609
 ] 

Hiroshi Ikeda commented on HBASE-7612:
--

Sorry, I have been busy and have not created a patch yet.

> [JDK8] Replace use of high-scale-lib counters with intrinsic facilities
> ---
>
> Key: HBASE-7612
> URL: https://issues.apache.org/jira/browse/HBASE-7612
> Project: HBase
>  Issue Type: Sub-task
>  Components: metrics
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>Assignee: Duo Zhang
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-7612-v1.patch, HBASE-7612.patch
>
>
> JEP155 introduces a few new classes (DoubleAccumulator, DoubleAdder, 
> LongAccumulator, LongAdder) that "internally employ contention-reduction 
> techniques that provide huge throughput improvements as compared to Atomic 
> variables". There are applications of these where we are currently using 
> Cliff Click's high-scale-lib and for metrics.
> See http://openjdk.java.net/jeps/155
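As a quick illustration of why these JDK 8 classes are attractive as Counter replacements: {{LongAdder}} offers an increment/sum API while spreading contended updates over internal cells, unlike an {{AtomicLong}} that CASes a single shared variable. This is a generic sketch, not HBase code:

```java
import java.util.concurrent.atomic.LongAdder;

// Generic sketch: four threads hammer one LongAdder; sum() reconciles the
// internal per-cell counts into a single total on read.
public class AdderDemo {
    public static void main(String[] args) throws InterruptedException {
        LongAdder requests = new LongAdder();
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 1_000; j++) {
                    requests.increment(); // contention-friendly update
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        System.out.println(requests.sum()); // prints 4000
    }
}
```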





[jira] [Commented] (HBASE-17205) Add a metric for the duration of region in transition

2016-12-01 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713594#comment-15713594
 ] 

Guanghao Zhang commented on HBASE-17205:


Thanks [~mbertozzi] for reviewing.

> Add a metric for the duration of region in transition
> -
>
> Key: HBASE-17205
> URL: https://issues.apache.org/jira/browse/HBASE-17205
> Project: HBase
>  Issue Type: Improvement
>  Components: Region Assignment
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17205-branch-1.patch, HBASE-17205-v1.patch, 
> HBASE-17205-v1.patch, HBASE-17205.patch
>
>
> While working on HBASE-17178, I found there is no metric for the overall 
> duration of a region in transition. When moving a region from A to B, the 
> region state transitions PENDING_CLOSE => CLOSING => CLOSED => 
> PENDING_OPEN => OPENING => OPENED. Each state transition updates the 
> timestamp to the current time, so we can't get the overall duration of a 
> region in transition. Add a RIT duration to RegionState to accumulate this 
> metric.
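The accumulation idea described above can be sketched in plain Java. This is a hedged illustration, not the actual RegionState code; the class and method names are invented, and timestamps are passed in explicitly to keep it deterministic:

```java
// Hypothetical sketch: instead of only resetting a single timestamp on every
// state change, also accumulate the elapsed time, so the overall duration of
// the region in transition survives each PENDING_CLOSE -> ... -> OPENED step.
public class RitDurationSketch {
    private long stamp;        // per-state timestamp, reset on each transition
    private long ritDuration;  // accumulated time in transition

    public RitDurationSketch(long startTime) {
        this.stamp = startTime;
    }

    // Called on each state change, e.g. CLOSING -> CLOSED.
    public void transitionAt(long now) {
        ritDuration += now - stamp;  // add time spent in the previous state
        stamp = now;                 // then reset the timestamp as before
    }

    public long getRitDuration() {
        return ritDuration;
    }
}
```

With this shape, the per-state timestamp behaves exactly as it did before, while the new field carries the overall duration that the metric needs.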



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16616) Rpc handlers stuck on ThreadLocalMap.expungeStaleEntry

2016-12-01 Thread Tomu Tsuruhara (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713580#comment-15713580
 ] 

Tomu Tsuruhara edited comment on HBASE-16616 at 12/2/16 12:49 AM:
--

Actually, a patch here was not enough to resolve the issue because of the same 
reason mentioned here 
https://issues.apache.org/jira/browse/HBASE-17072?focusedCommentId=15675394=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15675394

HBASE-16146 seems more essential. It removes {{ThreadLocal}} usage from 
{{Counter}} entirely.


was (Author: tomu.tsuruhara):
Actually, a patch here was not enough to resolve the issue because of the same 
reason mentioned here 
https://issues.apache.org/jira/browse/HBASE-17072?focusedCommentId=15675394=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15675394

HBASE-16146 seems more essential. It removes {{ThreadLocal}} usage from 
{[Counter}} entirely.

> Rpc handlers stuck on ThreadLocalMap.expungeStaleEntry
> --
>
> Key: HBASE-16616
> URL: https://issues.apache.org/jira/browse/HBASE-16616
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 1.2.2
>Reporter: Tomu Tsuruhara
>Assignee: Tomu Tsuruhara
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16616.branch-1.v2.txt, HBASE-16616.master.001.patch, 
> HBASE-16616.master.002.patch, ScreenShot 2016-09-09 14.17.53.png
>
>
> In our HBase 1.2.2 cluster, some regionservers showed a very bad 
> "QueueCallTime_99th_percentile", exceeding 10 seconds.
> Most rpc handler threads were stuck on the ThreadLocalMap.expungeStaleEntry 
> call at that time.
> {noformat}
> "PriorityRpcServer.handler=18,queue=0,port=16020" #322 daemon prio=5 
> os_prio=0 tid=0x7fd422062800 nid=0x19b89 runnable [0x7fcb8a821000]
>java.lang.Thread.State: RUNNABLE
> at 
> java.lang.ThreadLocal$ThreadLocalMap.expungeStaleEntry(ThreadLocal.java:617)
> at java.lang.ThreadLocal$ThreadLocalMap.remove(ThreadLocal.java:499)
> at 
> java.lang.ThreadLocal$ThreadLocalMap.access$200(ThreadLocal.java:298)
> at java.lang.ThreadLocal.remove(ThreadLocal.java:222)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryReleaseShared(ReentrantReadWriteLock.java:426)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.releaseShared(AbstractQueuedSynchronizer.java:1341)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.unlock(ReentrantReadWriteLock.java:881)
> at 
> com.yammer.metrics.stats.ExponentiallyDecayingSample.unlockForRegularUsage(ExponentiallyDecayingSample.java:196)
> at 
> com.yammer.metrics.stats.ExponentiallyDecayingSample.update(ExponentiallyDecayingSample.java:113)
> at 
> com.yammer.metrics.stats.ExponentiallyDecayingSample.update(ExponentiallyDecayingSample.java:81)
> at 
> org.apache.hadoop.metrics2.lib.MutableHistogram.add(MutableHistogram.java:81)
> at 
> org.apache.hadoop.metrics2.lib.MutableRangeHistogram.add(MutableRangeHistogram.java:59)
> at 
> org.apache.hadoop.hbase.ipc.MetricsHBaseServerSourceImpl.dequeuedCall(MetricsHBaseServerSourceImpl.java:194)
> at 
> org.apache.hadoop.hbase.ipc.MetricsHBaseServer.dequeuedCall(MetricsHBaseServer.java:76)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2192)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> We were using jdk 1.8.0_92 and here is a snippet from ThreadLocal.java.
> {code}
> 616:while (tab[h] != null)
> 617:h = nextIndex(h, len);
> {code}
> So I hypothesized that there were too many consecutive entries in the {{tab}} 
> array, and indeed I found them in the heap dump.
> !ScreenShot 2016-09-09 14.17.53.png|width=50%!
> Most of these entries pointed at instances of 
> {{org.apache.hadoop.hbase.util.Counter$1}},
> which is equivalent to the {{indexHolderThreadLocal}} instance variable in the 
> {{Counter}} class.
> Because the {{RpcServer$Connection}} class creates a {{Counter}} instance 
> {{rpcCount}} for every connection,
> it is possible to have lots of {{Counter#indexHolderThreadLocal}} instances 
> in the RegionServer process
> when we repeat connect-and-close from the client. As a result, a 
> ThreadLocalMap can have lots of consecutive
> entries.
> Usually, since each entry is a {{WeakReference}}, these entries are collected 
> and removed
> by the garbage collector soon after the connection is closed.
> But if connection's life-time was long enough to survive 

[jira] [Commented] (HBASE-16616) Rpc handlers stuck on ThreadLocalMap.expungeStaleEntry

2016-12-01 Thread Tomu Tsuruhara (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713580#comment-15713580
 ] 

Tomu Tsuruhara commented on HBASE-16616:


Actually, a patch here was not enough to resolve the issue because of the same 
reason mentioned here 
https://issues.apache.org/jira/browse/HBASE-17072?focusedCommentId=15675394=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15675394

HBASE-16146 seems more essential. It removes {{ThreadLocal}} usage from 
{{Counter}} entirely.

> Rpc handlers stuck on ThreadLocalMap.expungeStaleEntry
> --
>
> Key: HBASE-16616
> URL: https://issues.apache.org/jira/browse/HBASE-16616
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 1.2.2
>Reporter: Tomu Tsuruhara
>Assignee: Tomu Tsuruhara
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16616.branch-1.v2.txt, HBASE-16616.master.001.patch, 
> HBASE-16616.master.002.patch, ScreenShot 2016-09-09 14.17.53.png
>
>
> In our HBase 1.2.2 cluster, some regionservers showed a very bad 
> "QueueCallTime_99th_percentile", exceeding 10 seconds.
> Most rpc handler threads were stuck on the ThreadLocalMap.expungeStaleEntry 
> call at that time.
> {noformat}
> "PriorityRpcServer.handler=18,queue=0,port=16020" #322 daemon prio=5 
> os_prio=0 tid=0x7fd422062800 nid=0x19b89 runnable [0x7fcb8a821000]
>java.lang.Thread.State: RUNNABLE
> at 
> java.lang.ThreadLocal$ThreadLocalMap.expungeStaleEntry(ThreadLocal.java:617)
> at java.lang.ThreadLocal$ThreadLocalMap.remove(ThreadLocal.java:499)
> at 
> java.lang.ThreadLocal$ThreadLocalMap.access$200(ThreadLocal.java:298)
> at java.lang.ThreadLocal.remove(ThreadLocal.java:222)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryReleaseShared(ReentrantReadWriteLock.java:426)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.releaseShared(AbstractQueuedSynchronizer.java:1341)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.unlock(ReentrantReadWriteLock.java:881)
> at 
> com.yammer.metrics.stats.ExponentiallyDecayingSample.unlockForRegularUsage(ExponentiallyDecayingSample.java:196)
> at 
> com.yammer.metrics.stats.ExponentiallyDecayingSample.update(ExponentiallyDecayingSample.java:113)
> at 
> com.yammer.metrics.stats.ExponentiallyDecayingSample.update(ExponentiallyDecayingSample.java:81)
> at 
> org.apache.hadoop.metrics2.lib.MutableHistogram.add(MutableHistogram.java:81)
> at 
> org.apache.hadoop.metrics2.lib.MutableRangeHistogram.add(MutableRangeHistogram.java:59)
> at 
> org.apache.hadoop.hbase.ipc.MetricsHBaseServerSourceImpl.dequeuedCall(MetricsHBaseServerSourceImpl.java:194)
> at 
> org.apache.hadoop.hbase.ipc.MetricsHBaseServer.dequeuedCall(MetricsHBaseServer.java:76)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2192)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> We were using jdk 1.8.0_92 and here is a snippet from ThreadLocal.java.
> {code}
> 616:while (tab[h] != null)
> 617:h = nextIndex(h, len);
> {code}
> So I hypothesized that there were too many consecutive entries in the {{tab}} 
> array, and indeed I found them in the heap dump.
> !ScreenShot 2016-09-09 14.17.53.png|width=50%!
> Most of these entries pointed at instances of 
> {{org.apache.hadoop.hbase.util.Counter$1}},
> which is equivalent to the {{indexHolderThreadLocal}} instance variable in the 
> {{Counter}} class.
> Because the {{RpcServer$Connection}} class creates a {{Counter}} instance 
> {{rpcCount}} for every connection,
> it is possible to have lots of {{Counter#indexHolderThreadLocal}} instances 
> in the RegionServer process
> when we repeat connect-and-close from the client. As a result, a 
> ThreadLocalMap can have lots of consecutive
> entries.
> Usually, since each entry is a {{WeakReference}}, these entries are collected 
> and removed
> by the garbage collector soon after the connection is closed.
> But if a connection's lifetime was long enough to survive a young GC, it 
> wouldn't be collected until the old-generation collector runs.
> Furthermore, under a G1GC deployment, it may not be collected even by the 
> old-generation (mixed) GC
> if the entries sit in a region which doesn't have much garbage.
> We were in fact using G1GC when we encountered this problem.
> We should remove the entry from the ThreadLocalMap by calling 
> ThreadLocal#remove explicitly.
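The explicit-remove fix suggested above can be shown with plain JDK code. This is a hedged sketch, not the HBase patch; the demo class is invented here:

```java
// Calling ThreadLocal#remove deletes the entry from the current thread's
// ThreadLocalMap immediately, instead of leaving a stale WeakReference entry
// for expungeStaleEntry / the garbage collector to clean up later.
public class ThreadLocalCleanupDemo {
    private static final ThreadLocal<int[]> HOLDER =
        ThreadLocal.withInitial(() -> new int[1]);

    public static int useAndCleanUp() {
        HOLDER.get()[0] = 42;   // creates an entry in this thread's map
        int value = HOLDER.get()[0];
        HOLDER.remove();        // entry gone now; no stale-entry scan needed
        return value;
    }

    public static void main(String[] args) {
        System.out.println(useAndCleanUp()); // prints 42
    }
}
```

The point is that cleanup happens at a well-defined moment (e.g. connection close) rather than whenever a later ThreadLocal operation happens to probe past the stale slot.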



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17051) libhbase++: implement RPC client and connection management

2016-12-01 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713566#comment-15713566
 ] 

Enis Soztutar commented on HBASE-17051:
---

- We do not need two different modules, connection and ipc; we should just 
stick with one. 
- We should use camel case in function names for regular methods, accessor 
methods can still be _ delimited (following Google style guides: 
https://google.github.io/styleguide/cppguide.html#Function_Names). 
 - In {{ConnectionId}} we should also have the serviceName as well, since 
servers can implement more than one PB Service, and each should have its own 
TPC socket (see 
https://github.com/apache/hbase/blob/bb3d9ccd489fb64e3cb2020583935a393382a678/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/ConnectionId.java).
 Also instead of using {{ServerName}} here, we should use the hostname/ip + 
port. 
 - The Java RpcClient holds the ConnectionPool, whereas here it is only for a 
single server. I think we should follow the same model (see 
AsyncRpcClient.java). 
 - How are you going to use the ClientDispatcher? Also we seem to be creating 
one dispatcher per connection (I am not sure that is correct). We want 
something like RpcClient.newStub() or newService() returning a PB Service 
whose methods we can call.
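The ConnectionId point can be sketched as follows. This is a hedged Java illustration of the model on the Java side, not the proposed C++ code; the class name and fields here are invented for the demo:

```java
// If two PB services on the same server must get separate sockets, the
// service name has to participate in equals()/hashCode() so the connection
// pool keys them apart; hostname/ip + port are used instead of ServerName.
import java.util.Objects;

public final class ConnectionIdSketch {
    private final String host;
    private final int port;
    private final String serviceName;

    public ConnectionIdSketch(String host, int port, String serviceName) {
        this.host = host;
        this.port = port;
        this.serviceName = serviceName;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ConnectionIdSketch)) {
            return false;
        }
        ConnectionIdSketch other = (ConnectionIdSketch) o;
        return port == other.port
            && host.equals(other.host)
            && serviceName.equals(other.serviceName);
    }

    @Override
    public int hashCode() {
        return Objects.hash(host, port, serviceName);
    }
}
```

With serviceName in the key, a pool map keyed by ConnectionIdSketch naturally yields one connection per (server, service) pair.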
  

> libhbase++: implement RPC client and connection management
> --
>
> Key: HBASE-17051
> URL: https://issues.apache.org/jira/browse/HBASE-17051
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HBASE-17051-HBASE-14850.000.patch, 
> HBASE-17051-HBASE-14850.001.patch, HBASE-17051-HBASE-14850.002.patch, 
> HBASE-17051-HBASE-14850.003.patch
>
>
> This proposes building an RPC client and connection management layer, which 
> supports functions equivalent to those in RpcClient.java and 
> RpcConnection.java. Specifically, handler/pipeline concepts will be used for 
> the implementation, similar to NettyRpcClient and NettyRpcConnection on the 
> Java side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17205) Add a metric for the duration of region in transition

2016-12-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713480#comment-15713480
 ] 

Hudson commented on HBASE-17205:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2055 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2055/])
HBASE-17205 Add a metric for the duration of region in transition 
(matteo.bertozzi: rev b3d8d06703a34d48d1fd14ab2c77439ce9cfff6c)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetricsAssignmentManager.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/master/RegionState.java
* (edit) 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsAssignmentManagerSourceImpl.java
* (edit) 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsAssignmentManagerSource.java


> Add a metric for the duration of region in transition
> -
>
> Key: HBASE-17205
> URL: https://issues.apache.org/jira/browse/HBASE-17205
> Project: HBase
>  Issue Type: Improvement
>  Components: Region Assignment
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17205-branch-1.patch, HBASE-17205-v1.patch, 
> HBASE-17205-v1.patch, HBASE-17205.patch
>
>
> While working on HBASE-17178, I found there is no metric for the overall 
> duration of a region in transition. When moving a region from A to B, the 
> region state transitions PENDING_CLOSE => CLOSING => CLOSED => 
> PENDING_OPEN => OPENING => OPENED. Each state transition updates the 
> timestamp to the current time, so we can't get the overall duration of a 
> region in transition. Add a RIT duration to RegionState to accumulate this 
> metric.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17128) Find Cause of a Write Perf Regression in branch-1.2

2016-12-01 Thread Graham Baecher (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713475#comment-15713475
 ] 

Graham Baecher commented on HBASE-17128:


Testing with G1GC enabled and the patches from HBASE-16616 and HBASE-16146, it 
looks like there's still a regression, though there's some improvement with 
those two patches. I wasn't able to trivially apply the patch from HBASE-17072, 
so I haven't included those changes yet.

It looks like there might be more than just these few patches affecting G1GC 
performance, but if possible, a backport of HBASE-17072 would be helpful.

My next step will be trying to isolate the performance difference in 
1.2.0-cdh5.9.0 between G1GC and the default GC. Hopefully I can narrow it down 
to one or more of the likely commits you mentioned at the top.

> Find Cause of a Write Perf Regression in branch-1.2
> ---
>
> Key: HBASE-17128
> URL: https://issues.apache.org/jira/browse/HBASE-17128
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>
> As reported by [~gbaecher] up on the mailing list, there is a regression in 
> 1.2. The regression is actually in a CDH version of 1.2, but the CDH hbase is 
> a near-pure 1.2. This is a working issue to figure out which of the below 
> changes brought on slower writes (the list comes from doing the 
> following...git log --oneline  
> remotes/origin/cdh5-1.2.0_5.8.0_dev..remotes/origin/cdh5-1.2.0_5.9.0_dev ... 
> I stripped the few CDH-specific changes, packaging and tagging only, and then 
> made two groupings; candidates and the unlikelies):
> {code}
>   1 bbc6762 HBASE-16023 Fastpath for the FIFO rpcscheduler Adds an executor 
> that does balanced queue and fast path handing off requests directly to 
> waiting handlers if any present. Idea taken from Apace Kudu (incubating). See 
> https://gerr#
>   2 a260917 HBASE-16288 HFile intermediate block level indexes might recurse 
> forever creating multi TB files
>   3 5633281 HBASE-15811 Batch Get after batch Put does not fetch all Cells We 
> were not waiting on all executors in a batch to complete. The test for 
> no-more-executors was damaged by the 0.99/0.98.4 fix "HBASE-11403 Fix race 
> conditions aro#
>   4 780f720 HBASE-11625 - Verifies data before building HFileBlock. - Adds 
> HFileBlock.Header class which contains information about location of fields. 
> Testing: Adds CorruptedFSReaderImpl to TestChecksum. (Apekshit)
>   5 d735680 HBASE-12133 Add FastLongHistogram for metric computation (Yi Deng)
>   6 c4ee832 HBASE-15222 Use less contended classes for metrics
>   7
>   8 17320a4 HBASE-15683 Min latency in latency histograms are emitted as 
> Long.MAX_VALUE
>   9 283b39f HBASE-15396 Enhance mapreduce.TableSplit to add encoded region 
> name
>  10 39db592 HBASE-16195 Should not add chunk into chunkQueue if not using 
> chunk pool in HeapMemStoreLAB
>  11 5ff28b7 HBASE-16194 Should count in MSLAB chunk allocation into heap size 
> change when adding duplicate cells
>  12 5e3e0d2 HBASE-16318 fail build while rendering velocity template if 
> dependency license isn't in whitelist.
>  13 3ed66e3 HBASE-16318 consistently use the correct name for 'Apache 
> License, Version 2.0'
>  14 351832d HBASE-16340 exclude Xerces iplementation jars from coming in 
> transitively.
>  15 b6aa4be HBASE-16321 ensure no findbugs-jsr305
>  16 4f9dde7 HBASE-16317 revert all ESAPI changes
>  17 71b6a8a HBASE-16284 Unauthorized client can shutdown the cluster (Deokwoo 
> Han)
>  18 523753f HBASE-16450 Shell tool to dump replication queues
>  19 ca5f2ee HBASE-16379 [replication] Minor improvement to 
> replication/copy_tables_desc.rb
>  20 effd105 HBASE-16135 PeerClusterZnode under rs of removed peer may never 
> be deleted
>  21 a5c6610 HBASE-16319 Fix TestCacheOnWrite after HBASE-16288
>  22 1956bb0 HBASE-15808 Reduce potential bulk load intermediate space usage 
> and waste
>  23 031c54e HBASE-16096 Backport. Cleanly remove replication peers from 
> ZooKeeper.
>  24 60a3b12 HBASE-14963 Remove use of Guava Stopwatch from HBase client code 
> (Devaraj Das)
>  25 c7724fc HBASE-16207 can't restore snapshot without "Admin" permission
>  26 8322a0b HBASE-16227 [Shell] Column value formatter not working in scans. 
> Tested : manually using shell.
>  27 8f86658 HBASE-14818 user_permission does not list namespace permissions 
> (li xiang)
>  28 775cd21 HBASE-15465 userPermission returned by getUserPermission() for 
> the selected namespace does not have namespace set (li xiang)
>  29 8d85aff HBASE-16093 Fix splits failed before creating daughter regions 
> leave meta inconsistent
>  30 bc41317 HBASE-16140 bump owasp.esapi from 2.1.0 to 2.1.0.1
>  31 6fc70cd HBASE-16035 Nested AutoCloseables might not all get closed (Sean 
> Mackrory)
>  32 fe28fe84 HBASE-15891. Closeable resources potentially not getting 

[jira] [Commented] (HBASE-17111) Use Apache CLI in SnapshotInfo tool

2016-12-01 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713465#comment-15713465
 ] 

stack commented on HBASE-17111:
---

+1

> Use Apache CLI in SnapshotInfo tool
> ---
>
> Key: HBASE-17111
> URL: https://issues.apache.org/jira/browse/HBASE-17111
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-17111.master.001.patch
>
>
> AbstractHBaseTool uses Apache CLI to manage command line options, help, etc. 
> We should use it for all tools. This jira is about changing SnapshotInfo to 
> use the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17224) There are lots of spelling errors in the HBase logging and exception messages

2016-12-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713449#comment-15713449
 ] 

Hudson commented on HBASE-17224:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #70 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/70/])
HBASE-17224 Fix lots of spelling errors in HBase logging and exception 
(jmhsieh: rev 8e0b8052ed9554b5f9552fc61ea7547001ab43cb)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/HDFSBlocksDistribution.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionPlacementMaintainer.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/HFileCorruptionChecker.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/TableAuthManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/AuthenticationTokenSecretManager.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ZKProcedureMemberRpcs.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentVerificationReport.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HeapMemoryManager.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/BoundedByteBufferPool.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransactionImpl.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/WALSplitterHandler.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactedHFilesDischarger.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/FavoredNodeAssignmentHelper.java


> There are lots of spelling errors in the HBase logging and exception messages
> -
>
> Key: HBASE-17224
> URL: https://issues.apache.org/jira/browse/HBASE-17224
> Project: HBase
>  Issue Type: Bug
>  Components: Client, io, mapreduce, master, regionserver, security, 
> wal
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Trivial
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-17224.1.patch, hbase-17224.branch-1.patch
>
>
> Found a bunch of spelling errors in log messages and exception messages such 
> as "Stoping" instead of "Stopping", "alligned" instead of "aligned".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17194) Assign the new region to the idle server after splitting

2016-12-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713424#comment-15713424
 ] 

Hadoop QA commented on HBASE-17194:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
30m 1s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 104m 5s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 147m 40s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.client.TestAvoidCellReferencesIntoShippedBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841348/HBASE-17194.v3.patch |
| JIRA Issue | HBASE-17194 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 0f6503c7909c 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / b3d8d06 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4751/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/4751/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4751/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4751/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Assign the new region to the idle server after splitting
> 
>
>   

[jira] [Commented] (HBASE-16841) Data loss in MOB files after cloning a snapshot and deleting that snapshot

2016-12-01 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713396#comment-15713396
 ] 

Matteo Bertozzi commented on HBASE-16841:
-

Sorry, I missed the ping. 
The snapshot code moved, so the patch does not apply to current master; it 
needs a little change.
The patch looks good. The only thing is that we can replace the two for loops 
that check hcd.isMobEnabled() with MobUtil.hasMobColumns(htd).
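The suggested helper amounts to a one-line any-match over the table's column families. This is a hedged sketch only; the real signature in HBase may differ, and the stand-in family class below is invented for the demo:

```java
// Scan the column families once and report whether any is MOB-enabled,
// replacing hand-rolled for loops at the call sites.
import java.util.List;

public class MobCheckSketch {
    // Stand-in for HColumnDescriptor, for this demo only.
    public static final class FamilySketch {
        private final boolean mobEnabled;

        public FamilySketch(boolean mobEnabled) {
            this.mobEnabled = mobEnabled;
        }

        public boolean isMobEnabled() {
            return mobEnabled;
        }
    }

    public static boolean hasMobColumns(List<FamilySketch> families) {
        return families.stream().anyMatch(FamilySketch::isMobEnabled);
    }
}
```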

> Data loss in MOB files after cloning a snapshot and deleting that snapshot
> --
>
> Key: HBASE-16841
> URL: https://issues.apache.org/jira/browse/HBASE-16841
> Project: HBase
>  Issue Type: Bug
>  Components: mob, snapshots
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HBASE-16841-V2.patch, HBASE-16841-V3.patch, 
> HBASE-16841-V4.patch, HBASE-16841-V5.patch, HBASE-16841.patch
>
>
> Running the following steps will probably lose MOB data when working with 
> snapshots.
> 1. Create a mob-enabled table by running create 't1', {NAME => 'f1', IS_MOB 
> => true, MOB_THRESHOLD => 0}.
> 2. Put millions of data.
> 3. Run {{snapshot 't1','t1_snapshot'}} to take a snapshot for this table t1.
> 4. Run {{clone_snapshot 't1_snapshot','t1_cloned'}} to clone this snapshot.
> 5. Run {{delete_snapshot 't1_snapshot'}} to delete this snapshot.
> 6. Run {{disable 't1'}} and {{delete 't1'}} to delete the table.
> 7. Now go to the archive directory of t1, the number of .link directories is 
> different from the number of hfiles which means some data will be lost after 
> the hfile cleaner runs.
> This is because, when taking a snapshot on an enabled mob table, each region 
> flushes itself and takes a snapshot, and the mob snapshot is taken only if 
> the current region is the first region of the table. At that time, the 
> flushing of some regions might not be finished, and some mob files are not 
> flushed to disk yet. Eventually some mob files are not recorded in the 
> snapshot manifest.
> To solve this, we need to take the mob snapshot last, after the snapshots 
> on all the online and offline regions have finished in 
> {{EnabledTableSnapshotHandler}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15902) Scan Object

2016-12-01 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713345#comment-15713345
 ] 

Enis Soztutar commented on HBASE-15902:
---

The patch looks pretty good. Just some last items: 
In the Scan copy constructors (same thing in assignment), I think we need to do 
a deep-copy of the family vectors. In the Get copy-constructors we do not copy 
the FamilyMap, so we should do that there as well.  
{code}
+  family_map_.insert(scan.family_map_.begin(), scan.family_map_.end());
{code}

This is what the java code does: 
{code}
for (Map.Entry<byte[], NavigableSet<byte[]>> entry : fams.entrySet()) {
  byte[] fam = entry.getKey();
  NavigableSet<byte[]> cols = entry.getValue();
  if (cols != null && cols.size() > 0) {
    for (byte[] col : cols) {
      addColumn(fam, col);
    }
  } else {
    addFamily(fam);
  }
}
{code}

 - Get has FamilyMap(), HasFamilies(), etc. Let's add those to Scan as well 
to bring the two on par. 

 - In the cpplint scan (HBASE-17220), one issue that came up was converting 
all usages of {{long}}, which is not portable, to fixed-width types (int64, 
etc.). But let's leave those as they are for this patch, since we will address 
them in the HBASE-17220 patch. 
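The deep-copy point can be made concrete with a small Java example. This is a hedged sketch, not the Scan/Get implementation; string keys stand in for byte[] families to keep the demo simple:

```java
// Copying a family -> columns map entry-by-entry (as the quoted loop does via
// addColumn/addFamily) gives the copy its own column sets, so mutating the
// copy cannot leak into the original Scan/Get.
import java.util.HashMap;
import java.util.Map;
import java.util.NavigableSet;
import java.util.TreeSet;

public class FamilyMapCopySketch {
    public static Map<String, NavigableSet<String>> deepCopy(
            Map<String, NavigableSet<String>> fams) {
        Map<String, NavigableSet<String>> copy = new HashMap<>();
        for (Map.Entry<String, NavigableSet<String>> entry : fams.entrySet()) {
            // a new TreeSet per family, not a shared reference
            copy.put(entry.getKey(), new TreeSet<>(entry.getValue()));
        }
        return copy;
    }
}
```

A shallow `family_map_.insert(...)` style copy would instead share the per-family column containers between the two objects.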




> Scan Object
> ---
>
> Key: HBASE-15902
> URL: https://issues.apache.org/jira/browse/HBASE-15902
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
>Assignee: Sudeep Sunthankar
> Attachments: HBASE-15902.HBASE-14850.patch, 
> HBASE-15902.HBASE-14850.v2.patch, HBASE-15902.HBASE-14850.v3.patch
>
>
> Patch for creating Scan objects. Scan objects thus created can be used by 
> Table implementation to fetch results for a given row.





[jira] [Commented] (HBASE-17194) Assign the new region to the idle server after splitting

2016-12-01 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713329#comment-15713329
 ] 

Ted Yu commented on HBASE-17194:


{code}
+if (load != null && idleServerPredicator != null) {
{code}
The null check for idleServerPredicator can be lifted out of the loop.
{code}
+  static final Predicate<ServerLoad> SERVER_PREDICATOR
+    = load -> load.getNumberOfRegions() == 0;
{code}
Name the variable EMPTY_SERVER_PREDICATOR (SERVER_PREDICATOR is too general).
Since an arbitrary predicate can be passed to 
getOnlineServersListWithIdlePredicator(), name it 
getOnlineServersListWithPredicator() instead.

Looks good overall.
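The review suggestions above (a specifically named idle-server predicate plus a generic filtering method that accepts any predicate) can be sketched like this. The stand-in ServerLoad class below is a minimal assumption with only a region count; the real HBase class carries much more state.

```java
import java.util.*;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Sketch of predicate-based server filtering, assuming a minimal ServerLoad.
public class IdleServerFilter {
  static final class ServerLoad {
    final String name;
    final int numberOfRegions;
    ServerLoad(String name, int numberOfRegions) {
      this.name = name;
      this.numberOfRegions = numberOfRegions;
    }
    int getNumberOfRegions() { return numberOfRegions; }
  }

  // Matches only servers currently hosting zero regions (the "idle" servers);
  // the specific name documents what the predicate selects.
  static final Predicate<ServerLoad> EMPTY_SERVER_PREDICATOR =
      load -> load.getNumberOfRegions() == 0;

  // Generic version: any predicate can be passed, as suggested in the review.
  static List<String> onlineServersWithPredicator(
      List<ServerLoad> servers, Predicate<ServerLoad> p) {
    return servers.stream()
        .filter(s -> s != null && p.test(s)) // null check lifted out of caller loops
        .map(s -> s.name)
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<ServerLoad> servers = List.of(
        new ServerLoad("rs1", 12), new ServerLoad("rs2", 0), new ServerLoad("rs3", 0));
    System.out.println(onlineServersWithPredicator(servers, EMPTY_SERVER_PREDICATOR));
    // prints [rs2, rs3]
  }
}
```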

> Assign the new region to the idle server after splitting
> 
>
> Key: HBASE-17194
> URL: https://issues.apache.org/jira/browse/HBASE-17194
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17194.v0.patch, HBASE-17194.v1.patch, 
> HBASE-17194.v2.patch, HBASE-17194.v3.patch, evaluation-v0.png, tests.xlsx
>
>
> The new regions are assigned to random servers after splitting, but there 
> are always some idle servers which are not assigned any regions on a new 
> cluster. That is a bad start for load balancing, hence we should give 
> priority to the idle servers for assignment.





[jira] [Commented] (HBASE-17205) Add a metric for the duration of region in transition

2016-12-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713291#comment-15713291
 ] 

Hudson commented on HBASE-17205:


SUCCESS: Integrated in Jenkins build HBase-1.4 #556 (See 
[https://builds.apache.org/job/HBase-1.4/556/])
HBASE-17205 Add a metric for the duration of region in transition 
(matteo.bertozzi: rev 682dd57cd63f3ab0786d8bcc31a38ffd400f81b1)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetricsAssignmentManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java
* (edit) 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsAssignmentManagerSourceImpl.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/master/RegionState.java
* (edit) 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsAssignmentManagerSource.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionStates.java


> Add a metric for the duration of region in transition
> -
>
> Key: HBASE-17205
> URL: https://issues.apache.org/jira/browse/HBASE-17205
> Project: HBase
>  Issue Type: Improvement
>  Components: Region Assignment
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17205-branch-1.patch, HBASE-17205-v1.patch, 
> HBASE-17205-v1.patch, HBASE-17205.patch
>
>
> While working on HBASE-17178, I found there is no metric for the overall 
> duration of a region in transition. When moving a region from A to B, the 
> region state transitions PENDING_CLOSE => CLOSING => CLOSED => 
> PENDING_OPEN => OPENING => OPENED. Each transition from the old region state 
> to the new one updates the timestamp to the current time, so we can't get the 
> overall duration of the region in transition. Add a RIT duration to 
> RegionState for accumulating this metric.
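The accumulation idea from the description can be sketched as below. This is an illustration of the approach only, not the HBASE-17205 patch; the class and field names are hypothetical.

```java
// Sketch: each state change resets the per-state timestamp, so an extra field
// is needed to accumulate the total time the region spends in transition.
public class RegionStateSketch {
  enum State { PENDING_CLOSE, CLOSING, CLOSED, PENDING_OPEN, OPENING, OPENED }

  private State state;
  private long stamp;        // reset on every transition
  private long ritDuration;  // accumulated across all transitions

  RegionStateSketch(State initial, long now) {
    this.state = initial;
    this.stamp = now;
  }

  void transition(State next, long now) {
    ritDuration += now - stamp; // accumulate before resetting the stamp
    state = next;
    stamp = now;
  }

  State getState() { return state; }

  long getRitDuration() { return ritDuration; }

  public static void main(String[] args) {
    RegionStateSketch rs = new RegionStateSketch(State.PENDING_CLOSE, 0);
    rs.transition(State.CLOSING, 100);
    rs.transition(State.CLOSED, 250);
    rs.transition(State.PENDING_OPEN, 300);
    rs.transition(State.OPENING, 450);
    rs.transition(State.OPENED, 600);
    System.out.println(rs.getRitDuration()); // 600: total time across all states
  }
}
```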





[jira] [Commented] (HBASE-17224) There are lots of spelling errors in the HBase logging and exception messages

2016-12-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713292#comment-15713292
 ] 

Hudson commented on HBASE-17224:


SUCCESS: Integrated in Jenkins build HBase-1.4 #556 (See 
[https://builds.apache.org/job/HBase-1.4/556/])
HBASE-17224 Fix lots of spelling errors in HBase logging and exception 
(jmhsieh: rev 9da0d5d00ed9a776643dc4d9faed1570edcbd3b9)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionPlacementMaintainer.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentVerificationReport.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactedHFilesDischarger.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/WALSplitterHandler.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/TableAuthManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HeapMemoryManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransactionImpl.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/AuthenticationTokenSecretManager.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ZKProcedureMemberRpcs.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/HDFSBlocksDistribution.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/HFileCorruptionChecker.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/FavoredNodeAssignmentHelper.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/BoundedByteBufferPool.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java


> There are lots of spelling errors in the HBase logging and exception messages
> -
>
> Key: HBASE-17224
> URL: https://issues.apache.org/jira/browse/HBASE-17224
> Project: HBase
>  Issue Type: Bug
>  Components: Client, io, mapreduce, master, regionserver, security, 
> wal
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Trivial
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-17224.1.patch, hbase-17224.branch-1.patch
>
>
> Found a bunch of spelling errors in log messages and exception messages such 
> as "Stoping" instead of "Stopping", "alligned" instead of "aligned".





[jira] [Created] (HBASE-17226) [C++] Filter and Query classes

2016-12-01 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-17226:
-

 Summary: [C++] Filter and Query classes
 Key: HBASE-17226
 URL: https://issues.apache.org/jira/browse/HBASE-17226
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar


Implement {{Query}} and {{Filter}}. 





[jira] [Commented] (HBASE-17224) There are lots of spelling errors in the HBase logging and exception messages

2016-12-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713245#comment-15713245
 ] 

Hudson commented on HBASE-17224:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK8 #81 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/81/])
HBASE-17224 Fix lots of spelling errors in HBase logging and exception 
(jmhsieh: rev 8e0b8052ed9554b5f9552fc61ea7547001ab43cb)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ZKProcedureMemberRpcs.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransactionImpl.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentVerificationReport.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/WALSplitterHandler.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/BoundedByteBufferPool.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/AuthenticationTokenSecretManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/HFileCorruptionChecker.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/TableAuthManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/FavoredNodeAssignmentHelper.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/HDFSBlocksDistribution.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HeapMemoryManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionPlacementMaintainer.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactedHFilesDischarger.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java


> There are lots of spelling errors in the HBase logging and exception messages
> -
>
> Key: HBASE-17224
> URL: https://issues.apache.org/jira/browse/HBASE-17224
> Project: HBase
>  Issue Type: Bug
>  Components: Client, io, mapreduce, master, regionserver, security, 
> wal
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Trivial
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-17224.1.patch, hbase-17224.branch-1.patch
>
>
> Found a bunch of spelling errors in log messages and exception messages such 
> as "Stoping" instead of "Stopping", "alligned" instead of "aligned".





[jira] [Commented] (HBASE-15704) Refactoring: Move HFileArchiver from backup to tool package, remove backup.examples

2016-12-01 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713176#comment-15713176
 ] 

Vladimir Rodionov commented on HBASE-15704:
---

[~enis] when you have time, please take a look at this patch.

> Refactoring: Move HFileArchiver from backup to tool package, remove 
> backup.examples
> ---
>
> Key: HBASE-15704
> URL: https://issues.apache.org/jira/browse/HBASE-15704
> Project: HBase
>  Issue Type: Task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-15704-v2.patch, HBASE-15704-v3.patch
>
>
> This class is in the backup package (as are the backup/examples classes) but 
> is not backup-related. Also remove the examples classes from the codebase.





[jira] [Commented] (HBASE-17194) Assign the new region to the idle server after splitting

2016-12-01 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713168#comment-15713168
 ] 

Jerry He commented on HBASE-17194:
--

+1 on v3.

> Assign the new region to the idle server after splitting
> 
>
> Key: HBASE-17194
> URL: https://issues.apache.org/jira/browse/HBASE-17194
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17194.v0.patch, HBASE-17194.v1.patch, 
> HBASE-17194.v2.patch, HBASE-17194.v3.patch, evaluation-v0.png, tests.xlsx
>
>
> The new regions are assigned to random servers after splitting, but there 
> are always some idle servers which are not assigned any regions on a new 
> cluster. That is a bad start for load balancing, hence we should give 
> priority to the idle servers for assignment.





[jira] [Commented] (HBASE-16119) Procedure v2 - Reimplement merge

2016-12-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713169#comment-15713169
 ] 

Hadoop QA commented on HBASE-16119:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 41s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 17m 
35s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
56s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 5s 
{color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 17m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 43 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 27s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:red}-1{color} | {color:red} hbaseprotoc {color} | {color:red} 0m 20s 
{color} | {color:red} Patch generated 1 new protoc errors in hbase-server. 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
48s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 19s 
{color} | {color:red} hbase-client generated 2 new + 13 unchanged - 0 fixed = 
15 total (was 13) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 31s 
{color} | {color:red} hbase-server generated 6 new + 1 unchanged - 0 fixed = 7 
total (was 1) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 29s 
{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 31s 
{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 5s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 99m 40s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 
0s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 196m 11s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| 

[jira] [Commented] (HBASE-17221) Abstract out an interface for RpcServer.Call

2016-12-01 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713165#comment-15713165
 ] 

Jerry He commented on HBASE-17221:
--

OK. There is already an org.apache.hadoop.hbase.ipc.Call in hbase-client, so 
I'll pick another name.

> Abstract out an interface for RpcServer.Call
> 
>
> Key: HBASE-17221
> URL: https://issues.apache.org/jira/browse/HBASE-17221
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Jerry He
> Fix For: 2.0.0
>
> Attachments: HBASE-17221.patch
>
>
> RpcServer.Call is a concrete class, but it is marked as:
> {noformat}
> @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, 
> HBaseInterfaceAudience.PHOENIX})
> {noformat}
> Let's abstract out an interface out of it for potential consumers that want 
> to pass it around.
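The proposed refactoring pattern (extracting an interface from a concrete class so consumers depend on the abstraction) can be sketched like this. The names RpcCall and ServerCall are purely illustrative, not the names chosen for the patch.

```java
// Sketch: extract an interface from a concrete Call class so coprocessor
// consumers can pass the interface around instead of the implementation.
public class InterfaceExtraction {
  interface RpcCall {
    String getMethodName();
    long getReceiveTime();
  }

  // The existing concrete class keeps its behavior, now behind the interface.
  static final class ServerCall implements RpcCall {
    private final String method;
    private final long receiveTime;
    ServerCall(String method, long receiveTime) {
      this.method = method;
      this.receiveTime = receiveTime;
    }
    public String getMethodName() { return method; }
    public long getReceiveTime() { return receiveTime; }
  }

  // Consumers accept the interface, never the concrete type.
  static String describe(RpcCall call) {
    return call.getMethodName() + "@" + call.getReceiveTime();
  }

  public static void main(String[] args) {
    System.out.println(describe(new ServerCall("Get", 42L))); // Get@42
  }
}
```

Keeping the concrete class out of consumer signatures means its internals can change without breaking LimitedPrivate users.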




