[jira] [Commented] (HBASE-14925) Develop HBase shell command/tool to list table's region info through command line

2017-04-05 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958305#comment-15958305
 ] 

Ashish Singhi commented on HBASE-14925:
---

bq. This is to determine the time taken for the command to execute, the 
start_time variable is initialized before the call to this function.
I could not find the start_time variable in the patch, and the end_time 
variable's value is not used anywhere in the patch either. What am I missing? 
Can you point me to it?

> Develop HBase shell command/tool to list table's region info through command 
> line
> -
>
> Key: HBASE-14925
> URL: https://issues.apache.org/jira/browse/HBASE-14925
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Romil Choksi
>Assignee: Karan Mehta
> Attachments: HBASE-14925.002.patch, HBASE-14925.003.patch, 
> HBASE-14925.patch
>
>
> I am going through the hbase shell commands to see if there is anything I can 
> use to get all the regions info just for a particular table. I don’t see any 
> such command that provides me that information.
> It would be better to have a command that provides the region info, start key, 
> end key, etc., taking a table name as the input parameter. This is available 
> through the HBase UI by clicking on a particular table's link.
> A tool/shell command to get a list of regions for a table, or for all tables, in a 
> tabular, structured output (that is, machine readable) would cover this.
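
As a rough illustration of the information such a command would surface, here is a 
minimal Java client sketch (not the proposed shell command; the table name and 
connection setup are placeholders) that prints each region's name, start key, end key 
and hosting server for one table in a tab-separated, machine-readable form:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class ListTableRegions {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Table name is an example; pass the real table as the first argument.
    TableName table = TableName.valueOf(args.length > 0 ? args[0] : "t1");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(table)) {
      // Tab-separated so the output stays machine readable.
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegionInfo().getRegionNameAsString() + "\t"
            + Bytes.toStringBinary(loc.getRegionInfo().getStartKey()) + "\t"
            + Bytes.toStringBinary(loc.getRegionInfo().getEndKey()) + "\t"
            + loc.getServerName());
      }
    }
  }
}
{code}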



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17877) Replace/improve HBase's byte[] comparator

2017-04-05 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958302#comment-15958302
 ] 

Duo Zhang commented on HBASE-17877:
---

Agree that we'd better use JMH to test the performance.

And a single comparison per invocation is too few, I think, especially for short keys.

Thanks.
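
For reference, a minimal JMH sketch along those lines (the key lengths, key count and 
the use of Bytes.compareTo as the measured call are assumptions; the patch's comparator 
would be plugged in the same way), doing many comparisons per invocation rather than one:
{code}
import java.util.Random;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.util.Bytes;
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class ByteCompareBench {

  @Param({"8", "17", "64"})          // short keys matter most, per the comment above
  public int keyLen;

  private byte[][] keys;

  @Setup
  public void setup() {
    Random r = new Random(42);
    keys = new byte[1024][];
    for (int i = 0; i < keys.length; i++) {
      keys[i] = new byte[keyLen];
      r.nextBytes(keys[i]);
    }
  }

  @Benchmark
  public void compareMany(Blackhole bh) {
    // Many comparisons per invocation so that per-call overhead does not dominate.
    for (int i = 1; i < keys.length; i++) {
      bh.consume(Bytes.compareTo(keys[i - 1], keys[i]));
    }
  }
}
{code}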

> Replace/improve HBase's byte[] comparator
> -
>
> Key: HBASE-17877
> URL: https://issues.apache.org/jira/browse/HBASE-17877
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Vikas Vishwakarma
> Attachments: 17877-1.2.patch, 17877-v2-1.3.patch, 
> ByteComparatorJiraHBASE-17877.pdf
>
>
> [~vik.karma] did some extensive tests and found that Hadoop's version is 
> faster - dramatically faster in some cases.
> Patch forthcoming.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17872) The MSLABImpl generates invalid cells when unsafe is not available

2017-04-05 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958297#comment-15958297
 ] 

Anoop Sam John commented on HBASE-17872:


No need to add a new patch. Just fix that on your commit.

> The MSLABImpl generates invalid cells when unsafe is not available
> -
>
> Key: HBASE-17872
> URL: https://issues.apache.org/jira/browse/HBASE-17872
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-17872.v0.patch, HBASE-17872.v1.patch
>
>
> We will get the wrong buffer position in a multithreaded environment, so the 
> method produces invalid cells in MSLAB.
> {noformat}
>   public static int copyFromBufferToBuffer(ByteBuffer in, ByteBuffer out, int 
> sourceOffset,
>   int destinationOffset, int length) {
> if (in.hasArray() && out.hasArray()) {
>   // ...
> } else if (UNSAFE_AVAIL) {
>   // ...
> } else {
>   int outOldPos = out.position();
>   out.position(destinationOffset);
>   ByteBuffer inDup = in.duplicate();
>   inDup.position(sourceOffset).limit(sourceOffset + length);
>   out.put(inDup);
>   out.position(outOldPos);
> }
> return destinationOffset + length;
>   }
> {noformat}
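
The race is that this fallback branch mutates the shared destination buffer's position, 
so two threads copying into the same chunk can interleave those mutations. A 
position-safe variant of the branch (a sketch only; the attached patch may structure it 
differently) copies through duplicates so neither shared buffer's state is touched:
{code}
import java.nio.ByteBuffer;

final class BufferCopy {
  /** Pure-ByteBuffer fallback that never touches the shared buffers' position/limit. */
  static int copyFromBufferToBuffer(ByteBuffer in, ByteBuffer out,
      int sourceOffset, int destinationOffset, int length) {
    ByteBuffer inDup = in.duplicate();            // independent position/limit, same content
    inDup.position(sourceOffset).limit(sourceOffset + length);
    ByteBuffer outDup = out.duplicate();
    outDup.position(destinationOffset);
    outDup.put(inDup);                            // only the duplicates' cursors move
    return destinationOffset + length;
  }
}
{code}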



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17872) The MSLABImpl generates invalid cells when unsafe is not available

2017-04-05 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958296#comment-15958296
 ] 

Chia-Ping Tsai commented on HBASE-17872:


I will add @VisibleForTesting in the next patch. Thanks for the feedback, 
[~anoop.hbase] and [~ram_krish].

> The MSLABImpl generates invalid cells when unsafe is not available
> -
>
> Key: HBASE-17872
> URL: https://issues.apache.org/jira/browse/HBASE-17872
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-17872.v0.patch, HBASE-17872.v1.patch
>
>
> We will get the wrong buffer position in a multithreaded environment, so the 
> method produces invalid cells in MSLAB.
> {noformat}
>   public static int copyFromBufferToBuffer(ByteBuffer in, ByteBuffer out, int 
> sourceOffset,
>   int destinationOffset, int length) {
> if (in.hasArray() && out.hasArray()) {
>   // ...
> } else if (UNSAFE_AVAIL) {
>   // ...
> } else {
>   int outOldPos = out.position();
>   out.position(destinationOffset);
>   ByteBuffer inDup = in.duplicate();
>   inDup.position(sourceOffset).limit(sourceOffset + length);
>   out.put(inDup);
>   out.position(outOldPos);
> }
> return destinationOffset + length;
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17872) The MSLABImpl generates invalid cells when unsafe is not available

2017-04-05 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958292#comment-15958292
 ] 

ramkrishna.s.vasudevan commented on HBASE-17872:


Seen the 2nd patch. Looks good. A @VisibleForTesting tag would be good to add.

> The MSLABImpl generates invalid cells when unsafe is not available
> -
>
> Key: HBASE-17872
> URL: https://issues.apache.org/jira/browse/HBASE-17872
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-17872.v0.patch, HBASE-17872.v1.patch
>
>
> We will get the wrong buffer position in a multithreaded environment, so the 
> method produces invalid cells in MSLAB.
> {noformat}
>   public static int copyFromBufferToBuffer(ByteBuffer in, ByteBuffer out, int 
> sourceOffset,
>   int destinationOffset, int length) {
> if (in.hasArray() && out.hasArray()) {
>   // ...
> } else if (UNSAFE_AVAIL) {
>   // ...
> } else {
>   int outOldPos = out.position();
>   out.position(destinationOffset);
>   ByteBuffer inDup = in.duplicate();
>   inDup.position(sourceOffset).limit(sourceOffset + length);
>   out.put(inDup);
>   out.position(outOldPos);
> }
> return destinationOffset + length;
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17869) UnsafeAvailChecker wrongly returns false on ppc

2017-04-05 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958291#comment-15958291
 ] 

Anoop Sam John commented on HBASE-17869:


Yes, I mean the generic pattern match rather than the stricter string match. If you 
feel this is fine enough after your test, please go ahead. Your call, Jerry. Good on 
you. Good find.

> UnsafeAvailChecker wrongly returns false on ppc
> ---
>
> Key: HBASE-17869
> URL: https://issues.apache.org/jira/browse/HBASE-17869
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.4
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17869.patch
>
>
> On ppc64 arch,  java.nio.Bits.unaligned() wrongly returns false due to a JDK 
> bug.
> https://bugs.openjdk.java.net/browse/JDK-8165231
> This causes some problems for HBase; e.g., the FuzzyRowFilter test fails.
> Fix it by providing a hard-coded workaround for the JDK bug.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15179) Cell/DBB end-to-end on the write-path

2017-04-05 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958289#comment-15958289
 ] 

Anoop Sam John commented on HBASE-15179:


We don't need BC for the write path. Yes, we will need some doc on how to enable 
it. Let me write a combined release note here (though the individual jiras say how 
to do it for each item). Let me look at the google doc.. maybe some comments are 
yet to be closed, plus some cleanup after the final patches went in (I did some 
parts of that a few weeks back).
Sure, the blog is coming soon.. taking some perf readings and charts for it. Am 
on that this week.

> Cell/DBB end-to-end on the write-path
> -
>
> Key: HBASE-15179
> URL: https://issues.apache.org/jira/browse/HBASE-15179
> Project: HBase
>  Issue Type: Umbrella
>  Components: regionserver
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
>
> Umbrella jira to make the HBase write path off heap E2E. We have to make sure 
> we have Cells flowing in entire write path. Starting from request received in 
> RPC layer, till the Cells get flushed out as HFiles, we have to keep the Cell 
> data off heap.
> https://docs.google.com/document/d/1fj5P8JeutQ-Uadb29ChDscMuMaJqaMNRI86C4k5S1rQ/edit



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17882) Why does WalEdit implement the Writable interface

2017-04-05 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958290#comment-15958290
 ] 

Chia-Ping Tsai commented on HBASE-17882:


bq. that it won't read 0.98 or earlier. 
Compatibility is always a big issue (smile). I am preparing to clean up all the 
obsolete stuff.



> Why does WalEdit implement the Writable interface
> ---
>
> Key: HBASE-17882
> URL: https://issues.apache.org/jira/browse/HBASE-17882
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Chia-Ping Tsai
>Priority: Minor
>
> Do we have any use cases? (serialize/deserialize the WalEdit between mapper 
> and reducer?) If not, we should make WalEdit not implement the Writable 
> interface.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17869) UnsafeAvailChecker wrongly returns false on ppc

2017-04-05 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958287#comment-15958287
 ] 

Jerry He commented on HBASE-17869:
--

The Spark fix does the same exact string match, but it covers more platforms 
with an extra pattern match as a fall-back check.
https://github.com/apache/spark/pull/17472/files
It is probably OK that we don't have the extra checking.
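
For context, the Spark-style check boils down to an os.arch pattern match used when 
java.nio.Bits.unaligned() cannot be trusted; a rough sketch (the exact pattern below is 
an assumption modeled on the linked Spark PR, not the HBase patch):
{code}
final class UnalignedCheck {
  /** Sketch only: fall back to an arch allow-list when the JDK check cannot be trusted. */
  static boolean unalignedSupported() {
    String arch = System.getProperty("os.arch", "");
    // Architectures known to handle unaligned access even when
    // java.nio.Bits.unaligned() wrongly reports false (JDK-8165231).
    return arch.matches("^(i[3-6]86|x86(_64)?|x64|amd64|aarch64|ppc64(le)?)$");
  }
}
{code}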

> UnsafeAvailChecker wrongly returns false on ppc
> ---
>
> Key: HBASE-17869
> URL: https://issues.apache.org/jira/browse/HBASE-17869
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.4
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17869.patch
>
>
> On ppc64 arch,  java.nio.Bits.unaligned() wrongly returns false due to a JDK 
> bug.
> https://bugs.openjdk.java.net/browse/JDK-8165231
> This causes some problems for HBase; e.g., the FuzzyRowFilter test fails.
> Fix it by providing a hard-coded workaround for the JDK bug.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17872) The MSLABImpl generates invalid cells when unsafe is not available

2017-04-05 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958286#comment-15958286
 ] 

Anoop Sam John commented on HBASE-17872:


disableUnsafe -> Please add @VisibleForTesting.
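
For readers unfamiliar with the annotation, the intent is just to mark the test-only 
hook, roughly like this (the holder class and field are hypothetical; only the method 
name comes from the review comment):
{code}
import com.google.common.annotations.VisibleForTesting;

final class UnsafeSwitchExample {              // hypothetical holder class, for illustration only
  private static volatile boolean unsafeAvail = true;

  @VisibleForTesting
  static void disableUnsafe() {                // name taken from the comment above
    unsafeAvail = false;                       // forces the non-unsafe copy path in tests
  }
}
{code}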


> The MSLABImpl generates invalid cells when unsafe is not available
> -
>
> Key: HBASE-17872
> URL: https://issues.apache.org/jira/browse/HBASE-17872
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-17872.v0.patch, HBASE-17872.v1.patch
>
>
> We will get the wrong buffer position in a multithreaded environment, so the 
> method produces invalid cells in MSLAB.
> {noformat}
>   public static int copyFromBufferToBuffer(ByteBuffer in, ByteBuffer out, int 
> sourceOffset,
>   int destinationOffset, int length) {
> if (in.hasArray() && out.hasArray()) {
>   // ...
> } else if (UNSAFE_AVAIL) {
>   // ...
> } else {
>   int outOldPos = out.position();
>   out.position(destinationOffset);
>   ByteBuffer inDup = in.duplicate();
>   inDup.position(sourceOffset).limit(sourceOffset + length);
>   out.put(inDup);
>   out.position(outOldPos);
> }
> return destinationOffset + length;
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17849) PE tool random read is not totally random

2017-04-05 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958284#comment-15958284
 ] 

ramkrishna.s.vasudevan commented on HBASE-17849:


I see. I never knew there was a similar issue. This patch is against the latest 
trunk and does not introduce a new config. I can close either one as a dup, 
whichever is suggested.

> PE tool random read is not totally random
> -
>
> Key: HBASE-17849
> URL: https://issues.apache.org/jira/browse/HBASE-17849
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-17849.patch, HBASE-17849.patch
>
>
> Recently we were using the PE tool for doing some bucket cache related 
> performance tests. One thing that we noted was that the way the random read 
> works is not totally random.
> Suppose we load 200G of data using the --size param and then use --rows=50 
> to do the randomRead. The assumption was that, among the 200G of data, it could 
> randomly generate 50 row keys to do the reads.
> But it so happens that the PE tool generates random rows only over the set of 
> row keys that falls within the first 50 rows. 
> This was quite evident when we tried to use HBASE-15314 in our testing. 
> Suppose we split the bucket cache of size 200G into 2 files of 100G each; the 
> randomReads with --rows=50 always land in the first file and never in the 
> 2nd file. Better to make PE purely random.
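
The gist of "purely random" is to draw the row index over the whole loaded keyspace 
rather than over the first --rows keys; a sketch of that idea (the total row count and 
the key format are assumptions, not PE's actual internals):
{code}
import java.util.concurrent.ThreadLocalRandom;
import org.apache.hadoop.hbase.util.Bytes;

final class RandomRowPick {
  /** Sketch: pick a key uniformly over all loaded rows, not just the first N. */
  static byte[] nextRandomRow(long totalRows) {
    long idx = ThreadLocalRandom.current().nextLong(totalRows);
    // Zero-padded so the generated key resembles a typical sequential row key (assumption).
    return Bytes.toBytes(String.format("%020d", idx));
  }
}
{code}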



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17881) Remove the ByteBufferCellImpl

2017-04-05 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958282#comment-15958282
 ] 

Chia-Ping Tsai commented on HBASE-17881:


bq. Could we rename ByteBufferKeyValue as ByteBufferCell and replace 
ByteBufferCellImpl with it?
ByteBufferCell is a base class for ByteBufferKeyValue. It seems to me that the 
"rename" is unnecessary.



> Remove the ByteBufferCellImpl
> -
>
> Key: HBASE-17881
> URL: https://issues.apache.org/jira/browse/HBASE-17881
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17881.v0.patch
>
>
> We should substitute ByteBufferKeyValue for ByteBufferCellImpl



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17872) The MSLABImpl generates invalid cells when unsafe is not available

2017-04-05 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958281#comment-15958281
 ] 

ramkrishna.s.vasudevan commented on HBASE-17872:


+1.

> The MSLABImpl generates invalid cells when unsafe is not available
> -
>
> Key: HBASE-17872
> URL: https://issues.apache.org/jira/browse/HBASE-17872
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-17872.v0.patch, HBASE-17872.v1.patch
>
>
> We will get the wrong buffer position in a multithreaded environment, so the 
> method produces invalid cells in MSLAB.
> {noformat}
>   public static int copyFromBufferToBuffer(ByteBuffer in, ByteBuffer out, int 
> sourceOffset,
>   int destinationOffset, int length) {
> if (in.hasArray() && out.hasArray()) {
>   // ...
> } else if (UNSAFE_AVAIL) {
>   // ...
> } else {
>   int outOldPos = out.position();
>   out.position(destinationOffset);
>   ByteBuffer inDup = in.duplicate();
>   inDup.position(sourceOffset).limit(sourceOffset + length);
>   out.put(inDup);
>   out.position(outOldPos);
> }
> return destinationOffset + length;
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17885) Backport HBASE-15871 to branch-1

2017-04-05 Thread ramkrishna.s.vasudevan (JIRA)
ramkrishna.s.vasudevan created HBASE-17885:
--

 Summary: Backport HBASE-15871 to branch-1
 Key: HBASE-17885
 URL: https://issues.apache.org/jira/browse/HBASE-17885
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Affects Versions: 1.1.8, 1.2.5, 1.3.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 1.1.9, 1.2.6, 1.3.2


Will try to rebase the branch-1 patch at the earliest. Hope the fix versions 
are correct.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17882) Why does WalEdit implement the Writable interface

2017-04-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958277#comment-15958277
 ] 

stack commented on HBASE-17882:
---

We could say in the release notes that hbase2 reads hbase1 WAL files only, and that 
it won't read 0.98 or earlier. Would that work?

> Why does WalEdit implement the Writable interface
> ---
>
> Key: HBASE-17882
> URL: https://issues.apache.org/jira/browse/HBASE-17882
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Chia-Ping Tsai
>Priority: Minor
>
> Do we have any use cases? (serialize/deserialize the WalEdit between mapper 
> and reducer?) If not, we should make WalEdit not implement the Writable 
> interface.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17872) The MSLABImpl generates invalid cells when unsafe is not available

2017-04-05 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17872:
---
Status: Patch Available  (was: Open)

> The MSLABImpl generates invalid cells when unsafe is not available
> -
>
> Key: HBASE-17872
> URL: https://issues.apache.org/jira/browse/HBASE-17872
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-17872.v0.patch, HBASE-17872.v1.patch
>
>
> We will get the wrong buffer position in a multithreaded environment, so the 
> method produces invalid cells in MSLAB.
> {noformat}
>   public static int copyFromBufferToBuffer(ByteBuffer in, ByteBuffer out, int 
> sourceOffset,
>   int destinationOffset, int length) {
> if (in.hasArray() && out.hasArray()) {
>   // ...
> } else if (UNSAFE_AVAIL) {
>   // ...
> } else {
>   int outOldPos = out.position();
>   out.position(destinationOffset);
>   ByteBuffer inDup = in.duplicate();
>   inDup.position(sourceOffset).limit(sourceOffset + length);
>   out.put(inDup);
>   out.position(outOldPos);
> }
> return destinationOffset + length;
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17872) The MSLABImpl generates invalid cells when unsafe is not available

2017-04-05 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17872:
---
Attachment: HBASE-17872.v1.patch

> The MSLABImpl generates invalid cells when unsafe is not available
> -
>
> Key: HBASE-17872
> URL: https://issues.apache.org/jira/browse/HBASE-17872
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-17872.v0.patch, HBASE-17872.v1.patch
>
>
> We will get the wrong buffer position in a multithreaded environment, so the 
> method produces invalid cells in MSLAB.
> {noformat}
>   public static int copyFromBufferToBuffer(ByteBuffer in, ByteBuffer out, int 
> sourceOffset,
>   int destinationOffset, int length) {
> if (in.hasArray() && out.hasArray()) {
>   // ...
> } else if (UNSAFE_AVAIL) {
>   // ...
> } else {
>   int outOldPos = out.position();
>   out.position(destinationOffset);
>   ByteBuffer inDup = in.duplicate();
>   inDup.position(sourceOffset).limit(sourceOffset + length);
>   out.put(inDup);
>   out.position(outOldPos);
> }
> return destinationOffset + length;
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15871) Memstore flush doesn't finish because of backwardseek() in memstore scanner.

2017-04-05 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-15871:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Memstore flush doesn't finish because of backwardseek() in memstore scanner.
> 
>
> Key: HBASE-15871
> URL: https://issues.apache.org/jira/browse/HBASE-15871
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 1.1.2
>Reporter: Jeongdae Kim
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-15871_1.patch, HBASE-15871_1.patch, 
> HBASE-15871_2.patch, HBASE-15871_3.patch, HBASE-15871_4.patch, 
> HBASE-15871_6.patch, HBASE-15871.branch-1.1.001.patch, 
> HBASE-15871.branch-1.1.002.patch, HBASE-15871.branch-1.1.003.patch, 
> HBASE-15871-branch-1.patch, HBASE-15871.patch, memstore_backwardSeek().PNG
>
>
> Sometimes in our production hbase cluster, it takes a long time to finish a 
> memstore flush (at times more than 30 minutes).
> The reason is that a memstore flusher thread calls 
> StoreScanner.updateReaders() and waits to acquire a lock that the store scanner 
> holds in StoreScanner.next(), while backwardSeek() in the memstore scanner runs 
> for a long time.
> I think that this condition could occur in a reverse scan by the following 
> process.
> 1) create a reversed store scanner by requesting a reverse scan.
> 2) flush a memstore in the same HStore.
> 3) put a lot of cells in the memstore until the memstore is almost full.
> 4) call the reverse scanner.next() and re-create all scanners in this store, 
> because all scanners were already closed by 2)'s flush(), and call backwardSeek() 
> with the store's lastTop for all new scanners.
> 5) in this state, the memstore is almost full again after 2), and all cells in the 
> memstore have a sequenceID greater than this scanner's readPoint because of 2)'s 
> flush(). This condition causes searching all cells in the memstore, and 
> seekToPreviousRow() repeatedly searches cells that were already searched if a row 
> has one column. (described in more detail in an attached file.)
> 6) flush a memstore again in the same HStore, and wait until the 4)-5) process 
> has finished, to update store files in the same HStore after flushing.
> I searched HBase JIRA and found a similar issue (HBASE-14497), but 
> HBASE-14497's fix can't solve this issue because that fix just changed a 
> recursive call to a loop (and that fix is already applied to our HBase version).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15871) Memstore flush doesn't finish because of backwardseek() in memstore scanner.

2017-04-05 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958276#comment-15958276
 ] 

ramkrishna.s.vasudevan commented on HBASE-15871:


Oh, sorry about this. I had a patch for branch-1 some time back and somehow 
missed putting it here. I can open a new issue for that and close this one out. 
Thanks.

> Memstore flush doesn't finish because of backwardseek() in memstore scanner.
> 
>
> Key: HBASE-15871
> URL: https://issues.apache.org/jira/browse/HBASE-15871
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 1.1.2
>Reporter: Jeongdae Kim
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-15871_1.patch, HBASE-15871_1.patch, 
> HBASE-15871_2.patch, HBASE-15871_3.patch, HBASE-15871_4.patch, 
> HBASE-15871_6.patch, HBASE-15871.branch-1.1.001.patch, 
> HBASE-15871.branch-1.1.002.patch, HBASE-15871.branch-1.1.003.patch, 
> HBASE-15871-branch-1.patch, HBASE-15871.patch, memstore_backwardSeek().PNG
>
>
> Sometimes in our production hbase cluster, it takes a long time to finish a 
> memstore flush (at times more than 30 minutes).
> The reason is that a memstore flusher thread calls 
> StoreScanner.updateReaders() and waits to acquire a lock that the store scanner 
> holds in StoreScanner.next(), while backwardSeek() in the memstore scanner runs 
> for a long time.
> I think that this condition could occur in a reverse scan by the following 
> process.
> 1) create a reversed store scanner by requesting a reverse scan.
> 2) flush a memstore in the same HStore.
> 3) put a lot of cells in the memstore until the memstore is almost full.
> 4) call the reverse scanner.next() and re-create all scanners in this store, 
> because all scanners were already closed by 2)'s flush(), and call backwardSeek() 
> with the store's lastTop for all new scanners.
> 5) in this state, the memstore is almost full again after 2), and all cells in the 
> memstore have a sequenceID greater than this scanner's readPoint because of 2)'s 
> flush(). This condition causes searching all cells in the memstore, and 
> seekToPreviousRow() repeatedly searches cells that were already searched if a row 
> has one column. (described in more detail in an attached file.)
> 6) flush a memstore again in the same HStore, and wait until the 4)-5) process 
> has finished, to update store files in the same HStore after flushing.
> I searched HBase JIRA and found a similar issue (HBASE-14497), but 
> HBASE-14497's fix can't solve this issue because that fix just changed a 
> recursive call to a loop (and that fix is already applied to our HBase version).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17872) The MSLABImpl generates invalid cells when unsafe is not available

2017-04-05 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17872:
---
Status: Open  (was: Patch Available)

> The MSLABImpl generates invalid cells when unsafe is not available
> -
>
> Key: HBASE-17872
> URL: https://issues.apache.org/jira/browse/HBASE-17872
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-17872.v0.patch
>
>
> We will get the wrong buffer position in a multithreaded environment, so the 
> method produces invalid cells in MSLAB.
> {noformat}
>   public static int copyFromBufferToBuffer(ByteBuffer in, ByteBuffer out, int 
> sourceOffset,
>   int destinationOffset, int length) {
> if (in.hasArray() && out.hasArray()) {
>   // ...
> } else if (UNSAFE_AVAIL) {
>   // ...
> } else {
>   int outOldPos = out.position();
>   out.position(destinationOffset);
>   ByteBuffer inDup = in.duplicate();
>   inDup.position(sourceOffset).limit(sourceOffset + length);
>   out.put(inDup);
>   out.position(outOldPos);
> }
> return destinationOffset + length;
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17872) The MSLABImpl generates invalid cells when unsafe is not available

2017-04-05 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958275#comment-15958275
 ] 

Chia-Ping Tsai commented on HBASE-17872:


bq. Consider introducing trivial change in hbase-server module to run tests.
It would be better to run the Test*FromClient tests without unsafe, but that would 
introduce a large test and we would have to expose the BBU#unsafe API. See the v1 patch.

> The MSLABImpl generates invalid cells when unsafe is not available
> -
>
> Key: HBASE-17872
> URL: https://issues.apache.org/jira/browse/HBASE-17872
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-17872.v0.patch
>
>
> We will get the wrong buffer position in a multithreaded environment, so the 
> method produces invalid cells in MSLAB.
> {noformat}
>   public static int copyFromBufferToBuffer(ByteBuffer in, ByteBuffer out, int 
> sourceOffset,
>   int destinationOffset, int length) {
> if (in.hasArray() && out.hasArray()) {
>   // ...
> } else if (UNSAFE_AVAIL) {
>   // ...
> } else {
>   int outOldPos = out.position();
>   out.position(destinationOffset);
>   ByteBuffer inDup = in.duplicate();
>   inDup.position(sourceOffset).limit(sourceOffset + length);
>   out.put(inDup);
>   out.position(outOldPos);
> }
> return destinationOffset + length;
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15691) Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to branch-1

2017-04-05 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958274#comment-15958274
 ] 

ramkrishna.s.vasudevan commented on HBASE-15691:


I can look into this if [~syuanjiang] is busy. 

> Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to 
> branch-1
> -
>
> Key: HBASE-15691
> URL: https://issues.apache.org/jira/browse/HBASE-15691
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.3.0
>Reporter: Andrew Purtell
>Assignee: Stephen Yuan Jiang
> Fix For: 1.2.6, 1.3.2, 1.4.1, 1.5.0
>
> Attachments: HBASE-15691-branch-1.patch, HBASE-15691.v2-branch-1.patch
>
>
> HBASE-10205 was committed to trunk and 0.98 branches only. To preserve 
> continuity we should commit it to branch-1. The change requires more than 
> nontrivial fixups so I will attach a backport of the change from trunk to 
> current branch-1 here. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17877) Replace/improve HBase's byte[] comparator

2017-04-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958273#comment-15958273
 ] 

stack commented on HBASE-17877:
---

First, this is great. Thanks for the work [~vik.karma].

Most compares in hbase are in the few-byte range, so this should help.

JMH is best for this sort of comparison going forward.

On the patch, give attribution ("Stolen from Hadoop ..class.., I'd say..."). The speed-up 
is just in how we handle what's left over after we do the long compares?
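
Roughly, yes: the word-wise loop covers the largest multiple of 8 bytes, and the 
remaining (at most seven) bytes are compared unsigned one at a time. A sketch of that 
tail handling (illustration only, not the attached patch):
{code}
final class CompareTail {
  /** Sketch: the per-byte tail after the 8-byte word compares found no difference. */
  static int compareTail(byte[] b1, int off1, int len1,
      byte[] b2, int off2, int len2, int strideEnd) {
    int minLen = Math.min(len1, len2);
    for (int i = strideEnd; i < minLen; i++) {   // strideEnd = largest multiple of 8 <= minLen
      int a = b1[off1 + i] & 0xff;               // unsigned comparison, as byte[] keys require
      int b = b2[off2 + i] & 0xff;
      if (a != b) {
        return a - b;
      }
    }
    return len1 - len2;                          // a shorter key sorts first when it is a prefix
  }
}
{code}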

> Replace/improve HBase's byte[] comparator
> -
>
> Key: HBASE-17877
> URL: https://issues.apache.org/jira/browse/HBASE-17877
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Vikas Vishwakarma
> Attachments: 17877-1.2.patch, 17877-v2-1.3.patch, 
> ByteComparatorJiraHBASE-17877.pdf
>
>
> [~vik.karma] did some extensive tests and found that Hadoop's version is 
> faster - dramatically faster in some cases.
> Patch forthcoming.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17882) Why does WalEdit implement the Writable interface

2017-04-05 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958267#comment-15958267
 ] 

Chia-Ping Tsai commented on HBASE-17882:


bq. Would this be a problem?
No, I just noticed that KeyValueCompression is deprecated and it is used only in 
WalEdit. 

bq. Concern is a new version being able to read old WALs
That is a good reason.

Thanks for your feedback, [~stack].

> Why does WalEdit implement the Writable interface
> ---
>
> Key: HBASE-17882
> URL: https://issues.apache.org/jira/browse/HBASE-17882
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Chia-Ping Tsai
>Priority: Minor
>
> Do we have any use cases? (serialize/deserialize the WalEdit between mapper 
> and reducer?) If not, we should make WalEdit not implement the Writable 
> interface.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15179) Cell/DBB end-to-end on the write-path

2017-04-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958262#comment-15958262
 ] 

stack commented on HBASE-15179:
---

Oh, don't we also need to finish up the gdoc? It's nearly done, but let's put a finish 
on it (and how about a blog post? Smile?)

> Cell/DBB end-to-end on the write-path
> -
>
> Key: HBASE-15179
> URL: https://issues.apache.org/jira/browse/HBASE-15179
> Project: HBase
>  Issue Type: Umbrella
>  Components: regionserver
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
>
> Umbrella jira to make the HBase write path off heap E2E. We have to make sure 
> we have Cells flowing in entire write path. Starting from request received in 
> RPC layer, till the Cells get flushed out as HFiles, we have to keep the Cell 
> data off heap.
> https://docs.google.com/document/d/1fj5P8JeutQ-Uadb29ChDscMuMaJqaMNRI86C4k5S1rQ/edit



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15179) Cell/DBB end-to-end on the write-path

2017-04-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958260#comment-15958260
 ] 

stack commented on HBASE-15179:
---

[~anoop.hbase] doesn't the write path require bucketcache? How about a bit of 
user-doc for the refguide on how to enable it? (Though I thought we wanted this 
enabled by default? Maybe enabling by default comes later, in 2.1?)

> Cell/DBB end-to-end on the write-path
> -
>
> Key: HBASE-15179
> URL: https://issues.apache.org/jira/browse/HBASE-15179
> Project: HBase
>  Issue Type: Umbrella
>  Components: regionserver
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
>
> Umbrella jira to make the HBase write path off heap E2E. We have to make sure 
> we have Cells flowing in entire write path. Starting from request received in 
> RPC layer, till the Cells get flushed out as HFiles, we have to keep the Cell 
> data off heap.
> https://docs.google.com/document/d/1fj5P8JeutQ-Uadb29ChDscMuMaJqaMNRI86C4k5S1rQ/edit



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17881) Remove the ByteBufferCellImpl

2017-04-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958257#comment-15958257
 ] 

stack commented on HBASE-17881:
---

Don't we want to remove KeyValue references replacing them with Cell instead? 
I'd think we'd go from ByteBufferKeyValue to ByteBufferCell rather than the 
other way around? (Could we rename ByteBufferKeyValue as ByteBufferCell and 
replace ByteBufferCellImpl with it?)

> Remove the ByteBufferCellImpl
> -
>
> Key: HBASE-17881
> URL: https://issues.apache.org/jira/browse/HBASE-17881
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17881.v0.patch
>
>
> We should substitute ByteBufferKeyValue for ByteBufferCellImpl



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17882) Why does WalEdit implement the Writable interface

2017-04-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958256#comment-15958256
 ] 

stack commented on HBASE-17882:
---

Agreed. Concern is a new version being able to read old WALs. I've not looked 
in a while. Would this be a problem?

> Why does WalEdit implement the Writable interface
> ---
>
> Key: HBASE-17882
> URL: https://issues.apache.org/jira/browse/HBASE-17882
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Chia-Ping Tsai
>Priority: Minor
>
> Do we have any use cases? (serialize/deserialize the WalEdit between mapper 
> and reducer?) If not, we should make WalEdit not implement the Writable 
> interface.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16469) Several log refactoring/improvement suggestions

2017-04-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958252#comment-15958252
 ] 

Hadoop QA commented on HBASE-16469:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
38s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 0s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 20s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 16s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 103m 16s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hbase-it in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
54s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 154m 56s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862210/HBASE-16469.master.001.patch
 |
| JIRA Issue | HBASE-16469 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 6fd2ce626c45 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 029fa29 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6345/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6345/testReport/ |
| modules | C: hbase-client 

[jira] [Updated] (HBASE-17873) Change the IA.Public annotation to IA.Private for unstable API

2017-04-05 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17873:
--
Attachment: HBASE-17873.patch

> Change the IA.Public annotation to IA.Private for unstable API
> --
>
> Key: HBASE-17873
> URL: https://issues.apache.org/jira/browse/HBASE-17873
> Project: HBase
>  Issue Type: Sub-task
>  Components: API
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17873.patch
>
>
> As discussed in mailing list and HBASE-17857.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17873) Change the IA.Public annotation to IA.Private for unstable API

2017-04-05 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17873:
--
Assignee: Duo Zhang
  Status: Patch Available  (was: Open)

> Change the IA.Public annotation to IA.Private for unstable API
> --
>
> Key: HBASE-17873
> URL: https://issues.apache.org/jira/browse/HBASE-17873
> Project: HBase
>  Issue Type: Sub-task
>  Components: API
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17873.patch
>
>
> As discussed in mailing list and HBASE-17857.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17871) scan#setBatch(int) call leads to wrong results in VerifyReplication

2017-04-05 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958248#comment-15958248
 ] 

Phil Yang commented on HBASE-17871:
---

+1
Can the patch for master also be applied to branch-1?

> scan#setBatch(int) call leads to wrong results in VerifyReplication
> ---
>
> Key: HBASE-17871
> URL: https://issues.apache.org/jira/browse/HBASE-17871
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Tomu Tsuruhara
>Assignee: Tomu Tsuruhara
>Priority: Minor
> Attachments: after.png, beforethepatch.png, 
> HBASE-17871.master.001.patch, HBASE-17871.master.002.patch, 
> HBASE-17871.master.003.patch, HBASE-17871.master.003.patch
>
>
> VerifyReplication tool printed weird logs.
> {noformat}
> 2017-04-03 23:30:50,252 ERROR [main] 
> org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: 
> CONTENT_DIFFERENT_ROWS, rowkey=a100193
> 2017-04-03 23:30:50,280 ERROR [main] 
> org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: 
> ONLY_IN_PEER_TABLE_ROWS, rowkey=a100193
> 2017-04-03 23:30:50,387 ERROR [main] 
> org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: 
> CONTENT_DIFFERENT_ROWS, rowkey=a100385
> 2017-04-03 23:30:50,414 ERROR [main] 
> org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: 
> ONLY_IN_PEER_TABLE_ROWS, rowkey=a100385
> 2017-04-03 23:30:50,480 ERROR [main] 
> org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: 
> CONTENT_DIFFERENT_ROWS, rowkey=a100532
> 2017-04-03 23:30:50,508 ERROR [main] 
> org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: 
> ONLY_IN_PEER_TABLE_ROWS, rowkey=a100532
> {noformat}
> Here, each bad row was marked as both {{CONTENT_DIFFERENT_ROWS}} and 
> {{ONLY_IN_PEER_TABLE_ROWS}}.
> This should never happen, so I took a look at the code and found the scan.setBatch 
> call.
> {code}
> @Override
> public void map(ImmutableBytesWritable row, final Result value,
> Context context)
> throws IOException {
>   if (replicatedScanner == null) {
>   ...
> final Scan scan = new Scan();
> scan.setBatch(batch);
> {code}
> As stated in HBASE-16376, a {{scan#setBatch(int)}} call implicitly allows scan 
> results to be partial.
> Since {{VerifyReplication}} assumes each {{scanner.next()}} call returns an 
> entire row, partial results break the compare logic.
> We should avoid the setBatch call here.
> Thanks to RPC chunking (explained in this blog 
> https://blogs.apache.org/hbase/entry/scan_improvements_in_hbase_1),
> it's safe and acceptable I think.
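
So the fix amounts to configuring the scan without setBatch and letting size-based RPC 
chunking bound each response; a sketch of that shape (the parameter names are 
placeholders, not necessarily what the attached patch does):
{code}
import org.apache.hadoop.hbase.client.Scan;

final class VerifyReplicationScanExample {
  static Scan buildScan(int cachingHint, long maxResultSizeBytes) {
    Scan scan = new Scan();
    // No scan.setBatch(...): batching implicitly allows partial Results,
    // which breaks the row-by-row comparison described above.
    scan.setCaching(cachingHint);               // rows fetched per RPC (placeholder value)
    scan.setMaxResultSize(maxResultSizeBytes);  // size-based chunking keeps responses bounded
    return scan;
  }
}
{code}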



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17881) Remove the ByteBufferCellImpl

2017-04-05 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958247#comment-15958247
 ] 

Chia-Ping Tsai commented on HBASE-17881:


The failed test pass locally.
I will commit it tomorrow if no objection.

> Remove the ByteBufferCellImpl
> -
>
> Key: HBASE-17881
> URL: https://issues.apache.org/jira/browse/HBASE-17881
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17881.v0.patch
>
>
> We should substitute ByteBufferKeyValue for ByteBufferCellImpl



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17785) RSGroupBasedLoadBalancer fails to assign new table regions when cloning snapshot

2017-04-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958244#comment-15958244
 ] 

Hudson commented on HBASE-17785:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2807 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2807/])
HBASE-17785 RSGroupBasedLoadBalancer fails to assign new table regions 
(apurtell: rev 029fa297129f7ced276d19c4877d19bf32dcfde0)
* (edit) 
hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java
* (edit) 
hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroups.java


> RSGroupBasedLoadBalancer fails to assign new table regions when cloning 
> snapshot
> 
>
> Key: HBASE-17785
> URL: https://issues.apache.org/jira/browse/HBASE-17785
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0
>
> Attachments: HBASE-17785.patch
>
>
> A novice starting out with RSGroupBasedLoadBalancer will want to enable it 
> and, before assigning tables to groups, may want to create some test tables. 
> Currently that does not work when creating a table by cloning a snapshot, in 
> a surprising way. All regions of the table fail to open yet it is moved into 
> ENABLED state. The client hangs indefinitely. 
> {noformat}
> 2017-03-14 19:25:49,833 INFO  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> snapshot.CloneSnapshotHandler: Clone snapshot=seed on table=test_1 completed!
> 2017-03-14 19:25:49,871 INFO  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> hbase.MetaTableAccessor: Added 25
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  

[jira] [Updated] (HBASE-17871) scan#setBatch(int) call leads to wrong results in VerifyReplication

2017-04-05 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-17871:
--
Attachment: HBASE-17871.master.003.patch

> scan#setBatch(int) call leads to wrong results in VerifyReplication
> ---
>
> Key: HBASE-17871
> URL: https://issues.apache.org/jira/browse/HBASE-17871
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Tomu Tsuruhara
>Assignee: Tomu Tsuruhara
>Priority: Minor
> Attachments: after.png, beforethepatch.png, 
> HBASE-17871.master.001.patch, HBASE-17871.master.002.patch, 
> HBASE-17871.master.003.patch, HBASE-17871.master.003.patch
>
>
> VerifyReplication tool printed weird logs.
> {noformat}
> 2017-04-03 23:30:50,252 ERROR [main] 
> org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: 
> CONTENT_DIFFERENT_ROWS, rowkey=a100193
> 2017-04-03 23:30:50,280 ERROR [main] 
> org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: 
> ONLY_IN_PEER_TABLE_ROWS, rowkey=a100193
> 2017-04-03 23:30:50,387 ERROR [main] 
> org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: 
> CONTENT_DIFFERENT_ROWS, rowkey=a100385
> 2017-04-03 23:30:50,414 ERROR [main] 
> org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: 
> ONLY_IN_PEER_TABLE_ROWS, rowkey=a100385
> 2017-04-03 23:30:50,480 ERROR [main] 
> org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: 
> CONTENT_DIFFERENT_ROWS, rowkey=a100532
> 2017-04-03 23:30:50,508 ERROR [main] 
> org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: 
> ONLY_IN_PEER_TABLE_ROWS, rowkey=a100532
> {noformat}
> Here, each bad row was marked as both {{CONTENT_DIFFERENT_ROWS}} and 
> {{ONLY_IN_PEER_TABLE_ROWS}}.
> This should never happen, so I took a look at the code and found the scan.setBatch 
> call.
> {code}
> @Override
> public void map(ImmutableBytesWritable row, final Result value,
> Context context)
> throws IOException {
>   if (replicatedScanner == null) {
>   ...
> final Scan scan = new Scan();
> scan.setBatch(batch);
> {code}
> As stated in HBASE-16376, a {{scan#setBatch(int)}} call implicitly allows scan 
> results to be partial.
> Since {{VerifyReplication}} assumes each {{scanner.next()}} call returns an 
> entire row, partial results break the compare logic.
> We should avoid the setBatch call here.
> Thanks to RPC chunking (explained in this blog 
> https://blogs.apache.org/hbase/entry/scan_improvements_in_hbase_1),
> it's safe and acceptable I think.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17871) scan#setBatch(int) call leads to wrong results in VerifyReplication

2017-04-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958211#comment-15958211
 ] 

Ted Yu commented on HBASE-17871:


{code}
HBASE-17871 patch is being downloaded at Wed Apr  5 22:38:21 UTC 2017 from
  https://issues.apache.org/jira/secure/attachment/12862187/after.png -> 
Downloaded
ERROR: Unsure how to process HBASE-17871.
{code}
In the future, attach the patch after attaching the pictures.

> scan#setBatch(int) call leads to wrong results in VerifyReplication
> ---
>
> Key: HBASE-17871
> URL: https://issues.apache.org/jira/browse/HBASE-17871
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Tomu Tsuruhara
>Assignee: Tomu Tsuruhara
>Priority: Minor
> Attachments: after.png, beforethepatch.png, 
> HBASE-17871.master.001.patch, HBASE-17871.master.002.patch, 
> HBASE-17871.master.003.patch
>
>
> VerifyReplication tool printed weird logs.
> {noformat}
> 2017-04-03 23:30:50,252 ERROR [main] 
> org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: 
> CONTENT_DIFFERENT_ROWS, rowkey=a100193
> 2017-04-03 23:30:50,280 ERROR [main] 
> org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: 
> ONLY_IN_PEER_TABLE_ROWS, rowkey=a100193
> 2017-04-03 23:30:50,387 ERROR [main] 
> org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: 
> CONTENT_DIFFERENT_ROWS, rowkey=a100385
> 2017-04-03 23:30:50,414 ERROR [main] 
> org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: 
> ONLY_IN_PEER_TABLE_ROWS, rowkey=a100385
> 2017-04-03 23:30:50,480 ERROR [main] 
> org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: 
> CONTENT_DIFFERENT_ROWS, rowkey=a100532
> 2017-04-03 23:30:50,508 ERROR [main] 
> org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication: 
> ONLY_IN_PEER_TABLE_ROWS, rowkey=a100532
> {noformat}
> Here, each bad row was marked as both {{CONTENT_DIFFERENT_ROWS}} and 
> {{ONLY_IN_PEER_TABLE_ROWS}}.
> This should never happen, so I took a look at the code and found the 
> scan.setBatch call.
> {code}
> @Override
> public void map(ImmutableBytesWritable row, final Result value,
> Context context)
> throws IOException {
>   if (replicatedScanner == null) {
>   ...
> final Scan scan = new Scan();
> scan.setBatch(batch);
> {code}
> As stated in HBASE-16376, a {{scan#setBatch(int)}} call implicitly allows scan 
> results to be partial.
> Since {{VerifyReplication}} assumes that each {{scanner.next()}} call returns an 
> entire row, partial results break the compare logic.
> We should avoid the setBatch call here.
> Thanks to RPC chunking (explained in this blog: 
> https://blogs.apache.org/hbase/entry/scan_improvements_in_hbase_1),
> I think it's safe and acceptable.
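> 
> A minimal sketch of the intended Scan setup, with the batch limit dropped so 
> every {{scanner.next()}} returns a whole row (the caching and max-result-size 
> values below are made-up tuning knobs, not values from the patch):
> {code}
> // Sketch only: configure the verification Scan without setBatch(int),
> // so results are never partial and row-by-row comparison stays valid.
> final Scan scan = new Scan();
> scan.setCaching(100);                    // rows per RPC round trip (illustrative)
> scan.setMaxResultSize(2 * 1024 * 1024);  // rely on RPC chunking by size, not setBatch
> scan.setAllowPartialResults(false);      // make the whole-row expectation explicit
> {code}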



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17849) PE tool random read is not totally random

2017-04-05 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958170#comment-15958170
 ] 

Anoop Sam John commented on HBASE-17849:


Noticed HBASE-13708, which is exactly this issue. Please see the patch there 
as well; one of the two should be closed as a duplicate.

> PE tool random read is not totally random
> -
>
> Key: HBASE-17849
> URL: https://issues.apache.org/jira/browse/HBASE-17849
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-17849.patch, HBASE-17849.patch
>
>
> Recently we were using the PE tool for doing some bucket cache related 
> performance tests. One thing that we noted was that the way the random read 
> works is not totally random.
> Suppose we load 200G of data using the --size param and then use --rows=50 
> to do the randomRead. The assumption was that, among the 200G of data, it 
> could randomly generate 50 row keys to read.
> But it so happens that the PE tool generates random rows only from the set of 
> row keys that falls within the first 50 rows.
> This was quite evident when we tried to use HBASE-15314 in our testing. 
> Suppose we split the 200G bucket cache into 2 files of 100G each: the 
> randomReads with --rows=50 always land in the first file and never in the 
> 2nd file. Better to make PE purely random.
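> 
> As a hedged sketch of what "purely random" would mean here (the variable names 
> below are illustrative, not the actual PE tool fields):
> {code}
> // Sketch only: pick row keys uniformly from the whole loaded key space,
> // not just from the first --rows keys.
> java.util.Random rand = new java.util.Random();
> long totalRows = 1000000000L;   // however many rows --size loaded (assumed figure)
> long readsToRun = 50L;          // the --rows argument
> for (long i = 0; i < readsToRun; i++) {
>   long keyIndex = (long) (rand.nextDouble() * totalRows);  // uniform over [0, totalRows)
>   // build the row key from keyIndex and issue the random read here
> }
> {code}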



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17873) Change the IA.Public annotation to IA.Private for unstable API

2017-04-05 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958165#comment-15958165
 ] 

Duo Zhang commented on HBASE-17873:
---

Yeah

./hbase-client/src/main/java/org/apache/hadoop/hbase/client/backoff/ExponentialClientBackoffPolicy.java
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/backoff/ClientBackoffPolicy.java

These two classes were introduced long ago. Fine, let's keep them public.

And for Scan and HBaseCommonTestingUtility, I think they are stable. For 
CompactType, it is used in a stable interface and it is simple, so I think we 
can make it public and stable. For the async client stuff, the main 
development is done, so it is OK to just remove the unstable mark. The only 
exception is AsyncAdmin: it is still under development and I have already found 
some annoying problems, so I tend to mark it as IA.Private because I do not want 
to block the release progress of 2.0.

So here, the only change is AsyncAdmin then. Let me prepare a patch.
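
A minimal sketch of the change I have in mind (assuming the 
org.apache.hadoop.hbase.classification annotations currently on master; the 
interface body is elided):
{code}
import org.apache.hadoop.hbase.classification.InterfaceAudience;

// Sketch only: flip the audience annotation while AsyncAdmin is still in flux.
@InterfaceAudience.Private   // previously IA.Public
public interface AsyncAdmin {
  // existing async admin methods stay as they are
}
{code}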

> Change the IA.Public annotation to IA.Private for unstable API
> --
>
> Key: HBASE-17873
> URL: https://issues.apache.org/jira/browse/HBASE-17873
> Project: HBase
>  Issue Type: Sub-task
>  Components: API
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Priority: Blocker
> Fix For: 2.0.0
>
>
> As discussed in mailing list and HBASE-17857.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17858) Update refguide about the IS annotation if necessary

2017-04-05 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17858:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
Release Note: Updated refguide to tell users that IS annotation is only 
valid for IA.LimitedPrivate classes.
  Status: Resolved  (was: Patch Available)

Pushed to master. Thanks all for reviewing.
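
For reference, a small hedged illustration of the rule in the release note (the 
class name is made up; COPROC is just one example audience):
{code}
import org.apache.hadoop.hbase.HBaseInterfaceAudience;
import org.apache.hadoop.hbase.classification.InterfaceAudience;
import org.apache.hadoop.hbase.classification.InterfaceStability;

// Sketch only: an IS annotation carries meaning when paired with IA.LimitedPrivate.
@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC)
@InterfaceStability.Evolving
public class ExampleCoprocessorSupport {  // hypothetical class for illustration
}
{code}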

> Update refguide about the IS annotation if necessary
> 
>
> Key: HBASE-17858
> URL: https://issues.apache.org/jira/browse/HBASE-17858
> Project: HBase
>  Issue Type: Sub-task
>  Components: API, documentation
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17858.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17858) Update refguide about the IS annotation if necessary

2017-04-05 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958156#comment-15958156
 ] 

Duo Zhang commented on HBASE-17858:
---

Then let me commit.

> Update refguide about the IS annotation if necessary
> 
>
> Key: HBASE-17858
> URL: https://issues.apache.org/jira/browse/HBASE-17858
> Project: HBase
>  Issue Type: Sub-task
>  Components: API, documentation
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17858.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17302) The region flush request disappeared from flushQueue

2017-04-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958154#comment-15958154
 ] 

Hadoop QA commented on HBASE-17302:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
48s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
56s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 59s 
{color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
14m 48s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 58s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 115m 17s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestRSKilledWhenInitializing |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:e01ee2f |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844012/HBASE-17302-branch-1-addendum-v1.patch
 |
| JIRA Issue | HBASE-17302 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 75269e77acb5 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 

[jira] [Updated] (HBASE-16469) Several log refactoring/improvement suggestions

2017-04-05 Thread Nemo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nemo Chen updated HBASE-16469:
--
Description: 
*method invocation replaced by variable*

hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java

{code}String name = regionInfo.getRegionNameAsString();{code}
{code}LOG.warn("Can't close region: was already closed during close(): " +
regionInfo.getRegionNameAsString()); {code}

In the above two examples, the method invocations are assigned to variables 
before the logging code. In the logging code, these method invocations should be 
replaced by the variables, for simplicity and readability.


*method invocation in return statement*

hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


{code}
public String toString() {
return getRegionInfo().getRegionNameAsString();
  }
{code}


{code}
LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
  + " is not mergeable because it is closing or closed");
{code}

{code}
LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
  + " is not mergeable because it has references");
{code}

{code} 
LOG.info("Running close preflush of " + 
getRegionInfo().getRegionNameAsString());
{code}

In the above examples, "getRegionInfo().getRegionNameAsString()" is exactly what 
the "toString" method of the same class returns. These calls should be 
replaced with "this", for simplicity and readability.


*check the logged variable if it is null*
hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java


{code}
if ((sshUserName != null && sshUserName.length() > 0) ||
(sshOptions != null && sshOptions.length() > 0)) {
  LOG.info("Running with SSH user [" + sshUserName + "] and options [" + 
sshOptions + "]");
}
{code}

hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java


{code}
if ((regionState == null && latestState != null)
  || (regionState != null && latestState == null)
  || (regionState != null && latestState != null
&& latestState.getState() != regionState.getState())) {
LOG.warn("Region state changed from " + regionState + " to "
  + latestState + ", while acquiring lock");
  }
{code}
In the above example, the logged variable could be null at run time. It is a bad 
practice to include null variables inside logs.


*variable in byte printed directly*

hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedUpdater.java


{code}
byte[] rowKey = dataGenerator.getDeterministicUniqueKey(rowKeyBase);
{code}

{code}
LOG.error("Failed to update the row with key = [" + rowKey
  + "], since we could not get the original row");
{code}

rowKey should be printed as Bytes.toString(rowKey).

 

*object toString contains the method invocation*

The toString method returns getServerName(), so "server.getServerName()" 
should be replaced with "server", for simplicity and readability.

hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java

{code}
LOG.info("Clearing out PFFE for server " + server.getServerName());
return getServerName();
{code}
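
To make the suggested refactorings concrete, a short hedged sketch that combines 
them (variable and class names are illustrative, not a patch):
{code}
// Sketch only, illustrating the suggestions above.

// 1. Reuse the already-assigned variable instead of repeating the method call.
String name = regionInfo.getRegionNameAsString();
LOG.warn("Can't close region: was already closed during close(): " + name);

// 2. Inside HRegion, rely on toString() rather than repeating its body.
LOG.debug("Region " + this + " is not mergeable because it is closing or closed");

// 3. Print byte[] keys through Bytes.toString() rather than directly.
LOG.error("Failed to update the row with key = [" + Bytes.toString(rowKey)
    + "], since we could not get the original row");

// 4. Let ServerName.toString() do the work instead of calling getServerName().
LOG.info("Clearing out PFFE for server " + server);
{code}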


  was:
*method invocation replaced by variable*

hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java

line 118: {code}String name = regionInfo.getRegionNameAsString();{code}
line 142: {code}LOG.warn("Can't close region: was already closed during 
close(): " +
regionInfo.getRegionNameAsString()); {code}

In the above two examples, the method invocations are assigned to the variables 
before the logging code. These method invocations should be replaced by 
variables in case of simplicity and readability


*method invocation in return statement*

hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java

line 5455:
{code}
public String toString() {
return getRegionInfo().getRegionNameAsString();
  }
{code}

line 1260:
{code}
LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
  + " is not mergeable because it is closing or closed");
{code}
line 1265:
{code}
LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
  + " is not mergeable because it has references");
{code}
line 1413:
{code} 
LOG.info("Running close preflush of " + 
getRegionInfo().getRegionNameAsString());
{code}

In these above examples, the "getRegionInfo().getRegionNameAsString())" is the 
return statement of method "toString" in the same class. They should be 
replaced with “this”   in case of simplicity and readability.


*check the logged variable if it is null*
hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java

line 88: 
{code}
if ((sshUserName != null && sshUserName.length() > 0) ||
(sshOptions != null && sshOptions.length() > 0)) {
  LOG.info("Running with SSH 

[jira] [Updated] (HBASE-16469) Several log refactoring/improvement suggestions

2017-04-05 Thread Nemo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nemo Chen updated HBASE-16469:
--
Description: 
*method invocation replaced by variable*

hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java

line 118: {code}String name = regionInfo.getRegionNameAsString();{code}
line 142: {code}LOG.warn("Can't close region: was already closed during 
close(): " +
regionInfo.getRegionNameAsString()); {code}

In the above two examples, the method invocations are assigned to variables 
before the logging code. In the logging code, these method invocations should be 
replaced by the variables, for simplicity and readability.


*method invocation in return statement*

hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java

line 5455:
{code}
public String toString() {
return getRegionInfo().getRegionNameAsString();
  }
{code}

line 1260:
{code}
LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
  + " is not mergeable because it is closing or closed");
{code}
line 1265:
{code}
LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
  + " is not mergeable because it has references");
{code}
line 1413:
{code} 
LOG.info("Running close preflush of " + 
getRegionInfo().getRegionNameAsString());
{code}

In the above examples, "getRegionInfo().getRegionNameAsString()" is exactly what 
the "toString" method of the same class returns. These calls should be 
replaced with "this", for simplicity and readability.


*check the logged variable if it is null*
hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java

line 88: 
{code}
if ((sshUserName != null && sshUserName.length() > 0) ||
(sshOptions != null && sshOptions.length() > 0)) {
  LOG.info("Running with SSH user [" + sshUserName + "] and options [" + 
sshOptions + "]");
}
{code}

hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java

line 980:
{code}
if ((regionState == null && latestState != null)
  || (regionState != null && latestState == null)
  || (regionState != null && latestState != null
&& latestState.getState() != regionState.getState())) {
LOG.warn("Region state changed from " + regionState + " to "
  + latestState + ", while acquiring lock");
  }
{code}
In the above example, the logged variable could be null at run time. It is a bad 
practice to include null variables inside logs.


*variable in byte printed directly*

hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedUpdater.java

line 145: 
{code}
byte[] rowKey = dataGenerator.getDeterministicUniqueKey(rowKeyBase);
{code}
line 184:
{code}
LOG.error("Failed to update the row with key = [" + rowKey
  + "], since we could not get the original row");
{code}

rowKey should be printed as Bytes.toString(rowKey).

 

*object toString contains the method invocation*

The toString method returns getServerName(), so "server.getServerName()" 
should be replaced with "server", for simplicity and readability.

hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java

{code}
LOG.info("Clearing out PFFE for server " + server.getServerName());
return getServerName();
{code}


  was:
*method invocation replaced by variable*

hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java

line 118: {code}String name = regionInfo.getRegionNameAsString();{code}
line 142: {code}LOG.warn("Can't close region: was already closed during 
close(): " +
regionInfo.getRegionNameAsString()); {code}

In the above two examples, the method invocations are assigned to the variables 
before the logging code. These method invocations should be replaced by 
variables in case of simplicity and readability


*method invocation in return statement*

hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java

line 5455:
{code}
public String toString() {
return getRegionInfo().getRegionNameAsString();
  }
{code}

line 1260:
{code}
LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
  + " is not mergeable because it is closing or closed");
{code}
line 1265:
{code}
LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
  + " is not mergeable because it has references");
{code}
line 1413:
{code} 
LOG.info("Running close preflush of " + 
getRegionInfo().getRegionNameAsString());
{code}

In these above examples, the "getRegionInfo().getRegionNameAsString())" is the 
return statement of method "toString" in the same class. They should be 
replaced with “this”   in case of simplicity and readability.


*check the logged variable if it is null*
hbase-1.2.2/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java

line 88: 
{code}
if 

[jira] [Updated] (HBASE-16469) Several log refactoring/improvement suggestions

2017-04-05 Thread Nemo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nemo Chen updated HBASE-16469:
--
Description: 
*method invocation replaced by variable*

hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java

line 118: {code}String name = regionInfo.getRegionNameAsString();{code}
line 142: {code}LOG.warn("Can't close region: was already closed during 
close(): " +
regionInfo.getRegionNameAsString()); {code}

In the above two examples, the method invocations are assigned to variables 
before the logging code. In the logging code, these method invocations should be 
replaced by the variables, for simplicity and readability.


*method invocation in return statement*

hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java

line 5455:
{code}
public String toString() {
return getRegionInfo().getRegionNameAsString();
  }
{code}

line 1260:
{code}
LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
  + " is not mergeable because it is closing or closed");
{code}
line 1265:
{code}
LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
  + " is not mergeable because it has references");
{code}
line 1413:
{code} 
LOG.info("Running close preflush of " + 
getRegionInfo().getRegionNameAsString());
{code}

In the above examples, "getRegionInfo().getRegionNameAsString()" is exactly what 
the "toString" method of the same class returns. These calls should be 
replaced with "this", for simplicity and readability.


*check the logged variable if it is null*
hbase-1.2.2/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java

line 88: 
{code}
if ((sshUserName != null && sshUserName.length() > 0) ||
(sshOptions != null && sshOptions.length() > 0)) {
  LOG.info("Running with SSH user [" + sshUserName + "] and options [" + 
sshOptions + "]");
}
{code}

hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java

line 980:
{code}
if ((regionState == null && latestState != null)
  || (regionState != null && latestState == null)
  || (regionState != null && latestState != null
&& latestState.getState() != regionState.getState())) {
LOG.warn("Region state changed from " + regionState + " to "
  + latestState + ", while acquiring lock");
  }
{code}
In the above example, the logged variable could be null at run time. It is a bad 
practice to include null variables inside logs.


*variable in byte printed directly*

hbase-1.2.2/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedUpdater.java

line 145: 
{code}
byte[] rowKey = dataGenerator.getDeterministicUniqueKey(rowKeyBase);
{code}
line 184:
{code}
LOG.error("Failed to update the row with key = [" + rowKey
  + "], since we could not get the original row");
{code}

rowKey should be printed as Bytes.toString(rowKey).

 

*object toString contains the method invocation*

hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java

{code}
LOG.warn("#" + id + ", the task was rejected by the pool. This is unexpected."+ 
" Server is "+ server.getServerName(),t);
{code}
server is an instance of class ServerName, we found ServerName.java:

hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java
{code}
  @Override
  public String toString() {
return getServerName();
  }
{code}
the toString method returns getServerName(), so "server.getServerName()" 
should be replaced with "server", for simplicity and readability.

Similar examples are in:

hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java

{code}
LOG.info("Clearing out PFFE for server " + server.getServerName());
return getServerName();
{code}

hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java

line 705: 
{code} LOG.debug(getName() + ": disconnecting client " + c.getHostAddress()); 
{code}

line 1259:
{code} 
public String toString() {
  return getHostAddress() + ":" + remotePort;
}
{code}


  was:
*method invocation replaced by variable*

hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/example/LongTermArchivingHFileCleaner.java

line 57: {code}Path file = fStat.getPath();{code}

line 74: {code}LOG.error("Failed to lookup status of:" + fStat.getPath() + ", 
keeping it just incase.", e); {code}

hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java

line 118: {code}String name = regionInfo.getRegionNameAsString();{code}
line 142: {code}LOG.warn("Can't close region: was already closed during 
close(): " +
regionInfo.getRegionNameAsString()); {code}

In the above two examples, the method invocations are assigned to the variables 
before the logging code. These 

[jira] [Updated] (HBASE-16469) Several log refactoring/improvement suggestions

2017-04-05 Thread Nemo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nemo Chen updated HBASE-16469:
--
Status: Patch Available  (was: Open)

> Several log refactoring/improvement suggestions
> ---
>
> Key: HBASE-16469
> URL: https://issues.apache.org/jira/browse/HBASE-16469
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability
>Affects Versions: 1.2.5
>Reporter: Nemo Chen
>Assignee: Nemo Chen
>  Labels: easyfix, easytest
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-16469.master.001.patch
>
>
> *method invocation replaced by variable*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/example/LongTermArchivingHFileCleaner.java
> line 57: {code}Path file = fStat.getPath();{code}
> line 74: {code}LOG.error("Failed to lookup status of:" + fStat.getPath() + ", 
> keeping it just incase.", e); {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java
> line 118: {code}String name = regionInfo.getRegionNameAsString();{code}
> line 142: {code}LOG.warn("Can't close region: was already closed during 
> close(): " +
> regionInfo.getRegionNameAsString()); {code}
> In the above two examples, the method invocations are assigned to variables 
> before the logging code. In the logging code, these method invocations should be 
> replaced by the variables, for simplicity and readability.
> 
> *method invocation in return statement*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
> line 5455:
> {code}
> public String toString() {
> return getRegionInfo().getRegionNameAsString();
>   }
> {code}
> line 1260:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it is closing or closed");
> {code}
> line 1265:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it has references");
> {code}
> line 1413:
> {code} 
> LOG.info("Running close preflush of " + 
> getRegionInfo().getRegionNameAsString());
> {code}
> In the above examples, "getRegionInfo().getRegionNameAsString()" is exactly what 
> the "toString" method of the same class returns. These calls should be 
> replaced with "this", for simplicity and readability.
> 
> *check the logged variable if it is null*
> hbase-1.2.2/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
> line 88: 
> {code}
> if ((sshUserName != null && sshUserName.length() > 0) ||
> (sshOptions != null && sshOptions.length() > 0)) {
>   LOG.info("Running with SSH user [" + sshUserName + "] and options [" + 
> sshOptions + "]");
> }
> {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
> line 980:
> {code}
> if ((regionState == null && latestState != null)
>   || (regionState != null && latestState == null)
>   || (regionState != null && latestState != null
> && latestState.getState() != regionState.getState())) {
> LOG.warn("Region state changed from " + regionState + " to "
>   + latestState + ", while acquiring lock");
>   }
> {code}
> In the above example, the logged variable could be null at run time. It is a 
> bad practice to include null variables inside logs.
> 
> *variable in byte printed directly*
> hbase-1.2.2/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedUpdater.java
> line 145: 
> {code}
> byte[] rowKey = dataGenerator.getDeterministicUniqueKey(rowKeyBase);
> {code}
> line 184:
> {code}
> LOG.error("Failed to update the row with key = [" + rowKey
>   + "], since we could not get the original row");
> {code}
> rowKey should be printed as Bytes.toString(rowKey).
>  
> *object toString contains the method invocation*
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
> {code}
> LOG.warn("#" + id + ", the task was rejected by the pool. This is 
> unexpected."+ " Server is "+ server.getServerName(),t);
> {code}
> server is an instance of class ServerName, we found ServerName.java:
> hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java
> {code}
>   @Override
>   public String toString() {
> return getServerName();
>   }
> {code}
> the toString method returns getServerName(), so "server.getServerName()" 
> should be replaced with "server", for simplicity and readability.
> Similar examples are in:
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java
> {code}
> LOG.info("Clearing out PFFE for server " + server.getServerName());
> return getServerName();
> {code}
> 

[jira] [Updated] (HBASE-16469) Several log refactoring/improvement suggestions

2017-04-05 Thread Nemo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nemo Chen updated HBASE-16469:
--
Attachment: HBASE-16469.master.001.patch

> Several log refactoring/improvement suggestions
> ---
>
> Key: HBASE-16469
> URL: https://issues.apache.org/jira/browse/HBASE-16469
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability
>Affects Versions: 1.2.5
>Reporter: Nemo Chen
>Assignee: Nemo Chen
>  Labels: easyfix, easytest
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-16469.master.001.patch
>
>
> *method invocation replaced by variable*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/example/LongTermArchivingHFileCleaner.java
> line 57: {code}Path file = fStat.getPath();{code}
> line 74: {code}LOG.error("Failed to lookup status of:" + fStat.getPath() + ", 
> keeping it just incase.", e); {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java
> line 118: {code}String name = regionInfo.getRegionNameAsString();{code}
> line 142: {code}LOG.warn("Can't close region: was already closed during 
> close(): " +
> regionInfo.getRegionNameAsString()); {code}
> In the above two examples, the method invocations are assigned to variables 
> before the logging code. In the logging code, these method invocations should be 
> replaced by the variables, for simplicity and readability.
> 
> *method invocation in return statement*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
> line 5455:
> {code}
> public String toString() {
> return getRegionInfo().getRegionNameAsString();
>   }
> {code}
> line 1260:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it is closing or closed");
> {code}
> line 1265:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it has references");
> {code}
> line 1413:
> {code} 
> LOG.info("Running close preflush of " + 
> getRegionInfo().getRegionNameAsString());
> {code}
> In the above examples, "getRegionInfo().getRegionNameAsString()" is exactly what 
> the "toString" method of the same class returns. These calls should be 
> replaced with "this", for simplicity and readability.
> 
> *check the logged variable if it is null*
> hbase-1.2.2/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
> line 88: 
> {code}
> if ((sshUserName != null && sshUserName.length() > 0) ||
> (sshOptions != null && sshOptions.length() > 0)) {
>   LOG.info("Running with SSH user [" + sshUserName + "] and options [" + 
> sshOptions + "]");
> }
> {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
> line 980:
> {code}
> if ((regionState == null && latestState != null)
>   || (regionState != null && latestState == null)
>   || (regionState != null && latestState != null
> && latestState.getState() != regionState.getState())) {
> LOG.warn("Region state changed from " + regionState + " to "
>   + latestState + ", while acquiring lock");
>   }
> {code}
> In the above example, the logged variable could be null at run time. It is a 
> bad practice to include null variables inside logs.
> 
> *variable in byte printed directly*
> hbase-1.2.2/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedUpdater.java
> line 145: 
> {code}
> byte[] rowKey = dataGenerator.getDeterministicUniqueKey(rowKeyBase);
> {code}
> line 184:
> {code}
> LOG.error("Failed to update the row with key = [" + rowKey
>   + "], since we could not get the original row");
> {code}
> rowKey should be printed as Bytes.toString(rowKey).
>  
> *object toString contains the method invocation*
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
> {code}
> LOG.warn("#" + id + ", the task was rejected by the pool. This is 
> unexpected."+ " Server is "+ server.getServerName(),t);
> {code}
> server is an instance of class ServerName, we found ServerName.java:
> hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java
> {code}
>   @Override
>   public String toString() {
> return getServerName();
>   }
> {code}
> the toString method returns getServerName(), so "server.getServerName()" 
> should be replaced with "server", for simplicity and readability.
> Similar examples are in:
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java
> {code}
> LOG.info("Clearing out PFFE for server " + server.getServerName());
> return getServerName();
> {code}
> 

[jira] [Updated] (HBASE-16469) Several log refactoring/improvement suggestions

2017-04-05 Thread Nemo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nemo Chen updated HBASE-16469:
--
Attachment: (was: HBASE-16469.master.001.patch)

> Several log refactoring/improvement suggestions
> ---
>
> Key: HBASE-16469
> URL: https://issues.apache.org/jira/browse/HBASE-16469
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability
>Affects Versions: 1.2.5
>Reporter: Nemo Chen
>Assignee: Nemo Chen
>  Labels: easyfix, easytest
> Fix For: 2.0.0, 1.5.0
>
>
> *method invocation replaced by variable*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/example/LongTermArchivingHFileCleaner.java
> line 57: {code}Path file = fStat.getPath();{code}
> line 74: {code}LOG.error("Failed to lookup status of:" + fStat.getPath() + ", 
> keeping it just incase.", e); {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java
> line 118: {code}String name = regionInfo.getRegionNameAsString();{code}
> line 142: {code}LOG.warn("Can't close region: was already closed during 
> close(): " +
> regionInfo.getRegionNameAsString()); {code}
> In the above two examples, the method invocations are assigned to variables 
> before the logging code. In the logging code, these method invocations should be 
> replaced by the variables, for simplicity and readability.
> 
> *method invocation in return statement*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
> line 5455:
> {code}
> public String toString() {
> return getRegionInfo().getRegionNameAsString();
>   }
> {code}
> line 1260:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it is closing or closed");
> {code}
> line 1265:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it has references");
> {code}
> line 1413:
> {code} 
> LOG.info("Running close preflush of " + 
> getRegionInfo().getRegionNameAsString());
> {code}
> In the above examples, "getRegionInfo().getRegionNameAsString()" is exactly what 
> the "toString" method of the same class returns. These calls should be 
> replaced with "this", for simplicity and readability.
> 
> *check the logged variable if it is null*
> hbase-1.2.2/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
> line 88: 
> {code}
> if ((sshUserName != null && sshUserName.length() > 0) ||
> (sshOptions != null && sshOptions.length() > 0)) {
>   LOG.info("Running with SSH user [" + sshUserName + "] and options [" + 
> sshOptions + "]");
> }
> {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
> line 980:
> {code}
> if ((regionState == null && latestState != null)
>   || (regionState != null && latestState == null)
>   || (regionState != null && latestState != null
> && latestState.getState() != regionState.getState())) {
> LOG.warn("Region state changed from " + regionState + " to "
>   + latestState + ", while acquiring lock");
>   }
> {code}
> In the above example, the logged variable could be null at run time. It is a 
> bad practice to include null variables inside logs.
> 
> *variable in byte printed directly*
> hbase-1.2.2/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedUpdater.java
> line 145: 
> {code}
> byte[] rowKey = dataGenerator.getDeterministicUniqueKey(rowKeyBase);
> {code}
> line 184:
> {code}
> LOG.error("Failed to update the row with key = [" + rowKey
>   + "], since we could not get the original row");
> {code}
> rowKey should be printed as Bytes.toString(rowKey).
>  
> *object toString contains the method invocation*
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
> {code}
> LOG.warn("#" + id + ", the task was rejected by the pool. This is 
> unexpected."+ " Server is "+ server.getServerName(),t);
> {code}
> server is an instance of class ServerName, we found ServerName.java:
> hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java
> {code}
>   @Override
>   public String toString() {
> return getServerName();
>   }
> {code}
> the toString method returns getServerName(), so "server.getServerName()" 
> should be replaced with "server", for simplicity and readability.
> Similar examples are in:
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java
> {code}
> LOG.info("Clearing out PFFE for server " + server.getServerName());
> return getServerName();
> {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
> 

[jira] [Commented] (HBASE-17227) Backport HBASE-17206 to branch-1.3

2017-04-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958070#comment-15958070
 ] 

Hudson commented on HBASE-17227:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #139 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/139/])
HBASE-17227 Backported HBASE-17206 to branch-1.3 (antonov: rev 
eec476677444922591903d0c255912e7c2f8d2f1)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java


> Backport HBASE-17206 to branch-1.3
> --
>
> Key: HBASE-17227
> URL: https://issues.apache.org/jira/browse/HBASE-17227
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.3.0
>Reporter: Duo Zhang
>Assignee: Jan Hentschel
>Priority: Critical
> Fix For: 1.3.1
>
> Attachments: HBASE-17227.branch-1.3.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17206) FSHLog may roll a new writer successfully with unflushed entries

2017-04-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958071#comment-15958071
 ] 

Hudson commented on HBASE-17206:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #139 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/139/])
HBASE-17227 Backported HBASE-17206 to branch-1.3 (antonov: rev 
eec476677444922591903d0c255912e7c2f8d2f1)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java


> FSHLog may roll a new writer successfully with unflushed entries
> 
>
> Key: HBASE-17206
> URL: https://issues.apache.org/jira/browse/HBASE-17206
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.2.5, 1.1.8
>
> Attachments: HBASE-17206.patch
>
>
> Found it when debugging the flakey TestFailedAppendAndSync.
> The problem is in waitSafePoint.
> {code}
>   while (true) {
> if (this.safePointAttainedLatch.await(1, TimeUnit.MILLISECONDS)) {
>   break;
> }
> if (syncFuture.isThrowable()) {
>   throw new 
> FailedSyncBeforeLogCloseException(syncFuture.getThrowable());
> }
>   }
>   return syncFuture;
> {code}
> If we attain the safe point quickly enough then we will bypass the 
> syncFuture.isThrowable check and will not throw 
> FailedSyncBeforeLogCloseException.
> This may cause inconsistency between the memstore and the WAL.
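> 
> A hedged sketch of one possible shape of the fix (not necessarily the committed 
> patch): re-check the sync future even when the safe point is attained quickly.
> {code}
> // Sketch only: always surface a failed sync, even if the safe point
> // latch is attained before the isThrowable() check is reached.
> while (!this.safePointAttainedLatch.await(1, TimeUnit.MILLISECONDS)) {
>   if (syncFuture.isThrowable()) {
>     throw new FailedSyncBeforeLogCloseException(syncFuture.getThrowable());
>   }
> }
> if (syncFuture.isThrowable()) {
>   throw new FailedSyncBeforeLogCloseException(syncFuture.getThrowable());
> }
> return syncFuture;
> {code}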



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17883) release 1.4.0

2017-04-05 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958058#comment-15958058
 ] 

Sean Busbey commented on HBASE-17883:
-

Kicked a bunch of stuff out of 1.4.0. There are still 11 unresolved JIRAs; 9 of 
them are "patch available" JIRAs that weren't obviously unready.

> release 1.4.0
> -
>
> Key: HBASE-17883
> URL: https://issues.apache.org/jira/browse/HBASE-17883
> Project: HBase
>  Issue Type: Task
>  Components: community
>Affects Versions: 1.4.0
>Reporter: Sean Busbey
>Priority: Critical
> Fix For: 1.4.0
>
>
> Let's start working through doing the needful; it's been almost 3 months since 
> 1.3.0.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14845) hbase-server leaks jdk.tools dependency to mapreduce consumers

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14845:

Fix Version/s: (was: 1.4.0)
   1.5.0

> hbase-server leaks jdk.tools dependency to mapreduce consumers
> --
>
> Key: HBASE-14845
> URL: https://issues.apache.org/jira/browse/HBASE-14845
> Project: HBase
>  Issue Type: Bug
>  Components: build, dependencies
>Affects Versions: 2.0.0, 0.98.14, 1.2.0, 1.1.2, 1.3.0, 1.0.3
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-14845.1.patch
>
>
> HBASE-13963 / HBASE-14844 take care of removing leaks of our dependency on 
> jdk-tools.
> Until we move the mapreduce support classes out of hbase-server 
> (HBASE-11843), we need to also avoid leaking the dependency from that module.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16196) Update jruby to a newer version.

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-16196:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Update jruby to a newer version.
> 
>
> Key: HBASE-16196
> URL: https://issues.apache.org/jira/browse/HBASE-16196
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, shell
>Reporter: Elliott Clark
>Assignee: Matt Mullins
>Priority: Critical
> Fix For: 2.0.0, 1.5.0
>
> Attachments: 0001-Update-to-JRuby-9.1.2.0-and-JLine-2.12.patch, 
> hbase-16196.branch-1.patch, hbase-16196.v2.branch-1.patch, 
> hbase-16196.v3.branch-1.patch, hbase-16196.v4.branch-1.patch
>
>
> Ruby 1.8.7 is no longer maintained.
> The TTY library in the old jruby is bad. The newer one is less bad.
> Since this is only a dependency on the hbase-shell module and not on 
> hbase-client or hbase-server this should be a pretty simple thing that 
> doesn't have any backwards compat issues.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16030) All Regions are flushed at about same time when MEMSTORE_PERIODIC_FLUSH is on, causing flush spike

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-16030:

Fix Version/s: (was: 1.3.2)
   (was: 1.2.6)
   (was: 1.4.0)

> All Regions are flushed at about same time when MEMSTORE_PERIODIC_FLUSH is 
> on, causing flush spike
> --
>
> Key: HBASE-16030
> URL: https://issues.apache.org/jira/browse/HBASE-16030
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.2.1
>Reporter: Tianying Chang
>Assignee: Tianying Chang
> Fix For: 2.0.0
>
> Attachments: hbase-16030.patch, hbase-16030-v2.patch, 
> hbase-16030-v3.patch, Screen Shot 2016-06-15 at 11.35.42 PM.png, Screen Shot 
> 2016-06-15 at 11.52.38 PM.png
>
>
> In our production cluster, we observed that memstore flushes spike every hour 
> for all regions/RS (we use the default memstore periodic flush time of 1 
> hour). 
> This happens when two conditions are met: 
> 1. the memstore does not have enough data to be flushed before the 1 hour limit 
> is reached;
> 2. all regions are opened around the same time (e.g. all RS are started at 
> the same time when starting a cluster). 
> With the above two conditions, all the regions will be flushed around the same 
> time, at startTime+1hour-delay, again and again.
> We added a flush jittering time to randomize the flush time of each region, 
> so that they don't all get flushed at around the same time. We had this feature 
> running in our 94.7 and 94.26 clusters. Recently, we upgraded to 1.2 and found 
> the issue is still there in 1.2, so we are porting this into the 1.2 branch.
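> 
> A hedged sketch of the jitter idea (the 10% jitter fraction is an assumption for 
> illustration, not necessarily the value we shipped):
> {code}
> // Sketch only: add a per-region random delay so regions opened together
> // do not all hit the periodic flush threshold at the same instant.
> long flushInterval = conf.getLong("hbase.regionserver.optionalcacheflushinterval",
>     3600000L);                                   // default: 1 hour
> long jitter = (long) (java.util.concurrent.ThreadLocalRandom.current().nextDouble()
>     * 0.1 * flushInterval);                      // up to 10% extra, per region
> long effectiveInterval = flushInterval + jitter; // compare memstore age against this
> {code}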



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15691) Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to branch-1

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15691:

Fix Version/s: (was: 1.4.0)
   1.5.0
   1.4.1

> Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to 
> branch-1
> -
>
> Key: HBASE-15691
> URL: https://issues.apache.org/jira/browse/HBASE-15691
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.3.0
>Reporter: Andrew Purtell
>Assignee: Stephen Yuan Jiang
> Fix For: 1.2.6, 1.3.2, 1.4.1, 1.5.0
>
> Attachments: HBASE-15691-branch-1.patch, HBASE-15691.v2-branch-1.patch
>
>
> HBASE-10205 was committed to trunk and 0.98 branches only. To preserve 
> continuity we should commit it to branch-1. The change requires more than 
> nontrivial fixups so I will attach a backport of the change from trunk to 
> current branch-1 here. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-13017) Backport HBASE-12035 Keep table state in Meta to branch-1

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13017:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Backport HBASE-12035 Keep table state in Meta to branch-1
> -
>
> Key: HBASE-13017
> URL: https://issues.apache.org/jira/browse/HBASE-13017
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 1.1.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
>  Labels: backport
> Fix For: 1.5.0
>
> Attachments: HBASE-13017-branch-1.patch, 
> HBASE-13017-branch-1.v1.patch, HBASE-13017-branch-1.v1.patch, 
> HBASE-13017-branch-1.v2.patch, HBASE-13017-branch-1.v3.patch, 
> HBASE-13017-branch-1.v4.patch, HBASE-13017-branch-1.v5.patch, 
> HBASE-13017-branch-1.v6.patch
>
>
> Let's backport that feature to branch-1.0, adapting HBASE-12035.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15291) FileSystem not closed in secure bulkLoad

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15291:

Fix Version/s: (was: 1.4.0)
   1.4.1

> FileSystem not closed in secure bulkLoad
> 
>
> Key: HBASE-15291
> URL: https://issues.apache.org/jira/browse/HBASE-15291
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.2, 0.98.16.1
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Fix For: 2.0.0, 1.2.1, 0.98.18, 1.3.2, 1.4.1
>
> Attachments: HBASE-15291.001.patch, HBASE-15291.002.patch, 
> HBASE-15291.003.patch, HBASE-15291.004.patch, HBASE-15291.addendum, 
> HBASE-15291-revert-master.patch, patch
>
>
> The FileSystem is not closed in secure bulkLoad after the bulk load finishes; this 
> causes memory usage to grow more and more when there are many bulk loads.
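> 
> A hedged sketch of the fix direction (assuming the bulk load obtains its own 
> FileSystem instance; names are illustrative):
> {code}
> // Sketch only: release the FileSystem obtained for the bulk load once it
> // completes, instead of leaving it referenced forever.
> FileSystem fs = FileSystem.newInstance(conf);   // per-bulk-load instance (assumed)
> try {
>   // perform the secure bulk load with fs here
> } finally {
>   fs.close();   // prevents per-load FileSystem objects from accumulating
> }
> {code}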



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16750) hbase compilation failed on power system

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-16750:

Fix Version/s: (was: 1.4.0)

> hbase compilation failed on power system
> 
>
> Key: HBASE-16750
> URL: https://issues.apache.org/jira/browse/HBASE-16750
> Project: HBase
>  Issue Type: Bug
>  Components: build, documentation
>Affects Versions: 1.1.2
>Reporter: Saravanan Krishnamoorthy
>Assignee: Saravanan Krishnamoorthy
> Fix For: 2.0.0
>
> Attachments: apache_hbase_reference_guide.pdf, 
> apache_hbase_reference_guide.pdfmarks, book.pdf, book.pdfmarks, 
> HBASE-16750.branch-1.patch, HBASE-16750.master.patch
>
>
> Hi,
> hbase compilation failed on IBM power system ppc64le architecture with below 
> error:
> {code}
> Hbase Failure:
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 04:33 min
> [INFO] Finished at: 2016-09-30T08:58:47-04:00
> [INFO] Final Memory: 215M/843M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.asciidoctor:asciidoctor-maven-plugin:1.5.2.1:process-asciidoc 
> (output-pdf) on project hbase: Execution output-pdf of goal 
> org.asciidoctor:asciidoctor-maven-plugin:1.5.2.1:process-asciidoc failed: 
> (NotImplementedError) fstat unimplemented unsupported or native support 
> failed to load -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.asciidoctor:asciidoctor-maven-plugin:1.5.2.1:process-asciidoc 
> (output-pdf) on project hbase: Execution output-pdf of goal 
> org.asciidoctor:asciidoctor-maven-plugin:1.5.2.1:process-asciidoc failed: 
> (NotImplementedError) fstat unimplemented unsupported or native support 
> failed to load
>   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
>   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
>   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
>   at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
>   at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
>   at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
>   at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
>   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
>   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
>   at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
>   at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
>   at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
>   at org.apache.maven.cli.MavenCli.main(MavenCli.java:199)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
>   at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
>   at 
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
>   at 
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
> Caused by: org.apache.maven.plugin.PluginExecutionException: Execution 
> output-pdf of goal 
> org.asciidoctor:asciidoctor-maven-plugin:1.5.2.1:process-asciidoc failed: 
> (NotImplementedError) fstat unimplemented unsupported or native support 
> failed to load
>   at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:145)
>   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
>   ... 20 more
> Caused by: org.jruby.exceptions.RaiseException: (NotImplementedError) fstat 
> unimplemented unsupported or native support failed to load
>   at org.jruby.RubyFile.size(org/jruby/RubyFile.java:1108)
>   at 
> RUBY.render_body(/grid/0/jenkins/.m2/repository/org/asciidoctor/asciidoctorj-pdf/1.5.0-alpha.6/asciidoctorj-pdf-1.5.0-alpha.6.jar!/gems/pdf-core-0.2.5/lib/pdf/core/document_state.rb:69)
>   at 
> 

[jira] [Updated] (HBASE-15835) HBaseTestingUtility#startMiniCluster throws "HMasterAddress already in use" RuntimeException when a local instance of HBase is running

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15835:

Fix Version/s: (was: 1.3.2)
   (was: 1.4.0)

> HBaseTestingUtility#startMiniCluster throws "HMasterAddress already in use" 
> RuntimeException when a local instance of HBase is running
> --
>
> Key: HBASE-15835
> URL: https://issues.apache.org/jira/browse/HBASE-15835
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>  Labels: easyfix
> Fix For: 2.0.0
>
> Attachments: HBASE-15835-v1.patch, HBASE-15835-v2.patch, 
> HBASE-15835-v3.patch
>
>
> When a MiniCluster is being started with the 
> {{HBaseTestUtility#startMiniCluster}} method (most typically in the context 
> of JUnit testing), if a local HBase instance is already running (or for that 
> matter, another thread with another MiniCluster is already running), the 
> startup will fail with a RuntimeException saying "HMasterAddress already in 
> use", referring explicitly to contention for the same default master info 
> port (16010).
> This problem most recently came up in conjunction with HBASE-14876 and its 
> sub-JIRAs (development of new HBase-oriented Maven archetypes), but this is 
> apparently a known issue to veteran developers, who tend to set up the 
> @BeforeClass sections of their test modules with code similar to the 
> following:
> {code}
> UTIL = HBaseTestingUtility.createLocalHTU();
> // disable UI's on test cluster.
> UTIL.getConfiguration().setInt("hbase.master.info.port", -1);
> UTIL.getConfiguration().setInt("hbase.regionserver.info.port", -1);
> UTIL.startMiniCluster();
> {code}
> A comprehensive solution modeled on this should be put directly into 
> HBaseTestingUtility's main constructor, using one of the following options:
> OPTION 1 (always force random port assignment):
> {code}
> this.getConfiguration().setInt(HConstants.MASTER_INFO_PORT, -1);
> this.getConfiguration().setInt(HConstants.REGIONSERVER_PORT, -1);
> {code}
> OPTION 2 (always force random port assignment if user has not explicitly 
> defined alternate port):
> {code}
> Configuration conf = this.getConfiguration();
> if (conf.getInt(HConstants.MASTER_INFO_PORT, 
> HConstants.DEFAULT_MASTER_INFOPORT)
> == HConstants.DEFAULT_MASTER_INFOPORT) {
>   conf.setInt(HConstants.MASTER_INFO_PORT, -1);
> }
> if (conf.getInt(HConstants.REGIONSERVER_PORT, 
> HConstants.DEFAULT_REGIONSERVER_PORT)
> == HConstants.DEFAULT_REGIONSERVER_PORT) {
>   conf.setInt(HConstants.REGIONSERVER_PORT, -1);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17785) RSGroupBasedLoadBalancer fails to assign new table regions when cloning snapshot

2017-04-05 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-17785:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> RSGroupBasedLoadBalancer fails to assign new table regions when cloning 
> snapshot
> 
>
> Key: HBASE-17785
> URL: https://issues.apache.org/jira/browse/HBASE-17785
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0
>
> Attachments: HBASE-17785.patch
>
>
> A novice starting out with RSGroupBasedLoadBalancer will want to enable it 
> and, before assigning tables to groups, may want to create some test tables. 
> Currently that does not work when creating a table by cloning a snapshot, in 
> a surprising way. All regions of the table fail to open yet it is moved into 
> ENABLED state. The client hangs indefinitely. 
> {noformat}
> 2017-03-14 19:25:49,833 INFO  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> snapshot.CloneSnapshotHandler: Clone snapshot=seed on table=test_1 completed!
> 2017-03-14 19:25:49,871 INFO  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> hbase.MetaTableAccessor: Added 25
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is null
> 2017-03-14 19:25:49,875 WARN  [MASTER_TABLE_OPERATIONS-ip-172-31-5-95:8100-0] 
> rsgroup.RSGroupBasedLoadBalancer: Group for table test_1 is 

[jira] [Updated] (HBASE-17302) The region flush request disappeared from flushQueue

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-17302:

Status: Patch Available  (was: Reopened)

> The region flush request disappeared from flushQueue
> 
>
> Key: HBASE-17302
> URL: https://issues.apache.org/jira/browse/HBASE-17302
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.4, 0.98.23, 2.0.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17302-branch-1.2-v1.patch, 
> HBASE-17302-branch-1-addendum.patch, HBASE-17302-branch-1-addendum-v1.patch, 
> HBASE-17302-branch-master-v1.patch, HBASE-17302-master-addendum.patch, 
> HBASE-17302-master-addendum-v1.patch
>
>
> When a region has too many store files, its flush is delayed up to blockingWaitTime ms and 
> the region flush request is requeued into the flushQueue.
> When the region flush request is requeued into the flushQueue frequently, the 
> request sometimes disappears inexplicably.
> But regionsInQueue still contains the entry for that region, 
> which means no new flush request can be inserted into the flushQueue.
> From then on, the region is never flushed.
> In order to locate the problem, I added a lot of logging to the code.
> {code:title=MemStoreFlusher.java|borderStyle=solid}
> private boolean flushRegion(final HRegion region, final boolean 
> emergencyFlush) {
> long startTime = 0;
> synchronized (this.regionsInQueue) {
>   FlushRegionEntry fqe = this.regionsInQueue.remove(region);
>   // Use the start time of the FlushRegionEntry if available
>   if (fqe != null) {
>   startTime = fqe.createTime;
>   }
>   if (fqe != null && emergencyFlush) {
>   // Need to remove from region from delay queue.  When NOT an
>   // emergencyFlush, then item was removed via a flushQueue.poll.
>   flushQueue.remove(fqe);
>  }
> }
> {code}
> When an emergencyFlush is encountered, the region's flush entry is removed from the 
> flushQueue directly.
> Comparing the flushQueue content before and after the remove shows that although RegionA 
> should have been removed, RegionB may be removed instead.
> {code:title=MemStoreFlusher.java|borderStyle=solid}
> public boolean equals(Object obj) {
>   if (this == obj) {
>   return true;
>   }
>   if (obj == null || getClass() != obj.getClass()) {
>   return false;
>   }
>   Delayed other = (Delayed) obj;
>   return compareTo(other) == 0;
> }
> {code}
> FlushRegionEntry implements equals() by comparing only the 
> delay time, so if two different regions happen to have the same delay time, 
> entry A can be mistaken for entry B.
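> The hazard can be reproduced outside HBase in a few lines (an illustrative standalone snippet, 
> not HBase code and not the attached patch; the class and field names are made up):
> {code}
> import java.util.concurrent.DelayQueue;
> import java.util.concurrent.Delayed;
> import java.util.concurrent.TimeUnit;
> 
> public class DelayOnlyEqualsDemo {
>   static class Entry implements Delayed {
>     final String region;
>     final long whenToExpire; // absolute deadline in ms, like FlushRegionEntry
> 
>     Entry(String region, long whenToExpire) {
>       this.region = region;
>       this.whenToExpire = whenToExpire;
>     }
> 
>     @Override
>     public long getDelay(TimeUnit unit) {
>       return unit.convert(whenToExpire - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
>     }
> 
>     @Override
>     public int compareTo(Delayed other) {
>       return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
>     }
> 
>     // Delay-only equality, mirroring the bug: the region identity is ignored.
>     @Override
>     public boolean equals(Object obj) {
>       if (this == obj) return true;
>       if (obj == null || getClass() != obj.getClass()) return false;
>       return whenToExpire == ((Entry) obj).whenToExpire;
>     }
> 
>     @Override
>     public int hashCode() {
>       return (int) whenToExpire;
>     }
>   }
> 
>   public static void main(String[] args) {
>     long deadline = System.currentTimeMillis() + 10_000;
>     Entry regionA = new Entry("regionA", deadline);
>     Entry regionB = new Entry("regionB", deadline);
> 
>     DelayQueue<Entry> flushQueue = new DelayQueue<>();
>     flushQueue.add(regionB);    // only regionB's flush is queued
> 
>     flushQueue.remove(regionA); // regionA is absent, but it equals regionB...
>     System.out.println(flushQueue.size()); // ...so this prints 0: regionB's flush is gone
>   }
> }
> {code}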



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14470) Reduce memory pressure generated by client

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14470:

Fix Version/s: (was: 1.4.0)

> Reduce memory pressure generated by client
> --
>
> Key: HBASE-14470
> URL: https://issues.apache.org/jira/browse/HBASE-14470
> Project: HBase
>  Issue Type: Task
>  Components: Client, Performance
>Affects Versions: 1.3.0
>Reporter: Nick Dimiduk
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: allocation-by-class.jpg, allocation-by-thread.jpg, 
> c+ma.jfc, object-stats.jpg
>
>
> I think there's room for improvement in our client's memory profile. I ran 
> ltt with jfr running, attaching some snaps of what my client sees. Looks like 
> some kind of object pool or block encoding for result objects will give us a 
> lot of bang for the buck re: allocations and GC pressure. We probably also 
> want to look for an alternative way to represent result objects, something 
> besides the java Map interface with it's Entry bloat.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17884) Backport HBASE-16217 to branch-1

2017-04-05 Thread Gary Helmling (JIRA)
Gary Helmling created HBASE-17884:
-

 Summary: Backport HBASE-16217 to branch-1
 Key: HBASE-17884
 URL: https://issues.apache.org/jira/browse/HBASE-17884
 Project: HBase
  Issue Type: Sub-task
Reporter: Gary Helmling


The change to add calling user to ObserverContext in HBASE-16217 should also be 
applied to branch-1 to avoid use of UserGroupInformation.doAs() for access 
control checks.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16217) Identify calling user in ObserverContext

2017-04-05 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling updated HBASE-16217:
--
   Resolution: Fixed
Fix Version/s: (was: 1.4.0)
   Status: Resolved  (was: Patch Available)

This was committed to master quite a while ago and the patch against branch-1 
has gone way stale while waiting on a hibernating HadoopQA.  I'll close this 
out and open a separate JIRA for a backport.

> Identify calling user in ObserverContext
> 
>
> Key: HBASE-16217
> URL: https://issues.apache.org/jira/browse/HBASE-16217
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, security
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Fix For: 2.0.0
>
> Attachments: HBASE-16217.branch-1.001.patch, 
> HBASE-16217.master.001.patch, HBASE-16217.master.002.patch, 
> HBASE-16217.master.003.patch
>
>
> We already either explicitly pass down the relevant User instance initiating 
> an action through the call path, or it is available through 
> RpcServer.getRequestUser().  We should carry this through in the 
> ObserverContext for coprocessor upcalls and make use of it for permissions 
> checking.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-13903) Speedup IdLock

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13903:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Speedup IdLock
> --
>
> Key: HBASE-13903
> URL: https://issues.apache.org/jira/browse/HBASE-13903
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 2.0.0, 1.0.1, 1.1.0, 0.98.13
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-13903-v0.patch, IdLockPerf.java
>
>
> While testing the read path, I ended up with the profiler showing a lot of 
> time spent in IdLock.
> The IdLock is used by the HFileReader and the BucketCache, so you'll see a 
> lot of it when you have a hotspot on an hfile.
> We end up blocked on that synchronized() and with too many calls to 
> map.putIfAbsent().
> {code}
> public Entry getLockEntry(long id) throws IOException {
>   while ((existing = map.putIfAbsent(entry.id, entry)) != null) {
> synchronized (existing) {
>   ...
> }
> // If the entry is not locked, it might already be deleted from the
> // map, so we cannot return it. We need to get our entry into the map
> // or get someone else's locked entry.
>   }
> }
> public void releaseLockEntry(Entry entry) {
>   synchronized (entry) {
> ...
>   }
> }
> {code}
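> For context, a sketch of the typical caller pattern around this API (based only on the 
> getLockEntry/releaseLockEntry signatures quoted above; the block-offset key and the 
> org.apache.hadoop.hbase.util import path are assumptions), which shows why a hot hfile 
> funnels every reader thread through the same lock entry:
> {code}
> import java.io.IOException;
> import org.apache.hadoop.hbase.util.IdLock;
> 
> public class IdLockCallerSketch {
>   private final IdLock offsetLock = new IdLock();
> 
>   void readBlock(long blockOffset) throws IOException {
>     // Every reader of the same block offset serializes on this entry,
>     // which is where the profiler time reported above shows up.
>     IdLock.Entry lockEntry = offsetLock.getLockEntry(blockOffset);
>     try {
>       // read-or-load the block for this offset exactly once
>     } finally {
>       offsetLock.releaseLockEntry(lockEntry);
>     }
>   }
> }
> {code}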



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16071) The VisibilityLabelFilter and AccessControlFilter should not count the "delete cell"

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-16071:

Fix Version/s: (was: 1.4.0)
   1.4.1

> The VisibilityLabelFilter and AccessControlFilter should not count the 
> "delete cell"
> 
>
> Key: HBASE-16071
> URL: https://issues.apache.org/jira/browse/HBASE-16071
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0, 1.3.2, 1.4.1
>
> Attachments: HBASE-16071-v1.patch, HBASE-16071-v2.patch, 
> HBASE-16071-v3.patch
>
>
> The VisibilityLabelFilter will see and count the "delete cell" if 
> scan.isRaw() returns true, so the (put) cell will be skipped if it has a lower 
> version than the "delete cell".
> The critical code is shown below:
> {code:title=VisibilityLabelFilter.java|borderStyle=solid}
>   public ReturnCode filterKeyValue(Cell cell) throws IOException {
> if (curFamily.getBytes() == null
> || !(CellUtil.matchingFamily(cell, curFamily.getBytes(), 
> curFamily.getOffset(),
> curFamily.getLength()))) {
>   curFamily.set(cell.getFamilyArray(), cell.getFamilyOffset(), 
> cell.getFamilyLength());
>   // For this family, all the columns can have max of 
> curFamilyMaxVersions versions. No need to
>   // consider the older versions for visibility label check.
>   // Ideally this should have been done at a lower layer by HBase (?)
>   curFamilyMaxVersions = cfVsMaxVersions.get(curFamily);
>   // Family is changed. Just unset curQualifier.
>   curQualifier.unset();
> }
> if (curQualifier.getBytes() == null
> || !(CellUtil.matchingQualifier(cell, curQualifier.getBytes(), 
> curQualifier.getOffset(),
> curQualifier.getLength()))) {
>   curQualifier.set(cell.getQualifierArray(), cell.getQualifierOffset(),
>   cell.getQualifierLength());
>   curQualMetVersions = 0;
> }
> curQualMetVersions++;
> if (curQualMetVersions > curFamilyMaxVersions) {
>   return ReturnCode.SKIP;
> }
> return this.expEvaluator.evaluate(cell) ? ReturnCode.INCLUDE : 
> ReturnCode.SKIP;
>   }
> {code}
> [VisibilityLabelFilter.java|https://github.com/apache/hbase/blob/d7a4499dfc8b3936a0eca867589fc2b23b597866/hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelFilter.java]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-13260) Bootstrap Tables for fun and profit

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13260:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Bootstrap Tables for fun and profit 
> 
>
> Key: HBASE-13260
> URL: https://issues.apache.org/jira/browse/HBASE-13260
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.5.0
>
> Attachments: hbase-13260_bench.patch, hbase-13260_prototype.patch
>
>
> Over at the ProcV2 discussions (HBASE-12439) and elsewhere I was mentioning an 
> idea where we may want to use regular old regions to store/persist some data 
> needed for HBase master to operate. 
> We regularly use system tables for storing system data. acl, meta, namespace, 
> quota are some examples. We also store the table state in meta now. Some data 
> is persisted in zk only (replication peers and replication state, etc). We 
> are moving away from zk as a permanent storage. As any self-respecting 
> database does, we should store almost all of our data in HBase itself. 
> However, we have an "availability" dependency between different kinds of 
> data. For example all system tables need meta to be assigned first. All 
> master operations need ns table to be assigned, etc. 
> For at least two types of data, (1) procedure v2 states, (2) RS groups in 
> HBASE-6721 we cannot depend on meta being assigned since "assignment" itself 
> will depend on accessing this data. The solution in (1) is to implement a 
> custom WAL format, and custom recover lease and WAL recovery. The solution in 
> (2) is to have the table store this data, but also cache it in zk for 
> bootstrapping initial assignments. 
> For solving both of the above (and possible future use cases if any), I 
> propose we add a "bootstrap table" concept, which is: 
>  - A set of predefined tables hosted in a separate dir in HDFS. 
>  - A table is only 1 region, not splittable 
>  - Not assigned through regular assignment 
>  - Hosted only on 1 server (typically master)
>  - Has a dedicated WAL. 
>  - A service does WAL recovery + fencing for these tables. 
> This has the benefit of using a region to keep the data, but frees us to 
> re-implement caching and we can use the same WAL / Memstore / Recovery 
> mechanisms that are battle-tested. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-13458) Create/expand unit test to exercise htrace instrumentation

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13458:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Create/expand unit test to exercise htrace instrumentation
> --
>
> Key: HBASE-13458
> URL: https://issues.apache.org/jira/browse/HBASE-13458
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 2.0.0, 1.5.0
>
>
> From HBASE-13078, [~ndimiduk] suggested that we can also add a Medium/Large 
> unit test that does some more assertions over the HTrace instrumentation. 
> Some loose goals:
> * Try to verify that spans continue from HBase into HDFS
> * Ensure that user-created spans continue into HBase
> * Validate expected API calls have expected instrumentation
> Other ideas that people have? Any pain points experienced prior by people?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16239) Better logging for RPC related exceptions

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-16239:

Fix Version/s: (was: 1.4.0)
   1.4.1

> Better logging for RPC related exceptions
> -
>
> Key: HBASE-16239
> URL: https://issues.apache.org/jira/browse/HBASE-16239
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.2, 1.4.1
>
> Attachments: hbase-16239_v1.patch, hbase-16239_v2.patch, 
> hbase-16239_v2.patch, hbase-16239_v2.patch
>
>
> On many occasions, we have to debug RPC related issues, but it is hard in AP 
> + RetryingRpcCaller since we mask the stack traces until all retries have 
> been exhausted (which takes 10 minutes by default).
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14803) Add some debug logs to StoreFileScanner

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14803:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Add some debug logs to StoreFileScanner
> ---
>
> Key: HBASE-14803
> URL: https://issues.apache.org/jira/browse/HBASE-14803
> Project: HBase
>  Issue Type: Bug
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
>  Labels: beginner
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-14803.v0-trunk.patch, HBASE-14803.v1-trunk.patch, 
> HBASE-14803.v2-trunk.patch, HBASE-14803.v3-trunk.patch, 
> HBASE-14803.v4-trunk.patch, HBASE-14803.v4-trunk.patch, 
> HBASE-14803.v5-trunk.patch
>
>
> To validate some behaviors I had to add some logs into StoreFileScanner.
> I think it can be interesting for other people who need to debug, so I am 
> sharing the modifications here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-10944) Remove all kv.getBuffer() and kv.getRow() references existing in the code

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-10944:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Remove all kv.getBuffer() and kv.getRow() references existing in the code
> -
>
> Key: HBASE-10944
> URL: https://issues.apache.org/jira/browse/HBASE-10944
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 1.5.0
>
>
> kv.getRow() and kv.getBuffers() are still used in places to form key byte[] 
> and row byte[].  Removing all such instances including testcases will make 
> the usage of Cell complete.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-13605) RegionStates should not keep its list of dead servers

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13605:

Fix Version/s: (was: 1.4.0)
   1.5.0

> RegionStates should not keep its list of dead servers
> -
>
> Key: HBASE-13605
> URL: https://issues.apache.org/jira/browse/HBASE-13605
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Critical
> Fix For: 2.0.0, 1.5.0
>
> Attachments: hbase-13605_v1.patch, hbase-13605_v3-branch-1.1.patch, 
> hbase-13605_v4-branch-1.1.patch, hbase-13605_v4-master.patch
>
>
> As mentioned in 
> https://issues.apache.org/jira/browse/HBASE-9514?focusedCommentId=13769761=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13769761
>  and HBASE-12844 we should have only 1 source of cluster membership. 
> The list of dead servers and RegionStates doing its own liveness check 
> (ServerManager.isServerReachable()) has caused an assignment problem again in 
> a test cluster where RegionStates "thinks" that the server is dead and 
> SSH will handle the region assignment. However the RS is not dead at all, 
> living happily, and never gets zk expiry or YouAreDeadException or anything. 
> This leaves the list of regions unassigned in OFFLINE state. 
> master assigning the region:
> {code}
> 15-04-20 09:02:25,780 DEBUG [AM.ZK.Worker-pool3-t330] master.RegionStates: 
> Onlined 77dddcd50c22e56bfff133c0e1f9165b on 
> os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268 {ENCODED => 
> 77dddcd50c
> {code}
> Master then disabled the table, and unassigned the region:
> {code}
> 2015-04-20 09:02:27,158 WARN  [ProcedureExecutorThread-1] 
> zookeeper.ZKTableStateManager: Moving table loadtest_d1 state from DISABLING 
> to DISABLING
>  Starting unassign of 
> loadtest_d1,,1429520544378.77dddcd50c22e56bfff133c0e1f9165b. (offlining), 
> current state: {77dddcd50c22e56bfff133c0e1f9165b state=OPEN, 
> ts=1429520545780,   
> server=os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268}
> bleProcedure$BulkDisabler-0] master.AssignmentManager: Sent CLOSE to 
> os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268 for region 
> loadtest_d1,,1429520544378.77dddcd50c22e56bfff133c0e1f9165b.
> 2015-04-20 09:02:27,414 INFO  [AM.ZK.Worker-pool3-t316] master.RegionStates: 
> Offlined 77dddcd50c22e56bfff133c0e1f9165b from 
> os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268
> {code}
> On table re-enable, AM does not assign the region: 
> {code}
> 2015-04-20 09:02:30,415 INFO  [ProcedureExecutorThread-3] 
> balancer.BaseLoadBalancer: Reassigned 25 regions. 25 retained the pre-restart 
> assignment.
> 2015-04-20 09:02:30,415 INFO  [ProcedureExecutorThread-3] 
> procedure.EnableTableProcedure: Bulk assigning 25 region(s) across 5 
> server(s), retainAssignment=true
> l,16000,1429515659726-GeneralBulkAssigner-4] master.RegionStates: Couldn't 
> reach online server 
> os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268
> l,16000,1429515659726-GeneralBulkAssigner-4] master.AssignmentManager: 
> Updating the state to OFFLINE to allow to be reassigned by SSH
> nmentManager: Skip assigning 
> loadtest_d1,,1429520544378.77dddcd50c22e56bfff133c0e1f9165b., it is on a dead 
> but not processed yet server: 
> os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16102) Procedure v2 - Print Procedure Graphs

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-16102:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Procedure v2 - Print Procedure Graphs
> -
>
> Key: HBASE-16102
> URL: https://issues.apache.org/jira/browse/HBASE-16102
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, tooling
>Reporter: Appy
>Assignee: Balazs Meszaros
> Fix For: 2.0.0, 1.5.0
>
>
> Print trees to better visualize hierarchy of procedures when we have child 
> procedures.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15779) Examples should use programmatic keytab auth

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15779:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Examples should use programmatic keytab auth
> 
>
> Key: HBASE-15779
> URL: https://issues.apache.org/jira/browse/HBASE-15779
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 2.0.0, 1.5.0
>
>
> our examples should include programmatic keytab-based access for secure hbase 
> clusters, since most folks who look at them for building long-lived services 
> will need to take that approach rather than an external kinit-and-refresh 
> process.
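> A minimal sketch of the kind of example this asks for (the principal, keytab path and 
> table name below are placeholders, not project-provided values):
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
> import org.apache.hadoop.hbase.client.Table;
> import org.apache.hadoop.security.UserGroupInformation;
> 
> public class KeytabLoginExample {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = HBaseConfiguration.create();
>     UserGroupInformation.setConfiguration(conf);
>     // Log in from a keytab in code instead of relying on an external kinit.
>     UserGroupInformation.loginUserFromKeytab(
>         "appuser/host@EXAMPLE.COM", "/etc/security/keytabs/appuser.keytab");
> 
>     try (Connection connection = ConnectionFactory.createConnection(conf);
>          Table table = connection.getTable(TableName.valueOf("mytable"))) {
>       // use the table; a long-lived service should also arrange periodic
>       // re-login from the keytab so the ticket does not expire
>     }
>   }
> }
> {code}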



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-10974) Improve DBEs read performance by avoiding byte array deep copies for key[] and value[]

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-10974:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Improve DBEs read performance by avoiding byte array deep copies for key[] 
> and value[]
> --
>
> Key: HBASE-10974
> URL: https://issues.apache.org/jira/browse/HBASE-10974
> Project: HBase
>  Issue Type: Improvement
>  Components: Scanners
>Affects Versions: 0.99.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-10974_1.patch
>
>
> As part of HBASE-10801, we tried to reduce the copying of the value[] when 
> forming the KV from the DBEs. 
> The keys still required copying, and this restricted us from using Cells since 
> a copy always had to be done.
> The idea here is to replace the key byte[] with a ByteBuffer and create a 
> consecutive stream of the keys (currently the same byte[] is used and hence 
> the copy).  Use offset and length to track this key bytebuffer.
> The copy of the encoded format to normal Key format is definitely needed and 
> can't be avoided but we could always avoid the deep copy of the bytes to form 
> a KV and thus use cells effectively. Working on a patch, will post it soon.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-7115) [shell] Provide a way to register custom filters with the Filter Language Parser

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-7115:
---
Fix Version/s: (was: 1.4.0)
   1.5.0

> [shell] Provide a way to register custom filters with the Filter Language 
> Parser
> 
>
> Key: HBASE-7115
> URL: https://issues.apache.org/jira/browse/HBASE-7115
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters, shell
>Affects Versions: 0.95.2
>Reporter: Aditya Kishore
>Assignee: Aditya Kishore
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-7115_trunk.patch, HBASE-7115_trunk.patch, 
> HBASE-7115_trunk_v2.patch
>
>
> HBASE-5428 added this capability to thrift interface but the configuration 
> parameter name is "thrift" specific.
> This patch introduces a more generic parameter "hbase.user.filters" using 
> which the user defined custom filters can be specified in the configuration 
> and loaded in any client that needs to use the filter language parser.
> The patch then uses this new parameter to register any user specified filters 
> while invoking the HBase shell.
> Example usage: Let's say I have written a couple of custom filters with class 
> names *{{org.apache.hadoop.hbase.filter.custom.SuperDuperFilter}}* and 
> *{{org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}* and I want to 
> use them from HBase shell using the filter language.
> To do that, I would add the following configuration to {{hbase-site.xml}}
> {panel}{{}}
> {{  hbase.user.filters}}
> {{  
> }}*{{SuperDuperFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SuperDuperFilter,}}*{{SilverBulletFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}
> {{}}{panel}
> Once this is configured, I can launch HBase shell and use these filters in my 
> {{get}} or {{scan}} just the way I would use a built-in filter.
> {code}
> hbase(main):001:0> scan 't', {FILTER => "SuperDuperFilter(true) AND 
> SilverBulletFilter(42)"}
> ROW  COLUMN+CELL
>  status  column=cf:a, 
> timestamp=30438552, value=world_peace
> 1 row(s) in 0. seconds
> {code}
> To use this feature in any client, the client needs to make the following 
> function call as part of its initialization.
> {code}
> ParseFilter.registerUserFilters(configuration);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15654) Optimize client's MetaCache handling

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15654:

Fix Version/s: (was: 1.4.0)
   1.5.0
   2.0.0

> Optimize client's MetaCache handling
> 
>
> Key: HBASE-15654
> URL: https://issues.apache.org/jira/browse/HBASE-15654
> Project: HBase
>  Issue Type: Umbrella
>  Components: Client
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 2.0.0, 1.5.0
>
>
> This is an umbrella jira to track all individual issues, bugfixes and small 
> optimizations around MetaCache (region locations cache) in the client. 
> Motivation is that under load one could see spikes in the number of 
> requests going to meta - reaching tens of thousands of requests per second.
> That covers issues where we clear entries from the location cache unnecessarily, as 
> well as cases where we do more lookups than necessary when entries are legitimately 
> evicted.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16594) ROW_INDEX_V2 DBE

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-16594:

Fix Version/s: (was: 1.4.0)
   1.5.0

> ROW_INDEX_V2 DBE
> 
>
> Key: HBASE-16594
> URL: https://issues.apache.org/jira/browse/HBASE-16594
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-16594-master_v1.patch, HBASE-16594-master_v2.patch
>
>
> See HBASE-16213, ROW_INDEX_V1 DataBlockEncoding.
> ROW_INDEX_V1 is the first version and has no storage optimization; 
> ROW_INDEX_V2 adds storage optimization: store every row only once, and store the column 
> family only once in an HFileBlock.
> ROW_INDEX_V1 is : 
> /** 
>  * Store cells following every row's start offset, so we can binary search to 
> a row's cells. 
>  * 
>  * Format: 
>  * flat cells 
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * integer: dataSize 
>  * 
> */
> ROW_INDEX_V2 is :
>  * row1 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  * row2 qualifier timestamp type value tag
>  * row3 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * column family
>  * integer: dataSize 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14008) REST - Throw an appropriate error during schema POST

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14008:

Fix Version/s: (was: 1.4.0)
   1.5.0

> REST - Throw an appropriate error during schema POST
> 
>
> Key: HBASE-14008
> URL: https://issues.apache.org/jira/browse/HBASE-14008
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Affects Versions: 0.98.13, 1.1.1
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
>Priority: Minor
>  Labels: REST
> Fix For: 2.0.0, 1.5.0
>
> Attachments: 14008.patch, HBASE-14008.patch
>
>
> When an update is done on the schema through REST and an error occurs, the 
> actual reason is not thrown back to the client. Right now we get a 
> "javax.ws.rs.WebApplicationException" instead of the actual error message.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15677) FailedServerException shouldn't clear MetaCache

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15677:

Fix Version/s: (was: 1.4.0)
   1.5.0
   2.0.0

> FailedServerException shouldn't clear MetaCache
> ---
>
> Key: HBASE-15677
> URL: https://issues.apache.org/jira/browse/HBASE-15677
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 2.0.0, 1.5.0
>
>
> Right now FailedServerException clears meta cache. Seems like it's 
> unnecessary (if we hit that, someone has already gotten some network/remote 
> error in the first place and invalidated the location cache for us), and it seems it 
> could lead to unnecessary drops, as FailedServers cache has default TTL of 2 
> seconds, so we can encounter situation like this:
>  - thread T1 hit network error and cleared the cache, put server in failed 
> server list
>  - thread T2 tries to get it's request in and gets FailedServerException
>  - thread T1 does meta scan to populate the cache
>  - thread T2 clears the cache after it's got FSE.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-12269) Add support for Scan.setRowPrefixFilter to thrift

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12269:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Add support for Scan.setRowPrefixFilter to thrift
> -
>
> Key: HBASE-12269
> URL: https://issues.apache.org/jira/browse/HBASE-12269
> Project: HBase
>  Issue Type: New Feature
>  Components: Thrift
>Reporter: Niels Basjes
>Assignee: Niels Basjes
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-12269-2014-10-15-v1.patch, 
> HBASE-12269-2014-10-16-v2-INCOMPLETE.patch, HBASE-12269-2014-12-05-v2.patch
>
>
> I think having the feature introduced in HBASE-11990 in the hbase thrift 
> interface would be very useful.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16141) Unwind use of UserGroupInformation.doAs() to convey requester identity in coprocessor upcalls

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-16141:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Unwind use of UserGroupInformation.doAs() to convey requester identity in 
> coprocessor upcalls
> -
>
> Key: HBASE-16141
> URL: https://issues.apache.org/jira/browse/HBASE-16141
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors, security
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Fix For: 2.0.0, 1.5.0
>
>
> In HBASE-16115 there is some discussion of whether 
> UserGroupInformation.doAs() is the right mechanism for propagating the 
> original requester's identity in certain system contexts (splits, 
> compactions, some procedure calls).  It has the unfortunate effect of overriding 
> the current user, which makes for very confusing semantics for coprocessor 
> implementors.  We should instead find an alternate mechanism for conveying 
> the caller identity, which does not override the current user context.
> I think we should instead look at passing this through as part of the 
> ObserverContext passed to every coprocessor hook.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16469) Several log refactoring/improvement suggestions

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-16469:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Several log refactoring/improvement suggestions
> ---
>
> Key: HBASE-16469
> URL: https://issues.apache.org/jira/browse/HBASE-16469
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability
>Affects Versions: 1.2.5
>Reporter: Nemo Chen
>Assignee: Nemo Chen
>  Labels: easyfix, easytest
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-16469.master.001.patch
>
>
> *method invocation replaced by variable*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/example/LongTermArchivingHFileCleaner.java
> line 57: {code}Path file = fStat.getPath();{code}
> line 74: {code}LOG.error("Failed to lookup status of:" + fStat.getPath() + ", 
> keeping it just incase.", e); {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java
> line 118: {code}String name = regionInfo.getRegionNameAsString();{code}
> line 142: {code}LOG.warn("Can't close region: was already closed during 
> close(): " +
> regionInfo.getRegionNameAsString()); {code}
> In the above two examples, the method invocations are assigned to 
> variables before the logging code. The method invocations in the log statements should be 
> replaced by those existing variables for simplicity and readability, as shown below.
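> For illustration, the first log statement rewritten the suggested way, reusing the 
> already assigned variable:
> {code}
> LOG.error("Failed to lookup status of:" + file + ", keeping it just incase.", e);
> {code}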
> 
> *method invocation in return statement*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
> line 5455:
> {code}
> public String toString() {
> return getRegionInfo().getRegionNameAsString();
>   }
> {code}
> line 1260:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it is closing or closed");
> {code}
> line 1265:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it has references");
> {code}
> line 1413:
> {code} 
> LOG.info("Running close preflush of " + 
> getRegionInfo().getRegionNameAsString());
> {code}
> In the above examples, "getRegionInfo().getRegionNameAsString()" is exactly 
> what the toString() method of the same class returns. These calls should be 
> replaced with "this" for simplicity and readability.
> 
> *check the logged variable if it is null*
> hbase-1.2.2/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
> line 88: 
> {code}
> if ((sshUserName != null && sshUserName.length() > 0) ||
> (sshOptions != null && sshOptions.length() > 0)) {
>   LOG.info("Running with SSH user [" + sshUserName + "] and options [" + 
> sshOptions + "]");
> }
> {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
> line 980:
> {code}
> if ((regionState == null && latestState != null)
>   || (regionState != null && latestState == null)
>   || (regionState != null && latestState != null
> && latestState.getState() != regionState.getState())) {
> LOG.warn("Region state changed from " + regionState + " to "
>   + latestState + ", while acquiring lock");
>   }
> {code}
> In the above examples, the logged variable could be null at run time. It is 
> bad practice to include possibly-null variables inside logs without a check.
> 
> *variable in byte printed directly*
> hbase-1.2.2/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedUpdater.java
> line 145: 
> {code}
> byte[] rowKey = dataGenerator.getDeterministicUniqueKey(rowKeyBase);
> {code}
> line 184:
> {code}
> LOG.error("Failed to update the row with key = [" + rowKey
>   + "], since we could not get the original row");
> {code}
> rowKey should be printed as Bytes.toString(rowKey).
>  
> *object toString contains the method invocation*
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
> {code}
> LOG.warn("#" + id + ", the task was rejected by the pool. This is 
> unexpected."+ " Server is "+ server.getServerName(),t);
> {code}
> server is an instance of class ServerName, we found ServerName.java:
> hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java
> {code}
>   @Override
>   public String toString() {
> return getServerName();
>   }
> {code}
> the toString method returns getServerName(), so "server.getServerName()" 
> should be replaced with "server" for simplicity and readability.
> Similar examples are in:
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java
> {code}
> LOG.info("Clearing out PFFE for server " + server.getServerName());
> return getServerName();
> {code}
> 

[jira] [Updated] (HBASE-15648) Reduce number of concurrent region location lookups when MetaCache entry is cleared

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15648:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Reduce number of concurrent region location lookups when MetaCache entry is 
> cleared
> ---
>
> Key: HBASE-15648
> URL: https://issues.apache.org/jira/browse/HBASE-15648
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-15648-branch-1.3.v1.patch
>
>
> It seems that in HConnectionImplementation#locateRegionInMeta, if a region location 
> is removed from the cache, then with a large number of client threads many of 
> them can get a cache miss and each do their own meta scan, which looks unnecessary 
> - we could employ a mechanism similar to the IdLock that HFileReader uses when 
> fetching a block into the cache, to ensure that if one thread is already looking 
> up the location for region R1, other threads that need its location wait until 
> the first thread finishes its work.
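> One generic shape such a mechanism could take (a standalone sketch for illustration, not 
> the attached patch and not an HBase API): collapse concurrent lookups for the same key so 
> that only the first caller scans meta and the rest wait for its result.
> {code}
> import java.util.concurrent.Callable;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.ConcurrentMap;
> import java.util.concurrent.ExecutionException;
> import java.util.concurrent.FutureTask;
> 
> public class SingleFlightLookup<K, V> {
>   private final ConcurrentMap<K, FutureTask<V>> inFlight = new ConcurrentHashMap<K, FutureTask<V>>();
> 
>   public V lookup(K key, Callable<V> loader) throws InterruptedException, ExecutionException {
>     FutureTask<V> task = new FutureTask<V>(loader);
>     FutureTask<V> existing = inFlight.putIfAbsent(key, task);
>     if (existing == null) {
>       try {
>         task.run();                 // this thread performs the single meta lookup
>         return task.get();
>       } finally {
>         inFlight.remove(key, task); // let later misses trigger a fresh lookup
>       }
>     }
>     return existing.get();          // other threads wait for the in-flight lookup
>   }
> }
> {code}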



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14187) Add Thrift 1 RPC to batch gets in a single call

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14187:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Add Thrift 1 RPC to batch gets in a single call
> ---
>
> Key: HBASE-14187
> URL: https://issues.apache.org/jira/browse/HBASE-14187
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 2.0.0, 1.5.0
>
>
> add a method to pull a set of columns from a set of non-contiguous rows in a 
> single RPC call.
> e.g.
> {code}
>/**
>  * Parallel get. For a given table and column, return for
>  * the given rows.
>  *
>  * @param tableName table to get from
>  * @param column column to get
>  * @param rows a list of rows to get
>  * @result list of TRowResult for each item
>  */
> list<TRowResult> parallelGet(1:Text tableName,
>  2:Text column,
>  3:list<Text> rows)
>  throws (1:IOError io)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16766) Do not rely on InputStream.available()

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-16766:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Do not rely on InputStream.available() 
> ---
>
> Key: HBASE-16766
> URL: https://issues.apache.org/jira/browse/HBASE-16766
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.5.0
>
> Attachments: hbase-16766_v1.patch
>
>
> ProtobufLogReader relies on InputStream.available() to figure out whether we 
> have exhausted the file. However InputStream.available() javadoc states: 
> {code}
>  *  Note that while some implementations of {@code InputStream} will 
> return
>  * the total number of bytes in the stream, many will not.  It is
>  * never correct to use the return value of this method to allocate
>  * a buffer intended to hold all data in this stream.
> {code}
> HDFS and many other Hadoop FS's, and things like ByteBufferInputStream, etc 
> all return remaining bytes, so the code works on top of HDFS. However, on 
> other file systems, it may or may not be true that IS.available() returns the 
> remaining bytes. In one specific case, the ADLS wrapper FS in use did not implement the 
> {{available()}} call with the correct semantics, which ended up causing data 
> loss in the WAL recovery. We have since fixed ADLS to implement the HDFS 
> semantics, but we should fix HBase itself so that we do not rely on the 
> available() call. 
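> A sketch of the direction implied here (illustration only, not the attached patch): judge 
> the remaining bytes from the length reported by the FileSystem rather than from available(). 
> Note that for a file still being written the reported length may lag, so this only 
> outlines the idea.
> {code}
> import java.io.IOException;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> 
> public class WalLengthCheck {
>   // Remaining bytes judged from file metadata, independent of the
>   // InputStream.available() semantics of the underlying FileSystem.
>   static long remainingBytes(FileSystem fs, Path walPath, long currentOffset) throws IOException {
>     FileStatus status = fs.getFileStatus(walPath);
>     return status.getLen() - currentOffset;
>   }
> }
> {code}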



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-12217) System load average based client pushback

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12217:

Fix Version/s: (was: 1.4.0)
   1.5.0

> System load average based client pushback
> -
>
> Key: HBASE-12217
> URL: https://issues.apache.org/jira/browse/HBASE-12217
> Project: HBase
>  Issue Type: New Feature
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 1.5.0
>
>
> If a RegionServer host is already heavily loaded* then it might not be best 
> to accept more work in the form of coprocessor invocations. This could 
> generalize to all RPC work, perhaps as part of a broader admission control 
> initiative, but I think it makes sense to start small in an obvious place.
> *: We could use % CPU utilization or the UNIX 1min or 5min load average to 
> determine this, and provide an option for choosing between those 
> alternatives. 
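> For reference, a small standalone probe for the UNIX load-average signal mentioned above 
> (the class name is made up):
> {code}
> import java.lang.management.ManagementFactory;
> import java.lang.management.OperatingSystemMXBean;
> 
> public class LoadAverageProbe {
>   public static void main(String[] args) {
>     OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
>     double oneMinuteLoad = os.getSystemLoadAverage(); // negative if the platform does not report it
>     int cpus = os.getAvailableProcessors();
>     System.out.println("1min load=" + oneMinuteLoad + ", per-cpu=" + (oneMinuteLoad / cpus));
>   }
> }
> {code}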



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15833) Make deadline (CoDel) RPC scheduler the default one

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15833:

Fix Version/s: (was: 1.4.0)
   1.5.0
   2.0.0

> Make deadline (CoDel) RPC scheduler the default one
> ---
>
> Key: HBASE-15833
> URL: https://issues.apache.org/jira/browse/HBASE-15833
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Affects Versions: 1.4.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 2.0.0, 1.5.0
>
>
> I've given it some testing on real clusters, but would like to get some more.
> That probably goes back to the discussion thread the other day on AsyncWAL - 
> do we want to make new features the default choice in the first release they 
> appear in, or wait until early adopters test them out themselves.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17559) Verify service threads are using the UncaughtExceptionHandler

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-17559:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Verify service threads are using the UncaughtExceptionHandler
> -
>
> Key: HBASE-17559
> URL: https://issues.apache.org/jira/browse/HBASE-17559
> Project: HBase
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0, 1.5.0
>
>
> (Context: 
> https://lists.apache.org/thread.html/47c9a0f7193eaf0546ce241cfe093885366f5177ed867e18b45d77b9@%3Cdev.hbase.apache.org%3E)
> We should take a once-over on the threads we start in the RegionServer/Master 
> to make sure that they're using the UncaughtExceptionHandler to prevent the 
> case where the process keeps running but one of our threads has died.
> Such a situation may result in operators not realizing that their HBase 
> instance is not actually running as expected.
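> The pattern being asked for, as a minimal standalone sketch (the thread name and handler 
> body are placeholders):
> {code}
> public class ServiceThreadExample {
>   public static void main(String[] args) {
>     Thread service = new Thread(new Runnable() {
>       @Override
>       public void run() {
>         throw new RuntimeException("boom"); // simulated unexpected death
>       }
>     }, "ExampleServiceThread");
> 
>     // Without this handler the thread dies silently and the process keeps running.
>     service.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
>       @Override
>       public void uncaughtException(Thread t, Throwable e) {
>         System.err.println("Thread " + t.getName() + " died unexpectedly: " + e);
>         // a real service would log this and likely abort the process
>       }
>     });
> 
>     service.start();
>   }
> }
> {code}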



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14938) Limit the number of znodes for ZK in bulk loaded hfile replication

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14938:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Limit the number of znodes for ZK in bulk loaded hfile replication
> --
>
> Key: HBASE-14938
> URL: https://issues.apache.org/jira/browse/HBASE-14938
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-14938(1).patch, HBASE-14938.patch, 
> HBASE-14938-v1.patch, HBASE-14938-v2(1).patch, HBASE-14938-v2.patch
>
>
> In ZK the maximum allowable size of the data array is 1 MB. Until we have 
> fixed HBASE-10295 we need to handle this.
> Approach to this problem will be discussed in the comments section.
> Note: We have done internal testing with more than 3k nodes in ZK yet to be 
> replicated.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16218) Eliminate use of UGI.doAs() in AccessController testing

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-16218:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Eliminate use of UGI.doAs() in AccessController testing
> ---
>
> Key: HBASE-16218
> URL: https://issues.apache.org/jira/browse/HBASE-16218
> Project: HBase
>  Issue Type: Sub-task
>  Components: security
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Fix For: 2.0.0, 1.5.0
>
>
> Many tests for AccessController observer coprocessor hooks make use of 
> UGI.doAs() when the test user could simply be passed through.  Eliminate the 
> unnecessary use of doAs().



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-13147) Load actual META table descriptor, don't use statically defined one.

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13147:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Load actual META table descriptor, don't use statically defined one.
> 
>
> Key: HBASE-13147
> URL: https://issues.apache.org/jira/browse/HBASE-13147
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-13147-branch-1.patch, 
> HBASE-13147-branch-1.v2.patch, HBASE-13147.patch, HBASE-13147.v2.patch, 
> HBASE-13147.v3.patch, HBASE-13147.v4.patch, HBASE-13147.v4.patch, 
> HBASE-13147.v5.patch, HBASE-13147.v6.patch, HBASE-13147.v7.patch
>
>
> In HBASE-13087 we stumbled on the fact that region servers don't see the actual 
> meta descriptor; they use their own, statically compiled one.
> Need to fix that.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15728) Add remaining per-table region / store / flush / compaction related metrics

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15728:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Add remaining per-table region / store / flush / compaction related metrics 
> 
>
> Key: HBASE-15728
> URL: https://issues.apache.org/jira/browse/HBASE-15728
> Project: HBase
>  Issue Type: Sub-task
>  Components: metrics
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.5.0
>
> Attachments: hbase-15728_v1.patch
>
>
> Continuing on the work for per-table metrics, HBASE-15518 and HBASE-15671. 
> We need to add some remaining metrics at the per-table level, so that we will 
> have the same metrics reported at the per-regionserver, per-region and 
> per-table levels. 
> After this patch, most of the metrics at the RS and all of the per-region 
> level are also reported at the per-table level. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17319) Truncate table with preserve after split may cause truncation to fail

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-17319:

Fix Version/s: (was: 1.4.0)
   1.5.0
   2.0.0

> Truncate table with preserve after split may cause truncation to fail
> -
>
> Key: HBASE-17319
> URL: https://issues.apache.org/jira/browse/HBASE-17319
> Project: HBase
>  Issue Type: Bug
>  Components: Admin
>Affects Versions: 1.1.7, 1.2.4
>Reporter: Allan Yang
>Assignee: Allan Yang
> Fix For: 2.0.0, 1.5.0
>
> Attachments: 17319.stack, HBASE-17319-branch-1.patch, 
> HBASE-17319.patch, TestTruncateTableProcedure-output.tar.gz
>
>
> In TruncateTableProcedure, when getting the table's regions from meta to recreate 
> new regions, split parents are not excluded, so the new regions can end up 
> with the same start key, and the same region dir:
> {noformat}
> 2016-12-14 20:15:22,231 WARN  [RegionOpenAndInitThread-writetest-1] 
> regionserver.HRegionFileSystem: Trying to create a region that already exists 
> on disk: 
> hdfs://hbasedev1/zhengyan-hbase11-func2/.tmp/data/default/writetest/9b2c8d1539cd92661703ceb8a4d518a1
> {noformat} 
> TruncateTableProcedure will retry forever and never succeed.
> An attached unit test demonstrates the problem.
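
A minimal sketch of the fix direction, not the attached patch: when rebuilding the region list from meta, skip split parents (and offline regions) so that only live regions are recreated. The helper name filterForRecreate is made up for illustration.

{noformat}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.HRegionInfo;

public class TruncatePreserveSplitsSketch {
  // Keep only the regions a truncate-with-preserve-splits should recreate:
  // split parents and offline regions are skipped, otherwise two new regions
  // would end up sharing a start key and a region dir.
  public static List<HRegionInfo> filterForRecreate(List<HRegionInfo> regionsFromMeta) {
    List<HRegionInfo> toRecreate = new ArrayList<>();
    for (HRegionInfo hri : regionsFromMeta) {
      if (hri.isSplitParent() || hri.isOffline()) {
        continue; // the daughter regions already cover this key range
      }
      toRecreate.add(hri);
    }
    return toRecreate;
  }
}
{noformat}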



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17762) Add logging to HBaseAdmin for user initiated tasks

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-17762:

Fix Version/s: (was: 0.98.25)
   (was: 1.4.0)
   1.5.0

> Add logging to HBaseAdmin for user initiated tasks
> --
>
> Key: HBASE-17762
> URL: https://issues.apache.org/jira/browse/HBASE-17762
> Project: HBase
>  Issue Type: Task
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-17762.patch, HBASE-17762.v1.patch
>
>
> Being able to audit things like a forced major compaction is really useful, and 
> right now nothing is logged when one is triggered.  Other admin actions may 
> require logging as well. 
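
A hedged sketch of the kind of logging being asked for; the real change would live inside HBaseAdmin itself, while AuditedAdmin below is only a hypothetical wrapper around the public Admin interface.

{noformat}
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class AuditedAdmin {
  private static final Log LOG = LogFactory.getLog(AuditedAdmin.class);
  private final Admin admin;

  public AuditedAdmin(Admin admin) {
    this.admin = admin;
  }

  // Log the user-initiated task before delegating, so a forced major
  // compaction leaves a trace in the admin-side log.
  public void majorCompact(TableName table) throws IOException {
    LOG.info("User-initiated major compaction requested for table=" + table);
    admin.majorCompact(table);
  }
}
{noformat}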



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14160) backport hbase-spark module to branch-1

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14160:

Fix Version/s: (was: 1.4.0)
   1.5.0
   2.0.0

> backport hbase-spark module to branch-1
> ---
>
> Key: HBASE-14160
> URL: https://issues.apache.org/jira/browse/HBASE-14160
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 1.3.0
>Reporter: Sean Busbey
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.5.0
>
> Attachments: 14160.branch-1.v1.txt
>
>
> Once the hbase-spark module gets polished a bit, we should backport it to 
> branch-1 so we can publish it sooner.
> Needs (per previous discussion):
> * examples refactored into a different module
> * user-facing documentation
> * a defined public API



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15809) Basic Replication WebUI

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-15809:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Basic Replication WebUI
> ---
>
> Key: HBASE-15809
> URL: https://issues.apache.org/jira/browse/HBASE-15809
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication, UI
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Appy
>Priority: Minor
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-15809-v0.patch, HBASE-15809-v0.png, 
> HBASE-15809-v1.patch
>
>
> At the moment the only way to get some insight into replication from the web UI 
> is to look at zkdump and metrics.
> The basic information useful to get started debugging is: peer information 
> and a view of the WAL offsets for each peer.
> https://reviews.apache.org/r/47275/
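
For the peer-information half, the data such a UI would surface is already reachable from the client API; below is a minimal sketch using ReplicationAdmin (the WAL offsets, by contrast, currently have to be read from ZooKeeper).

{noformat}
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.replication.ReplicationAdmin;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

public class ListPeersSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Dump the configured replication peers, i.e. the "peer information"
    // a replication page in the web UI would show.
    try (ReplicationAdmin replicationAdmin = new ReplicationAdmin(conf)) {
      Map<String, ReplicationPeerConfig> peers = replicationAdmin.listPeerConfigs();
      for (Map.Entry<String, ReplicationPeerConfig> e : peers.entrySet()) {
        System.out.println("peer=" + e.getKey()
            + " clusterKey=" + e.getValue().getClusterKey());
      }
    }
  }
}
{noformat}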



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-6618) Implement FuzzyRowFilter with ranges support

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-6618:
---
Fix Version/s: (was: 1.4.0)
   1.5.0

> Implement FuzzyRowFilter with ranges support
> 
>
> Key: HBASE-6618
> URL: https://issues.apache.org/jira/browse/HBASE-6618
> Project: HBase
>  Issue Type: New Feature
>  Components: Filters
>Reporter: Alex Baranau
>Assignee: Alex Baranau
>Priority: Minor
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-6618_2.path, HBASE-6618_3.path, 
> HBASE-6618_4.patch, HBASE-6618_5.patch, HBASE-6618-algo-desc-bits.png, 
> HBASE-6618-algo.patch, HBASE-6618.patch
>
>
> Apart from the current ability to specify a fuzzy row filter, e.g. for a row key 
> format ending in _0004 (where 0004 is the actionId), it would be 
> great to also have the ability to specify a "fuzzy range", e.g. _0004, 
> ..., _0099.
> See initial discussion here: http://search-hadoop.com/m/WVLJdX0Z65
> Note: currently it is possible to provide multiple fuzzy row rules to the 
> existing FuzzyRowFilter, but when the range is big (contains 
> thousands of values) this is not efficient.
> The filter should perform efficient fast-forwarding during the scan (this is what 
> distinguishes it from a regex row filter).
> While such functionality may seem like a proper fit for a custom filter (i.e. 
> not included in the standard filter set), it looks like the filter may be very 
> reusable. We can judge based on the implementation that will hopefully be 
> added.
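
A minimal sketch of how FuzzyRowFilter is used today, assuming a 9-byte row key whose first four bytes vary and whose "_0004" tail is fixed (the key layout here is made up for illustration). Covering a whole range such as _0004 ... _0099 currently means adding one Pair per value, which is the inefficiency this issue wants to remove.

{noformat}
import java.util.Arrays;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FuzzyRowFilter;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

public class FuzzyScanSketch {
  public static Scan buildFuzzyScan() {
    // Template row key: the first 4 bytes are placeholders, the "_0004" tail is fixed.
    byte[] rowKeyTemplate = Bytes.toBytes("????_0004");
    // Mask convention of FuzzyRowFilter: 0 = byte must match, 1 = byte may vary.
    byte[] fuzzyMask = new byte[] {1, 1, 1, 1, 0, 0, 0, 0, 0};

    FuzzyRowFilter filter =
        new FuzzyRowFilter(Arrays.asList(new Pair<>(rowKeyTemplate, fuzzyMask)));

    Scan scan = new Scan();
    scan.setFilter(filter);
    return scan;
  }
}
{noformat}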



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-12807) Implement 0.89fb-style "roll on slow sync"

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12807:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Implement 0.89fb-style "roll on slow sync"
> --
>
> Key: HBASE-12807
> URL: https://issues.apache.org/jira/browse/HBASE-12807
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 2.0.0, 1.5.0
>
>
> As a first step towards bringing down wal latency, implement the version of 
> slow sync rolling done in the 0.89-fb branch.
> That is, instead of interrupting a slow sync operation, just request a log 
> roll whenever we detect one.
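
A minimal, self-contained sketch of the "roll on slow sync" idea; the threshold, the RollRequester interface and the method names are assumptions for illustration, not HBase APIs.

{noformat}
public class SlowSyncRollSketch {
  // Illustrative threshold; in a real implementation this would be configurable.
  private static final long SLOW_SYNC_THRESHOLD_MS = 10_000L;

  // Stand-in for whatever component can ask the WAL to roll.
  interface RollRequester {
    void requestLogRoll();
  }

  // Instead of interrupting a slow sync, let it finish, measure how long it
  // took, and request a log roll if it was slower than the threshold.
  static void syncAndMaybeRoll(Runnable doSync, RollRequester roller) {
    long startNs = System.nanoTime();
    doSync.run();
    long elapsedMs = (System.nanoTime() - startNs) / 1_000_000L;
    if (elapsedMs > SLOW_SYNC_THRESHOLD_MS) {
      roller.requestLogRoll();
    }
  }
}
{noformat}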



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-11290) Unlock RegionStates

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-11290:

Fix Version/s: (was: 0.98.25)
   (was: 1.4.0)
   1.5.0

> Unlock RegionStates
> ---
>
> Key: HBASE-11290
> URL: https://issues.apache.org/jira/browse/HBASE-11290
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Francis Liu
>Assignee: Francis Liu
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-11290.01.branch-1.patch, HBASE-11290.02.patch, 
> HBASE-11290.03.patch, HBASE-11290-0.98.patch, HBASE-11290-0.98_v2.patch, 
> HBASE-11290.draft.patch, HBASE-11290_trunk.patch
>
>
> RegionStates is a highly accessed data structure in HMaster, yet most 
> of its methods are synchronized, which limits concurrency. Even simply 
> making some of the getters non-synchronized by using concurrent data 
> structures has helped with region assignments. We can go with something as 
> simple as that, or create locks per region, or a bucket lock per region bucket.
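
A minimal sketch of the simplest variant described above, using a concurrent map so that read-mostly getters no longer need the object monitor; the String-keyed map merely stands in for the real RegionStates internals.

{noformat}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class RegionStatesSketch {
  // A concurrent map lets readers proceed without taking the object monitor.
  private final ConcurrentMap<String, String> stateByRegion = new ConcurrentHashMap<>();

  // Getter is no longer synchronized; ConcurrentHashMap guarantees a safe read.
  public String getRegionState(String encodedRegionName) {
    return stateByRegion.get(encodedRegionName);
  }

  // Writers can stay synchronized (or move to per-region / bucket locks later)
  // without blocking the read path above.
  public synchronized void setRegionState(String encodedRegionName, String state) {
    stateByRegion.put(encodedRegionName, state);
  }
}
{noformat}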



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14927) Backport HBASE-13014 and HBASE-14749 to branch-1

2017-04-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14927:

Fix Version/s: (was: 1.4.0)
   1.5.0
   2.0.0

> Backport HBASE-13014 and HBASE-14749 to branch-1
> 
>
> Key: HBASE-14927
> URL: https://issues.apache.org/jira/browse/HBASE-14927
> Project: HBase
>  Issue Type: Improvement
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 2.0.0, 1.5.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

