[jira] [Commented] (HBASE-20213) [LOGGING] Aligning formatting and logging less (compactions, in-memory compactions)

2018-03-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401478#comment-16401478
 ] 

ramkrishna.s.vasudevan commented on HBASE-20213:


+1 on these changes. Since you are testing this now, you know best how these 
changes improve the logging.

> [LOGGING] Aligning formatting and logging less (compactions, in-memory 
> compactions)
> ---
>
> Key: HBASE-20213
> URL: https://issues.apache.org/jira/browse/HBASE-20213
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20213.branch-2.001.patch, 
> HBASE-20213.branch-2.002.patch, HBASE-20213.branch-2.003.patch
>
>
> Here is some logging cleanup that came of a study session this afternoon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20211) ReadOnlyBufferException In UnsafeAccess

2018-03-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401474#comment-16401474
 ] 

ramkrishna.s.vasudevan commented on HBASE-20211:


Thanks for the analysis. I just saw the parent JIRA linked to this. So the 
intention is to handle the ReadOnlyBuffer case, is it? Currently we don't have 
such a use in our code, which is one reason it was done this way. If you view 
UnsafeAccess as a standalone util class, your suggestion makes sense.
bq. Which is almost exactly what the ByteBuffer relative bulk get method does 
anyway, so there is no savings here, just overhead and complexity.
One important difference is that a relative get() could change the buffer's 
position; here we don't do that.
The other thing is that we found the ByteBuffer code has some 'prechecks' which 
we thought were unnecessary because we are in control of these buffers.
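
For illustration only, a minimal JDK-only sketch of the position point above 
(this is not HBase code): a relative bulk get() advances the source buffer's 
position, while copying through a duplicate, as the non-unsafe fallback path in 
copyFromBufferToArray does, leaves the original buffer untouched.
{code:java}
import java.nio.ByteBuffer;

public class PositionDemo {
  public static void main(String[] args) {
    ByteBuffer src = ByteBuffer.wrap(new byte[] { 1, 2, 3, 4 });
    byte[] out = new byte[4];

    // A relative bulk get on the buffer itself advances its position.
    src.get(out, 0, 4);
    System.out.println(src.position()); // 4

    // Copying via a duplicate leaves the original position untouched.
    src.rewind();
    ByteBuffer dup = src.duplicate();
    dup.position(0);
    dup.get(out, 0, 4);
    System.out.println(src.position()); // 0
  }
}
{code}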

> ReadOnlyBufferException In UnsafeAccess
> ---
>
> Key: HBASE-20211
> URL: https://issues.apache.org/jira/browse/HBASE-20211
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 2.0.0
>Reporter: BELUGA BEHR
>Priority: Minor
>
> If you trace the BBUtils API, what you see is this code:
> {code:java}
>   public static void copyFromBufferToArray(byte[] out, ByteBuffer in, int 
> sourceOffset,
>   int destinationOffset, int length) {
> if (in.hasArray()) {
>   System.arraycopy(in.array(), sourceOffset + in.arrayOffset(), out, 
> destinationOffset, length);
> } else if (UNSAFE_AVAIL) {
>   UnsafeAccess.copy(in, sourceOffset, out, destinationOffset, length);
> } else {
>   ByteBuffer inDup = in.duplicate();
>   inDup.position(sourceOffset);
>   inDup.get(out, destinationOffset, length);
> }
>   }
> {code}
> A ByteBuffer is being used here, which is not read-only, so it actually hits 
> on the first condition and executes this code:
> {quote}System.arraycopy(in.array(), sourceOffset + in.arrayOffset(), out, 
> destinationOffset, length);
> {quote}
> Which is almost exactly what the {{ByteBuffer}} relative bulk get method does 
> anyway, so there is no savings here, just overhead and complexity.
> Regarding the second condition... there is a bug there that I just noticed.
> {code:java|title=org.apache.hadoop.hbase.util.UnsafeAccess}
>   public static void copy(ByteBuffer src, int srcOffset, byte[] dest, int 
> destOffset,
>   int length) {
> long srcAddress = srcOffset;
> Object srcBase = null;
> if (src.isDirect()) {
>   srcAddress = srcAddress + ((DirectBuffer) src).address();
> } else {
>   srcAddress = srcAddress + BYTE_ARRAY_BASE_OFFSET + src.arrayOffset();
>   srcBase = src.array();
> }
> long destAddress = destOffset + BYTE_ARRAY_BASE_OFFSET;
> unsafeCopy(srcBase, srcAddress, dest, destAddress, length);
>   }
> {code}
> The issue here is the 
> [arrayOffset()|https://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html#arrayOffset--]
>  call. The JavaDocs here say:
> {quote}Invoke the hasArray method before invoking this method in order to 
> ensure that this buffer has an accessible backing array.
> {quote}
> However, as we saw in the previous method, if _hasArray_ returns true, we do 
> _System.arraycopy,_ so the only reason we would be in this _copy_ code is if 
> there was no access to the backing array, yet here it is, depending on it 
> having such access. That could cause problems with read-only ByteBuffers that 
> do not affect the _relative bulk get method_.
> {code:java}
> public class Test {
>   public static void main(String[] args) throws IOException {
> ByteArrayOutputStream baos = new ByteArrayOutputStream();
> ByteBufferWriterOutputStream bbwos = new 
> ByteBufferWriterOutputStream(baos);
> ByteBuffer bbSmall = ByteBuffer.wrap(new byte[512]).asReadOnlyBuffer();
> bbwos.write(bbSmall, 0, 512);
> bbwos.close();
>   }
> }
> Exception in thread "main" java.nio.ReadOnlyBufferException
>   at java.nio.ByteBuffer.arrayOffset(ByteBuffer.java:1024)
>   at org.apache.hadoop.hbase.util.UnsafeAccess.copy(UnsafeAccess.java:398)
>   at 
> org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray(ByteBufferUtils.java:54)
>   at 
> org.apache.hadoop.hbase.io.ByteBufferWriterOutputStream.write(ByteBufferWriterOutputStream.java:59)
>   at org.apache.hadoop.hbase.io.Test.main(Test.java:14)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-15809) Basic Replication WebUI

2018-03-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401468#comment-16401468
 ] 

stack commented on HBASE-15809:
---

I like the suggestion of going to the problematic regionserver once the master 
UI identifies which one is stuck.

It looks great.

> Basic Replication WebUI
> ---
>
> Key: HBASE-15809
> URL: https://issues.apache.org/jira/browse/HBASE-15809
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication, UI
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Jingyun Tian
>Priority: Critical
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-15809-v0.patch, HBASE-15809-v0.png, 
> HBASE-15809-v1.patch, rep_web_ui.zip
>
>
> At the moment the only way to have some insight on replication from the webui 
> is looking at zkdump and metrics.
> The basic information useful to get started debugging is peer information 
> and the view of WAL offsets for each peer.
> https://reviews.apache.org/r/47275/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20213) [LOGGING] Aligning formatting and logging less (compactions, in-memory compactions)

2018-03-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20213:
--
Attachment: HBASE-20213.branch-2.003.patch

> [LOGGING] Aligning formatting and logging less (compactions, in-memory 
> compactions)
> ---
>
> Key: HBASE-20213
> URL: https://issues.apache.org/jira/browse/HBASE-20213
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20213.branch-2.001.patch, 
> HBASE-20213.branch-2.002.patch, HBASE-20213.branch-2.003.patch
>
>
> Here is some logging cleanup that came of a study session this afternoon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20213) [LOGGING] Aligning formatting and logging less (compactions, in-memory compactions)

2018-03-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401465#comment-16401465
 ] 

stack commented on HBASE-20213:
---

.003 Allow for null file name when mocking. Fix the checkstyle.

> [LOGGING] Aligning formatting and logging less (compactions, in-memory 
> compactions)
> ---
>
> Key: HBASE-20213
> URL: https://issues.apache.org/jira/browse/HBASE-20213
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20213.branch-2.001.patch, 
> HBASE-20213.branch-2.002.patch, HBASE-20213.branch-2.003.patch
>
>
> Here is some logging cleanup that came of a study session this afternoon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20213) [LOGGING] Aligning formatting and logging less (compactions, in-memory compactions)

2018-03-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20213:
--
Attachment: HBASE-20213.branch-2.002.patch

> [LOGGING] Aligning formatting and logging less (compactions, in-memory 
> compactions)
> ---
>
> Key: HBASE-20213
> URL: https://issues.apache.org/jira/browse/HBASE-20213
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20213.branch-2.001.patch, 
> HBASE-20213.branch-2.002.patch
>
>
> Here is some logging cleanup that came of a study session this afternoon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20213) [LOGGING] Aligning formatting and logging less (compactions, in-memory compactions)

2018-03-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401450#comment-16401450
 ] 

Hadoop QA commented on HBASE-20213:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
37s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
2s{color} | {color:red} hbase-server: The patch generated 5 new + 293 unchanged 
- 7 fixed = 298 total (was 300) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
31s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
13m  5s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 24s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.regionserver.compactions.TestStripeCompactor |
|   | hadoop.hbase.regionserver.compactions.TestDateTieredCompactor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:9f2f2db |
| JIRA Issue | HBASE-20213 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914814/HBASE-20213.branch-2.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 46595d79a68a 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2 / e0bdc87b27 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC3 |
| checkstyle | 

[jira] [Commented] (HBASE-20202) [AMv2] Don't move region if its a split parent or offlined

2018-03-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401448#comment-16401448
 ] 

stack commented on HBASE-20202:
---

.002 Fix tests that were not expecting the new fast-fail.

> [AMv2] Don't move region if its a split parent or offlined
> --
>
> Key: HBASE-20202
> URL: https://issues.apache.org/jira/browse/HBASE-20202
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20202.branch-2.001.patch, 
> HBASE-20202.branch-2.002.patch
>
>
> Found this one running ITBLLs. We'd just finished splitting a region 
> 91655de06786f786b0ee9c51280e1ee6 and then a move for it comes in. The move 
> fails in an interesting way. The location has been removed from the 
> regionnode kept by the Master. HBASE-20178 adds macro checks on context. Need 
> to add a few checks to the likes of MoveRegionProcedure so we don't try to 
> move an offlined/split parent.
> {code}
> 2018-03-14 10:21:45,678 INFO  [PEWorker-2] procedure2.ProcedureExecutor: 
> Finished pid=3177, state=SUCCESS; SplitTableRegionProcedure 
> table=IntegrationTestBigLinkedList, parent=91655de06786f786b0ee9c51280e1ee6, 
> daughterA=b67bf6b79eaa83de788b0519f782ce8e, 
> daughterB=99cf6ddb38cad08e3aa7635b6cac2e7b in 10.0210sec   
> 2018-03-14 10:21:45,679 INFO  [PEWorker-15] 
> procedure.MasterProcedureScheduler: pid=3194, ppid=3193, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=af198ca64b196fb3d2f5b3e815b2dad0, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855, 
> IntegrationTestBigLinkedList,\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xA0,1521047891276.af198ca64b196fb3d2f5b3e815b2dad0.
> 2018-03-14 10:21:45,680 INFO  [PEWorker-5] 
> procedure.MasterProcedureScheduler: pid=3187, 
> state=RUNNABLE:MOVE_REGION_UNASSIGN; MoveRegionProcedure 
> hri=IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.,
>  source=ve0530.halxg.cloudera.com,16020,1521007509855, 
> destination=ve0528.halxg.cloudera.com,16020,1521047890874, 
> IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.
> 2018-03-14 10:21:45,680 INFO  [PEWorker-15] assignment.RegionStateStore: 
> pid=3194 updating hbase:meta 
> row=IntegrationTestBigLinkedList,\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xA0,1521047891276.af198ca64b196fb3d2f5b3e815b2dad0.,
>  regionState=CLOSING
> 2018-03-14 10:21:45,680 INFO  [PEWorker-5] procedure2.ProcedureExecutor: 
> Initialized subprocedures=[{pid=3195, ppid=3187, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855}]
> 2018-03-14 10:21:45,683 INFO  [PEWorker-15] 
> assignment.RegionTransitionProcedure: Dispatch pid=3194, ppid=3193, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=af198ca64b196fb3d2f5b3e815b2dad0, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855; rit=CLOSING, 
> location=ve0530.halxg.cloudera.com,16020,1521007509855
> 2018-03-14 10:21:45,752 INFO  [PEWorker-15] 
> procedure.MasterProcedureScheduler: pid=3195, ppid=3187, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855, 
> IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.
> 2018-03-14 10:21:45,753 ERROR [PEWorker-15] procedure2.ProcedureExecutor: 
> CODE-BUG: Uncaught runtime exception: pid=3195, ppid=3187, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855
> java.lang.NullPointerException
>   
>   
>   
>at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionStates.getOrCreateServer(RegionStates.java:934)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionStates.addRegionToServer(RegionStates.java:962)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.markRegionAsClosing(AssignmentManager.java:1548)
>

[jira] [Updated] (HBASE-20202) [AMv2] Don't move region if its a split parent or offlined

2018-03-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20202:
--
Attachment: HBASE-20202.branch-2.002.patch

> [AMv2] Don't move region if its a split parent or offlined
> --
>
> Key: HBASE-20202
> URL: https://issues.apache.org/jira/browse/HBASE-20202
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20202.branch-2.001.patch, 
> HBASE-20202.branch-2.002.patch
>
>
> Found this one running ITBLLs. We'd just finished splitting a region 
> 91655de06786f786b0ee9c51280e1ee6 and then a move for it comes in. The move 
> fails in an interesting way. The location has been removed from the 
> regionnode kept by the Master. HBASE-20178 adds macro checks on context. Need 
> to add a few checks to the likes of MoveRegionProcedure so we don't try to 
> move an offlined/split parent.
> {code}
> 2018-03-14 10:21:45,678 INFO  [PEWorker-2] procedure2.ProcedureExecutor: 
> Finished pid=3177, state=SUCCESS; SplitTableRegionProcedure 
> table=IntegrationTestBigLinkedList, parent=91655de06786f786b0ee9c51280e1ee6, 
> daughterA=b67bf6b79eaa83de788b0519f782ce8e, 
> daughterB=99cf6ddb38cad08e3aa7635b6cac2e7b in 10.0210sec   
> 2018-03-14 10:21:45,679 INFO  [PEWorker-15] 
> procedure.MasterProcedureScheduler: pid=3194, ppid=3193, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=af198ca64b196fb3d2f5b3e815b2dad0, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855, 
> IntegrationTestBigLinkedList,\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xA0,1521047891276.af198ca64b196fb3d2f5b3e815b2dad0.
> 2018-03-14 10:21:45,680 INFO  [PEWorker-5] 
> procedure.MasterProcedureScheduler: pid=3187, 
> state=RUNNABLE:MOVE_REGION_UNASSIGN; MoveRegionProcedure 
> hri=IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.,
>  source=ve0530.halxg.cloudera.com,16020,1521007509855, 
> destination=ve0528.halxg.cloudera.com,16020,1521047890874, 
> IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.
> 2018-03-14 10:21:45,680 INFO  [PEWorker-15] assignment.RegionStateStore: 
> pid=3194 updating hbase:meta 
> row=IntegrationTestBigLinkedList,\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xA0,1521047891276.af198ca64b196fb3d2f5b3e815b2dad0.,
>  regionState=CLOSING
> 2018-03-14 10:21:45,680 INFO  [PEWorker-5] procedure2.ProcedureExecutor: 
> Initialized subprocedures=[{pid=3195, ppid=3187, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855}]
> 2018-03-14 10:21:45,683 INFO  [PEWorker-15] 
> assignment.RegionTransitionProcedure: Dispatch pid=3194, ppid=3193, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=af198ca64b196fb3d2f5b3e815b2dad0, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855; rit=CLOSING, 
> location=ve0530.halxg.cloudera.com,16020,1521007509855
> 2018-03-14 10:21:45,752 INFO  [PEWorker-15] 
> procedure.MasterProcedureScheduler: pid=3195, ppid=3187, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855, 
> IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.
> 2018-03-14 10:21:45,753 ERROR [PEWorker-15] procedure2.ProcedureExecutor: 
> CODE-BUG: Uncaught runtime exception: pid=3195, ppid=3187, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855
> java.lang.NullPointerException
>   
>   
>   
>at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionStates.getOrCreateServer(RegionStates.java:934)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionStates.addRegionToServer(RegionStates.java:962)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.markRegionAsClosing(AssignmentManager.java:1548)
>

[jira] [Commented] (HBASE-15809) Basic Replication WebUI

2018-03-15 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401440#comment-16401440
 ] 

Guanghao Zhang commented on HBASE-15809:


[~stack] You can check the UI in the sub-task. I think this is enough as a 
basic replication UI for HBase 2.0.

> Basic Replication WebUI
> ---
>
> Key: HBASE-15809
> URL: https://issues.apache.org/jira/browse/HBASE-15809
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication, UI
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Jingyun Tian
>Priority: Critical
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-15809-v0.patch, HBASE-15809-v0.png, 
> HBASE-15809-v1.patch, rep_web_ui.zip
>
>
> At the moment the only way to have some insight on replication from the webui 
> is looking at zkdump and metrics.
> The basic information useful to get started debugging is peer information 
> and the view of WAL offsets for each peer.
> https://reviews.apache.org/r/47275/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20095) Redesign single instance pool in CleanerChore

2018-03-15 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401435#comment-16401435
 ] 

Reid Chan commented on HBASE-20095:
---

{{/testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyRpcServer.java:[148,45]}}
 is not part of this patch. Just leave it?

> Redesign single instance pool in CleanerChore
> -
>
> Key: HBASE-20095
> URL: https://issues.apache.org/jira/browse/HBASE-20095
> Project: HBase
>  Issue Type: Improvement
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Critical
> Attachments: HBASE-20095.master.001.patch, 
> HBASE-20095.master.002.patch, HBASE-20095.master.003.patch, 
> HBASE-20095.master.004.patch, HBASE-20095.master.005.patch, 
> HBASE-20095.master.006.patch, HBASE-20095.master.007.patch, 
> HBASE-20095.master.008.patch, HBASE-20095.master.009.patch, 
> HBASE-20095.master.010.patch, HBASE-20095.master.011.patch, 
> HBASE-20095.master.012.patch, HBASE-20095.master.013.patch, 
> HBASE-20095.master.014.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-15809) Basic Replication WebUI

2018-03-15 Thread Jingyun Tian (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401433#comment-16401433
 ] 

Jingyun Tian commented on HBASE-15809:
--

[~stack]  Appy's work needs to collect detailed replication information from 
regionservers, which I think is a bit heavy. In my design, we can check the 
general replication status on the master web UI; then, if a regionserver's 
replication is stuck, we can go to that regionserver's web UI to check the 
details, so no extra information is required.

bq. We just had a user who had 500+ peers
For that many peers, I could try to split them across different pages; I will 
try to do that next.

bq. Is there more info we could print here?
We can show the configuration of peers on the UI; as for metrics, they are 
mostly there already.

bq. Should this be in a new page or does the panel only get composed when you 
click on it?
I'm not sure whether this should go in a new page; I think either is 
acceptable. What do you think? [~zghaobac]


> Basic Replication WebUI
> ---
>
> Key: HBASE-15809
> URL: https://issues.apache.org/jira/browse/HBASE-15809
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication, UI
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Jingyun Tian
>Priority: Critical
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-15809-v0.patch, HBASE-15809-v0.png, 
> HBASE-15809-v1.patch, rep_web_ui.zip
>
>
> At the moment the only way to have some insight on replication from the webui 
> is looking at zkdump and metrics.
> The basic information useful to get started debugging is peer information 
> and the view of WAL offsets for each peer.
> https://reviews.apache.org/r/47275/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20213) [LOGGING] Aligning formatting and logging less (compactions, in-memory compactions)

2018-03-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20213:
--
Status: Patch Available  (was: Open)

.001 Log less; log using roughly the same format. Changed logs in in-memory 
compaction, file archiving, and compaction.

> [LOGGING] Aligning formatting and logging less (compactions, in-memory 
> compactions)
> ---
>
> Key: HBASE-20213
> URL: https://issues.apache.org/jira/browse/HBASE-20213
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: HBASE-20213.branch-2.001.patch
>
>
> Here is some logging cleanup that came of a study session this afternoon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20213) [LOGGING] Aligning formatting and logging less (compactions, in-memory compactions)

2018-03-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20213:
--
Fix Version/s: 2.0.0

> [LOGGING] Aligning formatting and logging less (compactions, in-memory 
> compactions)
> ---
>
> Key: HBASE-20213
> URL: https://issues.apache.org/jira/browse/HBASE-20213
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20213.branch-2.001.patch
>
>
> Here is some logging cleanup that came of a study session this afternoon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20213) [LOGGING] Aligning formatting and logging less (compactions, in-memory compactions)

2018-03-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20213:
--
Attachment: HBASE-20213.branch-2.001.patch

> [LOGGING] Aligning formatting and logging less (compactions, in-memory 
> compactions)
> ---
>
> Key: HBASE-20213
> URL: https://issues.apache.org/jira/browse/HBASE-20213
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: HBASE-20213.branch-2.001.patch
>
>
> Here is some logging cleanup that came of a study session this afternoon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-20213) [LOGGING] Aligning formatting and logging less (compactions, in-memory compactions)

2018-03-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reassigned HBASE-20213:
-

 Assignee: stack
Affects Version/s: 2.0.0-beta-2
  Component/s: logging

> [LOGGING] Aligning formatting and logging less (compactions, in-memory 
> compactions)
> ---
>
> Key: HBASE-20213
> URL: https://issues.apache.org/jira/browse/HBASE-20213
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Major
>
> Here is some logging cleanup that came of a study session this afternoon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20213) [LOGGING] Aligning formatting and logging less (compactions, in-memory compactions)

2018-03-15 Thread stack (JIRA)
stack created HBASE-20213:
-

 Summary: [LOGGING] Aligning formatting and logging less 
(compactions, in-memory compactions)
 Key: HBASE-20213
 URL: https://issues.apache.org/jira/browse/HBASE-20213
 Project: HBase
  Issue Type: Bug
Reporter: stack


Here is some logging cleanup that came of a study session this afternoon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20194) Basic Replication WebUI - Master

2018-03-15 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401418#comment-16401418
 ] 

Guanghao Zhang commented on HBASE-20194:


For the master UI, I think we need to show peer information, e.g. the list_peers result.

> Basic Replication WebUI - Master
> 
>
> Key: HBASE-20194
> URL: https://issues.apache.org/jira/browse/HBASE-20194
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Critical
> Attachments: webui3.jpg
>
>
> Subtask of HBASE-15809. Implementation of the Replication WebUI on the Master webpage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-15867) Move HBase replication tracking from ZooKeeper to HBase

2018-03-15 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-15867:
-
Fix Version/s: 2.1.0

> Move HBase replication tracking from ZooKeeper to HBase
> ---
>
> Key: HBASE-15867
> URL: https://issues.apache.org/jira/browse/HBASE-15867
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Affects Versions: 2.1.0
>Reporter: Joseph
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 2.1.0
>
>
> Move the WAL file and offset tracking out of ZooKeeper and into an HBase 
> table called hbase:replication. 
> The three largest new changes will be the classes ReplicationTableBase, 
> TableBasedReplicationQueues, and TableBasedReplicationQueuesClient. As of now, 
> ReplicationPeers and HFileRefs tracking will not be implemented. Subtasks 
> have been filed for these two jobs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-15867) Move HBase replication tracking from ZooKeeper to HBase

2018-03-15 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-15867:
-
Affects Version/s: 2.1.0

> Move HBase replication tracking from ZooKeeper to HBase
> ---
>
> Key: HBASE-15867
> URL: https://issues.apache.org/jira/browse/HBASE-15867
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Affects Versions: 2.1.0
>Reporter: Joseph
>Assignee: Zheng Hu
>Priority: Major
>
> Move the WAL file and offset tracking out of ZooKeeper and into an HBase 
> table called hbase:replication. 
> The three largest new changes will be the classes ReplicationTableBase, 
> TableBasedReplicationQueues, and TableBasedReplicationQueuesClient. As of now, 
> ReplicationPeers and HFileRefs tracking will not be implemented. Subtasks 
> have been filed for these two jobs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-19713) Enable TestInterfaceAudienceAnnotations

2018-03-15 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401349#comment-16401349
 ] 

Chia-Ping Tsai edited comment on HBASE-19713 at 3/16/18 1:15 AM:
-

The PR enabling the filter for only Public classes has been merged into 
warbucks-maven-plugin. Closing this Jira as "won't fix". The follow-up is 
-HBASE-20212-

 


was (Author: chia7712):
The PR enabling the filter for only Public classes has been merged into 
warbucks-maven-plugin. Closing this Jira as "won't fix". The follow-up is 
HBASE-20121

 

> Enable TestInterfaceAudienceAnnotations
> ---
>
> Key: HBASE-19713
> URL: https://issues.apache.org/jira/browse/HBASE-19713
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-19713.branch-2.v0.patch, 
> HBASE-19713.branch-2.v1.patch, HBASE-19713.branch-2.v2.patch, 
> HBASE-19713.v0.patch
>
>
> Make sure TestInterfaceAudienceAnnotations pass before 2.0 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-19713) Enable TestInterfaceAudienceAnnotations

2018-03-15 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved HBASE-19713.

Resolution: Won't Fix

The PR enabling the filter for only Public classes has been merged into 
warbucks-maven-plugin. Closing this Jira as "won't fix". The follow-up is 
HBASE-20121

 

> Enable TestInterfaceAudienceAnnotations
> ---
>
> Key: HBASE-19713
> URL: https://issues.apache.org/jira/browse/HBASE-19713
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-19713.branch-2.v0.patch, 
> HBASE-19713.branch-2.v1.patch, HBASE-19713.branch-2.v2.patch, 
> HBASE-19713.v0.patch
>
>
> Make sure TestInterfaceAudienceAnnotations pass before 2.0 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19924) hbase rpc throttling does not work for multi() with request count rater.

2018-03-15 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401374#comment-16401374
 ] 

Chia-Ping Tsai commented on HBASE-19924:


*Q1*
 Given that we need to check both size and count, it should be OK to get rid 
of the following methods:
{code:java}
  /**
   * Checks if it is possible to execute the specified operation.
   *
   * @param estimateWriteSize the write size that will be checked against the 
available quota
   * @param estimateReadSize the read size that will be checked against the 
available quota
   * @throws ThrottlingException thrown if not enough avialable resources to 
perform operation.
   */
  void checkQuota(long estimateWriteSize, long estimateReadSize)
throws ThrottlingException;

  /**
   * Removes the specified write and read amount from the quota.
   * At this point the write and read amount will be an estimate,
   * that will be later adjusted with a consumeWrite()/consumeRead() call.
   *
   * @param writeSize the write size that will be removed from the current quota
   * @param readSize the read size that will be removed from the current quota
   */
  void grabQuota(long writeSize, long readSize);
{code}

*Q2*
 avialable -> available

*Q3 (unrelated to this jira)*

Why do we consume the write/read requests by heap size? Should we make both 
methods accept both size and count?
{code:java|title=TimeBasedLimiter.java}
@Override
public void consumeWrite(final long size) {
  reqSizeLimiter.consume(size);
  writeSizeLimiter.consume(size);
}

@Override
public void consumeRead(final long size) {
  reqSizeLimiter.consume(size);
  readSizeLimiter.consume(size);
}
{code}

*Q4 (unrelated to this jira)*
Is it necessary to add the condition check (xxxSize > 0)? Is it possible to 
have a zero-size op? For example, the Result generated by a check-exists Get 
may have zero heap size, since the heap size of a Result is the sum of the 
heap sizes of its cells.
{code}
if (estimateWriteSize > 0) {
  if (!writeReqsLimiter.canExecute(writeReqs)) {

ThrottlingException.throwNumWriteRequestsExceeded(writeReqsLimiter.waitInterval());
  }
  if (!writeSizeLimiter.canExecute(estimateWriteSize)) {

ThrottlingException.throwWriteSizeExceeded(writeSizeLimiter.waitInterval(estimateWriteSize));
  }
}

if (estimateReadSize > 0) {
{code}
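
As a rough, self-contained sketch of the direction in Q1/Q3 (checking request 
count and size together, and still counting zero-size ops as in Q4), here is a 
hypothetical limiter; the interface and method names are made up for this 
example and are not the actual QuotaLimiter API.
{code:java}
// Hypothetical combined limiter; names are illustrative only.
interface CountAndSizeLimiter {
  void checkWrite(long numWrites, long writeSize);
  void checkRead(long numReads, long readSize);
}

class SimpleCountAndSizeLimiter implements CountAndSizeLimiter {
  private long writeReqsLeft = 100, writeSizeLeft = 10 * 1024;
  private long readReqsLeft = 100, readSizeLeft = 10 * 1024;

  @Override
  public void checkWrite(long numWrites, long writeSize) {
    // The request count is checked even when the estimated size is zero,
    // so a zero-heap-size op is still throttled by count.
    if (numWrites > writeReqsLeft || writeSize > writeSizeLeft) {
      throw new IllegalStateException("write quota exceeded");
    }
    writeReqsLeft -= numWrites;
    writeSizeLeft -= writeSize;
  }

  @Override
  public void checkRead(long numReads, long readSize) {
    if (numReads > readReqsLeft || readSize > readSizeLeft) {
      throw new IllegalStateException("read quota exceeded");
    }
    readReqsLeft -= numReads;
    readSizeLeft -= readSize;
  }
}
{code}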

> hbase rpc throttling does not work for multi() with request count rater.
> 
>
> Key: HBASE-19924
> URL: https://issues.apache.org/jira/browse/HBASE-19924
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 0.16.0, 1.2.6
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Major
> Attachments: HBASE-19924-master-v001.patch
>
>
> Basically, rpc throttling does not work for a request-count-based rater for 
> multi. In the following code, when it calls the limiter's checkQuota(), 
> numWrites/numReads is lost.
> {code:java}
> @Override
> public void checkQuota(int numWrites, int numReads, int numScans) throws 
> ThrottlingException {
>   writeConsumed = estimateConsume(OperationType.MUTATE, numWrites, 100);
>   readConsumed = estimateConsume(OperationType.GET, numReads, 100);
>   readConsumed += estimateConsume(OperationType.SCAN, numScans, 1000);
>   writeAvailable = Long.MAX_VALUE;
>   readAvailable = Long.MAX_VALUE;
>   for (final QuotaLimiter limiter : limiters) {
> if (limiter.isBypass()) continue;
> limiter.checkQuota(writeConsumed, readConsumed);
> readAvailable = Math.min(readAvailable, limiter.getReadAvailable());
> writeAvailable = Math.min(writeAvailable, limiter.getWriteAvailable());
>   }
>   for (final QuotaLimiter limiter : limiters) {
> limiter.grabQuota(writeConsumed, readConsumed);
>   }
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20212) Make all Public classes have InterfaceAudience category

2018-03-15 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401351#comment-16401351
 ] 

Chia-Ping Tsai commented on HBASE-20212:


It will be a big patch. Perhaps we should create a jira for each module. 
[~busbey] WDYT?

> Make all Public classes have InterfaceAudience category
> ---
>
> Key: HBASE-20212
> URL: https://issues.apache.org/jira/browse/HBASE-20212
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 2.1.0
>
>
> The tasks to be resolved are shown below.
>  # add warbucks-maven-plugin to root pom
>  # make sure all sub modules ref the warbucks-maven-plugin
>  # remove old checker (TestInterfaceAudienceAnnotations)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-19713) Enable TestInterfaceAudienceAnnotations

2018-03-15 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401349#comment-16401349
 ] 

Chia-Ping Tsai edited comment on HBASE-19713 at 3/16/18 1:15 AM:
-

The PR enabling the filter for only Public classes has been merged into 
warbucks-maven-plugin. Closing this Jira as "won't fix". The follow-up is 
HBASE-20212

 


was (Author: chia7712):
The PR enabling the filter for only Public classes has been merged into 
warbucks-maven-plugin. Closing this Jira as "won't fix". The follow-up is 
-HBASE-20212-

 

> Enable TestInterfaceAudienceAnnotations
> ---
>
> Key: HBASE-19713
> URL: https://issues.apache.org/jira/browse/HBASE-19713
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-19713.branch-2.v0.patch, 
> HBASE-19713.branch-2.v1.patch, HBASE-19713.branch-2.v2.patch, 
> HBASE-19713.v0.patch
>
>
> Make sure TestInterfaceAudienceAnnotations pass before 2.0 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20212) Make all Public classes have InterfaceAudience category

2018-03-15 Thread Chia-Ping Tsai (JIRA)
Chia-Ping Tsai created HBASE-20212:
--

 Summary: Make all Public classes have InterfaceAudience category
 Key: HBASE-20212
 URL: https://issues.apache.org/jira/browse/HBASE-20212
 Project: HBase
  Issue Type: Task
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai
 Fix For: 2.0.0, 3.0.0, 2.1.0


The tasks to be resolved are shown below.
 # add warbucks-maven-plugin to root pom
 # make sure all sub modules ref the warbucks-maven-plugin
 # remove old checker (TestInterfaceAudienceAnnotations)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20209) Do Not Use Both Map containsKey and get Methods

2018-03-15 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401344#comment-16401344
 ] 

Duo Zhang commented on HBASE-20209:
---

The fix here seems fine, but if you want to prune other containsKey calls in 
the replication-related code then you need to be careful.

You should be aware that a HashMap in Java does accept null values, so a get 
which returns null does not mean that containsKey will return false... 
Especially for replication, the TableCFs map is used to record a table and 
the column families we want to replicate. If we want to replicate all the 
column families, then we use a null or empty list to indicate this.

Thanks.

> Do Not Use Both Map containsKey and get Methods
> ---
>
> Key: HBASE-20209
> URL: https://issues.apache.org/jira/browse/HBASE-20209
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Affects Versions: 2.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HBASE-20209.1.patch
>
>
> {code:title=ReplicationSink.java}
> String tableName = table.getNameWithNamespaceInclAsString();
> if (bulkLoadHFileMap.containsKey(tableName)) {
>   List<Pair<byte[], List<String>>> familyHFilePathsList = 
> bulkLoadHFileMap.get(tableName);
>   boolean foundFamily = false;
>   for (int i = 0; i < familyHFilePathsList.size(); i++) {
> Pair<byte[], List<String>> familyHFilePathsPair = 
> familyHFilePathsList.get(i);
> if (Bytes.equals(familyHFilePathsPair.getFirst(), family)) {
>   // Found family already present, just add the path to the 
> existing list
>   familyHFilePathsPair.getSecond().add(pathToHfileFromNS);
>   foundFamily = true;
>   break;
> }
>   }
> {code}
> I propose that this code not use both of the Map methods _containsKey_ *and* 
> _get_.  Simply use the _get_ method once and check for a _null_ return value 
> to test for existence.  This saves a trip to the Map data structure on each 
> call.  Also, use an enhanced for loop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-15454) Freeze date tiered store files older than max age

2018-03-15 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-15454:
--
Fix Version/s: (was: 1.5.0)
   (was: 2.0.0)

> Freeze date tiered store files older than max age
> -
>
> Key: HBASE-15454
> URL: https://issues.apache.org/jira/browse/HBASE-15454
> Project: HBase
>  Issue Type: New Feature
>  Components: Compaction
>Affects Versions: 2.0.0, 1.3.0, 0.98.18, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-15454-v1.patch, HBASE-15454-v2.patch, 
> HBASE-15454-v3.patch, HBASE-15454-v4.patch, HBASE-15454-v5.patch, 
> HBASE-15454-v6.patch, HBASE-15454-v7.patch, HBASE-15454.patch
>
>
> In date tiered compaction, the store files older than max age are never 
> touched by minor compactions. Here we introduce a 'freeze window' operation, 
> which does the following things:
> 1. Find all store files that contain cells whose timestamps are in the given 
> window.
> 2. Compact all these files and output one file for each window that these 
> files covered.
> After the compaction, we will have only one file in the given window, and all 
> cells whose timestamps are in the given window are in that single file. If 
> you do not write new cells with an older timestamp in this window, the file 
> will never be changed. This makes it easier to do erasure coding on the 
> frozen file to reduce redundancy. It also makes it possible to check 
> consistency between master and peer cluster incrementally.
> And why use the word 'freeze'?
> Because there is already an 'HFileArchiver' class. I want to use a different 
> word to prevent confusion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-18788) NPE when running TestSerialReplication

2018-03-15 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-18788.
---
  Resolution: Not A Problem
Hadoop Flags:   (was: Reviewed)

The old TestSerialReplication has already been removed.

> NPE when running TestSerialReplication
> --
>
> Key: HBASE-18788
> URL: https://issues.apache.org/jira/browse/HBASE-18788
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Fabrice MONNIER
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-18788.patch
>
>
> It may not cause the tests to fail, but I still think we need to fix it.
> {noformat}
> 2017-09-11 21:01:37,009 ERROR [ubuntu,44001,1505134829330_Chore_1] 
> hbase.ScheduledChore(190): Caught error
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.master.cleaner.ReplicationMetaCleaner.chore(ReplicationMetaCleaner.java:87)
>   at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:187)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:110)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20182) Can not locate region after split and merge

2018-03-15 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401314#comment-16401314
 ] 

Duo Zhang commented on HBASE-20182:
---

No, it is not asking for the location of an offlined region; it is just asking 
for the location of a row in the table.

{code}
UTIL.getConnection().getRegionLocator(tableName).getRegionLocation(Bytes.toBytes(1),
 true).getServerName()
{code}

This is all I have done to ask for a location; no region name or encoded 
region name, right? But the current code keeps giving me an offlined region, 
and finally it fails...


> Can not locate region after split and merge
> ---
>
> Key: HBASE-20182
> URL: https://issues.apache.org/jira/browse/HBASE-20182
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: Duo Zhang
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-20182-UT.patch
>
>
> When implementing the serial replication feature in HBASE-20046, I found that 
> when splitting a region, we do not remove the parent region; instead we mark 
> it offline.
> And when locating a region, we only scan one row, so if we locate to the 
> offlined region then we are dead.
> This does not happen for splitting alone, since one of the new daughter 
> regions has the same start row as the parent region, and its timestamp is 
> greater, so when doing a reverse scan we will always hit the daughter first.
> But if we also consider merge then bad things happen. Suppose we have two 
> regions A and B; we split B into C and D, and then merge A and C into E. 
> Ideally the regions should be E and D, but actually the regions in meta will 
> be E, B, and D, and they all have different start rows. If you use a row 
> within the range of old region C, then we will always locate to B and throw 
> an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20202) [AMv2] Don't move region if its a split parent or offlined

2018-03-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401309#comment-16401309
 ] 

Hadoop QA commented on HBASE-20202:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
11s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
57s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
38s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
11s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  9m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
0s{color} | {color:red} hbase-server: The patch generated 4 new + 183 unchanged 
- 0 fixed = 187 total (was 183) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
32s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  4m  
7s{color} | {color:red} The patch causes 10 errors with Hadoop v2.6.5. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  4m 
40s{color} | {color:red} The patch causes 10 errors with Hadoop v2.7.4. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  5m 
15s{color} | {color:red} The patch causes 10 errors with Hadoop v3.0.0. {color} 
|
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
1m  9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
8s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}160m  5s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
58s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}219m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit 

[jira] [Commented] (HBASE-20189) Typo in Required Java Version error message while building HBase.

2018-03-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401213#comment-16401213
 ] 

Hudson commented on HBASE-20189:


Results for branch branch-1.3
[build #264 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/264/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/264//General_Nightly_Build_Report/]


(/) {color:green}+1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/264//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/264//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Typo in Required Java Version error message while building HBase.
> -
>
> Key: HBASE-20189
> URL: https://issues.apache.org/jira/browse/HBASE-20189
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Trivial
>  Labels: beginner, beginners
> Fix For: 2.0.0, 2.1.0, 1.3.2, 1.2.7, 1.4.3
>
> Attachments: hbase-20189.master.001.patch
>
>
> Change 'requirs' to 'requires'. See below:
> {code:java}
> $ mvn clean install -DskipTests
> ...
> [WARNING] Rule 2: org.apache.maven.plugins.enforcer.RequireJavaVersion failed 
> with message:
> Java is out of date.
>   HBase requirs at least version 1.8 of the JDK to properly build from source.
>   You appear to be using an older version. You can use either "mvn -version" 
> or
>   "mvn enforcer:display-info" to verify what version is active.
>   See the reference guide on building for more information: 
> http://hbase.apache.org/book.html#build
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20146) Regions are stuck while opening when WAL is disabled

2018-03-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401215#comment-16401215
 ] 

Hudson commented on HBASE-20146:


Results for branch branch-1.3
[build #264 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/264/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/264//General_Nightly_Build_Report/]


(/) {color:green}+1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/264//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/264//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Regions are stuck while opening when WAL is disabled
> 
>
> Key: HBASE-20146
> URL: https://issues.apache.org/jira/browse/HBASE-20146
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.3.1
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: HBASE-20146-addendum.patch, HBASE-20146.patch, 
> HBASE-20146.v1.patch
>
>
> On a running cluster we had set {{hbase.regionserver.hlog.enabled}} to false 
> to disable the WAL for the complete cluster. After restarting the HBase service, 
> regions are not getting opened, leading to an HMaster abort because the Namespace 
> table regions are not getting assigned. 
> jstack for region open:
> {noformat}
> "RS_OPEN_PRIORITY_REGION-BLR106595:16045-1" #159 prio=5 os_prio=0 
> tid=0x7fdfa4341000 nid=0x419d waiting on condition [0x7fdfa0467000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x87554448> (a 
> java.util.concurrent.CountDownLatch$Sync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> at org.apache.hadoop.hbase.wal.WALKey.getWriteEntry(WALKey.java:98)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeMarker(WALUtil.java:131)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeRegionEventMarker(WALUtil.java:88)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.writeRegionOpenMarker(HRegion.java:1026)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6849)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6803)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6774)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6730)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6681)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This used to work with HBase 1.0.2 version.
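
For context on the hang above: the open handler is parked on the WALKey write-entry CountDownLatch, which only the WAL append/sync path counts down, so with the WAL disabled nothing ever releases it. A minimal, self-contained sketch of that mechanism (plain JDK, not HBase code; the guard shown is just the general shape of a fix, not the committed patch):

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Toy illustration only: a waiter parks forever on a latch that the (disabled)
// WAL path would normally count down; the guard skips the wait entirely.
public class DisabledWalLatchDemo {
  public static void main(String[] args) throws InterruptedException {
    // Stand-in for hbase.regionserver.hlog.enabled; pass -Dwal.enabled=true to flip it.
    final boolean walEnabled = Boolean.getBoolean("wal.enabled");
    final CountDownLatch markerSynced = new CountDownLatch(1);

    if (!walEnabled) {
      // Guard: with no WAL there is no write entry to wait for, so don't block.
      System.out.println("WAL disabled: skipping region-open marker sync");
      return;
    }

    // In the real system a WAL sync thread counts this down after the
    // region-event marker is appended; here we simulate it.
    new Thread(markerSynced::countDown).start();

    // Without the guard above, this await is where the open handler would park
    // forever when the WAL is disabled.
    if (!markerSynced.await(5, TimeUnit.SECONDS)) {
      System.out.println("timed out waiting for marker sync");
    } else {
      System.out.println("region-open marker synced; open can proceed");
    }
  }
}
{code}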



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401214#comment-16401214
 ] 

Hudson commented on HBASE-18864:


Results for branch branch-1.3
[build #264 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/264/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/264//General_Nightly_Build_Report/]


(/) {color:green}+1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/264//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/264//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Replication
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
> Fix For: 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 
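
A rough reproduction sketch of the scenario above against the 1.x client API (table and family names are made up, the scope is set at create time rather than via alter, and the put would be issued from the peer cluster as described):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ReplicationScopeFiveRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("repro");
      HColumnDescriptor cf = new HColumnDescriptor("f");
      cf.setScope(5);                       // REPLICATION_SCOPE => '5' (neither 0 nor 1)
      HTableDescriptor htd = new HTableDescriptor(tn);
      htd.addFamily(cf);
      admin.createTable(htd);

      // On the peer cluster, a put against the replicated table is where the
      // NullPointerException was reported.
      try (Table table = conn.getTable(tn)) {
        table.put(new Put(Bytes.toBytes("r1"))
            .addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v")));
      }
    }
  }
}
{code}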



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20119) Introduce a pojo class to carry coprocessor information in order to make TableDescriptorBuilder accept multiple cp at once

2018-03-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401195#comment-16401195
 ] 

Hudson commented on HBASE-20119:


Results for branch branch-2.0
[build #43 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/43/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/43//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/43//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/43//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Introduce a pojo class to carry coprocessor information in order to make 
> TableDescriptorBuilder accept multiple cp at once
> --
>
> Key: HBASE-20119
> URL: https://issues.apache.org/jira/browse/HBASE-20119
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0, 3.0.0, 2.1.0
>
> Attachments: HBASE-20119.branch-2.v0.patch, 
> HBASE-20119.v0.patch.patch, HBASE-20119.v1.patch.patch, HBASE-20119.v2.patch, 
> HBASE-20119.v3.patch
>
>
> The way to add cp to TableDescriptorBuilder is shown below.
> {code:java}
> public TableDescriptorBuilder addCoprocessor(String className) throws 
> IOException {
>   return addCoprocessor(className, null, Coprocessor.PRIORITY_USER, null);
> }
> public TableDescriptorBuilder addCoprocessor(String className, Path 
> jarFilePath,
> int priority, final Map<String, String> kvs) throws IOException {
>   desc.addCoprocessor(className, jarFilePath, priority, kvs);
>   return this;
> }
> public TableDescriptorBuilder addCoprocessorWithSpec(final String specStr) 
> throws IOException {
>   desc.addCoprocessorWithSpec(specStr);
>   return this;
> }{code}
> When loading our config to create a table with multiple cps, we have to write 
> an ugly for-loop.
> {code:java}
> val builder = TableDescriptorBuilder.newBuilder(tableName)
>   .setAAA()
>   .setBBB()
> cps.map(toHBaseCp).foreach(builder.addCoprocessor)
> cfs.map(toHBaseCf).foreach(builder.addColumnFamily)
> admin.createTable(builder.build())
> {code}
> If we introduce a pojo to carry the cp data and add the method accepting 
> multiple cps and cfs, it is easier to exercise the fluent interface of 
> TableDescriptorBuilder.
> {code:java}
> admin.createTable(TableDescriptorBuilder.newBuilder(tableName)
> .addCoprocessor(cps.map(toHBaseCp).asJavaCollection)
> .addColumnFamily(cf.map(toHBaseCf).asJavaCollection)
> .setAAA()
> .setBBB()
> .build){code}
>  
>  
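
For what such a carrier could look like, a hedged sketch (the names CoprocessorSpec and its accessors are illustrative only, not the class that eventually landed in HBase):

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Illustrative carrier POJO only; real class/method names in HBase may differ.
public final class CoprocessorSpec {
  private final String className;
  private final String jarPath;                 // null if the class is already on the classpath
  private final int priority;
  private final Map<String, String> properties;

  public CoprocessorSpec(String className, String jarPath, int priority,
      Map<String, String> properties) {
    this.className = className;
    this.jarPath = jarPath;
    this.priority = priority;
    this.properties = Collections.unmodifiableMap(new HashMap<>(properties));
  }

  public String getClassName() { return className; }
  public String getJarPath() { return jarPath; }
  public int getPriority() { return priority; }
  public Map<String, String> getProperties() { return properties; }
}

// With such a POJO, a builder method along the lines of
//   TableDescriptorBuilder setCoprocessors(Collection<CoprocessorSpec> cps)
// (hypothetical signature) lets callers stay inside the fluent chain instead of
// looping over addCoprocessor.
{code}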



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20190) Fix default for MIGRATE_TABLE_STATE_FROM_ZK_KEY

2018-03-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401196#comment-16401196
 ] 

Hudson commented on HBASE-20190:


Results for branch branch-2.0
[build #43 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/43/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/43//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/43//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/43//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Fix default for MIGRATE_TABLE_STATE_FROM_ZK_KEY
> ---
>
> Key: HBASE-20190
> URL: https://issues.apache.org/jira/browse/HBASE-20190
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20190.branch-2.001.patch
>
>
> All works but the flag name will confuse: name is 
> MIGRATE_TABLE_STATE_FROM_ZK_KEY but you'd set it to true to NOT migrate from 
> zk. Found by [~tedyu] in the parent issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401175#comment-16401175
 ] 

Hudson commented on HBASE-18864:


Results for branch branch-1.2
[build #268 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/268/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/268//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/268//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/268//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Replication
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
> Fix For: 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20146) Regions are stuck while opening when WAL is disabled

2018-03-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401176#comment-16401176
 ] 

Hudson commented on HBASE-20146:


Results for branch branch-1.2
[build #268 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/268/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/268//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/268//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/268//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Regions are stuck while opening when WAL is disabled
> 
>
> Key: HBASE-20146
> URL: https://issues.apache.org/jira/browse/HBASE-20146
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.3.1
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: HBASE-20146-addendum.patch, HBASE-20146.patch, 
> HBASE-20146.v1.patch
>
>
> On a running cluster we had set {{hbase.regionserver.hlog.enabled}} to false 
> to disable the WAL for the complete cluster. After restarting the HBase service, 
> regions are not getting opened, leading to an HMaster abort because the Namespace 
> table regions are not getting assigned. 
> jstack for region open:
> {noformat}
> "RS_OPEN_PRIORITY_REGION-BLR106595:16045-1" #159 prio=5 os_prio=0 
> tid=0x7fdfa4341000 nid=0x419d waiting on condition [0x7fdfa0467000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x87554448> (a 
> java.util.concurrent.CountDownLatch$Sync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> at org.apache.hadoop.hbase.wal.WALKey.getWriteEntry(WALKey.java:98)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeMarker(WALUtil.java:131)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeRegionEventMarker(WALUtil.java:88)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.writeRegionOpenMarker(HRegion.java:1026)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6849)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6803)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6774)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6730)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6681)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This used to work with HBase 1.0.2 version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20190) Fix default for MIGRATE_TABLE_STATE_FROM_ZK_KEY

2018-03-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401169#comment-16401169
 ] 

Hudson commented on HBASE-20190:


Results for branch branch-2
[build #489 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/489/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/489//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/489//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/489//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Fix default for MIGRATE_TABLE_STATE_FROM_ZK_KEY
> ---
>
> Key: HBASE-20190
> URL: https://issues.apache.org/jira/browse/HBASE-20190
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20190.branch-2.001.patch
>
>
> All works but the flag name will confuse: name is 
> MIGRATE_TABLE_STATE_FROM_ZK_KEY but you'd set it to true to NOT migrate from 
> zk. Found by [~tedyu] in the parent issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20119) Introduce a pojo class to carry coprocessor information in order to make TableDescriptorBuilder accept multiple cp at once

2018-03-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401168#comment-16401168
 ] 

Hudson commented on HBASE-20119:


Results for branch branch-2
[build #489 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/489/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/489//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/489//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/489//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Introduce a pojo class to carry coprocessor information in order to make 
> TableDescriptorBuilder accept multiple cp at once
> --
>
> Key: HBASE-20119
> URL: https://issues.apache.org/jira/browse/HBASE-20119
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0, 3.0.0, 2.1.0
>
> Attachments: HBASE-20119.branch-2.v0.patch, 
> HBASE-20119.v0.patch.patch, HBASE-20119.v1.patch.patch, HBASE-20119.v2.patch, 
> HBASE-20119.v3.patch
>
>
> The way to add cp to TableDescriptorBuilder is shown below.
> {code:java}
> public TableDescriptorBuilder addCoprocessor(String className) throws 
> IOException {
>   return addCoprocessor(className, null, Coprocessor.PRIORITY_USER, null);
> }
> public TableDescriptorBuilder addCoprocessor(String className, Path 
> jarFilePath,
> int priority, final Map<String, String> kvs) throws IOException {
>   desc.addCoprocessor(className, jarFilePath, priority, kvs);
>   return this;
> }
> public TableDescriptorBuilder addCoprocessorWithSpec(final String specStr) 
> throws IOException {
>   desc.addCoprocessorWithSpec(specStr);
>   return this;
> }{code}
> When loading our config to create a table with multiple cps, we have to write 
> an ugly for-loop.
> {code:java}
> val builder = TableDescriptorBuilder.newBuilder(tableName)
>   .setAAA()
>   .setBBB()
> cps.map(toHBaseCp).foreach(builder.addCoprocessor)
> cfs.map(toHBaseCf).foreach(builder.addColumnFamily)
> admin.createTable(builder.build())
> {code}
> If we introduce a pojo to carry the cp data and add the method accepting 
> multiple cps and cfs, it is easier to exercise the fluent interface of 
> TableDescriptorBuilder.
> {code:java}
> admin.createTable(TableDescriptorBuilder.newBuilder(tableName)
> .addCoprocessor(cps.map(toHBaseCp).asJavaCollection)
> .addColumnFamily(cf.map(toHBaseCf).asJavaCollection)
> .setAAA()
> .setBBB()
> .build){code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20197) Review of ByteBufferWriterOutputStream.java

2018-03-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401156#comment-16401156
 ] 

Hadoop QA commented on HBASE-20197:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
42s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} hbase-common: The patch generated 0 new + 0 
unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
24s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  6m 
23s{color} | {color:red} The patch causes 10 errors with Hadoop v2.6.5. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  8m 
26s{color} | {color:red} The patch causes 10 errors with Hadoop v2.7.4. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 
32s{color} | {color:red} The patch causes 10 errors with Hadoop v3.0.0. {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 15s{color} 
| {color:red} hbase-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.io.TestByteBufferWriterOutputStream |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20197 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914786/HBASE-20197.5.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux f757be93bfa4 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / c200bf8f78 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_151 |
| 

[jira] [Resolved] (HBASE-20210) Note in refguide that RSGroups API is private, not for public consumption; shell is only access

2018-03-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-20210.
---
  Resolution: Fixed
Assignee: stack
Hadoop Flags: Reviewed

Pushed to master. Thanks [~psomogyi]

> Note in refguide that RSGroups API is private, not for public consumption; 
> shell is only access
> ---
>
> Key: HBASE-20210
> URL: https://issues.apache.org/jira/browse/HBASE-20210
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20210.master.001.patch
>
>
> Came up yesterday in an internal conversation. Mike Drob noticed that the 
> CPEP for RSGroups is marked audience Private, which sort of makes sense given 
> this is an evolving feature. The refguide though makes it sound as though you 
> can drive RSGroups from shell or API. Let me shut down the talk of the API being 
> public.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20197) Review of ByteBufferWriterOutputStream.java

2018-03-15 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401125#comment-16401125
 ] 

BELUGA BEHR edited comment on HBASE-20197 at 3/15/18 9:22 PM:
--

[~anoop.hbase] I have attached another patch  [^HBASE-20197.5.patch] which uses 
{{ByteBufferUtils}}.  You all can pick, though I personally prefer  
[^HBASE-20197.4.patch]  for KISS and future-proofing. :)


was (Author: belugabehr):
[~anoop.hbase] I have attached another patch  [^HBASE-20197.5.patch] which uses 
{{ByteBufferUtils}}.  You all can pick. :)

> Review of ByteBufferWriterOutputStream.java
> ---
>
> Key: HBASE-20197
> URL: https://issues.apache.org/jira/browse/HBASE-20197
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Affects Versions: 2.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HBASE-20197.1.patch, HBASE-20197.2.patch, 
> HBASE-20197.3.patch, HBASE-20197.4.patch, HBASE-20197.5.patch
>
>
> In looking at this class, two things caught my eye.
>  # Default buffer size of 4K
>  # Re-sizing of buffer on demand
>  
> Java's {{BufferedOutputStream}} uses an internal buffer size of 8K on modern 
> JVMs.  This is due to various bench-marking that showed optimal performance 
> at this level.
>  The Re-sizing buffer looks a bit "unsafe":
>  
> {code:java}
> public void write(ByteBuffer b, int off, int len) throws IOException {
>   byte[] buf = null;
>   if (len > TEMP_BUF_LENGTH) {
> buf = new byte[len];
>   } else {
> if (this.tempBuf == null) {
>   this.tempBuf = new byte[TEMP_BUF_LENGTH];
> }
> buf = this.tempBuf;
>   }
> ...
> }
> {code}
> If this method gets one call with a 'len' of 4000, then 4001, then 4002, then 
> 4003, etc. then the 'tempBuf' will be re-created many times.  Also, it seems 
> unsafe to create a buffer as large as the 'len' input.  This could 
> theoretically lead to an internal buffer of 2GB for each instance of this 
> class.
> I propose:
>  # Increase the default buffer size to 8K
>  # Create the buffer once and chunk the output instead of loading data into a 
> single array and writing it to the output stream.
>  
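
To make the second proposal concrete, a hedged sketch (standalone class name and the 8K constant are illustrative; the attached patches may do this differently): copy through one fixed-size, reused scratch array instead of allocating a 'len'-sized temporary per call.

{code:java}
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;

// Sketch of "create the buffer once and chunk the output", using plain JDK calls.
public class ChunkedByteBufferWriter {
  private static final int BUFFER_SIZE = 8 * 1024;      // proposed default
  private final OutputStream os;
  private final byte[] chunk = new byte[BUFFER_SIZE];   // allocated once, reused

  public ChunkedByteBufferWriter(OutputStream os) {
    this.os = os;
  }

  public void write(ByteBuffer b, int off, int len) throws IOException {
    // Work on a duplicate so the caller's position/limit are never disturbed.
    ByteBuffer dup = b.duplicate();
    dup.position(off);
    int remaining = len;
    while (remaining > 0) {
      int n = Math.min(remaining, BUFFER_SIZE);
      dup.get(chunk, 0, n);        // bulk copy into the fixed-size scratch array
      os.write(chunk, 0, n);
      remaining -= n;
    }
  }
}
{code}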



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20197) Review of ByteBufferWriterOutputStream.java

2018-03-15 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HBASE-20197:

Attachment: HBASE-20197.5.patch

> Review of ByteBufferWriterOutputStream.java
> ---
>
> Key: HBASE-20197
> URL: https://issues.apache.org/jira/browse/HBASE-20197
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Affects Versions: 2.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HBASE-20197.1.patch, HBASE-20197.2.patch, 
> HBASE-20197.3.patch, HBASE-20197.4.patch, HBASE-20197.5.patch
>
>
> In looking at this class, two things caught my eye.
>  # Default buffer size of 4K
>  # Re-sizing of buffer on demand
>  
> Java's {{BufferedOutputStream}} uses an internal buffer size of 8K on modern 
> JVMs.  This is due to various bench-marking that showed optimal performance 
> at this level.
>  The Re-sizing buffer looks a bit "unsafe":
>  
> {code:java}
> public void write(ByteBuffer b, int off, int len) throws IOException {
>   byte[] buf = null;
>   if (len > TEMP_BUF_LENGTH) {
> buf = new byte[len];
>   } else {
> if (this.tempBuf == null) {
>   this.tempBuf = new byte[TEMP_BUF_LENGTH];
> }
> buf = this.tempBuf;
>   }
> ...
> }
> {code}
> If this method gets one call with a 'len' of 4000, then 4001, then 4002, then 
> 4003, etc. then the 'tempBuf' will be re-created many times.  Also, it seems 
> unsafe to create a buffer as large as the 'len' input.  This could 
> theoretically lead to an internal buffer of 2GB for each instance of this 
> class.
> I propose:
>  # Increase the default buffer size to 8K
>  # Create the buffer once and chunk the output instead of loading data into a 
> single array and writing it to the output stream.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20197) Review of ByteBufferWriterOutputStream.java

2018-03-15 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HBASE-20197:

Attachment: (was: HBASE-20197.5.patch)

> Review of ByteBufferWriterOutputStream.java
> ---
>
> Key: HBASE-20197
> URL: https://issues.apache.org/jira/browse/HBASE-20197
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Affects Versions: 2.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HBASE-20197.1.patch, HBASE-20197.2.patch, 
> HBASE-20197.3.patch, HBASE-20197.4.patch, HBASE-20197.5.patch
>
>
> In looking at this class, two things caught my eye.
>  # Default buffer size of 4K
>  # Re-sizing of buffer on demand
>  
> Java's {{BufferedOutputStream}} uses an internal buffer size of 8K on modern 
> JVMs.  This is due to various bench-marking that showed optimal performance 
> at this level.
>  The Re-sizing buffer looks a bit "unsafe":
>  
> {code:java}
> public void write(ByteBuffer b, int off, int len) throws IOException {
>   byte[] buf = null;
>   if (len > TEMP_BUF_LENGTH) {
> buf = new byte[len];
>   } else {
> if (this.tempBuf == null) {
>   this.tempBuf = new byte[TEMP_BUF_LENGTH];
> }
> buf = this.tempBuf;
>   }
> ...
> }
> {code}
> If this method gets one call with a 'len' of 4000, then 4001, then 4002, then 
> 4003, etc. then the 'tempBuf' will be re-created many times.  Also, it seems 
> unsafe to create a buffer as large as the 'len' input.  This could 
> theoretically lead to an internal buffer of 2GB for each instance of this 
> class.
> I propose:
>  # Increase the default buffer size to 8K
>  # Create the buffer once and chunk the output instead of loading data into a 
> single array and writing it to the output stream.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20197) Review of ByteBufferWriterOutputStream.java

2018-03-15 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HBASE-20197:

Status: Patch Available  (was: Open)

[~anoop.hbase] I have attached another patch  [^HBASE-20197.5.patch] which uses 
{{ByteBufferUtils}}.  You all can pick. :)

> Review of ByteBufferWriterOutputStream.java
> ---
>
> Key: HBASE-20197
> URL: https://issues.apache.org/jira/browse/HBASE-20197
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Affects Versions: 2.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HBASE-20197.1.patch, HBASE-20197.2.patch, 
> HBASE-20197.3.patch, HBASE-20197.4.patch, HBASE-20197.5.patch
>
>
> In looking at this class, two things caught my eye.
>  # Default buffer size of 4K
>  # Re-sizing of buffer on demand
>  
> Java's {{BufferedOutputStream}} uses an internal buffer size of 8K on modern 
> JVMs.  This is due to various bench-marking that showed optimal performance 
> at this level.
>  The Re-sizing buffer looks a bit "unsafe":
>  
> {code:java}
> public void write(ByteBuffer b, int off, int len) throws IOException {
>   byte[] buf = null;
>   if (len > TEMP_BUF_LENGTH) {
> buf = new byte[len];
>   } else {
> if (this.tempBuf == null) {
>   this.tempBuf = new byte[TEMP_BUF_LENGTH];
> }
> buf = this.tempBuf;
>   }
> ...
> }
> {code}
> If this method gets one call with a 'len' of 4000, then 4001, then 4002, then 
> 4003, etc. then the 'tempBuf' will be re-created many times.  Also, it seems 
> unsafe to create a buffer as large as the 'len' input.  This could 
> theoretically lead to an internal buffer of 2GB for each instance of this 
> class.
> I propose:
>  # Increase the default buffer size to 8K
>  # Create the buffer once and chunk the output instead of loading data into a 
> single array and writing it to the output stream.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20197) Review of ByteBufferWriterOutputStream.java

2018-03-15 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HBASE-20197:

Attachment: HBASE-20197.5.patch

> Review of ByteBufferWriterOutputStream.java
> ---
>
> Key: HBASE-20197
> URL: https://issues.apache.org/jira/browse/HBASE-20197
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Affects Versions: 2.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HBASE-20197.1.patch, HBASE-20197.2.patch, 
> HBASE-20197.3.patch, HBASE-20197.4.patch, HBASE-20197.5.patch
>
>
> In looking at this class, two things caught my eye.
>  # Default buffer size of 4K
>  # Re-sizing of buffer on demand
>  
> Java's {{BufferedOutputStream}} uses an internal buffer size of 8K on modern 
> JVMs.  This is due to various bench-marking that showed optimal performance 
> at this level.
>  The Re-sizing buffer looks a bit "unsafe":
>  
> {code:java}
> public void write(ByteBuffer b, int off, int len) throws IOException {
>   byte[] buf = null;
>   if (len > TEMP_BUF_LENGTH) {
> buf = new byte[len];
>   } else {
> if (this.tempBuf == null) {
>   this.tempBuf = new byte[TEMP_BUF_LENGTH];
> }
> buf = this.tempBuf;
>   }
> ...
> }
> {code}
> If this method gets one call with a 'len' of 4000, then 4001, then 4002, then 
> 4003, etc. then the 'tempBuf' will be re-created many times.  Also, it seems 
> unsafe to create a buffer as large as the 'len' input.  This could 
> theoretically lead to an internal buffer of 2GB for each instance of this 
> class.
> I propose:
>  # Increase the default buffer size to 8K
>  # Create the buffer once and chunk the output instead of loading data into a 
> single array and writing it to the output stream.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20197) Review of ByteBufferWriterOutputStream.java

2018-03-15 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HBASE-20197:

Status: Open  (was: Patch Available)

> Review of ByteBufferWriterOutputStream.java
> ---
>
> Key: HBASE-20197
> URL: https://issues.apache.org/jira/browse/HBASE-20197
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Affects Versions: 2.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HBASE-20197.1.patch, HBASE-20197.2.patch, 
> HBASE-20197.3.patch, HBASE-20197.4.patch, HBASE-20197.5.patch
>
>
> In looking at this class, two things caught my eye.
>  # Default buffer size of 4K
>  # Re-sizing of buffer on demand
>  
> Java's {{BufferedOutputStream}} uses an internal buffer size of 8K on modern 
> JVMs.  This is due to various bench-marking that showed optimal performance 
> at this level.
>  The Re-sizing buffer looks a bit "unsafe":
>  
> {code:java}
> public void write(ByteBuffer b, int off, int len) throws IOException {
>   byte[] buf = null;
>   if (len > TEMP_BUF_LENGTH) {
> buf = new byte[len];
>   } else {
> if (this.tempBuf == null) {
>   this.tempBuf = new byte[TEMP_BUF_LENGTH];
> }
> buf = this.tempBuf;
>   }
> ...
> }
> {code}
> If this method gets one call with a 'len' of 4000, then 4001, then 4002, then 
> 4003, etc. then the 'tempBuf' will be re-created many times.  Also, it seems 
> unsafe to create a buffer as large as the 'len' input.  This could 
> theoretically lead to an internal buffer of 2GB for each instance of this 
> class.
> I propose:
>  # Increase the default buffer size to 8K
>  # Create the buffer once and chunk the output instead of loading data into a 
> single array and writing it to the output stream.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20210) Note in refguide that RSGroups API is private, not for public consumption; shell is only access

2018-03-15 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401108#comment-16401108
 ] 

Peter Somogyi commented on HBASE-20210:
---

+1

> Note in refguide that RSGroups API is private, not for public consumption; 
> shell is only access
> ---
>
> Key: HBASE-20210
> URL: https://issues.apache.org/jira/browse/HBASE-20210
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20210.master.001.patch
>
>
> Came up yesterday in an internal conversation. Mike Drob noticed that the 
> CPEP for RSGroups is marked audience Private, which sort of makes sense given 
> this is an evolving feature. The refguide though makes it sound as though you 
> can drive RSGroups from shell or API. Let me shut down the talk of the API being 
> public.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20204) Add locking to RefreshFileConnections in BucketCache

2018-03-15 Thread Zach York (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401099#comment-16401099
 ] 

Zach York commented on HBASE-20204:
---

I will add a patch once I get through some testing.

> Add locking to RefreshFileConnections in BucketCache
> 
>
> Key: HBASE-20204
> URL: https://issues.apache.org/jira/browse/HBASE-20204
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Reporter: Zach York
>Assignee: Zach York
>Priority: Major
>
> This is a follow-up to HBASE-20141 where [~anoop.hbase] suggested adding 
> locking for refreshing channels.
> I have also seen this become an issue when a RS has to abort and it locks up 
> while trying to flush out the remaining data to the cache (since cache on write 
> was turned on).
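
For the kind of locking being suggested, a hedged, self-contained sketch (class and method names are made up; the real BucketCache/FileIOEngine code differs): serialize re-opening a cache file's channel so concurrent readers do not race the refresh.

{code:java}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative only: one lock guards channel refresh; threads that find the
// channel already reopened return without reopening it again.
public class FileConnectionRefresher {
  private final String filePath;
  private final ReentrantLock refreshLock = new ReentrantLock();
  private volatile FileChannel channel;

  public FileConnectionRefresher(String filePath) throws IOException {
    this.filePath = filePath;
    this.channel = open();
  }

  private FileChannel open() throws IOException {
    return FileChannel.open(Paths.get(filePath),
        StandardOpenOption.READ, StandardOpenOption.WRITE);
  }

  void refreshFileConnection() throws IOException {
    refreshLock.lock();
    try {
      if (channel.isOpen()) {
        return;               // another thread already refreshed the channel
      }
      channel = open();
    } finally {
      refreshLock.unlock();
    }
  }
}
{code}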



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20202) [AMv2] Don't move region if its a split parent or offlined

2018-03-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20202:
--
Fix Version/s: 2.0.0
   Status: Patch Available  (was: Open)

Try against hadoopqa.

> [AMv2] Don't move region if its a split parent or offlined
> --
>
> Key: HBASE-20202
> URL: https://issues.apache.org/jira/browse/HBASE-20202
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20202.branch-2.001.patch
>
>
> Found this one running ITBLLs. We'd just finished splitting a region 
> 91655de06786f786b0ee9c51280e1ee6 and then a move for it comes in. The move 
> fails in an interesting way. The location has been removed from the 
> regionnode kept by the Master. HBASE-20178 adds macro checks on context. Need 
> to add a few checks to the likes of MoveRegionProcedure so we don't try to 
> move an offlined/split parent.
> {code}
> 2018-03-14 10:21:45,678 INFO  [PEWorker-2] procedure2.ProcedureExecutor: 
> Finished pid=3177, state=SUCCESS; SplitTableRegionProcedure 
> table=IntegrationTestBigLinkedList, parent=91655de06786f786b0ee9c51280e1ee6, 
> daughterA=b67bf6b79eaa83de788b0519f782ce8e, 
> daughterB=99cf6ddb38cad08e3aa7635b6cac2e7b in 10.0210sec   
> 2018-03-14 10:21:45,679 INFO  [PEWorker-15] 
> procedure.MasterProcedureScheduler: pid=3194, ppid=3193, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=af198ca64b196fb3d2f5b3e815b2dad0, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855, 
> IntegrationTestBigLinkedList,\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xA0,1521047891276.af198ca64b196fb3d2f5b3e815b2dad0.
> 2018-03-14 10:21:45,680 INFO  [PEWorker-5] 
> procedure.MasterProcedureScheduler: pid=3187, 
> state=RUNNABLE:MOVE_REGION_UNASSIGN; MoveRegionProcedure 
> hri=IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.,
>  source=ve0530.halxg.cloudera.com,16020,1521007509855, 
> destination=ve0528.halxg.cloudera.com,16020,1521047890874, 
> IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.
> 2018-03-14 10:21:45,680 INFO  [PEWorker-15] assignment.RegionStateStore: 
> pid=3194 updating hbase:meta 
> row=IntegrationTestBigLinkedList,\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xA0,1521047891276.af198ca64b196fb3d2f5b3e815b2dad0.,
>  regionState=CLOSING
> 2018-03-14 10:21:45,680 INFO  [PEWorker-5] procedure2.ProcedureExecutor: 
> Initialized subprocedures=[{pid=3195, ppid=3187, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855}]
> 2018-03-14 10:21:45,683 INFO  [PEWorker-15] 
> assignment.RegionTransitionProcedure: Dispatch pid=3194, ppid=3193, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=af198ca64b196fb3d2f5b3e815b2dad0, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855; rit=CLOSING, 
> location=ve0530.halxg.cloudera.com,16020,1521007509855
> 2018-03-14 10:21:45,752 INFO  [PEWorker-15] 
> procedure.MasterProcedureScheduler: pid=3195, ppid=3187, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855, 
> IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.
> 2018-03-14 10:21:45,753 ERROR [PEWorker-15] procedure2.ProcedureExecutor: 
> CODE-BUG: Uncaught runtime exception: pid=3195, ppid=3187, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855
> java.lang.NullPointerException
>   
>   
>   
>at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionStates.getOrCreateServer(RegionStates.java:934)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionStates.addRegionToServer(RegionStates.java:962)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.markRegionAsClosing(AssignmentManager.java:1548)
>  
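
The shape of the check being proposed, as a hedged, self-contained sketch (toy stand-in types, not MoveRegionProcedure itself): reject the move up front instead of letting the unassign dispatch NPE on a cleared location.

{code:java}
import java.io.IOException;

public class MoveRegionPrecheck {
  static final class RegionNode {      // toy stand-in for the Master's region state node
    final boolean splitParent;
    final boolean offline;
    final String location;             // null once the split parent's location is cleared
    RegionNode(boolean splitParent, boolean offline, String location) {
      this.splitParent = splitParent;
      this.offline = offline;
      this.location = location;
    }
  }

  static void checkMovable(RegionNode node, String encodedRegionName) throws IOException {
    if (node.splitParent || node.offline || node.location == null) {
      // The real code would likely throw a DoNotRetryIOException equivalent.
      throw new IOException("Region " + encodedRegionName
          + " is a split parent or offlined; refusing to move it");
    }
  }

  public static void main(String[] args) throws IOException {
    checkMovable(new RegionNode(false, false, "rs1,16020,1"), "abc123");  // passes
    checkMovable(new RegionNode(true, true, null), "91655de0...");        // throws
  }
}
{code}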

[jira] [Updated] (HBASE-20202) [AMv2] Don't move region if its a split parent or offlined

2018-03-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20202:
--
Priority: Critical  (was: Major)

> [AMv2] Don't move region if its a split parent or offlined
> --
>
> Key: HBASE-20202
> URL: https://issues.apache.org/jira/browse/HBASE-20202
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20202.branch-2.001.patch
>
>
> Found this one running ITBLLs. We'd just finished splitting a region 
> 91655de06786f786b0ee9c51280e1ee6 and then a move for it comes in. The move 
> fails in an interesting way. The location has been removed from the 
> regionnode kept by the Master. HBASE-20178 adds macro checks on context. Need 
> to add a few checks to the likes of MoveRegionProcedure so we don't try to 
> move an offlined/split parent.
> {code}
> 2018-03-14 10:21:45,678 INFO  [PEWorker-2] procedure2.ProcedureExecutor: 
> Finished pid=3177, state=SUCCESS; SplitTableRegionProcedure 
> table=IntegrationTestBigLinkedList, parent=91655de06786f786b0ee9c51280e1ee6, 
> daughterA=b67bf6b79eaa83de788b0519f782ce8e, 
> daughterB=99cf6ddb38cad08e3aa7635b6cac2e7b in 10.0210sec   
> 2018-03-14 10:21:45,679 INFO  [PEWorker-15] 
> procedure.MasterProcedureScheduler: pid=3194, ppid=3193, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=af198ca64b196fb3d2f5b3e815b2dad0, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855, 
> IntegrationTestBigLinkedList,\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xA0,1521047891276.af198ca64b196fb3d2f5b3e815b2dad0.
> 2018-03-14 10:21:45,680 INFO  [PEWorker-5] 
> procedure.MasterProcedureScheduler: pid=3187, 
> state=RUNNABLE:MOVE_REGION_UNASSIGN; MoveRegionProcedure 
> hri=IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.,
>  source=ve0530.halxg.cloudera.com,16020,1521007509855, 
> destination=ve0528.halxg.cloudera.com,16020,1521047890874, 
> IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.
> 2018-03-14 10:21:45,680 INFO  [PEWorker-15] assignment.RegionStateStore: 
> pid=3194 updating hbase:meta 
> row=IntegrationTestBigLinkedList,\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xA0,1521047891276.af198ca64b196fb3d2f5b3e815b2dad0.,
>  regionState=CLOSING
> 2018-03-14 10:21:45,680 INFO  [PEWorker-5] procedure2.ProcedureExecutor: 
> Initialized subprocedures=[{pid=3195, ppid=3187, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855}]
> 2018-03-14 10:21:45,683 INFO  [PEWorker-15] 
> assignment.RegionTransitionProcedure: Dispatch pid=3194, ppid=3193, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=af198ca64b196fb3d2f5b3e815b2dad0, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855; rit=CLOSING, 
> location=ve0530.halxg.cloudera.com,16020,1521007509855
> 2018-03-14 10:21:45,752 INFO  [PEWorker-15] 
> procedure.MasterProcedureScheduler: pid=3195, ppid=3187, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855, 
> IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.
> 2018-03-14 10:21:45,753 ERROR [PEWorker-15] procedure2.ProcedureExecutor: 
> CODE-BUG: Uncaught runtime exception: pid=3195, ppid=3187, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855
> java.lang.NullPointerException
>   
>   
>   
>at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionStates.getOrCreateServer(RegionStates.java:934)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionStates.addRegionToServer(RegionStates.java:962)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.markRegionAsClosing(AssignmentManager.java:1548)
>   
>

[jira] [Updated] (HBASE-20202) [AMv2] Don't move region if its a split parent or offlined

2018-03-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20202:
--
Attachment: HBASE-20202.branch-2.001.patch

> [AMv2] Don't move region if its a split parent or offlined
> --
>
> Key: HBASE-20202
> URL: https://issues.apache.org/jira/browse/HBASE-20202
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20202.branch-2.001.patch
>
>
> Found this one running ITBLLs. We'd just finished splitting a region 
> 91655de06786f786b0ee9c51280e1ee6 and then a move for it comes in. The move 
> fails in an interesting way. The location has been removed from the 
> regionnode kept by the Master. HBASE-20178 adds macro checks on context. Need 
> to add a few checks to the likes of MoveRegionProcedure so we don't try to 
> move an offlined/split parent.
> {code}
> 2018-03-14 10:21:45,678 INFO  [PEWorker-2] procedure2.ProcedureExecutor: 
> Finished pid=3177, state=SUCCESS; SplitTableRegionProcedure 
> table=IntegrationTestBigLinkedList, parent=91655de06786f786b0ee9c51280e1ee6, 
> daughterA=b67bf6b79eaa83de788b0519f782ce8e, 
> daughterB=99cf6ddb38cad08e3aa7635b6cac2e7b in 10.0210sec   
> 2018-03-14 10:21:45,679 INFO  [PEWorker-15] 
> procedure.MasterProcedureScheduler: pid=3194, ppid=3193, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=af198ca64b196fb3d2f5b3e815b2dad0, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855, 
> IntegrationTestBigLinkedList,\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xA0,1521047891276.af198ca64b196fb3d2f5b3e815b2dad0.
> 2018-03-14 10:21:45,680 INFO  [PEWorker-5] 
> procedure.MasterProcedureScheduler: pid=3187, 
> state=RUNNABLE:MOVE_REGION_UNASSIGN; MoveRegionProcedure 
> hri=IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.,
>  source=ve0530.halxg.cloudera.com,16020,1521007509855, 
> destination=ve0528.halxg.cloudera.com,16020,1521047890874, 
> IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.
> 2018-03-14 10:21:45,680 INFO  [PEWorker-15] assignment.RegionStateStore: 
> pid=3194 updating hbase:meta 
> row=IntegrationTestBigLinkedList,\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xA0,1521047891276.af198ca64b196fb3d2f5b3e815b2dad0.,
>  regionState=CLOSING
> 2018-03-14 10:21:45,680 INFO  [PEWorker-5] procedure2.ProcedureExecutor: 
> Initialized subprocedures=[{pid=3195, ppid=3187, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855}]
> 2018-03-14 10:21:45,683 INFO  [PEWorker-15] 
> assignment.RegionTransitionProcedure: Dispatch pid=3194, ppid=3193, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=af198ca64b196fb3d2f5b3e815b2dad0, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855; rit=CLOSING, 
> location=ve0530.halxg.cloudera.com,16020,1521007509855
> 2018-03-14 10:21:45,752 INFO  [PEWorker-15] 
> procedure.MasterProcedureScheduler: pid=3195, ppid=3187, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855, 
> IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.
> 2018-03-14 10:21:45,753 ERROR [PEWorker-15] procedure2.ProcedureExecutor: 
> CODE-BUG: Uncaught runtime exception: pid=3195, ppid=3187, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855
> java.lang.NullPointerException
>   
>   
>   
>at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionStates.getOrCreateServer(RegionStates.java:934)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionStates.addRegionToServer(RegionStates.java:962)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.markRegionAsClosing(AssignmentManager.java:1548)
>   
> 

[jira] [Commented] (HBASE-20111) Able to split region explicitly even on shouldSplit return false from split policy

2018-03-15 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401084#comment-16401084
 ] 

Josh Elser commented on HBASE-20111:


{noformat}
[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 32.159 
s <<< FAILURE! - in 
org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction
[ERROR] 
testFromClientSideWhileSplitting(org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction)
  Time elapsed: 2.867 s  <<< FAILURE!
java.lang.AssertionError: regionSplitter
at 
org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction.testFromClientSideWhileSplitting(TestEndToEndSplitTransaction.java:125)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: 1b1903abee2ceaab63e5fbbee6611aaf 
NOT splittable
at 
org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.checkSplittable(SplitTableRegionProcedure.java:176)
at 
org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.<init>(SplitTableRegionProcedure.java:108)
at 
org.apache.hadoop.hbase.master.assignment.AssignmentManager.createSplitProcedure(AssignmentManager.java:767)
at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1612)
at 
org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:131)
at org.apache.hadoop.hbase.master.HMaster.splitRegion(HMaster.java:1604)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.splitRegion(MasterRpcServices.java:776)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: 
org.apache.hadoop.hbase.DoNotRetryIOException: 1b1903abee2ceaab63e5fbbee6611aaf 
NOT splittable
at 
org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.checkSplittable(SplitTableRegionProcedure.java:176)
at 
org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.<init>(SplitTableRegionProcedure.java:108)
at 
org.apache.hadoop.hbase.master.assignment.AssignmentManager.createSplitProcedure(AssignmentManager.java:767)
at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1612)
at 
org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:131)
at org.apache.hadoop.hbase.master.HMaster.splitRegion(HMaster.java:1604)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.splitRegion(MasterRpcServices.java:776)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304){noformat}
Looks like these test failures are real.

> Able to split region explicitly even on shouldSplit return false from split 
> policy
> --
>
> Key: HBASE-20111
> URL: https://issues.apache.org/jira/browse/HBASE-20111
> Project: HBase
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20111.001.branch-2.0.patch, HBASE-20111.patch, 
> HBASE-20111_test.patch
>
>
> Currently we are able to split the region explicitly even when the split 
> policy returns false from shouldSplit.
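For context, automatic splits are vetoed by the split policy returning false from 
shouldSplit(); a minimal custom policy looks roughly like the sketch below (the 
class name is invented here, and HBase already ships DisabledRegionSplitPolicy for 
the same purpose). The issue is that an explicit split request bypasses this veto.
{code:java}
import org.apache.hadoop.hbase.regionserver.RegionSplitPolicy;

/**
 * Sketch only: a policy that never requests an automatic split.
 * (HBase already provides DisabledRegionSplitPolicy; this just shows the hook.)
 */
public class NeverSplitPolicy extends RegionSplitPolicy {
  @Override
  protected boolean shouldSplit() {
    return false; // the region server will not trigger splits for regions using this policy
  }
}
{code}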



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20111) Able to split region explicitly even on shouldSplit return false from split policy

2018-03-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401081#comment-16401081
 ] 

Hadoop QA commented on HBASE-20111:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.0 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
31s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 4s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} branch-2.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
3s{color} | {color:red} hbase-server: The patch generated 10 new + 174 
unchanged - 0 fixed = 184 total (was 174) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 8 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
49s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
14m  7s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}185m 38s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}225m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestMobRestoreSnapshotFromClient |
|   | hadoop.hbase.client.TestFromClientSide |
|   | hadoop.hbase.client.TestAsyncTableBatch |
|   | hadoop.hbase.master.TestMaster |
|   | hadoop.hbase.regionserver.TestSplitTransactionOnCluster |
|   | hadoop.hbase.regionserver.TestCompactionFileNotFound |
|   | hadoop.hbase.master.assignment.TestSplitTableRegionProcedure |
|   | hadoop.hbase.client.TestSplitOrMergeStatus |
|   | hadoop.hbase.client.TestFromClientSideWithCoprocessor |
|   | hadoop.hbase.master.TestAssignmentListener |
|   | hadoop.hbase.tool.TestSecureLoadIncrementalHFilesSplitRecovery |
|   | hadoop.hbase.tool.TestLoadIncrementalHFilesSplitRecovery |
|   | hadoop.hbase.master.normalizer.TestSimpleRegionNormalizerOnCluster |
|   | hadoop.hbase.regionserver.TestEndToEndSplitTransaction |
|   | hadoop.hbase.master.TestCatalogJanitorInMemoryStates |
|   | hadoop.hbase.TestSequenceIdMonotonicallyIncreasing |
|   | hadoop.hbase.client.TestMultiRespectsLimits |
|   | hadoop.hbase.client.TestRestoreSnapshotFromClient |
|   | hadoop.hbase.client.TestAsyncRegionAdminApi2 |
|   | 

[jira] [Commented] (HBASE-20146) Regions are stuck while opening when WAL is disabled

2018-03-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401075#comment-16401075
 ] 

Hudson commented on HBASE-20146:


Results for branch branch-1.4
[build #256 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/256/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/256//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/256//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/256//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Regions are stuck while opening when WAL is disabled
> 
>
> Key: HBASE-20146
> URL: https://issues.apache.org/jira/browse/HBASE-20146
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.3.1
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: HBASE-20146-addendum.patch, HBASE-20146.patch, 
> HBASE-20146.v1.patch
>
>
> On a running cluster we had set {{hbase.regionserver.hlog.enabled}} to false 
> to disable the WAL for the complete cluster. After restarting the HBase 
> service, regions are not getting opened, leading to an HMaster abort because 
> the Namespace table regions are not getting assigned. 
> jstack for region open:
> {noformat}
> "RS_OPEN_PRIORITY_REGION-BLR106595:16045-1" #159 prio=5 os_prio=0 
> tid=0x7fdfa4341000 nid=0x419d waiting on condition [0x7fdfa0467000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x87554448> (a 
> java.util.concurrent.CountDownLatch$Sync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> at org.apache.hadoop.hbase.wal.WALKey.getWriteEntry(WALKey.java:98)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeMarker(WALUtil.java:131)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeRegionEventMarker(WALUtil.java:88)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.writeRegionOpenMarker(HRegion.java:1026)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6849)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6803)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6774)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6730)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6681)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This used to work with HBase 1.0.2 version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401074#comment-16401074
 ] 

Hudson commented on HBASE-18864:


Results for branch branch-1.4
[build #256 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/256/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/256//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/256//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/256//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Replication
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
> Fix For: 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20202) [AMv2] Don't move region if its a split parent or offlined

2018-03-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401073#comment-16401073
 ] 

stack commented on HBASE-20202:
---

HBASE-20178 added fast-fail on construction of Procedures: they fail fast at 
construction time if the table is disabled or if the server or cluster is 
going down.

This issue follows on. It adds region-level checks to region procedures in the 
Procedure constructor. There is no point doing a move if a region is offlined, 
for example (see the log sample above). It also adds some rechecking of context 
in the prepare steps for procedures. Move didn't have a prepare step; when 
prepare runs, the region is locked and owned by this Procedure, but at the Move 
prepare step we were not checking that the Region was online.
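
For illustration only, not the actual patch: a minimal sketch of the kind of 
region-level guard that could run in the MoveRegionProcedure constructor or its 
prepare step. The helper class and method names are invented, and the exception 
type is an assumption.
{code:java}
import org.apache.hadoop.hbase.client.DoNotRetryRegionException;
import org.apache.hadoop.hbase.client.RegionInfo;

public final class MoveRegionGuards {
  private MoveRegionGuards() {
  }

  /**
   * Hypothetical guard: fail the move up front when the region is a split
   * parent or has been offlined, instead of blowing up later with an NPE.
   */
  public static void checkMovable(RegionInfo hri) throws DoNotRetryRegionException {
    if (hri.isSplitParent() || hri.isOffline()) {
      throw new DoNotRetryRegionException("Region " + hri.getEncodedName()
          + " is a split parent or offline; refusing to move it");
    }
  }
}
{code}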

> [AMv2] Don't move region if its a split parent or offlined
> --
>
> Key: HBASE-20202
> URL: https://issues.apache.org/jira/browse/HBASE-20202
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Major
>
> Found this one running ITBLLs. We'd just finished splitting a region 
> 91655de06786f786b0ee9c51280e1ee6 and then a move for it comes in. The move 
> fails in an interesting way. The location has been removed from the 
> regionnode kept by the Master. HBASE-20178 adds macro checks on context. Need 
> to add a few checks to the likes of MoveRegionProcedure so we don't try to 
> move an offlined/split parent.
> {code}
> 2018-03-14 10:21:45,678 INFO  [PEWorker-2] procedure2.ProcedureExecutor: 
> Finished pid=3177, state=SUCCESS; SplitTableRegionProcedure 
> table=IntegrationTestBigLinkedList, parent=91655de06786f786b0ee9c51280e1ee6, 
> daughterA=b67bf6b79eaa83de788b0519f782ce8e, 
> daughterB=99cf6ddb38cad08e3aa7635b6cac2e7b in 10.0210sec   
> 2018-03-14 10:21:45,679 INFO  [PEWorker-15] 
> procedure.MasterProcedureScheduler: pid=3194, ppid=3193, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=af198ca64b196fb3d2f5b3e815b2dad0, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855, 
> IntegrationTestBigLinkedList,\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xA0,1521047891276.af198ca64b196fb3d2f5b3e815b2dad0.
> 2018-03-14 10:21:45,680 INFO  [PEWorker-5] 
> procedure.MasterProcedureScheduler: pid=3187, 
> state=RUNNABLE:MOVE_REGION_UNASSIGN; MoveRegionProcedure 
> hri=IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.,
>  source=ve0530.halxg.cloudera.com,16020,1521007509855, 
> destination=ve0528.halxg.cloudera.com,16020,1521047890874, 
> IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.
> 2018-03-14 10:21:45,680 INFO  [PEWorker-15] assignment.RegionStateStore: 
> pid=3194 updating hbase:meta 
> row=IntegrationTestBigLinkedList,\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xA0,1521047891276.af198ca64b196fb3d2f5b3e815b2dad0.,
>  regionState=CLOSING
> 2018-03-14 10:21:45,680 INFO  [PEWorker-5] procedure2.ProcedureExecutor: 
> Initialized subprocedures=[{pid=3195, ppid=3187, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855}]
> 2018-03-14 10:21:45,683 INFO  [PEWorker-15] 
> assignment.RegionTransitionProcedure: Dispatch pid=3194, ppid=3193, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=af198ca64b196fb3d2f5b3e815b2dad0, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855; rit=CLOSING, 
> location=ve0530.halxg.cloudera.com,16020,1521007509855
> 2018-03-14 10:21:45,752 INFO  [PEWorker-15] 
> procedure.MasterProcedureScheduler: pid=3195, ppid=3187, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855, 
> IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.
> 2018-03-14 10:21:45,753 ERROR [PEWorker-15] procedure2.ProcedureExecutor: 
> CODE-BUG: Uncaught runtime exception: pid=3195, ppid=3187, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
> server=ve0530.halxg.cloudera.com,16020,1521007509855
> java.lang.NullPointerException
>   
>   
>   
>at 
> 

[jira] [Commented] (HBASE-20201) HBase must provide commons-cli-1.4 for mapreduce jobs with H3

2018-03-15 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401061#comment-16401061
 ] 

Josh Elser commented on HBASE-20201:


bq. Yuck on new hbase-thirdparty but yeah, if this only recourse. It looks 
like commons-cli has no dependencies so it should be an easy integration

Yeah. It's a shame that commons-cli didn't break things more nicely, but oh 
well. 

bq. If we're going to roll a new hbase-thirdparty, we should try and figure if 
other stuff that needs integrating.

Anything specific in mind? Or just a poll to the dev list?

bq. I can roll it if you want but would be great if the knowledge got shared 
around...

Oh, no worries. I was intending to volunteer to do that. Don't need to push 
that on you. I should make myself competent enough to do it.

 

> HBase must provide commons-cli-1.4 for mapreduce jobs with H3
> -
>
> Key: HBASE-20201
> URL: https://issues.apache.org/jira/browse/HBASE-20201
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Romil Choksi
>Assignee: Josh Elser
>Priority: Blocker
> Fix For: 2.0.0
>
>
> Been trying to get some pre-existing mapreduce tests working against HBase2.
> There's an inherent problem right now that hadoop-common depends on 
> commons-cli-1.2 and HBase depends on commons-cli-1.4. This means that if you 
> use {{$(hbase mapredcp)}} to submit a mapreduce job via {{hadoop jar}}, 
> you'll get an error like:
> {noformat}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/commons/cli/DefaultParser
>     at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.isHelpCommand(AbstractHBaseTool.java:165)
>     at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:133)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>     at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.doStaticMain(AbstractHBaseTool.java:270)
>     at hbase_it.App.main(App.java:85)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.commons.cli.DefaultParser
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>     ... 11 more{noformat}
> My guess is that in previous versions, we didn't have this conflict with 
> Hadoop (we were on the same version). Now, we're not.
> I see two routes:
>  # We just alter the mapredcp to include our "correct" commons-cli-1.4 on the 
> classpath and remind users to make use of the {{HADOOP_USER_CLASSPATH_FIRST}} 
> environment variable
>  # We put commons-cli into our hbase-thirdparty and stop using it directly.
> The former is definitely quicker, but I'm guessing the latter would insulate 
> us more nicely.
> Thoughts, [~stack], [~busbey], [~mdrob] (and others who have done H3 work?)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20208) Review of SequenceIdAccounting.java

2018-03-15 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401052#comment-16401052
 ] 

BELUGA BEHR commented on HBASE-20208:
-

Failures do not look related.

> Review of SequenceIdAccounting.java
> ---
>
> Key: HBASE-20208
> URL: https://issues.apache.org/jira/browse/HBASE-20208
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Affects Versions: 2.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HBASE-20208.1.patch
>
>
> # Fix checkstyle warnings
> # Use re-usable libraries where possible
> # Improve Map Access
> What got my attention on this class was:
> {code}
> for (Map.Entry e : sequenceids.entrySet()) {
>   long oldestFlushing = Long.MAX_VALUE;
>   long oldestUnflushed = Long.MAX_VALUE;
>   if (flushing != null && flushing.containsKey(e.getKey())) {
> oldestFlushing = flushing.get(e.getKey());
>   }
>   if (unflushed != null && unflushed.containsKey(e.getKey())) {
> oldestUnflushed = unflushed.get(e.getKey());
>   }
>   long min = Math.min(oldestFlushing, oldestUnflushed);
>   if (min <= e.getValue()) {
> return false;
>   }
> {code}
> Here, the code calls _containsKey_ and then _get_ on each of the two maps.  
> It is also calling {{e.getKey()}} repeatedly.
> I propose changing this so that {{e.getKey()}} is only called once and, 
> instead of looking up an entry with _containsKey_ followed by a _get_, the 
> code simply uses _get_ once and checks the result for 'null' to test for 
> existence.  That saves two trips through the Map on each loop iteration.
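A minimal sketch of the proposed access pattern (names and types are assumed for 
illustration and not lifted from the patch):
{code:java}
import java.util.Map;

public final class LowestSequenceIdCheck {
  private LowestSequenceIdCheck() {
  }

  /**
   * Sketch: call getKey() once per entry and replace containsKey()+get()
   * with a single get() plus a null check, saving two map lookups per loop.
   */
  static <K> boolean areAllLower(Map<K, Long> sequenceids, Map<K, Long> flushing,
      Map<K, Long> unflushed) {
    for (Map.Entry<K, Long> e : sequenceids.entrySet()) {
      K key = e.getKey();
      Long flushingId = flushing == null ? null : flushing.get(key);
      Long unflushedId = unflushed == null ? null : unflushed.get(key);
      long oldestFlushing = flushingId == null ? Long.MAX_VALUE : flushingId;
      long oldestUnflushed = unflushedId == null ? Long.MAX_VALUE : unflushedId;
      if (Math.min(oldestFlushing, oldestUnflushed) <= e.getValue()) {
        return false;
      }
    }
    return true;
  }
}
{code}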



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20211) ReadOnlyBufferException In UnsafeAccess

2018-03-15 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HBASE-20211:
---

 Summary: ReadOnlyBufferException In UnsafeAccess
 Key: HBASE-20211
 URL: https://issues.apache.org/jira/browse/HBASE-20211
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 2.0.0
Reporter: BELUGA BEHR


If you trace the BBUtils API, what you see is this code:
{code:java}
  public static void copyFromBufferToArray(byte[] out, ByteBuffer in, int 
sourceOffset,
  int destinationOffset, int length) {
if (in.hasArray()) {
  System.arraycopy(in.array(), sourceOffset + in.arrayOffset(), out, 
destinationOffset, length);
} else if (UNSAFE_AVAIL) {
  UnsafeAccess.copy(in, sourceOffset, out, destinationOffset, length);
} else {
  ByteBuffer inDup = in.duplicate();
  inDup.position(sourceOffset);
  inDup.get(out, destinationOffset, length);
}
  }
{code}

A ByteBuffer is being used here which is not read-only, so it actually hits the 
first condition and executes this code:
{quote}System.arraycopy(in.array(), sourceOffset + in.arrayOffset(), out, 
destinationOffset, length);
{quote}
This is almost exactly what the {{ByteBuffer}} relative bulk get method does 
anyway, so there is no saving here, just overhead and complexity.

In regards to the second condition... there is a bug there that I just noticed.
{code:java|title=org.apache.hadoop.hbase.util.UnsafeAccess}
  public static void copy(ByteBuffer src, int srcOffset, byte[] dest, int 
destOffset,
  int length) {
long srcAddress = srcOffset;
Object srcBase = null;
if (src.isDirect()) {
  srcAddress = srcAddress + ((DirectBuffer) src).address();
} else {
  srcAddress = srcAddress + BYTE_ARRAY_BASE_OFFSET + src.arrayOffset();
  srcBase = src.array();
}
long destAddress = destOffset + BYTE_ARRAY_BASE_OFFSET;
unsafeCopy(srcBase, srcAddress, dest, destAddress, length);
  }
{code}
The issue here is the 
[arrayOffset()|https://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html#arrayOffset--]
 call. The JavaDocs say:
{quote}Invoke the hasArray method before invoking this method in order to 
ensure that this buffer has an accessible backing array.
{quote}
However, as we saw in the previous method, if _hasArray_ returns true we do 
_System.arraycopy_, so the only reason we reach this _copy_ code is that there 
is no access to the backing array, and yet the code here depends on having 
exactly that access. This can cause problems with read-only ByteBuffers, 
problems that do not affect the _relative bulk get method_.
{code:java}
public class Test {
  public static void main(String[] args) throws IOException {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ByteBufferWriterOutputStream bbwos = new ByteBufferWriterOutputStream(baos);
ByteBuffer bbSmall = ByteBuffer.wrap(new byte[512]).asReadOnlyBuffer();
bbwos.write(bbSmall, 0, 512);
bbwos.close();
  }
}

Exception in thread "main" java.nio.ReadOnlyBufferException
at java.nio.ByteBuffer.arrayOffset(ByteBuffer.java:1024)
at org.apache.hadoop.hbase.util.UnsafeAccess.copy(UnsafeAccess.java:398)
at 
org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray(ByteBufferUtils.java:54)
at 
org.apache.hadoop.hbase.io.ByteBufferWriterOutputStream.write(ByteBufferWriterOutputStream.java:59)
at org.apache.hadoop.hbase.io.Test.main(Test.java:14)
{code}
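
One possible defensive fix, sketched here only and not a committed change: take 
the arraycopy path only when there is an accessible backing array, and otherwise 
fall back to the duplicate-and-get path, which handles direct and read-only heap 
buffers alike and sidesteps UnsafeAccess entirely. The class name is invented.
{code:java}
import java.nio.ByteBuffer;

public final class SafeBufferCopy {
  private SafeBufferCopy() {
  }

  /** Sketch: copy length bytes from in (at sourceOffset) into out (at destinationOffset). */
  public static void copyFromBufferToArray(byte[] out, ByteBuffer in, int sourceOffset,
      int destinationOffset, int length) {
    if (in.hasArray()) {
      // Accessible backing array: plain arraycopy, as before.
      System.arraycopy(in.array(), sourceOffset + in.arrayOffset(), out, destinationOffset, length);
    } else {
      // Direct or read-only buffer: duplicate so the caller's position is left untouched.
      ByteBuffer dup = in.duplicate();
      dup.position(sourceOffset);
      dup.get(out, destinationOffset, length);
    }
  }
}
{code}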



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20197) Review of ByteBufferWriterOutputStream.java

2018-03-15 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401040#comment-16401040
 ] 

BELUGA BEHR edited comment on HBASE-20197 at 3/15/18 8:09 PM:
--

[~anoop.hbase] Thanks for the feedback.  I made the change in line with 
[KISS|https://en.wikipedia.org/wiki/KISS_principle].  The BBUtil code is doing 
an arrayCopy, the ByteBuffer is doing an arrayCopy... it just didn't feel 
necessary to include an extra module just to save the 'duplicate' call.  As a 
bonus, the side benefit of this change is to future-proof the code, as 
introduced by my failure test.  Thank you for your consideration.

The failed tests look like timeouts, unrelated to this change.


was (Author: belugabehr):
[~anoop.hbase] Thanks for the feedback.  I made the change in line with 
[KISS|https://en.wikipedia.org/wiki/KISS_principle].  The BBUtil code is doing 
an arrayCopy, the ByteBuffer is doing an arrayCopy... it just didn't feel 
necessary to include an extra module just for the heck of it and it's a 
one-line change.  As a bonus, the side benefit of this change is to 
future-proof the code, as introduced by my failure test.  Thank you for your 
consideration.

The failed tests look like timeouts, unrelated to this change.

> Review of ByteBufferWriterOutputStream.java
> ---
>
> Key: HBASE-20197
> URL: https://issues.apache.org/jira/browse/HBASE-20197
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Affects Versions: 2.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HBASE-20197.1.patch, HBASE-20197.2.patch, 
> HBASE-20197.3.patch, HBASE-20197.4.patch
>
>
> In looking at this class, two things caught my eye.
>  # Default buffer size of 4K
>  # Re-sizing of buffer on demand
>  
> Java's {{BufferedOutputStream}} uses an internal buffer size of 8K on modern 
> JVMs.  This is due to various bench-marking that showed optimal performance 
> at this level.
>  The Re-sizing buffer looks a bit "unsafe":
>  
> {code:java}
> public void write(ByteBuffer b, int off, int len) throws IOException {
>   byte[] buf = null;
>   if (len > TEMP_BUF_LENGTH) {
> buf = new byte[len];
>   } else {
> if (this.tempBuf == null) {
>   this.tempBuf = new byte[TEMP_BUF_LENGTH];
> }
> buf = this.tempBuf;
>   }
> ...
> }
> {code}
> If this method gets one call with a 'len' of 4000, then 4001, then 4002, then 
> 4003, etc. then the 'tempBuf' will be re-created many times.  Also, it seems 
> unsafe to create a buffer as large as the 'len' input.  This could 
> theoretically lead to an internal buffer of 2GB for each instance of this 
> class.
> I propose:
>  # Increase the default buffer size to 8K
>  # Create the buffer once and chunk the output instead of loading data into a 
> single array and writing it to the output stream.
>  
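A minimal sketch of proposal #2 (a fixed-size scratch buffer and chunked copying); 
the class and field names are invented for illustration and are not taken from the 
patch:
{code:java}
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;

public class ChunkedByteBufferWriter {
  // Fixed 8K scratch buffer; it never grows with 'len', so a huge write
  // cannot balloon the per-instance memory footprint.
  private static final int BUF_SIZE = 8 * 1024;
  private final OutputStream out;
  private final byte[] buf = new byte[BUF_SIZE];

  public ChunkedByteBufferWriter(OutputStream out) {
    this.out = out;
  }

  /** Copies len bytes of b, starting at off, to the stream in BUF_SIZE chunks. */
  public void write(ByteBuffer b, int off, int len) throws IOException {
    ByteBuffer src = b.duplicate(); // leave the caller's position/limit untouched
    src.position(off);
    src.limit(off + len);
    while (src.hasRemaining()) {
      int n = Math.min(buf.length, src.remaining());
      src.get(buf, 0, n);
      out.write(buf, 0, n);
    }
  }
}
{code}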



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20197) Review of ByteBufferWriterOutputStream.java

2018-03-15 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401040#comment-16401040
 ] 

BELUGA BEHR commented on HBASE-20197:
-

[~anoop.hbase] Thanks for the feedback.  I made the change in line with 
[KISS|https://en.wikipedia.org/wiki/KISS_principle].  The BBUtil code is doing 
an arrayCopy, the ByteBuffer is doing an arrayCopy... it just didn't feel 
necessary to include an extra module just for the heck of it and it's a 
one-line change.  As a bonus, the side benefit of this change is to 
future-proof the code, as introduced by my failure test.  Thank you for your 
consideration.

The failed tests look like timeouts, unrelated to this change.

> Review of ByteBufferWriterOutputStream.java
> ---
>
> Key: HBASE-20197
> URL: https://issues.apache.org/jira/browse/HBASE-20197
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Affects Versions: 2.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HBASE-20197.1.patch, HBASE-20197.2.patch, 
> HBASE-20197.3.patch, HBASE-20197.4.patch
>
>
> In looking at this class, two things caught my eye.
>  # Default buffer size of 4K
>  # Re-sizing of buffer on demand
>  
> Java's {{BufferedOutputStream}} uses an internal buffer size of 8K on modern 
> JVMs.  This is due to various bench-marking that showed optimal performance 
> at this level.
>  The Re-sizing buffer looks a bit "unsafe":
>  
> {code:java}
> public void write(ByteBuffer b, int off, int len) throws IOException {
>   byte[] buf = null;
>   if (len > TEMP_BUF_LENGTH) {
> buf = new byte[len];
>   } else {
> if (this.tempBuf == null) {
>   this.tempBuf = new byte[TEMP_BUF_LENGTH];
> }
> buf = this.tempBuf;
>   }
> ...
> }
> {code}
> If this method gets one call with a 'len' of 4000, then 4001, then 4002, then 
> 4003, etc. then the 'tempBuf' will be re-created many times.  Also, it seems 
> unsafe to create a buffer as large as the 'len' input.  This could 
> theoretically lead to an internal buffer of 2GB for each instance of this 
> class.
> I propose:
>  # Increase the default buffer size to 8K
>  # Create the buffer once and chunk the output instead of loading data into a 
> single array and writing it to the output stream.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20208) Review of SequenceIdAccounting.java

2018-03-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400999#comment-16400999
 ] 

Hadoop QA commented on HBASE-20208:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
14s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} hbase-server: The patch generated 0 new + 0 
unchanged - 13 fixed = 0 total (was 13) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 9s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 36s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}175m 26s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}218m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.procedure.TestCreateTableProcedure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20208 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914724/HBASE-20208.1.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 423ce04b5283 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 31da4d0bce |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC3 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11982/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results 

[jira] [Commented] (HBASE-20207) Update doc on the steps to revert RegionServer groups feature

2018-03-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400972#comment-16400972
 ] 

Hadoop QA commented on HBASE-20207:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
49s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}240m 
11s{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}256m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20207 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914711/HBASE-20207.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  |
| uname | Linux 20120e6e10c0 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 31da4d0bce |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_151 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11980/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11980/testReport/ |
| Max. process+thread count | 4532 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11980/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Update doc on the steps to revert RegionServer groups feature
> -
>
> Key: HBASE-20207
> URL: https://issues.apache.org/jira/browse/HBASE-20207
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, rsgroup
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Minor
> Attachments: HBASE-20207.patch
>
>
> Reverting the {{rsgroup}} feature from an {{hbase}} cluster involves 
> additional steps on top of removing the changes to {{hbase-site.xml}}. 
> Documenting these steps will help cluster admins be aware of them when the 
> {{rsgroup}} feature is being enabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19562) Purge mirror writing of region and table info into fs at .tableinfo and .regioninfo

2018-03-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400941#comment-16400941
 ] 

stack commented on HBASE-19562:
---

Pushing out to 2.1.0.

I looked at this a few days ago. I was trying to update it but noticed the 
backup feature in master branch is messing w/ these files and depends on them 
so I got stuck there.

> Purge mirror writing of region and table info into fs at .tableinfo and 
> .regioninfo
> ---
>
> Key: HBASE-19562
> URL: https://issues.apache.org/jira/browse/HBASE-19562
> Project: HBase
>  Issue Type: Bug
>  Components: fs
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.1.0
>
> Attachments: 
> 0002-HBASE-19562-Purge-mirror-writing-of-region-and-table.patch
>
>
> We don't use these files in hbase2 yet we keep writing them when we create a 
> table or region.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18788) NPE when running TestSerialReplication

2018-03-15 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400940#comment-16400940
 ] 

Mike Drob commented on HBASE-18788:
---

Moving to subtask of HBASE-20046 since serial replication was dropped from 2.0 
- not sure if this is still relevant in master 

> NPE when running TestSerialReplication
> --
>
> Key: HBASE-18788
> URL: https://issues.apache.org/jira/browse/HBASE-18788
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Fabrice MONNIER
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-18788.patch
>
>
> It may not cause the tests to fail, but I still think we need to fix it.
> {noformat}
> 2017-09-11 21:01:37,009 ERROR [ubuntu,44001,1505134829330_Chore_1] 
> hbase.ScheduledChore(190): Caught error
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.master.cleaner.ReplicationMetaCleaner.chore(ReplicationMetaCleaner.java:87)
>   at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:187)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:110)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19562) Purge mirror writing of region and table info into fs at .tableinfo and .regioninfo

2018-03-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19562:
--
Fix Version/s: (was: 2.0.0)
   2.1.0

> Purge mirror writing of region and table info into fs at .tableinfo and 
> .regioninfo
> ---
>
> Key: HBASE-19562
> URL: https://issues.apache.org/jira/browse/HBASE-19562
> Project: HBase
>  Issue Type: Bug
>  Components: fs
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.1.0
>
> Attachments: 
> 0002-HBASE-19562-Purge-mirror-writing-of-region-and-table.patch
>
>
> We don't use these files in hbase2 yet we keep writing them when we create a 
> table or region.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-18216) [AMv2] Workaround for HBASE-18152, corrupt procedure WAL

2018-03-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-18216.
---
Resolution: Fixed

Re-resolving. My backport trashed branch-1. Sorry about that [~apurtell] (Just 
saw the damage I did..). So, Andrew reopened this to revert from branch-1. The 
workarounds are in branch-2.

550b6c585e HBASE-18216 [AMv2] Workaround for HBASE-18152, corrupt procedure 
WAL; ADDENDUM
0b43353bf7 HBASE-18216 [AMv2] Workaround for HBASE-18152, corrupt procedure WAL

Re-resolving.



> [AMv2] Workaround for HBASE-18152, corrupt procedure WAL
> 
>
> Key: HBASE-18216
> URL: https://issues.apache.org/jira/browse/HBASE-18216
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-18216.branch-1.001.patch
>
>
> Let me commit workaround for the issue up in HBASE-18152, corruption in the 
> master wal procedure files. Testing on cluster shows it helps.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-18788) NPE when running TestSerialReplication

2018-03-15 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18788:
--
Issue Type: Sub-task  (was: Bug)
Parent: HBASE-20046

> NPE when running TestSerialReplication
> --
>
> Key: HBASE-18788
> URL: https://issues.apache.org/jira/browse/HBASE-18788
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Fabrice MONNIER
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-18788.patch
>
>
> It may not cause the tests to fail, but I still think we need to fix it.
> {noformat}
> 2017-09-11 21:01:37,009 ERROR [ubuntu,44001,1505134829330_Chore_1] 
> hbase.ScheduledChore(190): Caught error
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.master.cleaner.ReplicationMetaCleaner.chore(ReplicationMetaCleaner.java:87)
>   at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:187)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:110)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-18788) NPE when running TestSerialReplication

2018-03-15 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18788:
--
Fix Version/s: (was: 2.0.0)
   2.1.0
   3.0.0

> NPE when running TestSerialReplication
> --
>
> Key: HBASE-18788
> URL: https://issues.apache.org/jira/browse/HBASE-18788
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Assignee: Fabrice MONNIER
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-18788.patch
>
>
> It may not cause the tests to fail, but I still think we need to fix it.
> {noformat}
> 2017-09-11 21:01:37,009 ERROR [ubuntu,44001,1505134829330_Chore_1] 
> hbase.ScheduledChore(190): Caught error
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.master.cleaner.ReplicationMetaCleaner.chore(ReplicationMetaCleaner.java:87)
>   at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:187)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:110)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19997) [rolling upgrade] 1.x => 2.x

2018-03-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400921#comment-16400921
 ] 

stack commented on HBASE-19997:
---

Might need this to do the rolling upgrade.

> [rolling upgrade] 1.x => 2.x
> 
>
> Key: HBASE-19997
> URL: https://issues.apache.org/jira/browse/HBASE-19997
> Project: HBase
>  Issue Type: Umbrella
>Reporter: stack
>Priority: Blocker
> Fix For: 2.0.0
>
>
> An umbrella issue of issues needed so folks can do a rolling upgrade from 
> hbase-1.x to hbase-2.x.
> (Recent) Notables:
>  * hbase-1.x can't read hbase-2.x WALs -- hbase-1.x doesn't know the 
> AsyncProtobufLogWriter class used writing the WAL -- see 
> https://issues.apache.org/jira/browse/HBASE-19166?focusedCommentId=16362897=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16362897
>  for exception.
>  ** Might be ok... means WAL split fails on an hbase1 RS... must wait till an 
> hbase-2.x RS picks up the WAL for it to be split.
>  * hbase-1 can't open regions from tables created by hbase-2; it can't find 
> the Table descriptor. See 
> https://issues.apache.org/jira/browse/HBASE-19116?focusedCommentId=16363276=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16363276
>  ** This might be ok if the tables we are doing rolling upgrade over were 
> written with hbase-1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-18494) [AMv2] Modify LoadBalancer to consider highest versioned Region Servers as favorites for system table regions

2018-03-15 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18494:
--
Labels: rolling_upgrade  (was: )

> [AMv2] Modify LoadBalancer to consider highest versioned Region Servers as 
> favorites for system table regions
> -
>
> Key: HBASE-18494
> URL: https://issues.apache.org/jira/browse/HBASE-18494
> Project: HBase
>  Issue Type: Bug
>  Components: amv2
>Affects Versions: 2.0.0
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
>Priority: Major
>  Labels: rolling_upgrade
> Fix For: 2.0.0
>
>
> Modify LoadBalancer to consider highest versioned Region Servers as favorites 
> for system table regions. Will help with rolling upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-13147) Load actual META table descriptor, don't use statically defined one.

2018-03-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400918#comment-16400918
 ] 

stack commented on HBASE-13147:
---

This is the kind of thing that might be needed when doing a rolling upgrade. If 
we are going to do it, it should be for 2.0.0... else 3.0.0. It is not being 
worked on though... not in years. Moving out.

> Load actual META table descriptor, don't use statically defined one.
> 
>
> Key: HBASE-13147
> URL: https://issues.apache.org/jira/browse/HBASE-13147
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
>Priority: Major
> Attachments: HBASE-13147-branch-1.patch, 
> HBASE-13147-branch-1.v2.patch, HBASE-13147.patch, HBASE-13147.v2.patch, 
> HBASE-13147.v3.patch, HBASE-13147.v4.patch, HBASE-13147.v4.patch, 
> HBASE-13147.v5.patch, HBASE-13147.v6.patch, HBASE-13147.v7.patch
>
>
> In HBASE-13087 we stumbled on the fact that region servers don't see the actual 
> meta descriptor; they use their own, statically compiled one.
> Need to fix that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-13147) Load actual META table descriptor, don't use statically defined one.

2018-03-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-13147:
--
Fix Version/s: (was: 1.5.0)
   (was: 2.0.0)

> Load actual META table descriptor, don't use statically defined one.
> 
>
> Key: HBASE-13147
> URL: https://issues.apache.org/jira/browse/HBASE-13147
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
>Priority: Major
> Attachments: HBASE-13147-branch-1.patch, 
> HBASE-13147-branch-1.v2.patch, HBASE-13147.patch, HBASE-13147.v2.patch, 
> HBASE-13147.v3.patch, HBASE-13147.v4.patch, HBASE-13147.v4.patch, 
> HBASE-13147.v5.patch, HBASE-13147.v6.patch, HBASE-13147.v7.patch
>
>
> In HBASE-13087 we stumbled on the fact that region servers don't see the actual 
> meta descriptor; they use their own, statically compiled one.
> Need to fix that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-18415) The local timeout may cause Admin to submit duplicate request

2018-03-15 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18415:
--
Fix Version/s: (was: 2.0.0)
   2.1.0
   3.0.0

> The local timeout may cause Admin to submit duplicate request
> -
>
> Key: HBASE-18415
> URL: https://issues.apache.org/jira/browse/HBASE-18415
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 1.4.3
>
> Attachments: HBASE-18415.branch-1.ut.patch, 
> HBASE-18415.branch-1.v0.patch, HBASE-18415.branch-1.v1.patch, 
> HBASE-18415.branch-1.v2.patch, HBASE-18415.branch-1.v3.patch, 
> HBASE-18415.branch-1.v3.patch, HBASE-18415.branch-1.v3.patch, 
> HBASE-18415.branch-1.v4.patch, HBASE-18415.branch-1.v4.patch, 
> HBASE-18415.branch-1.v4.patch
>
>
> After a timeout occurs on the first request, the client will retry the request with 
> a distinct group/nonce. The second request may bring the TableXXXException back 
> if the first request has already changed the table state.
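> A minimal, hypothetical illustration of the failure mode (table and column family 
> names are made up, a plain IOException stands in for the local RPC timeout, and the 
> hand-rolled retry stands in for the Admin's internal retry logic):
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Admin;
> import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
> import org.apache.hadoop.hbase.client.TableDescriptor;
> import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
> import org.apache.hadoop.hbase.util.Bytes;
>
> public class NonceRetryIllustration {
>   public static void main(String[] args) throws IOException {
>     Configuration conf = HBaseConfiguration.create();
>     try (Connection conn = ConnectionFactory.createConnection(conf);
>         Admin admin = conn.getAdmin()) {
>       TableDescriptor desc = TableDescriptorBuilder.newBuilder(TableName.valueOf("aTable"))
>           .addColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f1")).build())
>           .build();
>       try {
>         // The createTable RPC may time out locally even though the server completes it.
>         admin.createTable(desc);
>       } catch (IOException localTimeout) {
>         // The retry is sent with a distinct group/nonce, so the server treats it as a
>         // brand-new request and surfaces TableExistsException, even though the only
>         // "failure" the client saw was the local timeout.
>         admin.createTable(desc);
>       }
>     }
>   }
> }
> {code}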



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-16025) Cache table state to reduce load on META

2018-03-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400916#comment-16400916
 ] 

stack commented on HBASE-16025:
---

Need this server-side and client-side (hbase-15539). This one is important 
given how often master goes to meta in 2.0.0 vs hbase1.

> Cache table state to reduce load on META
> 
>
> Key: HBASE-16025
> URL: https://issues.apache.org/jira/browse/HBASE-16025
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Gary Helmling
>Assignee: Gary Helmling
>Priority: Critical
> Fix For: 2.0.0
>
>
> HBASE-12035 moved keeping table enabled/disabled state from ZooKeeper into 
> hbase:meta.  When we retry operations on the client, we check table state in 
> order to return a specific message if the table is disabled.  This means that 
> in master we will be going back to meta for every retry, even if a region's 
> location has not changed.  This is going to cause performance issues when a 
> cluster is already loaded, ie. in cases where regionservers may be returning 
> CallQueueTooBigException.
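> A rough sketch of the kind of client-side cache this asks for, assuming a hypothetical 
> TTL-based map keyed by table name (TableStateCache and its State enum are illustrative 
> names, not actual HBase classes):
> {code:java}
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.ConcurrentMap;
>
> // On a cache hit the retry path can answer "table is disabled" without another
> // round trip to hbase:meta; on a miss or after expiry the caller reads meta once
> // and repopulates the entry.
> public class TableStateCache {
>   public enum State { ENABLED, DISABLED }
>
>   private static final class Entry {
>     final State state;
>     final long expiresAtMillis;
>     Entry(State state, long expiresAtMillis) {
>       this.state = state;
>       this.expiresAtMillis = expiresAtMillis;
>     }
>   }
>
>   private final ConcurrentMap<String, Entry> cache = new ConcurrentHashMap<>();
>   private final long ttlMillis;
>
>   public TableStateCache(long ttlMillis) {
>     this.ttlMillis = ttlMillis;
>   }
>
>   /** Returns the cached state, or null if absent/expired (caller then goes to meta). */
>   public State getIfFresh(String tableName) {
>     Entry e = cache.get(tableName);
>     if (e == null || System.currentTimeMillis() > e.expiresAtMillis) {
>       cache.remove(tableName);
>       return null;
>     }
>     return e.state;
>   }
>
>   public void put(String tableName, State state) {
>     cache.put(tableName, new Entry(state, System.currentTimeMillis() + ttlMillis));
>   }
> }
> {code}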



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18216) [AMv2] Workaround for HBASE-18152, corrupt procedure WAL

2018-03-15 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400915#comment-16400915
 ] 

Mike Drob commented on HBASE-18216:
---

@stack - is this good to close? it was kicked out of branch-1, but is there 
more work on it for branch-2?

> [AMv2] Workaround for HBASE-18152, corrupt procedure WAL
> 
>
> Key: HBASE-18216
> URL: https://issues.apache.org/jira/browse/HBASE-18216
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-18216.branch-1.001.patch
>
>
> Let me commit workaround for the issue up in HBASE-18152, corruption in the 
> master wal procedure files. Testing on cluster shows it helps.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-13160) SplitLogWorker does not pick up the task immediately

2018-03-15 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400904#comment-16400904
 ] 

Mike Drob commented on HBASE-13160:
---

Task locking was removed with distributed log replay in HBASE-19128; 
unscheduling from 2.0+.

> SplitLogWorker does not pick up the task immediately
> 
>
> Key: HBASE-13160
> URL: https://issues.apache.org/jira/browse/HBASE-13160
> Project: HBase
>  Issue Type: Improvement
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.3
>
> Attachments: hbase-13160_v1.patch
>
>
> We were reading some code with Jeffrey, and we realized that the 
> SplitLogWorker's internal task loop is weird. It does {{ls}} every second and 
> sleeps; it has another mechanism to learn about new tasks, but does not 
> make effective use of the zk notification. 
> I have a simple patch which might improve this area. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-13160) SplitLogWorker does not pick up the task immediately

2018-03-15 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-13160:
--
Fix Version/s: (was: 2.0.0)

> SplitLogWorker does not pick up the task immediately
> 
>
> Key: HBASE-13160
> URL: https://issues.apache.org/jira/browse/HBASE-13160
> Project: HBase
>  Issue Type: Improvement
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.3
>
> Attachments: hbase-13160_v1.patch
>
>
> We were reading some code with Jeffrey, and we realized that the 
> SplitLogWorker's internal task loop is weird. It does {{ls}} every second and 
> sleeps; it has another mechanism to learn about new tasks, but does not 
> make effective use of the zk notification. 
> I have a simple patch which might improve this area. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20209) Do Not Use Both Map containsKey and get Methods

2018-03-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400897#comment-16400897
 ] 

Hadoop QA commented on HBASE-20209:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 3s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
11s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  6m  
7s{color} | {color:red} The patch causes 10 errors with Hadoop v2.6.5. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  8m  
5s{color} | {color:red} The patch causes 10 errors with Hadoop v2.7.4. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m  
9s{color} | {color:red} The patch causes 10 errors with Hadoop v3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}110m 
19s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}142m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20209 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914727/HBASE-20209.1.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux aec8bab56199 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 31da4d0bce |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC3 |
| hadoopcheck | 

[jira] [Updated] (HBASE-12943) Set sun.net.inetaddr.ttl in HBase

2018-03-15 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-12943:
--
Fix Version/s: (was: 2.0.0)
   2.1.0
   3.0.0

> Set sun.net.inetaddr.ttl in HBase
> -
>
> Key: HBASE-12943
> URL: https://issues.apache.org/jira/browse/HBASE-12943
> Project: HBase
>  Issue Type: Bug
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 1.4.3
>
> Attachments: 12943-1-master.txt
>
>
> The default value of the config sun.net.inetaddr.ttl is -1, so the Java 
> processes will cache the mapping of hostname to IP address forever. See: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/net/properties.html
> But things go wrong when a regionserver with the same hostname and a different IP 
> address rejoins the hbase cluster. The HMaster will get the wrong IP address of 
> the regionserver from this cache, and every region assignment to this 
> regionserver will be blocked for a time because the HMaster can't communicate 
> with the regionserver.
> A tradeoff is to set sun.net.inetaddr.ttl to 10m or 1h so that the stale 
> cache entries expire.
> Suggestions are welcomed. Thanks~
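> For illustration only, a minimal sketch of bounding the JDK DNS cache from code via the 
> documented java.security properties (the 600s/10s values are just examples; the 
> sun.net.inetaddr.ttl system property itself would normally be passed as a -D JVM option, 
> e.g. through HBASE_OPTS in hbase-env.sh):
> {code:java}
> import java.security.Security;
>
> public class DnsTtlExample {
>   public static void main(String[] args) {
>     // Cache successful lookups for 10 minutes instead of forever. Must be set
>     // before the first hostname resolution in the JVM.
>     Security.setProperty("networkaddress.cache.ttl", "600");
>     // Cache failed lookups only briefly.
>     Security.setProperty("networkaddress.cache.negative.ttl", "10");
>     // ... start the HBase daemon / client code after this point ...
>   }
> }
> {code}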



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-13840) Server UIs should rename column labels from KVs to Cell

2018-03-15 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob resolved HBASE-13840.
---
   Resolution: Duplicate
Fix Version/s: (was: 1.5.0)
   (was: 2.0.0)

Closing as dup of HBASE-20132

> Server UIs should rename column labels from KVs to Cell
> ---
>
> Key: HBASE-13840
> URL: https://issues.apache.org/jira/browse/HBASE-13840
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver, UI
>Affects Versions: 1.1.0
>Reporter: Lars George
>Priority: Major
>
> Currently the master UI still refers to KVs in some of the tables. We should 
> do a sweep and rename them to Cell.
> Do the same for the RS templates.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-16218) Eliminate use of UGI.doAs() in AccessController testing

2018-03-15 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-16218:
--
Fix Version/s: (was: 2.0.0)
   2.1.0
   3.0.0

> Eliminate use of UGI.doAs() in AccessController testing
> ---
>
> Key: HBASE-16218
> URL: https://issues.apache.org/jira/browse/HBASE-16218
> Project: HBase
>  Issue Type: Sub-task
>  Components: security
>Reporter: Gary Helmling
>Assignee: Gary Helmling
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 1.5.0
>
>
> Many tests for AccessController observer coprocessor hooks make use of 
> UGI.doAs() when the test user could simply be passed through.  Eliminate the 
> unnecessary use of doAs().



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-16141) Unwind use of UserGroupInformation.doAs() to convey requester identity in coprocessor upcalls

2018-03-15 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-16141:
--
Fix Version/s: (was: 2.0.0)
   2.1.0
   3.0.0

Moving this and open subtasks to 2.1/3.0, please pull back if work gets done on 
them

> Unwind use of UserGroupInformation.doAs() to convey requester identity in 
> coprocessor upcalls
> -
>
> Key: HBASE-16141
> URL: https://issues.apache.org/jira/browse/HBASE-16141
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors, security
>Reporter: Gary Helmling
>Assignee: Gary Helmling
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 1.5.0
>
>
> In HBASE-16115, there is some discussion of whether 
> UserGroupInformation.doAs() is the right mechanism for propagating the 
> original requester's identity in certain system contexts (splits, 
> compactions, some procedure calls).  It has the unfortunate effect of overriding 
> the current user, which makes for very confusing semantics for coprocessor 
> implementors.  We should instead find an alternate mechanism for conveying 
> the caller identity, one which does not override the current user context.
> I think we should instead look at passing this through as part of the 
> ObserverContext passed to every coprocessor hook.
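> A purely illustrative sketch of that alternative; the interface and method names below 
> are invented for the example and are not actual HBase API:
> {code:java}
> import java.util.Optional;
>
> // Hypothetical shape of a coprocessor context that carries the original requester
> // explicitly, instead of relying on UserGroupInformation.doAs() to swap the current user.
> public interface CallerAwareObserverContext<E> {
>   /** The coprocessor environment, as today. */
>   E getEnvironment();
>
>   /**
>    * Identity of the user who originated the request, present only when the call
>    * is being made on behalf of someone other than the current process user.
>    */
>   Optional<String> getCallerName();
> }
> {code}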



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20090) Properly handle Preconditions check failure in MemStoreFlusher$FlushHandler.run

2018-03-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400881#comment-16400881
 ] 

Ted Yu commented on HBASE-20090:


hadoopcheck seems to be related to the build environment:
{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-install-plugin:2.5.2:install (default-install) 
on project hbase-thrift: Failed to install metadata 
org.apache.hbase:hbase-thrift:3.0.0-SNAPSHOT/maven-metadata.xml: Could not 
parse metadata 
/home/jenkins/.m2/repository/org/apache/hbase/hbase-thrift/3.0.0-SNAPSHOT/maven-metadata-local.xml:
 in epilog non whitespace content is not allowed but got / (position: END_TAG 
seen ...\n/... @25:2)  -> [Help 1]
{code}
Not caused by the patch.

> Properly handle Preconditions check failure in 
> MemStoreFlusher$FlushHandler.run
> ---
>
> Key: HBASE-20090
> URL: https://issues.apache.org/jira/browse/HBASE-20090
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: 20090-server-61260-01-07.log, 20090.v10.txt, 
> 20090.v10.txt, 20090.v6.txt, 20090.v7.txt, 20090.v8.txt, 20090.v9.txt
>
>
> Copied the following from a comment since it was a better description of the 
> race condition.
> The original description was merged into the beginning of my first comment 
> below.
> With more debug logging, we can see the scenario where the exception was 
> triggered.
> {code}
> 2018-03-02 17:28:30,097 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit: 
> Splitting TestTable,,1520011528142.0453f29030757eedb6e6a1c57e88c085., 
> compaction_queue=(0:0), split_queue=1
> 2018-03-02 17:28:30,098 DEBUG 
> [RpcServer.priority.FPBQ.Fifo.handler=19,queue=1,port=16020] 
> regionserver.IncreasingToUpperBoundRegionSplitPolicy: ShouldSplit because 
> info  size=6.9G, sizeToCheck=256.0M, regionsWithCommonTable=1
> 2018-03-02 17:28:30,296 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=24,queue=0,port=16020] 
> regionserver.MemStoreFlusher: wake up flusher due to ABOVE_ONHEAP_LOWER_MARK
> 2018-03-02 17:28:30,297 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Flush thread woke up because memory above low 
> water=381.5 M
> 2018-03-02 17:28:30,297 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=25,queue=1,port=16020] 
> regionserver.MemStoreFlusher: wake up flusher due to ABOVE_ONHEAP_LOWER_MARK
> 2018-03-02 17:28:30,298 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: region 
> TestTable,,1520011528142.0453f29030757eedb6e6a1c57e88c085. with size 400432696
> 2018-03-02 17:28:30,298 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. with size 0
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Flush of region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. due to global
>  heap pressure. Flush type=ABOVE_ONHEAP_LOWER_MARKTotal Memstore Heap 
> size=381.9 MTotal Memstore Off-Heap size=0, Region memstore size=0
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: wake up by WAKEUPFLUSH_INSTANCE
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Nothing to flush for 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae.
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Excluding unflushable region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. -trying to 
> find a different region to flush.
> {code}
> Region 0453f29030757eedb6e6a1c57e88c085 was being split.
> In HRegion#flushcache, the log from else branch can be seen in 
> 20090-server-61260-01-07.log :
> {code}
>   synchronized (writestate) {
> if (!writestate.flushing && writestate.writesEnabled) {
>   this.writestate.flushing = true;
> } else {
>   if (LOG.isDebugEnabled()) {
> LOG.debug("NOT flushing memstore for region " + this
> + ", flushing=" + writestate.flushing + ", writesEnabled="
> + writestate.writesEnabled);
>   }
> {code}
> Meaning, region 0453f29030757eedb6e6a1c57e88c085 couldn't flush, leaving 
> memory pressure at a high level.
> When MemStoreFlusher reached the following call, the region was no longer a 
> flush candidate:
> {code}
>   HRegion bestFlushableRegion =
>   getBiggestMemStoreRegion(regionsBySize, excludedRegions, true);
> {code}
> So the other region, 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. , was examined 
> next. Since that region was not receiving writes, the (current) Precondition 
> check failed.
> The proposed fix is to convert the Precondition into a normal return.
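> Purely as an illustration of that shape (the class, method, and parameter names below 
> are made up, not the actual MemStoreFlusher code):
> {code:java}
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
>
> // Sketch: rather than a Preconditions.checkState(...) on the selected region's
> // memstore size, which throws when the candidate turned out to have nothing to
> // flush (e.g. it started splitting in the meantime), bail out gracefully and let
> // the flush handler pick again on its next wakeup.
> public final class FlushCandidateCheck {
>   private static final Logger LOG = LoggerFactory.getLogger(FlushCandidateCheck.class);
>
>   private FlushCandidateCheck() {}
>
>   /** Returns true if the candidate is worth flushing, false to retry on the next wakeup. */
>   static boolean worthFlushing(String regionName, long memStoreSizeBytes) {
>     if (memStoreSizeBytes <= 0) {
>       LOG.info("Above memory mark but {} has nothing to flush; picking again later.",
>         regionName);
>       return false; // normal return instead of an IllegalStateException
>     }
>     return true;
>   }
> }
> {code}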



--
This message was sent by 

[jira] [Commented] (HBASE-16025) Cache table state to reduce load on META

2018-03-15 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400874#comment-16400874
 ] 

Mike Drob commented on HBASE-16025:
---

Related to HBASE-15539? Duplicate? Subtask?

> Cache table state to reduce load on META
> 
>
> Key: HBASE-16025
> URL: https://issues.apache.org/jira/browse/HBASE-16025
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Gary Helmling
>Assignee: Gary Helmling
>Priority: Critical
> Fix For: 2.0.0
>
>
> HBASE-12035 moved keeping table enabled/disabled state from ZooKeeper into 
> hbase:meta.  When we retry operations on the client, we check table state in 
> order to return a specific message if the table is disabled.  This means that 
> in master we will be going back to meta for every retry, even if a region's 
> location has not changed.  This is going to cause performance issues when a 
> cluster is already loaded, ie. in cases where regionservers may be returning 
> CallQueueTooBigException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20090) Properly handle Preconditions check failure in MemStoreFlusher$FlushHandler.run

2018-03-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400872#comment-16400872
 ] 

Hadoop QA commented on HBASE-20090:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
2s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 7s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} hbase-server: The patch generated 0 new + 29 
unchanged - 1 fixed = 29 total (was 30) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 7s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  6m  
1s{color} | {color:red} The patch causes 10 errors with Hadoop v2.6.5. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  7m 
58s{color} | {color:red} The patch causes 10 errors with Hadoop v2.7.4. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m  
0s{color} | {color:red} The patch causes 10 errors with Hadoop v3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}108m 
37s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20090 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914726/20090.v10.txt |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux d9d986ba910a 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Commented] (HBASE-13147) Load actual META table descriptor, don't use statically defined one.

2018-03-15 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400870#comment-16400870
 ] 

Mike Drob commented on HBASE-13147:
---

Move out of 2.0?

> Load actual META table descriptor, don't use statically defined one.
> 
>
> Key: HBASE-13147
> URL: https://issues.apache.org/jira/browse/HBASE-13147
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
>Priority: Major
> Fix For: 2.0.0, 1.5.0
>
> Attachments: HBASE-13147-branch-1.patch, 
> HBASE-13147-branch-1.v2.patch, HBASE-13147.patch, HBASE-13147.v2.patch, 
> HBASE-13147.v3.patch, HBASE-13147.v4.patch, HBASE-13147.v4.patch, 
> HBASE-13147.v5.patch, HBASE-13147.v6.patch, HBASE-13147.v7.patch
>
>
> In HBASE-13087 we stumbled on the fact that region servers don't see the actual 
> meta descriptor; they use their own, statically compiled one.
> Need to fix that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-15454) Freeze date tiered store files older than max age

2018-03-15 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400871#comment-16400871
 ] 

Mike Drob commented on HBASE-15454:
---

Move out of 2.0?

> Freeze date tiered store files older than max age
> -
>
> Key: HBASE-15454
> URL: https://issues.apache.org/jira/browse/HBASE-15454
> Project: HBase
>  Issue Type: New Feature
>  Components: Compaction
>Affects Versions: 2.0.0, 1.3.0, 0.98.18, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.0.0, 3.0.0, 1.5.0
>
> Attachments: HBASE-15454-v1.patch, HBASE-15454-v2.patch, 
> HBASE-15454-v3.patch, HBASE-15454-v4.patch, HBASE-15454-v5.patch, 
> HBASE-15454-v6.patch, HBASE-15454-v7.patch, HBASE-15454.patch
>
>
> In date tiered compaction, the store files older than max age are never 
> touched by minor compactions. Here we introduce a 'freeze window' operation, 
> which does the following things (the selection step is sketched after this description):
> 1. Find all store files that contain cells whose timestamps are in the given 
> window.
> 2. Compact all these files and output one file for each window that these 
> files covered.
> After the compaction, we will have only one file in the given window, and all cells 
> whose timestamps are in the given window are in that single file. And if you do 
> not write new cells with an older timestamp in this window, the file will 
> never be changed. This makes it easier to do erasure coding on the frozen 
> file to reduce redundancy. And also, it makes it possible to check 
> consistency between the master and peer cluster incrementally.
> And why use the word 'freeze'?
> Because there is already an 'HFileArchiver' class. I want to use a different 
> word to prevent confusion.
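> A minimal sketch of the selection in step 1, assuming a hypothetical holder for a store 
> file plus its min/max cell timestamps (the real implementation would work against the 
> store file metadata):
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
>
> public class FreezeWindowSelector {
>
>   /** Hypothetical stand-in for a store file and the timestamp range of its cells. */
>   public static class StoreFileRange {
>     final String path;
>     final long minTimestamp;
>     final long maxTimestamp;
>
>     public StoreFileRange(String path, long minTimestamp, long maxTimestamp) {
>       this.path = path;
>       this.minTimestamp = minTimestamp;
>       this.maxTimestamp = maxTimestamp;
>     }
>   }
>
>   /** Files containing at least one cell whose timestamp falls in [windowStart, windowEnd). */
>   public static List<StoreFileRange> selectForWindow(
>       List<StoreFileRange> files, long windowStart, long windowEnd) {
>     List<StoreFileRange> selected = new ArrayList<>();
>     for (StoreFileRange f : files) {
>       if (f.minTimestamp < windowEnd && f.maxTimestamp >= windowStart) {
>         selected.add(f);
>       }
>     }
>     // Step 2 would compact 'selected' and emit one output file per window covered.
>     return selected;
>   }
> }
> {code}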



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19805) NPE in HMaster while issuing a sequence of table splits

2018-03-15 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400863#comment-16400863
 ] 

Mike Drob commented on HBASE-19805:
---

Are we still seeing this? [~sergey.soldatov] [~elserj]

> NPE in HMaster while issuing a sequence of table splits
> ---
>
> Key: HBASE-19805
> URL: https://issues.apache.org/jira/browse/HBASE-19805
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0-beta-1
>Reporter: Josh Elser
>Assignee: Sergey Soldatov
>Priority: Critical
> Fix For: 2.0.0
>
>
> I wrote a toy program to test the client tarball in HBASE-19735. After the 
> first few region splits, I see the following error in the Master log. 
> {noformat}
> 2018-01-16 14:07:52,797 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=28,queue=1,port=16000] master.HMaster: 
> Client=jelser//192.168.1.23 split 
> myTestTable,1,1516129669054.8313b755f74092118f9dd30a4190ee23.
> 2018-01-16 14:07:52,797 ERROR 
> [RpcServer.default.FPBQ.Fifo.handler=28,queue=1,port=16000] ipc.RpcServer: 
> Unexpected throwable object
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.client.ConnectionUtils.getStubKey(ConnectionUtils.java:229)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.getAdmin(ConnectionImplementation.java:1175)
>   at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.getAdmin(ConnectionUtils.java:149)
>   at 
> org.apache.hadoop.hbase.master.assignment.Util.getRegionInfoResponse(Util.java:59)
>   at 
> org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.checkSplittable(SplitTableRegionProcedure.java:146)
>   at 
> org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.(SplitTableRegionProcedure.java:103)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.createSplitProcedure(AssignmentManager.java:761)
>   at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1626)
>   at 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:134)
>   at org.apache.hadoop.hbase.master.HMaster.splitRegion(HMaster.java:1618)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.splitRegion(MasterRpcServices.java:778)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:404)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> {noformat}
> {code}
>   public static void main(String[] args) throws Exception {
> Configuration conf = HBaseConfiguration.create();
> try (Connection conn = ConnectionFactory.createConnection(conf);
> Admin admin = conn.getAdmin()) {
>   final TableName tn = TableName.valueOf("myTestTable");
>   if (admin.tableExists(tn)) {
> admin.disableTable(tn);
> admin.deleteTable(tn);
>   }
>   final TableDescriptor desc = TableDescriptorBuilder.newBuilder(tn)
>   
> .addColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f1")).build())
>   .build();
>   admin.createTable(desc);
>   List<String> splitPoints = new ArrayList<>(16);
>   for (int i = 1; i <= 16; i++) {
> splitPoints.add(Integer.toString(i, 16));
>   }
>   
>   System.out.println("Splits: " + splitPoints);
>   int numRegions = admin.getRegions(tn).size();
>   for (String splitPoint : splitPoints) {
> System.out.println("Splitting on " + splitPoint);
> admin.split(tn, Bytes.toBytes(splitPoint));
> Thread.sleep(200);
> int newRegionSize = admin.getRegions(tn).size();
> while (numRegions == newRegionSize) {
>   Thread.sleep(50);
>   newRegionSize = admin.getRegions(tn).size();
> }
>   }
> {code}
> A quick glance, looks like {{Util.getRegionInfoResponse}} is to blame.
> {code}
>   static GetRegionInfoResponse getRegionInfoResponse(final MasterProcedureEnv 
> env,
>   final ServerName regionLocation, final RegionInfo hri, boolean 
> includeBestSplitRow)
>   throws IOException {
> // TODO: There is no timeout on this controller. Set one!
> HBaseRpcController controller = 
> env.getMasterServices().getClusterConnection().
> getRpcControllerFactory().newController();
> final AdminService.BlockingInterface admin =
> 
> env.getMasterServices().getClusterConnection().getAdmin(regionLocation);
> {code}
> We don't validate that we have 

[jira] [Commented] (HBASE-19562) Purge mirror writing of region and table info into fs at .tableinfo and .regioninfo

2018-03-15 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400860#comment-16400860
 ] 

Mike Drob commented on HBASE-19562:
---

Can this wait until 2.1?

> Purge mirror writing of region and table info into fs at .tableinfo and 
> .regioninfo
> ---
>
> Key: HBASE-19562
> URL: https://issues.apache.org/jira/browse/HBASE-19562
> Project: HBase
>  Issue Type: Bug
>  Components: fs
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 
> 0002-HBASE-19562-Purge-mirror-writing-of-region-and-table.patch
>
>
> We don't use these files in hbase2 yet we keep writing them when we create a 
> table or region.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-16328) Reimplement web UI fixes without license problems

2018-03-15 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-16328:
--
Fix Version/s: (was: 2.0.0)
   2.1.0

> Reimplement web UI fixes without license problems
> -
>
> Key: HBASE-16328
> URL: https://issues.apache.org/jira/browse/HBASE-16328
> Project: HBase
>  Issue Type: Improvement
>  Components: dependencies, security, UI
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.6, 1.2.3
>Reporter: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 2.1.0, 1.5.0
>
>
> After HBASE-16317 we're missing some good improvements in our web ui.
> This jira is to track re-implementing the reverted commits after either
> # getting the ESAPI project to stop using cat-x dependencies
> # reimplementing the functionality without the ESAPI project
> For review, the category-x list is here:
> https://www.apache.org/legal/resolved#category-x



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20095) Redesign single instance pool in CleanerChore

2018-03-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400853#comment-16400853
 ] 

Hadoop QA commented on HBASE-20095:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
52s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 43s{color} 
| {color:red} hbase-server generated 1 new + 187 unchanged - 1 fixed = 188 
total (was 188) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
52s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
19m 16s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}155m 
46s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}200m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20095 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914709/HBASE-20095.master.014.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 4a5da809dce7 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 31da4d0bce |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC3 |
| javac | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11979/artifact/patchprocess/diff-compile-javac-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11979/testReport/ |
| Max. process+thread count | 4051 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 

[jira] [Commented] (HBASE-20206) WALEntryStream should not switch WAL file silently

2018-03-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400840#comment-16400840
 ] 

Hadoop QA commented on HBASE-20206:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  7m 
40s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
24s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
23m 50s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}150m 14s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}206m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.replication.TestReplicationEmptyWALRecovery 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20206 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914705/HBASE-20206.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 354f34641965 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 31da4d0bce |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC3 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11978/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11978/testReport/ |
| Max. process+thread count | 4869 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console 

[jira] [Updated] (HBASE-20190) Fix default for MIGRATE_TABLE_STATE_FROM_ZK_KEY

2018-03-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20190:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

Pushed to branch-2.0, branch-2, and master. Thanks.

> Fix default for MIGRATE_TABLE_STATE_FROM_ZK_KEY
> ---
>
> Key: HBASE-20190
> URL: https://issues.apache.org/jira/browse/HBASE-20190
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20190.branch-2.001.patch
>
>
> All works but the flag name will confuse: name is 
> MIGRATE_TABLE_STATE_FROM_ZK_KEY but you'd set it to true to NOT migrate from 
> zk. Found by [~tedyu] in the parent issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20132) Change the "KV" to "Cell" for web UI

2018-03-15 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-20132:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Resolve it now. We can reopen it to backport to branch-2.0 at any time

> Change the "KV" to "Cell" for web UI
> 
>
> Key: HBASE-20132
> URL: https://issues.apache.org/jira/browse/HBASE-20132
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Guangxu Cheng
>Priority: Minor
>  Labels: beginner, beginners
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20132.master.001.patch
>
>
> grep the source code. The related words which should be revised are shown 
> below.
>  # Num. Compacting KVs
>  # Num. Compacted KVs
>  # Remaining KVs
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20190) Fix default for MIGRATE_TABLE_STATE_FROM_ZK_KEY

2018-03-15 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400782#comment-16400782
 ] 

Chia-Ping Tsai commented on HBASE-20190:


+1

> Fix default for MIGRATE_TABLE_STATE_FROM_ZK_KEY
> ---
>
> Key: HBASE-20190
> URL: https://issues.apache.org/jira/browse/HBASE-20190
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: HBASE-20190.branch-2.001.patch
>
>
> All works but the flag name will confuse: name is 
> MIGRATE_TABLE_STATE_FROM_ZK_KEY but you'd set it to true to NOT migrate from 
> zk. Found by [~tedyu] in the parent issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20111) Able to split region explicitly even on shouldSplit return false from split policy

2018-03-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400781#comment-16400781
 ] 

stack commented on HBASE-20111:
---

Thanks Josh.

> Able to split region explicitly even on shouldSplit return false from split 
> policy
> --
>
> Key: HBASE-20111
> URL: https://issues.apache.org/jira/browse/HBASE-20111
> Project: HBase
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20111.001.branch-2.0.patch, HBASE-20111.patch, 
> HBASE-20111_test.patch
>
>
> Currently we are able to split the region explicitly even when the split policy 
> returns false from shouldSplit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20119) Introduce a pojo class to carry coprocessor information in order to make TableDescriptorBuilder accept multiple cp at once

2018-03-15 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-20119:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to branch-2.0+

> Introduce a pojo class to carry coprocessor information in order to make 
> TableDescriptorBuilder accept multiple cp at once
> --
>
> Key: HBASE-20119
> URL: https://issues.apache.org/jira/browse/HBASE-20119
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0, 3.0.0, 2.1.0
>
> Attachments: HBASE-20119.branch-2.v0.patch, 
> HBASE-20119.v0.patch.patch, HBASE-20119.v1.patch.patch, HBASE-20119.v2.patch, 
> HBASE-20119.v3.patch
>
>
> The way to add cp to TableDescriptorBuilder is shown below.
> {code:java}
> public TableDescriptorBuilder addCoprocessor(String className) throws 
> IOException {
>   return addCoprocessor(className, null, Coprocessor.PRIORITY_USER, null);
> }
> public TableDescriptorBuilder addCoprocessor(String className, Path 
> jarFilePath,
> int priority, final Map<String, String> kvs) throws IOException {
>   desc.addCoprocessor(className, jarFilePath, priority, kvs);
>   return this;
> }
> public TableDescriptorBuilder addCoprocessorWithSpec(final String specStr) 
> throws IOException {
>   desc.addCoprocessorWithSpec(specStr);
>   return this;
> }{code}
> When loading our config to create a table with multiple cps, we have to write 
> an ugly for-loop.
> {code:java}
> val builder = TableDescriptorBuilder.newBuilder(tableName)
>   .setAAA()
>   .setBBB()
> cps.map(toHBaseCp).foreach(builder.addCoprocessor)
> cfs.map(toHBaseCf).foreach(builder.addColumnFamily)
> admin.createTable(builder.build())
> {code}
> If we introduce a pojo to carry the cp data and add the method accepting 
> multiple cps and cfs, it is easier to exercise the fluent interface of 
> TableDescriptorBuilder.
> {code:java}
> admin.createTable(TableDescriptorBuilder.newBuilder(tableName)
> .addCoprocessor(cps.map(toHBaseCp).asJavaCollection)
> .addColumnFamily(cf.map(toHBaseCf).asJavaCollection)
> .setAAA()
> .setBBB()
> .build){code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

