[jira] [Assigned] (HBASE-24248) AsyncRpcRetryingCaller should include master call details

2022-11-10 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HBASE-24248:
-

Assignee: (was: Mingliang Liu)

> AsyncRpcRetryingCaller should include master call details
> -
>
> Key: HBASE-24248
> URL: https://issues.apache.org/jira/browse/HBASE-24248
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Priority: Minor
>
> I think the below is a retry loop from a coprocessor execution pointed at a 
> master. The call details are missing some important details, such as:
>  * hostname and port
>  * call id
>  * remote (rpc) method name
> {noformat}
> 20/04/23 21:09:26 WARN client.AsyncRpcRetryingCaller: Call to master failed, 
> tries = 6, maxAttempts = 10, timeout = 120 ms, time elapsed = 2902 ms
> org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: 
> org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not 
> running yet
> at 
> org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2873)
> at 
> org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2885)
> at 
> org.apache.hadoop.hbase.master.MasterRpcServices.rpcPreCheck(MasterRpcServices.java:438)
> at 
> org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:882)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
> at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
> at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
> at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:99)
> at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:89)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils.translateException(ConnectionUtils.java:321)
> at 
> org.apache.hadoop.hbase.client.AsyncRpcRetryingCaller.onError(AsyncRpcRetryingCaller.java:159)
> at 
> org.apache.hadoop.hbase.client.AsyncMasterRequestRpcRetryingCaller.lambda$null$4(AsyncMasterRequestRpcRetryingCaller.java:73)
> at 
> org.apache.hadoop.hbase.util.FutureUtils.lambda$addListener$0(FutureUtils.java:68)
> at 
> java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
> at 
> java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
> at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
> at 
> org.apache.hadoop.hbase.client.MasterCoprocessorRpcChannelImpl$1.run(MasterCoprocessorRpcChannelImpl.java:63)
> at 
> org.apache.hadoop.hbase.client.MasterCoprocessorRpcChannelImpl$1.run(MasterCoprocessorRpcChannelImpl.java:58)
> at 
> org.apache.hbase.thirdparty.com.google.protobuf.RpcUtil$1.run(RpcUtil.java:79)
> at 
> org.apache.hbase.thirdparty.com.google.protobuf.RpcUtil$1.run(RpcUtil.java:70)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:378)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403)
> at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:117)
> at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:132)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invoke

[jira] [Assigned] (HBASE-24692) WebUI header bar overlaps page content when window is too narrow

2022-04-15 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HBASE-24692:
-

Assignee: (was: Mingliang Liu)

> WebUI header bar overlaps page content when window is too narrow
> 
>
> Key: HBASE-24692
> URL: https://issues.apache.org/jira/browse/HBASE-24692
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Priority: Minor
> Attachments: 24692-ex1.png, 24692-ex2.png, 24692-ex3.png, 
> 24692-ex4.png
>
>
> It seems the CSS on our WebUI is such that the header will expand down 
> vertically as the content wraps dynamically. However, the page content does 
> not shift down along with it, resulting in the header overlapping the page 
> content.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (HBASE-24696) Include JVM information on Web UI under "Software Attributes"

2020-07-23 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163859#comment-17163859
 ] 

Mingliang Liu edited comment on HBASE-24696 at 7/23/20, 6:44 PM:
-

Thank you [~psomogyi] Very useful information. I did not think of inferring 
from the download page or checking the book.


was (Author: liuml07):
Thank you [~psomogyi] Very useful information. I did not think of inferring 
from or download page or checking the book.

> Include JVM information on Web UI under "Software Attributes"
> -
>
> Key: HBASE-24696
> URL: https://issues.apache.org/jira/browse/HBASE-24696
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.3.1, 1.7.0, 2.4.0, 2.1.10, 2.2.7
>
> Attachments: Screen Shot 2020-07-17 at 10.55.56 PM.png
>
>
> It's a small thing, but seems like an omission.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24696) Include JVM information on Web UI under "Software Attributes"

2020-07-23 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163859#comment-17163859
 ] 

Mingliang Liu commented on HBASE-24696:
---

Thank you [~psomogyi] Very useful information. I did not think of inferring 
from the download page or checking the book.

> Include JVM information on Web UI under "Software Attributes"
> -
>
> Key: HBASE-24696
> URL: https://issues.apache.org/jira/browse/HBASE-24696
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.3.1, 1.7.0, 2.4.0, 2.1.10, 2.2.7
>
> Attachments: Screen Shot 2020-07-17 at 10.55.56 PM.png
>
>
> It's a small thing, but seems like an omission.





[jira] [Commented] (HBASE-24696) Include JVM information on Web UI under "Software Attributes"

2020-07-23 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163849#comment-17163849
 ] 

Mingliang Liu commented on HBASE-24696:
---

[~psomogyi] Do we have a wiki / link tracking active branches? In Hadoop, there 
is a wiki page: 
https://cwiki.apache.org/confluence/display/HADOOP/EOL+%28End-of-life%29+Release+Branches

> Include JVM information on Web UI under "Software Attributes"
> -
>
> Key: HBASE-24696
> URL: https://issues.apache.org/jira/browse/HBASE-24696
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.3.1, 1.7.0, 2.4.0, 2.1.10, 2.2.7
>
> Attachments: Screen Shot 2020-07-17 at 10.55.56 PM.png
>
>
> It's a small thing, but seems like an omission.





[jira] [Commented] (HBASE-24692) WebUI header bar overlaps page content when window is too narrow

2020-07-22 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163124#comment-17163124
 ] 

Mingliang Liu commented on HBASE-24692:
---

[~ndimiduk] Feel free to re-assign if you have a plan. When I took this up, I 
thought it could be fixed with a quick adjustment. It makes perfect sense to 
upgrade Bootstrap to a newer version. Grouping items into sub-menus is also a 
good idea, since the menu is growing longer. I'll find time in the coming 
months to work on this if it has not been re-assigned by then.

> WebUI header bar overlaps page content when window is too narrow
> 
>
> Key: HBASE-24692
> URL: https://issues.apache.org/jira/browse/HBASE-24692
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: 24692-ex1.png, 24692-ex2.png, 24692-ex3.png, 
> 24692-ex4.png
>
>
> It seems the CSS on our WebUI is such that the header will expand down 
> vertically as the content wraps dynamically. However, the page content does 
> not shift down along with it, resulting in the header overlapping the page 
> content.





[jira] [Commented] (HBASE-24696) Include JVM information on Web UI under "Software Attributes"

2020-07-21 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162376#comment-17162376
 ] 

Mingliang Liu commented on HBASE-24696:
---

Thanks [~ndimiduk] I saw major conflicts when trying to backport locally. So I 
prepared a new patch and filed PR #2117.

> Include JVM information on Web UI under "Software Attributes"
> -
>
> Key: HBASE-24696
> URL: https://issues.apache.org/jira/browse/HBASE-24696
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0
>
> Attachments: Screen Shot 2020-07-17 at 10.55.56 PM.png
>
>
> It's a small thing, but seems like an omission.





[jira] [Commented] (HBASE-24696) Include JVM information on Web UI under "Software Attributes"

2020-07-21 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162161#comment-17162161
 ] 

Mingliang Liu commented on HBASE-24696:
---

Thank you both [~vjasani]  and [~ndimiduk] 

> Include JVM information on Web UI under "Software Attributes"
> -
>
> Key: HBASE-24696
> URL: https://issues.apache.org/jira/browse/HBASE-24696
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0
>
> Attachments: Screen Shot 2020-07-17 at 10.55.56 PM.png
>
>
> It's a small thing, but seems like an omission.





[jira] [Updated] (HBASE-24696) Include JVM information on Web UI under "Software Attributes"

2020-07-17 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-24696:
--
Attachment: (was: Screen Shot 2020-07-17 at 10.54.09 PM.png)

> Include JVM information on Web UI under "Software Attributes"
> -
>
> Key: HBASE-24696
> URL: https://issues.apache.org/jira/browse/HBASE-24696
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: Nick Dimiduk
>Priority: Minor
> Attachments: Screen Shot 2020-07-17 at 10.55.56 PM.png
>
>
> It's a small thing, but seems like an omission.





[jira] [Updated] (HBASE-24696) Include JVM information on Web UI under "Software Attributes"

2020-07-17 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-24696:
--
Attachment: Screen Shot 2020-07-17 at 10.55.56 PM.png

> Include JVM information on Web UI under "Software Attributes"
> -
>
> Key: HBASE-24696
> URL: https://issues.apache.org/jira/browse/HBASE-24696
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: Nick Dimiduk
>Priority: Minor
> Attachments: Screen Shot 2020-07-17 at 10.55.56 PM.png
>
>
> It's a small thing, but seems like an omission.





[jira] [Commented] (HBASE-24696) Include JVM information on Web UI under "Software Attributes"

2020-07-17 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17160339#comment-17160339
 ] 

Mingliang Liu commented on HBASE-24696:
---

I filed a simple PR. Not sure if it is headed in the right direction. Thanks 
[~ndimiduk]


> Include JVM information on Web UI under "Software Attributes"
> -
>
> Key: HBASE-24696
> URL: https://issues.apache.org/jira/browse/HBASE-24696
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: Nick Dimiduk
>Priority: Minor
> Attachments: Screen Shot 2020-07-17 at 10.54.09 PM.png
>
>
> It's a small thing, but seems like an omission.





[jira] [Updated] (HBASE-24696) Include JVM information on Web UI under "Software Attributes"

2020-07-17 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-24696:
--
Attachment: Screen Shot 2020-07-17 at 10.54.09 PM.png

> Include JVM information on Web UI under "Software Attributes"
> -
>
> Key: HBASE-24696
> URL: https://issues.apache.org/jira/browse/HBASE-24696
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: Nick Dimiduk
>Priority: Minor
> Attachments: Screen Shot 2020-07-17 at 10.54.09 PM.png
>
>
> It's a small thing, but seems like an omission.





[jira] [Commented] (HBASE-24708) Flaky Test TestRegionReplicas#testVerifySecondaryAbilityToReadWithOnFiles

2020-07-17 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17160330#comment-17160330
 ] 

Mingliang Liu commented on HBASE-24708:
---

Do you have links to the failing builds or a stack trace?

> Flaky Test TestRegionReplicas#testVerifySecondaryAbilityToReadWithOnFiles
> -
>
> Key: HBASE-24708
> URL: https://issues.apache.org/jira/browse/HBASE-24708
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.3.0
>Reporter: Huaxiang Sun
>Priority: Major
>






[jira] [Commented] (HBASE-24692) WebUI header bar overlaps page content when window is too narrow

2020-07-17 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17160292#comment-17160292
 ] 

Mingliang Liu commented on HBASE-24692:
---

Yes, that problem also shows up in Chrome and Safari on macOS... I have no 
clue so far what the fix could be, since it comes from the Bootstrap CSS.

> WebUI header bar overlaps page content when window is too narrow
> 
>
> Key: HBASE-24692
> URL: https://issues.apache.org/jira/browse/HBASE-24692
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: 24692-ex1.png, 24692-ex2.png, 24692-ex3.png, 
> 24692-ex4.png
>
>
> It seems the CSS on our WebUI is such that the header will expand down 
> vertically as the content wraps dynamically. However, the page content does 
> not shift down along with it, resulting in the header overlapping the page 
> content.





[jira] [Comment Edited] (HBASE-24692) WebUI header bar overlaps page content when window is too narrow

2020-07-17 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17160274#comment-17160274
 ] 

Mingliang Liu edited comment on HBASE-24692 at 7/17/20, 11:55 PM:
--

[~ndimiduk] I had a look at this, but only found that the navigation bar (or 
header bar, as you referred to it) collapses. If you click the icon at the top 
right, the navigation bar shows up. I guess this was designed for narrow 
screens.



was (Author: liuml07):
[~ndimiduk] I had a look at this, but only find that the navigation bar (or 
header bar as you referred to) collapses. So if you click the icon on top 
right, you will see the navigation bar shows up. I guess this was designed for 
narrow screens.

  !24692-ex4.png! 

> WebUI header bar overlaps page content when window is too narrow
> 
>
> Key: HBASE-24692
> URL: https://issues.apache.org/jira/browse/HBASE-24692
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: 24692-ex1.png, 24692-ex2.png, 24692-ex3.png, 
> 24692-ex4.png
>
>
> It seems the CSS on our WebUI is such that the header will expand down 
> vertically as the content wraps dynamically. However, the page content does 
> not shift down along with it, resulting in the header overlapping the page 
> content.





[jira] [Updated] (HBASE-24692) WebUI header bar overlaps page content when window is too narrow

2020-07-17 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-24692:
--
Attachment: 24692-ex4.png

> WebUI header bar overlaps page content when window is too narrow
> 
>
> Key: HBASE-24692
> URL: https://issues.apache.org/jira/browse/HBASE-24692
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: 24692-ex1.png, 24692-ex2.png, 24692-ex3.png, 
> 24692-ex4.png
>
>
> It seems the CSS on our WebUI is such that the header will expand down 
> vertically as the content wraps dynamically. However, the page content does 
> not shift down along with it, resulting in the header overlapping the page 
> content.





[jira] [Comment Edited] (HBASE-24692) WebUI header bar overlaps page content when window is too narrow

2020-07-17 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17160274#comment-17160274
 ] 

Mingliang Liu edited comment on HBASE-24692 at 7/17/20, 11:54 PM:
--

[~ndimiduk] I had a look at this, but only found that the navigation bar (or 
header bar, as you referred to it) collapses. If you click the icon at the top 
right, the navigation bar shows up. I guess this was designed for narrow 
screens.

  !24692-ex4.png! 


was (Author: liuml07):
[~ndimiduk] I had a look at this, but only find that the navigation bar (or 
header bar as you referred to) collapses. So if you click the icon on top 
right, you will see the navigation bar shows up. I guess this was designed for 
narrow screens.

 

> WebUI header bar overlaps page content when window is too narrow
> 
>
> Key: HBASE-24692
> URL: https://issues.apache.org/jira/browse/HBASE-24692
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: 24692-ex1.png, 24692-ex2.png, 24692-ex3.png, 
> 24692-ex4.png
>
>
> It seems the CSS on our WebUI is such that the header will expand down 
> vertically as the content wraps dynamically. However, the page content does 
> not shift down along with it, resulting in the header overlapping the page 
> content.





[jira] [Commented] (HBASE-24692) WebUI header bar overlaps page content when window is too narrow

2020-07-17 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17160274#comment-17160274
 ] 

Mingliang Liu commented on HBASE-24692:
---

[~ndimiduk] I had a look at this, but only found that the navigation bar (or 
header bar, as you referred to it) collapses. If you click the icon at the top 
right, the navigation bar shows up. I guess this was designed for narrow 
screens.

 

> WebUI header bar overlaps page content when window is too narrow
> 
>
> Key: HBASE-24692
> URL: https://issues.apache.org/jira/browse/HBASE-24692
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: 24692-ex1.png, 24692-ex2.png, 24692-ex3.png
>
>
> It seems the CSS on our WebUI is such that the header will expand down 
> vertically as the content wraps dynamically. However, the page content does 
> not shift down along with it, resulting in the header overlapping the page 
> content.





[jira] [Assigned] (HBASE-24692) WebUI header bar overlaps page content when window is too narrow

2020-07-17 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HBASE-24692:
-

Assignee: Mingliang Liu

> WebUI header bar overlaps page content when window is too narrow
> 
>
> Key: HBASE-24692
> URL: https://issues.apache.org/jira/browse/HBASE-24692
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: 24692-ex1.png, 24692-ex2.png, 24692-ex3.png
>
>
> It seems the CSS on our WebUI is such that the header will expand down 
> vertically as the content wraps dynamically. However, the page content does 
> not shift down along with it, resulting in the header overlapping the page 
> content.





[jira] [Commented] (HBASE-24324) NPE from /procedures.jsp on backup master

2020-05-06 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100553#comment-17100553
 ] 

Mingliang Liu commented on HBASE-24324:
---

Is this because the MasterProcedureExecutor is null on a backup master? I 
guess we can add a null check.
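A minimal sketch of that null-check idea, under stated assumptions: the real fix would live in the generated {{procedures_jsp}} servlet, and {{render()}} / {{procedures}} below are hypothetical stand-ins, not HBase's actual API.

```java
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: on a backup (not-yet-active) master the procedure
// executor may be null, so the page should render a placeholder instead of
// letting an NPE propagate to the user. All names are illustrative.
public class ProceduresPageSketch {
    // Stand-in for the master's procedure list; null simulates a backup master.
    static List<String> procedures;

    static String render() {
        List<String> procs = procedures;
        if (procs == null) {
            // Guard: degrade gracefully rather than throwing an NPE.
            return "Master is not active; no procedures to display.";
        }
        return String.join(", ", procs);
    }

    public static void main(String[] args) {
        procedures = null;
        System.out.println(render());
        procedures = Collections.singletonList("CreateTableProcedure");
        System.out.println(render());
    }
}
```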

> NPE from /procedures.jsp on backup master
> -
>
> Key: HBASE-24324
> URL: https://issues.apache.org/jira/browse/HBASE-24324
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Priority: Major
>
> When going to {{/procedures.jsp}} on a backup master (i.e., a user hits 
> refresh on a window they have open, meanwhile, the active master has flipped 
> over), we throw an NPE back to the user. Instead, we should do practically 
> anything else.
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.generated.master.procedures_jsp._jspService(procedures_jsp.java:63)
>   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
>   at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>   at 
> org.apache.hadoop.hbase.http.SecurityHeadersFilter.doFilter(SecurityHeadersFilter.java:66)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>   at 
> org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:52)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>   at 
> org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1491)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>   at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:50)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:539)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> {noformat}





[jira] [Commented] (HBASE-23707) Add IntelliJ check style plugin configuration

2020-05-04 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17099411#comment-17099411
 ] 

Mingliang Liu commented on HBASE-23707:
---

Cool, thanks [~ndimiduk] I'll try that and review the IntelliJ article shortly.

> Add IntelliJ check style plugin configuration
> -
>
> Key: HBASE-23707
> URL: https://issues.apache.org/jira/browse/HBASE-23707
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.3.0, 1.6.0
>
>
> IntelliJ defines project configuration across a number of files, specifically 
> so that some configurations can be committed with the source repository. The 
> checkstyle plugin configuration is one such config file; add it.





[jira] [Commented] (HBASE-23707) Add IntelliJ check style plugin configuration

2020-04-30 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097167#comment-17097167
 ] 

Mingliang Liu commented on HBASE-23707:
---

[~ndimiduk]
{quote}
What happens when you select the root pom.xml instead of just the folder? Does 
it detect the maven model type?
{quote}
I tried this as well; no, it does not work. I also ran 'mvn clean package 
-DskipTests' before opening/importing, and that still does not help. So it 
seems this ".idea" directory just makes importing (the directory or 
{{pom.xml}}) stop working correctly. I believe this is IntelliJ version 
dependent, since it is IntelliJ that decides how to initialize a new project 
from source with an existing ".idea" directory.

{quote}
Maybe we need to commit another file from .idea directory, one that specifies 
the project type?
{quote}
This seems like a good idea, but I do not know which file that would be. Also, 
that could still be IntelliJ version dependent. On my current IntelliJ, I have 
the following files in ".idea":
{code}
$ ls -l .idea
-rw-r--r--    1 mingliang.liu  wheel   159B Apr 25 16:15 $CACHE_FILE$
-rw-r--r--    1 mingliang.liu  wheel   917B Apr 26 21:44 checkstyle-idea.xml
-rw-r--r--    1 mingliang.liu  wheel   6.6K Apr 25 16:16 compiler.xml
-rw-r--r--    1 mingliang.liu  wheel   3.4K Apr 25 16:16 encodings.xml
-rw-r--r--    1 mingliang.liu  wheel   874B Apr 25 16:16 jarRepositories.xml
drwxr-xr-x  234 mingliang.liu  wheel   7.3K Apr 25 16:16 libraries/
-rw-r--r--    1 mingliang.liu  wheel   384B Apr 25 16:15 misc.xml
-rw-r--r--    1 mingliang.liu  wheel   6.8K Apr 25 16:16 modules.xml
-rw-r--r--    1 mingliang.liu  wheel   180B Apr 25 16:15 vcs.xml
-rw-r--r--    1 mingliang.liu  wheel    14K Apr 27 02:27 workspace.xml
{code}

The last resort is to keep this file, but place the plugin settings file in 
the "dev-support/" directory. We can then kindly ask developers to replace 
".idea/checkstyle-idea.xml" with the "dev-support/idea/checkstyle-idea.xml" 
file manually after importing the project. This can be documented in the book.

> Add IntelliJ check style plugin configuration
> -
>
> Key: HBASE-23707
> URL: https://issues.apache.org/jira/browse/HBASE-23707
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.3.0, 1.6.0
>
>
> IntelliJ defines project configuration across a number of files, specifically 
> so that some configurations can be committed with the source repository. The 
> checkstyle plugin configuration is one such config file; add it.





[jira] [Commented] (HBASE-24248) AsyncRpcRetryingCaller should include master call details

2020-04-27 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17093243#comment-17093243
 ] 

Mingliang Liu commented on HBASE-24248:
---

[~ndimiduk] When I started looking at the code, I found that this information 
is not yet readily available in the current context when building 
{{AsyncMasterRequestRpcRetryingCaller}}. The master hostname:port can be 
retrieved from the connection, but the call id and method name would need to be 
retrieved and passed in when creating this caller. Before I figure out the 
implementation, I was wondering: do we need this for all master RPCs, or only 
for coprocessor execution on the master? Thanks,
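To make the discussion concrete, here is a minimal sketch of what an enriched retry log message could look like once hostname:port, call id, and method name are threaded into the caller. The class and method names below are illustrative only, not the actual HBase API:

```java
// Hypothetical helper: formats a master-call failure message that includes
// the details the issue asks for (hostname:port, call id, rpc method name).
public class CallDetails {
    static String describe(String host, int port, int callId, String method,
                           int tries, int maxAttempts) {
        return String.format(
            "Call to master %s:%d failed, method=%s, callId=%d, tries=%d, maxAttempts=%d",
            host, port, method, callId, tries, maxAttempts);
    }

    public static void main(String[] args) {
        System.out.println(describe("master-1.example.com", 16000, 42,
            "execMasterService", 6, 10));
    }
}
```

The actual wiring would still need the call id and method name passed down when the caller is created, as discussed above.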

> AsyncRpcRetryingCaller should include master call details
> -
>
> Key: HBASE-24248
> URL: https://issues.apache.org/jira/browse/HBASE-24248
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
>
> I think the below is a retry loop from a coprocessor execution pointed at a 
> master. The call details are missing some important details, such as:
>  * hostname and port
>  * call id
>  * remote (rpc) method name
> {noformat}
> 20/04/23 21:09:26 WARN client.AsyncRpcRetryingCaller: Call to master failed, 
> tries = 6, maxAttempts = 10, timeout = 120 ms, time elapsed = 2902 ms
> org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: 
> org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not 
> running yet
> at 
> org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2873)
> at 
> org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2885)
> at 
> org.apache.hadoop.hbase.master.MasterRpcServices.rpcPreCheck(MasterRpcServices.java:438)
> at 
> org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:882)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
> at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
> at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
> at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:99)
> at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:89)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils.translateException(ConnectionUtils.java:321)
> at 
> org.apache.hadoop.hbase.client.AsyncRpcRetryingCaller.onError(AsyncRpcRetryingCaller.java:159)
> at 
> org.apache.hadoop.hbase.client.AsyncMasterRequestRpcRetryingCaller.lambda$null$4(AsyncMasterRequestRpcRetryingCaller.java:73)
> at 
> org.apache.hadoop.hbase.util.FutureUtils.lambda$addListener$0(FutureUtils.java:68)
> at 
> java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
> at 
> java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
> at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
> at 
> org.apache.hadoop.hbase.client.MasterCoprocessorRpcChannelImpl$1.run(MasterCoprocessorRpcChannelImpl.java:63)
> at 
> org.apache.hadoop.hbase.client.MasterCoprocessorRpcChannelImpl$1.run(MasterCoprocessorRpcChannelImpl.java:58)
> at 
> org.apache.hbase.thirdparty.com.google.protobuf.RpcUtil$1.run(RpcUtil.java:79)
> at 
> org.apache.hbase.thirdparty.com.google.protobuf.RpcUtil$1.run(RpcUtil.java:70)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:378)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
> at 
> org.apache.hadoop.hbase.ipc.Abstr

[jira] [Commented] (HBASE-23707) Add IntelliJ check style plugin configuration

2020-04-24 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17092053#comment-17092053
 ] 

Mingliang Liu commented on HBASE-23707:
---

[~ndimiduk] Lovely as it is, I found that having this settings file under the 
".idea/" directory makes a (new) project import stop working. The problem I hit 
can be reproduced as follows:
 1. git clone g...@github.com:apache/hbase.git ./myhbase
 2. IntelliJ IDEA -> File -> New project from existing sources (or Open)
 3. The project opens, but the Maven modules, source/test code structure and 
indexing, and other project auto-import steps just do not happen.

In this situation, IntelliJ fails to recognize files as Java sources. The 
reason, I *guess*, is that we have an existing ".idea/" directory in the source 
tree that provides only the one checkstyle plugin file, and IntelliJ is not 
smart enough to realize the project has not actually been imported yet. After I 
removed this checkstyle settings file (and the ".idea/" directory), the import 
started working and I could run unit tests.

Can you reproduce this? If it is not something I did wrong, I propose we put 
this plugin settings file into the "dev-support/" directory and let developers 
know (in a doc somewhere) that they can replace ".idea/checkstyle-idea.xml" 
with the "dev-support/idea/checkstyle-idea.xml" file manually after importing 
the project (or we could add a helper script in the future). This manual step 
seems fine, since developers need to manually install the CheckStyle plugin 
anyway. If this looks good, I can file a simple patch to rename this file. I 
have seen some projects commit their whole IntelliJ settings into source 
control, but I'm not sure that is better, given that personal settings will 
keep those files "dirty" in git.

I tried on macOS and Linux. The IntelliJ version is
{code:java}
IntelliJ IDEA 2020.1 (Ultimate Edition)
Build #IU-201.6668.121, built on April 8, 2020

Runtime version: 11.0.6+8-b765.25 x86_64
VM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
macOS 10.15.3
GC: ParNew, ConcurrentMarkSweep
Memory: 8029M
Cores: 8
Non-Bundled Plugins: CheckStyle-IDEA, intellij-shellscript, com.paperetto.dash, 
io.protostuff.protostuff-jetbrains-plugin, org.intellij.plugins.hcl
{code}

> Add IntelliJ check style plugin configuration
> -
>
> Key: HBASE-23707
> URL: https://issues.apache.org/jira/browse/HBASE-23707
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 1.6.0
>
>
> IntelliJ defines project configuration across a number of files, specifically 
> so that some configurations can be committed with the source repository. The 
> checkstyle plugin configuration is one such config file; add it.





[jira] [Assigned] (HBASE-24248) AsyncRpcRetryingCaller should include master call details

2020-04-24 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HBASE-24248:
-

Assignee: Mingliang Liu

> AsyncRpcRetryingCaller should include master call details
> -
>
> Key: HBASE-24248
> URL: https://issues.apache.org/jira/browse/HBASE-24248
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
>
> I think the below is a retry loop from a coprocessor execution pointed at a 
> master. The call details are missing some important details, such as:
>  * hostname and port
>  * call id
>  * remote (rpc) method name
> {noformat}
> 20/04/23 21:09:26 WARN client.AsyncRpcRetryingCaller: Call to master failed, 
> tries = 6, maxAttempts = 10, timeout = 120 ms, time elapsed = 2902 ms
> org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: 
> org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not 
> running yet
> at 
> org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2873)
> at 
> org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2885)
> at 
> org.apache.hadoop.hbase.master.MasterRpcServices.rpcPreCheck(MasterRpcServices.java:438)
> at 
> org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:882)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
> at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
> at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
> at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:99)
> at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:89)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils.translateException(ConnectionUtils.java:321)
> at 
> org.apache.hadoop.hbase.client.AsyncRpcRetryingCaller.onError(AsyncRpcRetryingCaller.java:159)
> at 
> org.apache.hadoop.hbase.client.AsyncMasterRequestRpcRetryingCaller.lambda$null$4(AsyncMasterRequestRpcRetryingCaller.java:73)
> at 
> org.apache.hadoop.hbase.util.FutureUtils.lambda$addListener$0(FutureUtils.java:68)
> at 
> java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
> at 
> java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
> at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
> at 
> org.apache.hadoop.hbase.client.MasterCoprocessorRpcChannelImpl$1.run(MasterCoprocessorRpcChannelImpl.java:63)
> at 
> org.apache.hadoop.hbase.client.MasterCoprocessorRpcChannelImpl$1.run(MasterCoprocessorRpcChannelImpl.java:58)
> at 
> org.apache.hbase.thirdparty.com.google.protobuf.RpcUtil$1.run(RpcUtil.java:79)
> at 
> org.apache.hbase.thirdparty.com.google.protobuf.RpcUtil$1.run(RpcUtil.java:70)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:378)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403)
> at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:117)
> at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:132)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractCh

[jira] [Commented] (HBASE-24248) AsyncRpcRetryingCaller should include master call details

2020-04-23 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091070#comment-17091070
 ] 

Mingliang Liu commented on HBASE-24248:
---

The idea seems right to me. I can take this up this weekend if you are not 
working on it or do not already have a PR. Thanks [~ndimiduk]

> AsyncRpcRetryingCaller should include master call details
> -
>
> Key: HBASE-24248
> URL: https://issues.apache.org/jira/browse/HBASE-24248
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Priority: Minor
>
> I think the below is a retry loop from a coprocessor execution pointed at a 
> master. The call details are missing some important details, such as:
>  * hostname and port
>  * call id
>  * remote (rpc) method name
> {noformat}
> 20/04/23 21:09:26 WARN client.AsyncRpcRetryingCaller: Call to master failed, 
> tries = 6, maxAttempts = 10, timeout = 120 ms, time elapsed = 2902 ms
> org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: 
> org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not 
> running yet
> at 
> org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2873)
> at 
> org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2885)
> at 
> org.apache.hadoop.hbase.master.MasterRpcServices.rpcPreCheck(MasterRpcServices.java:438)
> at 
> org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:882)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
> at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
> at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
> at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:99)
> at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:89)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils.translateException(ConnectionUtils.java:321)
> at 
> org.apache.hadoop.hbase.client.AsyncRpcRetryingCaller.onError(AsyncRpcRetryingCaller.java:159)
> at 
> org.apache.hadoop.hbase.client.AsyncMasterRequestRpcRetryingCaller.lambda$null$4(AsyncMasterRequestRpcRetryingCaller.java:73)
> at 
> org.apache.hadoop.hbase.util.FutureUtils.lambda$addListener$0(FutureUtils.java:68)
> at 
> java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
> at 
> java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
> at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
> at 
> org.apache.hadoop.hbase.client.MasterCoprocessorRpcChannelImpl$1.run(MasterCoprocessorRpcChannelImpl.java:63)
> at 
> org.apache.hadoop.hbase.client.MasterCoprocessorRpcChannelImpl$1.run(MasterCoprocessorRpcChannelImpl.java:58)
> at 
> org.apache.hbase.thirdparty.com.google.protobuf.RpcUtil$1.run(RpcUtil.java:79)
> at 
> org.apache.hbase.thirdparty.com.google.protobuf.RpcUtil$1.run(RpcUtil.java:70)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:378)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403)
> at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:117)
> at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:132)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.c

[jira] [Commented] (HBASE-24234) ChecksumUtil class validateChecksum method log level does not match

2020-04-22 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17090021#comment-17090021
 ] 

Mingliang Liu commented on HBASE-24234:
---

HBASE-23047 seems to fix this. Was it not backported to older versions? CC: 
[~wchevreuil]

> ChecksumUtil class validateChecksum method log level does not match
> ---
>
> Key: HBASE-24234
> URL: https://issues.apache.org/jira/browse/HBASE-24234
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 2.2.0, 2.2.1, 2.2.2, 2.2.3, 2.2.4
>Reporter: Xiao Zhang
>Assignee: Xiao Zhang
>Priority: Minor
>
> In the validateChecksum method of the ChecksumUtil class, still use LOG.info 
> after judging LOG.isTraceEnabled.
> eg:
>  if (LOG.isTraceEnabled()) {
>  LOG.info("dataLength=" + buffer.capacity()
>  + ", sizeWithHeader=" + onDiskDataSizeWithHeader
>  + ", checksumType=" + cktype.getName()
>  + ", file=" + pathName
>  + ", offset=" + offset
>  + ", headerSize=" + hdrSize
>  + ", bytesPerChecksum=" + bytesPerChecksum);
>  }
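A minimal sketch of the fix the issue describes: the level in the guard must agree with the level actually used to log. The class and method below are illustrative stand-ins (using {{java.util.logging}} rather than HBase's actual logger), not the real ChecksumUtil code:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative fix: guard at trace (FINEST) level, then log at that SAME
// level, instead of guarding on trace but logging at info.
public class ChecksumLogLevel {
    private static final Logger LOG = Logger.getLogger("ChecksumUtil");

    // Builds the message separately so it can be inspected/tested.
    static String format(int dataLength, int sizeWithHeader, String checksumType) {
        return "dataLength=" + dataLength
            + ", sizeWithHeader=" + sizeWithHeader
            + ", checksumType=" + checksumType;
    }

    static void logDetails(int dataLength, int sizeWithHeader, String checksumType) {
        if (LOG.isLoggable(Level.FINEST)) {      // guard at trace level...
            LOG.finest(format(dataLength, sizeWithHeader, checksumType)); // ...and log at trace level
        }
    }

    public static void main(String[] args) {
        logDetails(65536, 65571, "CRC32C");
    }
}
```

With an SLF4J-style logger, parameterized logging ({{LOG.trace("dataLength={}", ...)}}) would also make the explicit guard unnecessary, since argument formatting is deferred until trace is enabled.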





[jira] [Commented] (HBASE-23969) Meta browser should show all `info` columns

2020-04-21 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17089294#comment-17089294
 ] 

Mingliang Liu commented on HBASE-23969:
---

[~ndimiduk] I have implemented the idea above and posted the 
[screenshot|https://issues.apache.org/jira/secure/attachment/13000789/Screen%20Shot%202020-04-21%20at%2010.16.58%20PM.png].
 Note the horizontal scrollbar, the table header tooltip, and the tooltips for 
other {{}} are still available, though the screenshot shows just one line for 
the {{info:merge*}} column (since I have only one mouse).

> Meta browser should show all `info` columns
> ---
>
> Key: HBASE-23969
> URL: https://issues.apache.org/jira/browse/HBASE-23969
> Project: HBase
>  Issue Type: Improvement
>  Components: master, UI
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: Screen Shot 2020-04-10 at 4.02.50 AM.png, Screen Shot 
> 2020-04-11 at 3.27.57 AM.png, Screen Shot 2020-04-17 at 7.07.06 PM.png, 
> Screen Shot 2020-04-21 at 10.16.58 PM.png
>
>
> The Meta table browser lists region states. There are other {{info}} columns 
> in the table, which should be displayed. Looking through {{HConstants}}, it 
> seems we need to add the following:
>  * {{server}}
>  * {{sn}}
>  * {{splitA}}
>  * {{splitB}}
>  * {{merge}}
>  * {{mergeA}}
>  * {{mergeB}}
> Are there others?





[jira] [Updated] (HBASE-23969) Meta browser should show all `info` columns

2020-04-21 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-23969:
--
Attachment: Screen Shot 2020-04-21 at 10.16.58 PM.png

> Meta browser should show all `info` columns
> ---
>
> Key: HBASE-23969
> URL: https://issues.apache.org/jira/browse/HBASE-23969
> Project: HBase
>  Issue Type: Improvement
>  Components: master, UI
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: Screen Shot 2020-04-10 at 4.02.50 AM.png, Screen Shot 
> 2020-04-11 at 3.27.57 AM.png, Screen Shot 2020-04-17 at 7.07.06 PM.png, 
> Screen Shot 2020-04-21 at 10.16.58 PM.png
>
>
> The Meta table browser lists region states. There are other {{info}} columns 
> in the table, which should be displayed. Looking through {{HConstants}}, it 
> seems we need to add the following:
>  * {{server}}
>  * {{sn}}
>  * {{splitA}}
>  * {{splitB}}
>  * {{merge}}
>  * {{mergeA}}
>  * {{mergeB}}
> Are there others?





[jira] [Commented] (HBASE-23969) Meta browser should show all `info` columns

2020-04-21 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17089013#comment-17089013
 ] 

Mingliang Liu commented on HBASE-23969:
---

Hi [~ndimiduk] Yes, this would be useful if we can provide it, especially when 
the table is tall (vertically), so users will not have to go back to the table 
header to check what the data is about.
* Most columns in table.jsp can have a static tooltip on the data (e.g. 
ServerName has {{...}}).
* Composed information like "Server" can have {{...}} as the tooltip. 
* For the "info:split*" column, we can add a tooltip for each line, so they 
show as "info:splitA" or "info:splitB". I'll check how to add tooltips for 
different lines within the same {{}}. Honestly, I'm far from an HTML expert.
* For the "info:merge*" column in table.jsp, I did not find existing code that 
returns the value and its source, so I'll add a new method to get this.

I'll update the patch and post a screenshot. If the last two items above turn 
out to need a lot of code change, can I address them in a follow-up item?

> Meta browser should show all `info` columns
> ---
>
> Key: HBASE-23969
> URL: https://issues.apache.org/jira/browse/HBASE-23969
> Project: HBase
>  Issue Type: Improvement
>  Components: master, UI
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: Screen Shot 2020-04-10 at 4.02.50 AM.png, Screen Shot 
> 2020-04-11 at 3.27.57 AM.png, Screen Shot 2020-04-17 at 7.07.06 PM.png
>
>
> The Meta table browser lists region states. There are other {{info}} columns 
> in the table, which should be displayed. Looking through {{HConstants}}, it 
> seems we need to add the following:
>  * {{server}}
>  * {{sn}}
>  * {{splitA}}
>  * {{splitB}}
>  * {{merge}}
>  * {{mergeA}}
>  * {{mergeB}}
> Are there others?





[jira] [Commented] (HBASE-24219) Avoid redundant connection creation during Quota operation

2020-04-20 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17088149#comment-17088149
 ] 

Mingliang Liu commented on HBASE-24219:
---

Are you suggesting we switch to a non-managed connection by replacing the call 
to {{init(final Configuration conf, final Scan scan)}} with {{init(final 
Connection conn, final Scan scan)}}?

> Avoid redundant connection creation during Quota operation
> --
>
> Key: HBASE-24219
> URL: https://issues.apache.org/jira/browse/HBASE-24219
> Project: HBase
>  Issue Type: Improvement
>  Components: Quotas
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Major
>
> QuotaRetriever.init() create a managed connection with the given 
> configuration for each operation, which can be avoided by reusing existing 
> connection.





[jira] [Commented] (HBASE-24208) Remove RS entry from zk draining servers node while RS getting stopped

2020-04-17 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17086339#comment-17086339
 ] 

Mingliang Liu commented on HBASE-24208:
---

Just curious: if we remove the ZK node, will the drained status still be 
respected after the drained RS and HMaster restart?

> Remove RS entry from zk draining servers node while RS getting stopped
> --
>
> Key: HBASE-24208
> URL: https://issues.apache.org/jira/browse/HBASE-24208
> Project: HBase
>  Issue Type: Improvement
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Major
>
> When a RS is been decommissioned, we will add an entry into the zk node. This 
> will be there unless the same RS instance is recommissioned. 
> But if we want to scale down a cluster, the best path would be to 
> decommission the RSs in the scaling down nodes.  The regions in these RSs 
> will get moved to live RSs. In this case these decommissioned RSs will get 
> stopped later. These will never get recommissioned.  The zk nodes will still 
> be there under draining servers path.
> We can remove this zk node when the RS is getting stopped.





[jira] [Commented] (HBASE-23969) Meta browser should show all `info` columns

2020-04-17 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17086225#comment-17086225
 ] 

Mingliang Liu commented on HBASE-23969:
---

I implemented the above idea with less interpretation in the recent patch, and 
attached the screenshot here for discussion. Note there is a tooltip if you 
hover your mouse over a table header.

> Meta browser should show all `info` columns
> ---
>
> Key: HBASE-23969
> URL: https://issues.apache.org/jira/browse/HBASE-23969
> Project: HBase
>  Issue Type: Improvement
>  Components: master, UI
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: Screen Shot 2020-04-10 at 4.02.50 AM.png, Screen Shot 
> 2020-04-11 at 3.27.57 AM.png, Screen Shot 2020-04-17 at 7.07.06 PM.png
>
>
> The Meta table browser lists region states. There are other {{info}} columns 
> in the table, which should be displayed. Looking through {{HConstants}}, it 
> seems we need to add the following:
>  * {{server}}
>  * {{sn}}
>  * {{splitA}}
>  * {{splitB}}
>  * {{merge}}
>  * {{mergeA}}
>  * {{mergeB}}
> Are there others?





[jira] [Updated] (HBASE-23969) Meta browser should show all `info` columns

2020-04-17 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-23969:
--
Attachment: Screen Shot 2020-04-17 at 7.07.06 PM.png

> Meta browser should show all `info` columns
> ---
>
> Key: HBASE-23969
> URL: https://issues.apache.org/jira/browse/HBASE-23969
> Project: HBase
>  Issue Type: Improvement
>  Components: master, UI
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: Screen Shot 2020-04-10 at 4.02.50 AM.png, Screen Shot 
> 2020-04-11 at 3.27.57 AM.png, Screen Shot 2020-04-17 at 7.07.06 PM.png
>
>
> The Meta table browser lists region states. There are other {{info}} columns 
> in the table, which should be displayed. Looking through {{HConstants}}, it 
> seems we need to add the following:
>  * {{server}}
>  * {{sn}}
>  * {{splitA}}
>  * {{splitB}}
>  * {{merge}}
>  * {{mergeA}}
>  * {{mergeB}}
> Are there others?





[jira] [Commented] (HBASE-23969) Meta browser should show all `info` columns

2020-04-15 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17084471#comment-17084471
 ] 

Mingliang Liu commented on HBASE-23969:
---

[~stack] Good questions on the names in table.jsp. I see the existing items in 
table.jsp use meaningful names instead of the column names in the {{hbase:meta}} 
table. Specifically, the current "ServerName" in table.jsp is composed of the 
"info:server" + "info:startcode*" columns of hbase:meta.
{quote}
Would suggest no interpretation of the hbase:meta table content 
{quote}
I'm not sure zero interpretation makes perfect sense. We can make this new 
*Target ServerName* use its hbase:meta name, "info:sn". Some concerns are:
# Should the existing "ServerName" use *info:server,info:startcode*? Or should 
we split it into two columns in table.jsp, as they are in hbase:meta? Even 
then, the column name for startcode could still be *info:startcode* or 
*info:serverstartcode_*.
# As I mentioned to [~ndimiduk], there can be more than two "info:merge*" 
columns, and their names could be *mergeA*/*mergeB* or *merge*/*merge0001*. 
Showing this in table.jsp with a static format is not straightforward to me. If 
we show the raw hbase:meta data here, we may need all of the 
"mergeA/mergeB/merge/merge0001/..." columns (which are empty most of the time) 
in table.jsp. I was assuming the table.jsp viewer cares more about the parent 
regions than about the stored column name in hbase:meta, so I merged them 
together. The SplitA/SplitB names and the number of such columns in hbase:meta 
are static, so we did not merge those.
# There might eventually be other hbase:meta columns whose names differ across 
HBase versions. I guess expressing that data without any interpretation is not 
ideal either.

So, for the sake of the least "interpretation" of the hbase:meta table, my 
proposal is:
# Rename the existing *ServerName* to *Server* in table.jsp, showing the 
"info:server"+"info:startcode*" columns. Keep *Target Server* for the 
*info:sn* column, since this is clear and simple.
# Still show the *info:merge** data together in table.jsp as a 
newline-delimited list. Show *info:splitA* and *info:splitB* together as well, 
so they follow the same pattern.
# Rename the *Merger ServerName* column in table.jsp to *info:merge**, a 
pattern matching the possible column names in hbase:meta.
# Name the *SplitA/SplitB* column in table.jsp *info:split**, a pattern 
matching the column names in hbase:meta.
# Add a [tooltip|https://en.wikipedia.org/wiki/Tooltip] to each column in the 
table header, showing a detailed comment about what the column means.

Thoughts? [~ndimiduk][~vjasani]



> Meta browser should show all `info` columns
> ---
>
> Key: HBASE-23969
> URL: https://issues.apache.org/jira/browse/HBASE-23969
> Project: HBase
>  Issue Type: Improvement
>  Components: master, UI
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: Screen Shot 2020-04-10 at 4.02.50 AM.png, Screen Shot 
> 2020-04-11 at 3.27.57 AM.png
>
>
> The Meta table browser lists region states. There are other {{info}} columns 
> in the table, which should be displayed. Looking through {{HConstants}}, it 
> seems we need to add the following:
>  * {{server}}
>  * {{sn}}
>  * {{splitA}}
>  * {{splitB}}
>  * {{merge}}
>  * {{mergeA}}
>  * {{mergeB}}
> Are there others?





[jira] [Commented] (HBASE-23969) Meta browser should show all `info` columns

2020-04-11 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17081232#comment-17081232
 ] 

Mingliang Liu commented on HBASE-23969:
---

[~ndimiduk] and [~vjasani] I have addressed all comments in the latest commit 
of the [PR|https://github.com/apache/hbase/pull/1485], and I have also attached 
the recent 
[screenshot|https://issues.apache.org/jira/secure/attachment/12999626/Screen%20Shot%202020-04-11%20at%203.27.57%20AM.png].






[jira] [Updated] (HBASE-23969) Meta browser should show all `info` columns

2020-04-11 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-23969:
--
Attachment: Screen Shot 2020-04-11 at 3.27.57 AM.png






[jira] [Commented] (HBASE-23969) Meta browser should show all `info` columns

2020-04-10 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080412#comment-17080412
 ] 

Mingliang Liu commented on HBASE-23969:
---

Thanks [~ndimiduk]!

I browsed the code quickly, and I think the "ServerName" information in table.jsp 
is actually 
[using|https://github.com/apache/hbase/blob/eface7440722e4e85f7848cdbc1f975f4785f334/hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java#L952]
 the "info:server" column in the meta table. We need to add the "info:sn" 
information here. When we show it in table.jsp, can we call it 
"TransitioningOnServerName", or is there a better name? Alternatively, we could 
simply rename the existing "ServerName" column in table.jsp to "server" and call 
the new column "sn", though that does not seem very descriptive.

For merge/mergeA/mergeB: I see we now support merging multiple regions, so the 
number of parents can be more than 2. The old "mergeA" and "mergeB" qualifiers 
are deprecated in favor of "merge", "merge0001", etc. So I'm thinking the meta 
browser could show all the "merge*" values as a single string in a 
"MergeRegionName" column in table.jsp. Is this a good idea? This way we don't 
need dynamic columns in table.jsp. I guess users care more about the parent 
regions than about the exact column name in the "hbase:meta" table, be it 
"mergeA" or "merge".
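Rendering all "merge*" values as a single cell could look roughly like the sketch below. The qualifier and region names are made up for illustration; the real values come from the info family of hbase:meta.

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class MergeCell {
    // Join every value whose qualifier starts with "merge" into one
    // newline-delimited cell, sorted by qualifier so legacy ("mergeA") and
    // numbered ("merge0001") qualifiers render in a stable order.
    static String mergeRegionNames(Map<String, String> infoFamily) {
        return new TreeMap<>(infoFamily).entrySet().stream()
                .filter(e -> e.getKey().startsWith("merge"))
                .map(Map.Entry::getValue)
                .collect(Collectors.joining("\n"));
    }

    public static void main(String[] args) {
        Map<String, String> info = Map.of(
                "server", "host1:16020",
                "mergeA", "region-parent-a",
                "merge0001", "region-parent-b");
        System.out.println(mergeRegionNames(info));
    }
}
```

The display column then stays fixed no matter how many parent regions a merge produced.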

Lastly, is the sequence number at region open required? We can add that as well, 
since it's already in the location information.

I have a basic patch and have attached the screenshot here. It enables 
horizontal scrolling and disables wrapping for long text.






[jira] [Updated] (HBASE-23969) Meta browser should show all `info` columns

2020-04-10 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-23969:
--
Attachment: Screen Shot 2020-04-10 at 4.02.50 AM.png






[jira] [Commented] (HBASE-23969) Meta browser should show all `info` columns

2020-04-09 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079037#comment-17079037
 ] 

Mingliang Liu commented on HBASE-23969:
---

Thanks [~ndimiduk]. Yes, the wide scrolling page seems good for this, and we can 
add the column selector as a follow-up effort.

For the "server" column, I see there is an existing "ServerName" column showing 
in the Meta browser. Is that the same information already covered as "server"?






[jira] [Assigned] (HBASE-23969) Meta browser should show all `info` columns

2020-04-08 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HBASE-23969:
-

Component/s: UI
   Assignee: Mingliang Liu

Hi [~ndimiduk], I can work on this one. I'm currently checking the meta browser 
locally. I'm wondering whether the page is wide enough to show all those 
columns. I'm not sure which is better: soft-wrapping some columns, or a 
horizontal scroll bar.






[jira] [Commented] (HBASE-23748) Include HBASE-21284 to branch-2.2

2020-01-30 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17027203#comment-17027203
 ] 

Mingliang Liu commented on HBASE-23748:
---

Yes, I see the patch is good. Shall we resolve this JIRA? 

> Include HBASE-21284 to branch-2.2
> -
>
> Key: HBASE-23748
> URL: https://issues.apache.org/jira/browse/HBASE-23748
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
> Fix For: 2.2.3
>
> Attachments: hbase-23748.branch-2.2.001.patch
>
>
> HBASE-21284 ought to be present in 2.2. But by the time the commit was 
> done, the branch had already been cut. Hence this Jira to track its 
> inclusion.





[jira] [Commented] (HBASE-23366) Test failure due to flaky tests on ppc64le

2019-12-18 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16999592#comment-16999592
 ] 

Mingliang Liu commented on HBASE-23366:
---

[~AK2019] I'm not seeing these errors in our daily build because we are not on 
the 2.2 release. The tests are failing for different reasons, so you can check 
the status of each failing test by searching JIRA. Perhaps they have been 
addressed in later releases; see e.g. 
https://issues.apache.org/jira/browse/HBASE-22637?jql=text%20~%20TestMetaTableMetrics%20ORDER%20BY%20created%20DESC
If they have been fixed in a later release, you can either backport the fix to 
your fork, or upgrade your version if that fits.

> Test failure due to flaky tests on ppc64le
> --
>
> Key: HBASE-23366
> URL: https://issues.apache.org/jira/browse/HBASE-23366
> Project: HBase
>  Issue Type: Test
>Affects Versions: 2.2.0
> Environment: {color:#172b4d}os: rhel 7.6{color}
> {color:#172b4d} arch: ppc64le{color}
>Reporter: AK97
>Priority: Major
>
> I have been trying to build the Apache Hbase on rhel_7.6/ppc64le. The build 
> passes, however it leads to flaky test failures in module hbase-server.
> All the tests pass most of the times when run individually.
> Following is the list of the tests that fail often:
>  * TestMetaTableMetrics
>  * TestMasterAbortWhileMergingTable
>  * TestSnapshotFromMaster
>  * TestReplicationAdminWithClusters
>  * TestAsyncDecommissionAdminApi
>  * TestCompactSplitThread
>  
>    
> I am on branch rel/2.2.0
> {color:#172b4d}Would like some help understanding the cause of these failures. 
> I am running on a high-end VM with good connectivity.{color}





[jira] [Comment Edited] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently

2019-12-17 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16998453#comment-16998453
 ] 

Mingliang Liu edited comment on HBASE-22607 at 12/17/19 6:21 PM:
-

Thanks for confirming, [~AK2019]. The previous v1 full patch, as mentioned 
above, was to offer a fix and to introduce a change that tests that fix. To 
avoid confusion, I have uploaded the v2 addendum change, which only contains the 
fix. I will leave it to [~stack] to decide whether we need that in our code.


was (Author: liuml07):
Thanks for confirming, [~AK2019]. The previous v1 full patch, as mentioned 
above, was to introduce a change to test that patch. To avoid confusion, I have 
uploaded the v2 addendum change which only contains the fix. I will leave to 
[~stack] if we need that in our code.

[jira] [Commented] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently

2019-12-17 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16998453#comment-16998453
 ] 

Mingliang Liu commented on HBASE-22607:
---

Thanks for confirming, [~AK2019]. The previous v1 full patch, as mentioned 
above, was to introduce a change to test that patch. To avoid confusion, I have 
uploaded the v2 addendum change, which only contains the fix. I will leave it to 
[~stack] to decide whether we need that in our code.

> TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() 
> fails intermittently
> -
>
> Key: HBASE-22607
> URL: https://issues.apache.org/jira/browse/HBASE-22607
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.2.0, 2.0.6
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9
>
> Attachments: HBASE-22607.000.patch, HBASE-22607.001.patch, 
> HBASE-22607.002.patch, HBASE-22607.addendum.000.patch, 
> HBASE-22607.addendum.001.patch, HBASE-22607.addendum.002.patch
>
>
> In previous runs, test 
> {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} 
> fails intermittently with {{java.net.ConnectException: Connection refused}} 
> exception, see build 
> [510|https://builds.apache.org/job/PreCommit-HBASE-Build/510/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  
> [545|https://builds.apache.org/job/PreCommit-HBASE-Build/545/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  and 
> [556|https://builds.apache.org/job/PreCommit-HBASE-Build/556/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/].
> So one sample exception is like:
> {quote}
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
>   at com.sun.proxy.$Proxy20.getListing(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1630)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1537)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1580)
>   at 
> org.apache.hadoop.hbase.util.CommonFSUtils.listStatus(CommonFSUtils.java:693)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getCurrentTableInfoStatus(FSTableDescriptors.java:448)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:429)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:410)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:763)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createTable(SnapshotTestingUtils.java:675)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:653)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:647)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshotV2(SnapshotTestingUtils.java:637)
>   at 
> org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:80)
> {quote}
> It seems that somehow the rootdir filesystem is not LocalFileSystem but 
> HDFS. I have not dug deeper into why this happens, since it fails 
> intermittently and I cannot reproduce it locally. Since this is testing the 
> export snapshot tool without a cluster, we can enforce LocalFileSystem; no 
> breaking change.




[jira] [Updated] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently

2019-12-17 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-22607:
--
Attachment: HBASE-22607.addendum.002.patch






[jira] [Commented] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently

2019-12-13 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995824#comment-16995824
 ] 

Mingliang Liu commented on HBASE-22607:
---

[~AK2019] Thanks for providing more context. This is an interesting scenario. I 
have never reproduced the UT failure myself. Your use case could be common, so I 
dug further.

{quote}
The error seemed to be in the TestExportSnapshotNoCluster class. Correct me if I 
am wrong.
{quote}
The test error is coming from {{TestExportSnapshot}}, because 
{{TestExportSnapshotNoCluster}} uses the static helper test method 
{{TestExportSnapshot::testExportFileSystemState}}. So the v0 addendum patch was 
to fix {{TestExportSnapshot}}.

{quote}
I changed hbase-mapreduce/target/test-classes/hbase-site.xml by replacing 
'hdfs://localhost:35345' to  
'file:/hbase/hbase-mapreduce/target/test-data/35120b7a-8ae0-1738-09a2-497820fe4ff9/.hbase-snapshot/tableWithRefsV1'
 that solved the error. 
{quote}
This is very interesting. First, I don't see the default FS being set in the 
{{hbase-mapreduce/src/test/resources/hbase-site.xml}} source file, so I'm not 
sure what changes that file to an HDFS value. If it were missing, each UT 
(mostly via {{HBaseTestingUtility}}) would have the default FS set to the 
MiniDFS cluster path. However, this UT expects no cluster, and I assume we don't 
need a DFS cluster either. So in 
{{hbase-mapreduce/target/test-classes/hbase-site.xml}}, a stale value might have 
been left behind by other test classes?

To solve that, you can:
- delete {{hdfs://localhost:35345}} in your 
{{hbase-mapreduce/target/test-classes/hbase-site.xml}} file, or
- add {{conf.set(FileSystem.FS_DEFAULT_NAME_KEY, testDir.toString());}} in the 
{{TestExportSnapshotNoCluster::setUpBaseConf}} method.
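The first option amounts to pinning the default filesystem back to local in the generated test config. A hypothetical fragment for {{hbase-mapreduce/target/test-classes/hbase-site.xml}} could look like this ({{fs.defaultFS}} is the property name behind {{FileSystem.FS_DEFAULT_NAME_KEY}}; the exact value to use depends on your test directory):

```xml
<!-- Hypothetical override: pin the default FS back to the local filesystem
     so the no-cluster test never dials a stale hdfs://... address. -->
<property>
  <name>fs.defaultFS</name>
  <value>file:///</value>
</property>
```

Either form keeps the no-cluster test entirely off HDFS.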

I have attached a new patch [^HBASE-22607.addendum.001.patch] which fails if we 
take neither of the above steps, and succeeds if we take either one. Please try 
it.

As I think this is a corner case where 
{{hbase-mapreduce/target/test-classes/hbase-site.xml}} has somehow been 
corrupted, I guess we can push this one-line fix into our code? We would need to 
revert the change to {{hbase-mapreduce/src/test/resources/hbase-site.xml}} in 
the above patch. I can prepare a new JIRA if needed. [~stack]

 


[jira] [Updated] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently

2019-12-13 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-22607:
--
Attachment: HBASE-22607.addendum.001.patch

> TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() 
> fails intermittently
> -
>
> Key: HBASE-22607
> URL: https://issues.apache.org/jira/browse/HBASE-22607
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.2.0, 2.0.6
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9
>
> Attachments: HBASE-22607.000.patch, HBASE-22607.001.patch, 
> HBASE-22607.002.patch, HBASE-22607.addendum.000.patch, 
> HBASE-22607.addendum.001.patch
>
>
> In previous runs, test 
> {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} 
> fails intermittently with {{java.net.ConnectException: Connection refused}} 
> exception, see build 
> [510|https://builds.apache.org/job/PreCommit-HBASE-Build/510/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  
> [545|https://builds.apache.org/job/PreCommit-HBASE-Build/545/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  and 
> [556|https://builds.apache.org/job/PreCommit-HBASE-Build/556/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/].
> So one sample exception is like:
> {quote}
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
>   at com.sun.proxy.$Proxy20.getListing(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1630)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1537)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1580)
>   at 
> org.apache.hadoop.hbase.util.CommonFSUtils.listStatus(CommonFSUtils.java:693)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getCurrentTableInfoStatus(FSTableDescriptors.java:448)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:429)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:410)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:763)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createTable(SnapshotTestingUtils.java:675)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:653)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:647)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshotV2(SnapshotTestingUtils.java:637)
>   at 
> org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:80)
> {quote}
> It seems that, somehow, the rootdir filesystem is not LocalFileSystem but 
> HDFS. I have not dug deeper into why this happens, since it fails 
> intermittently and I cannot reproduce it locally. Since this tests the 
> export snapshot tool without a cluster, we can enforce LocalFileSystem; 
> this is not a breaking change.
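The "enforce LocalFileSystem" idea above can be sketched, in plain JDK terms, as a guard on the rootdir's URI scheme. This is only an illustration: the actual fix goes through Hadoop's Configuration, and the class and method names below are hypothetical.

```java
import java.net.URI;

public class EnforceLocalRoot {

    // Hypothetical guard mirroring the described fix: a no-cluster test
    // should only ever see a file:// rootdir, never an hdfs:// one.
    static URI enforceLocal(URI rootDir) {
        if (!"file".equals(rootDir.getScheme())) {
            throw new IllegalStateException("expected local rootdir, got " + rootDir);
        }
        return rootDir;
    }

    public static void main(String[] args) {
        // A local rootdir passes through unchanged.
        System.out.println(enforceLocal(URI.create("file:///tmp/hbase-root")));
        // An HDFS rootdir is rejected, turning the intermittent failure
        // into a deterministic one.
        try {
            enforceLocal(URI.create("hdfs://namenode:8020/hbase"));
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```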



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-23574) TestFixKerberosTicketOrder fails intermittently

2019-12-12 Thread Mingliang Liu (Jira)
Mingliang Liu created HBASE-23574:
-

 Summary: TestFixKerberosTicketOrder fails intermittently
 Key: HBASE-23574
 URL: https://issues.apache.org/jira/browse/HBASE-23574
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Mingliang Liu


One example is at: 
[https://builds.apache.org/job/hadoop-multibranch/job/PR-1757/3/testReport/org.apache.hadoop.security/TestFixKerberosTicketOrder/test/]

 

Sample stack:
{code:java}
org.apache.hadoop.security.KerberosAuthException: failure to login: for 
principal: client from keytab 
/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1757/src/hadoop-common-project/hadoop-common/target/keytab
 javax.security.auth.login.LoginException: Invalid argument (400) - Cannot find 
key for type/kvno to decrypt AS REP - AES128 CTS mode with HMAC SHA1-96/1
at 
org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1972)
at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1348)
at 
org.apache.hadoop.security.TestFixKerberosTicketOrder.test(TestFixKerberosTicketOrder.java:81)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: javax.security.auth.login.LoginException: Invalid argument (400) - 
Cannot find key for type/kvno to decrypt AS REP - AES128 CTS mode with HMAC 
SHA1-96/1
at 
com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:804)
at 
com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at 
javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at 
javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at 
org.apache.hadoop.security.UserGroupInformation$HadoopLoginContext.login(UserGroupInformation.java:2051)
at 
org.a

[jira] [Commented] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently

2019-12-12 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995347#comment-16995347
 ] 

Mingliang Liu commented on HBASE-22607:
---

[~AK2019] Could you share the command used to run the test, and the full 
error output? Thanks.

> TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() 
> fails intermittently
> -
>
> Key: HBASE-22607
> URL: https://issues.apache.org/jira/browse/HBASE-22607
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.2.0, 2.0.6
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9
>
> Attachments: HBASE-22607.000.patch, HBASE-22607.001.patch, 
> HBASE-22607.002.patch, HBASE-22607.addendum.000.patch
>
>
> In previous runs, test 
> {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} 
> fails intermittently with {{java.net.ConnectException: Connection refused}} 
> exception, see build 
> [510|https://builds.apache.org/job/PreCommit-HBASE-Build/510/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  
> [545|https://builds.apache.org/job/PreCommit-HBASE-Build/545/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  and 
> [556|https://builds.apache.org/job/PreCommit-HBASE-Build/556/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/].
> A sample exception looks like:
> {quote}
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
>   at com.sun.proxy.$Proxy20.getListing(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1630)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1537)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1580)
>   at 
> org.apache.hadoop.hbase.util.CommonFSUtils.listStatus(CommonFSUtils.java:693)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getCurrentTableInfoStatus(FSTableDescriptors.java:448)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:429)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:410)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:763)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createTable(SnapshotTestingUtils.java:675)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:653)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:647)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshotV2(SnapshotTestingUtils.java:637)
>   at 
> org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:80)
> {quote}
> It seems that, somehow, the rootdir filesystem is not LocalFileSystem but 
> HDFS. I have not dug deeper into why this happens, since it fails 
> intermittently and I cannot reproduce it locally. Since this tests the 
> export snapshot tool without a cluster, we can enforce LocalFileSystem; 
> this is not a breaking change.





[jira] [Comment Edited] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently

2019-12-12 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995257#comment-16995257
 ] 

Mingliang Liu edited comment on HBASE-22607 at 12/13/19 1:16 AM:
-

[~AK2019] That is interesting.

Can you reproduce this consistently? If so, the problem might be easier to 
debug. I cannot debug it here because I have never seen it across multiple runs.
{code}
git checkout rel/2.2.0
commit=$(git log master | grep -B 5 HBASE-22607 | grep commit | awk '{print $2}')
git cherry-pick $commit
mvn clean package
mvn test -Dtest=TestExportSnapshotNoCluster
{code}


I checked the line number, and it is not clear which line errors out in 
{{testSnapshotWithRefsExportFileSystemState}}. My guess is line 216 of 
{{TestExportSnapshot}}.
{code:title=TestExportSnapshot.java:216}
copyDir = copyDir.makeQualified(fs);
{code}

If so, the {{fs}} is created using a new Configuration which is NOT patched as 
in {{TestExportSnapshotNoCluster}}. Could you try the addendum diff 
[^HBASE-22607.addendum.000.patch]? Hopefully it will fix this. Otherwise we 
may have to debug further, which is perhaps orthogonal to this patch.
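The suspected scheme mismatch can be illustrated without Hadoop: qualifying the same relative path against two filesystems with different schemes yields different absolute URIs, which is roughly what {{makeQualified}} does. The snippet below is a plain-JDK analogy under that assumption, not the actual Hadoop code path, and the hostname is made up.

```java
import java.net.URI;

public class QualifyAnalogy {
    public static void main(String[] args) {
        // Analogy for Path.makeQualified(fs): a bare path takes on the
        // scheme and authority of whichever filesystem qualifies it. If the
        // FileSystem came from an unpatched Configuration, the cluster
        // default (e.g. hdfs://) wins instead of file://.
        String relative = "user/test/snapshot-dir";
        URI local = URI.create("file:///").resolve(relative);
        URI hdfs = URI.create("hdfs://namenode:8020/").resolve(relative);
        System.out.println(local); // file:///user/test/snapshot-dir
        System.out.println(hdfs);  // hdfs://namenode:8020/user/test/snapshot-dir
    }
}
```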



was (Author: liuml07):
[~AK2019] That is interesting.

Can you reproduce this consistently? If so, the problem might be easier to 
debug. I can not debug here because I never see this with multiple runs.
{code}
git checkout rel/2.2.0
commit=$(git log master | grep -B 5 HBASE-22607 | grep commit | awk '{print $2}')
git cherry-pick $commit
mvn clean package
mvn test -Dtest=TestExportSnapshotNoCluster
{code}


So I check the line number and it is not very clear which line error out in 
{{testSnapshotWithRefsExportFileSystemState(}}. I guess it's in LoC 216 of 
{{TestExportSnapshot}}. If so, the fs is created using new Configuration which 
is patched as in  {{TestExportSnapshotNoCluster}}.
{code:title=TestExportSnapshot.java:216}
copyDir = copyDir.makeQualified(fs);
{code}

Could you try the addendum diff  [^HBASE-22607.addendum.000.patch] ? Hopefully 
it will fix this. Otherwise we may have to debug further, which perhaps is 
orthogonal to this patch.


> TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() 
> fails intermittently
> -
>
> Key: HBASE-22607
> URL: https://issues.apache.org/jira/browse/HBASE-22607
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.2.0, 2.0.6
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9
>
> Attachments: HBASE-22607.000.patch, HBASE-22607.001.patch, 
> HBASE-22607.002.patch, HBASE-22607.addendum.000.patch
>
>
> In previous runs, test 
> {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} 
> fails intermittently with {{java.net.ConnectException: Connection refused}} 
> exception, see build 
> [510|https://builds.apache.org/job/PreCommit-HBASE-Build/510/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  
> [545|https://builds.apache.org/job/PreCommit-HBASE-Build/545/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  and 
> [556|https://builds.apache.org/job/PreCommit-HBASE-Build/556/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/].
> A sample exception looks like:
> {quote}
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
>   at com.sun.proxy.$Proxy20.getListing(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1630)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1537)
>   at org.apache.hadoop.fs

[jira] [Commented] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently

2019-12-12 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995257#comment-16995257
 ] 

Mingliang Liu commented on HBASE-22607:
---

[~AK2019] That is interesting.

Can you reproduce this consistently? If so, the problem might be easier to 
debug. I cannot debug it here because I have never seen it across multiple runs.
{code}
git checkout rel/2.2.0
commit=$(git log master | grep -B 5 HBASE-22607 | grep commit | awk '{print $2}')
git cherry-pick $commit
mvn clean package
mvn test -Dtest=TestExportSnapshotNoCluster
{code}


I checked the line number, and it is not clear which line errors out in 
{{testSnapshotWithRefsExportFileSystemState}}. My guess is line 216 of 
{{TestExportSnapshot}}. If so, the {{fs}} is created using a new Configuration 
which is NOT patched as in {{TestExportSnapshotNoCluster}}.
{code:title=TestExportSnapshot.java:216}
copyDir = copyDir.makeQualified(fs);
{code}

Could you try the addendum diff [^HBASE-22607.addendum.000.patch]? Hopefully 
it will fix this. Otherwise we may have to debug further, which is perhaps 
orthogonal to this patch.


> TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() 
> fails intermittently
> -
>
> Key: HBASE-22607
> URL: https://issues.apache.org/jira/browse/HBASE-22607
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.2.0, 2.0.6
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9
>
> Attachments: HBASE-22607.000.patch, HBASE-22607.001.patch, 
> HBASE-22607.002.patch, HBASE-22607.addendum.000.patch
>
>
> In previous runs, test 
> {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} 
> fails intermittently with {{java.net.ConnectException: Connection refused}} 
> exception, see build 
> [510|https://builds.apache.org/job/PreCommit-HBASE-Build/510/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  
> [545|https://builds.apache.org/job/PreCommit-HBASE-Build/545/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  and 
> [556|https://builds.apache.org/job/PreCommit-HBASE-Build/556/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/].
> A sample exception looks like:
> {quote}
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
>   at com.sun.proxy.$Proxy20.getListing(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1630)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1537)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1580)
>   at 
> org.apache.hadoop.hbase.util.CommonFSUtils.listStatus(CommonFSUtils.java:693)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getCurrentTableInfoStatus(FSTableDescriptors.java:448)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:429)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:410)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:763)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createTable(SnapshotTestingUtils.java:675)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:653)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:647)
>   at 
> org.apache.hadoop.hbase.snapshot.Sn

[jira] [Updated] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently

2019-12-12 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-22607:
--
Attachment: HBASE-22607.addendum.000.patch

> TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() 
> fails intermittently
> -
>
> Key: HBASE-22607
> URL: https://issues.apache.org/jira/browse/HBASE-22607
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.2.0, 2.0.6
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9
>
> Attachments: HBASE-22607.000.patch, HBASE-22607.001.patch, 
> HBASE-22607.002.patch, HBASE-22607.addendum.000.patch
>
>
> In previous runs, test 
> {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} 
> fails intermittently with {{java.net.ConnectException: Connection refused}} 
> exception, see build 
> [510|https://builds.apache.org/job/PreCommit-HBASE-Build/510/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  
> [545|https://builds.apache.org/job/PreCommit-HBASE-Build/545/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  and 
> [556|https://builds.apache.org/job/PreCommit-HBASE-Build/556/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/].
> A sample exception looks like:
> {quote}
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
>   at com.sun.proxy.$Proxy20.getListing(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1630)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1537)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1580)
>   at 
> org.apache.hadoop.hbase.util.CommonFSUtils.listStatus(CommonFSUtils.java:693)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getCurrentTableInfoStatus(FSTableDescriptors.java:448)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:429)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:410)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:763)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createTable(SnapshotTestingUtils.java:675)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:653)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:647)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshotV2(SnapshotTestingUtils.java:637)
>   at 
> org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:80)
> {quote}
> It seems that, somehow, the rootdir filesystem is not LocalFileSystem but 
> HDFS. I have not dug deeper into why this happens, since it fails 
> intermittently and I cannot reproduce it locally. Since this tests the 
> export snapshot tool without a cluster, we can enforce LocalFileSystem; 
> this is not a breaking change.





[jira] [Commented] (HBASE-23313) [hbck2] setRegionState should update Master in-memory state too

2019-11-20 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978248#comment-16978248
 ] 

Mingliang Liu commented on HBASE-23313:
---

> I don't see an RPC for this now, I guess we would need that added.

Thanks for confirming. I guess we will then need the change to go all the way 
down to MetaTableAccessor...?

> [hbck2] setRegionState should update Master in-memory state too
> ---
>
> Key: HBASE-23313
> URL: https://issues.apache.org/jira/browse/HBASE-23313
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2
>Reporter: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
>
> setRegionState changes the hbase:meta table info:state column. It does not 
> alter the Master's in-memory state. This means you have to kill Master and 
> have another assume Active Master role of a state-change to be noticed. 
> Better if the setRegionState just went via Master and updated Master and 
> hbase:meta.





[jira] [Commented] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently

2019-11-20 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978218#comment-16978218
 ] 

Mingliang Liu commented on HBASE-22607:
---

I see this still happening; see 
https://builds.apache.org/job/PreCommit-HBASE-Build/1012/testReport/. Perhaps 
we can try this patch, since it does not hurt.

> TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() 
> fails intermittently
> -
>
> Key: HBASE-22607
> URL: https://issues.apache.org/jira/browse/HBASE-22607
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.2.0, 2.0.6
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Attachments: HBASE-22607.000.patch, HBASE-22607.001.patch, 
> HBASE-22607.002.patch
>
>
> In previous runs, test 
> {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} 
> fails intermittently with {{java.net.ConnectException: Connection refused}} 
> exception, see build 
> [510|https://builds.apache.org/job/PreCommit-HBASE-Build/510/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  
> [545|https://builds.apache.org/job/PreCommit-HBASE-Build/545/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  and 
> [556|https://builds.apache.org/job/PreCommit-HBASE-Build/556/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/].
> A sample exception looks like:
> {quote}
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
>   at com.sun.proxy.$Proxy20.getListing(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1630)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1537)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1580)
>   at 
> org.apache.hadoop.hbase.util.CommonFSUtils.listStatus(CommonFSUtils.java:693)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getCurrentTableInfoStatus(FSTableDescriptors.java:448)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:429)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:410)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:763)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createTable(SnapshotTestingUtils.java:675)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:653)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:647)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshotV2(SnapshotTestingUtils.java:637)
>   at 
> org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:80)
> {quote}
> It seems that, somehow, the rootdir filesystem is not LocalFileSystem but 
> HDFS. I have not dug deeper into why this happens, since it fails 
> intermittently and I cannot reproduce it locally. Since this tests the 
> export snapshot tool without a cluster, we can enforce LocalFileSystem; 
> this is not a breaking change.





[jira] [Commented] (HBASE-23313) [hbck2] setRegionState should update Master in-memory state too

2019-11-20 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978200#comment-16978200
 ] 

Mingliang Liu commented on HBASE-23313:
---

{quote}
Better if the setRegionState just went via Master
{quote}

Do we have an RPC call for this now? 

> [hbck2] setRegionState should update Master in-memory state too
> ---
>
> Key: HBASE-23313
> URL: https://issues.apache.org/jira/browse/HBASE-23313
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2
>Reporter: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
>
> setRegionState changes the hbase:meta table info:state column. It does not 
> alter the Master's in-memory state. This means you have to kill the Master and 
> have another assume the Active Master role for a state-change to be noticed. 
> Better if the setRegionState just went via Master and updated Master and 
> hbase:meta.





[jira] [Comment Edited] (HBASE-23314) Make HBaseObjectStoreSemantics FilterFileSystem

2019-11-19 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16977647#comment-16977647
 ] 

Mingliang Liu edited comment on HBASE-23314 at 11/19/19 5:29 PM:
-

Thank you very much for prompt review and commit [~apurtell] [~wchevreuil]! I 
have filed a new ticket [HADOOP-16722] in Hadoop.


was (Author: liuml07):
Thank you very much for prompt review and commit [~apurtell] [~wchevreuil]! I 
have filed a new ticket [[HADOOP-16722]] in Hadoop.

> Make HBaseObjectStoreSemantics FilterFileSystem
> ---
>
> Key: HBASE-23314
> URL: https://issues.apache.org/jira/browse/HBASE-23314
> Project: HBase
>  Issue Type: Improvement
>  Components: hboss
>Affects Versions: 1.0.0-alpha1
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Fix For: 1.0.0-alpha2
>
>
> HBaseObjectStoreSemantics, as a wrapper around an object store file system 
> implementation, currently extends FileSystem itself. There is no 
> straightforward way to expose its wrapped file system. However, some tooling 
> needs to operate on the wrapped object store file system; e.g., S3GuardTool 
> expects the file system implementation to be S3A so it can access the 
> metadata store easily. A simple S3GuardTool run against HBOSS will get a 
> confusing error like "s3a://mybucket is not a S3A file system".
> Let's make HBaseObjectStoreSemantics a FilterFileSystem so that places like 
> S3GuardTool can use {{getRawFileSystem()}} to retrieve the wrapped file 
> system. Doing this should not break the HBOSS contract.
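
The wrap-and-unwrap pattern proposed above can be sketched as follows. This is a hypothetical illustration: the `FileSystem`, `S3AFileSystem`, and `FilterFileSystem` classes here are tiny stand-ins, not the real Hadoop classes, and only `getRawFileSystem()` mirrors the actual FilterFileSystem method.

```java
// Hypothetical sketch of the FilterFileSystem idea. FileSystem, S3AFileSystem
// and FilterFileSystem are simplified stand-ins, not Hadoop's real classes;
// only getRawFileSystem() mirrors the actual FilterFileSystem method.
public class HbossUnwrapSketch {
  static class FileSystem {
    String scheme() { return "file"; }
  }

  static class S3AFileSystem extends FileSystem {
    @Override
    String scheme() { return "s3a"; }
  }

  // Delegates every call to the wrapped instance, but can also reveal it.
  static class FilterFileSystem extends FileSystem {
    protected final FileSystem fs;
    FilterFileSystem(FileSystem fs) { this.fs = fs; }
    @Override
    String scheme() { return fs.scheme(); }
    FileSystem getRawFileSystem() { return fs; }
  }

  // HBOSS-like wrapper: behaves as the wrapped store, yet tooling such as
  // S3GuardTool could unwrap it instead of rejecting the wrapper type.
  static class HBaseObjectStoreSemantics extends FilterFileSystem {
    HBaseObjectStoreSemantics(FileSystem fs) { super(fs); }
  }

  public static void main(String[] args) {
    HBaseObjectStoreSemantics hboss =
        new HBaseObjectStoreSemantics(new S3AFileSystem());
    // A tool that needs the concrete store unwraps it rather than
    // type-checking the wrapper itself.
    System.out.println(hboss.getRawFileSystem().scheme()); // prints "s3a"
  }
}
```

With this shape, a check like "is this an S3A file system" can be run against the unwrapped instance instead of failing on the wrapper.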





[jira] [Commented] (HBASE-23314) Make HBaseObjectStoreSemantics FilterFileSystem

2019-11-19 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16977647#comment-16977647
 ] 

Mingliang Liu commented on HBASE-23314:
---

Thank you very much for prompt review and commit [~apurtell] [~wchevreuil]! I 
have filed a new ticket [[HADOOP-16722]] in Hadoop.

> Make HBaseObjectStoreSemantics FilterFileSystem
> ---
>
> Key: HBASE-23314
> URL: https://issues.apache.org/jira/browse/HBASE-23314
> Project: HBase
>  Issue Type: Improvement
>  Components: hboss
>Affects Versions: 1.0.0-alpha1
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Fix For: 1.0.0-alpha2
>
>
> HBaseObjectStoreSemantics, as a wrapper around an object store file system 
> implementation, currently extends FileSystem itself. There is no 
> straightforward way to expose its wrapped file system. However, some tooling 
> needs to operate on the wrapped object store file system; e.g., S3GuardTool 
> expects the file system implementation to be S3A so it can access the 
> metadata store easily. A simple S3GuardTool run against HBOSS will get a 
> confusing error like "s3a://mybucket is not a S3A file system".
> Let's make HBaseObjectStoreSemantics a FilterFileSystem so that places like 
> S3GuardTool can use {{getRawFileSystem()}} to retrieve the wrapped file 
> system. Doing this should not break the HBOSS contract.





[jira] [Updated] (HBASE-23314) Make HBaseObjectStoreSemantics FilterFileSystem

2019-11-18 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-23314:
--
Status: Patch Available  (was: Open)

> Make HBaseObjectStoreSemantics FilterFileSystem
> ---
>
> Key: HBASE-23314
> URL: https://issues.apache.org/jira/browse/HBASE-23314
> Project: HBase
>  Issue Type: Improvement
>  Components: hboss
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
>
> HBaseObjectStoreSemantics, as a wrapper around an object store file system 
> implementation, currently extends FileSystem itself. There is no 
> straightforward way to expose its wrapped file system. However, some tooling 
> needs to operate on the wrapped object store file system; e.g., S3GuardTool 
> expects the file system implementation to be S3A so it can access the 
> metadata store easily. A simple S3GuardTool run against HBOSS will get a 
> confusing error like "s3a://mybucket is not a S3A file system".
> Let's make HBaseObjectStoreSemantics a FilterFileSystem so that places like 
> S3GuardTool can use {{getRawFileSystem()}} to retrieve the wrapped file 
> system. Doing this should not break the HBOSS contract.





[jira] [Commented] (HBASE-23314) Make HBaseObjectStoreSemantics FilterFileSystem

2019-11-18 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16976958#comment-16976958
 ] 

Mingliang Liu commented on HBASE-23314:
---

[~wchevreuil], [~mackrorysd] and [~ste...@apache.org] Does this make sense? 
Thanks,

> Make HBaseObjectStoreSemantics FilterFileSystem
> ---
>
> Key: HBASE-23314
> URL: https://issues.apache.org/jira/browse/HBASE-23314
> Project: HBase
>  Issue Type: Improvement
>  Components: hboss
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
>
> HBaseObjectStoreSemantics, as a wrapper around an object store file system 
> implementation, currently extends FileSystem itself. There is no 
> straightforward way to expose its wrapped file system. However, some tooling 
> needs to operate on the wrapped object store file system; e.g., S3GuardTool 
> expects the file system implementation to be S3A so it can access the 
> metadata store easily. A simple S3GuardTool run against HBOSS will get a 
> confusing error like "s3a://mybucket is not a S3A file system".
> Let's make HBaseObjectStoreSemantics a FilterFileSystem so that places like 
> S3GuardTool can use {{getRawFileSystem()}} to retrieve the wrapped file 
> system. Doing this should not break the HBOSS contract.





[jira] [Updated] (HBASE-23314) Make HBaseObjectStoreSemantics FilterFileSystem

2019-11-18 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-23314:
--
Issue Type: Improvement  (was: New Feature)

> Make HBaseObjectStoreSemantics FilterFileSystem
> ---
>
> Key: HBASE-23314
> URL: https://issues.apache.org/jira/browse/HBASE-23314
> Project: HBase
>  Issue Type: Improvement
>  Components: hboss
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
>
> HBaseObjectStoreSemantics, as a wrapper around an object store file system 
> implementation, currently extends FileSystem itself. There is no 
> straightforward way to expose its wrapped file system. However, some tooling 
> needs to operate on the wrapped object store file system; e.g., S3GuardTool 
> expects the file system implementation to be S3A so it can access the 
> metadata store easily. A simple S3GuardTool run against HBOSS will get a 
> confusing error like "s3a://mybucket is not a S3A file system".
> Let's make HBaseObjectStoreSemantics a FilterFileSystem so that places like 
> S3GuardTool can use {{getRawFileSystem()}} to retrieve the wrapped file 
> system. Doing this should not break the HBOSS contract.





[jira] [Created] (HBASE-23314) Make HBaseObjectStoreSemantics FilterFileSystem

2019-11-18 Thread Mingliang Liu (Jira)
Mingliang Liu created HBASE-23314:
-

 Summary: Make HBaseObjectStoreSemantics FilterFileSystem
 Key: HBASE-23314
 URL: https://issues.apache.org/jira/browse/HBASE-23314
 Project: HBase
  Issue Type: New Feature
  Components: hboss
Reporter: Mingliang Liu
Assignee: Mingliang Liu


HBaseObjectStoreSemantics, as a wrapper around an object store file system 
implementation, currently extends FileSystem itself. There is no 
straightforward way to expose its wrapped file system. However, some tooling 
needs to operate on the wrapped object store file system; e.g., S3GuardTool 
expects the file system implementation to be S3A so it can access the 
metadata store easily. A simple S3GuardTool run against HBOSS will get a 
confusing error like "s3a://mybucket is not a S3A file system".

Let's make HBaseObjectStoreSemantics a FilterFileSystem so that places like 
S3GuardTool can use {{getRawFileSystem()}} to retrieve the wrapped file 
system. Doing this should not break the HBOSS contract.





[jira] [Commented] (HBASE-23289) Update book links to Hadoop wiki

2019-11-16 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16975931#comment-16975931
 ] 

Mingliang Liu commented on HBASE-23289:
---

The v0 patch fixes all links to the Hadoop wiki that I found (using the `rg` 
command) in both code and book. One page, about decommissioning DataNodes, was 
missing from both the old and the new Hadoop wiki, so I replaced it with the 
official HDFS site doc. Another link in code, about LZO compression, was 
clearly marked out-of-date in the Hadoop wiki, so I replaced it with the 
book's LZO link.

> Update book links to Hadoop wiki
> 
>
> Key: HBASE-23289
> URL: https://issues.apache.org/jira/browse/HBASE-23289
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Major
> Attachments: HBASE-23289.000.patch
>
>
> Seems Hadoop has moved their wiki, so now links throughout our book are 
> broken. We've found and fixed a couple one-offs, but we should do a sweep and 
> clean up the rest.





[jira] [Updated] (HBASE-23289) Update links to Hadoop wiki in code and book

2019-11-16 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-23289:
--
Status: Patch Available  (was: Open)

> Update links to Hadoop wiki in code and book
> 
>
> Key: HBASE-23289
> URL: https://issues.apache.org/jira/browse/HBASE-23289
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Major
> Attachments: HBASE-23289.000.patch
>
>
> Seems Hadoop has moved their wiki, so now links throughout our book are 
> broken. We've found and fixed a couple one-offs, but we should do a sweep and 
> clean up the rest.





[jira] [Updated] (HBASE-23289) Update links to Hadoop wiki in code and book

2019-11-16 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-23289:
--
Summary: Update links to Hadoop wiki in code and book  (was: Update book 
links to Hadoop wiki)

> Update links to Hadoop wiki in code and book
> 
>
> Key: HBASE-23289
> URL: https://issues.apache.org/jira/browse/HBASE-23289
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Major
> Attachments: HBASE-23289.000.patch
>
>
> Seems Hadoop has moved their wiki, so now links throughout our book are 
> broken. We've found and fixed a couple one-offs, but we should do a sweep and 
> clean up the rest.





[jira] [Updated] (HBASE-23289) Update book links to Hadoop wiki

2019-11-16 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-23289:
--
Attachment: HBASE-23289.000.patch

> Update book links to Hadoop wiki
> 
>
> Key: HBASE-23289
> URL: https://issues.apache.org/jira/browse/HBASE-23289
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Major
> Attachments: HBASE-23289.000.patch
>
>
> Seems Hadoop has moved their wiki, so now links throughout our book are 
> broken. We've found and fixed a couple one-offs, but we should do a sweep and 
> clean up the rest.





[jira] [Comment Edited] (HBASE-23283) Provide clear and consistent logging about the period of enabled chores

2019-11-14 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16973649#comment-16973649
 ] 

Mingliang Liu edited comment on HBASE-23283 at 11/15/19 12:58 AM:
--

[~busbey] I don't have commit access, so could you kindly do the favor and 
commit this? Thanks!


was (Author: liuml07):
[[~busbey]I  don't have commit access so could you kindly do the favor...Thanks!

> Provide clear and consistent logging about the period of enabled chores
> ---
>
> Key: HBASE-23283
> URL: https://issues.apache.org/jira/browse/HBASE-23283
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability
>Affects Versions: 3.0.0, 2.3.0, 1.7.0
>Reporter: Sean Busbey
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HBASE-23283.000.patch
>
>
> Similar to HBASE-23038, we should always log info about our enabled chores. 
> Right now whether or not we get some information is up to particular Chore 
> constructors and by and large we don't get any log messages when things can 
> get started, even if the period is something impossibly long (e.g. 3000 days).
> When we go to schedule the chore here:
> {code}
>   if (chore.getPeriod() <= 0) {
> LOG.info("The period is {} seconds, {} is disabled", 
> chore.getPeriod(), chore.getName());
> return false;
>   }
> {code}
> we should add an else clause that says it's enabled. It looks like we could 
> then just call chore.toString to get the proper details about the chore and 
> its period.
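
The quoted suggestion can be sketched as below. This is a hypothetical illustration: the `Chore` class and `scheduleChore` method are simplified stand-ins for HBase's ChoreService internals, not the actual API; only the disabled-branch log message is taken from the quoted snippet.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed change: log something in both branches.
// Chore and scheduleChore are simplified stand-ins, not HBase's real classes.
public class ChoreSchedulingSketch {
  static class Chore {
    private final String name;
    private final long periodSeconds;
    Chore(String name, long periodSeconds) {
      this.name = name;
      this.periodSeconds = periodSeconds;
    }
    long getPeriod() { return periodSeconds; }
    String getName() { return name; }
    @Override
    public String toString() {
      // toString() already carries the details the enabled-chore log needs.
      return name + " (period=" + periodSeconds + " seconds)";
    }
  }

  static final List<String> LOG = new ArrayList<>();

  static boolean scheduleChore(Chore chore) {
    if (chore.getPeriod() <= 0) {
      LOG.add(String.format("The period is %d seconds, %s is disabled",
          chore.getPeriod(), chore.getName()));
      return false;
    } else {
      // The suggested else clause: one consistent line per enabled chore.
      LOG.add("Chore is enabled: " + chore);
      return true;
    }
  }

  public static void main(String[] args) {
    scheduleChore(new Chore("CatalogJanitor", 300));
    scheduleChore(new Chore("BrokenChore", 0));
    LOG.forEach(System.out::println);
  }
}
```

Running the sketch logs one "enabled" line and one "disabled" line, so every chore's period shows up at scheduling time regardless of what its constructor logs.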





[jira] [Commented] (HBASE-23290) shell processlist command is broken

2019-11-14 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16974583#comment-16974583
 ] 

Mingliang Liu commented on HBASE-23290:
---

Thanks [~stack]. Yes, I think this should go to all 2.0.0+ branches, as it's 
caused by [HBASE-18239], which has the following change: 
{code}
- line = "| %s | %s | %s | %s | %s |" % cells
+ line = format('| %s | %s | %s | %s | %s |', cells)
{code}



> shell processlist command is broken
> ---
>
> Key: HBASE-23290
> URL: https://issues.apache.org/jira/browse/HBASE-23290
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.2.2
>Reporter: Michael Stack
>Assignee: Mingliang Liu
>Priority: Major
> Attachments: HBASE-23290.000.patch, Screen Shot 2019-11-14 at 1.56.00 
> AM.png
>
>
> {code}
> hbase(main):008:0> help 'processlist'
> Show regionserver task list.
>   hbase> processlist
>   hbase> processlist 'all'
>   hbase> processlist 'general'
>   hbase> processlist 'handler'
>   hbase> processlist 'rpc'
>   hbase> processlist 'operation'
>   hbase> processlist 'all','host187.example.com'
>   hbase> processlist 'all','host187.example.com,16020'
>   hbase> processlist 'all','host187.example.com,16020,1289493121758'
> hbase(main):009:0> processlist 'all'
> 3377 tasks as of: 2019-11-13 22:58:57
> ERROR: too few arguments
> For usage try 'help "processlist"'
> Took 2.2107 seconds
> {code}





[jira] [Comment Edited] (HBASE-23290) shell processlist command is broken

2019-11-14 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16974109#comment-16974109
 ] 

Mingliang Liu edited comment on HBASE-23290 at 11/14/19 8:20 PM:
-

Passing each element of {{cell}} array as separate arguments, rather than the 
whole array as one argument.


was (Author: liuml07):
Passing each element of {{cell}} array as separate arguments, rather than the 
whole array as one argument.

 

 

> shell processlist command is broken
> ---
>
> Key: HBASE-23290
> URL: https://issues.apache.org/jira/browse/HBASE-23290
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.2.2
>Reporter: Michael Stack
>Assignee: Mingliang Liu
>Priority: Major
> Attachments: HBASE-23290.000.patch, Screen Shot 2019-11-14 at 1.56.00 
> AM.png
>
>
> {code}
> hbase(main):008:0> help 'processlist'
> Show regionserver task list.
>   hbase> processlist
>   hbase> processlist 'all'
>   hbase> processlist 'general'
>   hbase> processlist 'handler'
>   hbase> processlist 'rpc'
>   hbase> processlist 'operation'
>   hbase> processlist 'all','host187.example.com'
>   hbase> processlist 'all','host187.example.com,16020'
>   hbase> processlist 'all','host187.example.com,16020,1289493121758'
> hbase(main):009:0> processlist 'all'
> 3377 tasks as of: 2019-11-13 22:58:57
> ERROR: too few arguments
> For usage try 'help "processlist"'
> Took 2.2107 seconds
> {code}





[jira] [Updated] (HBASE-23290) shell processlist command is broken

2019-11-14 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-23290:
--
Attachment: Screen Shot 2019-11-14 at 1.56.00 AM.png

> shell processlist command is broken
> ---
>
> Key: HBASE-23290
> URL: https://issues.apache.org/jira/browse/HBASE-23290
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.2.2
>Reporter: Michael Stack
>Assignee: Mingliang Liu
>Priority: Major
> Attachments: HBASE-23290.000.patch, Screen Shot 2019-11-14 at 1.56.00 
> AM.png
>
>
> {code}
> hbase(main):008:0> help 'processlist'
> Show regionserver task list.
>   hbase> processlist
>   hbase> processlist 'all'
>   hbase> processlist 'general'
>   hbase> processlist 'handler'
>   hbase> processlist 'rpc'
>   hbase> processlist 'operation'
>   hbase> processlist 'all','host187.example.com'
>   hbase> processlist 'all','host187.example.com,16020'
>   hbase> processlist 'all','host187.example.com,16020,1289493121758'
> hbase(main):009:0> processlist 'all'
> 3377 tasks as of: 2019-11-13 22:58:57
> ERROR: too few arguments
> For usage try 'help "processlist"'
> Took 2.2107 seconds
> {code}





[jira] [Updated] (HBASE-23290) shell processlist command is broken

2019-11-14 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-23290:
--
Attachment: HBASE-23290.000.patch
  Assignee: Mingliang Liu
Status: Patch Available  (was: Open)

Passing each element of {{cell}} array as separate arguments, rather than the 
whole array as one argument.

 

 

> shell processlist command is broken
> ---
>
> Key: HBASE-23290
> URL: https://issues.apache.org/jira/browse/HBASE-23290
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.2.2
>Reporter: Michael Stack
>Assignee: Mingliang Liu
>Priority: Major
> Attachments: HBASE-23290.000.patch
>
>
> {code}
> hbase(main):008:0> help 'processlist'
> Show regionserver task list.
>   hbase> processlist
>   hbase> processlist 'all'
>   hbase> processlist 'general'
>   hbase> processlist 'handler'
>   hbase> processlist 'rpc'
>   hbase> processlist 'operation'
>   hbase> processlist 'all','host187.example.com'
>   hbase> processlist 'all','host187.example.com,16020'
>   hbase> processlist 'all','host187.example.com,16020,1289493121758'
> hbase(main):009:0> processlist 'all'
> 3377 tasks as of: 2019-11-13 22:58:57
> ERROR: too few arguments
> For usage try 'help "processlist"'
> Took 2.2107 seconds
> {code}





[jira] [Commented] (HBASE-23289) Update book links to Hadoop wiki

2019-11-13 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16973654#comment-16973654
 ] 

Mingliang Liu commented on HBASE-23289:
---

Thanks for filing this, [~ndimiduk]. On the other Jira [HBASE-23284] I tested 
a few other links and they were fine, so I thought the one I had found was the 
last one. I searched again and did find at least one more here, so I'll pick 
this Jira up and fix all the links.

My plan is to retrieve all links starting with "wiki.apache.org" in the repo, 
open each, and check whether the HTTP response is "301 Moved Permanently". 
Will post a patch later this week.

> Update book links to Hadoop wiki
> 
>
> Key: HBASE-23289
> URL: https://issues.apache.org/jira/browse/HBASE-23289
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Major
>
> Seems Hadoop has moved their wiki, so now links throughout our book are 
> broken. We've found and fixed a couple one-offs, but we should do a sweep and 
> clean up the rest.





[jira] [Assigned] (HBASE-23289) Update book links to Hadoop wiki

2019-11-13 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HBASE-23289:
-

Assignee: Mingliang Liu

> Update book links to Hadoop wiki
> 
>
> Key: HBASE-23289
> URL: https://issues.apache.org/jira/browse/HBASE-23289
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: Nick Dimiduk
>Assignee: Mingliang Liu
>Priority: Major
>
> Seems Hadoop has moved their wiki, so now links throughout our book are 
> broken. We've found and fixed a couple one-offs, but we should do a sweep and 
> clean up the rest.





[jira] [Commented] (HBASE-23283) Provide clear and consistent logging about the period of enabled chores

2019-11-13 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16973649#comment-16973649
 ] 

Mingliang Liu commented on HBASE-23283:
---

[[~busbey]I  don't have commit access so could you kindly do the favor...Thanks!

> Provide clear and consistent logging about the period of enabled chores
> ---
>
> Key: HBASE-23283
> URL: https://issues.apache.org/jira/browse/HBASE-23283
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability
>Affects Versions: 3.0.0, 2.3.0, 1.7.0
>Reporter: Sean Busbey
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HBASE-23283.000.patch
>
>
> Similar to HBASE-23038, we should always log info about our enabled chores. 
> Right now whether or not we get some information is up to particular Chore 
> constructors and by and large we don't get any log messages when things can 
> get started, even if the period is something impossibly long (e.g. 3000 days).
> When we go to schedule the chore here:
> {code}
>   if (chore.getPeriod() <= 0) {
> LOG.info("The period is {} seconds, {} is disabled", 
> chore.getPeriod(), chore.getName());
> return false;
>   }
> {code}
> we should add an else clause that says it's enabled. It looks like we could 
> then just call chore.toString to get the proper details about the chore and 
> its period.





[jira] [Commented] (HBASE-23284) Fix Hadoop wiki link in Developer guide to "Distributions and Commercial Support"

2019-11-13 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16973124#comment-16973124
 ] 

Mingliang Liu commented on HBASE-23284:
---

Ping [~stack] and [~ndimiduk]. Thanks,

> Fix Hadoop wiki link in Developer guide to "Distributions and Commercial 
> Support"
> -
>
> Key: HBASE-23284
> URL: https://issues.apache.org/jira/browse/HBASE-23284
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HBASE-23284.000.patch
>
>
> Similar to [HBASE-23272], the link to the Hadoop wiki for the section 
> "Distributions and Commercial Support" is broken since the wiki page was 
> removed. Let's update it here.





[jira] [Updated] (HBASE-23284) Fix Hadoop wiki link in Developer guide to "Distributions and Commercial Support"

2019-11-13 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-23284:
--
Status: Patch Available  (was: Open)

> Fix Hadoop wiki link in Developer guide to "Distributions and Commercial 
> Support"
> -
>
> Key: HBASE-23284
> URL: https://issues.apache.org/jira/browse/HBASE-23284
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HBASE-23284.000.patch
>
>
> Similar to [HBASE-23272], the link to the Hadoop wiki for the section 
> "Distributions and Commercial Support" is broken since the wiki page was 
> removed. Let's update it here.





[jira] [Updated] (HBASE-23284) Fix Hadoop wiki link in Developer guide to "Distributions and Commercial Support"

2019-11-13 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-23284:
--
Attachment: HBASE-23284.000.patch

> Fix Hadoop wiki link in Developer guide to "Distributions and Commercial 
> Support"
> -
>
> Key: HBASE-23284
> URL: https://issues.apache.org/jira/browse/HBASE-23284
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HBASE-23284.000.patch
>
>
> Similar to [HBASE-23272], the link to the Hadoop wiki for the section 
> "Distributions and Commercial Support" is broken since the wiki page was 
> removed. Let's update it here.





[jira] [Updated] (HBASE-23284) Fix Hadoop wiki link in Developer guide to "Distributions and Commercial Support"

2019-11-13 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-23284:
--
Affects Version/s: 3.0.0

> Fix Hadoop wiki link in Developer guide to "Distributions and Commercial 
> Support"
> -
>
> Key: HBASE-23284
> URL: https://issues.apache.org/jira/browse/HBASE-23284
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
>
> Similar to [HBASE-23272], the link to the Hadoop wiki for the section 
> "Distributions and Commercial Support" is broken since the wiki page was 
> removed. Let's update it here.





[jira] [Updated] (HBASE-23284) Fix Hadoop wiki link in Developer guide to "Distributions and Commercial Support"

2019-11-13 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-23284:
--
Issue Type: Task  (was: New Feature)

> Fix Hadoop wiki link in Developer guide to "Distributions and Commercial 
> Support"
> -
>
> Key: HBASE-23284
> URL: https://issues.apache.org/jira/browse/HBASE-23284
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
>
> Similar to [HBASE-23272], the link to the Hadoop wiki for the section 
> "Distributions and Commercial Support" is broken since the wiki page was 
> removed. Let's update it here.





[jira] [Created] (HBASE-23284) Fix Hadoop wiki link in Developer guide to "Distributions and Commercial Support"

2019-11-13 Thread Mingliang Liu (Jira)
Mingliang Liu created HBASE-23284:
-

 Summary: Fix Hadoop wiki link in Developer guide to "Distributions 
and Commercial Support"
 Key: HBASE-23284
 URL: https://issues.apache.org/jira/browse/HBASE-23284
 Project: HBase
  Issue Type: New Feature
  Components: documentation
Reporter: Mingliang Liu
Assignee: Mingliang Liu


Similar to [HBASE-23272], the link to the Hadoop wiki for the section 
"Distributions and Commercial Support" is broken since the wiki page was 
removed. Let's update it here.





[jira] [Updated] (HBASE-23283) Provide clear and consistent logging about the period of enabled chores

2019-11-13 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-23283:
--
Status: Patch Available  (was: Open)

> Provide clear and consistent logging about the period of enabled chores
> ---
>
> Key: HBASE-23283
> URL: https://issues.apache.org/jira/browse/HBASE-23283
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability
>Affects Versions: 3.0.0, 2.3.0, 1.7.0
>Reporter: Sean Busbey
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HBASE-23283.000.patch
>
>
> Similar to HBASE-23038, we should always log info about our enabled chores. 
> Right now whether or not we get some information is up to particular Chore 
> constructors and by and large we don't get any log messages when things can 
> get started, even if the period is something impossibly long (e.g. 3000 days).
> When we go to schedule the chore here:
> {code}
>   if (chore.getPeriod() <= 0) {
> LOG.info("The period is {} seconds, {} is disabled", 
> chore.getPeriod(), chore.getName());
> return false;
>   }
> {code}
> we should add an else clause that says it's enabled. It looks like we could 
> then just call chore.toString to get the proper details about the chore and 
> its period.





[jira] [Updated] (HBASE-23283) Provide clear and consistent logging about the period of enabled chores

2019-11-13 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-23283:
--
Attachment: HBASE-23283.000.patch

> Provide clear and consistent logging about the period of enabled chores
> ---
>
> Key: HBASE-23283
> URL: https://issues.apache.org/jira/browse/HBASE-23283
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability
>Affects Versions: 3.0.0, 2.3.0, 1.7.0
>Reporter: Sean Busbey
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HBASE-23283.000.patch
>
>
> Similar to HBASE-23038, we should always log info about our enabled chores. 
> Right now, whether or not we get some information is up to the particular 
> Chore constructors, and by and large we don't get any log messages when 
> things start, even if the period is something impossibly long (e.g. 3000 days).
> When we go to schedule the chore here:
> {code}
>   if (chore.getPeriod() <= 0) {
> LOG.info("The period is {} seconds, {} is disabled", 
> chore.getPeriod(), chore.getName());
> return false;
>   }
> {code}
> we should add an else clause that says it's enabled. It looks like we could 
> then just call chore.toString to get the proper details about the chore and 
> its period.





[jira] [Commented] (HBASE-23283) Provide clear and consistent logging about the period of enabled chores

2019-11-12 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16973086#comment-16973086
 ] 

Mingliang Liu commented on HBASE-23283:
---

Thanks, this makes sense. Unless you are working on it, I will assign it to 
myself and provide a patch. [~busbey]

> Provide clear and consistent logging about the period of enabled chores
> ---
>
> Key: HBASE-23283
> URL: https://issues.apache.org/jira/browse/HBASE-23283
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability
>Affects Versions: 3.0.0, 2.3.0, 1.7.0
>Reporter: Sean Busbey
>Priority: Minor
>
> Similar to HBASE-23038, we should always log info about our enabled chores. 
> Right now, whether or not we get some information is up to the particular 
> Chore constructors, and by and large we don't get any log messages when 
> things start, even if the period is something impossibly long (e.g. 3000 days).
> When we go to schedule the chore here:
> {code}
>   if (chore.getPeriod() <= 0) {
> LOG.info("The period is {} seconds, {} is disabled", 
> chore.getPeriod(), chore.getName());
> return false;
>   }
> {code}
> we should add an else clause that says it's enabled. It looks like we could 
> then just call chore.toString to get the proper details about the chore and 
> its period.





[jira] [Assigned] (HBASE-23283) Provide clear and consistent logging about the period of enabled chores

2019-11-12 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HBASE-23283:
-

Assignee: Mingliang Liu

> Provide clear and consistent logging about the period of enabled chores
> ---
>
> Key: HBASE-23283
> URL: https://issues.apache.org/jira/browse/HBASE-23283
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability
>Affects Versions: 3.0.0, 2.3.0, 1.7.0
>Reporter: Sean Busbey
>Assignee: Mingliang Liu
>Priority: Minor
>
> Similar to HBASE-23038, we should always log info about our enabled chores. 
> Right now, whether or not we get some information is up to the particular 
> Chore constructors, and by and large we don't get any log messages when 
> things start, even if the period is something impossibly long (e.g. 3000 days).
> When we go to schedule the chore here:
> {code}
>   if (chore.getPeriod() <= 0) {
> LOG.info("The period is {} seconds, {} is disabled", 
> chore.getPeriod(), chore.getName());
> return false;
>   }
> {code}
> we should add an else clause that says it's enabled. It looks like we could 
> then just call chore.toString to get the proper details about the chore and 
> its period.





[jira] [Comment Edited] (HBASE-22804) Provide an API to get list of successful regions and total expected regions in Canary

2019-08-09 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904184#comment-16904184
 ] 

Mingliang Liu edited comment on HBASE-22804 at 8/10/19 3:54 AM:


Nits:
# {{regionMap}} is created at construction time and is never set to null, so 
the test assertion {{assertNotNull("verify region map exists", regionMap);}} 
seems unnecessary. That said, we can also make {{regionMap}} final.
# I think it's a bit clearer to have the newly added counters on separate lines,
{code}
149 writeFailureCount = new AtomicLong(0),
150 readSuccessCount = new AtomicLong(0),
151 writeSuccessCount = new AtomicLong(0);
{code}
to
{code}
private final AtomicLong writeFailureCount = new AtomicLong(0);
private final AtomicLong readSuccessCount = new AtomicLong(0);
private final AtomicLong writeSuccessCount = new AtomicLong(0);
{code}
# You can focus on the {{master}} branch first and, once it is ready for 
commit, prepare a {{branch-1}}/{{branch-2}} patch if it does not apply 
cleanly. This can save a bit of time.
# {{TableName tableName = TableName.valueOf("testTable");}} The test table 
name can be a variable referenced later. Its value can be the test method 
name to avoid potential conflicts.
{code}
final String tableName = name.getMethodName();
Table table = testingUtility.createTable(TableName.valueOf(tableName), new 
byte[][] { FAMILY });
...
String[] args = { "-writeSniffing", "-t", "1", tableName };
{code}
# {{for (Map.Entry entry : 
regionMap.entrySet())}} When iterating over a Map, you can consider using the 
simpler form {{for (String regionName : regionMap.keySet())}}.

Question:
{{private Map regionMap = Maps.newConcurrentMap();}} 
can be replaced with {{private Map regionMap = new 
ConcurrentHashMap<>();}} to avoid using guava? Actually, I'm wondering: do we 
need it to be a concurrent Map at all? We populate all regions before the 
sniff, and then all other places simply get individual items to update. Thoughts?
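A short sketch of what the two suggestions above could look like together: one final AtomicLong per counter, and java.util.concurrent.ConcurrentHashMap in place of Guava's Maps.newConcurrentMap(). Field and method names here are illustrative, not the actual Canary code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch only: illustrative names, not the real Canary sink implementation.
public class CanaryCountersSketch {
  private final AtomicLong writeFailureCount = new AtomicLong(0);
  private final AtomicLong readSuccessCount = new AtomicLong(0);
  private final AtomicLong writeSuccessCount = new AtomicLong(0);

  // Created at construction time and never reassigned, so the reference is
  // final; ConcurrentHashMap replaces Guava's Maps.newConcurrentMap().
  private final Map<String, Boolean> regionMap = new ConcurrentHashMap<>();

  void markRegionRead(String regionName, boolean ok) {
    regionMap.put(regionName, ok);
    if (ok) {
      readSuccessCount.incrementAndGet();
    }
  }

  long readSuccesses() {
    return readSuccessCount.get();
  }

  public static void main(String[] args) {
    CanaryCountersSketch sink = new CanaryCountersSketch();
    sink.markRegionRead("region-a", true);
    sink.markRegionRead("region-b", false);
    System.out.println(sink.readSuccesses()); // 1
  }
}
```

If the map is truly populated once before the sniff and only individual entries are updated afterwards, a plain HashMap behind a final reference may indeed suffice, as the question suggests.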


was (Author: liuml07):
Nits:
# {{regionMap}} is created at construction time, and it will be set null. So in 
test {{assertNotNull("verify region map exists", regionMap);}} seems not 
necessary. That's said, we can also make {{regionMap}} final.
# I think it's bit clearer to have newly added counters in separate lines.
{code}
149 writeFailureCount = new AtomicLong(0),
150 readSuccessCount = new AtomicLong(0),
151 writeSuccessCount = new AtomicLong(0);
{code}
to
{code}
private final AtomicLong writeFailureCount = new AtomicLong(0);
private final AtomicLong readSuccessCount = new AtomicLong(0);
private final AtomicLong writeSuccessCount = new AtomicLong(0);
{code}
# You can focus on {{master}} branch first, and after ready for commit to 
prepare a {{branch-1}}/{{branch-2}} patch if it does not apply cleanly. This 
can save a little bit time.
#  {{TableName tableName = TableName.valueOf("testTable");}} Test table name 
can be a variable and be referenced later. The value can be the test method 
name to avoid potential conflict.
{code}
final String tableName = name.getMethodName();
Table table = testingUtility.createTable(TableName.valueOf(tableName), new 
byte[][] { FAMILY });
...
String[] args = { "-writeSniffing", "-t", "1", tableName };
{code}
# {{for (Map.Entry entry : 
regionMap.entrySet())}} When iterating a Map you can consider using simpler 
format {{for (String regionName : regionMap.keySet())}}.

Question:
{{private Map regionMap = Maps.newConcurrentMap();}} 
can be replaced with {{private Map regionMap = new 
ConcurrentHashMap<>();}} to not use guava? Actually I'm thinking do we need it 
to be concurrent Map? We populate all regions before the sniff and then all 
other places simply get the individual item to update, right? If so I think 
making it {{final}} will be enough. Thoughts?

> Provide an API to get list of successful regions and total expected regions 
> in Canary
> -
>
> Key: HBASE-22804
> URL: https://issues.apache.org/jira/browse/HBASE-22804
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Affects Versions: 3.0.0, 1.3.0, 1.4.0, 1.5.0, 2.0.0, 2.1.5, 2.2.1
>Reporter: Caroline
>Assignee: Caroline
>Priority: Minor
>  Labels: Canary
> Attachments: HBASE-22804.branch-1.001.patch, 
> HBASE-22804.branch-2.001.patch, HBASE-22804.master.001.patch
>
>
> At present, the HBase Canary tool only prints the successes as part of its 
> logs. Providing an API to get the list of successes, as well as the total 
> number of expected regions, will make it easier to get a more accurate 
> availability estimate.
>   



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HBASE-22804) Provide an API to get list of successful regions and total expected regions in Canary

2019-08-09 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16904184#comment-16904184
 ] 

Mingliang Liu commented on HBASE-22804:
---

Nits:
# {{regionMap}} is created at construction time and is never set to null, so 
the test assertion {{assertNotNull("verify region map exists", regionMap);}} 
seems unnecessary. That said, we can also make {{regionMap}} final.
# I think it's a bit clearer to have the newly added counters on separate lines,
{code}
149 writeFailureCount = new AtomicLong(0),
150 readSuccessCount = new AtomicLong(0),
151 writeSuccessCount = new AtomicLong(0);
{code}
to
{code}
private final AtomicLong writeFailureCount = new AtomicLong(0);
private final AtomicLong readSuccessCount = new AtomicLong(0);
private final AtomicLong writeSuccessCount = new AtomicLong(0);
{code}
# You can focus on the {{master}} branch first and, once it is ready for 
commit, prepare a {{branch-1}}/{{branch-2}} patch if it does not apply 
cleanly. This can save a bit of time.
# {{TableName tableName = TableName.valueOf("testTable");}} The test table 
name can be a variable referenced later. Its value can be the test method 
name to avoid potential conflicts.
{code}
final String tableName = name.getMethodName();
Table table = testingUtility.createTable(TableName.valueOf(tableName), new 
byte[][] { FAMILY });
...
String[] args = { "-writeSniffing", "-t", "1", tableName };
{code}
# {{for (Map.Entry entry : 
regionMap.entrySet())}} When iterating over a Map, you can consider using the 
simpler form {{for (String regionName : regionMap.keySet())}}.

Question:
{{private Map regionMap = Maps.newConcurrentMap();}} 
can be replaced with {{private Map regionMap = new 
ConcurrentHashMap<>();}} to avoid using guava? Actually, I'm wondering: do we 
need it to be a concurrent Map at all? We populate all regions before the 
sniff, and then all other places simply get individual items to update, right? 
If so, I think making it {{final}} will be enough. Thoughts?

> Provide an API to get list of successful regions and total expected regions 
> in Canary
> -
>
> Key: HBASE-22804
> URL: https://issues.apache.org/jira/browse/HBASE-22804
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Affects Versions: 3.0.0, 1.3.0, 1.4.0, 1.5.0, 2.0.0, 2.1.5, 2.2.1
>Reporter: Caroline
>Assignee: Caroline
>Priority: Minor
>  Labels: Canary
> Attachments: HBASE-22804.branch-1.001.patch, 
> HBASE-22804.branch-2.001.patch, HBASE-22804.master.001.patch
>
>
> At present, the HBase Canary tool only prints the successes as part of its 
> logs. Providing an API to get the list of successes, as well as the total 
> number of expected regions, will make it easier to get a more accurate 
> availability estimate.
>   





[jira] [Comment Edited] (HBASE-22460) Reopen a region if store reader references may have leaked

2019-07-11 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16882461#comment-16882461
 ] 

Mingliang Liu edited comment on HBASE-22460 at 7/12/19 1:31 AM:


Curious: once leaked, is there any other existing means of healing it? If not, 
the {{refCount}} number would be enough; holding for some time might not be 
needed. Thanks!


was (Author: liuml07):
Curious, once leaked, is there any other existing means healing it? If so 
{{refCound}} number would be enough; holding for some time might not be needed. 
Thanks,

> Reopen a region if store reader references may have leaked
> --
>
> Key: HBASE-22460
> URL: https://issues.apache.org/jira/browse/HBASE-22460
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Priority: Minor
>
> We can leak store reader references if a coprocessor or core function somehow 
> opens a scanner, or wraps one, and then does not take care to call close on 
> the scanner or the wrapped instance. A reasonable mitigation for a reader 
> reference leak would be a fast reopen of the region on the same server 
> (initiated by the RS). This will release all resources, like the refcount, 
> leases, etc. The clients should gracefully ride over this like any other 
> region transition. This reopen would be like what is done during schema 
> change application and ideally would reuse the relevant code. If the refcount 
> is over some ridiculous threshold this mitigation could be triggered along 
> with a fat WARN in the logs. 





[jira] [Commented] (HBASE-22460) Reopen a region if store reader references may have leaked

2019-07-10 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16882461#comment-16882461
 ] 

Mingliang Liu commented on HBASE-22460:
---

Curious: once leaked, is there any other existing means of healing it? If so, 
the {{refCount}} number would be enough; holding for some time might not be 
needed. Thanks!

> Reopen a region if store reader references may have leaked
> --
>
> Key: HBASE-22460
> URL: https://issues.apache.org/jira/browse/HBASE-22460
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Priority: Minor
>
> We can leak store reader references if a coprocessor or core function somehow 
> opens a scanner, or wraps one, and then does not take care to call close on 
> the scanner or the wrapped instance. A reasonable mitigation for a reader 
> reference leak would be a fast reopen of the region on the same server 
> (initiated by the RS). This will release all resources, like the refcount, 
> leases, etc. The clients should gracefully ride over this like any other 
> region transition. This reopen would be like what is done during schema 
> change application and ideally would reuse the relevant code. If the refcount 
> is over some ridiculous threshold this mitigation could be triggered along 
> with a fat WARN in the logs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-22656) [Metrics] Tabe metrics 'BatchPut' and 'BatchDelete' are never updated

2019-07-05 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16879579#comment-16879579
 ] 

Mingliang Liu edited comment on HBASE-22656 at 7/6/19 1:52 AM:
---

+1 (non-binding)

Nice catch. The two methods {{RegionServerTableMetrics::updatePutBatch()}} and 
{{RegionServerTableMetrics::updateDeleteBatch()}} are never used.


was (Author: liuml07):
+1 (non-binding)

Nice catch. The two method {{RegionServerTableMetrics::updatePutBatch()}} and 
{{RegionServerTableMetrics::updateDeleteBatch}} are never used.

> [Metrics]  Tabe metrics 'BatchPut' and 'BatchDelete' are never updated
> --
>
> Key: HBASE-22656
> URL: https://issues.apache.org/jira/browse/HBASE-22656
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-22656.master.001.patch
>
>
> {code}
>   public void updatePutBatch(TableName tn, long t) {
> if (tableMetrics != null && tn != null) {
>   tableMetrics.updatePut(tn, t); // Here should use updatePutBatch
> }
> ...
>   }
>   public void updateDeleteBatch(TableName tn, long t) {
> if (tableMetrics != null && tn != null) {
>   tableMetrics.updateDelete(tn, t); // Here should use updateDeleteBatch
> }
> ...
>   }
> {code}
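A hedged sketch of the delegation fix the patch describes: the batch update methods must call the *Batch variants of the table metrics, not the single-op ones. TableMetrics below is a minimal stand-in for RegionServerTableMetrics, used only to illustrate the delegation:

```java
// Sketch only: TableMetrics is a stand-in, not the real
// RegionServerTableMetrics API.
public class BatchMetricsSketch {
  interface TableMetrics {
    void updatePut(String table, long t);
    void updatePutBatch(String table, long t);
  }

  private final TableMetrics tableMetrics;

  BatchMetricsSketch(TableMetrics tableMetrics) {
    this.tableMetrics = tableMetrics;
  }

  public void updatePutBatch(String tn, long t) {
    if (tableMetrics != null && tn != null) {
      // Before the fix this mistakenly called updatePut(tn, t), so the
      // per-table 'BatchPut' metric was never updated.
      tableMetrics.updatePutBatch(tn, t);
    }
  }

  public static void main(String[] args) {
    final long[] batchCount = {0};
    BatchMetricsSketch metrics = new BatchMetricsSketch(new TableMetrics() {
      @Override public void updatePut(String table, long t) { }
      @Override public void updatePutBatch(String table, long t) { batchCount[0]++; }
    });
    metrics.updatePutBatch("t1", 5L);
    System.out.println(batchCount[0]); // 1
  }
}
```

The same one-line change applies symmetrically to updateDeleteBatch.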





[jira] [Commented] (HBASE-22656) [Metrics] Tabe metrics 'BatchPut' and 'BatchDelete' are never updated

2019-07-05 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16879579#comment-16879579
 ] 

Mingliang Liu commented on HBASE-22656:
---

+1 (non-binding)

Nice catch. The two methods {{RegionServerTableMetrics::updatePutBatch()}} and 
{{RegionServerTableMetrics::updateDeleteBatch()}} are never used.

> [Metrics]  Tabe metrics 'BatchPut' and 'BatchDelete' are never updated
> --
>
> Key: HBASE-22656
> URL: https://issues.apache.org/jira/browse/HBASE-22656
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-22656.master.001.patch
>
>
> {code}
>   public void updatePutBatch(TableName tn, long t) {
> if (tableMetrics != null && tn != null) {
>   tableMetrics.updatePut(tn, t); // Here should use updatePutBatch
> }
> ...
>   }
>   public void updateDeleteBatch(TableName tn, long t) {
> if (tableMetrics != null && tn != null) {
>   tableMetrics.updateDelete(tn, t); // Here should use updateDeleteBatch
> }
> ...
>   }
> {code}





[jira] [Commented] (HBASE-22649) FileNotFoundException shown in UI when tried to access HFILE URL of a column family name have special char (e.g #)

2019-07-03 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878231#comment-16878231
 ] 

Mingliang Liu commented on HBASE-22649:
---

I guess the newly added {{getEncodedPath()}} makes more sense returning a 
String instead of a Path, if it's only for this use case. Also, using 
{{StandardCharsets.UTF_8.name()}} in place of the {{"UTF-8"}} literal string 
might be better?
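As a hedged illustration of the suggestion, a String-returning encoder using StandardCharsets.UTF_8.name() instead of the "UTF-8" literal. The method name getEncodedPath echoes the patch, but this standalone version is a sketch, not the actual patched code:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch only: illustrates the String-returning, UTF_8.name()-based variant.
public class EncodedPathSketch {
  static String getEncodedPath(String rawName) throws UnsupportedEncodingException {
    return URLEncoder.encode(rawName, StandardCharsets.UTF_8.name());
  }

  public static void main(String[] args) throws Exception {
    // A '#' in a column family name must be escaped, or the rest of the
    // HFile URL is treated as a fragment and the path lookup fails.
    System.out.println(getEncodedPath("#"));    // %23
    System.out.println(getEncodedPath("#:cq")); // %23%3Acq
  }
}
```

Using StandardCharsets.UTF_8.name() avoids both the magic string and any risk of an unsupported-encoding error from a typo in the literal.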

> FileNotFoundException shown in UI when tried to access HFILE URL of a column 
> family name have special char (e.g #)
> --
>
> Key: HBASE-22649
> URL: https://issues.apache.org/jira/browse/HBASE-22649
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 3.0.0, 2.1.5, 1.3.5
>Reporter: Ashok shetty
>Assignee: Y. SREENIVASULU REDDY
>Priority: Major
> Fix For: 3.0.0, 2.1.6, 1.3.6
>
> Attachments: HBASE-22649.branch-1.002.patch, 
> HBASE-22649.branch-1.patch
>
>
> Test steps:
> 1. create 'specialchar', '#'
> 2. put 'specialchar','r1','#:cq','1000'
> 3. flush 'specialchar'
> 4. put 'specialchar','r2','#:cq','1000'
> 5. flush 'specialchar'
>  
> Once the hfile is created, click the hfile link in the UI.
> The following error is thrown.
> {noformat}
> java.io.FileNotFoundException: Path is not a file: 
> /hbase/data/default/specialchar/df9d19830c562c4eeb3f8b396211d52d
>  at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:90)
>  at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:76)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:153)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1942)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:739)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:432)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> {noformat}





[jira] [Commented] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently

2019-07-03 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878216#comment-16878216
 ] 

Mingliang Liu commented on HBASE-22607:
---

[~stack] Do you think this patch makes some sense as a workaround? Thanks!

> TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() 
> fails intermittently
> -
>
> Key: HBASE-22607
> URL: https://issues.apache.org/jira/browse/HBASE-22607
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.2.0, 2.0.6
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Attachments: HBASE-22607.000.patch, HBASE-22607.001.patch, 
> HBASE-22607.002.patch
>
>
> In previous runs, test 
> {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} 
> fails intermittently with {{java.net.ConnectException: Connection refused}} 
> exception, see build 
> [510|https://builds.apache.org/job/PreCommit-HBASE-Build/510/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  
> [545|https://builds.apache.org/job/PreCommit-HBASE-Build/545/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  and 
> [556|https://builds.apache.org/job/PreCommit-HBASE-Build/556/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/].
> One sample exception looks like:
> {quote}
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
>   at com.sun.proxy.$Proxy20.getListing(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1630)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1537)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1580)
>   at 
> org.apache.hadoop.hbase.util.CommonFSUtils.listStatus(CommonFSUtils.java:693)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getCurrentTableInfoStatus(FSTableDescriptors.java:448)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:429)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:410)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:763)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createTable(SnapshotTestingUtils.java:675)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:653)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:647)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshotV2(SnapshotTestingUtils.java:637)
>   at 
> org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:80)
> {quote}
> It seems that somehow the rootdir filesystem is not LocalFileSystem but 
> HDFS. I have not dug deeper into why this happens, since it fails 
> intermittently and I cannot reproduce it locally. Since this tests the 
> export snapshot tool without a cluster, we can enforce the use of 
> LocalFileSystem; no breaking change.
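A self-contained sketch of the enforcement idea: check that the test's root directory URI resolves to the local filesystem. The actual fix would pin the test's rootdir (e.g. via Hadoop's fs.defaultFS key, assumed here) to a file:// URI so the export snapshot test never touches HDFS:

```java
import java.net.URI;

// Sketch only: a scheme check standing in for forcing LocalFileSystem in the
// test configuration.
public class LocalFsCheckSketch {
  static boolean isLocal(String rootDir) {
    String scheme = URI.create(rootDir).getScheme();
    // A missing scheme also resolves to the default (local) filesystem here.
    return scheme == null || scheme.equals("file");
  }

  public static void main(String[] args) {
    System.out.println(isLocal("file:///tmp/hbase-root")); // true
    System.out.println(isLocal("hdfs://nn:8020/hbase"));   // false
  }
}
```

A guard like this at test setup would make the intermittent HDFS resolution fail fast with a clear message instead of a ConnectException deep in a listing call.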





[jira] [Updated] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently

2019-06-20 Thread Mingliang Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-22607:
--
Attachment: HBASE-22607.002.patch

> TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() 
> fails intermittently
> -
>
> Key: HBASE-22607
> URL: https://issues.apache.org/jira/browse/HBASE-22607
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.2.0, 2.0.6
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Attachments: HBASE-22607.000.patch, HBASE-22607.001.patch, 
> HBASE-22607.002.patch
>
>
> In previous runs, test 
> {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} 
> fails intermittently with {{java.net.ConnectException: Connection refused}} 
> exception, see build 
> [510|https://builds.apache.org/job/PreCommit-HBASE-Build/510/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  
> [545|https://builds.apache.org/job/PreCommit-HBASE-Build/545/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  and 
> [556|https://builds.apache.org/job/PreCommit-HBASE-Build/556/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/].
> One sample exception looks like:
> {quote}
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
>   at com.sun.proxy.$Proxy20.getListing(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1630)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1537)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1580)
>   at 
> org.apache.hadoop.hbase.util.CommonFSUtils.listStatus(CommonFSUtils.java:693)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getCurrentTableInfoStatus(FSTableDescriptors.java:448)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:429)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:410)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:763)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createTable(SnapshotTestingUtils.java:675)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:653)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:647)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshotV2(SnapshotTestingUtils.java:637)
>   at 
> org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:80)
> {quote}
> It seems that somehow the rootdir filesystem is not LocalFileSystem but 
> HDFS. I have not dug deeper into why this happens, since it fails 
> intermittently and I cannot reproduce it locally. Since this tests the 
> export snapshot tool without a cluster, we can enforce the use of 
> LocalFileSystem; no breaking change.





[jira] [Updated] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently

2019-06-20 Thread Mingliang Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-22607:
--
Summary: 
TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails 
intermittently  (was: TestExportSnapshotNoCluster fails intermittently)

> TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() 
> fails intermittently
> -
>
> Key: HBASE-22607
> URL: https://issues.apache.org/jira/browse/HBASE-22607
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.2.0, 2.0.6
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Attachments: HBASE-22607.000.patch, HBASE-22607.001.patch
>
>
> In previous runs, test 
> {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} 
> fails intermittently with {{java.net.ConnectException: Connection refused}} 
> exception, see build 
> [510|https://builds.apache.org/job/PreCommit-HBASE-Build/510/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  
> [545|https://builds.apache.org/job/PreCommit-HBASE-Build/545/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  and 
> [556|https://builds.apache.org/job/PreCommit-HBASE-Build/556/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/].
> One sample exception looks like:
> {quote}
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
>   at com.sun.proxy.$Proxy20.getListing(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1630)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1537)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1580)
>   at 
> org.apache.hadoop.hbase.util.CommonFSUtils.listStatus(CommonFSUtils.java:693)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getCurrentTableInfoStatus(FSTableDescriptors.java:448)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:429)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:410)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:763)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createTable(SnapshotTestingUtils.java:675)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:653)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:647)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshotV2(SnapshotTestingUtils.java:637)
>   at 
> org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:80)
> {quote}
> It seems that somehow the rootdir filesystem is not LocalFileSystem but 
> HDFS. I have not dug deeper into why this happens, since the failure is 
> intermittent and I cannot reproduce it locally. Since this tests the export 
> snapshot tool without a cluster, we can enforce the use of LocalFileSystem; 
> this is not a breaking change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently

2019-06-20 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16868944#comment-16868944
 ] 

Mingliang Liu commented on HBASE-22607:
---

As a workaround, the v1 patch fails fast if the FileSystem is not local.
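The fail-fast idea can be sketched roughly as follows. This is an illustrative standalone example, not the actual HBASE-22607 patch; the class and method names are made up. The check inspects the URI scheme of the configured root directory and aborts early, instead of failing later with a confusing {{ConnectException}}:

```java
import java.net.URI;

public class LocalFsCheck {
  // Returns true only for URIs that resolve to the local filesystem
  // (no scheme, or an explicit file:// scheme).
  static boolean isLocalFs(URI rootUri) {
    String scheme = rootUri.getScheme();
    return scheme == null || scheme.equals("file");
  }

  public static void main(String[] args) {
    // A local root dir passes the check...
    if (!isLocalFs(URI.create("file:///tmp/hbase-test"))) {
      throw new AssertionError("expected local fs");
    }
    // ...while an HDFS root dir would make the test fail fast.
    if (isLocalFs(URI.create("hdfs://namenode:8020/hbase"))) {
      throw new AssertionError("expected non-local fs");
    }
    System.out.println("ok");
  }
}
```

In the real test, the equivalent check would run against the filesystem resolved from the test configuration before any snapshot work starts.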

> TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() 
> fails intermittently
> -
>
> Key: HBASE-22607
> URL: https://issues.apache.org/jira/browse/HBASE-22607
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.2.0, 2.0.6
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Attachments: HBASE-22607.000.patch, HBASE-22607.001.patch
>
>
> In previous runs, test 
> {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} 
> fails intermittently with {{java.net.ConnectException: Connection refused}} 
> exception, see build 
> [510|https://builds.apache.org/job/PreCommit-HBASE-Build/510/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  
> [545|https://builds.apache.org/job/PreCommit-HBASE-Build/545/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  and 
> [556|https://builds.apache.org/job/PreCommit-HBASE-Build/556/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/].
> One sample exception looks like:
> {quote}
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
>   at com.sun.proxy.$Proxy20.getListing(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1630)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1537)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1580)
>   at 
> org.apache.hadoop.hbase.util.CommonFSUtils.listStatus(CommonFSUtils.java:693)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getCurrentTableInfoStatus(FSTableDescriptors.java:448)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:429)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:410)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:763)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createTable(SnapshotTestingUtils.java:675)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:653)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:647)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshotV2(SnapshotTestingUtils.java:637)
>   at 
> org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:80)
> {quote}
> It seems that somehow the rootdir filesystem is not LocalFileSystem but 
> HDFS. I have not dug deeper into why this happens, since the failure is 
> intermittent and I cannot reproduce it locally. Since this tests the export 
> snapshot tool without a cluster, we can enforce the use of LocalFileSystem; 
> this is not a breaking change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22607) TestExportSnapshotNoCluster fails intermittently

2019-06-20 Thread Mingliang Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-22607:
--
Attachment: HBASE-22607.001.patch

> TestExportSnapshotNoCluster fails intermittently
> 
>
> Key: HBASE-22607
> URL: https://issues.apache.org/jira/browse/HBASE-22607
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.2.0, 2.0.6
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Attachments: HBASE-22607.000.patch, HBASE-22607.001.patch
>
>
> In previous runs, test 
> {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} 
> fails intermittently with {{java.net.ConnectException: Connection refused}} 
> exception, see build 
> [510|https://builds.apache.org/job/PreCommit-HBASE-Build/510/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  
> [545|https://builds.apache.org/job/PreCommit-HBASE-Build/545/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/],
>  and 
> [556|https://builds.apache.org/job/PreCommit-HBASE-Build/556/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/].
> One sample exception looks like:
> {quote}
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
>   at com.sun.proxy.$Proxy20.getListing(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1630)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1537)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1580)
>   at 
> org.apache.hadoop.hbase.util.CommonFSUtils.listStatus(CommonFSUtils.java:693)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getCurrentTableInfoStatus(FSTableDescriptors.java:448)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:429)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:410)
>   at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:763)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createTable(SnapshotTestingUtils.java:675)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:653)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:647)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshotV2(SnapshotTestingUtils.java:637)
>   at 
> org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:80)
> {quote}
> It seems that somehow the rootdir filesystem is not LocalFileSystem but 
> HDFS. I have not dug deeper into why this happens, since the failure is 
> intermittent and I cannot reproduce it locally. Since this tests the export 
> snapshot tool without a cluster, we can enforce the use of LocalFileSystem; 
> this is not a breaking change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21284) Forward port HBASE-21000 to branch-2

2019-06-20 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16868931#comment-16868931
 ] 

Mingliang Liu commented on HBASE-21284:
---

Thanks Andrew!

> Forward port HBASE-21000 to branch-2
> 
>
> Key: HBASE-21284
> URL: https://issues.apache.org/jira/browse/HBASE-21284
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Mingliang Liu
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-21284.001.patch, HBASE-21284.002.patch
>
>
> See parent for details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22611) hbase-common module's class "org.apache.hadoop.hbase.io.encoding.RowIndexCodecV1" DataOutputStream is not closed.

2019-06-20 Thread Mingliang Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-22611:
--
Status: Patch Available  (was: Open)

> hbase-common module's class 
> "org.apache.hadoop.hbase.io.encoding.RowIndexCodecV1" DataOutputStream is not 
> closed.
> -
>
> Key: HBASE-22611
> URL: https://issues.apache.org/jira/browse/HBASE-22611
> Project: HBase
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.1.5, 2.1.4
>Reporter: yuliangwan
>Priority: Major
> Fix For: 3.0.0
>
>
>  
> public ByteBuffer decodeKeyValues(DataInputStream source,
>  HFileBlockDecodingContext decodingCtx) throws IOException {...}
> The DataOutputStream is not closed after use.
> {code:java}
> // code placeholder
> else {
>   RowIndexSeekerV1 seeker = new RowIndexSeekerV1(CellComparatorImpl.COMPARATOR,
>       decodingCtx);
>   seeker.setCurrentBuffer(new SingleByteBuff(sourceAsBuffer));
>   List<Cell> kvs = new ArrayList<>();
>   kvs.add(seeker.getCell());
>   while (seeker.next()) {
>     kvs.add(seeker.getCell());
>   }
>   boolean includesMvcc = decodingCtx.getHFileContext().isIncludesMvcc();
>   ByteArrayOutputStream baos = new ByteArrayOutputStream();
>   DataOutputStream out = new DataOutputStream(baos);
>   for (Cell cell : kvs) {
>     KeyValue currentCell = KeyValueUtil.copyToNewKeyValue(cell);
>     out.write(currentCell.getBuffer(), currentCell.getOffset(),
>         currentCell.getLength());
>     if (includesMvcc) {
>       WritableUtils.writeVLong(out, cell.getSequenceId());
>     }
>   }
>   out.flush();
>   return ByteBuffer.wrap(baos.getBuffer(), 0, baos.size());
> }
> {code}
>  
>  
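A minimal sketch of the fix this issue suggests: wrap the stream in try-with-resources so the DataOutputStream is always closed, even when writing throws. This is an illustrative standalone example using plain java.io classes, not the actual RowIndexCodecV1 code; the class and method names are made up:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

public class CloseStreamSketch {
  // Concatenates the given byte chunks into a ByteBuffer, closing the
  // DataOutputStream on all paths via try-with-resources.
  static ByteBuffer copyToBuffer(byte[][] chunks) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    // close() also flushes, so the explicit out.flush() becomes unnecessary
    try (DataOutputStream out = new DataOutputStream(baos)) {
      for (byte[] chunk : chunks) {
        out.write(chunk, 0, chunk.length);
      }
    }
    return ByteBuffer.wrap(baos.toByteArray(), 0, baos.size());
  }

  public static void main(String[] args) throws IOException {
    ByteBuffer buf = copyToBuffer(new byte[][] { {1, 2}, {3} });
    if (buf.remaining() != 3) {
      throw new AssertionError("expected 3 bytes");
    }
    System.out.println("ok");
  }
}
```

Note that closing a DataOutputStream over a ByteArrayOutputStream releases no OS resources, so this is mostly hygiene; the same pattern matters much more when the underlying stream holds a file or socket.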



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22611) hbase-common module's class "org.apache.hadoop.hbase.io.encoding.RowIndexCodecV1" DataOutputStream is not closed.

2019-06-20 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16868867#comment-16868867
 ] 

Mingliang Liu commented on HBASE-22611:
---

+1 (non-binding)


> hbase-common module's class 
> "org.apache.hadoop.hbase.io.encoding.RowIndexCodecV1" DataOutputStream is not 
> closed.
> -
>
> Key: HBASE-22611
> URL: https://issues.apache.org/jira/browse/HBASE-22611
> Project: HBase
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.1.4, 2.1.5
>Reporter: yuliangwan
>Priority: Major
> Fix For: 3.0.0
>
>
>  
> public ByteBuffer decodeKeyValues(DataInputStream source,
>  HFileBlockDecodingContext decodingCtx) throws IOException {...}
> The DataOutputStream is not closed after use.
> {code:java}
> // code placeholder
> else {
>   RowIndexSeekerV1 seeker = new RowIndexSeekerV1(CellComparatorImpl.COMPARATOR,
>       decodingCtx);
>   seeker.setCurrentBuffer(new SingleByteBuff(sourceAsBuffer));
>   List<Cell> kvs = new ArrayList<>();
>   kvs.add(seeker.getCell());
>   while (seeker.next()) {
>     kvs.add(seeker.getCell());
>   }
>   boolean includesMvcc = decodingCtx.getHFileContext().isIncludesMvcc();
>   ByteArrayOutputStream baos = new ByteArrayOutputStream();
>   DataOutputStream out = new DataOutputStream(baos);
>   for (Cell cell : kvs) {
>     KeyValue currentCell = KeyValueUtil.copyToNewKeyValue(cell);
>     out.write(currentCell.getBuffer(), currentCell.getOffset(),
>         currentCell.getLength());
>     if (includesMvcc) {
>       WritableUtils.writeVLong(out, cell.getSequenceId());
>     }
>   }
>   out.flush();
>   return ByteBuffer.wrap(baos.getBuffer(), 0, baos.size());
> }
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22605) Ref guide includes dev guidance only applicable to EOM versions

2019-06-20 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16868834#comment-16868834
 ] 

Mingliang Liu commented on HBASE-22605:
---

Thanks [~busbey]!

> Ref guide includes dev guidance only applicable to EOM versions
> ---
>
> Key: HBASE-22605
> URL: https://issues.apache.org/jira/browse/HBASE-22605
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Sean Busbey
>Assignee: Mingliang Liu
>Priority: Trivial
>  Labels: beginner
> Attachments: HBASE-22605.000.patch
>
>
> The ref guide section on developer guidance has this blurb:
> {quote}
> h2. Implementing Writable
> h3. Applies pre-0.96 only
> bq. In 0.96, HBase moved to protocol buffers (protobufs). The below section 
> on Writables applies to 0.94.x and previous, not to 0.96 and beyond.
> Every class returned by RegionServers must implement the Writable interface. 
> If you are creating a new class that needs to implement this interface, do 
> not forget the default constructor.
> {quote}
> ([ref|http://hbase.apache.org/book.html#common.patch.feedback.writable])
> this should be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
